CN111259194A - Method and apparatus for determining duplicate video - Google Patents

Method and apparatus for determining duplicate video

Info

Publication number
CN111259194A
Authority
CN
China
Prior art keywords
face
video
time period
existing
existing video
Prior art date
Legal status
Granted
Application number
CN201811458416.2A
Other languages
Chinese (zh)
Other versions
CN111259194B (en)
Inventor
李元朋
彭明浩
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201811458416.2A
Publication of CN111259194A
Application granted
Publication of CN111259194B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An embodiment of the present application discloses a method and an apparatus for determining duplicate videos. The method for determining duplicate videos includes: acquiring a current video; comparing the time periods in which faces appear in the current video with the time periods in which faces appear in each existing video in an existing video library, to obtain the similarity between each existing video and the current video; and, in response to the comparison result indicating that the existing video library contains an existing video whose similarity to the current video is greater than a preset threshold, determining that the current video duplicates that existing video. The method can improve the accuracy of determining video duplication.

Description

Method and apparatus for determining duplicate video
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for determining a duplicate video.
Background
In current internet video services, highly similar videos need to be identified and deduplicated when videos are stored, in order to save hardware resources and improve user experience.
Current video deduplication methods mainly include the following. In the first method, deduplication is performed according to the MD5 value of the video file. In the second method, deduplication is performed according to the video's text metadata; for example, after a text vector is built for each video based on a vector space model (VSM), the distance between two vectors is calculated to obtain the similarity of the videos. In the third method, deduplication is performed according to the video content; the similarity of videos can be calculated through image matching of key frames.
However, in the first method, the MD5 value changes after a video is transcoded, so videos with highly similar content cannot be identified. The second method has high computational complexity, and its cost is too high for the huge volume of internet videos. The third method has even higher time complexity, and a single similarity calculation is too expensive for practical engineering use; moreover, some videos have highly similar content but different key frames due to differences in shooting time or post-processing, which leads to missed detections.
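As a rough illustration of the second approach, the following minimal Python sketch builds TF-IDF text vectors for two metadata strings and computes their cosine similarity. It is a sketch only: the metadata strings are hypothetical, and the patent does not prescribe scikit-learn or TF-IDF weighting.

```python
# A minimal sketch of VSM-based metadata comparison (method two above).
# The metadata strings and TF-IDF weighting are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

meta_a = "funny cat compilation 2018 pets home video"
meta_b = "funny cat compilation pets 2018 home video reupload"

vectors = TfidfVectorizer().fit_transform([meta_a, meta_b])
sim = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"metadata similarity: {sim:.2f}")  # near-duplicates score close to 1.0
```

Per-pair costs like this, multiplied across an entire video library, are what make the second method too expensive at internet scale, as noted above.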
Disclosure of Invention
The embodiments of the present application provide a method and an apparatus for determining duplicate videos.
In a first aspect, an embodiment of the present application provides a method for determining duplicate videos, including: acquiring a current video; comparing the time periods in which faces appear in the current video with the time periods in which faces appear in each existing video in an existing video library, to obtain the similarity between each existing video and the current video; and, in response to the comparison result indicating that the existing video library contains an existing video whose similarity to the current video is greater than a preset threshold, determining that the current video duplicates that existing video.
In some embodiments, comparing the time periods in which faces appear in the current video with the time periods in which faces appear in each existing video in the existing video library to obtain the similarity between each existing video and the current video includes: performing face tracking on the faces in the current video, and determining a first time period set in which faces appear in the current video; performing face tracking on the faces in each existing video, and determining a second time period set in which faces appear in each existing video; and comparing the first time period set of the current video with the second time period set of each existing video in the existing video library, to obtain the similarity between each existing video and the current video.
In some embodiments, comparing the time periods in which faces appear in the current video with the time periods in which faces appear in each existing video in the existing video library to obtain the similarity between each existing video and the current video includes: performing face tracking on the faces in the current video, and determining a first time period set in which faces appear in the current video; performing face tracking on the faces in each existing video in the existing video library, and determining a second time period set in which faces appear in each existing video; comparing the first time period set of the current video with the second time period set of each existing video in the existing video library, to obtain a set of preliminary similarity values; in response to a preliminary similarity value greater than a preset threshold existing in the set of preliminary similarity values, determining, based on the existing video corresponding to that preliminary similarity value, a second face label set corresponding to each second time period of the existing video; determining, based on the current video, a first face label set corresponding to each first time period; and comparing the first face labels in the first face label set of the current video with the second face labels in the second face label set of the existing video, to obtain the similarity between each existing video and the current video.
In some embodiments, determining, based on the current video, the first face label set corresponding to each first time period includes: for each video frame in each first time period of the current video, extracting the features of each face to form a multi-dimensional vector, and forming a first face vector matrix based on the multi-dimensional vectors of the faces. Determining, based on the existing video corresponding to the preliminary similarity value greater than the preset threshold, the second face label set corresponding to each second time period of the existing video includes: for each video frame in each second time period of that existing video, extracting the features of each face to form a multi-dimensional vector, and forming a second face vector matrix based on the multi-dimensional vectors of the faces.
In some embodiments, the first face label set includes a first face label subset sequence, each first face label subset in the sequence corresponding to one video frame in the current video and including a plurality of face labels; the second face label set includes a second face label subset sequence, each second face label subset in the sequence corresponding to one video frame in the existing video corresponding to the preliminary similarity value greater than the preset threshold and including a plurality of face labels. Comparing the first face labels in the first face label set of the current video with the second face labels in the second face label set of the existing video includes: comparing the face label subset sequence appearing in the first time period set with the face label subset sequence appearing in the second time period set.
In a second aspect, an embodiment of the present application provides an apparatus for determining duplicate videos, including: a video acquisition unit configured to acquire a current video; a video comparison unit configured to compare the time periods in which faces appear in the current video with the time periods in which faces appear in each existing video in an existing video library, to obtain the similarity between each existing video and the current video; and a duplication determination unit configured to determine, in response to the comparison result indicating that the existing video library contains an existing video whose similarity to the current video is greater than a preset threshold, that the current video duplicates that existing video.
In some embodiments, the video comparison unit includes: a first tracking subunit configured to perform face tracking on the faces in the current video and determine a first time period set in which faces appear in the current video; a second tracking subunit configured to perform face tracking on the faces in each existing video and determine a second time period set in which faces appear in each existing video; and a similarity determination subunit configured to compare the first time period set of the current video with the second time period set of each existing video in the existing video library, to obtain the similarity between each existing video and the current video.
In some embodiments, the video comparison unit includes: a first tracking subunit configured to perform face tracking on the faces in the current video and determine a first time period set in which faces appear in the current video; a second tracking subunit configured to perform face tracking on the faces in each existing video in the existing video library and determine a second time period set in which faces appear in each existing video; a pre-comparison subunit configured to compare the first time period set of the current video with the second time period set of each existing video in the existing video library, to obtain a set of preliminary similarity values; a second determining subunit configured to determine, in response to a preliminary similarity value greater than a preset threshold existing in the set of preliminary similarity values, a second face label set corresponding to each second time period of the existing video, based on the existing video corresponding to that preliminary similarity value; a first determining subunit configured to determine, based on the current video, a first face label set corresponding to each first time period; and a similarity comparison subunit configured to compare the first face labels in the first face label set of the current video with the second face labels in the second face label set of the existing video, to obtain the similarity between each existing video and the current video.
In some embodiments, the first determining subunit is further configured to: for each video frame in each first time period of the current video, extract the features of each face to form a multi-dimensional vector, and form a first face vector matrix based on the multi-dimensional vectors of the faces; the second determining subunit is further configured to: for each video frame in each second time period of the existing video corresponding to the preliminary similarity value greater than the preset threshold, extract the features of each face to form a multi-dimensional vector, and form a second face vector matrix based on the multi-dimensional vectors of the faces.
In some embodiments, the first face label set in the first determining subunit includes a first face label subset sequence, each first face label subset in the sequence corresponding to one video frame in the current video and including a plurality of face labels; the second face label set in the second determining subunit includes a second face label subset sequence, each second face label subset in the sequence corresponding to one video frame in the existing video corresponding to the preliminary similarity value greater than the preset threshold and including a plurality of face labels; the similarity comparison subunit is further configured to: compare the face label subset sequence appearing in the first time period set with the face label subset sequence appearing in the second time period set.
In a third aspect, an embodiment of the present application provides a device, including: one or more processors; and a storage apparatus storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation above.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the method as described above.
According to the method and apparatus for determining duplicate videos provided by the embodiments of the present application, a current video is first acquired; the time periods in which faces appear in the current video are then compared with the time periods in which faces appear in each existing video in the existing video library, to obtain the similarity between each existing video and the current video; finally, in response to the comparison result indicating that the existing video library contains an existing video whose similarity to the current video is greater than a preset threshold, it is determined that the current video duplicates that existing video. Because duplicates of the current video are found by comparing face time periods across the current video and each existing video, the accuracy of determining video duplication is improved.
Drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a schematic flow chart diagram illustrating one embodiment of a method for determining duplicate videos in accordance with the present application;
FIG. 3 is a schematic diagram of an application scenario of the method for determining duplicate videos according to an embodiment of the present application;
FIG. 4 is a schematic flow chart diagram illustrating yet another embodiment of a method for determining duplicate videos in accordance with the present application;
FIG. 5 is a schematic block diagram illustrating an embodiment of an apparatus for determining duplicate video according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing a server according to embodiments of the present application.
Detailed Description
The present application will be described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein merely illustrate the relevant invention and do not limit it. It should also be noted that, for ease of description, only the portions related to the relevant invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments in the present application and the features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments. Those skilled in the art will also appreciate that although the terms "first", "second", etc. may be used herein to describe various time period sets, face label sets, face label subset sequences, tracking subunits, determining subunits, and the like, these elements should not be limited by the terms; the terms are only used to distinguish one time period set, face label set, face label subset sequence, tracking subunit, or determining subunit from another.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and servers 105, 106. The network 104 is used to provide a medium for communication links between the terminal devices 101, 102, 103 and the servers 105, 106. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user 110 may use the terminal devices 101, 102, 103 to interact with the servers 105, 106 via the network 104 to receive or send messages or the like. Various communication client applications, such as a video capture application, a video play application, an instant messaging tool, a mailbox client, social platform software, a search engine application, a shopping application, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and the like.
The servers 105 and 106 may be servers providing various services, for example, background servers providing support for the terminal devices 101, 102, 103. A background server may analyze, store, or compute data submitted by a terminal and push the analysis, storage, or computation results to the terminal device.
It should be noted that, in practice, the method for determining duplicate videos provided by the embodiments of the present application is generally performed by the servers 105 and 106, and accordingly the apparatus for determining duplicate videos is generally disposed in the servers 105 and 106. However, when the performance of a terminal device satisfies the execution conditions of the method or the deployment conditions of the apparatus, the method may also be executed by the terminal devices 101, 102, 103, and the apparatus may also be disposed in the terminal devices 101, 102, 103.
It should be understood that the numbers of terminals, networks, and servers in FIG. 1 are merely illustrative. There may be any number of terminals, networks, and servers, as required by the implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for determining duplicate videos according to the present application is shown. The method for determining duplicate videos includes the following steps:
step 201, acquiring a current video.
In this embodiment, an electronic device (e.g., the server or terminal shown in FIG. 1) on which the method for determining duplicate videos runs may obtain the current video locally or from another terminal or server. The current video is the video for which it must be determined whether it duplicates an existing video.
Step 202, comparing the time periods in which faces appear in the current video with the time periods in which faces appear in each existing video in the existing video library, to obtain the similarity between each existing video and the current video.
In this embodiment, a set of time periods containing faces can be identified from the current video, and a set of time periods containing faces can be identified from each existing video; the similarity between each existing video and the current video is then obtained by comparing the set identified from the current video with the set identified from each existing video.
The similarity may be obtained directly according to a preset similarity rule, or by further processing the data in the current video and in the existing videos and then comparing the processed data. The similarity rule may be set manually based on experience, or determined from historical samples of duplicate videos.
For example, suppose a preset similarity rule specifies that if the time period sets of two videos overlap completely, the similarity is 100%, and that if the time period sets have exactly equal lengths but different start times, the similarity is 96%. Suppose the set of time periods containing faces identified from the current video is: 15 to 24 seconds, 27 to 34 seconds, 38 to 41 seconds, and 43 to 54 seconds; the set identified from existing video A is: 15 to 24 seconds, 27 to 34 seconds, 38 to 41 seconds, and 43 to 54 seconds; and the set identified from existing video B is: 12 to 21 seconds, 24 to 31 seconds, 35 to 38 seconds, and 40 to 51 seconds. Then the time period set of existing video A overlaps completely with that of the current video, giving a similarity of 100%, while each time period of existing video B starts 3 seconds earlier than the corresponding period of the current video, giving a similarity of 96%.
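A minimal Python sketch of such a rule-based comparison follows; the function name and the fallback value of 0 for unmatched cases are assumptions, and only the two rules from the example are encoded.

```python
# A sketch of the rule-based time-period comparison from the example above.
# Periods are (start, end) pairs in seconds; the rule values mirror the text.
def period_similarity(current, existing):
    if current == existing:                  # period sets overlap completely
        return 1.00
    cur_lens = [end - start for start, end in current]
    ex_lens = [end - start for start, end in existing]
    if cur_lens == ex_lens:                  # equal lengths, shifted starts
        return 0.96
    return 0.0                               # no rule matched (assumption)

current = [(15, 24), (27, 34), (38, 41), (43, 54)]
video_a = [(15, 24), (27, 34), (38, 41), (43, 54)]
video_b = [(12, 21), (24, 31), (35, 38), (40, 51)]

print(period_similarity(current, video_a))   # 1.0, like existing video A
print(period_similarity(current, video_b))   # 0.96, like existing video B
```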
In some optional implementations of this embodiment, comparing the time periods in which faces appear in the current video with the time periods in which faces appear in each existing video in the existing video library to obtain the similarity between each existing video and the current video includes: performing face tracking on the faces in the current video, and determining a first time period set in which faces appear in the current video; performing face tracking on the faces in each existing video, and determining a second time period set in which faces appear in each existing video; and comparing the first time period set of the current video with the second time period set of each existing video in the existing video library, to obtain the similarity between each existing video and the current video.
In this implementation, face tracking can be used to determine the video frames that contain faces, and the sets of time periods corresponding to those frames. Comparing the first time period set of the current video with the second time period set of each existing video in the existing video library then quickly determines the similarity of the time period sets of the two videos.
Step 203, in response to the comparison result indicating that the existing video library contains an existing video whose similarity to the current video is greater than the preset threshold, determining that the current video duplicates that existing video.
In this embodiment, if any similarity obtained in step 202 is greater than the threshold, the current video is highly similar to the corresponding existing video, and the existing video corresponding to that similarity is determined to duplicate the current video.
An exemplary application scenario of the method for determining duplicate videos of the present application is described below in conjunction with fig. 3.
FIG. 3 shows a schematic flow chart of an application scenario of the method for determining duplicate videos according to the present application.
As shown in fig. 3, a method 300 for determining duplicate videos runs in an electronic device 310 and may include:
firstly, acquiring a current video 301;
then, comparing the time periods 302 in which faces appear in the current video 301 with the time periods 305 in which faces appear in each existing video 304 in the existing video library 303, to obtain the similarity 306 between each existing video and the current video;
finally, in response to the comparison result indicating that the existing video library 303 contains an existing video 308 whose similarity 306 to the current video 301 is greater than the preset threshold 307, determining that the current video 301 duplicates the existing video 308.
It should be understood that the application scenario shown in FIG. 3 is only an exemplary description of the method for determining duplicate videos and does not limit the method. For example, the steps shown in FIG. 3 may be implemented in further detail.
According to the method for determining duplicate videos provided by the above embodiment of the present application, a current video is first acquired; the time periods in which faces appear in the current video are then compared with the time periods in which faces appear in each existing video in the existing video library, to obtain the similarity between each existing video and the current video; finally, in response to the comparison result indicating that the existing video library contains an existing video whose similarity to the current video is greater than a preset threshold, it is determined that the current video duplicates that existing video. Because duplicates are determined from the similarity between the face time periods of each existing video and those of the current video, both the efficiency and the accuracy of duplicate detection are improved.
Referring to fig. 4, a flow diagram of yet another embodiment of a method for determining duplicate videos in accordance with the present application is shown.
As shown in FIG. 4, a flow 400 of the method for determining duplicate videos according to this embodiment may include the following steps:
in step 401, a current video is acquired.
In this embodiment, an electronic device (e.g., the server or terminal shown in FIG. 1) on which the method for determining duplicate videos runs may obtain the current video locally or from another terminal or server. The current video is the video for which it must be determined whether it duplicates an existing video.
In step 402, face tracking is performed on the faces in the current video, and a first time period set in which faces appear in the current video is determined.
In this embodiment, face tracking is performed on the faces in the current video, yielding the image frames in which faces appear in the current video and the first timestamps corresponding to those frames. Each time period in which a face appears in the current video can then be computed from these first timestamps; that is, the first time period set is obtained.
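A minimal sketch of this timestamp-merging step follows; the 0.5-second sampling interval and the 1.0-second merge gap are illustrative assumptions, not values from the patent.

```python
# A sketch of deriving face time periods from per-frame face timestamps.
def timestamps_to_periods(timestamps, max_gap=1.0):
    """Merge sorted timestamps (seconds) into (start, end) time periods."""
    periods = []
    start = prev = timestamps[0]
    for t in timestamps[1:]:
        if t - prev > max_gap:            # a long gap closes the current period
            periods.append((start, prev))
            start = t
        prev = t
    periods.append((start, prev))
    return periods

# Timestamps of frames in which the tracker reported a face, sampled at 0.5 s:
stamps = [t / 2 for t in range(30, 49)] + [t / 2 for t in range(54, 69)]
print(timestamps_to_periods(stamps))      # [(15.0, 24.0), (27.0, 34.0)]
```

The second time period sets of the existing videos (step 403 below) can be derived in the same way.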
In step 403, face tracking is performed on the faces in each existing video in the existing video library, and a second time period set in which faces appear in each existing video is determined.
In this embodiment, face tracking is performed on the faces in each existing video, yielding the image frames in which faces appear in each existing video and the second timestamps corresponding to those frames. Each time period in which a face appears in each existing video can then be computed from these second timestamps; that is, the second time period sets are obtained.
In step 404, the first time period set of the current video is compared with the second time period set of each existing video in the existing video library, to obtain a set of preliminary similarity values.
In this embodiment, first comparing the first time period set of the current video with the second time period set of each existing video yields an approximate similarity between the two videos, which serves as the basis for a further, more precise determination.
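The resulting coarse-to-fine control flow can be sketched as follows; the function names, the single shared threshold, and the stand-in similarity callables are assumptions for illustration.

```python
# A sketch of the two-stage comparison: a cheap time-period pre-check (step 404)
# gates the more expensive face-label comparison (steps 405-407).
PRE_THRESHOLD = 0.9   # illustrative; the patent only refers to a preset threshold

def find_duplicates(current, library, period_sim, label_sim):
    duplicates = []
    for existing in library:
        if period_sim(current, existing) <= PRE_THRESHOLD:
            continue                        # coarse stage filters this video out
        if label_sim(current, existing) > PRE_THRESHOLD:
            duplicates.append(existing)     # fine stage confirms the duplicate
    return duplicates
```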
In step 405, in response to a preliminary similarity value greater than the preset threshold existing in the set of preliminary similarity values, a second face label set corresponding to each second time period of the existing video is determined based on the existing video corresponding to that preliminary similarity value.
In this embodiment, in response to the comparison in step 404 yielding a preliminary similarity value greater than the preset threshold, the electronic device may further compare the content of the existing video corresponding to that value with the content of the current video. Specifically, a second face label set corresponding to each second time period of the existing video may be extracted.
In a specific example, the existing video C corresponding to a preliminary similarity value greater than the preset threshold contains 12 second time periods that include faces. For the 5 video frames in the first of these periods, the face label set C1 they contain can be obtained: { "AAA", "BBB", "CCC", "AAA", "DDD", "CCC", "BBB", "DDD", "CCC", "ABC", "ACD", "DDD" }. By analogy, the face label set of video C is {C1, C2, C3, C4, C5, C6, C7, C8, C9, C10, C11, C12}, where C1 to C12 denote the face label sets corresponding to the first through twelfth second time periods of video C.
In another specific example, the existing video D corresponding to a preliminary similarity value greater than the preset threshold contains 12 second time periods that include faces. For the 5 video frames in the first of these periods, the face label set D1 they contain can be obtained: { "AEF", "BCD", "CMN", "ABF", "DEF", "CBD", "BCE", "DEF", "CBD", "AEF", "ABF", "CBD", "AEF", "ACD", "DEF" }. By analogy, the face label set of video D is {D1, D2, D3, D4, D5, D6, D7, D8, D9, D10, D11, D12}, where D1 to D12 denote the face label sets corresponding to the first through twelfth second time periods of video D.
In step 406, a first face label set corresponding to each first time period is determined based on the current video.
In this embodiment, the electronic device may extract the content of the current video for further comparison with the content of the existing videos selected in step 405, i.e., those whose preliminary similarity value is greater than the preset threshold. Specifically, a first face label set corresponding to each first time period of the current video may be extracted.
In a specific example, the current video E contains 12 first time periods that include faces. For the 5 video frames in the first of these periods, the face label set E1 they contain can be obtained: { "AAA", "BBB", "CCC", "AAA", "DDD", "CCC", "BBB", "DDD", "CCC", "ABC", "ACD", "DDD" }. By analogy, the face label set of video E is {E1, E2, E3, E4, E5, E6, E7, E8, E9, E10, E11, E12}, where E1 to E12 denote the face label sets corresponding to the first through twelfth first time periods of video E.
In some optional implementations of this embodiment, the first face label set includes a first face label subset sequence, each first face label subset in the sequence corresponding to one video frame in the current video and including a plurality of face labels; the second face label set includes a second face label subset sequence, each second face label subset in the sequence corresponding to one video frame in the existing video corresponding to the preliminary similarity value greater than the preset threshold and including a plurality of face labels. Comparing the first face labels in the first face label set of the current video with the second face labels in the second face label set of the existing video includes: comparing the face label subset sequence appearing in the first time period set with the face label subset sequence appearing in the second time period set.
In this implementation, when the electronic device extracts the face label set of the current video, or of an existing video corresponding to a preliminary similarity value greater than the preset threshold, it may simultaneously extract the temporal ordering of that face label set. Then, when the face label sets of the two videos are compared, their face label subset sequences can be compared. Because the comparison considers not only the content of the videos but also their temporal features, a more accurate comparison result can be obtained.
In a specific example, the existing video C contains 12 second time periods that include faces. For the 5 video frames in the first of these periods, the face label subset corresponding to each frame can be obtained, yielding the face label subset sequence C1S: { ["AAA", "BBB", "CCC"], ["AAA", "DDD", "CCC", "BBB"], ["DDD", "CCC", "ABC"], ["CCC", "ABC", "ACD", "CCC"], ["ABC", "ACD", "DDD"] }. By analogy, the face label subset sequences of the 12 second time periods {C1S, C2S, C3S, C4S, C5S, C6S, C7S, C8S, C9S, C10S, C11S, C12S} can be obtained, where C1S to C12S denote the face label subset sequences of the first through twelfth second time periods.
In this example, the current video E contains 12 first time periods that include faces. For the 5 video frames in the first of these periods, the face label subset corresponding to each frame can be obtained, yielding the face label subset sequence E1S: { ["AAA", "BBB", "EEE"], ["AAA", "DDD", "EEE", "BBB"], ["DDD", "EEE", "ABE"], ["EEE", "ABE", "AED", "EEE"], ["ABE", "AED", "DDD"] }. By analogy, the face label subset sequences of the 12 first time periods {E1S, E2S, E3S, E4S, E5S, E6S, E7S, E8S, E9S, E10S, E11S, E12S} can be obtained, where E1S to E12S denote the face label subset sequences of the first through twelfth first time periods of video E.
Then, when comparing the existing video C with the current video E, their face label subset sequences may be compared. If the comparison results are identical, the two videos can be determined to be duplicates.
In this implementation, face label subset sequences are used to compare candidate duplicates; the comparison considers the content and the temporal features of the videos at the same time, which improves the accuracy of judging duplicate videos.
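A minimal sketch of this sequence comparison follows, using the first three frames of the C1S and E1S examples above; equality comparison of nested lists is an assumed stand-in for whatever matching rule an implementation actually uses.

```python
# A sketch of comparing per-frame face label subset sequences.
def sequences_match(seq_a, seq_b):
    """True only if both videos show the same labels in the same frame order."""
    return seq_a == seq_b

c1s = [["AAA", "BBB", "CCC"], ["AAA", "DDD", "CCC", "BBB"], ["DDD", "CCC", "ABC"]]
e1s = [["AAA", "BBB", "EEE"], ["AAA", "DDD", "EEE", "BBB"], ["DDD", "EEE", "ABE"]]

print(sequences_match(c1s, c1s))   # True: identical content and timing
print(sequences_match(c1s, e1s))   # False: same timing, different faces
```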
In some optional implementations of this embodiment, determining, based on the current video, the first face label set corresponding to each first time period includes: for each video frame in each first time period of the current video, extracting the features of each face to form a multi-dimensional vector, and forming a first face vector matrix based on the multi-dimensional vectors of the faces. Determining, based on the existing video corresponding to the preliminary similarity value greater than the preset threshold, the second face label set corresponding to each second time period of the existing video includes: for each video frame in each second time period of that existing video, extracting the features of each face to form a multi-dimensional vector, and forming a second face vector matrix based on the multi-dimensional vectors of the faces.
In this implementation, when determining the face label sets for the time periods in which a video contains faces, a face vector matrix may be used to represent the face label set of a single video. One multi-dimensional vector corresponds to the features of a single face, so when several faces appear in a single video, a matrix containing one multi-dimensional vector per face is obtained.
In a specific example, the features of N faces in a single video can be extracted and encoded as 128-dimensional vectors, yielding a 128 × N vector matrix.
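A minimal numpy sketch of assembling such a matrix follows; the random embedding function is a stand-in, since the patent does not specify a particular face feature extractor.

```python
# A sketch of building the 128 x N face vector matrix described above.
import numpy as np

rng = np.random.default_rng(0)

def embed_face(face_crop):
    """Stand-in for a real 128-dimensional face feature extractor."""
    return rng.standard_normal(128)

face_crops = [f"face_{i}" for i in range(5)]           # N = 5 detected faces
matrix = np.stack([embed_face(f) for f in face_crops], axis=1)
print(matrix.shape)                                     # (128, 5)
```

Face labels can then be assigned by comparing the columns of two such matrices, for example by cosine distance.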
In this implementation, the comparison of duplicate videos is quantified using multi-dimensional face vector matrices, which can both speed up the comparison of duplicate videos and improve its accuracy.
In step 407, the first face labels in the first face label set of the current video are compared with the second face labels in the second face label set of the existing video, to obtain the similarity between each existing video and the current video.
In this embodiment, when comparing the first face labels in the first face label set of the current video with the second face labels in the second face label set of an existing video, the more labels the two sets share, the higher the similarity between the existing video and the current video; the fewer they share, the lower the similarity. The similarity can be calculated according to a preset rule or algorithm. For example, the similarity value may be defined as the ratio of the number of identical face labels to the total number of face labels. Alternatively, the face labels in each face label set can be converted into space vectors, and the similarity determined from the distance between the two space vectors.
In a specific example, for the existing video C and the current video E described above, the face labels of the two videos are identical, so the similarity between existing video C and current video E is 100%.
In another specific example, for the existing video D and the current video E described above, the face labels of the two videos differ greatly: only one of the 17 face labels is identical. The similarity between existing video D and current video E is therefore very low; if the similarity value is defined as the ratio of identical face labels to the total number of face labels, the similarity between video D and video E is 5.88%.
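The ratio can be sketched as follows; the patent does not fix the exact counting rule, so the multiset-intersection-over-larger-total choice below, and the truncated label lists, are assumptions for illustration.

```python
# A sketch of a label-overlap similarity in the spirit of the example above.
from collections import Counter

def label_similarity(labels_a, labels_b):
    """Shared label count (with multiplicity) over the larger total count."""
    shared = sum((Counter(labels_a) & Counter(labels_b)).values())
    return shared / max(len(labels_a), len(labels_b))

d1 = ["AEF", "BCD", "CMN", "ABF", "DEF", "CBD", "BCE", "ACD"]   # truncated D1
e1 = ["AAA", "BBB", "CCC", "DDD", "ABC", "ACD"]                 # truncated E1
print(f"{label_similarity(d1, e1):.2%}")   # 12.50%: only "ACD" is shared
```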
In step 408, in response to the comparison result indicating that the existing video library contains an existing video whose similarity to the current video is greater than the preset threshold, it is determined that the current video duplicates that existing video.
In this embodiment, if the comparison result of step 407 indicates that the similarity between the current video and an existing video is greater than the preset threshold, i.e., the existing video library contains an existing video highly similar to the current video, it can be determined that the current video duplicates that existing video.
Building on the embodiment shown in FIG. 2, the method for determining duplicate videos in the above embodiment of the present application additionally compares the video content, thereby improving the accuracy of determining duplicate videos.
With further reference to FIG. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for determining duplicate videos, which corresponds to the method embodiments shown in FIGS. 2 to 4 and is particularly applicable to various electronic devices.
As shown in FIG. 5, the apparatus 500 for determining duplicate videos of this embodiment may include: a video acquisition unit 510 configured to acquire a current video; a video comparison unit 520 configured to compare the time periods in which faces appear in the current video with the time periods in which faces appear in each existing video in the existing video library, to obtain the similarity between each existing video and the current video; and a duplication determination unit 530 configured to determine, in response to the comparison result indicating that the existing video library contains an existing video whose similarity to the current video is greater than a preset threshold, that the current video duplicates that existing video.
In some optional implementations of this embodiment, the video comparison unit includes (not shown in the figure): a first tracking subunit configured to perform face tracking on the faces in the current video and determine a first time period set in which faces appear in the current video; a second tracking subunit configured to perform face tracking on the faces in each existing video and determine a second time period set in which faces appear in each existing video; and a similarity determination subunit configured to compare the first time period set of the current video with the second time period set of each existing video in the existing video library, to obtain the similarity between each existing video and the current video.
In some optional implementations of this embodiment, the video comparison unit includes (not shown in the figure): a first tracking subunit configured to perform face tracking on the faces in the current video and determine a first time period set in which faces appear in the current video; a second tracking subunit configured to perform face tracking on the faces in each existing video in the existing video library and determine a second time period set in which faces appear in each existing video; a pre-comparison subunit configured to compare the first time period set of the current video with the second time period set of each existing video in the existing video library, to obtain a set of preliminary similarity values; a second determining subunit configured to determine, in response to a preliminary similarity value greater than a preset threshold existing in the set of preliminary similarity values, a second face label set corresponding to each second time period of the existing video, based on the existing video corresponding to that preliminary similarity value; a first determining subunit configured to determine, based on the current video, a first face label set corresponding to each first time period; and a similarity comparison subunit configured to compare the first face labels in the first face label set of the current video with the second face labels in the second face label set of the existing video, to obtain the similarity between each existing video and the current video.
In some optional implementations of this embodiment, the first determining subunit (not shown in the figure) is further configured to: for each video frame in each first time period of the current video, extract the features of each face to form a multi-dimensional vector, and form a first face vector matrix based on the multi-dimensional vectors of the faces; the second determining subunit (not shown in the figure) is further configured to: for each video frame in each second time period of the existing video corresponding to the preliminary similarity value greater than the preset threshold, extract the features of each face to form a multi-dimensional vector, and form a second face vector matrix based on the multi-dimensional vectors of the faces.
In some optional implementations of this embodiment, the first face label set in the first determining subunit (not shown in the figure) includes a first face label subset sequence, each first face label subset in the sequence corresponding to one video frame in the current video and including a plurality of face labels; the second face label set in the second determining subunit (not shown in the figure) includes a second face label subset sequence, each second face label subset in the sequence corresponding to one video frame in the existing video corresponding to a preliminary similarity value greater than the preset threshold and including a plurality of face labels; the similarity comparison subunit (not shown in the figure) is further configured to: compare the face label subset sequence appearing in the first time period set with the face label subset sequence appearing in the second time period set.
It should be understood that the units recited in the apparatus 500 correspond to the steps of the methods described with reference to FIGS. 2 to 4. The operations and features described above for the methods therefore apply equally to the apparatus 500 and the units it includes, and are not repeated here. Although the first tracking subunit and the second tracking subunit appear in several different optional implementations, the functions implemented by the first tracking subunit are identical across those implementations, as are the functions implemented by the second tracking subunit; the same first tracking subunit and second tracking subunit may therefore serve in the different optional implementations.
Referring now to FIG. 6, a block diagram of a computer system 600 suitable for implementing a server according to embodiments of the present application is shown. The terminal device or server shown in FIG. 6 is only an example and should not limit the functions or scope of use of the embodiments of the present application.
As shown in FIG. 6, the computer system 600 includes a central processing unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read therefrom can be installed into the storage section 608 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor including a video acquisition unit, a video comparison unit, and a duplication determination unit. The names of these units do not, in some cases, limit the units themselves; for example, the video acquisition unit may also be described as "a unit that acquires a current video".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire a current video; compare the time periods in which faces appear in the current video with the time periods in which faces appear in each existing video in an existing video library, to obtain the similarity between each existing video and the current video; and, in response to the comparison result indicating that the existing video library contains an existing video whose similarity to the current video is greater than a preset threshold, determine that the current video duplicates that existing video.
The above description is only a preferred embodiment of the present application and an illustration of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention referred to herein is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, technical solutions in which the above features are replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (12)

1. A method for determining duplicate video, comprising:
acquiring a current video;
comparing time periods in which faces appear in the current video with time periods in which faces appear in each existing video in an existing video library, to obtain a similarity between each existing video and the current video; and
determining, in response to a comparison result indicating that an existing video whose similarity to the current video is greater than a preset threshold exists in the existing video library, that the current video duplicates the existing video.
2. The method of claim 1, wherein the comparing the time period including the face in the current video with the time period including the face in each existing video in the existing video library to obtain the similarity between each existing video and the current video comprises:
performing face tracking on the face in the current video, and determining a first time period set in which the face in the current video appears;
carrying out face tracking on the face in each existing video, and determining a second time period set in which the face in each existing video appears;
and comparing the first time period set of the current video with the second time period set of each existing video in the existing video library to obtain the similarity between each existing video and the current video.
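One plausible reading of this comparison step, sketched in Python: treat each time period set as a list of intervals and score two videos by the overlap of their face-appearance intervals. The Jaccard-style measure below is an assumption; the claim does not fix the exact similarity function.

```python
from typing import List, Tuple

Period = Tuple[float, float]  # (start_second, end_second)


def total_length(periods: List[Period]) -> float:
    # Assumes the periods within one set do not overlap one another.
    return sum(end - start for start, end in periods)


def overlap_length(a: List[Period], b: List[Period]) -> float:
    # Summed pairwise intersection of the two interval sets.
    return sum(
        max(0.0, min(end_a, end_b) - max(start_a, start_b))
        for start_a, end_a in a
        for start_b, end_b in b
    )


def period_set_similarity(first: List[Period], second: List[Period]) -> float:
    # Jaccard-style ratio: shared face time over combined face time.
    shared = overlap_length(first, second)
    union = total_length(first) + total_length(second) - shared
    return shared / union if union > 0 else 0.0


# Example: the second video's face segments lie inside the first's.
print(period_set_similarity([(0.0, 5.0), (10.0, 15.0)],
                            [(0.5, 5.0), (10.0, 14.0)]))  # 0.85
```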
3. The method of claim 1, wherein the comparing the time period in which a face appears in the current video with the time period in which a face appears in each existing video in the existing video library to obtain the similarity between each existing video and the current video comprises:
performing face tracking on the face in the current video, and determining a first time period set in which the face in the current video appears;
performing face tracking on the face in each existing video in the existing video library, and determining a second time period set in which the face in each existing video appears;
comparing the first time period set of the current video with the second time period sets of all existing videos in the existing video library to obtain a pre-judgment similarity value set;
in response to a pre-judgment similarity value greater than a predetermined threshold existing in the pre-judgment similarity value set, determining, based on the existing video corresponding to the pre-judgment similarity value greater than the predetermined threshold, a second face label set corresponding to each second time period of the existing video;
determining, based on the current video, a first face label set corresponding to each first time period;
and comparing a first face label in the first face label set of the current video with a second face label in the second face label set of the existing video to obtain the similarity between each existing video and the current video.
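Read as a pipeline, claim 3 adds a cheap pre-judgment pass over time periods before the more expensive face-label comparison. A hedged sketch follows, reusing period_set_similarity from the sketch above; the threshold value and the set-overlap label measure are assumptions (claim 5 describes a finer per-frame comparison).

```python
from typing import Dict, List, Set, Tuple

Period = Tuple[float, float]

PREDETERMINED_THRESHOLD = 0.6  # assumed pre-judgment cut-off


def two_stage_similarities(
    current_periods: List[Period],
    current_labels: Set[str],
    library: Dict[str, Tuple[List[Period], Set[str]]],
) -> Dict[str, float]:
    similarities: Dict[str, float] = {}
    for name, (periods, labels) in library.items():
        # Stage 1: cheap pre-judgment on time-period overlap alone.
        prejudgment = period_set_similarity(current_periods, periods)
        if prejudgment <= PREDETERMINED_THRESHOLD:
            similarities[name] = prejudgment  # weak candidate; stop here
            continue
        # Stage 2: refine strong candidates by comparing face labels
        # (here a simple Jaccard overlap of the two label sets).
        union = current_labels | labels
        similarities[name] = (
            len(current_labels & labels) / len(union) if union else 0.0
        )
    return similarities
```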
4. The method of claim 3, wherein the determining, based on the current video, the first face label set corresponding to each first time period comprises: for each video frame in each first time period of the current video, extracting features of each face to form a multi-dimensional vector, and forming a first face vector matrix based on the multi-dimensional vectors of the faces;
and the determining, based on the existing video corresponding to the pre-judgment similarity value greater than the predetermined threshold, the second face label set corresponding to each second time period of the existing video comprises: for each video frame in each second time period of the existing video corresponding to the pre-judgment similarity value greater than the predetermined threshold, extracting features of each face to form a multi-dimensional vector, and forming a second face vector matrix based on the multi-dimensional vectors of the faces.
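A sketch of how the face vector matrices of claim 4 might be built and compared, assuming numpy plus caller-supplied detect_faces and embed_face functions (e.g. a 128-dimensional face embedding). Nothing here is mandated by the claim beyond stacking per-face vectors into a matrix; the cosine comparison is one plausible choice.

```python
import numpy as np


def face_vector_matrix(frames, detect_faces, embed_face, dim: int = 128) -> np.ndarray:
    # Embed every face found in every frame of one time period and stack
    # the multi-dimensional vectors row-wise into a single matrix.
    vectors = [embed_face(face) for frame in frames for face in detect_faces(frame)]
    return np.stack(vectors) if vectors else np.empty((0, dim))


def matrix_similarity(first: np.ndarray, second: np.ndarray) -> float:
    # Cosine-normalize both matrices, then average each first-matrix vector's
    # best match among the second-matrix vectors; assumes both are non-empty.
    a = first / np.linalg.norm(first, axis=1, keepdims=True)
    b = second / np.linalg.norm(second, axis=1, keepdims=True)
    return float((a @ b.T).max(axis=1).mean())
```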
5. The method of claim 3 or 4, wherein the first face label set comprises a first face label subset sequence, each first face label subset in the first face label subset sequence corresponding to one video frame of the current video and comprising a plurality of face labels;
the second face label set comprises a second face label subset sequence, each second face label subset in the second face label subset sequence corresponding to one video frame in the existing video corresponding to the pre-judgment similarity value greater than the predetermined threshold and comprising a plurality of face labels;
and the comparing a first face label in the first face label set of the current video with a second face label in the second face label set of the existing video comprises: comparing the face label subset sequence appearing in the first time period set with the face label subset sequence appearing in the second time period set.
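A sketch of the per-frame comparison in claim 5, with the matching rule an assumption: represent each face label subset sequence as a list of per-frame label sets and average their frame-by-frame overlap.

```python
from typing import List, Set


def subset_sequence_similarity(first: List[Set[str]], second: List[Set[str]]) -> float:
    # Average the Jaccard overlap of the per-frame face-label subsets over the
    # frames the two sequences share; the patent does not fix the exact rule.
    if not first or not second:
        return 0.0
    scores = [
        len(a & b) / len(a | b) if a | b else 1.0
        for a, b in zip(first, second)
    ]
    return sum(scores) / len(scores)


# Example: the sequences agree on two of three frames.
seq_a = [{"face_0"}, {"face_0", "face_1"}, {"face_1"}]
seq_b = [{"face_0"}, {"face_0", "face_1"}, set()]
print(subset_sequence_similarity(seq_a, seq_b))  # ≈ 0.67
```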
6. An apparatus for determining duplicate video, comprising:
a video acquisition unit configured to acquire a current video;
a video comparison unit configured to compare the time period in which a face appears in the current video with the time period in which a face appears in each existing video in the existing video library to obtain a similarity between each existing video and the current video;
a duplication determination unit configured to determine that the current video duplicates an existing video in the existing video library in response to a result of the comparison indicating that an existing video having a similarity to the current video greater than a preset threshold exists in the existing video library.
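The unit decomposition of claim 6 maps naturally onto three cooperating components. The following is a structural sketch only, with the unit names taken from the claim and everything else (method signatures, the threshold default) assumed:

```python
from typing import Dict, List


class VideoAcquisitionUnit:
    def acquire(self) -> str:
        # Stand-in for fetching the current (e.g. newly uploaded) video.
        raise NotImplementedError


class VideoComparisonUnit:
    def compare(self, current: str, library: List[str]) -> Dict[str, float]:
        # Stand-in for the time-period comparison against each existing video.
        raise NotImplementedError


class DuplicationDeterminationUnit:
    def __init__(self, preset_threshold: float = 0.8):  # assumed default
        self.preset_threshold = preset_threshold

    def is_duplicate(self, similarities: Dict[str, float]) -> bool:
        # Duplicate if any existing video's similarity exceeds the threshold.
        return any(s > self.preset_threshold for s in similarities.values())
```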
7. The apparatus of claim 6, wherein the video comparison unit comprises:
a first tracking subunit configured to perform face tracking on the face in the current video and determine a first time period set in which the face in the current video appears;
a second tracking subunit configured to perform face tracking on the face in each existing video and determine a second time period set in which the face in each existing video appears;
and a similarity determining subunit configured to compare the first time period set of the current video with the second time period set of each existing video in the existing video library to obtain the similarity between each existing video and the current video.
8. The apparatus of claim 6, wherein the video comparison unit comprises:
a first tracking subunit configured to perform face tracking on the face in the current video and determine a first time period set in which the face in the current video appears;
a second tracking subunit configured to perform face tracking on the face in each existing video in the existing video library and determine a second time period set in which the face in each existing video appears;
a pre-comparison subunit configured to compare the first time period set of the current video with the second time period sets of the existing videos in the existing video library to obtain a pre-judgment similarity value set;
a second determining subunit configured to, in response to a pre-judgment similarity value greater than a predetermined threshold existing in the pre-judgment similarity value set, determine, based on the existing video corresponding to the pre-judgment similarity value greater than the predetermined threshold, a second face label set corresponding to each second time period of the existing video;
a first determining subunit configured to determine, based on the current video, a first face label set corresponding to each first time period;
and a similarity comparison subunit configured to compare a first face label in the first face label set of the current video with a second face label in the second face label set of the existing video to obtain the similarity between each existing video and the current video.
9. The apparatus of claim 8, wherein the first determining subunit is further configured to: for each video frame in each first time period of the current video, extract features of each face to form a multi-dimensional vector, and form a first face vector matrix based on the multi-dimensional vectors of the faces;
and the second determining subunit is further configured to: for each video frame in each second time period of the existing video corresponding to the pre-judgment similarity value greater than the predetermined threshold, extract features of each face to form a multi-dimensional vector, and form a second face vector matrix based on the multi-dimensional vectors of the faces.
10. The apparatus of claim 8 or 9, wherein the first face label set in the first determining subunit comprises a first face label subset sequence, each first face label subset in the first face label subset sequence corresponding to one video frame in the current video and comprising a plurality of face labels;
the second face label set in the second determining subunit comprises a second face label subset sequence, each second face label subset in the second face label subset sequence corresponding to one video frame in the existing video corresponding to the pre-judgment similarity value greater than the predetermined threshold and comprising a plurality of face labels;
and the similarity comparison subunit is further configured to compare the face label subset sequence appearing in the first time period set with the face label subset sequence appearing in the second time period set.
11. A server, comprising:
one or more processors;
a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium on which a computer program is stored which, when executed by a processor, carries out the method according to any one of claims 1-5.
CN201811458416.2A 2018-11-30 2018-11-30 Method and apparatus for determining duplicate video Active CN111259194B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811458416.2A CN111259194B (en) 2018-11-30 2018-11-30 Method and apparatus for determining duplicate video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811458416.2A CN111259194B (en) 2018-11-30 2018-11-30 Method and apparatus for determining duplicate video

Publications (2)

Publication Number Publication Date
CN111259194A true CN111259194A (en) 2020-06-09
CN111259194B CN111259194B (en) 2023-06-23

Family

ID=70948295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811458416.2A Active CN111259194B (en) 2018-11-30 2018-11-30 Method and apparatus for determining duplicate video

Country Status (1)

Country Link
CN (1) CN111259194B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101359368A (en) * 2008-09-09 2009-02-04 华为技术有限公司 Video image clustering method and system
CN103049459A (en) * 2011-10-17 2013-04-17 天津市亚安科技股份有限公司 Feature recognition based quick video retrieval method
US20150205997A1 (en) * 2012-06-25 2015-07-23 Nokia Corporation Method, apparatus and computer program product for human-face features extraction
CN103475935A (en) * 2013-09-06 2013-12-25 北京锐安科技有限公司 Method and device for retrieving video segments
FR3031825A1 (en) * 2015-01-19 2016-07-22 Rizze METHOD FOR FACIAL RECOGNITION AND INDEXING OF RECOGNIZED PERSONS IN A VIDEO STREAM
CN105512348A (en) * 2016-01-28 2016-04-20 北京旷视科技有限公司 Method and device for processing videos and related audios and retrieving method and device
KR20180079894A (en) * 2017-01-03 2018-07-11 한국전자통신연구원 System and method for providing face recognition information and server using the method
US20180242027A1 (en) * 2017-02-22 2018-08-23 International Business Machines Corporation System and method for perspective switching during video access
US20180336931A1 (en) * 2017-05-22 2018-11-22 Adobe Systems Incorporated Automatic and intelligent video sorting

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DEEPTI YADAV et al.: "Face Identification Methodologies in Videos", 2015 Global Conference on Communication Technologies (GCCT) *
YAO Qing et al.: "Video News Annotation Based on Semantic Faces", Computer Science *
HU Yifan et al.: "Research on a Face Detection, Tracking and Recognition System Based on Video Surveillance", Computer Engineering and Applications *
HU Yifan et al.: "Research on a Face Detection, Tracking and Recognition System Based on Video Surveillance", Computer Engineering and Applications, 1 November 2016 (2016-11-01) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113177538A (en) * 2021-06-30 2021-07-27 腾讯科技(深圳)有限公司 Video cycle identification method and device, computer equipment and storage medium
CN113177538B (en) * 2021-06-30 2021-08-24 腾讯科技(深圳)有限公司 Video cycle identification method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111259194B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN108830235B (en) Method and apparatus for generating information
CN109460514B (en) Method and device for pushing information
CN108989882B (en) Method and apparatus for outputting music pieces in video
EP3451328A1 (en) Method and apparatus for verifying information
CN107944481B (en) Method and apparatus for generating information
CN109034069B (en) Method and apparatus for generating information
CN112559800B (en) Method, apparatus, electronic device, medium and product for processing video
KR102002024B1 (en) Method for processing labeling of object and object management server
CN108509611B (en) Method and device for pushing information
CN112200067B (en) Intelligent video event detection method, system, electronic equipment and storage medium
US11164004B2 (en) Keyframe scheduling method and apparatus, electronic device, program and medium
CN109862100B (en) Method and device for pushing information
CN108595448B (en) Information pushing method and device
CN113592535B (en) Advertisement recommendation method and device, electronic equipment and storage medium
CN110941978B (en) Face clustering method and device for unidentified personnel and storage medium
CN110209658B (en) Data cleaning method and device
CN111259663A (en) Information processing method and device
CN109934142B (en) Method and apparatus for generating feature vectors of video
CN111897950A (en) Method and apparatus for generating information
CN113378855A (en) Method for processing multitask, related device and computer program product
CN108038172B (en) Search method and device based on artificial intelligence
CN115801980A (en) Video generation method and device
CN108512674B (en) Method, device and equipment for outputting information
CN109919220B (en) Method and apparatus for generating feature vectors of video
CN109064464B (en) Method and device for detecting burrs of battery pole piece

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant