CN109165574A - video detecting method and device - Google Patents
- Publication number
- CN109165574A (application number CN201810879285.9A)
- Authority
- CN
- China
- Prior art keywords
- sequence
- detected
- key frame
- video
- keyframe
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/48—Matching video sequences
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Television Signal Processing For Recording (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the present application disclose a video detection method and device. One embodiment of the video detection method includes: obtaining a to-be-detected key frame sequence, where each key frame in the sequence matches one of the key frames in a pre-obtained reference video; generating the to-be-detected timestamp sequence of the key frame sequence, where each timestamp in the sequence is the timestamp of one of the to-be-detected key frames; obtaining a reference timestamp sequence, whose entries are the timestamps of those key frames of the reference video that match the to-be-detected key frames; and determining the degree of association between the to-be-detected video and the reference video based on the correspondence between the to-be-detected timestamp sequence and the obtained reference timestamp sequence. This implementation realizes automatic video association detection, avoiding problems such as the low efficiency of manual review and the complexity of hand-crafted rules.
Description
Technical field
The embodiments of the present application relate to the field of image processing, in particular to the field of computer vision, and more particularly to a video detection method and apparatus.
Background technique
Video comparison technology compares the similarity of two videos by analyzing and understanding their images, audio, and other information, so as to judge whether any pair of videos contains identical segments; if they do, the start time and end time of the shared segment can further be provided. This technology addresses the problem of video deduplication: it improves the efficiency of manual review, prevents duplicate videos from being stored and thus saves resources, and effectively avoids recommending duplicate videos during personalized recommendation, improving the user experience.
The primary prior solution to this problem was manually defined rules: whether two videos are duplicates is judged against the rules. After manually inspecting bad cases, engineers formulate policy rules intended to cover most situations. As is well known, defining rules is extremely complex and low-leverage work: after manually reviewing a large number of matching results, one summarizes the rules and then writes out a series of "if ... then ..." conditions. Whenever a new case appears, still more rules must be added.
Summary of the invention
The embodiment of the present application proposes video detecting method and device.
In a first aspect, an embodiment of the present application provides a video detection method, comprising: obtaining a to-be-detected key frame sequence, where each key frame in the sequence matches one of the key frames in a pre-obtained reference video; generating the to-be-detected timestamp sequence of the key frame sequence, where each timestamp in the sequence is the timestamp of one of the to-be-detected key frames; obtaining a reference timestamp sequence, whose entries are the timestamps of those key frames of the reference video that match the to-be-detected key frames; and determining the degree of association between the to-be-detected video and the reference video based on the correspondence between the to-be-detected timestamp sequence and the obtained reference timestamp sequence.
In some embodiments, before obtaining the to-be-detected key frame sequence, the method further includes: extracting key frames of the to-be-detected video to generate a key frame sequence; and performing matching-degree detection between the key frame sequence and a pre-obtained reference key frame sequence to obtain the to-be-detected key frame sequence and a matching reference key frame sequence, where a to-be-detected key frame in the to-be-detected key frame sequence is a frame of the key frame sequence that matches one of the reference key frames, and a matching reference key frame in the matching reference key frame sequence is a frame of the reference key frame sequence that matches one of the to-be-detected key frames.
In some embodiments, performing matching-degree detection between the key frame sequence and the pre-obtained reference key frame sequence includes: for each key frame in the key frame sequence, inputting the key frame together with a reference key frame of the reference key frame sequence into a pre-trained matching-degree detection model to determine the matching degree between the key frame and the reference key frame.
In some embodiments, determining the degree of association between the to-be-detected video and the reference video based on the correspondence between the to-be-detected timestamp sequence and the obtained reference timestamp sequence includes: generating the correspondence between the to-be-detected timestamp sequence and the reference timestamp sequence, and inputting the generated correspondence into a pre-trained association-degree model to determine the degree of association between the to-be-detected video and the reference video.
In some embodiments, the association-degree model is trained as follows: establishing an initial association-degree model; obtaining a training sample set, where each training sample includes the correspondence of two pre-generated videos and an association-degree label annotating the degree of association of the two videos; and inputting the training sample set into the initial association-degree model and training the initial model with a preset loss function to obtain the trained association-degree model.
In a second aspect, an embodiment of the present application provides a video detection device, comprising: a key frame sequence obtaining unit configured to obtain a to-be-detected key frame sequence, where each key frame in the sequence matches one of the key frames in a pre-obtained reference video; a timestamp generation unit configured to generate the to-be-detected timestamp sequence of the key frame sequence, where each timestamp in the sequence is the timestamp of one of the to-be-detected key frames; a reference timestamp obtaining unit configured to obtain a reference timestamp sequence, whose entries are the timestamps of those key frames of the reference video that match the to-be-detected key frames; and a determination unit configured to determine the degree of association between the to-be-detected video and the reference video based on the correspondence between the to-be-detected timestamp sequence and the obtained reference timestamp sequence.
In some embodiments, the device further includes: an extraction unit configured to extract key frames of the to-be-detected video and generate a key frame sequence; and a matching-degree detection unit configured to perform matching-degree detection between the key frame sequence and a pre-obtained reference key frame sequence to obtain the to-be-detected key frame sequence and a matching reference key frame sequence, where a to-be-detected key frame in the to-be-detected key frame sequence is a frame of the key frame sequence that matches one of the reference key frames, and a matching reference key frame in the matching reference key frame sequence is a frame of the reference key frame sequence that matches one of the to-be-detected key frames.
In some embodiments, the matching-degree detection unit is further configured to: for each key frame in the key frame sequence, input the key frame together with a reference key frame of the reference key frame sequence into a pre-trained matching-degree detection model to determine the matching degree between the key frame and the reference key frame.
In some embodiments, the determination unit is further configured to: generate the correspondence between the to-be-detected timestamp sequence and the reference timestamp sequence, and input the generated correspondence into a pre-trained association-degree model to determine the degree of association between the to-be-detected video and the reference video.
In some embodiments, the association-degree model is trained as follows: establishing an initial association-degree model; obtaining a training sample set, where each training sample includes the correspondence of two pre-generated videos and an association-degree label annotating the degree of association of the two videos; and inputting the training sample set into the initial association-degree model and training the initial model with a preset loss function to obtain the trained association-degree model.
In a third aspect, an embodiment of the present application provides an electronic device, comprising: one or more processors; and a storage apparatus for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the method described in the first aspect.
According to the video detection method and device provided by the embodiments of the present application, a to-be-detected key frame sequence is first obtained; the to-be-detected timestamp sequence of the key frame sequence is then generated; a reference timestamp sequence is then obtained; finally, the degree of association between the to-be-detected video and the reference video is determined based on the correspondence between the to-be-detected timestamp sequence and the obtained reference timestamp sequence. This realizes automatic video association detection and avoids problems such as the low efficiency of manual approaches and the complexity of hand-crafted rules.
Description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments in light of the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the video detection method of one embodiment of the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the video detection method according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the video detection method according to the present application;
Fig. 4 is a flowchart of another embodiment of the video detection method according to the present application;
Fig. 5 is a schematic flowchart of training the association-degree model in one embodiment of the video detection method according to the present application;
Fig. 6A–Fig. 6C are schematic diagrams of samples for training the association-degree model;
Fig. 7 is a structural diagram of one embodiment of the video detection device according to the present application;
Fig. 8 is a structural schematic diagram of a computer system adapted to implement the electronic device of the video detection method of the embodiments of the present application.
Detailed description of the embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, and are not a limitation of the invention. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the video detection method or video detection device of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired links, wireless communication links, or fiber-optic cables.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices that have a display screen and support video playback, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and the like. When the terminal devices 101, 102, and 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple software programs or software modules (for example, software providing distributed services) or as a single software program or module. No specific limitation is imposed here.
The server 105 may be a server providing various services, for example a background processing server that supports the videos uploaded by the terminal devices 101, 102, and 103. The background processing server may analyze and otherwise process data such as the received videos, and feed the processing result (for example, information indicating whether the video uploaded by a terminal device duplicates a reference video in a preset database) back to the terminal device.
It should be noted that the video detection method provided by the embodiments of the present application may be executed by the server 105; correspondingly, the video detection device may be provided in the server 105.
It should be understood that the numbers of terminal devices 101, 102, 103, networks 104, and servers 105 in Fig. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation needs.
With continued reference to Fig. 2, a process 200 of one embodiment of the video detection method according to the present application is shown. The video detection method includes the following steps:
Step 201: obtain a to-be-detected key frame sequence, where each key frame in the to-be-detected key frame sequence matches one of the key frames in a pre-obtained reference video.
In the present embodiment, the executing body of the video detection method (for example, the server 105 shown in Fig. 1) may obtain the to-be-detected key frame sequence in various feasible ways.
For example, the pre-generated to-be-detected key frame sequence may be obtained through a wired or wireless connection from an electronic device in communication with the executing body. It should be pointed out that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (ultra-wideband) connection, and other wireless connection methods now known or developed in the future.
Here, the to-be-detected key frame sequence may be a sequence formed by arranging one or more to-be-detected key frames in chronological order.
In addition, the reference video in this step can be understood as the benchmark against which the to-be-detected key frame sequence is compared. In some application scenarios, the reference video may be obtained directly from a database storing videos. Alternatively, in other application scenarios, the reference video may be obtained by screening the videos stored in the database in some way. In these application scenarios, for example, the videos stored in the database may be classified in advance (for example, by adding labels to the videos), and the reference video may then be screened out based on the classification.
In this step, "matching" can be understood as follows: each to-be-detected key frame in the to-be-detected key frame sequence shares some commonality with a certain frame in the reference video. For example, in some application scenarios, if the same object appears in a video frame A and in a certain frame of the reference video (including but not limited to the same person, the same animal, or the same scenery), video frame A may serve as a to-be-detected key frame in the to-be-detected key frame sequence.
It can be understood that, in some optional implementations, each to-be-detected key frame in the to-be-detected key frame sequence may come from the same video clip. In these optional implementations, the number of to-be-detected key frames included in the to-be-detected key frame sequence can be used to preliminarily measure the matching degree between the video from which these key frames come and the reference video.
Step 202: generate the to-be-detected timestamp sequence of the to-be-detected key frame sequence, where each timestamp in the to-be-detected timestamp sequence is the timestamp of one of the to-be-detected key frames.
A timestamp is data that can indicate that a piece of data already existed, complete and verifiable, before some specific time. A timestamp is usually a character string that uniquely identifies a moment in time. Correspondingly, the timestamp of a to-be-detected key frame may be data identifying the shooting time of that key frame.
When each to-be-detected key frame in the to-be-detected key frame sequence comes from the same video clip, the key frames can be arranged in the chronological order of their shooting times; correspondingly, the timestamps in the generated to-be-detected timestamp sequence can be arranged in that same order. Specifically, if the to-be-detected key frame sequence is {A, B, C}, and the timestamps of video frames A, B, and C are a, b, and c respectively, then the to-be-detected timestamp sequence corresponding to {A, B, C} is {a, b, c}.
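The construction in step 202 can be sketched in a few lines of Python; the `(frame_id, timestamp)` pair representation is a hypothetical choice for illustration, since the patent does not prescribe any data structure for key frames:

```python
# Sketch of step 202: derive the to-be-detected timestamp sequence from a
# key-frame sequence already arranged in shooting-time order.

def build_timestamp_sequence(key_frames):
    """key_frames: list of (frame_id, timestamp) pairs in shooting-time
    order. Returns the timestamps in the same order."""
    return [timestamp for _, timestamp in key_frames]

# The {A, B, C} -> {a, b, c} example from the text:
sequence = [("A", 12.0), ("B", 15.5), ("C", 19.0)]
print(build_timestamp_sequence(sequence))  # [12.0, 15.5, 19.0]
```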
Step 203: obtain a reference timestamp sequence, where the reference timestamp sequence consists of the timestamps of those key frames of the reference video that match the to-be-detected key frames of the to-be-detected key frame sequence.
For example, in some application scenarios, the sequence formed by the key frames that match the to-be-detected key frame sequence may be obtained in advance, and the timestamps of the video frames in that sequence constitute the reference timestamp sequence.
Step 204: determine the degree of association between the to-be-detected video and the reference video based on the correspondence between the to-be-detected timestamp sequence and the obtained reference timestamp sequence.
In this step, the correspondence between the two timestamp sequences may, for example, be presented as follows: take the numerical value of each to-be-detected timestamp as the abscissa (or ordinate) in a Cartesian coordinate system, and the numerical value of each reference timestamp as the ordinate (or abscissa), thereby determining a set of coordinate points in the Cartesian coordinate system. The positions and distribution of these coordinate points can be considered to characterize the correspondence between the to-be-detected timestamp sequence and the obtained reference timestamp sequence.
Specifically, suppose the to-be-detected key frame sequence is {A1, B1, C1} and the to-be-detected timestamp sequence is {a1, b1, c1}. In addition, suppose video frames A2, B2, and C2 are the key frames of the reference video that match A1, B1, and C1 respectively, and a2, b2, and c2 are their timestamps, so that the reference timestamp sequence is {a2, b2, c2}. Using the approach described above, with a1, b1, c1 as abscissas and a2, b2, c2 as ordinates, three points with coordinates [a1, a2], [b1, b2], and [c1, c2] are obtained. The positions and distribution of these three points can be used to characterize the correspondence between the to-be-detected timestamp sequence and the obtained reference timestamp sequence.
It can be appreciated that if the to-be-detected video corresponding to the to-be-detected timestamp sequence and the reference video have a high degree of association, then the matching key frames in the two videos are likely to occur at close or even identical moments. Therefore, by examining the correspondence between the to-be-detected timestamp sequence and the obtained reference timestamp sequence, it can be determined whether and to what degree the to-be-detected video is associated with the reference video.
In some optional implementations, the coordinate points generated in the above manner can be fitted, and it can be judged whether the coefficients of the fitted polynomial satisfy preset conditions; if so, it can be determined that the degree of association between the to-be-detected video and the reference video is high.
For example, in some application scenarios, the video detection method of the present embodiment is used to detect video clips identical to those of some reference video. Then, through step 204, the correspondence between the to-be-detected timestamp sequence and the obtained reference timestamp sequence can be fitted. In these application scenarios, if the fitting result of the correspondence is a first-order polynomial (that is, the coefficients of the quadratic and higher-order terms are 0), it can be judged that the to-be-detected video and the reference video are associated (that is, identical).
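Under the assumption that timestamps are numeric (for example, seconds), the fitting check described above can be sketched as follows; the degree-2 fit and the tolerance are illustrative choices, not values fixed by the patent:

```python
# Sketch of the optional fitting check in step 204: pair each to-be-detected
# timestamp with its matched reference timestamp, fit a polynomial, and treat
# a vanishing quadratic coefficient as evidence of a first-order relationship
# between the two sequences. Degree and tolerance are assumptions.
import numpy as np

def associated_by_fit(detected_ts, reference_ts, tol=1e-6):
    coeffs = np.polyfit(detected_ts, reference_ts, deg=2)
    return bool(abs(coeffs[0]) < tol)  # quadratic term ~ 0 -> associated

# A clip reused with a constant 10 s offset lies on a straight line:
print(associated_by_fit([1.0, 2.0, 3.0, 4.0], [11.0, 12.0, 13.0, 14.0]))  # True
```

A genuinely nonlinear correspondence (for example, reference timestamps growing quadratically) leaves a large quadratic coefficient, so the same check rejects it.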
According to the video detection method provided by this embodiment, a to-be-detected key frame sequence is first obtained; the to-be-detected timestamp sequence of the key frame sequence is then generated; a reference timestamp sequence is then obtained; finally, the degree of association between the to-be-detected video and the reference video is determined based on the correspondence between the to-be-detected timestamp sequence and the obtained reference timestamp sequence. This realizes automatic video association detection and avoids problems such as the low efficiency of manual approaches and the complexity of hand-crafted rules.
Continuing to refer to Fig. 3, Fig. 3 is a schematic diagram 300 of an application scenario of the video detection method according to the present embodiment. In the application scenario shown in Fig. 3, a user expects to upload the video on a terminal device to a server for publication. Before publication, the server needs to check the video the user expects to publish for duplicates, that is, to judge whether that video already exists in the database.
In this application scenarios, firstly, user 310 uploads video to be detected to server.Server is receiving use
After the video to be detected that family 310 uploads, video to be detected can be extracted, be carried out with video stored in database
The operation such as matching, so that keyframe sequence to be detected is obtained, as shown in appended drawing reference 321.It is understood that by will be to be checked
The key frame and the video in database for surveying video match thus during obtaining keyframe sequence to be detected, can also be with
The reference video frame sequence to match with the keyframe sequence to be detected is correspondingly filtered out from database.
Then, as shown in appended drawing reference 322, after getting keyframe sequence to be detected, it is to be detected that this can be generated
The timestamp sequence to be detected of keyframe sequence, and reference time stamp sequence is correspondingly obtained, as shown in appended drawing reference 323.Most
Afterwards, based on the corresponding relationship between timestamp sequence to be detected and acquired reference time stamp sequence, video to be detected is determined
With the degree of association of reference video, as shown in appended drawing reference 324.So, by this process, according to timestamp sequence to be detected
Corresponding relationship between column and stamp sequence of acquired reference time, it can be determined that user it is expected that the video of publication is in database
It is no to have existed, so that the repetition of video be avoided to upload.
With further reference to Fig. 4, a process 400 of another embodiment of the video detection method is shown. The process 400 of the video detection method includes the following steps:
Step 401: extract key frames of the to-be-detected video and generate a key frame sequence.
A key frame is a frame that describes the main content of a shot. Depending on the complexity of the shot content, one or more key frames may be extracted from a shot, or a key frame may be constructed. As can be seen from this definition, a key frame may be a frame present in the to-be-detected video, or a frame generated from frames of the to-be-detected video.
In the present embodiment, the key frame sequence of the to-be-detected video can be obtained in any feasible way, including but not limited to shot-based key frame extraction, content-based key frame extraction, motion-based key frame extraction, Euclidean-distance-based key frame extraction, and so on. These key frame extraction methods are known in the prior art and are not described in detail here.
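As a toy illustration of content-based extraction (one of the methods listed above), a frame can be emitted as a key frame whenever it differs sufficiently from the last key frame. Real implementations operate on decoded images; the flat intensity lists and the threshold here are simplifying assumptions:

```python
# Hypothetical sketch of content-based key-frame extraction: keep a frame
# whenever its mean absolute difference from the previous key frame exceeds
# a threshold. Frames are modeled as flat lists of pixel intensities.

def extract_key_frames(frames, threshold=30.0):
    """frames: list of equal-length intensity lists. Returns key-frame indices."""
    if not frames:
        return []
    keys = [0]  # the first frame opens the first shot
    for i in range(1, len(frames)):
        last = frames[keys[-1]]
        diff = sum(abs(a - b) for a, b in zip(frames[i], last)) / len(last)
        if diff > threshold:
            keys.append(i)
    return keys

video = [[0] * 4, [2] * 4, [100] * 4, [101] * 4]
print(extract_key_frames(video))  # [0, 2]
```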
Step 402: perform matching-degree detection between the key frame sequence and a pre-obtained reference key frame sequence to obtain the to-be-detected key frame sequence and a matching reference key frame sequence. Here, a to-be-detected key frame in the to-be-detected key frame sequence is a frame of the key frame sequence that matches one of the reference key frames, and a matching reference key frame in the matching reference key frame sequence is a frame of the reference key frame sequence that matches one of the to-be-detected key frames.
For example, suppose the generated key frame sequence is {A1, A2, ..., An} and the pre-obtained reference key frame sequence is {B1, B2, ..., Bm}. Matching-degree detection is performed between each key frame and each reference frame in the reference key frame sequence. If A1 matches B2, A4 matches B7, and A9 matches B3, then the generated to-be-detected key frame sequence may, for example, be {A1, A4, A9}, and the matching reference key frame sequence is {B2, B7, B3}.
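The pairing in this example can be sketched as follows; the match predicate stands in for the trained matching-degree detection model, and the frame names are taken from the example above:

```python
# Sketch of step 402: walk the candidate key frames and, for each one,
# record the first reference frame the predicate accepts. The lookup table
# below is a stand-in for the matching-degree detection model.

def match_sequences(candidates, references, matches):
    """matches(c, r) -> bool. Returns (to_be_detected_seq, matching_ref_seq)."""
    detected, matched = [], []
    for c in candidates:
        for r in references:
            if matches(c, r):
                detected.append(c)
                matched.append(r)
                break  # at most one matching reference frame per candidate
    return detected, matched

pairs = {"A1": "B2", "A4": "B7", "A9": "B3"}  # the example's matches
det, ref = match_sequences(
    ["A1", "A2", "A4", "A9"], ["B1", "B2", "B3", "B7"],
    lambda c, r: pairs.get(c) == r)
print(det, ref)  # ['A1', 'A4', 'A9'] ['B2', 'B7', 'B3']
```

Note that `ref` follows the order of the matched candidates, not the order of the reference sequence, mirroring the ordering remark in the text.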
As can be seen from the above example, the order of the frames in the matching reference key frame sequence follows the order, in the to-be-detected key frame sequence, of the to-be-detected key frames corresponding to those matching reference key frames. In some application scenarios, the order so determined need not coincide with the order in which these matching reference key frames appear in the reference key frame sequence or in the reference video.
In some optional implementations, performing matching-degree detection between the key frame sequence and the pre-obtained reference key frame sequence may further include: for each key frame in the key frame sequence, inputting the key frame together with a reference key frame of the reference key frame sequence into a pre-trained matching-degree detection model to determine the matching degree between the key frame and the reference key frame.
Again take the generated key frame sequence {A1, A2, ..., An} and the pre-obtained reference key frame sequence {B1, B2, ..., Bm} as an example. First, the pairs A1 and B1, A1 and B2, ..., A1 and Bm may be input into the pre-trained matching-degree detection model in turn, to judge whether any frame in the reference key frame sequence matches key frame A1. Similarly, it can be detected whether any reference key frame matches each of the other key frames A2, ..., An of the key frame sequence.
In addition, herein, matching degree detection model can be convolutional neural networks model.Using training sample to initial volume
Product neural network is trained, and adjusts the parameter in the initial convolutional neural networks based on pre-set loss function, most
It can train to obtain matching degree detection model eventually.
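The pairwise search described above can be sketched as follows. Here a simple cosine similarity over frame feature vectors stands in for the pre-trained convolutional matching-degree model, whose architecture the text does not specify; the threshold value is likewise an illustrative assumption:

```python
import math

def matching_degree(feat_a, feat_b):
    """Stand-in for the trained matching-degree model: cosine similarity
    between two frame feature vectors (a real system would use the CNN)."""
    dot = sum(x * y for x, y in zip(feat_a, feat_b))
    norm = (math.sqrt(sum(x * x for x in feat_a))
            * math.sqrt(sum(y * y for y in feat_b)))
    return dot / norm if norm else 0.0

def find_match(key_feat, ref_feats, threshold=0.9):
    """Compare one detected key frame against every reference key frame
    (A1 vs B1, A1 vs B2, ..., A1 vs Bm) and return the index of the best
    reference frame whose matching degree exceeds the threshold, else None."""
    best_idx, best_score = None, threshold
    for idx, ref_feat in enumerate(ref_feats):
        score = matching_degree(key_feat, ref_feat)
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx

ref_feats = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]   # B1, B2, B3 (toy features)
print(find_match([0.1, 0.99], ref_feats))           # closest to B2 -> 1
```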
Step 403, the keyframe sequence to be detected is obtained, wherein each key frame in the keyframe sequence to be detected matches one of the key frames in the reference video obtained in advance.
Step 404, the timestamp sequence to be detected of the keyframe sequence to be detected is generated, wherein each timestamp in the timestamp sequence to be detected is the timestamp of one of the key frames to be detected in the keyframe sequence to be detected.
Step 405, the reference timestamp sequence is obtained, wherein the reference timestamp sequence consists of the timestamps of those key frames of the reference video that each match one of the key frames to be detected in the keyframe sequence to be detected. Here, a key frame matching one of the key frames to be detected is exactly a matched reference key frame generated in step 402.
Step 406, the degree of association between the video to be detected and the reference video is determined based on the correspondence between the timestamp sequence to be detected and the acquired reference timestamp sequence.
Steps 403 to 406 above may be performed in a manner similar to steps 201 to 204 of the embodiment shown in Fig. 2, and the details are not repeated here.
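Steps 404 and 405 can be sketched together: given the matched frame pairs, each timestamp sequence simply collects the timestamps from one side of the pairing. The frame records and values below are illustrative, not taken from the patent:

```python
def timestamp_sequences(matched_pairs):
    """matched_pairs: list of (detected_frame, reference_frame) tuples,
    where each frame is a (label, timestamp_in_seconds) pair, ordered as
    in the keyframe sequence to be detected.  Returns the timestamp
    sequence to be detected (step 404) and the reference timestamp
    sequence (step 405)."""
    detected_ts = [d_ts for (_, d_ts), _ in matched_pairs]
    reference_ts = [r_ts for _, (_, r_ts) in matched_pairs]
    return detected_ts, reference_ts

# Toy pairing continuing the A/B example: A1-B2, A4-B7, A9-B3.
pairs = [(("A1", 0.4), ("B2", 3.1)),
         (("A4", 5.0), ("B7", 12.6)),
         (("A9", 9.2), ("B3", 4.0))]
print(timestamp_sequences(pairs))   # ([0.4, 5.0, 9.2], [3.1, 12.6, 4.0])
```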
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the video detection method of this embodiment further highlights the process in which the executing body performs keyframe extraction on the video to be detected and performs matching-degree detection between the extracted keyframe sequence and the reference keyframe sequence, so as to obtain the keyframe sequence to be detected and the matched reference keyframe sequence.
In some optional implementations of the embodiments of the present application, when determining the degree of association between the video to be detected and the reference video based on the correspondence between the timestamp sequence to be detected and the acquired reference timestamp sequence, the correspondence between the timestamp sequence to be detected and the reference timestamp sequence may first be generated; the generated correspondence is then input into a pre-trained association-degree model, so as to determine the degree of association between the video to be detected and the reference video.
In some application scenarios, as shown in Fig. 5, the association-degree model may be trained, for example, in the following manner:
Step 501, an initial association-degree model is established. Here, if the association-degree model is a convolutional neural network model, an initial association-degree model comprising multiple convolutional layers may first be established, and each parameter in the initial association-degree model is assigned an initial value.
Step 502, a training sample set is obtained, wherein each training sample in the set includes a pre-generated correspondence between two videos, together with an association-degree label marking the degree of association between the two videos.
Optionally, the training sample set may include positive samples and negative samples. Here, if the final purpose is to judge whether two videos contain the same video clip, a positive sample in the training sample set may be, for example, a correspondence between two videos that contain the same video clip, and a negative sample may be, for example, a correspondence between two videos that do not contain the same video clip. Correspondingly, the label of a positive sample may indicate that the two videos in that sample contain the same video clip (for example, represented by the digit "1"), and the label of a negative sample may indicate that the two videos in that sample do not contain the same video clip (for example, represented by the digit "0").
It should be noted that the correspondence between two videos (for example, video A and video B) may be represented in the manner described above. That is, the timestamp of each key frame in video A that matches some key frame (for example, B1) of video B is taken as the abscissa, and the timestamp of the matching key frame in video B is taken as the ordinate, yielding the positions and distribution of a set of coordinate points.
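This scatter representation can be sketched as building one coordinate point per matched pair, optionally rasterized onto a small grid of the kind a convolutional model could consume. The grid size and time range below are illustrative assumptions, not values from the patent:

```python
def correspondence_points(detected_ts, reference_ts):
    """One (x, y) point per matched pair: x = timestamp in video A,
    y = timestamp of the matching key frame in video B."""
    return list(zip(detected_ts, reference_ts))

def rasterize(points, size=8, max_t=16.0):
    """Quantize the points onto a size x size binary grid, a form that a
    convolutional association-degree model could take as input."""
    grid = [[0] * size for _ in range(size)]
    for x, y in points:
        i = min(int(y / max_t * size), size - 1)   # row <- video B time
        j = min(int(x / max_t * size), size - 1)   # col <- video A time
        grid[i][j] = 1
    return grid

pts = correspondence_points([0.4, 5.0, 9.2], [3.1, 12.6, 4.0])
grid = rasterize(pts)   # 8x8 grid with one cell set per matched pair
```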
Referring to shown in Fig. 6 A- Fig. 6 C, respectively in training sample, the schematic diagram of the corresponding relationship of two videos.
It is not difficult to find out that, the sequencing that the frame to match occurs in two videos is very close from Fig. 6 A and Fig. 6 B.
Thus, judging whether two videos include that can show Fig. 6 A and Fig. 6 B in this application scenarios of identical video clip
Corresponding relationship be labeled as positive sample.
And it is not identical from can be seen that the sequencing that the frame to match occurs in two videos in 6C.Thus, sentencing
Whether disconnected two videos include that can be labeled as the corresponding relationship shown in figure C in this application scenarios of identical video clip
Negative sample.
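The visual regularity of Fig. 6A/6B versus Fig. 6C can be captured numerically by an order-consistency score over the matched timestamp pairs. This hand-crafted score is only a stand-in for what the trained association-degree model learns from the correspondence; the sample points below are illustrative:

```python
from itertools import combinations

def order_consistency(points):
    """Fraction of point pairs whose timestamps are ordered the same way
    in both videos: 1.0 means the matching frames appear in identical
    order (as in Fig. 6A/6B); low values indicate a shuffled order
    (as in Fig. 6C)."""
    pairs = list(combinations(points, 2))
    if not pairs:
        return 1.0
    concordant = sum(
        1 for (x1, y1), (x2, y2) in pairs if (x1 - x2) * (y1 - y2) > 0
    )
    return concordant / len(pairs)

same_order = [(1, 2), (3, 5), (6, 8), (9, 11)]   # Fig. 6A/6B-like points
shuffled   = [(1, 11), (3, 2), (6, 9), (9, 3)]   # Fig. 6C-like points
print(order_consistency(same_order))  # 1.0
print(order_consistency(shuffled))    # low (here 2 of 6 pairs concordant)
```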
Step 503, the training sample set is input into the initial association-degree model, and the initial association-degree model is trained based on a preset loss function, obtaining the trained association-degree model.
After the training sample set is input into the initial association-degree model, the model can output, for each training sample, the probability (that is, the classification information) that the two videos contain the same video clip.
By inputting the classification information and the label information into the preset loss function, a loss value can be computed. Back-propagating this loss value through the convolutional neural network adjusts each parameter of the network.
Thus, by cyclically inputting the training sample set into the convolutional neural network, computing the loss value, and back-propagating it, the parameters of the network are continuously adjusted until a training completion condition is reached (for example, the loss value falls below a preset loss threshold).
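As a minimal, framework-free stand-in for this training loop, the sketch below fits a one-parameter logistic classifier on a single hand-crafted feature (the order-consistency of the correspondence) with a binary cross-entropy loss and plain gradient descent. The real model in the text is a convolutional network over the correspondence itself, so everything here, including the toy data, is an illustrative simplification:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, lr=0.5, epochs=2000):
    """samples: list of (feature, label) pairs, label 1 for 'contains the
    same clip' (positive sample), 0 otherwise.  Minimizes binary
    cross-entropy by gradient descent and returns the learned (w, b)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        dw = db = 0.0
        for x, y in samples:
            p = sigmoid(w * x + b)   # predicted probability
            dw += (p - y) * x        # d(BCE)/dw for this sample
            db += (p - y)            # d(BCE)/db for this sample
        w -= lr * dw / len(samples)
        b -= lr * db / len(samples)
    return w, b

# Toy data: positives have high order-consistency, negatives low.
data = [(1.0, 1), (0.95, 1), (0.9, 1), (0.3, 0), (0.2, 0), (0.1, 0)]
w, b = train(data)
predict = lambda x: sigmoid(w * x + b)   # probability of 'same clip'
```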
With further reference to Fig. 7, as an implementation of the methods shown in the above figures, the present application provides an embodiment of a video detection apparatus. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus can be applied to various electronic devices.
As shown in Fig. 7, the video detection apparatus of this embodiment includes a keyframe sequence acquiring unit 701, a timestamp generation unit 702, a reference timestamp acquiring unit 703, and a determination unit 704.
The keyframe sequence acquiring unit 701 may be configured to obtain a keyframe sequence to be detected, wherein each key frame in the keyframe sequence to be detected matches one of the key frames in the reference video obtained in advance.
The timestamp generation unit 702 may be configured to generate the timestamp sequence to be detected of the keyframe sequence to be detected, wherein each timestamp in the timestamp sequence to be detected is the timestamp of one of the key frames to be detected in the keyframe sequence to be detected.
The reference timestamp acquiring unit 703 may be configured to obtain the reference timestamp sequence, wherein the reference timestamp sequence consists of the timestamps of those key frames of the reference video that each match one of the key frames to be detected in the keyframe sequence to be detected.
The determination unit 704 may be configured to determine the degree of association between the video to be detected and the reference video based on the correspondence between the timestamp sequence to be detected and the acquired reference timestamp sequence.
In some optional implementations, the video detection apparatus may further include an extraction unit (not shown) and a matching-degree detection unit (not shown).
In these optional implementations, the extraction unit may be configured to extract key frames of the video to be detected and generate a keyframe sequence.
The matching-degree detection unit may be configured to perform matching-degree detection between the keyframe sequence and the reference keyframe sequence obtained in advance, obtaining the keyframe sequence to be detected and the matched reference keyframe sequence.
Here, a key frame to be detected in the keyframe sequence to be detected is a frame in the keyframe sequence that matches one of the reference key frames in the reference keyframe sequence; a matched reference key frame in the matched reference keyframe sequence is a frame in the reference keyframe sequence that matches one of the key frames to be detected.
In some optional implementations, the matching-degree detection unit may be further configured to: for each key frame in the keyframe sequence, input the key frame together with a reference key frame from the reference keyframe sequence into a pre-trained matching-degree detection model, so as to determine the matching degree between the key frame and the reference key frame.
In some optional implementations, the determination unit 704 may be further configured to: generate the correspondence between the timestamp sequence to be detected and the reference timestamp sequence; and input the generated correspondence into a pre-trained association-degree model, so as to determine the degree of association between the video to be detected and the reference video.
In some optional implementations, the association-degree model may be trained in the following manner: establishing an initial association-degree model; obtaining a training sample set, wherein each training sample in the set includes a pre-generated correspondence between two videos and an association-degree label marking the degree of association between the two videos; and inputting the training sample set into the initial association-degree model and training it based on a preset loss function, obtaining the trained association-degree model.
Referring now to Fig. 8, it shows a structural schematic diagram of a computer system 800 of an electronic device suitable for implementing the video detection method of the embodiments of the present application. The electronic device shown in Fig. 8 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present application.
As shown in Fig. 8, the computer system 800 includes a central processing unit (CPU) 801, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 806 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the system 800. The CPU 801, the ROM 802, and the RAM 803 are connected to one another via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: a storage section 806 including a hard disk and the like; and a communication section 807 including a network interface card such as a LAN card or a modem. The communication section 807 performs communication processing via a network such as the Internet. A drive 808 is also connected to the I/O interface 805 as needed. A removable medium 809, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the drive 808 as needed, so that a computer program read therefrom can be installed into the storage section 806 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 807, and/or installed from the removable medium 809. When the computer program is executed by the central processing unit (CPU) 801, the above-described functions defined in the methods of the present application are performed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by, or in connection with, an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
Computer program code for carrying out the operations of the present application may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor; for example, they may be described as: a processor comprising a keyframe sequence acquiring unit, a timestamp generation unit, a reference timestamp acquiring unit, and a determination unit. The names of these units do not, under certain conditions, constitute a limitation on the units themselves; for example, the keyframe sequence acquiring unit may also be described as "a unit for obtaining a keyframe sequence to be detected".
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The above computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: obtain a keyframe sequence to be detected, wherein each key frame in the keyframe sequence to be detected matches one of the key frames in the reference video obtained in advance; generate the timestamp sequence to be detected of the keyframe sequence to be detected, wherein each timestamp in the timestamp sequence to be detected is the timestamp of one of the key frames to be detected in the keyframe sequence to be detected; obtain the reference timestamp sequence, wherein the reference timestamp sequence consists of the timestamps of those key frames of the reference video that each match one of the key frames to be detected in the keyframe sequence to be detected; and determine the degree of association between the video to be detected and the reference video based on the correspondence between the timestamp sequence to be detected and the acquired reference timestamp sequence.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalents, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed herein.
Claims (12)
1. A video detection method, comprising:
obtaining a keyframe sequence to be detected, wherein each key frame in the keyframe sequence to be detected matches one of the key frames in a reference video obtained in advance;
generating a timestamp sequence to be detected of the keyframe sequence to be detected, wherein each timestamp in the timestamp sequence to be detected is the timestamp of one of the key frames to be detected in the keyframe sequence to be detected;
obtaining a reference timestamp sequence, wherein the reference timestamp sequence consists of the timestamps of those key frames of the reference video that each match one of the key frames to be detected in the keyframe sequence to be detected; and
determining the degree of association between the video to be detected and the reference video based on the correspondence between the timestamp sequence to be detected and the acquired reference timestamp sequence.
2. The method according to claim 1, wherein, before obtaining the keyframe sequence to be detected, the method further comprises:
extracting key frames of the video to be detected to generate a keyframe sequence; and
performing matching-degree detection between the keyframe sequence and a reference keyframe sequence obtained in advance, obtaining the keyframe sequence to be detected and a matched reference keyframe sequence;
wherein:
a key frame to be detected in the keyframe sequence to be detected is a frame in the keyframe sequence that matches one of the key frames in the reference keyframe sequence; and
a matched reference key frame in the matched reference keyframe sequence is a frame in the reference keyframe sequence that matches one of the key frames to be detected.
3. The method according to claim 2, wherein performing matching-degree detection between the keyframe sequence and the reference keyframe sequence obtained in advance comprises:
for each key frame in the keyframe sequence, inputting the key frame and a reference key frame in the reference keyframe sequence into a pre-trained matching-degree detection model, to determine the matching degree between the key frame and the reference key frame.
4. The method according to one of claims 1-3, wherein determining the degree of association between the video to be detected and the reference video based on the correspondence between the timestamp sequence to be detected and the acquired reference timestamp sequence comprises:
generating the correspondence between the timestamp sequence to be detected and the reference timestamp sequence; and
inputting the generated correspondence into a pre-trained association-degree model, to determine the degree of association between the video to be detected and the reference video.
5. The method according to claim 4, wherein the association-degree model is trained in the following manner:
establishing an initial association-degree model;
obtaining a training sample set, wherein each training sample in the training sample set includes a pre-generated correspondence between two videos and an association-degree label marking the degree of association between the two videos; and
inputting the training sample set into the initial association-degree model, and training the initial association-degree model based on a preset loss function, obtaining the trained association-degree model.
6. A video detection apparatus, comprising:
a keyframe sequence acquiring unit configured to obtain a keyframe sequence to be detected, wherein each key frame in the keyframe sequence to be detected matches one of the key frames in a reference video obtained in advance;
a timestamp generation unit configured to generate a timestamp sequence to be detected of the keyframe sequence to be detected, wherein each timestamp in the timestamp sequence to be detected is the timestamp of one of the key frames to be detected in the keyframe sequence to be detected;
a reference timestamp acquiring unit configured to obtain a reference timestamp sequence, wherein the reference timestamp sequence consists of the timestamps of those key frames of the reference video that each match one of the key frames to be detected in the keyframe sequence to be detected; and
a determination unit configured to determine the degree of association between the video to be detected and the reference video based on the correspondence between the timestamp sequence to be detected and the acquired reference timestamp sequence.
7. The apparatus according to claim 6, wherein the apparatus further comprises:
an extraction unit configured to extract key frames of the video to be detected and generate a keyframe sequence; and
a matching-degree detection unit configured to perform matching-degree detection between the keyframe sequence and a reference keyframe sequence obtained in advance, obtaining the keyframe sequence to be detected and a matched reference keyframe sequence;
wherein:
a key frame to be detected in the keyframe sequence to be detected is a frame in the keyframe sequence that matches one of the key frames in the reference keyframe sequence; and
a matched reference key frame in the matched reference keyframe sequence is a frame in the reference keyframe sequence that matches one of the key frames to be detected.
8. The apparatus according to claim 7, wherein the matching-degree detection unit is further configured to:
for each key frame in the keyframe sequence, input the key frame and a reference key frame in the reference keyframe sequence into a pre-trained matching-degree detection model, to determine the matching degree between the key frame and the reference key frame.
9. The apparatus according to one of claims 6-8, wherein the determination unit is further configured to:
generate the correspondence between the timestamp sequence to be detected and the reference timestamp sequence; and
input the generated correspondence into a pre-trained association-degree model, to determine the degree of association between the video to be detected and the reference video.
10. The apparatus according to claim 9, wherein the association-degree model is trained in the following manner:
establishing an initial association-degree model;
obtaining a training sample set, wherein each training sample in the training sample set includes a pre-generated correspondence between two videos and an association-degree label marking the degree of association between the two videos; and
inputting the training sample set into the initial association-degree model, and training the initial association-degree model based on a preset loss function, obtaining the trained association-degree model.
11. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
12. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810879285.9A CN109165574B (en) | 2018-08-03 | 2018-08-03 | Video detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109165574A true CN109165574A (en) | 2019-01-08 |
CN109165574B CN109165574B (en) | 2022-09-16 |
Family
ID=64898830
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810879285.9A Active CN109165574B (en) | 2018-08-03 | 2018-08-03 | Video detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109165574B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111651636A (en) * | 2020-03-31 | 2020-09-11 | 易视腾科技股份有限公司 | Video similar segment searching method and device |
CN111935506A (en) * | 2020-08-19 | 2020-11-13 | 百度时代网络技术(北京)有限公司 | Method and apparatus for determining repeating video frames |
CN112218146A (en) * | 2020-10-10 | 2021-01-12 | 百度(中国)有限公司 | Video content distribution method and device, server and medium |
CN112866800A (en) * | 2020-12-31 | 2021-05-28 | 四川金熊猫新媒体有限公司 | Video content similarity detection method, device, equipment and storage medium |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101258753A (en) * | 2005-09-09 | 2008-09-03 | 汤姆森许可贸易公司 | Video water-mark detection |
CN102737135A (en) * | 2012-07-10 | 2012-10-17 | 北京大学 | Video copy detection method and system based on soft cascade model sensitive to deformation |
CN103514293A (en) * | 2013-10-09 | 2014-01-15 | 北京中科模识科技有限公司 | Method for video matching in video template library |
CN105468755A (en) * | 2015-11-27 | 2016-04-06 | 东方网力科技股份有限公司 | Video screening and storing method and device |
CN106162235A (en) * | 2016-08-17 | 2016-11-23 | 北京百度网讯科技有限公司 | Method and apparatus for Switch Video stream |
CN106649663A (en) * | 2016-12-14 | 2017-05-10 | 大连理工大学 | Video copy detection method based on compact video representation |
CN106778686A (en) * | 2017-01-12 | 2017-05-31 | 深圳职业技术学院 | A kind of copy video detecting method and system based on deep learning and graph theory |
CN107168619A (en) * | 2017-03-29 | 2017-09-15 | 腾讯科技(深圳)有限公司 | User-generated content treating method and apparatus |
CN107180056A (en) * | 2016-03-11 | 2017-09-19 | 阿里巴巴集团控股有限公司 | The matching process and device of fragment in video |
US20180084310A1 (en) * | 2016-09-21 | 2018-03-22 | GumGum, Inc. | Augmenting video data to present real-time metrics |
US20180089203A1 (en) * | 2016-09-23 | 2018-03-29 | Adobe Systems Incorporated | Providing relevant video scenes in response to a video search query |
CN108228835A (en) * | 2018-01-04 | 2018-06-29 | 百度在线网络技术(北京)有限公司 | For handling the method and apparatus of video |
CN108289248A (en) * | 2018-01-18 | 2018-07-17 | 福州瑞芯微电子股份有限公司 | A kind of deep learning video encoding/decoding method and device based on content forecast |
CN108337532A (en) * | 2018-02-13 | 2018-07-27 | 腾讯科技(深圳)有限公司 | Perform mask method, video broadcasting method, the apparatus and system of segment |
Non-Patent Citations (8)
Title |
---|
MENGLIN JIANG et al.: "Video Copy Detection Using a Soft Cascade of Multimodal Features", 2012 IEEE International Conference on Multimedia and Expo * |
ZHU LIU et al.: "Effective and scalable video copy detection", MIR '10: Proceedings of the International Conference on Multimedia Information Retrieval * |
LIU Jian: "Deep learning-based detection and recognition of objects in source-tracing video", China Masters' Theses Full-text Database, Information Science and Technology * |
LIN Ying et al.: "Video copy detection based on multiple combined features", Journal of Image and Graphics * |
WANG Lezi: "Research on content-similarity-based retrieval of massive audio and video data", China Masters' Theses Full-text Database, Information Science and Technology * |
WANG Xiaolin: "Research and implementation of content-based video information retrieval technology", China Masters' Theses Full-text Database, Information Science and Technology * |
WANG Jing et al.: "Video copy detection fusing local and global features", Journal of Tsinghua University (Science and Technology) * |
GU Jiawei et al.: "A survey of video copy detection methods", Journal of Computer Research and Development * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111651636A (en) * | 2020-03-31 | 2020-09-11 | 易视腾科技股份有限公司 | Video similar segment searching method and device |
CN111651636B (en) * | 2020-03-31 | 2023-11-24 | 易视腾科技股份有限公司 | Video similar segment searching method and device |
CN111935506A (en) * | 2020-08-19 | 2020-11-13 | 百度时代网络技术(北京)有限公司 | Method and apparatus for determining repeated video frames |
CN112218146A (en) * | 2020-10-10 | 2021-01-12 | 百度(中国)有限公司 | Video content distribution method and device, server and medium |
CN112218146B (en) * | 2020-10-10 | 2023-02-24 | 百度(中国)有限公司 | Video content distribution method and device, server and medium |
CN112866800A (en) * | 2020-12-31 | 2021-05-28 | 四川金熊猫新媒体有限公司 | Video content similarity detection method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109165574B (en) | 2022-09-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108427939B (en) | Model generation method and device | |
CN109165574A (en) | Video detection method and device | |
CN108154196B (en) | Method and apparatus for outputting images | |
CN109344908A (en) | Method and apparatus for generating model | |
CN108446387A (en) | Method and apparatus for updating face registration library | |
CN108830235A (en) | Method and apparatus for generating information | |
CN107766940A (en) | Method and apparatus for generating a model | |
CN108038880A (en) | Method and apparatus for processing images | |
CN107919129A (en) | Method and apparatus for controlling the page | |
CN108446651A (en) | Face identification method and device | |
CN109492128A (en) | Method and apparatus for generating model | |
CN109376267A (en) | Method and apparatus for generating model | |
CN109446990A (en) | Method and apparatus for generating information | |
CN109359676A (en) | Method and apparatus for generating vehicle damage information | |
CN107393541A (en) | Information authentication method and apparatus | |
CN109145828A (en) | Method and apparatus for generating video classification detection model | |
CN108229485A (en) | Method and apparatus for testing a user interface | |
CN108171211A (en) | Liveness detection method and device | |
CN109447246A (en) | Method and apparatus for generating model | |
CN109086780A (en) | Method and apparatus for detecting burrs on electrode sheets | |
CN107943877A (en) | Method and device for generating multimedia content to be played | |
CN107958247A (en) | Method and apparatus for facial image identification | |
CN108389172A (en) | Method and apparatus for generating information | |
CN112231663A (en) | Data acquisition method, device, equipment and storage medium combining RPA and AI | |
CN107729928A (en) | Information acquisition method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||