CN108924586A - Video frame detection method, apparatus, and electronic device - Google Patents

Video frame detection method, apparatus, and electronic device

Info

Publication number
CN108924586A
CN108924586A (application CN201810638981.0A)
Authority
CN
China
Prior art keywords
detected
video
video frame
frame
segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810638981.0A
Other languages
Chinese (zh)
Other versions
CN108924586B (en)
Inventor
李冠楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201810638981.0A priority Critical patent/CN108924586B/en
Publication of CN108924586A publication Critical patent/CN108924586A/en
Application granted granted Critical
Publication of CN108924586B publication Critical patent/CN108924586B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Health & Medical Sciences (AREA)
  • Television Signal Processing For Recording (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

Embodiments of the present invention provide a video frame detection method, apparatus, and electronic device, belonging to the field of video detection technology. The method includes: obtaining a video frame to be detected; obtaining hash features of the video frame to be detected; matching the hash features of the video frame to be detected against each hash feature sample pre-stored in a database, where the hash feature samples stored in the database are hash feature samples of video heads and/or tails of a given style; and, if the hash features of the video frame to be detected match one of the hash feature samples pre-stored in the database, determining the video frame to be detected to be a head frame or a tail frame of the video to be detected. The present invention improves the efficiency of detecting the head and/or tail of a video file.

Description

Video frame detection method, apparatus, and electronic device
Technical field
The present invention relates to the field of video detection technology, and in particular to a video frame detection method, apparatus, and electronic device.
Background art
Currently, user-uploaded self-made videos often contain head/tail effects automatically generated by certain software. Such head/tail content may occupy up to 10% of a short video's length, and without an option to skip the head/tail automatically, it degrades the user's viewing experience.
Therefore, to improve user experience, the video server side can perform head/tail detection on user-uploaded self-made videos. Since head detection and tail detection follow the same principle, both are referred to below as detection of a target video segment.
At present, target video segments are detected mainly by manually annotating head/tail points, that is, manually marking the start frame and end frame of the head/tail. However, because the number of short videos is enormous, manual annotation of head/tail points consumes excessive resources, resulting in low detection efficiency.
Summary of the invention
The purpose of embodiments of the present invention is to provide a video frame detection method, apparatus, and electronic device, so as to improve the efficiency of detecting the head and/or tail of a video file. The specific technical solution is as follows.
In a first aspect, a video frame detection method is provided. The method includes:
obtaining a video frame to be detected;
obtaining hash features of the video frame to be detected;
matching the hash features of the video frame to be detected against each hash feature sample pre-stored in a database, where the hash feature samples stored in the database are hash feature samples of video heads and/or tails of a given style; and
if the hash features of the video frame to be detected match one of the hash feature samples pre-stored in the database, determining the video frame to be detected to be a head frame or a tail frame of the video to be detected.
Optionally, the hash feature samples include the perceptual hash (pHash) feature and the average hash (aHash) feature of each sample video frame;
and the step of obtaining the hash features of the video frame to be detected includes:
calculating the perceptual hash feature and the average hash feature of the video frame to be detected.
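As a rough illustration of the two fingerprints named above, the following is a generic aHash/pHash sketch in pure Python. The patent does not disclose its exact hash construction; the 8x8 aHash grid, the 32x32 DCT, and the 8x8 low-frequency window are common conventions, not values taken from the text. A real pipeline would first decode the frame and downscale it to these sizes.

```python
import math

def average_hash(gray8x8):
    """aHash: threshold each pixel of an 8x8 grayscale frame at the mean,
    yielding a 64-bit fingerprint."""
    pixels = [p for row in gray8x8 for p in row]
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def dct_2d(block):
    """Naive 2D DCT-II with a precomputed cosine table; adequate for 32x32."""
    n = len(block)
    cos = [[math.cos((2 * x + 1) * u * math.pi / (2 * n)) for x in range(n)]
           for u in range(n)]
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            out[u][v] = sum(block[x][y] * cos[u][x] * cos[v][y]
                            for x in range(n) for y in range(n))
    return out

def perceptual_hash(gray32x32):
    """pHash: DCT the frame, keep the 8x8 low-frequency corner, and
    threshold at the mean of those coefficients (DC term excluded)."""
    coeffs = dct_2d(gray32x32)
    low = [coeffs[u][v] for u in range(8) for v in range(8)]
    mean = (sum(low) - low[0]) / (len(low) - 1)  # exclude DC term
    return [1 if c > mean else 0 for c in low]
```

aHash captures coarse brightness layout and is cheap; pHash captures low-frequency structure and is more robust to re-encoding, which is presumably why the method uses both.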
The step of matching the hash features of the video frame to be detected against each hash feature sample pre-stored in the database then includes:
computing a first distance between the perceptual hash feature of the video frame to be detected and each perceptual hash feature sample of heads and/or tails of a given style pre-stored in the database;
computing a second distance between the average hash feature of the video frame to be detected and each average hash feature sample of heads and/or tails of the given style pre-stored in the database;
if, for some pre-stored hash feature sample, the first distance is less than a perceptual hash threshold and the second distance is less than an average hash threshold, determining that the hash features of the video frame to be detected match that sample, so that the match for the video frame to be detected succeeds;
or, if the hash features of the video frame to be detected match none of the pre-stored hash feature samples, the match for the video frame to be detected fails.
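The dual-threshold match described above can be sketched as follows. Hamming distance is the usual metric for 64-bit hash fingerprints; the threshold values below are illustrative placeholders, not numbers from the patent.

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit lists."""
    return sum(x != y for x, y in zip(a, b))

def matches(frame_feat, sample_feat, phash_thresh=10, ahash_thresh=12):
    """A frame matches a stored sample only when BOTH the perceptual-hash
    distance and the average-hash distance fall below their thresholds
    (the "first distance" and "second distance" of the method)."""
    return (hamming(frame_feat["phash"], sample_feat["phash"]) < phash_thresh
            and hamming(frame_feat["ahash"], sample_feat["ahash"]) < ahash_thresh)
```

Requiring both distances to pass reduces false positives relative to either hash alone.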
Optionally, after the step of determining the video frame to be detected to be a head frame or a tail frame of the video to be detected, the method further includes:
selecting an undetected video frame as the video frame to be detected, and returning to the step of obtaining the hash features of the video frame to be detected.
Optionally, the step of obtaining a video frame to be detected includes: taking all video frames of the video to be detected as undetected video frames, and obtaining one frame from the undetected video frames in playback order as the video frame to be detected.
Optionally, when head detection is performed on the video to be detected, the step of obtaining a video frame to be detected includes:
determining the video frames within a first preset duration from the start of the video to be detected as a first detection range;
extracting key frames from the first detection range, determining them as first undetected video frames used for head detection, and obtaining one frame from the first undetected video frames in extraction order as a first video frame to be detected.
In this case, the step of matching the hash features of the video frame to be detected against each hash feature sample pre-stored in the database is: matching the hash features of the first video frame to be detected against each head hash feature sample pre-stored in the database;
and the step of selecting an undetected video frame as the video frame to be detected is: selecting one first undetected video frame in playback order as the first video frame to be detected.
When tail detection is performed on the video to be detected, the step of obtaining a video frame to be detected includes:
determining the video frames within a second preset duration before the end of the video to be detected as a second detection range;
extracting key frames from the second detection range, determining them as second undetected video frames used for tail detection, and obtaining one frame from the second undetected video frames in extraction order as a second video frame to be detected.
In this case, the step of matching the hash features of the video frame to be detected against each hash feature sample pre-stored in the database is: matching the hash features of the second video frame to be detected against each tail hash feature sample pre-stored in the database;
and the step of selecting an undetected video frame as the current video frame to be detected is: selecting one second undetected video frame in playback order as the second video frame to be detected.
Optionally, the step of extracting key frames from the first detection range and determining them as first undetected video frames used for head detection includes:
extracting multiple first undetected video frames from the first detection range at a first preset interval;
and the step of extracting key frames from the second detection range and determining them as second undetected video frames used for tail detection includes:
extracting multiple second undetected video frames from the second detection range at a second preset interval.
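The two detection ranges and the interval sampling can be sketched together. Here `window_s` and `step` stand in for the "preset duration" and "preset interval", whose actual values the patent leaves open, and frames are identified by index rather than decoded.

```python
def candidate_frames(num_frames, fps, window_s=60, step=5):
    """Head candidates: every `step`-th frame index in the first `window_s`
    seconds of the video; tail candidates: the mirror image at the end."""
    w = min(int(window_s * fps), num_frames)       # detection range, in frames
    head = list(range(0, w, step))                 # first detection range
    tail = list(range(num_frames - w, num_frames, step))  # second range
    return head, tail
```

Restricting matching to these windows is what keeps the per-video cost low: only a bounded number of frames is ever hashed, regardless of video length.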
Optionally, before selecting one first undetected video frame in playback order as the first video frame to be detected, the method further includes:
judging whether the matching result of the previously matched first video frame to be detected is the same as the matching result of the currently matched first video frame to be detected;
if not, determining the undetected video frames between the previously matched first video frame to be detected and the currently matched first video frame to be detected as first undetected video frames, and then executing the step of selecting one first undetected video frame in playback order as the first video frame to be detected;
or, if the results are the same, directly executing the step of selecting one first undetected video frame in playback order as the first video frame to be detected.
Likewise, before selecting one second undetected video frame in playback order as the second video frame to be detected, the method further includes:
judging whether the matching result of the previously matched second video frame to be detected is the same as the matching result of the currently matched second video frame to be detected;
if not, determining the undetected video frames between the previously matched second video frame to be detected and the currently matched second video frame to be detected as second undetected video frames, and then executing the step of selecting one second undetected video frame in playback order as the second video frame to be detected;
or, if the results are the same, directly executing the step of selecting one second undetected video frame in playback order as the second video frame to be detected.
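One reading of the step above is a coarse-to-fine refinement: the skipped frames between two sampled frames are promoted to candidates only where the two sampled frames disagree (one matched, the other did not), i.e. around a head/tail boundary. A sketch under that assumption, with frame indices standing in for the undetected frames between the two compared frames:

```python
def refine(sampled, results):
    """Given sampled frame indices and their match results (True/False),
    return the skipped indices lying between adjacent samples whose
    results differ, so they can be matched at full resolution."""
    extra = []
    for k in range(len(sampled) - 1):
        if results[k] != results[k + 1]:
            extra.extend(range(sampled[k] + 1, sampled[k + 1]))
    return extra
```

This confines the expensive dense matching to the neighbourhood of the transition, while runs of identical results are skipped entirely.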
Optionally, the method further includes: fusing the frames confirmed as head frames or tail frames of the video to be detected to obtain a target video segment.
Optionally, the step of fusing the frames confirmed as head frames or tail frames of the video to be detected to obtain a target video segment includes:
obtaining time information of each frame confirmed as a head frame or a tail frame of the video to be detected;
determining temporally continuous video frames as one matched sub-segment;
judging whether the time difference between adjacent matched sub-segments is less than or equal to a preset fusion time-difference threshold;
fusing adjacent sub-segments whose time difference is less than or equal to the preset fusion time-difference threshold into one matched segment;
and selecting one of the fused matched segments as the target video segment.
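The fusion procedure can be sketched as a single pass over the timestamps of confirmed frames. Here sub-segment formation and fusion are folded into one gap threshold (`fuse_gap`, an illustrative stand-in for the preset fusion time-difference threshold, in seconds):

```python
def fuse(matched_times, fuse_gap=2.0):
    """Group matched-frame timestamps into (start, end) segments:
    a new timestamp extends the current segment when its gap to the
    segment's end is <= fuse_gap, and opens a new segment otherwise."""
    segments = []
    for t in sorted(matched_times):
        if segments and t - segments[-1][1] <= fuse_gap:
            segments[-1][1] = t          # fuse into the current segment
        else:
            segments.append([t, t])      # start a new matched segment
    return [tuple(s) for s in segments]
```

Fusion absorbs small gaps caused by individual frames that failed to match (e.g. dissolves or overlays inside the head), so one head is not reported as several fragments.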
Optionally, the step of selecting one of the fused matched segments as the target video segment includes:
determining the first or the last matched segment as the target video segment.
Optionally, the step of selecting one of the fused matched segments as the target video segment includes:
calculating, for each matched segment, the proportion of frames confirmed as head frames or tail frames of the video to be detected;
determining the matched segment whose proportion is the largest and exceeds a preset proportion threshold as the current matched segment;
if the current matched segment contains head frames of the video to be detected, judging whether the next matched segment after the current matched segment satisfies a preset first splicing condition;
if the preset first splicing condition is satisfied, splicing the current matched segment with the next matched segment, determining the spliced segment as the current matched segment, and returning to the step of judging whether the next matched segment after the current matched segment satisfies the preset first splicing condition;
if the preset first splicing condition is not satisfied, determining the current matched segment as the target video segment;
if the current matched segment contains tail frames of the video to be detected, judging whether the previous matched segment before the current matched segment satisfies a preset second splicing condition;
if the preset second splicing condition is satisfied, splicing the current matched segment with the previous matched segment, determining the spliced segment as the current matched segment, and returning to the step of judging whether the previous matched segment before the current matched segment satisfies the preset second splicing condition;
if the preset second splicing condition is not satisfied, determining the current matched segment as the target video segment.
Optionally, the preset first splicing condition is: if the time difference between the current matched segment and the next matched segment is less than or equal to a preset first splicing threshold, and the proportion of frames of the next matched segment confirmed as head frames or tail frames of the video to be detected satisfies ratio >= max(α·ss_i_best, ss_t), then the current matched segment is spliced with the next matched segment;
and the preset second splicing condition is: if the time difference between the current matched segment and the previous matched segment is less than or equal to a preset second splicing threshold, and the proportion of frames of the previous matched segment confirmed as head frames or tail frames of the video to be detected satisfies ratio >= max(α·ss_i_best, ss_t), then the current matched segment is spliced with the previous matched segment;
where α is a preset target threshold, ss_i_best is the proportion of confirmed head or tail frames in the matched segment having the largest such proportion, and ss_t is the preset proportion threshold.
Optionally, if the calculated maximum proportion does not exceed the preset proportion threshold and the current matched segment contains head frames of the video to be detected, the first matched segment is determined as the target video segment;
and if the calculated maximum proportion does not exceed the preset proportion threshold and the current matched segment contains tail frames of the video to be detected, the last matched segment is determined as the target video segment.
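The splicing test, including the max(α·ss_i_best, ss_t) proportion condition, reduces to a small predicate. Every constant below (the gap threshold, α, and ss_t) is an illustrative placeholder, since the patent fixes none of them.

```python
def can_splice(gap, ratio, ss_best, gap_thresh=3.0, alpha=0.8, ss_t=0.5):
    """A neighbouring matched segment is spliced onto the current one when
    its time gap is small enough AND its confirmed-frame proportion
    reaches max(alpha * ss_best, ss_t), i.e. it is nearly as dense in
    confirmed frames as the best segment found so far."""
    return gap <= gap_thresh and ratio >= max(alpha * ss_best, ss_t)
```

The max(...) term adapts the bar to the video at hand: when the best segment is very dense, neighbours must also be dense; when it is weak, the absolute floor ss_t still applies.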
In a second aspect, a video frame detection apparatus is provided. The apparatus includes:
a first obtaining module, configured to obtain a video frame to be detected;
a second obtaining module, configured to obtain hash features of the video frame to be detected;
a matching module, configured to match the hash features of the video frame to be detected against each hash feature sample pre-stored in a database, where the hash feature samples stored in the database are hash feature samples of video heads and/or tails of a given style;
and a matching result determining module, configured to determine the video frame to be detected to be a head frame or a tail frame of the video to be detected if the hash features of the video frame to be detected match one of the hash feature samples pre-stored in the database.
Optionally, the hash feature samples include the perceptual hash feature and the average hash feature of each sample video frame;
and the second obtaining module is specifically configured to calculate the perceptual hash feature and the average hash feature of the video frame to be detected.
The matching module includes a first distance computing unit and a second distance computing unit.
The first distance computing unit is configured to compute a first distance between the perceptual hash feature of the video frame to be detected and each perceptual hash feature sample of heads and/or tails of a given style pre-stored in the database.
The second distance computing unit is configured to compute a second distance between the average hash feature of the video frame to be detected and each average hash feature sample of heads and/or tails of the given style pre-stored in the database.
The matching result determining module is specifically configured to: if, for some pre-stored hash feature sample, the first distance is less than a perceptual hash threshold and the second distance is less than an average hash threshold, determine that the hash features of the video frame to be detected match that sample, so that the match for the video frame to be detected succeeds;
or, if the hash features of the video frame to be detected match none of the pre-stored hash feature samples, the match for the video frame to be detected fails.
Optionally, the apparatus further includes a selecting module, configured to, after the video frame to be detected has been determined to be a head frame or a tail frame of the video to be detected, select an undetected video frame as the video frame to be detected and return to the step of obtaining the hash features of the video frame to be detected.
Optionally, the first obtaining module is specifically configured to take all video frames of the video to be detected as undetected video frames and obtain one frame from the undetected video frames in playback order as the current video frame to be detected.
Optionally, the first obtaining module includes a first detection unit and a second detection unit. The first detection unit includes a first range determining sub-unit and a first to-be-detected video frame determining sub-unit.
The first range determining sub-unit is configured to, when head detection is performed on the video to be detected, determine the video frames within a first preset duration from the start of the video to be detected as a first detection range.
The first to-be-detected video frame determining sub-unit is configured to extract key frames from the first detection range, determine them as first to-be-detected video frames used for head detection, and obtain one frame from the first to-be-detected video frames in extraction order as the current first video frame to be detected.
The matching module is configured to, when head detection is performed on the video to be detected, match the hash features of the current first video frame to be detected against each head hash feature sample pre-stored in the database.
The selecting module is configured to, when head detection is performed on the video to be detected, select one undetected first to-be-detected video frame in playback order as the current first video frame to be detected.
The second detection unit includes a second range determining sub-unit and a second to-be-detected video frame determining sub-unit.
The second range determining sub-unit is configured to, when tail detection is performed on the video to be detected, determine the video frames within a second preset duration before the end of the video to be detected as a second detection range.
The second to-be-detected video frame determining sub-unit is configured to extract key frames from the second detection range, determine them as second to-be-detected video frames used for tail detection, and obtain one frame from the second to-be-detected video frames in extraction order as the current second video frame to be detected.
The matching module is configured to, when tail detection is performed on the video to be detected, match the hash features of the current second video frame to be detected against each tail hash feature sample pre-stored in the database.
The selecting module is configured to, when tail detection is performed on the video to be detected, select one undetected second to-be-detected video frame in playback order as the current second video frame to be detected.
Optionally, the first to-be-detected video frame determining sub-unit is specifically configured to extract multiple first undetected video frames from the first detection range at a first preset interval;
and the second to-be-detected video frame determining sub-unit is specifically configured to extract multiple second undetected video frames from the second detection range at a second preset interval.
Optionally, the apparatus further includes:
a first matching result judging module, configured to, before the selecting module selects one first undetected video frame in playback order as the first video frame to be detected,
judge whether the matching result of the previously matched first video frame to be detected is the same as the matching result of the currently matched first video frame to be detected;
if not, determine the undetected video frames between the previously matched first video frame to be detected and the currently matched first video frame to be detected as first undetected video frames, and then execute the step of selecting one first undetected video frame in playback order as the first video frame to be detected;
or, if the results are the same, directly execute the step of selecting one first undetected video frame in playback order as the first video frame to be detected.
The apparatus further includes:
a second matching result judging module, configured to, before the selecting module selects one second undetected video frame in playback order as the second video frame to be detected,
judge whether the matching result of the previously matched second video frame to be detected is the same as the matching result of the currently matched second video frame to be detected;
if not, determine the undetected video frames between the previously matched second video frame to be detected and the currently matched second video frame to be detected as second undetected video frames, and then execute the step of selecting one second undetected video frame in playback order as the second video frame to be detected;
or, if the results are the same, directly execute the step of selecting one second undetected video frame in playback order as the second video frame to be detected.
Optionally, the fusion module includes: a time information obtaining unit, a matched sub-segment determining unit, a fusion time-difference judging unit, a matched segment fusing unit, and a target video segment selecting unit.
The time information obtaining unit is configured to obtain time information of each frame confirmed as a head frame or a tail frame of the video to be detected.
The matched sub-segment determining unit is configured to determine temporally continuous video frames as one matched sub-segment.
The fusion time-difference judging unit is configured to judge whether the time difference between adjacent matched sub-segments is less than or equal to a preset fusion time-difference threshold.
The matched segment fusing unit is configured to fuse adjacent sub-segments whose time difference is less than or equal to the preset fusion time-difference threshold into one matched segment.
The target video segment selecting unit is configured to select one of the fused matched segments as the target video segment.
Optionally, the target video segment selecting unit is specifically configured to determine the first or the last matched segment as the target video segment.
Optionally, the target video segment selecting unit includes: a proportion calculating sub-unit, a current matched segment determining sub-unit, a first splicing condition judging sub-unit, a first splicing sub-unit, a first target video segment determining sub-unit, a second splicing condition judging sub-unit, a second splicing sub-unit, and a second target video segment determining sub-unit.
The proportion calculating sub-unit is configured to calculate, for each matched segment, the proportion of frames confirmed as head frames or tail frames of the video to be detected.
The current matched segment determining sub-unit is configured to determine the matched segment whose proportion is the largest and exceeds a preset proportion threshold as the current matched segment.
The first splicing condition judging sub-unit is configured to, if the current matched segment contains head frames of the video to be detected, judge whether the next matched segment after the current matched segment satisfies a preset first splicing condition.
The first splicing sub-unit is configured to, if the preset first splicing condition is satisfied, splice the current matched segment with the next matched segment, determine the spliced segment as the current matched segment, and return to the step of judging whether the next matched segment after the current matched segment satisfies the preset first splicing condition.
The first target video segment determining sub-unit is configured to, if the preset first splicing condition is not satisfied, determine the current matched segment as the target video segment.
The second splicing condition judging sub-unit is configured to, if the current matched segment contains tail frames of the video to be detected, judge whether the previous matched segment before the current matched segment satisfies a preset second splicing condition.
The second splicing sub-unit is configured to, if the preset second splicing condition is satisfied, splice the current matched segment with the previous matched segment, determine the spliced segment as the current matched segment, and return to the step of judging whether the previous matched segment before the current matched segment satisfies the preset second splicing condition.
The second target video segment determining sub-unit is configured to, if the preset second splicing condition is not satisfied, determine the current matched segment as the target video segment.
Optionally, the preset first splicing condition is: if the time difference between the current matched segment and the next matched segment is less than or equal to a preset first splicing threshold, and the proportion of frames of the next matched segment confirmed as head frames or tail frames of the video to be detected satisfies ratio >= max(α·ss_i_best, ss_t), then the current matched segment is spliced with the next matched segment;
and the preset second splicing condition is: if the time difference between the current matched segment and the previous matched segment is less than or equal to a preset second splicing threshold, and the proportion of frames of the previous matched segment confirmed as head frames or tail frames of the video to be detected satisfies ratio >= max(α·ss_i_best, ss_t), then the current matched segment is spliced with the previous matched segment;
where α is a preset target threshold, ss_i_best is the proportion of confirmed head or tail frames in the matched segment having the largest such proportion, and ss_t is the preset proportion threshold.
Optionally, the current matching segment determining subunit is further configured to:
if the calculated maximum proportion is not greater than the preset ratio threshold and the current matching segment contains the head frame of the video to be detected, determine the first matching segment as the target video segment;
if the calculated maximum proportion is not greater than the preset ratio threshold and the current matching segment contains the tail frame of the video to be detected, determine the last matching segment as the target video segment.
In a third aspect, an electronic device is provided. The electronic device includes a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
the processor, when executing the program stored in the memory, implements the steps of any of the methods of claims 1-13.
An embodiment of the present invention further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of any of the above detection methods for video frames.
An embodiment of the present invention further provides a computer program product containing instructions which, when run on a computer, causes the computer to execute any of the above detection methods for video frames.
With the detection method and system for video frames provided by the embodiments of the present invention, the hash feature of the current video frame to be detected can be matched against each hash feature sample pre-stored in a database, and a video frame to be detected whose feature matches a hash feature sample in the database is determined as a head frame or tail frame of the video to be detected. In this way, each video frame of the video to be detected can be compared directly against the intro and/or outro feature samples in the database, and intro/outro content is detected based on image consistency. Since the embodiments of the present invention operate directly on the video frame information, the method is more robust to picture variation, more convenient to use, and more accurate.
Of course, any product or method implementing the present invention does not necessarily need to achieve all of the above advantages at the same time.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below.
Fig. 1 is a flowchart of a detection method for video frames according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a detection method for video frames according to an embodiment of the present invention;
Fig. 3 is a flowchart of the video frame fusion method in a detection method for video frames according to an embodiment of the present invention;
Fig. 4 is a flowchart of the target video segment selection method in a detection method for video frames according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a detection apparatus for video frames according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings.
In the prior art, target video segments are detected mainly by manually marking the intro/outro points, that is, manually marking the start frame and end frame of the intro/outro. However, since the number of short videos is huge, manually marking intro/outro points consumes excessive resources, resulting in low detection efficiency.
In view of the above, the present invention provides a detection method and apparatus for video frames, and an electronic device. The method can be applied to a server and includes: obtaining a video frame to be detected; obtaining the hash feature of the video frame to be detected; matching the hash feature of the video frame to be detected against each hash feature sample pre-stored in a database, where the hash feature samples stored in the database are the hash feature samples of the intros and outros of a pre-stored style; and if the hash feature of the video frame to be detected matches one of the pre-stored hash feature samples in the database, determining the video frame to be detected as a head frame or tail frame of the video to be detected. With the method provided by the embodiments of the present invention, target video segments can be identified based on image consistency, unaffected by video quality and video duration. Compared with the prior art of manually marking intro/outro points, the method provided by the embodiments of the present invention can improve the efficiency of detecting head frames and/or tail frames of a video file.
The above method is introduced below with specific embodiments.
Referring to Fig. 1, Fig. 1 is a flowchart of a detection method for video frames according to an embodiment of the present invention, including the following steps:
Step 101: obtain a video frame to be detected.
In one implementation, the video frame of the video to be detected is obtained by a server, and the video to be detected may be a user-uploaded self-made short video.
In one implementation, a frame may be obtained at random from the video to be detected as the video frame to be detected.
In one implementation, frames are obtained in play order: for intro detection, the video frames to be detected are chosen in order starting from the first frame; for outro detection, the video frames to be detected are chosen in reverse order starting from the last frame.
Step 102: obtain the hash feature of the video frame to be detected.
Specifically, the hash feature of the video to be detected includes a perceptual hash feature and an average hash feature, which may be computed by the server.
Step 103: match the hash feature of the video frame to be detected against each hash feature sample pre-stored in the database; the hash feature samples stored in the database are the hash feature samples of the intro and/or outro of a pre-stored style.
In one implementation, the database may contain only intro hash feature samples, only outro hash feature samples, or both. When the database contains only intro hash feature samples, only head frames whose features match the samples can be detected; when it contains only outro hash feature samples, only tail frames can be detected; if it contains both intro and outro hash feature samples, both head frames and tail frames matching the samples can be identified.
In one implementation, the style of the intro and/or outro of the video to be detected is first judged manually, and the video to be detected is then matched against the hash feature samples in the database corresponding to that style.
For example, if the intro content of a video to be detected is manually judged to belong to style 1, this video is matched against the samples in the database of style 1.
In one implementation, the hash feature samples stored in the database can be obtained as follows.
Several videos whose intros and/or outros belong to a specified style (style A) may be manually selected, and the frame numbers of the start frame and end frame of the intro and/or outro content in each video are marked, serving as the template data set and template point information of the method.
According to the template point information, for each frame in the template data set belonging to intro and/or outro content (i.e., each sample video frame), the perceptual hash feature and the average hash feature are computed separately to form a joint hash feature, and both are stored, so as to build the intro and/or outro feature database of style A.
The hash feature samples may be obtained as follows:
convert the input frame image into a grayscale image and compute a 64-dimensional perceptual hash feature;
convert the input frame image into a grayscale image and compute a 64-dimensional average hash feature;
concatenate the perceptual hash sample and the average hash sample into one feature vector, i.e., the joint hash feature, and store it in the database as an intro and/or outro feature sample of style A.
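As a sketch of this construction, the following pure-Python code computes a 64-bit average hash and a 64-bit DCT-based perceptual hash from a grayscale matrix and concatenates them into the 128-bit joint feature. The patent only specifies "64-dimensional" hashes; the block-average resize, the 8x8 low-frequency DCT block, and the median threshold are conventional pHash/aHash choices assumed here, not mandated by the text.

```python
import math

def _resize(gray, size):
    """Block-average a 2-D grayscale matrix (list of rows) to size x size.
    Assumes the input is at least size x size."""
    h, w = len(gray), len(gray[0])
    out = []
    for i in range(size):
        row = []
        for j in range(size):
            block = [gray[y][x]
                     for y in range(i * h // size, (i + 1) * h // size)
                     for x in range(j * w // size, (j + 1) * w // size)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def average_hash(gray):
    """64-bit aHash: shrink to 8x8, threshold each pixel at the mean."""
    small = _resize(gray, 8)
    flat = [v for row in small for v in row]
    mean = sum(flat) / 64
    return [1 if v > mean else 0 for v in flat]

def perceptual_hash(gray):
    """64-bit pHash: shrink to 32x32, take the 8x8 low-frequency block of the
    2-D DCT, threshold at the median of the AC coefficients."""
    n = 32
    small = _resize(gray, n)
    # direct (non-separable) DCT-II, computed only for the 8x8 block we keep
    dct = [[sum(small[y][x]
                * math.cos((2 * y + 1) * u * math.pi / (2 * n))
                * math.cos((2 * x + 1) * v * math.pi / (2 * n))
                for y in range(n) for x in range(n))
            for v in range(8)] for u in range(8)]
    coeffs = [dct[u][v] for u in range(8) for v in range(8)]
    ac = sorted(coeffs[1:])  # drop the DC term before taking the median
    med = ac[len(ac) // 2]
    return [1 if c > med else 0 for c in coeffs]

def joint_hash(gray):
    """Concatenate pHash and aHash into the joint feature vector (128 bits)."""
    return perceptual_hash(gray) + average_hash(gray)
```

For a template video frame, `joint_hash` would be applied after grayscale conversion, and the resulting bit vector stored as one feature sample of style A.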
Computing the perceptual hash feature and the average hash feature of a video frame belongs to the prior art and is not repeated here.
In this step, computing the distance between the hash feature of each video frame to be detected and the pre-stored intro and/or outro hash feature samples of a style in the database may specifically include:
computing the edit distance between the perceptual hash feature of the video to be detected and the perceptual hash features of the sample video frames in the database;
computing the edit distance between the average hash feature of the video to be detected and the average hash features of the sample video frames in the database.
Step 104: if the hash feature of the current video frame to be detected matches one of the pre-stored hash feature samples in the database, determine the video frame to be detected as a head frame or tail frame of the video to be detected.
In one embodiment, a video frame to be detected is considered to match a pre-stored hash feature sample in the database if the edit distance between its perceptual hash feature and a perceptual hash sample in the database is less than a preset perceptual hash threshold, and the edit distance between its average hash feature and the corresponding average hash sample is less than a preset average hash threshold.
If no feature sample matching the hash feature of the video frame to be detected can be found in the database, the current video frame is considered not to belong to intro or outro content.
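The dual-threshold decision just described can be sketched as follows. For equal-length binary hash vectors the edit distance reduces to the Hamming distance; the threshold values of 10 are illustrative placeholders (the text elsewhere suggests 5%-25% of the hash length), not values fixed by the patent.

```python
def hamming(a, b):
    """Edit distance between two equal-length binary hash vectors."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def frame_matches_sample(frame_phash, frame_ahash, sample_phash, sample_ahash,
                         t_phash=10, t_ahash=10):
    """A frame matches a sample only if BOTH distances fall below their
    thresholds; any other combination counts as inconsistent."""
    return (hamming(frame_phash, sample_phash) < t_phash
            and hamming(frame_ahash, sample_ahash) < t_ahash)
```

A frame is then confirmed as a head/tail frame if it matches at least one sample in the (style-specific) database.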
Specifically, after the step of determining the video frame to be detected as a head frame or tail frame of the video to be detected, the method further includes:
selecting an undetected video frame as the current video frame to be detected, and returning to the step of obtaining the hash feature of the video frame to be detected.
In one implementation, the video frames to be detected may be selected in play order.
For example, for intro detection, after one video frame has been detected, the next video frame in play order is selected and detection continues until all video frames to be detected have been processed; for outro detection, after one video frame has been detected, the previous video frame in play order is selected, until all video frames to be detected have been processed.
As it can be seen that detecting video frame using the embodiment of the present invention, the Hash feature and sample for calculating video frame to be detected are utilized The editing distance of the Hash feature of video frame detects the video frame in video to be detected based on image consistency, can be quickly quasi- Really detect the head and/or trailer content of video to be detected, it is easy to use.
Specifically, referring to Fig. 2, Fig. 2 is a schematic diagram of a detection method for video frames according to an embodiment of the present invention, comprising a first process and a second process.
First process: build the database of the intro and/or outro hash feature samples of a style.
As shown in Fig. 2, first, multiple template videos may be manually selected for each specified style. For example, style A may be videos whose intros or outros were automatically generated from a video template by some editing software; template video 1 and template video 2 are selected for it.
Then, for the selected template video 1 and template video 2, the frame range of the intro and/or outro is marked manually.
In this step, the start frame and end frame of the intro and/or outro content in each video may be marked manually, and the start frame and end frame information is used as the template point information.
Then, for each video frame within the frame range, according to the manually entered point information, the perceptual hash and average hash features are computed for each frame belonging to intro or outro content, and the results are concatenated into one feature vector, which is stored in the database as one intro or outro hash feature sample of style A.
Specifically, the perceptual hash feature and average hash feature of each template video frame may be computed and concatenated to obtain the joint hash feature of each template video frame, which is stored in the intro and/or outro feature database.
In one implementation, the method of computing the perceptual hash feature and the average hash feature of each template video frame includes:
converting the input frame image into a grayscale image and computing a 64-dimensional perceptual hash feature;
converting the input frame image into a grayscale image and computing a 64-dimensional average hash feature.
Second process: detect the video to be detected.
As shown in Fig. 2, the detection of the video to be detected may include the following steps.
First, key frames are extracted from the video to be detected as the video frames that need to be detected.
Here, all video frames of the video to be detected may be taken as key frames.
Alternatively, a detection range may be determined first, and key frames are then extracted within it.
Specifically, if the current matching is intro matching, only the first T seconds of the video are taken as the detection range; if the current matching is outro matching, only the last T seconds of the video are taken as the detection range.
For the video data within the detection range, one key frame is extracted every K frames.
A typical setting of T is 30-60, and a typical value of K is 5.
For example, a typical intro lasts 5-20 s, so it suffices to check whether the first minute or first half minute of the video to be detected contains an intro; there is no need to detect the entire video content. Specifically, the first 30 seconds of the video to be detected may be checked; at a typical frame rate of 30 frames per second, one key frame may be extracted every 5 frames in play order.
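The detection-range sampling just described can be sketched as below (first T seconds scanned forward for intro detection, last T seconds scanned in reverse play order for outro detection, one key frame every K frames). Zero-based frame indexing is an assumption for illustration.

```python
def key_frame_indices(total_frames, fps=30, t_seconds=30, k=5, intro=True):
    """Return the key-frame indices inside the detection range."""
    limit = min(total_frames, t_seconds * fps)
    if intro:
        # first T seconds, scanned forward, one key frame every k frames
        return list(range(0, limit, k))
    # last T seconds, scanned in reverse play order
    first = total_frames - limit
    return list(range(total_frames - 1, first - 1, -k))
```

For a 100-second video at 30 fps, intro detection would thus examine 180 key frames instead of 3000 frames.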
Specifically, if two key frames adjacent in time both match the sample hash features in the database, i.e., both belong to intro or outro content, both adjacent key frames are determined as head frames or tail frames. If, of two adjacent key frames, one matches the hash feature samples in the database and the other does not, i.e., one video frame belongs to intro or outro content and the other does not, then all the undetected video frames between these two frames are taken as video frames that need to be detected and are detected frame by frame.
Specifically, each key frame of the video to be detected, frame 1 to frame N, is matched for image consistency against the sample video frames stored in the database, including:
computing the perceptual hash feature and average hash feature of each frame to obtain the joint hash feature; the specific computation is the same as in the database construction process and is not repeated here;
computing the distance between the hash feature of the video frame to be detected and each pre-stored intro and/or outro hash feature sample of a style in the database;
determining the video frames to be detected whose distance value is less than a preset threshold as intro or outro video frames of the video to be detected, thereby obtaining the matching result of each video frame to be detected.
The specific computation is as follows:
first, compute the first edit distance between the perceptual hash feature of the video frame to be detected and each perceptual hash sample in the database, and compute the second edit distance between the average hash feature of the video frame to be detected and each average hash sample in the database.
If the first edit distance < T1 and the second edit distance < T2, the video frame to be detected is considered to match the hash feature sample in the database, i.e., to have image consistency. If the first edit distance < T1 and the second edit distance >= T2, or the first edit distance >= T1 and the second edit distance < T2, or the first edit distance >= T1 and the second edit distance >= T2, the video frame to be detected is considered not to match the hash feature samples in the database, i.e., not to have image consistency.
T1 and T2 can be set according to how strictly image consistency is required: the stricter the requirement, the smaller the thresholds; typical values are between 5% and 25% of the hash feature length.
Optionally, if intro detection is performed on the video to be detected, the video frames to be detected may be matched against the intro hash feature samples in the database; if outro detection is performed, the video frames to be detected may be matched against the outro hash feature samples in the database.
In one implementation, the hash feature samples in the database carry labels: intro hash feature samples carry an intro label and outro hash feature samples carry an outro label. Thus, for intro matching, the video to be detected only needs to be matched against the intro hash feature samples in the database; similarly, for outro matching, it only needs to be matched against the outro hash feature samples.
Optionally, a video frame whose detection result belongs to intro or outro content is denoted f_m, and a video frame whose detection result does not belong to intro or outro content is denoted f_n.
Specifically, the intro or outro video frames obtained from the matching result may be fused and filtered.
Specifically, the video frames confirmed as intro or outro frames of the video to be detected are fused, the fused segments are then filtered, and the intro and/or outro video frames of the video to be detected are obtained and output as the detection result.
It can be seen that, with the embodiment of the present invention, the server can perform frame-level detection on the video to be detected based on image consistency, which greatly improves detection accuracy; and by extracting key frames, the video to be detected is sampled for detection, which shortens the detection time and makes detection fast.
Specifically, referring to Fig. 3, Fig. 3 is a flowchart of the video frame fusion method in a detection method for video frames according to an embodiment of the present invention, including the following steps:
Step 301: obtain the time information of each video frame confirmed as a head frame or tail frame of the video to be detected.
In one implementation, the server obtains the time of each video frame confirmed as belonging to the target video segment of the video to be detected.
Step 302: determine the temporally continuous video frames as one matching sub-segment.
In one implementation, video frames that are continuous in time form one matching sub-segment; one matching segment contains several matching sub-segments.
Step 303: judge whether the time difference between adjacent matching sub-segments is less than or equal to a preset fusion time difference threshold.
In one implementation, for a matching sub-segment S_i, the start frame of this matching segment is denoted f_sta^{S_i} and the end frame is denoted f_end^{S_i}. Two adjacent matching segments S_i and S_{i+1} are fused if their frame gap in time is not more than T3, i.e., f_sta^{S_{i+1}} - f_end^{S_i} <= T3,
where T3 is the frame-level error tolerance setting; the typical value of T3 is K.
Step 304: fuse the adjacent sub-segments whose time difference is less than or equal to the preset fusion time difference threshold into one matching segment.
In one implementation, the fusion operation is performed on adjacent sub-segments whose time difference is less than or equal to the preset fusion time difference threshold, i.e., the end frame f_end^{S_i} of S_i is updated and S_{i+1} is deleted from the matching segment list,
where S_i is the i-th matching segment, f_end^{S_i} is the last frame of the i-th matching segment, f_end^{S_{i+1}} is the last frame of the (i+1)-th matching segment, and S_{i+1} is the (i+1)-th matching segment.
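Steps 303-304 can be sketched as a single pass over the sub-segments, represented here as (start_frame, end_frame) pairs sorted by start frame; the default gap tolerance of 5 frames mirrors the typical setting T3 = K = 5 and is only illustrative.

```python
def fuse_sub_segments(segments, t3=5):
    """Merge adjacent matching sub-segments whose frame gap is at most t3.
    segments: list of (start_frame, end_frame) pairs sorted by start frame."""
    fused = []
    for sta, end in segments:
        if fused and sta - fused[-1][1] <= t3:
            # update S_i's end frame and drop S_{i+1}
            fused[-1] = (fused[-1][0], max(fused[-1][1], end))
        else:
            fused.append((sta, end))
    return fused
```

For example, sub-segments (0, 10) and (13, 20) fuse into (0, 20) under the default tolerance, while a 20-frame gap keeps segments separate.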
Step 305: select one of the fused matching segments as the target video segment.
In one implementation, if the fused matching segments contain only one segment, this segment is taken as the target video segment; if the fused matching segments contain two or more matching segments, one of them is selected as the target matching segment.
It can be seen from the above that, with the method provided by the embodiment of the present invention, the server fuses the video frames that satisfy image consistency and are confirmed as belonging to the target video segment of the video to be detected, and selects one of the fused matching segments as the target video segment; the target video segment obtained in this way is more accurate and more convenient to use.
Specifically, for selecting one of the fused matching segments as the target video segment, the first or the last fused matching segment may be determined as the target video segment.
Specifically, referring to Fig. 4, Fig. 4 is a flowchart of the target video segment selection method in a detection method for video frames according to an embodiment of the present invention, including the following steps:
Step 401: for each matching segment, calculate the proportion of frames confirmed as head frames or tail frames of the video to be detected.
In one implementation, for each fused matching segment S_i, the percentage of frames satisfying the image consistency requirement among the total frames of the segment is calculated by formula (1) and denoted the stability percentage:
ss_i = N(f_m in S_i) / (f_end^{S_i} - f_sta^{S_i} + 1)    (1)
where ss_i is the stability percentage of the i-th matching segment, f_m denotes the video frames confirmed as belonging to the target video segment, N(f_m in S_i) is the number of such frames within S_i, f_end^{S_i} is the end frame of the i-th matching segment, and f_sta^{S_i} is the start frame of the i-th matching segment.
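The stability percentage can be sketched directly from its definition: the share of a fused segment's frames that were confirmed as head/tail frames.

```python
def stability(segment, matched_frames):
    """Stability percentage of a fused segment (formula (1)): confirmed
    head/tail frames within the segment, divided by its total frame count."""
    sta, end = segment
    total = end - sta + 1
    hits = sum(1 for f in matched_frames if sta <= f <= end)
    return hits / total
```

A segment in which every key frame matched a sample thus has stability 1.0, while gaps filled during fusion lower the value.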
Step 402: determine the matching segment whose proportion is the largest and greater than the preset ratio threshold as the current matching segment.
In one implementation, the matching segment with the largest stability is taken, and its stability must be greater than the preset ratio threshold; the current matching segment is selected by formulas (2) and (3):
ss_{i_best} = max(ss_i | i in [1, ..., I])    (2)
ss_{i_best} > ss_t    (3)
where ss_{i_best} is the stability of the matching segment with the largest stability, ss_i is the stability of the i-th matching segment, I is the index of the last matching segment, and ss_t is the preset ratio threshold.
Step 403: if the current matching segment contains the head frame of the video to be detected, judge whether the next matching segment after the current matching segment satisfies the preset first splicing condition.
In one implementation, the server judges whether the next matching segment after the current matching segment satisfies the preset first splicing condition.
Step 404: if the preset first splicing condition is satisfied, splice the current matching segment with the next matching segment, take the spliced segment as the new current matching segment, and return to the step of judging whether the next matching segment after the current matching segment satisfies the preset first splicing condition.
Step 405: if the preset first splicing condition is not satisfied, determine the current matching segment as the target video segment.
Step 406: if the current matching segment contains the tail frame of the video to be detected, judge whether the previous matching segment before the current matching segment satisfies the preset second splicing condition.
Step 407: if the preset second splicing condition is satisfied, splice the current matching segment with the previous matching segment, take the spliced segment as the new current matching segment, and return to the step of judging whether the previous matching segment before the current matching segment satisfies the preset second splicing condition.
Step 408: if the preset second splicing condition is not satisfied, determine the current matching segment as the target video segment.
It can be seen from the above that, with the method provided by the embodiment of the present invention, the target matching segment determined by the server is more accurate, so that the intro and/or outro content can be accurately identified, which facilitates automatically skipping the intro and/or outro afterwards.
Specifically, the preset first splicing condition is: if the time difference between the current matching segment and the next matching segment is less than or equal to the preset first splicing threshold, and the proportion of frames in the next matching segment confirmed as head frames or tail frames of the video to be detected satisfies ss >= max(α·ss_{i_best}, ss_t), then the current segment is spliced with the next matching segment.
In one implementation, the preset first splicing condition is:
f_sta^{next_i} - Result_end <= T4    (4)
and ss_{next_i} >= max(α·ss_{i_best}, ss_t)    (5)
where f_sta^{next_i} is the first frame of the next matching segment, Result_end is the last frame of the current matching segment, T4 is the preset first splicing threshold whose typical value is twice T3, i.e. 2K, ss_{next_i} is the stability of the next segment, α is the preset target threshold, ss_{i_best} is the stability of the matching segment with the largest stability, and ss_t is the preset ratio threshold.
The preset second splicing condition is: if the time difference between the current matching segment and the previous matching segment is less than or equal to the preset second splicing threshold, and the proportion of frames in the previous matching segment confirmed as head frames or tail frames of the video to be detected satisfies ss >= max(α·ss_{i_best}, ss_t), then the output segment is spliced with the previous matching segment.
In one implementation, the preset second splicing condition is:
Result_sta - f_end^{last_i} <= T5    (6)
and ss_{last_i} >= max(α·ss_{i_best}, ss_t)    (7)
where Result_sta is the first frame of the current matching segment, f_end^{last_i} is the last frame of the previous matching segment, T5 is the preset second splicing threshold whose typical value is twice T3, i.e. 2K, ss_{last_i} is the stability of the previous segment, α is the preset target threshold, ss_{i_best} is the stability of the matching segment with the largest stability, and ss_t is the preset ratio threshold.
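The two splicing conditions can be sketched as predicates over frame positions and stabilities. The defaults t4 = t5 = 10 follow the typical value 2K = 10; alpha = 0.8 and ss_t = 0.5 are illustrative placeholders, since the patent leaves α and ss_t unspecified.

```python
def first_splice_ok(result_end, next_sta, ss_next, ss_best,
                    t4=10, alpha=0.8, ss_t=0.5):
    """First splicing condition (formulas (4)/(5)): the next matching segment
    starts within T4 frames of the current result's end AND its stability
    reaches max(alpha * ss_i_best, ss_t)."""
    return (next_sta - result_end <= t4
            and ss_next >= max(alpha * ss_best, ss_t))

def second_splice_ok(result_sta, last_end, ss_last, ss_best,
                     t5=10, alpha=0.8, ss_t=0.5):
    """Second splicing condition (formulas (6)/(7)), mirrored toward the
    previous matching segment for segments containing tail frames."""
    return (result_sta - last_end <= t5
            and ss_last >= max(alpha * ss_best, ss_t))
```

In the iterative steps 403-408, the current segment keeps absorbing its neighbor while the corresponding predicate holds, then becomes the target video segment.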
It can be seen from the above that, with the method provided by the embodiment of the present invention, the server can splice the matching segment with the highest stability with the segments before/after it according to preset thresholds to obtain the target video segment; the target video segment obtained in this way is more accurate.
Specifically, if the calculated maximum proportion in the video to be detected is not greater than the preset ratio threshold and the current matching segment contains the head frame of the video to be detected, the first matching segment is determined as the target video segment;
if the calculated maximum proportion in the video to be detected is not greater than the preset ratio threshold and the current matching segment contains the tail frame of the video to be detected, the last matching segment is determined as the target video segment.
In one implementation, the target video segment is determined as the intro segment or outro segment of the video to be detected. For example, if the target video segment contains frames confirmed as head frames of the video to be detected and consists of frames 1 to 100 of the video, the first 100 frames of the video are determined as its intro segment.
It can be seen from the above that the target video segment obtained with the method provided by the embodiment of the present invention is more accurate: if the calculated maximum proportion is not greater than the preset threshold, the similarity between the video segment to be detected and the sample video segment is very low, so the above method improves the accuracy of the target segment.
It can be seen from the above that, with the method provided by the embodiment of the present invention, the server can detect the target video segment in the video to be detected based on image consistency, with high detection accuracy, fast speed and ease of use.
Based on the same technical concept, corresponding to the method embodiment shown in Fig. 1, an embodiment of the present invention further provides a detection apparatus for video frames. As shown in Fig. 5, the apparatus includes:
a first obtaining module 501, configured to obtain a video frame to be detected;
a second obtaining module 502, configured to obtain the hash feature of the video frame to be detected;
a matching module 503, configured to match the hash feature of the video frame to be detected against each hash feature sample pre-stored in a database; the hash feature samples stored in the database are the intro and/or outro hash feature samples of a pre-stored style;
a matching result determining module 504, configured to determine the video frame to be detected as a head frame or tail frame of the video to be detected if the hash feature of the video frame to be detected matches one of the pre-stored hash feature samples in the database.
In an embodiment of the present invention, the hash feature samples include a perceptual hash feature and an average hash feature of each sample video frame.
The second obtaining module is specifically configured to calculate the perceptual hash feature and the average hash feature of the video frame to be detected.
The matching module includes a first distance calculating unit and a second distance calculating unit.
The first distance calculating unit is configured to perform a first distance calculation between the perceptual hash feature of the video frame to be detected and each perceptual hash feature sample of the head and/or tail of a given style pre-stored in the database.
The second distance calculating unit is configured to perform a second distance calculation between the average hash feature of the video frame to be detected and each average hash feature sample of the head and/or tail of the given style pre-stored in the database.
The matching result determining module is specifically configured to: if, for some hash feature sample pre-stored in the database, the first distance result is less than a perceptual hash threshold and the second distance result is less than an average hash threshold, determine that the hash feature of the video frame to be detected matches that sample, so that the video frame to be detected is matched successfully; otherwise, if the hash feature of the video frame to be detected matches none of the hash feature samples pre-stored in the database, the matching of the video frame to be detected fails.
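The two-distance matching performed by the distance calculating units above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the 32×32 and 8×8 hash sizes, the threshold values, and the `frame_matches` helper are assumptions chosen for the sketch, and both distances are plain Hamming distances over bit vectors.

```python
import numpy as np

def _block_mean(gray, size):
    """Downsample a 2-D grayscale array to size x size by block averaging
    (crops any remainder so the dimensions divide evenly)."""
    h, w = gray.shape
    gray = gray[: h - h % size, : w - w % size]
    h, w = gray.shape
    return gray.reshape(size, h // size, size, w // size).mean(axis=(1, 3))

def average_hash(gray, size=8):
    """Average hash: each downsampled pixel is compared with the overall mean."""
    small = _block_mean(gray, size)
    return (small > small.mean()).flatten()

def perceptual_hash(gray, size=32, keep=8):
    """Perceptual hash: 2-D DCT of a 32x32 downsample; the low-frequency
    keep x keep block is thresholded against its median."""
    small = _block_mean(gray, size)
    n = np.arange(size)
    c = np.cos(np.pi / size * (n[None, :] + 0.5) * n[:, None])  # DCT-II basis
    d = c @ small @ c.T
    low = d[:keep, :keep]
    return (low > np.median(low)).flatten()

def hamming(a, b):
    return int(np.count_nonzero(a != b))

def frame_matches(gray, sample, phash_thresh=10, ahash_thresh=10):
    """A frame matches a stored sample only when BOTH distances fall below
    their thresholds, mirroring the matching result determining module."""
    return (hamming(perceptual_hash(gray), sample["phash"]) < phash_thresh
            and hamming(average_hash(gray), sample["ahash"]) < ahash_thresh)
```

In this sketch a frame whose perceptual-hash distance happens to be small can still fail the match on the average-hash distance, which is the point of requiring both conditions.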
In an embodiment of the present invention, the apparatus further includes a selecting module, configured to, after the step of determining the video frame to be detected as a head frame or a tail frame of the video to be detected, select an undetected video frame as the next video frame to be detected and return to the step of obtaining the hash feature of the video frame to be detected.
In an embodiment of the present invention, the first obtaining module is specifically configured to treat all video frames of the video to be detected as undetected video frames and obtain one frame from them, in playback order, as the video frame to be detected.
In an embodiment of the present invention, the first obtaining module includes a first detection unit and a second detection unit.
The first detection unit includes a first range determining subunit and a first to-be-detected video frame determining subunit.
The first range determining subunit is configured to, when head detection is performed on the video to be detected, determine the video frames within a first preset duration from the start of the video to be detected as a first detection range.
The first to-be-detected video frame determining subunit is configured to extract key frames from the first detection range, determine them as the first undetected video frames used for head detection, and obtain one frame from the first undetected video frames, in extraction order, as the first video frame to be detected.
The matching module is configured to, when head detection is performed on the video to be detected, match the hash feature of the first video frame to be detected against each head hash feature sample pre-stored in the database.
The selecting module is configured to, when head detection is performed on the video to be detected, select one first undetected video frame, in playback order, as the first video frame to be detected.
The second detection unit includes a second range determining subunit and a second to-be-detected video frame determining subunit.
The second range determining subunit is configured to, when tail detection is performed on the video to be detected, determine the video frames within a second preset duration before the end of the video to be detected as a second detection range.
The second to-be-detected video frame determining subunit is configured to extract key frames from the second detection range, determine them as the second undetected video frames used for tail detection, and obtain one frame from the second undetected video frames, in extraction order, as the second video frame to be detected.
The matching module is configured to, when tail detection is performed on the video to be detected, match the hash feature of the second video frame to be detected against each tail hash feature sample pre-stored in the database.
The selecting module is configured to, when tail detection is performed on the video to be detected, select one second undetected video frame, in playback order, as the second video frame to be detected.
In an embodiment of the present invention, the first to-be-detected video frame determining subunit is specifically configured to extract, at a first preset interval, a plurality of first undetected video frames from the first detection range; and the second to-be-detected video frame determining subunit is specifically configured to extract, at a second preset interval, a plurality of second undetected video frames from the second detection range.
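The two detection ranges and the interval-based extraction described above admit a simple sketch. The function name and parameters (`preset_seconds`, `interval`) are illustrative assumptions; a real key-frame extractor would typically pick codec I-frames rather than fixed indices.

```python
def detection_candidates(total_frames, fps, preset_seconds, interval, head=True):
    """Indices of undetected frames sampled at a fixed interval inside the
    head (head=True) or tail (head=False) detection range of a video."""
    span = min(int(preset_seconds * fps), total_frames)  # range length, in frames
    if head:
        # first preset duration, counted from the start of the video
        return list(range(0, span, interval))
    # second preset duration, counted back from the end of the video
    start = total_frames - span
    return list(range(start, total_frames, interval))
```

For a 1000-frame video at 25 fps with a 4-second range and a 10-frame interval, head detection samples frames 0, 10, …, 90 and tail detection samples frames 900, 910, …, 990.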
In an embodiment of the present invention, the apparatus further includes:
a first matching result judging module, configured to, before the selecting module selects one first undetected video frame in playback order as the first video frame to be detected, judge whether the matching result of the previously matched first video frame to be detected is the same as the matching result of the currently matched first video frame to be detected; if they differ, determine the undetected video frames between the previously matched first video frame to be detected and the currently matched one as first undetected video frames, and then execute the step of selecting one first undetected video frame in playback order as the first video frame to be detected; or, if they are the same, directly execute that step;
and a second matching result judging module, configured to, before the selecting module selects one second undetected video frame in playback order as the second video frame to be detected, judge whether the matching result of the previously matched second video frame to be detected is the same as the matching result of the currently matched second video frame to be detected; if they differ, determine the undetected video frames between the previously matched second video frame to be detected and the currently matched one as second undetected video frames, and then execute the step of selecting one second undetected video frame in playback order as the second video frame to be detected; or, if they are the same, directly execute that step.
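The result-comparison step above — re-queuing the frames between two sampled frames whose matching results disagree — amounts to a coarse scan followed by a fine scan of each disagreement gap. The `scan_with_refinement` helper and its linear re-scan are an assumed rendering of the described behaviour, not the patent's exact procedure.

```python
def scan_with_refinement(indices, match):
    """Coarse scan over sampled frame indices; whenever two consecutive
    sampled frames disagree, the frames between them are determined as
    undetected frames and scanned too. Returns the matched frame indices."""
    matched = set()
    prev_idx, prev_res = None, None
    for idx in indices:
        res = match(idx)
        if prev_res is not None and res != prev_res:
            # the gap between the previous and current sample becomes undetected
            for j in range(prev_idx + 1, idx):
                if match(j):
                    matched.add(j)
        if res:
            matched.add(idx)
        prev_idx, prev_res = idx, res
    return matched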
In an embodiment of the present invention, the apparatus further includes a fusion module.
The fusion module is configured to fuse the frames confirmed as head frames or tail frames of the video to be detected, to obtain the target video segment.
In an embodiment of the present invention, the fusion module includes a time information obtaining unit, a matching sub-segment determining unit, a fusion time difference judging unit, a matching segment fusing unit, and a target video segment selecting unit.
The time information obtaining unit is configured to obtain the time information of each frame confirmed as a head frame or a tail frame of the video to be detected.
The matching sub-segment determining unit is configured to determine temporally continuous video frames as one matching sub-segment.
The fusion time difference judging unit is configured to judge whether the time difference between adjacent matching sub-segments is less than or equal to a preset fusion time difference threshold.
The matching segment fusing unit is configured to fuse adjacent matching sub-segments whose time difference is less than or equal to the preset fusion time difference threshold into one matching segment.
The target video segment selecting unit is configured to select one of the fused matching segments as the target video segment.
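The fusion module's two stages — grouping temporally continuous frames into matching sub-segments, then merging adjacent sub-segments within the fusion time difference threshold — might be sketched as follows, with timestamps reduced to frame indices for simplicity; the helper names are assumptions of the sketch.

```python
def to_subsegments(times, frame_gap=1):
    """Group matched frame times into sub-segments: times at most frame_gap
    apart are considered temporally continuous."""
    segs = []
    for t in sorted(times):
        if segs and t - segs[-1][1] <= frame_gap:
            segs[-1][1] = t          # extend the current sub-segment
        else:
            segs.append([t, t])      # start a new sub-segment
    return segs

def fuse(segs, fuse_gap):
    """Merge adjacent sub-segments whose gap is <= the fusion time
    difference threshold into single matching segments."""
    if not segs:
        return []
    fused = [segs[0][:]]
    for a, b in segs[1:]:
        if a - fused[-1][1] <= fuse_gap:
            fused[-1][1] = b
        else:
            fused.append([a, b])
    return [tuple(s) for s in fused]
```

For example, matched frames {0,1,2,3, 6,7, 20,21} form three sub-segments; with a fusion threshold of 3 the first two merge into one matching segment [0, 7] while [20, 21] stays separate.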
In an embodiment of the present invention, the target video segment selecting unit is specifically configured to determine the first or the last matching segment as the target video segment.
In an embodiment of the present invention, the target video segment selecting unit includes a ratio value calculating subunit, a current matching segment determining subunit, a first splicing condition judging subunit, a first splicing subunit, a first target video segment determining subunit, a second splicing condition judging subunit, a second splicing subunit, and a second target video segment determining subunit.
The ratio value calculating subunit is configured to calculate, for each matching segment, the ratio of frames confirmed as head frames or tail frames of the video to be detected.
The current matching segment determining subunit is configured to determine the matching segment whose ratio value is the largest and greater than a preset ratio threshold as the current matching segment.
The first splicing condition judging subunit is configured to, if the current matching segment includes the head frame of the video to be detected, judge whether the matching segment following the current matching segment satisfies a preset first splicing condition.
The first splicing subunit is configured to, if the preset first splicing condition is satisfied, splice the current matching segment with the following matching segment, determine the spliced matching segment as the current matching segment, and return to the step of judging whether the matching segment following the current matching segment satisfies the preset first splicing condition.
The first target video segment determining subunit is configured to, if the preset first splicing condition is not satisfied, determine the current matching segment as the target video segment.
The second splicing condition judging subunit is configured to, if the current matching segment includes the tail frame of the video to be detected, judge whether the matching segment preceding the current matching segment satisfies a preset second splicing condition.
The second splicing subunit is configured to, if the preset second splicing condition is satisfied, splice the current matching segment with the preceding matching segment, determine the spliced matching segment as the current matching segment, and return to the step of judging whether the matching segment preceding the current matching segment satisfies the preset second splicing condition.
The second target video segment determining subunit is configured to, if the preset second splicing condition is not satisfied, determine the current matching segment as the target video segment.
In an embodiment of the present invention, the preset first splicing condition is: if the time difference between the current matching segment and the following matching segment is less than or equal to a preset first splicing threshold, and in the following matching segment the ratio of frames confirmed as head frames or tail frames of the video to be detected satisfies ratio ≥ max(α·ss_{i_best}, ss_t), the current matching segment is spliced with the following matching segment.
The preset second splicing condition is: if the time difference between the current matching segment and the preceding matching segment is less than or equal to a preset second splicing threshold, and in the preceding matching segment the ratio of frames confirmed as head frames or tail frames of the video to be detected satisfies ratio ≥ max(α·ss_{i_best}, ss_t), the current matching segment is spliced with the preceding matching segment.
Here α is a preset target threshold, ss_{i_best} is the ratio value of the matching segment with the largest ratio of frames confirmed as head frames or tail frames of the video to be detected, and ss_t is the preset ratio threshold.
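One possible rendering of the head-side splicing rule above: pick the segment with the largest ratio (falling back to the first segment when no ratio exceeds ss_t), then keep splicing the following segment while the time gap stays within the splicing threshold and its ratio reaches max(α·ss_{i_best}, ss_t). The dict-based segment representation and parameter names are assumptions of the sketch; the tail-side rule is symmetric, splicing backwards from the last segments.

```python
def select_head_segment(segs, alpha, ss_t, splice_gap):
    """segs: matching segments sorted by time, each {'start', 'end', 'ratio'}.
    Returns the (start, end) span of the target head segment."""
    best_i = max(range(len(segs)), key=lambda i: segs[i]["ratio"])
    if segs[best_i]["ratio"] <= ss_t:
        # maximum ratio not greater than the preset ratio threshold:
        # fall back to the first matching segment
        return (segs[0]["start"], segs[0]["end"])
    ss_best = segs[best_i]["ratio"]
    cur = dict(segs[best_i])
    i = best_i + 1
    while i < len(segs):
        nxt = segs[i]
        if (nxt["start"] - cur["end"] <= splice_gap
                and nxt["ratio"] >= max(alpha * ss_best, ss_t)):
            cur["end"] = nxt["end"]   # splice and continue with the next segment
            i += 1
        else:
            break                     # first splicing condition no longer holds
    return (cur["start"], cur["end"])
```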
In an embodiment of the present invention, the current matching segment determining subunit is further configured to:
if the calculated maximum ratio value is not greater than the preset ratio threshold and the current matching segment includes the head frame of the video to be detected, determine the first matching segment as the target video segment;
if the calculated maximum ratio value is not greater than the preset ratio threshold and the current matching segment includes the tail frame of the video to be detected, determine the last matching segment as the target video segment.
An embodiment of the present invention further provides an electronic device. As shown in Fig. 6, the electronic device includes a processor 601, a communication interface 602, a memory 603, and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 communicate with one another through the communication bus 604.
The memory 603 is configured to store a computer program.
The processor 601 is configured to implement the following steps when executing the program stored in the memory 603:
obtaining a video frame to be detected;
obtaining a hash feature of the video frame to be detected;
matching the hash feature of the video frame to be detected against each hash feature sample pre-stored in a database, where the hash feature samples stored in the database are the hash feature samples of the head and/or the tail of videos of a given style;
if the hash feature of the video frame to be detected matches one of the hash feature samples pre-stored in the database, determining the video frame to be detected as a head frame or a tail frame of the video to be detected.
It can be seen from the above embodiments that, since the embodiments of the present invention perform distance calculations between the hash features of the video to be detected and the hash feature samples in the database, image consistency can be used to judge whether a frame to be detected belongs to head or tail content, and extracting key frames shortens the detection time. Therefore, when the target video segment of the video to be detected is detected with the embodiments of the present invention, the target video segment can be obtained quickly and accurately.
The communication bus of the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in the figure, which does not mean there is only one bus or one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include a Random Access Memory (RAM), and may also include a Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), or the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In another embodiment of the present invention, a computer-readable storage medium is further provided. The computer-readable storage medium stores a computer program that, when executed by a processor, implements the steps of any one of the above video frame detection methods.
In another embodiment of the present invention, a computer program product containing instructions is further provided, which, when run on a computer, causes the computer to execute the video frame detection method of any one of the above embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (such as coaxial cable, optical fiber, or Digital Subscriber Line (DSL)) or wirelessly (such as infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a Solid State Disk (SSD)).
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus and electronic device embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The above descriptions are merely preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (27)

1. A method for detecting a video frame, comprising:
obtaining a video frame to be detected;
obtaining a hash feature of the video frame to be detected;
matching the hash feature of the video frame to be detected against each hash feature sample pre-stored in a database, wherein the hash feature samples stored in the database are the hash feature samples of the head and/or the tail of videos of a given style; and
if the hash feature of the video frame to be detected matches one of the hash feature samples pre-stored in the database, determining the video frame to be detected as a head frame or a tail frame of the video to be detected.
2. The method according to claim 1, wherein the hash feature samples comprise a perceptual hash feature and an average hash feature of each sample video frame;
the step of obtaining the hash feature of the video frame to be detected comprises: calculating a perceptual hash feature and an average hash feature of the video frame to be detected;
the step of matching the hash feature of the video frame to be detected against each hash feature sample pre-stored in the database comprises:
performing a first distance calculation between the perceptual hash feature of the video frame to be detected and each perceptual hash feature sample of the head and/or tail of a given style pre-stored in the database;
performing a second distance calculation between the average hash feature of the video frame to be detected and each average hash feature sample of the head and/or tail of the given style pre-stored in the database;
if, for some hash feature sample pre-stored in the database, the first distance result is less than a perceptual hash threshold and the second distance result is less than an average hash threshold, determining that the hash feature of the video frame to be detected matches that sample, so that the video frame to be detected is matched successfully;
otherwise, if the hash feature of the video frame to be detected matches none of the hash feature samples pre-stored in the database, the matching of the video frame to be detected fails.
3. The method according to claim 1, wherein after the step of determining the video frame to be detected as a head frame or a tail frame of the video to be detected, the method further comprises:
selecting an undetected video frame as the next video frame to be detected, and returning to the step of obtaining the hash feature of the video frame to be detected.
4. The method according to claim 3, wherein the step of obtaining a video frame to be detected comprises: treating all video frames of the video to be detected as undetected video frames, and obtaining one frame from the undetected video frames, in playback order, as the video frame to be detected.
5. The method according to claim 3, wherein:
if head detection is performed on the video to be detected, the step of obtaining a video frame to be detected comprises:
determining the video frames within a first preset duration from the start of the video to be detected as a first detection range;
extracting key frames from the first detection range, determining them as the first undetected video frames used for head detection, and obtaining one frame from the first undetected video frames, in extraction order, as a first video frame to be detected;
the step of matching the hash feature of the video frame to be detected against each hash feature sample pre-stored in the database is: matching the hash feature of the first video frame to be detected against each head hash feature sample pre-stored in the database;
the step of selecting an undetected video frame as the next video frame to be detected is: selecting one first undetected video frame, in playback order, as the first video frame to be detected;
if tail detection is performed on the video to be detected, the step of obtaining a video frame to be detected comprises:
determining the video frames within a second preset duration before the end of the video to be detected as a second detection range;
extracting key frames from the second detection range, determining them as the second undetected video frames used for tail detection, and obtaining one frame from the second undetected video frames, in extraction order, as a second video frame to be detected;
the step of matching the hash feature of the video frame to be detected against each hash feature sample pre-stored in the database is: matching the hash feature of the second video frame to be detected against each tail hash feature sample pre-stored in the database;
the step of selecting an undetected video frame as the next video frame to be detected is: selecting one second undetected video frame, in playback order, as the second video frame to be detected.
6. The method according to claim 5, wherein:
the step of extracting key frames from the first detection range and determining them as the first undetected video frames used for head detection comprises: extracting, at a first preset interval, a plurality of first undetected video frames from the first detection range;
the step of extracting key frames from the second detection range and determining them as the second undetected video frames used for tail detection comprises: extracting, at a second preset interval, a plurality of second undetected video frames from the second detection range.
7. The method according to claim 5, wherein:
before selecting one first undetected video frame in playback order as the first video frame to be detected, the method further comprises:
judging whether the matching result of the previously matched first video frame to be detected is the same as the matching result of the currently matched first video frame to be detected;
if they differ, determining the undetected video frames between the previously matched first video frame to be detected and the currently matched one as first undetected video frames, and executing the step of selecting one first undetected video frame in playback order as the first video frame to be detected;
or, if they are the same, executing the step of selecting one first undetected video frame in playback order as the first video frame to be detected;
before selecting one second undetected video frame in playback order as the second video frame to be detected, the method further comprises:
judging whether the matching result of the previously matched second video frame to be detected is the same as the matching result of the currently matched second video frame to be detected;
if they differ, determining the undetected video frames between the previously matched second video frame to be detected and the currently matched one as second undetected video frames, and executing the step of selecting one second undetected video frame in playback order as the second video frame to be detected;
or, if they are the same, executing the step of selecting one second undetected video frame in playback order as the second video frame to be detected.
8. The method according to any one of claims 1 to 7, further comprising: fusing the frames confirmed as head frames or tail frames of the video to be detected, to obtain a target video segment.
9. The method according to claim 8, wherein the step of fusing the frames confirmed as head frames or tail frames of the video to be detected to obtain the target video segment comprises:
obtaining the time information of each frame confirmed as a head frame or a tail frame of the video to be detected;
determining temporally continuous video frames as one matching sub-segment;
judging whether the time difference between adjacent matching sub-segments is less than or equal to a preset fusion time difference threshold;
fusing adjacent matching sub-segments whose time difference is less than or equal to the preset fusion time difference threshold into one matching segment; and
selecting one of the fused matching segments as the target video segment.
10. The method according to claim 9, wherein the step of selecting one of the fused matching segments as the target video segment comprises: determining the first or the last matching segment as the target video segment.
11. The method according to claim 9, wherein the step of selecting one of the fused matching segments as the target video segment comprises:
calculating, for each matching segment, the ratio of frames confirmed as head frames or tail frames of the video to be detected;
determining the matching segment whose ratio value is the largest and greater than a preset ratio threshold as the current matching segment;
if the current matching segment includes the head frame of the video to be detected, judging whether the matching segment following the current matching segment satisfies a preset first splicing condition;
if the preset first splicing condition is satisfied, splicing the current matching segment with the following matching segment, determining the spliced matching segment as the current matching segment, and returning to the step of judging whether the matching segment following the current matching segment satisfies the preset first splicing condition;
if the preset first splicing condition is not satisfied, determining the current matching segment as the target video segment;
if the current matching segment includes the tail frame of the video to be detected, judging whether the matching segment preceding the current matching segment satisfies a preset second splicing condition;
if the preset second splicing condition is satisfied, splicing the current matching segment with the preceding matching segment, determining the spliced matching segment as the current matching segment, and returning to the step of judging whether the matching segment preceding the current matching segment satisfies the preset second splicing condition;
if the preset second splicing condition is not satisfied, determining the current matching segment as the target video segment.
12. The method according to claim 11, wherein:
the preset first splicing condition is: if the time difference between the current matching segment and the next matching segment is less than or equal to a preset first splicing threshold, and the ratio value occupied by the frames confirmed as head frames or tail frames of the video to be detected in the next matching segment satisfies ratio ≥ max(α·ss_{i_best}, ss_t), determining that the current matching segment is to be spliced with the next matching segment;
the preset second splicing condition is: if the time difference between the current matching segment and the previous matching segment is less than or equal to a preset second splicing threshold, and the ratio value occupied by the frames confirmed as head frames or tail frames of the video to be detected in the previous matching segment satisfies ratio ≥ max(α·ss_{i_best}, ss_t), determining that the current matching segment is to be spliced with the previous matching segment;
wherein α is a preset target threshold, ss_{i_best} is the ratio value, occupied by the frames confirmed as head frames or tail frames of the video to be detected, of the matching segment having the largest such ratio value, and ss_t is the preset ratio threshold.
13. The method according to claim 11, wherein the method further comprises:
if the calculated maximum ratio value is not greater than the preset ratio threshold and the current matching segment contains the head frame of the video to be detected, determining the first matching segment as the target video segment;
if the calculated maximum ratio value is not greater than the preset ratio threshold and the current matching segment contains the tail frame of the video to be detected, determining the last matching segment as the target video segment.
14. A detection apparatus for video frames, comprising:
a first obtaining module, configured to obtain a video frame to be detected;
a second obtaining module, configured to obtain a hash feature of the video frame to be detected;
a matching module, configured to perform feature matching between the hash feature of the video frame to be detected and each hash feature sample pre-stored in a database, respectively, wherein the hash feature samples stored in the database are hash feature samples of heads and/or tails of a pre-stored style;
a matching result determining module, configured to, if the hash feature of the video frame to be detected matches one of the hash feature samples pre-stored in the database, determine the video frame to be detected as a head frame or a tail frame of the video to be detected.
15. The apparatus according to claim 14, wherein the hash feature samples comprise a perceptual hash feature and an average hash feature of each sample video frame;
the second obtaining module is specifically configured to calculate a perceptual hash feature and an average hash feature of the video frame to be detected;
the matching module comprises a first distance calculating unit and a second distance calculating unit;
the first distance calculating unit is configured to perform a first distance calculation between the perceptual hash feature of the video frame to be detected and each perceptual hash feature sample of heads and/or tails of a style pre-stored in the database, respectively;
the second distance calculating unit is configured to perform a second distance calculation between the average hash feature of the video frame to be detected and each average hash feature sample of heads and/or tails of a style pre-stored in the database, respectively;
the matching result determining module is specifically configured to: if, for one of the hash feature samples pre-stored in the database, the first distance calculation result is less than a perceptual hash threshold and the second distance calculation result is less than an average hash threshold, determine that the hash feature of the video frame to be detected matches that hash feature sample, so that the matching of the video frame to be detected succeeds;
or, if the hash feature of the video frame to be detected matches none of the hash feature samples pre-stored in the database, the matching of the video frame to be detected fails.
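The two-stage comparison of claims 14–15 can be sketched as follows. This is a minimal illustration, not the patented implementation: the 64-bit hash representation, the use of Hamming distance for both calculations, and the threshold values are assumptions.

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two 64-bit hash values."""
    return bin(a ^ b).count("1")

def match_frame(phash: int, ahash: int, samples, p_thresh=10, a_thresh=12):
    """A frame matches if BOTH its perceptual hash and its average hash are
    within their respective distance thresholds of one stored sample pair."""
    for sample_phash, sample_ahash in samples:
        if (hamming(phash, sample_phash) < p_thresh
                and hamming(ahash, sample_ahash) < a_thresh):
            return True  # frame is taken as a head/tail frame of this style
    return False

# usage: one stored head-style sample pair, one candidate frame
samples = [(0x0F0F0F0F0F0F0F0F, 0x00FF00FF00FF00FF)]
print(match_frame(0x0F0F0F0F0F0F0F1F, 0x00FF00FF00FF00FE, samples))  # True
print(match_frame(0x0, 0x0, samples))                                # False
```

Requiring both distances to pass keeps a frame from matching on a single coincidentally similar hash, which is the point of storing two hash features per sample frame.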
16. The apparatus according to claim 14, further comprising a selecting module, configured to, after the video frame to be detected is determined as a head frame or a tail frame of the video to be detected, select an undetected video frame as the video frame to be detected and return to the step of obtaining the hash feature of the video frame to be detected.
17. The apparatus according to claim 16, wherein the first obtaining module is specifically configured to: take all video frames of the video to be detected as undetected video frames, and obtain one frame from the undetected video frames in playing order as the video frame to be detected.
18. The apparatus according to claim 16, wherein the first obtaining module comprises a first detection unit and a second detection unit;
the first detection unit comprises a first range determining subunit and a first to-be-detected video frame determining subunit;
the first range determining subunit is configured to, when head detection is performed on the video to be detected, determine the video frames within a first preset duration from the start of the video to be detected as a first detection range;
the first to-be-detected video frame determining subunit is configured to extract key frames from the first detection range, determine them as first undetected video frames for detecting the head of the video to be detected, and obtain one frame from the first undetected video frames in extraction order as the first video frame to be detected;
the matching module is configured to, when head detection is performed on the video to be detected, perform feature matching between the hash feature of the first video frame to be detected and each head hash feature sample pre-stored in the database, respectively;
the selecting module is configured to, when head detection is performed on the video to be detected, select one first undetected video frame in playing order as the first video frame to be detected;
the second detection unit comprises a second range determining subunit and a second to-be-detected video frame determining subunit;
the second range determining subunit is configured to, when tail detection is performed on the video to be detected, determine the video frames within a second preset duration before the end of the video to be detected as a second detection range;
the second to-be-detected video frame determining subunit is configured to extract key frames from the second detection range, determine them as second undetected video frames for detecting the tail of the video to be detected, and obtain one frame from the second undetected video frames in extraction order as the second video frame to be detected;
the matching module is configured to, when tail detection is performed on the video to be detected, perform feature matching between the hash feature of the second video frame to be detected and each tail hash feature sample pre-stored in the database, respectively;
the selecting module is configured to, when tail detection is performed on the video to be detected, select one second undetected video frame in playing order as the second video frame to be detected.
19. The apparatus according to claim 18, wherein:
the first to-be-detected video frame determining subunit is specifically configured to extract a plurality of first undetected video frames from the first detection range at a first preset interval;
the second to-be-detected video frame determining subunit is specifically configured to extract a plurality of second undetected video frames from the second detection range at a second preset interval.
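Claims 18–19 restrict detection to the start and end of the video and sample frames at a preset interval inside each range. A minimal sketch of that bookkeeping, with all durations and intervals chosen as illustrative assumptions:

```python
def detection_ranges(duration, head_len, tail_len):
    """Head range = first head_len seconds of the video;
    tail range = last tail_len seconds of the video."""
    head = (0.0, min(head_len, duration))
    tail = (max(0.0, duration - tail_len), duration)
    return head, tail

def sample_times(start, end, interval):
    """Timestamps of frames extracted at a fixed preset interval in a range."""
    times, t = [], start
    while t < end:
        times.append(round(t, 3))
        t += interval
    return times

# usage: 40-minute video, 2-minute head range, 3-minute tail range
head, tail = detection_ranges(duration=2400.0, head_len=120.0, tail_len=180.0)
print(head, tail)                 # (0.0, 120.0) (2220.0, 2400.0)
print(sample_times(*head, 30.0))  # [0.0, 30.0, 60.0, 90.0]
```

Sampling only inside these two windows, rather than hashing every frame of the video, is what keeps the per-video detection cost bounded.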
20. The apparatus according to claim 18, wherein:
the apparatus further comprises a first matching result judging module, configured to, before the selecting module selects one first undetected video frame in playing order as the first video frame to be detected,
judge whether the matching result of the previously matched first video frame to be detected is the same as the matching result of the currently matched first video frame to be detected;
if they are not the same, determine the undetected video frames between the previously matched first video frame to be detected and the currently matched first video frame to be detected as first undetected video frames, and execute the step of selecting one first undetected video frame in playing order as the first video frame to be detected;
or, if they are the same, execute the step of selecting one first undetected video frame in playing order as the first video frame to be detected;
the apparatus further comprises a second matching result judging module, configured to, before the selecting module selects one second undetected video frame in playing order as the second video frame to be detected,
judge whether the matching result of the previously matched second video frame to be detected is the same as the matching result of the currently matched second video frame to be detected;
if they are not the same, determine the undetected video frames between the previously matched second video frame to be detected and the currently matched second video frame to be detected as second undetected video frames, and execute the step of selecting one second undetected video frame in playing order as the second video frame to be detected;
or, if they are the same, execute the step of selecting one second undetected video frame in playing order as the second video frame to be detected.
21. The apparatus according to any one of claims 14 to 20, wherein the apparatus further comprises a fusion module;
the fusion module is configured to fuse the frames confirmed as head frames or tail frames of the video to be detected to obtain a target video segment.
22. The apparatus according to claim 21, wherein the fusion module comprises: a time information obtaining unit, a matching sub-segment determining unit, a fusion time difference judging unit, a matching segment composing unit and a target video segment selecting unit;
the time information obtaining unit is configured to obtain time information of each frame confirmed as a head frame or tail frame of the video to be detected;
the matching sub-segment determining unit is configured to determine temporally continuous video frames as one matching sub-segment;
the fusion time difference judging unit is configured to judge whether the time difference between adjacent matching sub-segments is less than or equal to a preset fusion time difference threshold;
the matching segment composing unit is configured to fuse adjacent matching sub-segments whose time difference is less than or equal to the preset fusion time difference threshold into one matching segment;
the target video segment selecting unit is configured to select one of the fused matching segments as the target video segment.
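The two-level grouping in claim 22 can be sketched in Python: confirmed frame timestamps are first grouped into temporally continuous sub-segments, then adjacent sub-segments within the fusion time-difference threshold are merged into one matching segment. The gap values are illustrative assumptions.

```python
def fuse(times, frame_gap=1.0, fuse_gap=5.0):
    """Group confirmed head/tail frame timestamps into continuous sub-segments
    (inter-frame gap <= frame_gap), then merge adjacent sub-segments whose gap
    is within the preset fusion time-difference threshold (fuse_gap)."""
    if not times:
        return []
    ts = sorted(times)
    subs = [[ts[0], ts[0]]]
    for t in ts[1:]:
        if t - subs[-1][1] <= frame_gap:
            subs[-1][1] = t          # extend the current sub-segment
        else:
            subs.append([t, t])      # start a new sub-segment
    merged = [subs[0]]
    for start, end in subs[1:]:
        if start - merged[-1][1] <= fuse_gap:
            merged[-1][1] = end      # fuse with the previous segment
        else:
            merged.append([start, end])
    return [tuple(s) for s in merged]

# frames at 0-2s and 6-7s fuse into one head segment; 30-31s stays separate
print(fuse([0, 1, 2, 6, 7, 30, 31]))  # [(0, 7), (30, 31)]
```

The fusion threshold absorbs short runs of frames that failed to match (e.g. a brief dissolve inside the credits) without letting unrelated matches far apart in the video collapse into one segment.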
23. The apparatus according to claim 22, wherein the target video segment selecting unit is specifically configured to determine the first or the last matching segment as the target video segment.
24. The apparatus according to claim 22, wherein the target video segment selecting unit comprises: a ratio value calculating subunit, a current matching segment determining subunit, a first splicing condition judging subunit, a first splicing subunit, a first target video segment determining subunit, a second splicing condition judging subunit, a second splicing subunit and a second target video segment determining subunit;
the ratio value calculating subunit is configured to calculate, for each matching segment, the ratio value occupied by the frames confirmed as head frames or tail frames of the video to be detected;
the current matching segment determining subunit is configured to determine the matching segment whose ratio value is the largest and greater than a preset ratio threshold as the current matching segment;
the first splicing condition judging subunit is configured to, if the current matching segment contains the head frame of the video to be detected, judge whether the next matching segment after the current matching segment satisfies a preset first splicing condition;
the first splicing subunit is configured to, if the preset first splicing condition is satisfied, splice the current matching segment with the next matching segment, determine the spliced matching segment as the current matching segment, and return to judging whether the next matching segment after the current matching segment satisfies the preset first splicing condition;
the first target video segment determining subunit is configured to, if the preset first splicing condition is not satisfied, determine the current matching segment as the target video segment;
the second splicing condition judging subunit is configured to, if the current matching segment contains the tail frame of the video to be detected, judge whether the previous matching segment before the current matching segment satisfies a preset second splicing condition;
the second splicing subunit is configured to, if the preset second splicing condition is satisfied, splice the current matching segment with the previous matching segment, determine the spliced matching segment as the current matching segment, and return to judging whether the previous matching segment before the current matching segment satisfies the preset second splicing condition;
the second target video segment determining subunit is configured to, if the preset second splicing condition is not satisfied, determine the current matching segment as the target video segment.
25. The apparatus according to claim 24, wherein:
the preset first splicing condition is: if the time difference between the current matching segment and the next matching segment is less than or equal to a preset first splicing threshold, and the ratio value occupied by the frames confirmed as head frames or tail frames of the video to be detected in the next matching segment satisfies ratio ≥ max(α·ss_{i_best}, ss_t), determining that the current matching segment is to be spliced with the next matching segment;
the preset second splicing condition is: if the time difference between the current matching segment and the previous matching segment is less than or equal to a preset second splicing threshold, and the ratio value occupied by the frames confirmed as head frames or tail frames of the video to be detected in the previous matching segment satisfies ratio ≥ max(α·ss_{i_best}, ss_t), determining that the current matching segment is to be spliced with the previous matching segment;
wherein α is a preset target threshold, ss_{i_best} is the ratio value, occupied by the frames confirmed as head frames or tail frames of the video to be detected, of the matching segment having the largest such ratio value, and ss_t is the preset ratio threshold.
26. The apparatus according to claim 24, wherein the current matching segment determining subunit is further configured to:
if the calculated maximum ratio value is not greater than the preset ratio threshold and the current matching segment contains the head frame of the video to be detected, determine the first matching segment as the target video segment;
if the calculated maximum ratio value is not greater than the preset ratio threshold and the current matching segment contains the tail frame of the video to be detected, determine the last matching segment as the target video segment.
27. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus;
the memory is configured to store a computer program;
the processor is configured to, when executing the program stored in the memory, implement the method steps of any one of claims 1 to 13.
CN201810638981.0A 2018-06-20 2018-06-20 Video frame detection method and device and electronic equipment Active CN108924586B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810638981.0A CN108924586B (en) 2018-06-20 2018-06-20 Video frame detection method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN108924586A true CN108924586A (en) 2018-11-30
CN108924586B CN108924586B (en) 2021-01-08

Family

ID=64420706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810638981.0A Active CN108924586B (en) 2018-06-20 2018-06-20 Video frame detection method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN108924586B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110290419A (en) * 2019-06-25 2019-09-27 北京奇艺世纪科技有限公司 Video broadcasting method, device and electronic equipment
CN110324657A (en) * 2019-05-29 2019-10-11 北京奇艺世纪科技有限公司 Model generation, method for processing video frequency, device, electronic equipment and storage medium
CN110505495A (en) * 2019-08-23 2019-11-26 北京达佳互联信息技术有限公司 Multimedia resource takes out frame method, device, server and storage medium
CN110855904A (en) * 2019-11-26 2020-02-28 Oppo广东移动通信有限公司 Video processing method, electronic device and storage medium
CN111027419A (en) * 2019-11-22 2020-04-17 腾讯科技(深圳)有限公司 Method, device, equipment and medium for detecting video irrelevant content
CN111291610A (en) * 2019-12-12 2020-06-16 深信服科技股份有限公司 Video detection method, device, equipment and computer readable storage medium
CN111356015A (en) * 2020-02-25 2020-06-30 北京奇艺世纪科技有限公司 Duplicate video detection method and device, computer equipment and storage medium
CN111479130A (en) * 2020-04-02 2020-07-31 腾讯科技(深圳)有限公司 Video positioning method and device, electronic equipment and storage medium
CN112040287A (en) * 2020-08-31 2020-12-04 聚好看科技股份有限公司 Display device and video playing method
CN112291589A (en) * 2020-10-29 2021-01-29 腾讯科技(深圳)有限公司 Video file structure detection method and device
CN113382283A (en) * 2020-03-09 2021-09-10 上海哔哩哔哩科技有限公司 Video title identification method and system

Citations (7)

Publication number Priority date Publication date Assignee Title
CN102323948A (en) * 2011-09-07 2012-01-18 上海大学 Automatic detection method for title sequence and tail leader of TV play video
US20130114745A1 (en) * 2006-05-23 2013-05-09 Lg Electronics Inc. Digital television transmitting system and receiving system and method of processing broadcast data
CN103152632A (en) * 2013-03-05 2013-06-12 天脉聚源(北京)传媒科技有限公司 Method and device for locating multimedia program
CN105554514A (en) * 2015-12-09 2016-05-04 福建天晴数码有限公司 Method and system for processing opening songs of videos
CN106101573A (en) * 2016-06-24 2016-11-09 中译语通科技(北京)有限公司 The grappling of a kind of video labeling and matching process
CN107135401A (en) * 2017-03-31 2017-09-05 北京奇艺世纪科技有限公司 Key frame extraction method and system
CN107133266A (en) * 2017-03-31 2017-09-05 北京奇艺世纪科技有限公司 The detection method and device and database update method and device of video lens classification


Also Published As

Publication number Publication date
CN108924586B (en) 2021-01-08

Similar Documents

Publication Publication Date Title
CN108924586A (en) A kind of detection method of video frame, device and electronic equipment
CN112565825B (en) Video data processing method, device, equipment and medium
CN109803180B (en) Video preview generation method and device, computer equipment and storage medium
US8326087B2 (en) Synchronizing image sequences
KR20190116199A (en) Video data processing method, device and readable storage medium
CN109034159A (en) image information extracting method and device
CN109121022B (en) Method and apparatus for marking video segments
CN113691836B (en) Video template generation method, video generation method and device and electronic equipment
CN113542777B (en) Live video editing method and device and computer equipment
CN110309060B (en) Detection method and device for updating identification algorithm, storage medium and computer equipment
CN108519991A (en) A kind of method and apparatus of main broadcaster&#39;s account recommendation
CN109743589B (en) Article generation method and device
CN109409364A (en) Image labeling method and device
CN109408367A (en) A kind of method and terminal of the control element identifying interactive interface
CN108062341A (en) The automatic marking method and device of data
CN108197030A (en) Software interface based on deep learning tests cloud platform device and test method automatically
CN113407773A (en) Short video intelligent recommendation method and system, electronic device and storage medium
CN113852832A (en) Video processing method, device, equipment and storage medium
CN111340015B (en) Positioning method and device
CN109409321A (en) A kind of determination method and device of camera motion mode
CN113989476A (en) Object identification method and electronic equipment
CN113515998A (en) Video data processing method and device and readable storage medium
CN113515997A (en) Video data processing method and device and readable storage medium
US20140286624A1 (en) Method and apparatus for personalized media editing
CN113934888A (en) Video tag processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant