CN109922373A - Video processing method, device and storage medium - Google Patents
Video processing method, device and storage medium
- Publication number
- CN109922373A (application number CN201910193143.1A)
- Authority
- CN
- China
- Prior art keywords
- video
- target
- scene
- clip
- video clip
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The embodiments of the present application provide a video processing method, device, and storage medium. The method comprises: splitting a video according to shooting shots to obtain multiple video clips; obtaining preset labels corresponding to the multiple video clips; obtaining time points in the multiple video clips that match the preset labels; obtaining, according to the time points, one or more target video clips that match the preset labels; and forming a new target video from one target video clip, or combining multiple target video clips into a new target video that can be played continuously. One or more target video clips can be obtained from the video according to a preset label, and a single target video clip can then be played, or multiple target video clips played continuously, instead of playing the entire original video in its original order. Only the video clips matching the preset label are shown to the user, which satisfies the personalized needs of different users.
Description
Technical field
This application relates to the field of intelligent video and image surveillance, and in particular to a video processing method, device, and storage medium.
Background technique
Nowadays, more and more young people choose to watch TV series, films, and other video programs through various video applications. When a new show is released, they want to quickly learn its content and highlights. Because most current programs have slow-moving plots, or contain parts the user cares little about, users constantly drag the progress bar and fast-forward or rewind while watching to reach the parts they want to see, which costs considerable time and effort.
Existing video playback usually follows the original video content. The user can only control the pace of playback by adjusting the playback speed, and cannot jump directly to the content that suits his or her needs. Moreover, the traditional approach of controlling the playback speed distorts the audio, resulting in a poor user experience.
Summary of the invention
The embodiments of the present application provide a video processing method, device, and storage medium, which can process a video according to the user's needs so that the processed video meets the needs of different users.
The embodiments of the present application provide a video processing method, comprising:
splitting the video according to shooting shots to obtain multiple video clips;
obtaining preset labels corresponding to the multiple video clips;
obtaining time points in the multiple video clips that match the preset labels;
obtaining, according to the time points, one or more target video clips that match the preset labels;
forming a new target video from one target video clip, or combining multiple target video clips into a new target video that can be played continuously.
The embodiments of the present application provide a video processing apparatus, comprising:
a video segmentation module, configured to split the video according to shooting shots to obtain multiple video clips;
a preset label obtaining module, configured to obtain preset labels corresponding to the multiple video clips;
a time point obtaining module, configured to obtain time points in the multiple video clips that match the preset labels;
a target video clip obtaining module, configured to obtain, according to the time points, one or more target video clips that match the preset labels;
a processing module, configured to form a new target video from one target video clip, or combine multiple target video clips into a new target video that can be played continuously.
The embodiments of the present application also provide a storage medium in which a computer program is stored. When the computer program runs on a computer, it causes the computer to execute the video processing method described above.
In the video processing method, device, and storage medium provided by the embodiments of the present application, the video is first split according to shooting shots to obtain multiple video clips; preset labels corresponding to the multiple video clips are then obtained; time points in the multiple video clips that match the preset labels are then obtained; one or more target video clips matching the preset labels are then obtained according to the time points; finally, a new target video is formed from one target video clip, or multiple target video clips are combined into a new target video that can be played continuously. One or more target video clips can thus be obtained from the video according to a preset label and then played continuously, instead of playing the entire original video in its original order. Only the video clips matching the preset label are shown to the user, which satisfies the personalized needs of different users.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a first schematic flowchart of the video processing method provided by an embodiment of the present application.
Fig. 2 is a second schematic flowchart of the video processing method provided by an embodiment of the present application.
Fig. 3 is a third schematic flowchart of the video processing method provided by an embodiment of the present application.
Fig. 4 is a fourth schematic flowchart of the video processing method provided by an embodiment of the present application.
Fig. 5 is a schematic diagram of the video processing apparatus provided by an embodiment of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of this application.
Referring to Fig. 1, Fig. 1 is a first schematic flowchart of the video processing method provided by an embodiment of the present application. The flow of the video processing method may specifically include the following steps.
101. Split the video according to shooting shots to obtain multiple video clips.
The video processing of the embodiments of the present application can be built on the basis of shot segmentation, that is, each segmented shot unit is analyzed as one video analysis object. Shot segmentation models the background information of the video and splits the video into shots using the optical flow information and color distribution of the background. Within a single shot, the optical flow information and color distribution of the background are relatively stable; when the information change between adjacent frames is large enough to exceed a threshold, a shot change is judged to occur at that timestamp. The video can therefore be split according to shooting shots to obtain multiple video clips.
Each video clip is shot continuously by the same camera shot, and two adjacent video clips are shot by two different camera shots. It should be noted that if two video clips shot by the same camera appear in different time periods, that is, the two clips are disconnected in playback order, they are regarded as two separate video clips.
102. Obtain preset labels corresponding to the multiple video clips.
Multiple frames of images can first be sampled from the video, and one or more frames are then recognized by image recognition technology to obtain the image features in each frame. Preset labels are then extracted from the image features of each frame. The preset label may be at least one of a scene label, an action label, a person label, and the like.
103. Obtain time points in the multiple video clips that match the preset labels.
After the multiple video clips are obtained, image recognition is performed on each video clip. Specifically, multiple sample frames can be obtained from each video clip at a preset sampling frequency, and image recognition is then performed on each sample frame to identify whether it contains a feature matching the preset label. If not, no processing is performed; if so, the time point at which the preset label appears is recorded. The time point can be understood as a time point in the playback of the entire video.
Identifying whether a sample frame contains a feature matching the preset label can be understood as identifying whether the sample frame contains a feature identical to the preset label, such as the same person, or a feature similar to the preset label, such as a similar scene or action.
104. Obtain, according to the time points, one or more target video clips that match the preset labels.
After the time points matching the preset label in each video clip are obtained, one or more target video clips matching the preset label are obtained.
A video clip that matches the preset label may be used as a target video clip in its entirety.
Alternatively, within a video clip, the time point of the start position matching the preset label and the time point of the end position matching the preset label can be obtained, and the target video clip is then segmented out according to the time points of the start and end positions. One or more target video clips can thus be obtained from one or more video clips.
105. Form a new target video from one target video clip, or combine multiple target video clips into a new target video that can be played continuously.
Finally, one or more target video clips are combined into one new target video, which can play a single video clip or play multiple video clips in sequence. The entire original video no longer needs to be played in its original order; only the video clips matching the preset label are shown to the user, which satisfies the personalized needs of different users.
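Combining several target video clips into one continuously playable target video amounts to laying the clips end to end on a new timeline. A minimal sketch, with hypothetical names (the patent does not specify this data layout):

```python
def combine_segments(segments):
    """segments: dicts with 'start'/'end' times in the original video.
    Sorts them by original start time and concatenates them onto a new
    timeline. Returns (total_duration, timeline) where each timeline
    entry maps a source span to its start offset in the new video."""
    segments = sorted(segments, key=lambda seg: seg['start'])
    timeline, offset = [], 0.0
    for seg in segments:
        timeline.append({'src_start': seg['start'], 'src_end': seg['end'],
                         'dst_start': offset})
        offset += seg['end'] - seg['start']
    return offset, timeline
```

A player can then play the new target video continuously by seeking to `src_start` whenever playback reaches a segment's `dst_start`.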
Referring to Fig. 2, Fig. 2 is a second schematic flowchart of the video processing method provided by an embodiment of the present application. In some embodiments, the preset label may be an action label. In the video processing method, after the step of splitting the video according to shooting shots to obtain multiple video clips, the method may include:
201. Extract the video optical flow information of each video clip.
202. Determine that a video clip whose video optical flow information is greater than a preset threshold is a target video clip, and mark it with an action label.
203. Extract the image information and audio information of the target video clip.
204. Obtain, according to the image information and audio information, a first probability that each frame in the target video clip is an action start and a second probability that it is an action end.
205. Take the frame with the largest first probability as the action start frame, and the frame with the largest second probability as the action end frame.
206. Obtain an action video clip set, and the time points of the action video clip set, according to the action start frame and the action end frame.
207. Obtain, according to the time points, one or more target video clips that match the action label.
208. Form a new target video from one target video clip, or combine multiple target video clips into a new target video that can be played continuously.
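Steps 202 and 205 can be sketched as follows, assuming the per-frame optical-flow magnitudes and the per-frame start/end probabilities (the model outputs of step 204) are already available. The function name, threshold value, and the constraint that the end frame must not precede the start frame are all assumptions for illustration:

```python
def detect_action_segment(flow, p_start, p_end, flow_threshold=0.5):
    """flow: per-frame optical-flow magnitude for one clip.
    p_start / p_end: per-frame probabilities of being an action start
    or end. Keeps the clip only if its mean flow magnitude exceeds
    flow_threshold (step 202), then takes the frame with the largest
    start probability and the frame at or after it with the largest
    end probability (step 205). Returns (start_idx, end_idx) or None."""
    if sum(flow) / len(flow) <= flow_threshold:
        return None  # not a candidate highlight shot
    start = max(range(len(p_start)), key=lambda i: p_start[i])
    end = max(range(start, len(p_end)), key=lambda i: p_end[i])
    return (start, end)
```

The returned frame indices delimit the action unit that would then be fed to the action recognition model.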
The action-themed video analysis mode is mainly applied to action films, sports events, and similar programs. It recognizes the highlight action segments that appear in the video, for example fights, car chases, goals, and other moments with drastic and significant information changes, and records them, providing users with customized viewing modes such as highlight replay.
In the action-themed video analysis mode, the highlights in a video are often time segments where the main characters gather and the amount of information changes rapidly. First, video optical flow information is extracted to filter out candidate highlight shots with fast information changes. The candidate shots are then fed into a highlight action detection module, which extracts the video image and audio information simultaneously for discrimination and outputs, for each frame in the shot, the probability that it is the start, the end, or part of an ongoing action. Finally, the start and end frames with the largest probabilities are taken as the head and tail of the action unit and fed into a video action and behavior recognition model for analysis, yielding the action classification result.
In the action-themed video analysis mode, when the video analysis is completed, the detected video clips are classified and sorted, and the time points of each highlight segment are listed.
Video playback is then carried out in different ways online; that is, the user can freely choose the desired playback mode according to the persons, scenes, actions, and other information analyzed offline.
For highlight action recognition in the embodiments of the present application, optical flow estimation can first be performed on the video, and the shots with large optical flow changes are filtered out as candidate highlight action shots. The candidate shot videos are then fed into a highlight segment detection and discrimination module. Since highlight action segments in a video are usually clearly distinguishable from other segments not only visually but also acoustically, the discrimination module fuses information from both the visual and audio modalities. In this embodiment, the image and audio information are fed into the same model framework for training, and the probability that each frame corresponds to an action start, an ongoing action, or an action end is output. Then, the frame with a high start probability, the frame with a high end probability, and the frames in between are fed into an action recognition module, where the action recognition module is a pre-trained action recognition model that finally outputs the type of the action. Finally, likewise, the highlight segments appearing in the video are sorted, and their time points are output.
In some embodiments, after the step of obtaining the action video clip set according to the action start frame and the action end frame, the method further includes:
when there are multiple action video clip sets, sorting the multiple action video sets and outputting the time information of each action video set.
Referring to Fig. 3, Fig. 3 is a third schematic flowchart of the video processing method provided by an embodiment of the present application. The preset label may be a scene label. The flow of the video processing method may specifically include the following steps.
301. Split the video according to shooting shots to obtain multiple video clips.
The video processing of the embodiments of the present application can be built on the basis of shot segmentation, that is, each segmented shot unit is analyzed as one video analysis object. Shot segmentation models the background information of the video and splits the video into shots using the optical flow information and color distribution of the background. Within a single shot, the optical flow information and color distribution of the background are relatively stable; when the information change between adjacent frames is large enough to exceed a threshold, a shot change is judged to occur at that timestamp. The video can therefore be split according to shooting shots to obtain multiple video clips.
Each video clip is shot continuously by the same camera shot, and two adjacent video clips are shot by two different camera shots. It should be noted that if two video clips shot by the same camera appear in different time periods, that is, the two clips are disconnected in playback order, they are regarded as two separate video clips.
302. Perform interval sampling on each video clip to obtain multiple sample frames.
Interval sampling is performed on each video clip; the sampling frequency may be one frame per second, ten frames per second, and so on, so as to obtain multiple sample frames.
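The interval sampling of step 302 can be sketched as follows, assuming the clip's frame count and frame rate are known; the function name and parameters are hypothetical:

```python
def sample_frames(num_frames, fps, sample_rate=1.0):
    """Interval-sample a clip: take roughly sample_rate frames per
    second from a clip of num_frames frames shot at fps frames per
    second. Returns the sampled frame indices."""
    step = max(1, round(fps / sample_rate))
    return list(range(0, num_frames, step))
```

At 25 fps, sampling one frame per second takes every 25th frame; sampling ten frames per second takes roughly every other frame.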
303. Obtain the scene information in one of the multiple sample frames.
One of the multiple sample frames is chosen as a reference frame, and the scene information of the reference frame is then obtained. The reference frame can be obtained at random, or a frame can be obtained at random from among the first several frames. Multiple consecutive sample frames can also be compared, and the frame with the best quality selected as the reference frame. Of course, the scene information of one sample frame can also be obtained from each of the multiple video clips, so as to obtain multiple pieces of scene information from multiple sample frames.
304. When the scene information indicates a costume (period) scene, recognize the multiple sample frames according to a costume scene recognition algorithm to obtain the scene label corresponding to each video clip; when the scene information indicates a modern scene, recognize the multiple sample frames according to a modern scene recognition algorithm to obtain the scene label corresponding to each video clip.
The scene-themed video analysis mode is mainly applied to long videos such as films and TV series. A large number of scene samples can be collected for the two major subject categories of costume dramas and modern dramas, and scene models covering hundreds of classes are trained, essentially enumerating the scenes in existing films and TV series. When video analysis is performed, the places and scenes that appear frequently in the video are integrated so as to determine whether the video is mainly based on costume scenes or modern scenes.
In some embodiments, after the step of obtaining the scene label corresponding to each video clip, the method may further include:
obtaining the time period information of each video clip, where the time period information corresponds to the playback time period of the video clip;
sorting the multiple video clips according to their timing;
when two adjacent scene labels are identical, merging them into one scene label, where the merged time period information includes the time period information of the two video clips before the merge.
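The merging of adjacent identical scene labels described above can be sketched as follows; the names and data layout are hypothetical, not the patent's implementation:

```python
def merge_adjacent_scenes(clips):
    """clips: list of {'tag', 'start', 'end'} dicts sorted by time.
    Adjacent clips with the same scene tag are merged into one entry
    whose time period covers both. The input list is not modified."""
    merged = []
    for clip in clips:
        if merged and merged[-1]['tag'] == clip['tag']:
            merged[-1]['end'] = clip['end']  # extend the previous entry
        else:
            merged.append(dict(clip))        # copy so input stays intact
    return merged
```

Two consecutive "palace" clips thus collapse into one entry spanning both playback periods, while a following "garden" clip stays separate.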
305. Classify the multiple video clips according to the scene labels to obtain multiple video clip sets, and obtain from each video clip set a scene image as the view label of the corresponding video clip set.
In the scene-themed video analysis mode, scenes are classified under the two major categories of indoor and outdoor, and different indoor and outdoor scene class labels are defined for costume dramas and modern dramas respectively. Indoor scene labels include, for example, the office, shop, kitchen, and bedroom in a modern drama, and the bedroom, study, and prison in a costume drama; outdoor scene labels include the playground, park, road, and platform in a modern drama, and the woods, garden, cliff, and fairground in a costume drama. This can also be understood as first performing a first classification into costume scenes and modern scenes, and then a second classification into indoor and outdoor.
In the scene-themed video analysis mode, a scene database can be established, which mainly records the labels and time information of the scenes that appear frequently in the video, that is, the prevailing scenes. Since a scene in a video is usually shot by one fixed camera, that is, the scene within one shot unit belongs to the same class, only the sampled frame images within the shot need to be fed into the scene classification model to obtain the class label of the current scene.
In some embodiments, in the scene-themed video analysis mode, the scene analysis model can extract scene features with methods such as deep learning models, combined with conventional machine learning to train different scene classifiers.
In some embodiments, in the scene-themed video analysis mode, the analyzed scene information is integrated: scene entries with duplicate labels are merged, ordered by scene occurrence frequency first and time second, and the final scene list is output.
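The "frequency first, time second" ordering of the final scene list can be sketched as follows (hypothetical names; the exact tie-breaking rule is an assumption):

```python
from collections import Counter

def scene_list(occurrences):
    """occurrences: (label, time) pairs, one per detected scene
    occurrence. Returns the unique labels ordered by descending
    frequency, with ties broken by earliest time of appearance."""
    freq = Counter(lbl for lbl, _ in occurrences)
    first_seen = {}
    for lbl, t in occurrences:
        first_seen.setdefault(lbl, t)
    return sorted(first_seen, key=lambda lbl: (-freq[lbl], first_seen[lbl]))
```

A scene seen twice outranks all scenes seen once, and among single-occurrence scenes the one appearing earliest comes first.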
306. Obtain the time points in each video clip in the video clip set that match the preset label.
307. Obtain, according to the time points, one or more target video clips that match the scene label.
After the time points matching the scene label in each video clip in the video clip set are obtained, one or more target video clips matching the scene label are obtained.
A video clip that matches the scene label may be used as a target video clip in its entirety.
Alternatively, within a video clip, the time point of the start position matching the scene label and the time point of the end position matching the scene label can be obtained, and the target video clip is then segmented out according to the time points of the start and end positions. One or more target video clips can thus be obtained from one or more video clips.
308. Form a new target video from one target video clip, or combine multiple target video clips into a new target video that can be played continuously.
Finally, one target video clip is formed into a new target video, or multiple target video clips are combined into one new target video, which can play a single video clip or play multiple video clips in sequence. The entire original video is no longer played in its original order; only the video clips matching the scene label are shown to the user, which satisfies the personalized needs of different users.
For scene recognition analysis in the embodiments of the present application, equal-interval sampling can first be performed on a shot unit, for example at 1 frame/s, and a sample frame is then fed into a scene recognition module to judge whether the scene is costume or modern. If it is costume, the sample frames of all remaining shot units are fed directly into the costume scene recognition module; if it is a modern scene, the sample frames of all remaining shot units are fed directly into the modern scene recognition module. Both the costume scene recognition module and the modern scene recognition module use feature extraction models and classifiers trained with a convolutional neural network (CNN) model. Then, after the scene classification unit, a voting analysis is performed on the analysis results under the shot, that is, the shot is marked with the majority label. Finally, when the analysis of all shot units is completed, adjacent units with the same label are merged and sorted according to frequency and time, and the scene image and the times at which the scene appears are output.
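The per-shot voting step can be sketched as follows. The tie-breaking rule (the label seen first wins) is an assumption, since the text does not specify one:

```python
from collections import Counter

def vote_shot_label(frame_labels):
    """Label a whole shot with the scene class predicted for the
    majority of its sampled frames; ties go to the label that
    appears first in the frame sequence."""
    counts = Counter(frame_labels)
    best = max(counts.values())
    for lbl in frame_labels:   # scanning in order preserves first-seen priority
        if counts[lbl] == best:
            return lbl
    return None                # only reached for an empty input
```

If two of three sampled frames are classified as "indoor", the entire shot is marked "indoor".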
Referring to Fig. 4, Fig. 4 is a fourth schematic flowchart of the video processing method provided by an embodiment of the present application. The preset label is a person label, and the flow of the video processing method may specifically include the following steps.
401. Split the video according to shooting shots to obtain multiple video clips.
The video processing of the embodiments of the present application can be built on the basis of shot segmentation, that is, each segmented shot unit is analyzed as one video analysis object. Shot segmentation models the background information of the video and splits the video into shots using the optical flow information and color distribution of the background. Within a single shot, the optical flow information and color distribution of the background are relatively stable; when the information change between adjacent frames is large enough to exceed a threshold, a shot change is judged to occur at that timestamp. The video can therefore be split according to shooting shots to obtain multiple video clips.
Each video clip is shot continuously by the same camera shot, and two adjacent video clips are shot by two different camera shots. It should be noted that if two video clips shot by the same camera appear in different time periods, that is, the two clips are disconnected in playback order, they are regarded as two separate video clips.
402. Perform face recognition and/or human body recognition on the persons in each video clip, determine the target person, and set a person label according to the target person.
In some embodiments, the step of performing face recognition and/or human body recognition on multiple persons in each video clip to determine the target person comprises:
determining the target person according to preset rules.
After the target person is determined, the method may further include:
obtaining the facial features of the target person, and comparing the facial features with a preset database;
if the target person is in the preset database, obtaining a first quality score of the facial features in the preset database and a second quality score of the current facial features;
if the second quality score is greater than the first quality score, replacing the person's face image in the preset database with the current face image;
if the facial features of the target person are not in the preset database, storing the facial features and/or human body features of the target person in the preset database.
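The quality-score comparison for updating the preset database can be sketched as follows; the names and data layout are hypothetical:

```python
def update_face_db(db, person_id, feature, quality):
    """db maps person_id -> {'feature', 'quality'}. A previously
    unseen person is inserted; a known person's record is replaced
    only when the new quality score beats the stored one.
    Returns True if the database changed."""
    entry = db.get(person_id)
    if entry is None or quality > entry['quality']:
        db[person_id] = {'feature': feature, 'quality': quality}
        return True
    return False
```

Repeated calls for the same person therefore keep only the best-quality facial feature, matching the dynamic update described above.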
In the person recognition method of the embodiments of the present application, the video clip of one shot unit can first be fed into a detection module, where the detection module includes two kinds of detectors, a face detector and a human body detector. When a face is detected, the result is fed into a face screening module, which mainly screens the faces of the main characters in the image and rejects other irrelevant faces. Next, the screened persons are tracked to obtain their track streams. The tracking module selects different tracking modes according to the distance of the shot, and the distance of the shot can be judged from the size and ratio of the face detection box. Then, after the track stream of a person is obtained, equal-interval sampling is performed in the track stream at 6 frames/s, and the faces in the sample frames are detected in combination with the location information of the tracking stream (which shortens detection time, since only useful targets are detected). The output face group is fed into a video face recognition module, where the video face recognition module outputs two kinds of information: one is the feature information of the video clip after face fusion, and the other is the confidence of this feature, that is, a quality score. Since face recognition performs poorly when the face is blurred, occluded, or in an extreme pose, the quality factor is taken into account when training the face recognition model, allowing the model to output the face feature and the feature quality simultaneously. Then, this group of facial features is compared with the persons in the character database: if the person does not exist, the person's features and number are put into the database, and one face portrait is selected and stored in the database; if the person does exist, the quality scores are compared, and the feature information and face portrait information in the database are updated. Finally, when all shot units of the video have been analyzed, the appearance time and duration of each main character in the video are counted and sorted, and the face image of each person and the corresponding time points of appearance are output.
In some embodiments, the step of obtaining multiple time points at which the target person appears in multiple video clips comprises:
obtaining the shot parameters used by the current video clip;
when the shot parameter indicates a close-up shot, obtaining, according to the facial features of the target person, multiple time points at which the target person appears in multiple video clips;
when the shot parameter indicates a long shot, obtaining, according to the human body features of the target person, multiple time points at which the target person appears in multiple video clips.
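Selecting the feature type from the shot distance, judged from the size of the face detection box relative to the frame, can be sketched as follows. The area-ratio criterion and the 0.05 threshold are assumptions for illustration; the text only says the size and ratio of the detection box are used:

```python
def choose_tracking_mode(face_box, frame_size, close_up_ratio=0.05):
    """face_box: (w, h) of the face detection box in pixels.
    frame_size: (W, H) of the video frame. Treat the shot as a
    close-up when the face box covers more than close_up_ratio of
    the frame area: track by face in close-ups, by body otherwise."""
    w, h = face_box
    W, H = frame_size
    return 'face' if (w * h) / (W * H) > close_up_ratio else 'body'
```

A 200x200 face in a 1920x1080 frame covers under 2% of the frame and is treated as a long shot, so body features would be used.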
403. Obtain multiple time points at which the target person appears in multiple video clips.
Specifically, the person-themed video analysis mode is mainly applied to long videos composed of a limited number of main characters, such as TV series and films.
In the person-themed video analysis mode, the main characters in the video are identified and tracked mainly through a multi-model method combining face recognition and human body recognition, and the appearance times of each person are recorded.
In the person-themed video analysis mode, a main character database needs to be established. The database is dynamically generated for each TV series or film, that is, each TV series and each film has its own character database. The persons in the database are the main characters involved in the plot of the current TV series or film, rather than supporting roles and audiences, and the identity of each person is unique, that is, no person in the database has two identity entries, which makes it convenient for users online to select and watch only the video content in which a particular person appears.
In the person-centered video analysis mode, when character analysis is performed on the video of a shot, the main character in the shot is first determined according to certain rules, and the character's facial features are then extracted and compared with the features in the character database to judge whether the character is already in the database. If not, the character's facial and body features are stored in the database and assigned an ID, and the character is then tracked and the times of appearance are recorded. If the character is already in the database, the stored features are dynamically updated according to feature quality: if the feature quality of the current clip is better than that of the stored features, the stored features are replaced with the current ones; otherwise the features in the database are kept. The character is then tracked again and the times of appearance are recorded.
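The quality-gated update just described can be sketched as follows; the dictionary database, string features, and scalar quality scores are illustrative assumptions, not the actual storage format:

```python
# Sketch of the quality-gated feature update: keep the stored feature
# unless the current clip yields a higher-quality one.
def update_database(db, person_id, feature, quality):
    """Store or update a person's feature, keyed by a quality score."""
    entry = db.get(person_id)
    if entry is None:
        # Person not yet in database: store the feature and assign the ID.
        db[person_id] = {"feature": feature, "quality": quality}
    elif quality > entry["quality"]:
        # Current clip's feature is better: replace the stored one.
        db[person_id] = {"feature": feature, "quality": quality}
    # Otherwise keep the existing, higher-quality feature.
    return db[person_id]

db = {}
update_database(db, "actor_1", "feat_v1", quality=0.6)
update_database(db, "actor_1", "feat_v2", quality=0.4)  # rejected: lower quality
update_database(db, "actor_1", "feat_v3", quality=0.9)  # accepted
print(db["actor_1"]["feature"])  # -> feat_v3
```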
In the person-centered video analysis mode, the tracking method combines face and body: face tracking is selected for close-up shots, and body tracking is selected for long shots, so the choice of tracking method can be diverse.
In the person-centered video analysis mode, character features are divided into two kinds: facial features and body features. Facial features are mainly used to distinguish characters in close-up shots, while body features are used in long shots and to recall a character when tracking fails. The facial features are mainly extracted by a CNN face model. It should be emphasized that the facial features used for storage and discrimination are screened by quality; the fused multi-frame video facial features carry more temporal-dimension information than the features of a single face image.
In the person-centered video analysis mode, when the analysis of the whole video is completed, the time information of each main character is merged, sorted by duration of appearance, and output as a list.
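The merge-and-sort step above can be sketched as follows; the interval representation and the sample data are illustrative assumptions:

```python
# Sketch: merge each main character's appearance intervals and rank the
# characters by total on-screen duration, longest first.
def merge_and_rank(appearances):
    """appearances: {person: [(start, end), ...]} -> list sorted by duration desc."""
    ranked = []
    for person, intervals in appearances.items():
        # Merge overlapping or touching intervals.
        merged = []
        for start, end in sorted(intervals):
            if merged and start <= merged[-1][1]:
                merged[-1] = (merged[-1][0], max(merged[-1][1], end))
            else:
                merged.append((start, end))
        duration = sum(end - start for start, end in merged)
        ranked.append((person, merged, duration))
    ranked.sort(key=lambda item: item[2], reverse=True)
    return ranked

appearances = {
    "A": [(0, 10), (8, 20), (30, 35)],  # overlapping -> (0, 20), (30, 35)
    "B": [(5, 12)],
}
for person, merged, duration in merge_and_rank(appearances):
    print(person, merged, duration)
```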
404. Obtain, according to the time points, one or more target video clips matching the person tag.
After the time points matching the person tag in each video clip are obtained, the one or more target video clips matching the person tag are obtained.
A video clip matching the person tag may be used as a target video clip in its entirety.
Alternatively, the time point of the start position matching the person tag and the time point of the end position matching the person tag in a video clip may be obtained, and the target video clip is then segmented out according to the time points of the start position and the end position. In this way, one or more target video clips can be obtained from one or more video clips.
405. Form one target video clip into a new target video, or combine multiple target video clips to form a new target video that can be played continuously.
Finally, one video clip is formed into a new target video, or multiple target video clips are combined to form a new target video, so that one video clip can be played or multiple video clips can be played in sequence. Instead of playing the original entire video in its original order, only the video clips matching the person tag are shown to the user, which meets the personalized needs of different users.
406. Display the information of the target person in the target video and the multiple time points corresponding to the information of the target person.
In some embodiments, the preset label of the video processing method may be one, two, or all three of a scene feature, a motion feature, and a person feature. The specific steps of the video processing method can be adjusted according to the preset label, and may be the steps in the above embodiments.
In some embodiments, the video processing method may mainly include two parts: an offline video analysis part and an online video playing part.
The offline video analysis part includes: first, according to different customization modes such as person, scene, and highlight action, defining the actor/star labels, scene labels, and highlight-action labels appearing in the video, and respectively training deep learning models for person recognition, scene recognition, and action recognition; then feeding the input video into the above models, analyzing the persons, scene labels, and actions appearing in the video, and recording the timestamps of their appearance.
The timestamps of each person, each scene, and each action label are then merged respectively. When the user selects a play mode combining one or more of these modes, the video automatically plays the video clips in which the selected person appears.
The online video playing part includes: receiving the video playing mode, being one or a combination of the above modes, that the user selects according to the video type and the user's own interest, and playing the corresponding video content.
The offline video analysis part can perform video analysis in different ways, but the video must first undergo shot segmentation; that is, the video is composed of shot units captured by multiple shots, and the background within a single shot is relatively simple or continuous. Analysis is then performed within each shot unit, and the analysis results under the multiple shots are finally merged.
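The segment-then-analyze-then-merge order can be sketched as a small pipeline; the shot-boundary rule, the analyzer stubs, and the data are all hypothetical stand-ins for the trained models:

```python
# Sketch of the offline pipeline order: shot segmentation first, per-shot
# analysis second, merging last. The analyzers are hypothetical stubs.
def analyze_video(frames, segment_shots, analyze_shot, merge_results):
    shots = segment_shots(frames)                # 1. split into shot units
    per_shot = [analyze_shot(s) for s in shots]  # 2. analyze each shot unit
    return merge_results(per_shot)               # 3. merge across shots

# Toy stand-in: a "shot boundary" is simply a None frame.
def segment_shots(frames):
    shots, current = [], []
    for f in frames:
        if f is None:
            if current:
                shots.append(current)
            current = []
        else:
            current.append(f)
    if current:
        shots.append(current)
    return shots

labels = analyze_video(
    ["a", "a", None, "b", "b", "b"],
    segment_shots,
    analyze_shot=lambda shot: {"label": shot[0], "length": len(shot)},
    merge_results=lambda results: results,
)
print(labels)  # -> [{'label': 'a', 'length': 2}, {'label': 'b', 'length': 3}]
```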
As can be seen from the above, in the video processing method of the embodiments of the present application, the video is first split according to shooting shots to obtain multiple video clips; the preset labels corresponding to the multiple video clips are then obtained; the time points matching the preset labels in the multiple video clips are then obtained; one or more target video clips matching the preset labels are then obtained according to the time points; finally, one target video clip is formed into a new target video, or multiple target video clips are combined to form a new target video that can be played continuously. One or more target video clips in the video can thus be obtained according to the preset labels, and one target video clip is then played, or multiple target video clips are played continuously. Instead of playing the original entire video in its original order, only the video clips matching the preset labels are shown to the user, which meets the personalized needs of different users.
Referring to Fig. 5, Fig. 5 is a schematic diagram of a video processing apparatus provided by an embodiment of the present application. The video processing apparatus 500 may include a video segmentation module 501, a preset label obtaining module 502, a time point obtaining module 503, a target video clip obtaining module 504, and a processing module 505.
The video segmentation module 501 is configured to split the video according to shooting shots to obtain multiple video clips.
The preset label obtaining module 502 is configured to obtain the preset labels corresponding to the multiple video clips.
The time point obtaining module 503 is configured to obtain the time points matching the preset labels in the multiple video clips.
The target video clip obtaining module 504 is configured to obtain, according to the time points, one or more target video clips matching the preset labels.
The processing module 505 is configured to form one target video clip into a new target video, or combine multiple target video clips to form a new target video that can be played continuously.
In some embodiments, the preset label obtaining module 502 is further configured to perform interval sampling on each video clip to obtain multiple sample frames; obtain the scene information in one of the sample frames; when the scene information is an ancient-costume scene, identify the multiple sample frames according to an ancient-costume scene recognition algorithm to obtain the scene label corresponding to each video clip; when the scene information is a modern scene, identify the multiple sample frames according to a modern scene recognition algorithm to obtain the scene label corresponding to each video clip; classify the multiple video clips according to the scene labels to obtain multiple video clip sets; and obtain a scene image from each video clip set as a view label of the corresponding video clip set.
In some embodiments, the preset label obtaining module 502 is configured to arrange the multiple video clips in time order; when two adjacent scene labels are identical, merge them into one scene label, where the merged scene label includes the time period information of the two corresponding video clips.
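The adjacent-label merge can be sketched as follows; the (label, time period) tuples are an illustrative representation:

```python
# Sketch: arrange clips in time order and merge adjacent clips that carry
# the same scene label, keeping both clips' time-period information.
def merge_adjacent_scene_tags(clips):
    """clips: list of (scene_label, (start, end)) already in time order."""
    merged = []
    for label, period in clips:
        if merged and merged[-1][0] == label:
            # Same label as the previous clip: extend the merged time period.
            _, (prev_start, _) = merged[-1]
            merged[-1] = (label, (prev_start, period[1]))
        else:
            merged.append((label, period))
    return merged

clips = [("ancient", (0, 10)), ("ancient", (10, 25)), ("modern", (25, 40))]
print(merge_adjacent_scene_tags(clips))
# -> [('ancient', (0, 25)), ('modern', (25, 40))]
```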
In some embodiments, the preset label obtaining module 502 is configured to extract the video optical flow information of each video clip, determine that a video clip whose video optical flow information is greater than a preset threshold is a target video clip, and mark it with an action label.
The time point obtaining module 503 is further configured to extract the image information and audio information of the target video clip; obtain, according to the image information and audio information, a first probability that each frame in the target video clip is the start of an action and a second probability that each frame is the end of an action; take the frame with the largest first probability as an action start frame and the frame with the largest second probability as an action end frame; and obtain an action video clip set according to the action start frame and the action end frame.
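The argmax selection of the action boundaries can be sketched as follows; the per-frame probabilities are assumed to come from the image/audio models described above, and the sample values are illustrative:

```python
# Sketch: given per-frame probabilities that a frame starts or ends an
# action, pick the argmax frames as the action boundaries.
def locate_action(start_probs, end_probs):
    """Return (start_frame, end_frame) indices for the action segment."""
    start_frame = max(range(len(start_probs)), key=start_probs.__getitem__)
    end_frame = max(range(len(end_probs)), key=end_probs.__getitem__)
    return start_frame, end_frame

start_probs = [0.1, 0.7, 0.2, 0.1, 0.0]
end_probs   = [0.0, 0.1, 0.2, 0.3, 0.8]
start_frame, end_frame = locate_action(start_probs, end_probs)
print(start_frame, end_frame)  # -> 1 4
```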
In some embodiments, the preset label obtaining module 502 is further configured to, when there are multiple action video clip sets, arrange the multiple action video clip sets and output the time information of each action video clip set.
In some embodiments, the preset label obtaining module 502 is configured to perform face recognition and/or body recognition on the persons in each video clip, determine a target person, and set a person tag according to the target person.
The time point obtaining module 503 is further configured to obtain the multiple time points at which the target person appears in the multiple video clips.
The processing module 505 is further configured to display the information of the target person in the target video and the multiple time points corresponding to the information of the target person.
In some embodiments, the preset label obtaining module 502 is further configured to determine the target person according to preset rules; obtain the facial features of the target person and compare the facial features with a preset database; if the target person is in the preset database, obtain a first quality value of the facial features in the preset database and a second quality value of the current facial features; if the second quality value is greater than the first quality value, replace the face image in the preset database with the current face image; and if the face of the target person is not in the preset database, store the facial features and/or body features of the target person in the preset database.
In some embodiments, the time point obtaining module 503 is further configured to obtain the lens parameters used by the current video clip; when the lens parameters indicate a close-up shot, obtain, according to the facial features of the target person, the multiple time points at which the target person appears in the multiple video clips; and when the lens parameters indicate a long shot, obtain, according to the body features of the target person, the multiple time points at which the target person appears in the multiple video clips.
As can be seen from the above, in the video processing apparatus of the embodiments of the present application, the video segmentation module 501 splits the video according to shooting shots to obtain multiple video clips; the preset label obtaining module 502 then obtains the preset labels corresponding to the multiple video clips; the time point obtaining module 503 then obtains the time points matching the preset labels in the multiple video clips; the target video clip obtaining module 504 then obtains, according to the time points, one or more target video clips matching the preset labels; finally, the processing module 505 forms one target video clip into a new target video, or combines multiple target video clips to form a new target video that can be played continuously. One or more target video clips in the video can thus be obtained according to the preset labels, and one target video clip is then played, or the multiple target video clips are played continuously. Instead of playing the original entire video in its original order, only the video clips matching the preset labels are shown to the user, which meets the personalized needs of different users.
An embodiment of the present application also provides a storage medium in which a computer program is stored. When the computer program runs on a computer, the computer executes the video processing method described in any of the above embodiments.
It should be noted that those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by instructing relevant hardware through a computer program, and the computer program can be stored in a computer-readable storage medium, which can include but is not limited to: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
The video processing method, apparatus, and storage medium provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the present application. Meanwhile, for those skilled in the art, there will be changes in the specific implementation and application scope according to the ideas of the present application. In conclusion, the contents of this specification should not be construed as limiting the present application.
Claims (10)
1. A video processing method, comprising:
splitting a video according to shooting shots to obtain multiple video clips;
obtaining preset labels corresponding to the multiple video clips;
obtaining time points matching the preset labels in the multiple video clips;
obtaining, according to the time points, one or more target video clips matching the preset labels;
forming one target video clip into a new target video, or combining multiple target video clips to form a new target video that can be played continuously.
2. The video processing method according to claim 1, wherein the step of obtaining the preset labels corresponding to the multiple video clips comprises:
performing interval sampling on each video clip to obtain multiple sample frames;
obtaining scene information in one of the multiple sample frames;
when the scene information is an ancient-costume scene, identifying the multiple sample frames according to an ancient-costume scene recognition algorithm to obtain the scene label corresponding to each video clip;
when the scene information is a modern scene, identifying the multiple sample frames according to a modern scene recognition algorithm to obtain the scene label corresponding to each video clip;
classifying the multiple video clips according to the scene labels to obtain multiple video clip sets, and obtaining a scene image from each video clip set as a view label of the corresponding video clip set.
3. The video processing method according to claim 2, wherein before the step of classifying the multiple video clips according to the scene labels, the method further comprises:
obtaining time period information of each video clip, the time period information corresponding to the playing time period of the video clip;
arranging the multiple video clips in time order;
when two adjacent scene labels are identical, merging them into one scene label, wherein the merged time period includes the time period information of the two video clips before the merging.
4. The video processing method according to claim 1, wherein the step of obtaining the preset labels in the video comprises:
extracting video optical flow information of each video clip;
determining that a video clip whose video optical flow information is greater than a preset threshold is a target video clip, and marking it with an action label;
the obtaining time points matching the preset labels in the multiple video clips comprises:
extracting image information and audio information of the target video clip;
obtaining, according to the image information and audio information, a first probability that each frame in the target video clip is the start of an action and a second probability that each frame is the end of an action;
taking the frame with the largest first probability as an action start frame, and taking the frame with the largest second probability as an action end frame;
obtaining an action video clip set and the time points of the action video clip set according to the action start frame and the action end frame.
5. The video processing method according to claim 4, wherein after the step of obtaining the action video clip set according to the action start frame and the action end frame, the method further comprises:
when there are multiple action video clip sets, arranging the multiple action video clip sets, and outputting the time information of each action video clip set.
6. The video processing method according to claim 1, wherein the step of obtaining the preset labels in the video comprises:
performing face recognition and/or body recognition on the persons in each video clip, determining a target person, and setting a person tag according to the target person;
the obtaining time points matching the preset labels in the multiple video clips comprises:
obtaining multiple time points at which the target person appears in the multiple video clips;
after the forming one target video clip into a new target video, or combining multiple target video clips to form a new target video that can be played continuously, the method further comprises:
displaying the information of the target person in the target video and the multiple time points corresponding to the information of the target person.
7. The video processing method according to claim 6, wherein the step of performing face recognition and/or body recognition on the persons in each video clip and determining a target person comprises:
determining the target person according to preset rules;
after the step of determining the target person, the method further comprises:
obtaining facial features of the target person, and comparing the facial features with a preset database;
if the target person is in the preset database, obtaining a first quality value of the facial features in the preset database and a second quality value of the current facial features;
if the second quality value is greater than the first quality value, replacing the face image in the preset database with the current face image;
if the face of the target person is not in the preset database, storing the facial features and/or body features of the target person in the preset database.
8. The video processing method according to claim 6, wherein the step of obtaining the multiple time points at which the target person appears in the multiple video clips comprises:
obtaining the lens parameters used by the current video clip;
when the lens parameters indicate a close-up shot, obtaining, according to the facial features of the target person, the multiple time points at which the target person appears in the multiple video clips;
when the lens parameters indicate a long shot, obtaining, according to the body features of the target person, the multiple time points at which the target person appears in the multiple video clips.
9. A video processing apparatus, comprising:
a video segmentation module, configured to split a video according to shooting shots to obtain multiple video clips;
a preset label obtaining module, configured to obtain preset labels corresponding to the multiple video clips;
a time point obtaining module, configured to obtain time points matching the preset labels in the multiple video clips;
a target video clip obtaining module, configured to obtain, according to the time points, one or more target video clips matching the preset labels;
a processing module, configured to form one target video clip into a new target video, or combine multiple target video clips to form a new target video that can be played continuously.
10. A storage medium in which a computer program is stored, wherein when the computer program runs on a computer, the computer is caused to execute the video processing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910193143.1A CN109922373B (en) | 2019-03-14 | 2019-03-14 | Video processing method, device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910193143.1A CN109922373B (en) | 2019-03-14 | 2019-03-14 | Video processing method, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109922373A true CN109922373A (en) | 2019-06-21 |
CN109922373B CN109922373B (en) | 2021-09-28 |
Family
ID=66964775
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910193143.1A Active CN109922373B (en) | 2019-03-14 | 2019-03-14 | Video processing method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109922373B (en) |
Cited By (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110225369A (en) * | 2019-07-16 | 2019-09-10 | 百度在线网络技术(北京)有限公司 | Video selection playback method, device, equipment and readable storage medium storing program for executing |
CN110337009A (en) * | 2019-07-01 | 2019-10-15 | 百度在线网络技术(北京)有限公司 | Control method, device, equipment and the storage medium of video playing |
CN110381391A (en) * | 2019-07-11 | 2019-10-25 | 北京字节跳动网络技术有限公司 | Video rapid section method, apparatus and electronic equipment |
CN110505143A (en) * | 2019-08-07 | 2019-11-26 | 上海掌门科技有限公司 | It is a kind of for sending the method and apparatus of target video |
CN110633648A (en) * | 2019-08-21 | 2019-12-31 | 重庆特斯联智慧科技股份有限公司 | Face recognition method and system in natural walking state |
CN110675433A (en) * | 2019-10-31 | 2020-01-10 | 北京达佳互联信息技术有限公司 | Video processing method and device, electronic equipment and storage medium |
CN110913271A (en) * | 2019-11-29 | 2020-03-24 | Oppo广东移动通信有限公司 | Video processing method, mobile terminal and non-volatile computer-readable storage medium |
CN110933462A (en) * | 2019-10-14 | 2020-03-27 | 咪咕文化科技有限公司 | Video processing method, system, electronic device and storage medium |
CN111144498A (en) * | 2019-12-26 | 2020-05-12 | 深圳集智数字科技有限公司 | Image identification method and device |
CN111209897A (en) * | 2020-03-09 | 2020-05-29 | 腾讯科技(深圳)有限公司 | Video processing method, device and storage medium |
CN111274960A (en) * | 2020-01-20 | 2020-06-12 | 央视国际网络有限公司 | Video processing method and device, storage medium and processor |
CN111444819A (en) * | 2020-03-24 | 2020-07-24 | 北京百度网讯科技有限公司 | Cutting frame determining method, network training method, device, equipment and storage medium |
CN111460219A (en) * | 2020-04-01 | 2020-07-28 | 百度在线网络技术(北京)有限公司 | Video processing method and device and short video platform |
CN111506771A (en) * | 2020-04-22 | 2020-08-07 | 上海极链网络科技有限公司 | Video retrieval method, device, equipment and storage medium |
CN111586494A (en) * | 2020-04-30 | 2020-08-25 | 杭州慧川智能科技有限公司 | Intelligent strip splitting method based on audio and video separation |
CN111711861A (en) * | 2020-05-15 | 2020-09-25 | 北京奇艺世纪科技有限公司 | Video processing method and device, electronic equipment and readable storage medium |
CN111711855A (en) * | 2020-05-27 | 2020-09-25 | 北京奇艺世纪科技有限公司 | Video generation method and device |
CN111918122A (en) * | 2020-07-28 | 2020-11-10 | 北京大米科技有限公司 | Video processing method and device, electronic equipment and readable storage medium |
CN111914682A (en) * | 2020-07-13 | 2020-11-10 | 完美世界控股集团有限公司 | Teaching video segmentation method, device and equipment containing presentation file |
CN112016427A (en) * | 2020-08-21 | 2020-12-01 | 广州欢网科技有限责任公司 | Video strip splitting method and device |
CN112069357A (en) * | 2020-07-29 | 2020-12-11 | 北京奇艺世纪科技有限公司 | Video resource processing method and device, electronic equipment and storage medium |
CN112153478A (en) * | 2020-09-11 | 2020-12-29 | 腾讯科技(深圳)有限公司 | Video processing method and video playing method |
WO2021042605A1 (en) * | 2019-09-06 | 2021-03-11 | Oppo广东移动通信有限公司 | Video processing method and device, terminal and computer readable storage medium |
CN112749299A (en) * | 2019-10-31 | 2021-05-04 | 北京国双科技有限公司 | Method and device for determining video type, electronic equipment and readable storage medium |
CN112839251A (en) * | 2019-11-22 | 2021-05-25 | Tcl科技集团股份有限公司 | Television and interaction method of television and user |
CN113012723A (en) * | 2021-03-05 | 2021-06-22 | 北京三快在线科技有限公司 | Multimedia file playing method and device and electronic equipment |
WO2021120685A1 (en) * | 2019-12-20 | 2021-06-24 | 苏宁云计算有限公司 | Video generation method and apparatus, and computer system |
CN113038163A (en) * | 2021-03-26 | 2021-06-25 | 百果园技术(新加坡)有限公司 | User experience model training method, short video user experience evaluation method and device |
CN113128261A (en) * | 2019-12-30 | 2021-07-16 | 阿里巴巴集团控股有限公司 | Data processing method and device and video processing method and device |
CN113163272A (en) * | 2020-01-07 | 2021-07-23 | 海信集团有限公司 | Video editing method, computer device and storage medium |
CN113691864A (en) * | 2021-07-13 | 2021-11-23 | 北京百度网讯科技有限公司 | Video clipping method, video clipping device, electronic equipment and readable storage medium |
CN113825012A (en) * | 2021-06-04 | 2021-12-21 | 腾讯科技(深圳)有限公司 | Video data processing method and computer device |
CN113891156A (en) * | 2021-11-11 | 2022-01-04 | 百度在线网络技术(北京)有限公司 | Video playing method, video playing device, electronic equipment, storage medium and program product |
CN113891157A (en) * | 2021-11-11 | 2022-01-04 | 百度在线网络技术(北京)有限公司 | Video playing method, video playing device, electronic equipment, storage medium and program product |
CN114022828A (en) * | 2022-01-05 | 2022-02-08 | 北京金茂教育科技有限公司 | Video stream processing method and device |
CN114125541A (en) * | 2021-11-11 | 2022-03-01 | 百度在线网络技术(北京)有限公司 | Video playing method, video playing device, electronic equipment, storage medium and program product |
CN114302253A (en) * | 2021-11-25 | 2022-04-08 | 北京达佳互联信息技术有限公司 | Media data processing method, device, equipment and storage medium |
CN114339391A (en) * | 2021-08-18 | 2022-04-12 | 腾讯科技(深圳)有限公司 | Video data processing method, video data processing device, computer equipment and storage medium |
CN114528923A (en) * | 2022-01-25 | 2022-05-24 | 山东浪潮科学研究院有限公司 | Video target detection method, device, equipment and medium based on time domain context |
CN114556963A (en) * | 2019-12-27 | 2022-05-27 | 多玩国株式会社 | Content generation device, content distribution server, content generation method, and content generation program |
CN115086783A (en) * | 2022-06-28 | 2022-09-20 | 北京奇艺世纪科技有限公司 | Video generation method and device and electronic equipment |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105550713A (en) * | 2015-12-21 | 2016-05-04 | 中国石油大学(华东) | Video event detection method of continuous learning |
CN105631422A (en) * | 2015-12-28 | 2016-06-01 | 北京酷云互动科技有限公司 | Video identification method and video identification system |
CN107273782A (en) * | 2016-04-08 | 2017-10-20 | 微软技术许可有限责任公司 | Detected using the online actions of recurrent neural network |
CN107766992A (en) * | 2017-11-09 | 2018-03-06 | 上海电力学院 | Family's daily load curve detailed predicting method based on user behavior |
CN107820138A (en) * | 2017-11-06 | 2018-03-20 | 广东欧珀移动通信有限公司 | Video broadcasting method, device, terminal and storage medium |
CN107958234A (en) * | 2017-12-26 | 2018-04-24 | 深圳云天励飞技术有限公司 | Client-based face identification method, device, client and storage medium |
CN108337532A (en) * | 2018-02-13 | 2018-07-27 | 腾讯科技(深圳)有限公司 | Perform mask method, video broadcasting method, the apparatus and system of segment |
CN108804578A (en) * | 2018-05-24 | 2018-11-13 | 南京理工大学 | The unsupervised video summarization method generated based on consistency segment |
CN108830208A (en) * | 2018-06-08 | 2018-11-16 | Oppo广东移动通信有限公司 | Method for processing video frequency and device, electronic equipment, computer readable storage medium |
CN109063611A (en) * | 2018-07-19 | 2018-12-21 | 北京影谱科技股份有限公司 | A kind of face recognition result treating method and apparatus based on video semanteme |
- 2019-03-14 CN CN201910193143.1A patent/CN109922373B/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105550713A (en) * | 2015-12-21 | 2016-05-04 | 中国石油大学(华东) | Video event detection method of continuous learning |
CN105631422A (en) * | 2015-12-28 | 2016-06-01 | 北京酷云互动科技有限公司 | Video identification method and video identification system |
CN107273782A (en) * | 2016-04-08 | 2017-10-20 | 微软技术许可有限责任公司 | Detected using the online actions of recurrent neural network |
CN107820138A (en) * | 2017-11-06 | 2018-03-20 | 广东欧珀移动通信有限公司 | Video broadcasting method, device, terminal and storage medium |
CN107766992A (en) * | 2017-11-09 | 2018-03-06 | 上海电力学院 | Family's daily load curve detailed predicting method based on user behavior |
CN107958234A (en) * | 2017-12-26 | 2018-04-24 | 深圳云天励飞技术有限公司 | Client-based face identification method, device, client and storage medium |
CN108337532A (en) * | 2018-02-13 | 2018-07-27 | 腾讯科技(深圳)有限公司 | Perform mask method, video broadcasting method, the apparatus and system of segment |
CN108804578A (en) * | 2018-05-24 | 2018-11-13 | 南京理工大学 | The unsupervised video summarization method generated based on consistency segment |
CN108830208A (en) * | 2018-06-08 | 2018-11-16 | Oppo广东移动通信有限公司 | Method for processing video frequency and device, electronic equipment, computer readable storage medium |
CN109063611A (en) * | 2018-07-19 | 2018-12-21 | 北京影谱科技股份有限公司 | A kind of face recognition result treating method and apparatus based on video semanteme |
Non-Patent Citations (1)
Title |
---|
易军凯,何潇然,姜大光: "图像内容理解的深度学习方法", 《计算机工程与设计》 * |
Cited By (56)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110337009A (en) * | 2019-07-01 | 2019-10-15 | 百度在线网络技术(北京)有限公司 | Control method, device, equipment and the storage medium of video playing |
CN110381391A (en) * | 2019-07-11 | 2019-10-25 | 北京字节跳动网络技术有限公司 | Video rapid section method, apparatus and electronic equipment |
CN110381391B (en) * | 2019-07-11 | 2021-11-09 | 北京字节跳动网络技术有限公司 | Video fast slicing method and device and electronic equipment |
CN110225369A (en) * | 2019-07-16 | 2019-09-10 | 百度在线网络技术(北京)有限公司 | Video selection playback method, device, equipment and readable storage medium storing program for executing |
CN110505143A (en) * | 2019-08-07 | 2019-11-26 | 上海掌门科技有限公司 | It is a kind of for sending the method and apparatus of target video |
CN110633648A (en) * | 2019-08-21 | 2019-12-31 | 重庆特斯联智慧科技股份有限公司 | Face recognition method and system in natural walking state |
WO2021042605A1 (en) * | 2019-09-06 | 2021-03-11 | Oppo广东移动通信有限公司 | Video processing method and device, terminal and computer readable storage medium |
CN110933462A (en) * | 2019-10-14 | 2020-03-27 | 咪咕文化科技有限公司 | Video processing method, system, electronic device and storage medium |
CN110933462B (en) * | 2019-10-14 | 2022-03-25 | 咪咕文化科技有限公司 | Video processing method, system, electronic device and storage medium |
CN112749299A (en) * | 2019-10-31 | 2021-05-04 | 北京国双科技有限公司 | Method and device for determining video type, electronic equipment and readable storage medium |
CN110675433A (en) * | 2019-10-31 | 2020-01-10 | 北京达佳互联信息技术有限公司 | Video processing method and device, electronic equipment and storage medium |
US11450027B2 (en) | 2019-10-31 | 2022-09-20 | Beijing Dajia Internet Information Technology Co., Ltd. | Method and electronic device for processing videos |
CN112839251A (en) * | 2019-11-22 | 2021-05-25 | Tcl科技集团股份有限公司 | Television and interaction method of television and user |
CN110913271B (en) * | 2019-11-29 | 2022-01-18 | Oppo广东移动通信有限公司 | Video processing method, mobile terminal and non-volatile computer-readable storage medium |
CN110913271A (en) * | 2019-11-29 | 2020-03-24 | Oppo广东移动通信有限公司 | Video processing method, mobile terminal and non-volatile computer-readable storage medium |
WO2021120685A1 (en) * | 2019-12-20 | 2021-06-24 | 苏宁云计算有限公司 | Video generation method and apparatus, and computer system |
CN111144498B (en) * | 2019-12-26 | 2023-09-01 | 深圳集智数字科技有限公司 | Image recognition method and device |
CN111144498A (en) * | 2019-12-26 | 2020-05-12 | 深圳集智数字科技有限公司 | Image identification method and device |
CN114556963A (en) * | 2019-12-27 | 2022-05-27 | 多玩国株式会社 | Content generation device, content distribution server, content generation method, and content generation program |
CN113128261A (en) * | 2019-12-30 | 2021-07-16 | 阿里巴巴集团控股有限公司 | Data processing method and device and video processing method and device |
CN113163272A (en) * | 2020-01-07 | 2021-07-23 | 海信集团有限公司 | Video editing method, computer device and storage medium |
CN111274960A (en) * | 2020-01-20 | 2020-06-12 | 央视国际网络有限公司 | Video processing method and device, storage medium and processor |
CN111209897B (en) * | 2020-03-09 | 2023-06-20 | 深圳市雅阅科技有限公司 | Video processing method, device and storage medium |
CN111209897A (en) * | 2020-03-09 | 2020-05-29 | 腾讯科技(深圳)有限公司 | Video processing method, device and storage medium |
CN111444819B (en) * | 2020-03-24 | 2024-01-23 | 北京百度网讯科技有限公司 | Cut frame determining method, network training method, device, equipment and storage medium |
CN111444819A (en) * | 2020-03-24 | 2020-07-24 | 北京百度网讯科技有限公司 | Cutting frame determining method, network training method, device, equipment and storage medium |
CN111460219B (en) * | 2020-04-01 | 2023-07-14 | 百度在线网络技术(北京)有限公司 | Video processing method and device and short video platform |
CN111460219A (en) * | 2020-04-01 | 2020-07-28 | 百度在线网络技术(北京)有限公司 | Video processing method and device and short video platform |
CN111506771A (en) * | 2020-04-22 | 2020-08-07 | 上海极链网络科技有限公司 | Video retrieval method, device, equipment and storage medium |
CN111586494A (en) * | 2020-04-30 | 2020-08-25 | 杭州慧川智能科技有限公司 | Intelligent strip splitting method based on audio and video separation |
CN111711861B (en) * | 2020-05-15 | 2022-04-12 | 北京奇艺世纪科技有限公司 | Video processing method and device, electronic equipment and readable storage medium |
CN111711861A (en) * | 2020-05-15 | 2020-09-25 | 北京奇艺世纪科技有限公司 | Video processing method and device, electronic equipment and readable storage medium |
CN111711855A (en) * | 2020-05-27 | 2020-09-25 | 北京奇艺世纪科技有限公司 | Video generation method and device |
CN111914682B (en) * | 2020-07-13 | 2024-01-05 | 完美世界控股集团有限公司 | Teaching video segmentation method, device and equipment containing presentation file |
CN111914682A (en) * | 2020-07-13 | 2020-11-10 | 完美世界控股集团有限公司 | Teaching video segmentation method, device and equipment containing presentation file |
CN111918122A (en) * | 2020-07-28 | 2020-11-10 | 北京大米科技有限公司 | Video processing method and device, electronic equipment and readable storage medium |
CN112069357B (en) * | 2020-07-29 | 2024-03-01 | 北京奇艺世纪科技有限公司 | Video resource processing method and device, electronic equipment and storage medium |
CN112069357A (en) * | 2020-07-29 | 2020-12-11 | 北京奇艺世纪科技有限公司 | Video resource processing method and device, electronic equipment and storage medium |
CN112016427A (en) * | 2020-08-21 | 2020-12-01 | 广州欢网科技有限责任公司 | Video strip splitting method and device |
CN112153478B (en) * | 2020-09-11 | 2022-03-08 | 腾讯科技(深圳)有限公司 | Video processing method and video playing method |
CN112153478A (en) * | 2020-09-11 | 2020-12-29 | 腾讯科技(深圳)有限公司 | Video processing method and video playing method |
CN113012723A (en) * | 2021-03-05 | 2021-06-22 | 北京三快在线科技有限公司 | Multimedia file playing method and device and electronic equipment |
CN113038163A (en) * | 2021-03-26 | 2021-06-25 | 百果园技术(新加坡)有限公司 | User experience model training method, short video user experience evaluation method and device |
CN113825012A (en) * | 2021-06-04 | 2021-12-21 | 腾讯科技(深圳)有限公司 | Video data processing method and computer device |
CN113691864A (en) * | 2021-07-13 | 2021-11-23 | 北京百度网讯科技有限公司 | Video clipping method, video clipping device, electronic equipment and readable storage medium |
CN114339391A (en) * | 2021-08-18 | 2022-04-12 | 腾讯科技(深圳)有限公司 | Video data processing method, video data processing device, computer equipment and storage medium |
CN113891157A (en) * | 2021-11-11 | 2022-01-04 | 百度在线网络技术(北京)有限公司 | Video playing method, video playing device, electronic equipment, storage medium and program product |
CN113891156A (en) * | 2021-11-11 | 2022-01-04 | 百度在线网络技术(北京)有限公司 | Video playing method, video playing device, electronic equipment, storage medium and program product |
CN114125541A (en) * | 2021-11-11 | 2022-03-01 | 百度在线网络技术(北京)有限公司 | Video playing method, video playing device, electronic equipment, storage medium and program product |
CN114302253A (en) * | 2021-11-25 | 2022-04-08 | 北京达佳互联信息技术有限公司 | Media data processing method, device, equipment and storage medium |
CN114302253B (en) * | 2021-11-25 | 2024-03-12 | 北京达佳互联信息技术有限公司 | Media data processing method, device, equipment and storage medium |
CN114022828A (en) * | 2022-01-05 | 2022-02-08 | 北京金茂教育科技有限公司 | Video stream processing method and device |
CN114528923A (en) * | 2022-01-25 | 2022-05-24 | 山东浪潮科学研究院有限公司 | Video target detection method, device, equipment and medium based on time domain context |
CN114528923B (en) * | 2022-01-25 | 2023-09-26 | 山东浪潮科学研究院有限公司 | Video target detection method, device, equipment and medium based on time domain context |
CN115086783A (en) * | 2022-06-28 | 2022-09-20 | 北京奇艺世纪科技有限公司 | Video generation method and device and electronic equipment |
CN115086783B (en) * | 2022-06-28 | 2023-10-27 | 北京奇艺世纪科技有限公司 | Video generation method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN109922373B (en) | 2021-09-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109922373A (en) | Method for processing video frequency, device and storage medium | |
Yao et al. | Highlight detection with pairwise deep ranking for first-person video summarization | |
US20100005485A1 (en) | Annotation of video footage and personalised video generation | |
CN108600865B (en) | Video summary generation method based on superpixel segmentation | |
CN109376603A (en) | Video recognition method, device, computer equipment and storage medium | |
CN103856689B (en) | Character dialogue subtitle extraction method oriented to news video | |
US20080138029A1 (en) | System and Method For Replay Generation For Broadcast Video | |
WO2007020897A1 (en) | Video scene classification device and video scene classification method | |
Zawbaa et al. | Event detection based approach for soccer video summarization using machine learning | |
CN112183334A (en) | Video depth relation analysis method based on multi-modal feature fusion | |
CN111291617A (en) | Badminton event video wonderful segment extraction method based on machine learning | |
CN113963399A (en) | Personnel trajectory retrieval method and device based on multi-algorithm fusion application | |
CN113240466A (en) | Mobile media video data processing method and device based on big data depth analysis and storage medium | |
Ren et al. | Football video segmentation based on video production strategy | |
Coldefy et al. | Unsupervised soccer video abstraction based on pitch, dominant color and camera motion analysis | |
Lee et al. | Highlight-video generation system for baseball games | |
Liu et al. | Effective feature extraction for play detection in american football video | |
TWI520077B (en) | Using face recognition to detect news anchor shots | |
CN111768729A (en) | VR scene automatic explanation method, system and storage medium | |
Lee et al. | Hierarchical model for long-length video summarization with adversarially enhanced audio/visual features | |
Tong et al. | Shot classification in sports video | |
Tong et al. | Shot classification in broadcast soccer video | |
Maram et al. | Images to signals, signals to highlights | |
CN110969133A (en) | Intelligent data acquisition method for table tennis game video | |
Xia et al. | Multimodal Video Saliency Analysis With User-Biased Information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||