CN108924576A - A kind of video labeling method, device, equipment and medium - Google Patents
- Publication number
- CN108924576A CN108924576A CN201810750765.5A CN201810750765A CN108924576A CN 108924576 A CN108924576 A CN 108924576A CN 201810750765 A CN201810750765 A CN 201810750765A CN 108924576 A CN108924576 A CN 108924576A
- Authority
- CN
- China
- Prior art keywords
- video
- target
- featured videos
- information
- target video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47217—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4884—Data services, e.g. news ticker for displaying subtitles
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Abstract
The present invention discloses a video labeling method, apparatus, device and medium. The method includes: determining a highlight segment from a target video by feature matching and/or by barrage (bullet-screen comment) information analysis, the target video being a video that is currently playing or has been played; obtaining playback time information of the highlight segment within the target video, and obtaining a target frame within the highlight segment; marking the highlight segment, according to the playback time information, at a target position on the playback progress bar of the target video, the target position corresponding to the playback time information; and displaying the target frame when an operation acting on the target position is received. The method, apparatus, device and medium provided by the present application solve the prior-art problems that watching historical game live videos wastes viewers' time and lowers the probability that viewers find highlight clips, thereby achieving the technical effect of saving viewing time.
Description
Technical field
The present invention relates to the field of computer technology, and more particularly to a video labeling method, apparatus, device and medium.
Background technique
At present, with the progress of network communication technology and the speed-up of broadband networks, network live streaming has seen ever wider development and application. So that users do not miss a streamer's best live moments, video websites often record the streamer's historical live videos and make them available for later viewing.

In game live streaming there are often some exciting game scenes, for example, a successful kill clip in a battle game, a successful collection clip in a collection game, or a successful match clip in a dating game. These highlight segments are usually the most exciting and most worth watching parts of the stream, yet to be sure of not missing them, a viewer generally has to watch the entire historical live video from the beginning. As a result, viewers waste considerable time watching video they are less interested in, and still easily miss the highlight moments.

It can be seen that, in the prior art, watching historical game live videos wastes viewers' time and lowers the probability that viewers find the highlight clips.
Summary of the invention
The present invention provides a video labeling method, apparatus, device and medium, so as to solve the prior-art problems that watching historical game live videos wastes viewers' time and lowers the probability that viewers find highlight clips.
In a first aspect, the present invention provides a video labeling method, including:

determining a highlight segment from a target video by feature matching and/or by barrage information analysis, the target video being a video that is currently playing or has been played;

obtaining playback time information of the highlight segment within the target video, and obtaining a target frame within the highlight segment;

marking the highlight segment, according to the playback time information, at a target position corresponding to the playback time information on the playback progress bar of the target video; and

displaying the target frame when an operation acting on the target position is received.
Optionally, determining a highlight segment from the target video by feature matching and/or by barrage information analysis includes: setting feature information according to the video category of the target video; performing feature matching on the target video to determine a target frame in the target video that matches the feature information; and determining N highlight segments in the target video according to the target frame and a preset highlight interception rule, where each highlight segment includes the target frame and the highlight interception rule corresponds to the feature information. Alternatively: obtaining the target video and barrage information, the barrage information including barrage quantity information of the target video during historical playback; and determining, according to the barrage information, N highlight segments in the target video whose barrage activity meets a preset requirement.
Optionally, marking the highlight segment at the target position corresponding to the playback time information on the playback progress bar of the target video includes: marking the highlight segment, in the manner of a vertical tick mark, at the target position corresponding to the playback time information on the playback progress bar of the target video.
Optionally, displaying the target frame includes: displaying the target frame as a picture, or displaying a video that contains the target frame.
Optionally, determining the highlight segment from the target video by feature matching and/or by barrage information analysis is implemented at the GCR-Work layer; and marking the highlight segment, according to the playback time information, at the target position corresponding to the playback time information on the playback progress bar of the target video is implemented at the Media-Worker layer.
In a second aspect, a video labeling apparatus is provided, including:

a determination unit, configured to determine a highlight segment from a target video by feature matching and/or by barrage information analysis, the target video being a video that is currently playing or has been played;

an acquisition unit, configured to obtain playback time information of the highlight segment within the target video and to obtain a target frame within the highlight segment;

a marking unit, configured to mark the highlight segment, according to the playback time information, at a target position corresponding to the playback time information on the playback progress bar of the target video; and

a display unit, configured to display the target frame when an operation acting on the target position is received.
Optionally, the determination unit is further configured to: set feature information according to the video category of the target video; perform feature matching on the target video to determine a target frame in the target video that matches the feature information; and determine N highlight segments in the target video according to the target frame and a preset highlight interception rule, where each highlight segment includes the target frame and the highlight interception rule corresponds to the feature information. Alternatively: obtain the target video and barrage information, the barrage information including barrage quantity information of the target video during historical playback; and determine, according to the barrage information, N highlight segments in the target video whose barrage activity meets a preset requirement.
Optionally, the marking unit is further configured to mark the highlight segment, in the manner of a vertical tick mark, at the target position corresponding to the playback time information on the playback progress bar of the target video.
In a third aspect, an electronic device is provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the program, implements the method described in the first aspect.

In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, where the program, when executed by a processor, implements the method described in the first aspect.
The one or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages:

The method, apparatus, device and medium provided by the embodiments of the present application determine a highlight segment from a target video by feature matching and/or by barrage information analysis, the target video being a video that is currently playing or has been played; obtain playback time information of the highlight segment within the target video, along with a target frame within the highlight segment; mark the highlight segment, according to the playback time information, at a target position corresponding to the playback time information on the playback progress bar of the target video; and display the target frame when an operation acting on the target position is received. Viewers therefore need not watch the entire target video: they only need to drag the playback progress bar to a position marked as a highlight segment to see the clips they most want not to miss. This effectively saves viewers' time and lets them obtain all highlight clips within a short period.
The above is merely an overview of the technical solution of the present invention. To make the technical means of the present invention easier to understand, so that it can be implemented in accordance with this specification, and to make the above and other objects, features and advantages of the present invention clearer and more comprehensible, specific embodiments of the present invention are set forth below.
Brief description of the drawings

To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the video labeling method in an embodiment of the present invention;

Fig. 2 is a schematic diagram of extraction without the precise-timestamp extraction mode in an embodiment of the present invention;

Fig. 3 is a schematic structural diagram of the video labeling apparatus in an embodiment of the present invention;

Fig. 4 is a schematic structural diagram of the electronic device in an embodiment of the present invention;

Fig. 5 is a schematic structural diagram of the storage medium in an embodiment of the present invention.
Specific embodiment
The embodiments of the present application provide a video labeling method, apparatus, device and medium to solve the prior-art problems that watching historical game live videos wastes viewers' time and lowers the probability that viewers find highlight clips, thereby achieving the technical effect of saving viewers' time and enabling viewers to obtain all highlight clips within a short period.
The general idea of the technical solution in the embodiments of the present application is as follows:

A highlight segment is determined from a target video by feature matching and/or by barrage information analysis, the target video being a video that is currently playing or has been played; playback time information of the highlight segment within the target video is obtained, along with a target frame within the highlight segment; according to the playback time information, the highlight segment is marked at a target position corresponding to the playback time information on the playback progress bar of the target video; and when an operation acting on the target position is received, the target frame is displayed. Viewers therefore need not watch the entire target video: they only need to drag the playback progress bar to a position marked as a highlight segment to see the clips they most want not to miss, which effectively saves viewers' time and lets them obtain all highlight clips within a short period.
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiment one
This embodiment provides a video labeling method, as shown in Fig. 1, including:

Step S101: determining a highlight segment from a target video by feature matching and/or by barrage information analysis, the target video being a video that is currently playing or has been played;

Step S102: obtaining playback time information of the highlight segment within the target video, and obtaining a target frame within the highlight segment;

Step S103: marking the highlight segment, according to the playback time information, at a target position corresponding to the playback time information on the playback progress bar of the target video;

Step S104: displaying the target frame when an operation acting on the target position is received.
In the embodiments of the present application, the method may be applied to a server, and may also be applied to a viewer client or a streamer client; this is not restricted here. The implementing device may be an electronic device such as a smartphone, desktop computer, notebook or tablet computer, which is likewise not restricted.
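Steps S101 to S104 can be sketched end to end, purely for illustration. This is not the patented implementation: the per-second score list, the threshold, and the function names `find_highlights` and `mark_progress_bar` are all hypothetical stand-ins.

```python
# Illustrative sketch of steps S101-S104 (all names are hypothetical).

def find_highlights(frame_scores, threshold):
    """S101/S102: pick the times whose feature/barrage score reaches a threshold."""
    return [t for t, score in enumerate(frame_scores) if score >= threshold]

def mark_progress_bar(highlight_times, video_duration):
    """S103: map each highlight time to a fractional position on the progress bar."""
    return [t / video_duration for t in highlight_times]

frame_scores = [0, 1, 9, 2, 0, 8, 1]               # one score per second of video
times = find_highlights(frame_scores, threshold=5)  # -> target times
marks = mark_progress_bar(times, video_duration=7)  # -> positions in [0, 1]
# S104 would display the target frame when a mark is activated.
print(times, marks)
```

In the real method the scores would come from feature matching or barrage analysis, and the marks would be drawn on the player's progress bar rather than printed.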
The specific implementation steps of the method provided in this embodiment are described in detail below with reference to Fig. 1.

First, step S101 is executed: a highlight segment is determined from a target video by feature matching and/or by barrage information analysis, the target video being a video that is currently playing or has been played.
When feature matching is used to determine the highlight segment from the target video, the specific implementation is: setting feature information according to the video category of the target video; performing feature matching on the target video to determine a target frame in the target video that matches the feature information; and determining N highlight segments in the target video according to the target frame and a preset highlight interception rule, where each highlight segment includes the target frame and the highlight interception rule corresponds to the feature information.
Specifically, first, the feature information is set according to the video category of the target video.

It should be noted that the target video may be a video uploaded by the streamer client, a video stored by the server during a previous live broadcast, or the live video currently being broadcast. If the target video is the live video currently being broadcast, the method provided in this embodiment performs real-time target-frame matching and highlight-segment extraction on the received live video stream during the live broadcast.
In a specific implementation, different video categories of the target video correspond to different feature information; the feature information may be voice feature information or image feature information, which is not restricted here. The two cases are illustrated separately below.

First case: the feature information is image feature information.

That is, according to the video category of the target video, the feature information corresponding to that category is determined from a preset feature information library; the feature information is information extracted from highlight images, which are images appearing in videos of that category. In other words, similar highlight clips in a target video often share certain characteristic image frames by default, and the feature information may be a common image feature extracted from those frames.
For example, when the target video is a game video that includes kill scenes, the feature information is set to information extracted from the game's kill-success picture. Specifically, after a successful kill, an image prompting the success is usually displayed on the video, such as a "KO" caption, a "count plus 1" caption, or a blood-splatter pattern; these image features can then be used as the feature information.

When the target video is a game video that includes collection scenes, the feature information is set to information extracted from the collection-success picture. Specifically, after a successful collection, an image prompting the success is usually displayed on the video, such as a "plus 1" caption or a picture of the collected item; these image features can then be used as the feature information.
Second case: the feature information is voice feature information.

That is, according to the video category of the target video, the feature information corresponding to that category is determined from a preset feature information library; the feature information is information extracted from the video's audio file. In other words, similar highlight clips in a target video often share certain voice information by default, and the feature information may be a common voice feature extracted from that voice information.
For example, when the target video is a game video that includes kill scenes, the feature information is set to information extracted from the game's kill-success voice. Specifically, after a successful kill, a voice prompting the success is usually played with the video, such as a "KO" sound, a "kill succeeded" announcement, or a scream; these voice features can then be used as the feature information.

When the target video is a lottery-draw video, the feature information is set to voice information extracted from the draw-announcement audio. Specifically, when winners are announced, a characteristic voice is usually played with the video, such as specific music or an "announcing now" voice; these voice features can then be used as the feature information.
Of course, in a specific implementation, the feature information is not limited to the above two kinds; it may also be time information, among others, which is not restricted here and will not be enumerated exhaustively.

In a specific implementation, one or more kinds of feature information can be set for a target video according to the video type and content, so that highlight segments with one or more kinds of content can subsequently be extracted.
Next, feature matching is performed on the target video to determine the target frame in the target video that matches the feature information.

In a specific implementation, different feature information calls for a different matching process.

If the feature information is image feature information, the feature information is image-matched against every frame of the target video, or against frames sampled at intervals; when a frame contains an image corresponding to the feature information, that frame is determined to be a target frame. For example, assuming the feature information is a blood-splatter pattern, a frame that matches the blood-splatter pattern is taken as a target frame.
If the feature information is voice feature information, the feature information is audio-matched against the audio file of the target video; when a piece of audio matches the feature information, the frame corresponding to that audio, namely the frame whose timestamp is consistent with the timestamp of that piece of audio, is a target frame. For example, assuming the feature information is a "kill succeeded" sound, when an audio passage containing that sound is matched, the frame with the same timestamp as that audio passage is taken as the target frame.

Of course, the methods of performing feature matching are not limited to the above two; they are not restricted here and will not be enumerated exhaustively.
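As an illustrative aside, and not the patent's actual matcher, the image-matching case above can be sketched as a naive exact sub-grid search: each frame is treated as a 2-D grid of pixel values and the feature information as a small sub-grid (a stand-in for, e.g., the blood-splatter pattern). The names `contains_pattern` and `find_target_frames` are hypothetical; a real system would use robust template matching rather than exact equality.

```python
def contains_pattern(frame, pattern):
    """Return True if `pattern` occurs as an exact sub-grid of `frame`."""
    fh, fw = len(frame), len(frame[0])
    ph, pw = len(pattern), len(pattern[0])
    for y in range(fh - ph + 1):
        for x in range(fw - pw + 1):
            if all(frame[y + dy][x + dx] == pattern[dy][dx]
                   for dy in range(ph) for dx in range(pw)):
                return True
    return False

def find_target_frames(frames, pattern, step=1):
    """Match every `step`-th frame against the feature pattern; the matched
    frame indices are the target frames of step S101."""
    return [i for i in range(0, len(frames), step)
            if contains_pattern(frames[i], pattern)]

# 3 tiny frames; the 2x2 block of 1s is the hypothetical feature pattern.
frames = [
    [[0, 0, 0], [0, 0, 0]],
    [[1, 1, 0], [1, 1, 0]],
    [[0, 1, 1], [0, 1, 1]],
]
pattern = [[1, 1], [1, 1]]
print(find_target_frames(frames, pattern))
```

The `step` parameter corresponds to matching interval frames instead of every frame, as the text allows.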
Next, according to the target frame and the preset highlight interception rule, the highlight segment is determined in the target video; the highlight segment includes the target frame, and the highlight interception rule corresponds to the feature information.

In the embodiments of the present application, the preset highlight interception rule determines how long the playback runs from the start frame of the highlight segment to the target frame, and from the target frame to the end frame of the highlight segment, where, within the target video, the playback position of the start frame is located before or at the target frame, and the playback position of the end frame is located after or at the target frame.
Specifically, the statement that the highlight interception rule corresponds to the feature information means that different feature information has its own corresponding interception rule. For example:

Assume the feature information, in a game video that includes kill scenes, characterizes a successful kill. Considering that the exciting aiming and killing mostly occur in roughly the minute before the kill succeeds, the interception rule corresponding to this feature information can be set as: the video from 60 s before the target frame up to the target frame is the highlight segment.

Assume the feature information, in a lottery-draw video, characterizes the start of the draw announcement. Considering that the announcement generally lasts 180 s, the interception rule corresponding to this feature information can be set as: the video from the target frame to 180 s after it is the highlight segment.
Of course, besides determining the interception rule as above, by using the feature information type to fix the segment duration and its temporal relationship to the target frame, there are other ways to determine the interception rule. For example, multiple pieces of feature information can be set, and the video between the target frames corresponding to two of them taken as the highlight segment. For instance, in a lottery-draw video, assume feature information A characterizing the start of the announcement and feature information B characterizing its end are set; A matches target frame A and B matches target frame B, and the corresponding interception rule can be set as: the video between target frame A and target frame B is the highlight segment.
When barrage information analysis is used to determine the highlight segment from the target video, the specific implementation is: obtaining the target video and barrage information, the barrage information including barrage quantity information of the target video during historical playback; and determining, according to the barrage information, N highlight segments in the target video whose barrage activity meets a preset requirement.

Specifically, first, the target video and the barrage information are obtained, the barrage information including barrage quantity information of the target video during historical playback.
It should be noted that the target video may be a video uploaded by the streamer client, a video stored by the server during a previous live broadcast, or the live video currently being broadcast. If the target video is the live video currently being broadcast, the method provided in this embodiment performs real-time barrage information acquisition, judgment and highlight-segment extraction on the received live video stream during the live broadcast.

In a specific implementation, the barrage information may include, for each frame of the target video during the live broadcast, the barrage count, the barrage content, the number of barrage senders, the number of characters sent, and so on.
Then, according to the barrage information, the highlight segments in the target video whose barrage activity meets the preset requirement are determined.

In the embodiments of the present application, a highlight segment is determined by first determining, according to the barrage information, the target frames in the target video that meet the preset requirement, and then determining the highlight segment in the target video according to the target frames and the preset highlight interception rule, the highlight segment including the target frames.

The preset requirement may be that the number of barrages displayed while the target frame plays is greater than a preset value, or that the barrage growth rate is greater than a preset value; this is not restricted here.
In the embodiments of the present application, there are many ways to determine a highlight segment according to the barrage information; three are listed below.

First way: barrage count greater than a preset value.

That is, according to the barrage information, the highlight segments in the target video whose barrage count is greater than a preset quantity are determined. Specifically, the frames during which the displayed barrage count exceeds the preset value can first be identified; all of these frames are extracted and arranged in chronological order as the highlight segment.

Determining highlight segments by barrage counts above a preset value effectively identifies the segments with high viewer participation.
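As a sketch of this first way (illustrative only; `barrage_segments` and the per-second count list are hypothetical), consecutive seconds whose barrage count exceeds the preset value can be grouped into chronologically ordered segments:

```python
def barrage_segments(counts, threshold):
    """Group consecutive seconds whose barrage count exceeds `threshold`
    into (start, end) highlight segments, in chronological order."""
    segments, start = [], None
    for t, n in enumerate(counts):
        if n > threshold and start is None:
            start = t                        # a busy stretch begins
        elif n <= threshold and start is not None:
            segments.append((start, t - 1))  # the busy stretch just ended
            start = None
    if start is not None:                    # stretch runs to the video's end
        segments.append((start, len(counts) - 1))
    return segments

# Per-second barrage counts; two busy stretches exceed the threshold of 5.
print(barrage_segments([1, 9, 9, 2, 8, 8, 8], 5))
```

Grouping adjacent frames, rather than keeping isolated frames, is what makes the result a playable segment rather than a scatter of single frames.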
Second: the frame with the maximum barrage quantity.
That is, according to the barrage information, the target frame with the maximum barrage quantity in the target video is determined, and the featured video segment is then determined from that target frame, the featured video segment including the target frame.
Specifically, to avoid the discontinuity caused by extracting only isolated frames, the frame with the maximum barrage quantity (or a quantity greater than some value) may be determined first, and the video within a period before and after that target frame may be taken as the featured video segment. For example, the target frame together with the 30 s before and after it may be taken as the featured video segment.
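The second manner can be sketched as follows, under the assumption of a fixed frame rate; the window length and the counts are hypothetical:

```python
def featured_segment_around_peak(barrage_counts, fps, window_seconds=30):
    """Locate the frame with the maximum barrage quantity and return the
    (start_frame, end_frame) pair spanning window_seconds before and after
    it, clamped to the video bounds."""
    peak = max(range(len(barrage_counts)), key=lambda i: barrage_counts[i])
    half = window_seconds * fps
    return max(0, peak - half), min(len(barrage_counts) - 1, peak + half)

# Toy example at 1 frame per second with a clear barrage peak at frame 3.
counts = [1, 3, 2, 40, 5, 2, 1]
start, end = featured_segment_around_peak(counts, fps=1, window_seconds=2)
```

The clamping keeps the segment inside the video even when the peak sits near the beginning or end.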
Third: the barrage-quantity growth rate.
That is, according to the barrage information, the featured video segments in the target video whose barrage quantity grows faster than a preset speed are determined.
Specifically, the barrage growth rate of each frame may be computed from the barrage quantities of the frame and its neighboring frames; the frames whose growth rate exceeds the preset speed are taken as target frames, and these frames are all extracted and arranged in chronological order as a featured video segment. For example, the growth rate of a frame may be defined as the ratio of the barrage quantity displayed in the frame after it to the barrage quantity displayed in the frame itself; or as the ratio of the total barrages displayed in the 5 seconds after the frame to the total displayed in the 5 seconds before it; this is not limited herein.
By determining featured video segments from the barrage growth rate, the key video segments that prompted users to send a burst of barrages can be effectively identified.
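The second example ratio above (5-second window after vs. before a given time) can be sketched as follows; the per-second counts are hypothetical:

```python
def barrage_growth_rate(per_second_counts, t, window=5):
    """Ratio of barrages shown in the `window` seconds after second t to
    those shown in the `window` seconds before it."""
    after = sum(per_second_counts[t:t + window])
    before = sum(per_second_counts[max(0, t - window):t])
    # A quiet run-up followed by any barrages yields an "infinite" growth rate.
    return after / before if before else float("inf")

# Five quiet seconds followed by five busy ones: growth rate 25 / 5 = 5.0.
counts = [1, 1, 1, 1, 1, 5, 5, 5, 5, 5]
rate = barrage_growth_rate(counts, t=5, window=5)
```

A frame would then be taken as a target frame whenever this rate exceeds the preset speed.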
Of course, the ways of determining a featured video segment are not limited to the above three; a segment may also be determined from the total barrage word count, for example. This is not limited herein and will not be enumerated exhaustively.
Then, step S102 is executed: play time information of the featured video segment in the target video is obtained, and the target frame in the featured video segment is obtained.
The play time information may be: the start timestamp and end timestamp of the featured video segment; or the start timestamp and playing duration of the segment; or the playing duration and end timestamp of the segment.
The target frame may be a random frame in the featured video segment, or a frame in the segment whose characteristic information or barrage information meets a preset condition; this is not limited herein.
In a specific implementation, when the featured video segment is determined, its start timestamp and end timestamp can be determined.
Then, step S103 is executed: according to the play time information, the featured video segment is marked at a target position, corresponding to the play time information, in the playing progress bar of the target video.
Specifically, a special marking manner may be used to mark the featured video segment at the target position corresponding to the play time information in the playing progress bar: the target position may be given a color change or a change of progress-bar width, or a marking line may be added.
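Mapping the segment's play time information onto the progress bar is a simple proportional calculation; the following sketch assumes a pixel-width bar and hypothetical timestamps:

```python
def marker_span(start_ts, end_ts, video_duration, bar_width_px):
    """Map a featured segment's start/end timestamps (seconds) to the
    pixel range it occupies on a progress bar of the given width."""
    left = round(start_ts / video_duration * bar_width_px)
    right = round(end_ts / video_duration * bar_width_px)
    return left, right

# A 3600 s video, a segment from 900 s to 1080 s, a 600 px progress bar.
span = marker_span(900, 1080, 3600, 600)
```

The returned pixel range is where the color change, width change, or marking line would be drawn.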
Subsequently, step S104 is executed: when an operation acting on the target position is received, the target frame is displayed.
Specifically, the image of the target frame may be displayed, or playback of the featured video segment may be triggered, or a preset introduction picture describing the featured video segment may be displayed.
In a specific implementation, the target frame may be displayed in a separately opened window, directly in the playing window of the target video, or superimposed on the playing window of the target video; this is not limited herein. The superimposed display may use a picture-in-picture manner or a semi-transparent overlay, which is also not limited herein.
Further, in this embodiment, considering that characteristic-information matching and featured-segment extraction consume resources, and to avoid tasks interfering with each other and contending for resources, the determination of featured video segments from the target video by characteristic matching may be implemented at the GCR-Work layer; the extraction of the N featured video segments from the target video may be implemented at the Media-Worker layer; and the video splicing, together with the marking of the featured video segment at the target position corresponding to the play time information in the playing progress bar of the target video, may be implemented at the Media-Worker layer.
In this embodiment, after the featured video segment is determined, it may also be extracted from the target video, as follows:
When the featured video segment is determined, its start timestamp and end timestamp can be determined, and the featured video segment between the start timestamp and the end timestamp is extracted from the target video.
Considering that extracting a featured video segment consumes considerable computing and processing resources, this embodiment further provides a low-resource-consumption extraction method, described in detail as follows:
Referring to FIG. 2: since the target video is a live video (live or historical), it is transmitted as interleaved video units and audio units, each with its own timestamp information. This embodiment therefore does not decode the target video segment; instead, it directly searches the undecoded target video for the video units whose timestamp information is closest to that of the target frame, and determines and extracts the featured video segment from those closest video units. For example, as shown in FIG. 2, suppose the timestamps of video unit 3 and video unit 4 are closest to the timestamp information of the determined featured video segment; the stream is then demultiplexed to extract video unit 3 and video unit 4 together with the audio units whose timestamp information corresponds to them, and the video and audio units are remultiplexed, forming the complete extracted featured video segment.
With this extraction method, since the entire video does not need to be decoded, considerable computing and processing resources are saved and processing speed is improved.
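The timestamp-nearest selection step can be sketched as follows. The actual demultiplexing and remultiplexing are container-specific (typically done with a tool such as FFmpeg) and are not shown; the unit list here is a hypothetical stand-in for the stream's video units:

```python
def units_covering(units, start_ts, end_ts):
    """From a list of (timestamp, unit_id) pairs, pick the units whose
    timestamps are closest to the segment's start and end, plus everything
    between them -- without decoding any frame data."""
    nearest_start = min(units, key=lambda u: abs(u[0] - start_ts))
    nearest_end = min(units, key=lambda u: abs(u[0] - end_ts))
    lo, hi = sorted((nearest_start[0], nearest_end[0]))
    return [u for u in units if lo <= u[0] <= hi]

# Hypothetical video units at 2-second intervals; segment roughly 3.7 s - 6.2 s.
video_units = [(0.0, "v1"), (2.0, "v2"), (4.0, "v3"), (6.0, "v4"), (8.0, "v5")]
picked = units_covering(video_units, start_ts=3.7, end_ts=6.2)
```

The corresponding audio units would be selected by the same timestamp comparison before remultiplexing.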
Further, considering that some featured video segments have strict timing requirements, it may also be arranged that, before the featured-video extraction, the attribute information of the target video is first obtained, and whether the target video needs the timestamp-precise extraction mode is judged according to the attribute information. If it does, the target video is decoded, and the featured video segment is extracted from the decoded target video according to the featured-video interception rule and the timestamp information of the decoded target video. If it does not, the video units whose timestamp information is closest to that of the target frame are searched for in the undecoded target video, where the target video includes N video units, N being a positive integer greater than 1, and the featured video segment is determined and extracted from the closest video units.
That is, based on the featured video segments corresponding to each category of characteristic information, a member of staff sets, in advance, in the attribute information of the target video, extraction information indicating whether the timestamp-precise extraction mode is needed. For example, if precise timestamp extraction is needed, the number after the Ti flag in the attribute information is set to 1; if not, it is set to 0. Before a subsequent extraction, whether the target video needs the timestamp-precise mode is first judged from the preset extraction information in the attribute information: if so, the target video is first decoded and a precise extraction by per-frame timestamp is performed; if not, the target video is not decoded, and the low-resource extraction by video-unit timestamp is performed directly.
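The decision logic above can be sketched as follows; modelling the attribute information as a dict with a `"Ti"` key, and the two mode names, are assumptions for illustration only:

```python
def choose_extraction_mode(attribute_info):
    """Read the extraction flag preset in the video's attribute information
    (hypothetically a dict whose 'Ti' entry is 1 for timestamp-precise
    extraction, 0 otherwise) and pick the extraction path."""
    if attribute_info.get("Ti") == 1:
        return "decode_then_extract_per_frame"
    return "extract_by_unit_timestamp_without_decoding"

mode = choose_extraction_mode({"Ti": 0})
```

This keeps the expensive decode-then-extract path reserved for the videos whose segments genuinely need frame-accurate boundaries.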
In this embodiment, if N featured video segments are extracted, N being greater than 1, the N segments may also be spliced into one video, forming a spliced video, for the convenience of viewers.
In a specific implementation, the multiple featured video segments may be spliced in many ways; three are given below as examples:
The N featured video segments may be spliced into one video with a prompt video inserted before each featured video segment, the prompt video describing the segment about to be played, thereby forming the spliced video. That is, a pre-prepared prompt video is inserted before each featured video segment; the prompt video may include the play time information of the next segment in the original target video, a description of the content of the next segment, the video content type of the next segment, and the like.
The N featured video segments may also be spliced into one video with an interval video inserted between every two segments, the interval video indicating that the previous featured segment has finished playing and the next is about to play, thereby forming the spliced video. That is, a pre-prepared interval video is inserted before each featured video segment; the interval video may be a blank video, a default credits video, a self-introduction video of the streamer, or the like.
The N featured video segments may also be spliced into one video with play prompt information superimposed on the opening video of each segment, the prompt information describing the segment being played, thereby forming the spliced video. That is, to avoid adding extra playback time, preset prompt information is composited into the first frame or frames of each featured video segment; the prompt information may be a prompt picture or a prompt voice, which is not limited herein. If it is a prompt picture, it may be shown picture-in-picture or as a semi-transparent overlay; this is not limited herein.
Of course, in a specific implementation, the ways of splicing the video are not limited to the above three; the multiple featured video segments may also be seamlessly spliced in timestamp order to reduce playback and processing time. This is not limited herein and will not be enumerated exhaustively.
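The first splicing manner can be sketched as assembling a playlist with a prompt clip before each segment; the segment records and the prompt text are hypothetical:

```python
def splice_with_prompts(segments, make_prompt):
    """Arrange N featured segments into one ordered playlist, inserting a
    prompt clip before each segment."""
    playlist = []
    for seg in segments:
        playlist.append(make_prompt(seg))  # describes the segment about to play
        playlist.append(seg["name"])
    return playlist

segments = [{"name": "seg1", "start": 120}, {"name": "seg2", "start": 900}]
playlist = splice_with_prompts(
    segments, lambda s: f"prompt: next clip starts at {s['start']}s in the original video"
)
```

Swapping `make_prompt` for an interval-clip factory yields the second splicing manner with the same structure.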
After the splicing is completed, when a request for the spliced video sent by a client is received, the spliced video is sent to the client for playback.
In this embodiment, after the spliced video is formed, a link for opening the spliced video may also be placed on the page of the streamer's room corresponding to the target video on the live-streaming website, so that viewers can directly trigger the link and choose to play the spliced video.
In summary, the method provided in this embodiment determines a featured video segment from the target video using characteristic matching and/or barrage-information analysis, the target video being a video that is currently playing or has been played; obtains the play time information of the featured video segment in the target video and the target frame in the segment; marks, according to the play time information, the featured video segment at the target position corresponding to the play time information in the playing progress bar of the target video; and displays the target frame when an operation acting on the target position is received. Viewers therefore need not watch the entire target video; they only need to drag the progress bar to the positions marked with featured video segments to see the most exciting clips they would not want to miss. This effectively saves viewers' time and lets them reach all the featured video clips within a short time.
Based on the same inventive concept, an embodiment of the present invention further provides an apparatus corresponding to the video labeling method of embodiment one; see embodiment two.
Embodiment two
This embodiment provides a video labeling apparatus. As shown in FIG. 3, the apparatus includes:
a determination unit 301, configured to determine a featured video segment from a target video using characteristic matching and/or barrage-information analysis, the target video being a video that is currently playing or has been played;
an acquiring unit 302, configured to obtain play time information of the featured video segment in the target video, and to obtain the target frame in the featured video segment;
a marking unit 303, configured to mark, according to the play time information, the featured video segment at the target position corresponding to the play time information in the playing progress bar of the target video; and
a display unit 304, configured to display the target frame when an operation acting on the target position is received.
In this embodiment, the determination unit 301 is further configured to:
set characteristic information according to the video category of the target video; perform characteristic matching on the target video to determine the target frames in the target video that match the characteristic information; and determine N featured video segments in the target video according to the target frames and a preset featured-video interception rule, where each featured video segment includes a target frame and the featured-video interception rule corresponds to the characteristic information; or
obtain the target video and barrage information, the barrage information including barrage-quantity information of the target video during its playing history; and determine, according to the barrage information, the N featured video segments in the target video whose barrage activity meets a preset requirement.
In this embodiment, the marking unit 303 is further configured to:
mark, in a special marking manner, the featured video segment at the target position corresponding to the play time information in the playing progress bar of the target video.
In this embodiment, the apparatus may be an electronic device such as a smartphone, desktop computer, notebook, or tablet computer; this is not limited herein.
In this embodiment, the apparatus may run an Android, iOS, or Windows system; this is also not limited herein.
Since the apparatus introduced in embodiment two of the present invention is the apparatus used to implement the method of embodiment one, a person skilled in the art can, on the basis of the method introduced in embodiment one, understand the specific structure and variations of the apparatus, so details are not repeated herein. Any apparatus used for the method of embodiment one of the present invention falls within the scope the present invention intends to protect.
Based on the same inventive concept, the present application provides an electronic-device embodiment corresponding to embodiment one; see embodiment three for details.
Embodiment three
This embodiment provides an electronic device. As shown in FIG. 4, it includes a memory 410, a processor 420, and a computer program 411 stored on the memory 410 and runnable on the processor 420; when the processor 420 executes the computer program 411, any implementation of embodiment one can be realized.
Since the electronic device introduced in this embodiment is the device used to implement the method of embodiment one of the present application, a person skilled in the art can, on the basis of the method described in embodiment one, understand the specific implementations and variations of the electronic device of this embodiment, so how the electronic device realizes the method of the embodiment of the present application is not discussed in detail here. Any device used to implement the method of the embodiment of the present application falls within the scope to be protected by the present application.
Based on the same inventive concept, the present application provides a storage medium corresponding to embodiment one; see embodiment four for details.
Embodiment four
This embodiment provides a computer-readable storage medium 500. As shown in FIG. 5, a computer program 511 is stored on it; when the program 511 is executed by a processor, any implementation of embodiment one can be realized.
The technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
The method, apparatus, device, and medium provided in the embodiments of the present application determine a featured video segment from a target video using characteristic matching and/or barrage-information analysis, the target video being a video that is currently playing or has been played; obtain the play time information of the featured video segment in the target video and the target frame in the segment; mark, according to the play time information, the featured video segment at the target position corresponding to the play time information in the playing progress bar of the target video; and display the target frame when an operation acting on the target position is received. Viewers therefore need not watch the entire target video; they only need to drag the progress bar to the positions marked with featured video segments to see the most exciting clips they would not want to miss. This effectively saves viewers' time and lets them reach all the featured video clips within a short time.
Further, whether the target video needs the timestamp-precise extraction mode is judged according to the attribute information of the target video. When needed, the target video is decoded and the featured video segment is extracted according to the timestamp information of the decoded target video; when not needed, the video units whose timestamp information is closest to that of the target frame are searched for directly in the undecoded target video and the featured video segment is extracted from them. This effectively reduces the extraction time for the videos that do not need precise extraction while ensuring the extraction accuracy of the videos that do.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, optical memory, and the like) containing computer-usable program code.
The present invention is described with reference to the flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that every flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, may be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data-processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data-processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or other programmable data-processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data-processing device, so that a series of operational steps is executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thereby provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, a person skilled in the art, once aware of the basic creative concept, may make additional changes and modifications to these embodiments. The appended claims are therefore intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, a person skilled in the art can make various modifications and variations to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (10)
1. A video labeling method, characterized by comprising:
determining a featured video segment from a target video using characteristic matching and/or barrage-information analysis, the target video being a video that is currently playing or has been played;
obtaining play time information of the featured video segment in the target video, and obtaining a target frame in the featured video segment;
marking, according to the play time information, the featured video segment at a target position, corresponding to the play time information, in a playing progress bar of the target video; and
displaying the target frame when an operation acting on the target position is received.
2. The method of claim 1, wherein determining a featured video segment from a target video using characteristic matching and/or barrage-information analysis comprises:
setting characteristic information according to the video category of the target video; performing characteristic matching on the target video to determine the target frames in the target video that match the characteristic information; and determining N featured video segments in the target video according to the target frames and a preset featured-video interception rule, wherein each featured video segment includes a target frame and the featured-video interception rule corresponds to the characteristic information; or
obtaining the target video and barrage information, the barrage information including barrage-quantity information of the target video during its playing history; and determining, according to the barrage information, the N featured video segments in the target video whose barrage activity meets a preset requirement.
3. The method of claim 1, wherein marking the featured video segment at a target position, corresponding to the play time information, in the playing progress bar of the target video comprises:
marking, in a special marking manner, the featured video segment at the target position corresponding to the play time information in the playing progress bar of the target video.
4. The method of claim 1, wherein displaying the target frame comprises:
displaying the target frame as a picture, or displaying a video that includes the target frame.
5. The method of claim 1, wherein determining a featured video segment from a target video using characteristic matching and/or barrage-information analysis is implemented at the GCR-Work layer; and marking, according to the play time information, the featured video segment at the target position corresponding to the play time information in the playing progress bar of the target video is implemented at the Media-Worker layer.
6. A video labeling apparatus, characterized by comprising:
a determination unit, configured to determine a featured video segment from a target video using characteristic matching and/or barrage-information analysis, the target video being a video that is currently playing or has been played;
an acquiring unit, configured to obtain play time information of the featured video segment in the target video, and to obtain a target frame in the featured video segment;
a marking unit, configured to mark, according to the play time information, the featured video segment at a target position, corresponding to the play time information, in a playing progress bar of the target video; and
a display unit, configured to display the target frame when an operation acting on the target position is received.
7. The apparatus of claim 6, wherein the determination unit is further configured to:
set characteristic information according to the video category of the target video; perform characteristic matching on the target video to determine the target frames in the target video that match the characteristic information; and determine N featured video segments in the target video according to the target frames and a preset featured-video interception rule, wherein each featured video segment includes a target frame and the featured-video interception rule corresponds to the characteristic information; or
obtain the target video and barrage information, the barrage information including barrage-quantity information of the target video during its playing history; and determine, according to the barrage information, the N featured video segments in the target video whose barrage activity meets a preset requirement.
8. The apparatus of claim 6, wherein the marking unit is further configured to:
mark, in a special marking manner, the featured video segment at the target position corresponding to the play time information in the playing progress bar of the target video.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, characterized in that the processor, when executing the program, implements the method of any one of claims 1-5.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810750765.5A CN108924576A (en) | 2018-07-10 | 2018-07-10 | A kind of video labeling method, device, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108924576A true CN108924576A (en) | 2018-11-30 |
Family
ID=64412209
- 2018-07-10: Application CN201810750765.5A filed (CN), published as CN108924576A; status: Pending
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130145269A1 (en) * | 2011-09-26 | 2013-06-06 | University Of North Carolina At Charlotte | Multi-modal collaborative web-based video annotation system |
CN104994425A (en) * | 2015-06-30 | 2015-10-21 | 北京奇艺世纪科技有限公司 | Video labeling method and device |
CN106339655A (en) * | 2015-07-06 | 2017-01-18 | 无锡天脉聚源传媒科技有限公司 | Video shot marking method and device |
CN106803987A (en) * | 2015-11-26 | 2017-06-06 | 腾讯科技(深圳)有限公司 | The acquisition methods of video data, device and system |
CN105979387A (en) * | 2015-12-01 | 2016-09-28 | 乐视网信息技术(北京)股份有限公司 | Video clip display method and system |
CN106897304A (en) * | 2015-12-18 | 2017-06-27 | 北京奇虎科技有限公司 | A kind of processing method and apparatus of multimedia data |
CN105959804A (en) * | 2016-04-28 | 2016-09-21 | 乐视控股(北京)有限公司 | Intelligent playing method and device |
CN107786902A (en) * | 2017-11-07 | 2018-03-09 | Tcl海外电子(惠州)有限公司 | Direct broadcast time-shift method, TV and computer-readable recording medium |
CN107784118A (en) * | 2017-11-14 | 2018-03-09 | 北京林业大学 | A kind of Video Key information extracting system semantic for user interest |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109672936A (en) * | 2018-12-26 | 2019-04-23 | 上海众源网络有限公司 | A kind of determination method, apparatus and electronic equipment of video evaluation set |
CN109672936B (en) * | 2018-12-26 | 2021-10-26 | 上海众源网络有限公司 | Method and device for determining video evaluation set and electronic equipment |
CN110049377A (en) * | 2019-03-12 | 2019-07-23 | 北京奇艺世纪科技有限公司 | Expression packet generation method, device, electronic equipment and computer readable storage medium |
CN110049377B (en) * | 2019-03-12 | 2021-06-22 | 北京奇艺世纪科技有限公司 | Expression package generation method and device, electronic equipment and computer readable storage medium |
CN110234016A (en) * | 2019-06-19 | 2019-09-13 | 大连网高竞赛科技有限公司 | A kind of automatic output method of featured videos and system |
CN110392304A (en) * | 2019-06-24 | 2019-10-29 | 北京达佳互联信息技术有限公司 | A kind of video display method, apparatus, electronic equipment and storage medium |
CN110381367A (en) * | 2019-07-10 | 2019-10-25 | 咪咕文化科技有限公司 | A kind of method for processing video frequency, equipment and computer readable storage medium |
CN110381367B (en) * | 2019-07-10 | 2022-01-25 | 咪咕文化科技有限公司 | Video processing method, video processing equipment and computer readable storage medium |
CN110505530A (en) * | 2019-07-17 | 2019-11-26 | 刘彩霞 | A kind of Streaming Media internet big data barrage processing system and method |
CN110505530B (en) * | 2019-07-17 | 2021-07-06 | 深圳市中鹏教育科技股份有限公司 | Streaming media internet big data bullet screen processing system |
CN110248258A (en) * | 2019-07-18 | 2019-09-17 | 腾讯科技(深圳)有限公司 | Recommended method, device, storage medium and the computer equipment of video clip |
US11400382B2 (en) | 2019-09-12 | 2022-08-02 | Shanghai Bilibili Technology Co., Ltd. | Method and device of displaying a progress bar, computing device, and readable storage medium |
CN112492370A (en) * | 2019-09-12 | 2021-03-12 | 上海哔哩哔哩科技有限公司 | Progress bar display method and device, computer equipment and readable storage medium |
CN110677735A (en) * | 2019-10-17 | 2020-01-10 | 网易(杭州)网络有限公司 | Video positioning method and device |
CN110856042A (en) * | 2019-11-18 | 2020-02-28 | 腾讯科技(深圳)有限公司 | Video playing method and device, computer readable storage medium and computer equipment |
CN110891198A (en) * | 2019-11-29 | 2020-03-17 | 腾讯科技(深圳)有限公司 | Video playing prompt method, multimedia playing prompt method, bullet screen processing method and device |
CN111314722A (en) * | 2020-02-12 | 2020-06-19 | 北京达佳互联信息技术有限公司 | Clipping prompting method and device, electronic equipment and storage medium |
CN111447489A (en) * | 2020-04-02 | 2020-07-24 | 北京字节跳动网络技术有限公司 | Video processing method and device, readable medium and electronic equipment |
CN111479168B (en) * | 2020-04-14 | 2021-12-28 | 腾讯科技(深圳)有限公司 | Method, device, server and medium for marking multimedia content hot spot |
CN111479168A (en) * | 2020-04-14 | 2020-07-31 | 腾讯科技(深圳)有限公司 | Method, device, server and medium for marking multimedia content hot spot |
CN113542845B (en) * | 2020-04-16 | 2024-02-02 | 腾讯科技(深圳)有限公司 | Information display method, device, equipment and storage medium |
CN113542845A (en) * | 2020-04-16 | 2021-10-22 | 腾讯科技(深圳)有限公司 | Information display method, device, equipment and storage medium |
CN111726693A (en) * | 2020-06-02 | 2020-09-29 | 广州视源电子科技股份有限公司 | Audio and video playing method, device, equipment and medium |
CN111741333A (en) * | 2020-06-10 | 2020-10-02 | 广州酷狗计算机科技有限公司 | Live broadcast data acquisition method and device, computer equipment and storage medium |
CN111741333B (en) * | 2020-06-10 | 2021-12-28 | 广州酷狗计算机科技有限公司 | Live broadcast data acquisition method and device, computer equipment and storage medium |
CN111918083A (en) * | 2020-07-31 | 2020-11-10 | 广州虎牙科技有限公司 | Video clip identification method, device, equipment and storage medium |
CN113407775A (en) * | 2020-10-20 | 2021-09-17 | 腾讯科技(深圳)有限公司 | Video searching method and device and electronic equipment |
CN113407775B (en) * | 2020-10-20 | 2024-03-22 | 腾讯科技(深圳)有限公司 | Video searching method and device and electronic equipment |
WO2022213661A1 (en) * | 2021-04-06 | 2022-10-13 | 北京达佳互联信息技术有限公司 | Video playback method and apparatus |
CN113490045B (en) * | 2021-06-30 | 2024-03-22 | 北京百度网讯科技有限公司 | Special effect adding method, device, equipment and storage medium for live video |
CN113490045A (en) * | 2021-06-30 | 2021-10-08 | 北京百度网讯科技有限公司 | Special effect adding method, device and equipment for live video and storage medium |
CN113490010A (en) * | 2021-07-06 | 2021-10-08 | 腾讯科技(深圳)有限公司 | Interaction method, device and equipment based on live video and storage medium |
CN113490010B (en) * | 2021-07-06 | 2022-08-09 | 腾讯科技(深圳)有限公司 | Interaction method, device and equipment based on live video and storage medium |
CN113747241A (en) * | 2021-09-13 | 2021-12-03 | 深圳市易平方网络科技有限公司 | Video clip intelligent editing method, device and terminal based on bullet screen statistics |
CN113727135A (en) * | 2021-09-23 | 2021-11-30 | 北京达佳互联信息技术有限公司 | Live broadcast interaction method and device, electronic equipment, storage medium and program product |
CN114630141A (en) * | 2022-03-18 | 2022-06-14 | 北京达佳互联信息技术有限公司 | Video processing method and related equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108924576A (en) | A kind of video labeling method, device, equipment and medium | |
CN109089154A (en) | A kind of video extraction method, apparatus, equipment and medium | |
CN109089127A (en) | A kind of video-splicing method, apparatus, equipment and medium | |
CN109089128A (en) | A kind of method for processing video frequency, device, equipment and medium | |
CN105872830B (en) | Interactive approach and device in direct broadcast band | |
US10898809B2 (en) | Overlaying content within live streaming video | |
CN104811814B (en) | Information processing method and system, client and server based on video playing | |
CN107027050B (en) | Audio and video processing method and device for assisting live broadcast | |
US10469902B2 (en) | Apparatus and method for confirming content viewing | |
US10057651B1 (en) | Video clip creation using social media | |
CN110708565B (en) | Live broadcast interaction method and device, server and machine-readable storage medium | |
US20160316233A1 (en) | System and method for inserting, delivering and tracking advertisements in a media program | |
CN110446115A (en) | Living broadcast interactive method, apparatus, electronic equipment and storage medium | |
CN108108996B (en) | Method and device for delivering advertisements in video, computer equipment and readable medium | |
CN109040773A (en) | A kind of video improvement method, apparatus, equipment and medium | |
CN105872786B (en) | A kind of method and device for launching advertisement by barrage in a program | |
US10981056B2 (en) | Methods and systems for determining a reaction time for a response and synchronizing user interface(s) with content being rendered | |
CN113490004B (en) | Live broadcast interaction method and related device | |
US20170006328A1 (en) | Systems, methods, and computer program products for capturing spectator content displayed at live events | |
CN110496391B (en) | Information synchronization method and device | |
WO2020197974A1 (en) | System and method for augmenting casted content with augmented reality content | |
CN111757147B (en) | Method, device and system for event video structuring | |
CN108133385A (en) | A kind of advertisement placement method and device | |
CN106851326B (en) | Playing method and device | |
CN110784751A (en) | Information display method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2018-11-30 |