CN106899875A - Display control method and device for external subtitles - Google Patents
Display control method and device for external subtitles
- Publication number
- CN106899875A CN106899875A CN201710066207.2A CN201710066207A CN106899875A CN 106899875 A CN106899875 A CN 106899875A CN 201710066207 A CN201710066207 A CN 201710066207A CN 106899875 A CN106899875 A CN 106899875A
- Authority
- CN
- China
- Prior art keywords
- current
- captions
- frame
- video
- video frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4316—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4884—Data services, e.g. news ticker for displaying subtitles
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- Studio Circuits (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
This disclosure relates to a display control method and device for external subtitles. The method includes: obtaining the time point of the current video frame; when a subtitle slice corresponding to the time point of the current video frame exists, obtaining the current subtitle image corresponding to the current video frame; generating the video frame texture corresponding to the current video frame and the subtitle texture corresponding to the current subtitle image; and rendering the video frame texture and the subtitle texture by a graphics processing unit (GPU). The disclosure keeps external subtitles synchronized with the video picture without adding an extra subtitle decoding module or subtitle display control.
Description
Technical field
This disclosure relates to the field of multimedia technology, and in particular to a display control method and device for external subtitles.
Background technology
Subtitles fall into two categories: embedded subtitles and external subtitles. For a video with embedded subtitles, the video file and the subtitle file are integrated, i.e., each video frame carries the subtitles itself. When playing a video with embedded subtitles, a mobile terminal only needs to decode each video frame with its decoder. External subtitles mean that the video file and the subtitle file are separate. In the related art, when playing a video with external subtitles, a mobile terminal must add an extra subtitle decoding module to decode the external subtitles, and an extra subtitle display control to show the decoded subtitles. With this kind of display control technique, the subtitles easily fall out of sync with the video picture.
Summary of the invention
In view of this, the present disclosure proposes a display control method and device for external subtitles, to solve the problems in the related art that a mobile terminal playing a video with external subtitles must add an extra subtitle decoding module to decode them and an extra display control to show the decoded subtitles, and that the subtitles easily fall out of sync with the video picture.
According to one aspect of the disclosure, a display control method for external subtitles is provided, including:
Obtaining the time point of the current video frame;
When a subtitle slice corresponding to the time point of the current video frame exists, obtaining the current subtitle image corresponding to the current video frame;
Generating the video frame texture corresponding to the current video frame and the subtitle texture corresponding to the current subtitle image;
Rendering the video frame texture and the subtitle texture by a graphics processing unit (GPU).
In a possible implementation, the method further includes:
When the time point of the current video frame is later than or equal to the start time point of a first subtitle slice, and earlier than or equal to the end time point of the first subtitle slice, determining that a subtitle slice corresponding to the time point of the current video frame exists, where the first subtitle slice is any subtitle slice.
In a possible implementation, obtaining the current subtitle image corresponding to the current video frame includes:
When the current video frame and the previous video frame correspond to different subtitle slices, obtaining the current subtitle slice corresponding to the current video frame;
Converting the current subtitle slice into the current subtitle image;
Storing the current subtitle image in a cache.
In a possible implementation, obtaining the current subtitle image corresponding to the current video frame includes:
When the current video frame and the previous video frame correspond to the same subtitle slice, obtaining the current subtitle image corresponding to the current video frame from the cache.
In a possible implementation, obtaining the current subtitle image corresponding to the current video frame includes:
When the current video frame and the previous video frame correspond to different subtitle slices, or the current subtitle attributes have changed relative to the previous video frame, obtaining the current subtitle slice corresponding to the current video frame;
Converting the current subtitle slice into the current subtitle image according to the current subtitle attributes;
Storing the current subtitle image in the cache.
In a possible implementation, obtaining the current subtitle image corresponding to the current video frame includes:
When the current video frame and the previous video frame correspond to the same subtitle slice, and the current subtitle attributes have not changed relative to the previous video frame, obtaining the current subtitle image corresponding to the current video frame from the cache.
In a possible implementation, before the time point of the current video frame is obtained, the method further includes:
Obtaining an external subtitle file;
Generating, for each subtitle line in the external subtitle file, a corresponding subtitle slice from its start time point, end time point, and subtitle content.
According to another aspect of the disclosure, a display control device for external subtitles is provided, including:
A time point obtaining module, for obtaining the time point of the current video frame;
A current subtitle image obtaining module, for obtaining, when a subtitle slice corresponding to the time point of the current video frame exists, the current subtitle image corresponding to the current video frame;
A texture generation module, for generating the video frame texture corresponding to the current video frame and the subtitle texture corresponding to the current subtitle image;
A rendering module, for rendering the video frame texture and the subtitle texture by a graphics processing unit (GPU).
In a possible implementation, the device further includes:
A determination module, for determining, when the time point of the current video frame is later than or equal to the start time point of a first subtitle slice and earlier than or equal to the end time point of the first subtitle slice, that a subtitle slice corresponding to the time point of the current video frame exists, where the first subtitle slice is any subtitle slice.
In a possible implementation, the current subtitle image obtaining module includes:
A first obtaining submodule, for obtaining, when the current video frame and the previous video frame correspond to different subtitle slices, the current subtitle slice corresponding to the current video frame;
A first conversion submodule, for converting the current subtitle slice into the current subtitle image;
A first caching submodule, for storing the current subtitle image in a cache.
In a possible implementation, the current subtitle image obtaining module includes:
A second obtaining submodule, for obtaining, when the current video frame and the previous video frame correspond to the same subtitle slice, the current subtitle image corresponding to the current video frame from the cache.
In a possible implementation, the current subtitle image obtaining module includes:
A third obtaining submodule, for obtaining, when the current video frame and the previous video frame correspond to different subtitle slices, or the current subtitle attributes have changed relative to the previous video frame, the current subtitle slice corresponding to the current video frame;
A second conversion submodule, for converting the current subtitle slice into the current subtitle image according to the current subtitle attributes;
A second caching submodule, for storing the current subtitle image in the cache.
In a possible implementation, the current subtitle image obtaining module includes:
A fourth obtaining submodule, for obtaining, when the current video frame and the previous video frame correspond to the same subtitle slice and the current subtitle attributes have not changed relative to the previous video frame, the current subtitle image corresponding to the current video frame from the cache.
In a possible implementation, the device further includes:
An external subtitle file obtaining module, for obtaining an external subtitle file;
A subtitle slice generation module, for generating, for each subtitle line in the external subtitle file, a corresponding subtitle slice from its start time point, end time point, and subtitle content.
According to another aspect of the disclosure, a display control device for external subtitles is provided, including:
A processor;
A memory for storing processor-executable instructions;
Wherein the processor is configured to:
Obtain the time point of the current video frame;
When a subtitle slice corresponding to the time point of the current video frame exists, obtain the current subtitle image corresponding to the current video frame;
Generate the video frame texture corresponding to the current video frame and the subtitle texture corresponding to the current subtitle image;
Render the video frame texture and the subtitle texture by a graphics processing unit (GPU).
According to another aspect of the disclosure, a non-volatile computer-readable storage medium is provided. When instructions in the storage medium are executed by a processor of a terminal and/or a server, the terminal and/or the server can perform a display control method for external subtitles, the method including:
Obtaining the time point of the current video frame;
When a subtitle slice corresponding to the time point of the current video frame exists, obtaining the current subtitle image corresponding to the current video frame;
Generating the video frame texture corresponding to the current video frame and the subtitle texture corresponding to the current subtitle image;
Rendering the video frame texture and the subtitle texture by a graphics processing unit (GPU).
By obtaining the time point of the current video frame, obtaining the current subtitle image corresponding to the current video frame when a subtitle slice corresponding to that time point exists, generating the video frame texture corresponding to the current video frame and the subtitle texture corresponding to the current subtitle image, and rendering the video frame texture and the subtitle texture by the GPU, the display control method and device for external subtitles according to the aspects of the disclosure keep the external subtitles synchronized with the video picture without adding an extra subtitle decoding module or subtitle display control.
Further features and aspects of the disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure, and serve to explain the principles of the disclosure.
Fig. 1 shows a flow chart of the display control method for external subtitles according to an embodiment of the disclosure.
Fig. 2 shows an exemplary flow chart of the display control method for external subtitles according to an embodiment of the disclosure.
Fig. 3 shows a schematic diagram of a subtitle slice in the display control method for external subtitles according to an embodiment of the disclosure.
Fig. 4 shows a schematic diagram of adding subtitle slices to a subtitle track in the display control method for external subtitles according to an embodiment of the disclosure.
Fig. 5 shows a schematic diagram of the time points of video frames in the subtitle track and the time periods of subtitle slices in the subtitle track, in the display control method for external subtitles according to an embodiment of the disclosure.
Fig. 6 shows an exemplary flow chart of the display control method for external subtitles according to an embodiment of the disclosure.
Fig. 7 shows another exemplary flow chart of the display control method for external subtitles according to an embodiment of the disclosure.
Fig. 8 shows yet another exemplary flow chart of the display control method for external subtitles according to an embodiment of the disclosure.
Fig. 9 shows a block diagram of the display control device for external subtitles according to an embodiment of the disclosure.
Fig. 10 shows an exemplary block diagram of the display control device for external subtitles according to an embodiment of the disclosure.
Fig. 11 is a block diagram of a device 800 for display control of external subtitles according to an exemplary embodiment.
Specific embodiment
Various exemplary embodiments, features, and aspects of the disclosure are described in detail below with reference to the accompanying drawings. In the drawings, identical reference numerals denote elements with identical or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings need not be drawn to scale unless specifically noted.
The word "exemplary" here means "serving as an example, embodiment, or illustration". Any embodiment described here as "exemplary" should not be construed as preferred over or advantageous over other embodiments.
In addition, numerous specific details are given in the following detailed description to better illustrate the disclosure. Those skilled in the art will understand that the disclosure can equally be practiced without some of these details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, in order to highlight the gist of the disclosure.
Embodiment 1
Fig. 1 shows a flow chart of the display control method for external subtitles according to an embodiment of the disclosure. The method can be applied in a mobile terminal such as a mobile phone or tablet computer, or in a PC (Personal Computer); it is not limited thereto. As shown in Fig. 1, the method includes:
In step S101, the time point of the current video frame is obtained.
Here, the current video frame may refer to the video frame about to be played. The time point of the current video frame may be the timestamp of the current video frame, or its playback position within the video; it is not limited thereto.
In step S102, when a subtitle slice corresponding to the time point of the current video frame exists, the current subtitle image corresponding to the current video frame is obtained.
In this embodiment, whether a subtitle slice corresponding to the time point of the current video frame exists can be determined from that time point. The current subtitle image may be a bitmap file (Bitmap); it is not limited thereto.
In step S103, the video frame texture corresponding to the current video frame and the subtitle texture corresponding to the current subtitle image are generated.
As an example of this embodiment, when video decoding is performed by a video decoder, a texture object can be obtained, for example requested from the GPU (Graphics Processing Unit), and the texture object can be set onto the video decoder. After the video frame texture (Texture) corresponding to the current video frame and the subtitle texture corresponding to the current subtitle image are generated, they can be composited and rendered directly by the GPU.
In step S104, the video frame texture and the subtitle texture are rendered by the GPU.
In this embodiment, the video frame texture and the subtitle texture can be rendered by the GPU into memory allocated for video display. Rendering by the GPU instead of the CPU (Central Processing Unit) speeds up rendering, so the subtitles can be rendered in real time.
As an example of this embodiment, the rendering module in the video playback framework can hand the video frame texture and the subtitle texture to be rendered to the GPU for rendering, after which they are displayed on the screen.
This embodiment keeps the external subtitles synchronized with the video picture, without adding an extra subtitle decoding module or subtitle display control.
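Steps S101-S104 above can be sketched as follows. This is a minimal illustration only, not the patent's implementation: `FakeGPU`, `display_frame`, and the dict-based slices are hypothetical names, and real textures and bitmaps are replaced by plain Python values.

```python
class FakeGPU:
    """Stand-in for the GPU render call in step S104 (assumption)."""
    def __init__(self):
        self.rendered = []

    def render(self, *textures):
        # Record which textures were composited for this frame.
        self.rendered.append(textures)

def display_frame(frame_time_ms, track, gpu, make_texture=lambda x: ("texture", x)):
    """Steps S101-S104: look up the slice for the frame's time point,
    build the subtitle image, generate both textures, render on the GPU."""
    slice_ = next((s for s in track
                   if s["start"] <= frame_time_ms <= s["start"] + s["duration"]),
                  None)
    frame_tex = make_texture(("frame", frame_time_ms))       # S103: video frame texture
    if slice_ is not None:                                   # S102: matching slice exists
        subtitle_image = slice_["content"]                   # a Bitmap in practice
        gpu.render(frame_tex, make_texture(subtitle_image))  # S104: composite both
    else:
        gpu.render(frame_tex)                                # no subtitle to display

gpu = FakeGPU()
track = [{"start": 2941, "duration": 1159, "content": "what's the matter"}]
display_frame(3000, track, gpu)  # inside the slice -> two textures rendered
display_frame(100, track, gpu)   # no matching slice -> frame texture only
print([len(t) for t in gpu.rendered])  # [2, 1]
```

Because the subtitle texture is derived from the current frame's own time point, subtitle and picture cannot drift apart, which is the synchronization property the embodiment claims.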
Fig. 2 shows an exemplary flow chart of the display control method for external subtitles according to an embodiment of the disclosure. As shown in Fig. 2, the method includes:
In step S201, an external subtitle file is obtained.
The external subtitle file may be obtained locally or online; it is not limited thereto.
In step S202, for each subtitle line in the external subtitle file, a corresponding subtitle slice is generated from its start time point, end time point, and subtitle content.
In this embodiment, the length of each subtitle line is not limited; those skilled in the art can choose the length of each subtitle line according to the actual application requirements.
As an example of this embodiment, parsing the external subtitle file yields the start time point, end time point, and subtitle content of each subtitle line, from which the corresponding subtitle slice can be generated.
As another example of this embodiment, parsing the external subtitle file yields the start time point, duration, and subtitle content of each subtitle line, from which the corresponding subtitle slice can be generated.
Fig. 3 shows a schematic diagram of a subtitle slice in the display control method for external subtitles according to an embodiment of the disclosure. As shown in Fig. 3, a subtitle line starts at 00:00:02.941 (i.e., 2941 ms) and lasts 1159 ms, so its end time point can be determined to be 00:00:04.100. From this line's start time point, end time point, and subtitle content "what's the matter", the corresponding subtitle slice can be generated.
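The arithmetic in Fig. 3 (end time point = start time point + duration) can be checked with a small helper; `ms_to_timestamp` and `end_time_ms` are illustrative names, not from the patent:

```python
def ms_to_timestamp(ms: int) -> str:
    """Format a millisecond offset as HH:MM:SS.mmm, as in Fig. 3."""
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, milli = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{milli:03d}"

def end_time_ms(start_ms: int, duration_ms: int) -> int:
    """End time point of a subtitle slice = start time point + duration."""
    return start_ms + duration_ms

# The subtitle line of Fig. 3: starts at 2941 ms, lasts 1159 ms.
end = end_time_ms(2941, 1159)
print(ms_to_timestamp(end))  # 00:00:04.100
```

This also shows why a parser that yields (start, duration) pairs and one that yields (start, end) pairs are interchangeable, as the two examples of step S202 suggest.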
In step S203, the time point of the current video frame is obtained.
For step S203, see the description of step S101 above.
In step S204, when a subtitle slice corresponding to the time point of the current video frame exists, the current subtitle image corresponding to the current video frame is obtained.
For step S204, see the description of step S102 above.
In step S205, the video frame texture corresponding to the current video frame and the subtitle texture corresponding to the current subtitle image are generated.
For step S205, see the description of step S103 above.
In step S206, the video frame texture and the subtitle texture are rendered by the graphics processing unit (GPU).
For step S206, see the description of step S104 above.
Fig. 4 shows a schematic diagram of adding subtitle slices to a subtitle track in the display control method for external subtitles according to an embodiment of the disclosure. As an example of this embodiment, each subtitle slice can be added to a subtitle track. As shown in Fig. 4, subtitle slice 1, subtitle slice 2, and subtitle slice 3 can be added to the subtitle track.
In a possible implementation, the method may further include: when the time point of the current video frame is later than or equal to the start time point of a first subtitle slice, and earlier than or equal to the end time point of the first subtitle slice, determining that a subtitle slice corresponding to the time point of the current video frame exists, where the first subtitle slice may be any subtitle slice. In this implementation, if start time point of the first subtitle slice ≤ time point of the current video frame ≤ start time point of the first subtitle slice + duration of the first subtitle slice, it is determined that a subtitle slice corresponding to the time point of the current video frame exists, and that the subtitle slice corresponding to the time point of the current video frame is the first subtitle slice.
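The matching rule above (start time point ≤ frame time point ≤ start time point + duration) can be sketched as a lookup over the subtitle track; `SubtitleSlice` and `find_slice` are hypothetical names for illustration, not the patent's API:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SubtitleSlice:
    start_ms: int      # start time point
    duration_ms: int   # duration
    content: str       # subtitle content

    @property
    def end_ms(self) -> int:
        # End time point = start time point + duration.
        return self.start_ms + self.duration_ms

def find_slice(track: List[SubtitleSlice], t_ms: int) -> Optional[SubtitleSlice]:
    """Return the slice whose [start, end] interval contains the frame's
    time point t_ms, or None if no slice matches (no subtitle to show)."""
    for s in track:
        if s.start_ms <= t_ms <= s.end_ms:
            return s
    return None

track = [SubtitleSlice(0, 1000, "slice 1"),
         SubtitleSlice(2941, 1159, "what's the matter"),
         SubtitleSlice(5000, 800, "slice 3")]
print(find_slice(track, 3000).content)  # what's the matter
print(find_slice(track, 2000))          # None
```

A linear scan suffices here; since slices in a subtitle track are non-overlapping and time-ordered, a real player could use a binary search or simply remember the last matched slice.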
Fig. 5 shows a schematic diagram of the time points of video frames in the subtitle track and the time periods of subtitle slices in the subtitle track, in the display control method for external subtitles according to an embodiment of the disclosure. As shown in Fig. 5, the subtitle track includes subtitle slice 1, subtitle slice 2, and subtitle slice 3; the time period each of subtitle slices 1 to 3 occupies in the subtitle track corresponds to the length at which it is drawn in Fig. 5. As shown in Fig. 5, there is no subtitle slice whose start time point is earlier than or equal to the time point of video frame 1 and whose end time point is later than or equal to it; therefore, it is determined that no subtitle slice corresponds to the time point of video frame 1, i.e., no subtitle needs to be displayed when video frame 1 is played. The time point of video frame 2 is later than the start time point of subtitle slice 2 and earlier than its end time point; therefore, it is determined that a subtitle slice corresponding to the time point of video frame 2 exists, and that slice is subtitle slice 2, i.e., a subtitle needs to be displayed when video frame 2 is played, and the subtitle slice to display is subtitle slice 2.
Here, a track (Track) is a concept defined in the video playback framework; a track can be used to manage the multimedia resources added to it. For example, if a video resource A is added first and then a subtitle resource B, the resource of the picture part of video resource A can be added to the video track, the resource of the audio part of video resource A can be added to the audio track, and subtitle resource B can be added to the subtitle track. When playback starts, the corresponding video picture can be taken from the video track, the corresponding sound from the audio track, and the corresponding subtitle information from the subtitle track.
In a possible implementation, obtaining the current subtitle image corresponding to the current video frame may include: when the current video frame and the previous video frame correspond to different subtitle slices, obtaining the current subtitle slice corresponding to the current video frame; converting the current subtitle slice into the current subtitle image; and storing the current subtitle image in a cache.
In another possible implementation, obtaining the current subtitle image corresponding to the current video frame includes: when the current video frame and the previous video frame correspond to the same subtitle slice, obtaining the current subtitle image corresponding to the current video frame from the cache.
Fig. 6 shows an exemplary flow chart of the display control method for external subtitles according to an embodiment of the disclosure. As shown in Fig. 6, the method includes:
In step S601, the time point of the current video frame is obtained.
For step S601, see the description of step S101 above.
In step S602, when a subtitle slice corresponding to the time point of the current video frame exists, it is judged whether the current video frame and the previous video frame correspond to different subtitle slices; if so, step S603 is performed, otherwise step S606 is performed.
In this example, when a subtitle slice corresponding to the time point of the current video frame exists, judging whether the current video frame and the previous video frame correspond to different subtitle slices amounts to judging whether the current video frame and the previous video frame correspond to different subtitle lines.
In step S603, the current subtitle slice corresponding to the current video frame is obtained.
In this example, when the current video frame and the previous video frame correspond to different subtitle slices, the current subtitle slice corresponding to the current video frame must be obtained. Because the current subtitle slice is obtained according to the time point of the current video frame, the subtitles are guaranteed to stay synchronized with the video picture.
In step S604, the current subtitle slice is converted into the current subtitle image.
In a possible implementation, the transparency of the part of the subtitle image other than the subtitle content (i.e., the background part of the subtitle image) can be set to a first value, for example 100%. Setting the transparency of the background part of the subtitle image enables the subtitle texture and the video frame texture to be merged.
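The background-transparency idea can be sketched as follows, with nested RGBA tuples standing in for a real bitmap; `make_subtitle_image` and the glyph mask are assumptions for illustration, not the patent's data structures:

```python
def make_subtitle_image(mask, text_color=(255, 255, 255)):
    """Build an RGBA subtitle image: pixels covered by glyphs (mask=1)
    are opaque text color; background pixels get alpha 0, i.e. the
    'first value' of 100% transparency, so the video frame texture
    shows through wherever there is no subtitle content."""
    image = []
    for row in mask:
        image.append([text_color + (255,) if px else (0, 0, 0, 0) for px in row])
    return image

# A tiny 3x2 glyph coverage mask (1 = subtitle content, 0 = background).
mask = [[0, 1, 0],
        [1, 1, 1]]
img = make_subtitle_image(mask)
print(img[0][0][3], img[0][1][3])  # 0 255  (background transparent, glyph opaque)
```

During compositing, the GPU's standard alpha blending then merges the subtitle texture over the video frame texture without any CPU-side pixel work.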
In step S605, the current subtitle image is stored in the cache.
In this example, storing the current subtitle image in the cache reduces the overhead of frequent memory allocation.
In a possible implementation, when the current video frame and the previous video frame correspond to different subtitle slices, the previous subtitle image can be removed from the cache.
In step S606, the current subtitle image corresponding to the current video frame is obtained from the cache.
In this example, when the current video frame and the previous video frame correspond to the same subtitle slice, the current subtitle image corresponding to the current video frame is obtained from the cache, without reacquiring the current subtitle slice and converting it into the current subtitle image again, which improves the efficiency of obtaining identical subtitle images.
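The reuse of the cached image across consecutive frames of the same slice (steps S603-S606) can be sketched as below; `SubtitleImageCache` is a hypothetical name, and a string stands in for the subtitle bitmap:

```python
class SubtitleImageCache:
    """Convert a slice to an image once; reuse it while consecutive
    frames map to the same slice (steps S603-S606)."""
    def __init__(self, convert):
        self._convert = convert       # function: slice -> subtitle image
        self._slice_id = None         # which slice the cached image belongs to
        self._image = None
        self.conversions = 0          # how many times conversion actually ran

    def image_for(self, slice_):
        sid = id(slice_)
        if sid != self._slice_id:     # different slice than the previous frame
            self._image = self._convert(slice_)   # S603/S604: fetch and convert
            self._slice_id = sid                  # S605: store in cache
            self.conversions += 1
        return self._image            # S606: cache hit while slice unchanged

cache = SubtitleImageCache(convert=lambda s: f"image<{s['content']}>")
s2 = {"content": "what's the matter"}
for _ in range(3):                    # three consecutive frames inside one slice
    cache.image_for(s2)
print(cache.conversions)              # 1  (converted once, reused twice)
```

Since a slice typically spans dozens of frames, the rasterization cost is paid once per subtitle line rather than once per frame, which is where the efficiency gain comes from.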
In step S607, the video frame texture corresponding to the current video frame and the subtitle texture corresponding to the current subtitle image are generated.
For step S607, see the description of step S103 above.
In step S608, the video frame texture and the subtitle texture are rendered by the GPU.
For step S608, see the description of step S104 above.
In another possible implementation, obtaining the current caption image corresponding to the current video frame includes: when the current video frame and the previous video frame correspond to different caption slices, or the current caption attribute has changed relative to the previous video frame, obtaining the caption slice corresponding to the current video frame; converting the current caption slice into the current caption image according to the current caption attribute; and storing the current caption image in the cache. The caption attribute may include, without limitation, one or more of caption font, font size, text color, font effects, and the like. For example, the font effects may include, without limitation, one or more of a gradient effect, a font edge anti-aliasing mode, transparency, and the like. In this implementation, whether the current caption attribute has changed relative to the previous video frame is judged before the captions are rendered, and if it has changed, the current caption slice is converted into the current caption image according to the current caption attribute; caption attributes set by the user thus take effect in real time, which improves the user experience.
In another possible implementation, obtaining the current caption image corresponding to the current video frame includes: when the current video frame and the previous video frame correspond to the same caption slice and the current caption attribute has not changed relative to the previous video frame, obtaining the current caption image corresponding to the current video frame from the cache. In this implementation, the cached image is reused in that case, so the caption slice corresponding to the current video frame does not need to be reacquired and converted into a caption image again, which improves the efficiency of obtaining the same caption image.
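The two implementations above amount to one decision per frame: re-render the caption image when the caption slice changed or the user-set caption attributes changed; otherwise fetch the cached image. A minimal sketch, with an illustrative attribute record (the specific fields are assumptions):

```python
# Sketch: decide per frame whether the caption image must be re-rendered.
# An immutable attribute record makes the "attribute changed?" test a plain
# equality comparison.
from dataclasses import dataclass

@dataclass(frozen=True)
class CaptionAttrs:
    font: str = "sans"
    size: int = 24
    color: str = "#ffffff"

def needs_rerender(prev_slice, cur_slice, prev_attrs, cur_attrs):
    """True when the slice or any user-set caption attribute changed."""
    return cur_slice != prev_slice or cur_attrs != prev_attrs

a1 = CaptionAttrs()
a2 = CaptionAttrs(size=32)             # user enlarged the font
print(needs_rerender(0, 0, a1, a1))    # False -> reuse cached image
print(needs_rerender(0, 0, a1, a2))    # True  -> attribute changed
print(needs_rerender(0, 1, a2, a2))    # True  -> new caption sentence
```

Checking attributes before every render is what lets a style change made by the user take effect on the very next frame.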
Fig. 7 shows another exemplary flowchart of the display control method for external captions according to an embodiment of the present disclosure. As shown in Fig. 7, the method includes:
In step S701, the time point of the current video frame is obtained.
For step S701, refer to the description of step S101 above.
In step S702, when there is a caption slice corresponding to the time point of the current video frame, it is judged whether the current video frame and the previous video frame correspond to different caption slices; if so, step S704 is performed; otherwise, step S703 is performed.
In this example, judging whether the current video frame and the previous video frame correspond to different caption slices is judging whether they correspond to different caption sentences. If the current video frame and the previous video frame correspond to different caption slices, the caption slice corresponding to the current video frame needs to be obtained; if they correspond to the same caption slice, it is further judged whether the current caption attribute has changed relative to the previous video frame.
In step S703, it is judged whether the current caption attribute has changed relative to the previous video frame; if so, step S704 is performed; otherwise, step S707 is performed.
In this example, when the current video frame and the previous video frame correspond to different caption slices, or the current caption attribute has changed relative to the previous video frame, the caption slice corresponding to the current video frame needs to be obtained. When the current video frame and the previous video frame correspond to the same caption slice and the current caption attribute has not changed relative to the previous video frame, the current caption image corresponding to the current video frame is obtained from the cache, so the caption slice does not need to be reacquired and converted into a caption image again, which improves the efficiency of obtaining the same caption image.
In step S704, the caption slice corresponding to the current video frame is obtained.
For step S704, refer to the description of step S603 above.
In step S705, the current caption slice is converted into the current caption image according to the current caption attribute.
In this example, the user can set caption attributes dynamically, which provides considerable flexibility, makes it convenient for the user to set the caption style quickly according to personal preference, and can improve the user experience.
In step S706, the current caption image is stored in the cache.
For step S706, refer to the description of step S605 above.
In step S707, the current caption image corresponding to the current video frame is obtained from the cache.
For step S707, refer to the description of step S606 above.
In step S708, the video frame texture corresponding to the current video frame and the caption texture corresponding to the current caption image are generated.
For step S708, refer to the description of step S103 above.
In step S709, the video frame texture and the caption texture are rendered by the GPU.
For step S709, refer to the description of step S104 above.
Fig. 8 shows another exemplary flowchart of the display control method for external captions according to an embodiment of the present disclosure. As shown in Fig. 8, the method includes:
In step S801, an external subtitle file is obtained.
For step S801, refer to the description of step S201 above.
In step S802, according to the start time point, end time point, and caption content of each caption in the external subtitle file, a caption slice corresponding to each caption is generated.
For step S802, refer to the description of step S202 above.
In step S803, each caption slice is added to a subtitle track.
As shown in Fig. 4, the caption slices may be added to the subtitle track in chronological order. The subtitle track can be used to manage the caption slices.
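Steps S802 and S803 can be sketched together: each caption entry (start, end, text) from the external subtitle file becomes a slice, and the slices are kept on a track in chronological order, so the lookup by frame time point in the later steps is a cheap binary search. The class and the tuple layout are assumptions for illustration.

```python
# Sketch: build a subtitle track from caption entries and look up the slice
# covering a given time point with bisect (slices sorted by start time).
import bisect

class SubtitleTrack:
    def __init__(self, entries):
        # entries: iterable of (start, end, text); one slice per caption
        self._slices = sorted(entries)            # chronological order
        self._starts = [s[0] for s in self._slices]

    def slice_at(self, t):
        """Return the slice covering time t, or None if no caption then."""
        i = bisect.bisect_right(self._starts, t) - 1
        if i >= 0 and self._slices[i][0] <= t <= self._slices[i][1]:
            return self._slices[i]
        return None

track = SubtitleTrack([(2.5, 4.0, "World"), (0.0, 2.0, "Hello")])
print(track.slice_at(3.0))   # (2.5, 4.0, 'World')
print(track.slice_at(2.2))   # None
```

Sorting once at load time keeps the per-frame cost at O(log n) even for files with thousands of captions.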
In step S804, the time point of the current video frame is obtained.
For step S804, refer to the description of step S101 above.
In step S805, it is judged whether there is a caption slice corresponding to the time point of the current video frame; if so, step S806 is performed; otherwise, step S812 is performed.
In step S806, it is judged whether the current caption attribute has changed relative to the previous video frame; if so, step S808 is performed; otherwise, step S807 is performed.
In step S807, it is judged whether the current video frame and the previous video frame correspond to different caption slices; if so, step S808 is performed; otherwise, step S811 is performed.
In step S808, the caption slice corresponding to the current video frame is obtained.
In step S809, the current caption slice is converted into the current caption image according to the current caption attribute.
In step S810, the current caption image is stored in the cache.
In step S811, the current caption image corresponding to the current video frame is obtained from the cache.
For steps S806 to S811, refer to the descriptions of steps S702 to S707 above.
In step S812, the video frame texture corresponding to the current video frame is generated.
As an example of this embodiment, when video decoding is performed by a video decoder, a texture object may be obtained, for example applied for from the GPU, and set onto the video decoder. After the video frame texture corresponding to the current video frame is generated, it can be rendered by the GPU.
In step S813, the video frame texture is rendered by the GPU.
In this step, the rendering module in the video playback framework may hand the video frame texture to be rendered to the GPU for rendering, after which it is displayed on the screen.
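The branching in steps S812 to S815 reduces to choosing which textures are handed to the GPU for the frame: the video frame texture alone when no caption slice matches the frame's time point, or the video frame texture together with the caption texture otherwise. A minimal sketch, with the render call abstracted away (names are illustrative):

```python
# Sketch: per-frame selection of the texture list handed to the GPU.
# Strings stand in for texture handles; the real render call is omitted.
def textures_to_render(video_texture, caption_texture):
    """Return the textures the rendering module submits for this frame."""
    if caption_texture is None:        # no caption at this time point
        return [video_texture]
    return [video_texture, caption_texture]

print(textures_to_render("frame_tex", None))           # ['frame_tex']
print(textures_to_render("frame_tex", "caption_tex"))  # ['frame_tex', 'caption_tex']
```

Keeping both paths in the same rendering module is what avoids a separate caption display control: captionless frames simply submit one texture fewer.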
In step S814, the video frame texture corresponding to the current video frame and the caption texture corresponding to the current caption image are generated.
For step S814, refer to the description of step S103 above.
In step S815, the video frame texture and the caption texture are rendered by the GPU.
For step S815, refer to the description of step S104 above.
Embodiment 2
Fig. 9 shows a block diagram of a display control apparatus for external captions according to an embodiment of the present disclosure. As shown in Fig. 9, the apparatus includes: a time point acquisition module 91 for obtaining the time point of a current video frame; a current caption image acquisition module 92 for obtaining, when there is a caption slice corresponding to the time point of the current video frame, the current caption image corresponding to the current video frame; a texture generation module 93 for generating the video frame texture corresponding to the current video frame and the caption texture corresponding to the current caption image; and a rendering module 94 for rendering the video frame texture and the caption texture by a graphics processing unit (GPU).
This embodiment ensures that external captions are synchronized with the video picture, without adding an additional caption decoding module or caption display control.
Figure 10 shows an exemplary block diagram of the display control apparatus for external captions according to an embodiment of the present disclosure. As shown in Figure 10:
In one possible implementation, the apparatus further includes: a determination module 95 for judging, when the time point of the current video frame is later than or equal to the start time point of a first caption slice and earlier than or equal to the end time point of the first caption slice, that there is a caption slice corresponding to the time point of the current video frame, where the first caption slice is any one caption slice.
In one possible implementation, the current caption image acquisition module 92 includes: a first acquisition submodule 921 for obtaining, when the current video frame and the previous video frame correspond to different caption slices, the caption slice corresponding to the current video frame; a first conversion submodule 922 for converting the current caption slice into the current caption image; and a first cache submodule 923 for storing the current caption image in a cache.
In one possible implementation, the current caption image acquisition module 92 includes: a second acquisition submodule 924 for obtaining, from the cache, the current caption image corresponding to the current video frame when the current video frame and the previous video frame correspond to the same caption slice.
In one possible implementation, the current caption image acquisition module 92 includes: a third acquisition submodule 925 for obtaining the caption slice corresponding to the current video frame when the current video frame and the previous video frame correspond to different caption slices or the current caption attribute has changed relative to the previous video frame; a second conversion submodule 926 for converting the current caption slice into the current caption image according to the current caption attribute; and a second cache submodule 927 for storing the current caption image in the cache.
In one possible implementation, the current caption image acquisition module 92 includes: a fourth acquisition submodule 928 for obtaining, from the cache, the current caption image corresponding to the current video frame when the current video frame and the previous video frame correspond to the same caption slice and the current caption attribute has not changed relative to the previous video frame.
In one possible implementation, the apparatus further includes: an external subtitle file acquisition module 96 for obtaining an external subtitle file; and a caption slice generation module 97 for generating, according to the start time point, end time point, and caption content of each caption in the external subtitle file, a caption slice corresponding to each caption.
Embodiment 3
Figure 11 is a block diagram of an apparatus 800 for display control of external captions according to an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Figure 11, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 typically controls the overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording operation. The processing component 802 may include one or more processors 820 to execute instructions, so as to complete all or part of the steps of the above method. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and the other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the apparatus 800. Examples of such data include instructions for any application program or method operating on the apparatus 800, contact data, phone book data, messages, pictures, video, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power supply component 806 provides power to the various components of the apparatus 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the apparatus 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the apparatus 800 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) which, when the apparatus 800 is in an operating mode such as a call mode, a recording mode, or a speech recognition mode, is configured to receive external audio signals. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 also includes a loudspeaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the apparatus 800. For example, the sensor component 814 can detect the open/closed state of the apparatus 800 and the relative positioning of components, for example of the display and keypad of the apparatus 800; the sensor component 814 can also detect a change in position of the apparatus 800 or a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in temperature of the apparatus 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The apparatus 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-volatile computer-readable storage medium including instructions is also provided, for example the memory 804 including instructions; the above instructions can be executed by the processor 820 of the apparatus 800 to complete the above method.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove with instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (for example, light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to respective computing/processing devices, or downloaded over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network, to an external computer or external storage device. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the scenario involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, including, for example, a programmable logic device, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized by utilizing state information of the computer-readable program instructions; the electronic circuitry may execute the computer-readable program instructions, so as to implement various aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, so that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices, so as to cause a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other devices to produce a computer-implemented process, such that the instructions executed on the computer, other programmable data processing apparatus, or other devices implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two consecutive blocks may, in fact, be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or actions, or by combinations of special-purpose hardware and computer instructions.
The embodiments of the present disclosure have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technological improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (15)
1. A display control method for external captions, characterized by comprising:
obtaining a time point of a current video frame;
in the case where there is a caption slice corresponding to the time point of the current video frame, obtaining a current caption image corresponding to the current video frame;
generating a video frame texture corresponding to the current video frame and a caption texture corresponding to the current caption image; and
rendering the video frame texture and the caption texture by a graphics processing unit (GPU).
2. The method according to claim 1, characterized in that the method further comprises:
in the case where the time point of the current video frame is later than or equal to a start time point of a first caption slice, and the time point of the current video frame is earlier than or equal to an end time point of the first caption slice, judging that there is a caption slice corresponding to the time point of the current video frame, wherein the first caption slice is any one caption slice.
3. The method according to claim 1, characterized in that obtaining the current caption image corresponding to the current video frame comprises:
in the case where the current video frame and a previous video frame correspond to different caption slices, obtaining a caption slice corresponding to the current video frame;
converting the current caption slice into the current caption image; and
storing the current caption image in a cache.
4. The method according to any one of claims 1 to 3, characterized in that obtaining the current caption image corresponding to the current video frame comprises:
in the case where the current video frame and a previous video frame correspond to the same caption slice, obtaining the current caption image corresponding to the current video frame from a cache.
5. The method according to claim 1, characterized in that obtaining the current caption image corresponding to the current video frame comprises:
in the case where the current video frame and a previous video frame correspond to different caption slices, or a current caption attribute has changed relative to the previous video frame, obtaining a caption slice corresponding to the current video frame;
converting the current caption slice into the current caption image according to the current caption attribute; and
storing the current caption image in a cache.
6. The method according to claim 1, 2, or 5, characterized in that obtaining the current caption image corresponding to the current video frame comprises:
in the case where the current video frame and a previous video frame correspond to the same caption slice, and a current caption attribute has not changed relative to the previous video frame, obtaining the current caption image corresponding to the current video frame from a cache.
7. The method according to claim 1, characterized in that, before obtaining the time point of the current video frame, the method further comprises:
obtaining an external subtitle file; and
generating, according to a start time point, an end time point, and caption content of each caption in the external subtitle file, a caption slice corresponding to each caption.
8. A display control apparatus for external captions, characterized by comprising:
a time point acquisition module for obtaining a time point of a current video frame;
a current caption image acquisition module for obtaining, in the case where there is a caption slice corresponding to the time point of the current video frame, a current caption image corresponding to the current video frame;
a texture generation module for generating a video frame texture corresponding to the current video frame and a caption texture corresponding to the current caption image; and
a rendering module for rendering the video frame texture and the caption texture by a graphics processing unit (GPU).
9. The device according to claim 8, further comprising:
a determination module, configured to determine that a subtitle segment corresponding to the time point of the current video frame exists in the case that the time point of the current video frame is later than or equal to the start time point of a first subtitle segment and earlier than or equal to the end time point of the first subtitle segment, wherein the first subtitle segment is any one of the subtitle segments.
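The matching condition in claim 9 is an inclusive interval test. A minimal sketch, with `segment_matches` and `find_segment` as illustrative names:

```python
def segment_matches(time_point_ms, segment_start_ms, segment_end_ms):
    """A segment matches when the frame's time point is later than or
    equal to the segment's start AND earlier than or equal to its end,
    i.e. inclusive on both bounds (claims 2 and 9)."""
    return segment_start_ms <= time_point_ms <= segment_end_ms

def find_segment(time_point_ms, segments):
    """Return the content of the first segment covering the time point,
    or None if no subtitle should be shown for this frame."""
    for start_ms, end_ms, content in segments:
        if segment_matches(time_point_ms, start_ms, end_ms):
            return content
    return None
```

The inclusive bounds matter at segment edges: a frame whose time point equals a segment's end time still displays that segment's subtitle.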
10. The device according to claim 8, wherein the current subtitle image obtaining module comprises:
a first obtaining submodule, configured to obtain the current subtitle segment corresponding to the current video frame in the case that the current video frame and a previous video frame correspond to different subtitle segments;
a first conversion submodule, configured to convert the current subtitle segment into the current subtitle image; and
a first caching submodule, configured to store the current subtitle image in a cache.
11. The device according to any one of claims 8 to 10, wherein the current subtitle image obtaining module comprises:
a second obtaining submodule, configured to obtain the current subtitle image corresponding to the current video frame from a cache in the case that the current video frame and a previous video frame correspond to the same subtitle segment.
12. The device according to claim 8, wherein the current subtitle image obtaining module comprises:
a third obtaining submodule, configured to obtain the current subtitle segment corresponding to the current video frame in the case that the current video frame and a previous video frame correspond to different subtitle segments, or the current subtitle attributes have changed relative to the previous video frame;
a second conversion submodule, configured to convert the current subtitle segment into the current subtitle image according to the current subtitle attributes; and
a second caching submodule, configured to store the current subtitle image in a cache.
13. The device according to claim 8, 9 or 12, wherein the current subtitle image obtaining module comprises:
a fourth obtaining submodule, configured to obtain the current subtitle image corresponding to the current video frame from a cache in the case that the current video frame and a previous video frame correspond to the same subtitle segment and the current subtitle attributes have not changed relative to the previous video frame.
14. The device according to claim 8, further comprising:
an external subtitle file obtaining module, configured to obtain an external subtitle file; and
a subtitle segment generation module, configured to generate, for each subtitle entry in the external subtitle file, a corresponding subtitle segment according to the entry's start time point, end time point and subtitle content.
15. A display control device for external subtitles, comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
obtain the time point of a current video frame;
obtain the current subtitle image corresponding to the current video frame in the case that a subtitle segment corresponding to the time point of the current video frame exists;
generate a video frame texture corresponding to the current video frame and a subtitle texture corresponding to the current subtitle image; and
render the video frame texture and the subtitle texture by a graphics processing unit (GPU).
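The four processor steps of claim 15 form a per-frame control flow. The sketch below is illustrative only: `render_subtitle_image`, `upload_texture` and `gpu_render` are hypothetical stand-ins for a real subtitle rasterizer and GPU texture-upload/draw calls, injected as callables so the flow itself stays self-contained.

```python
def display_frame(frame_time_ms, segments, render_subtitle_image,
                  upload_texture, gpu_render):
    """Per-frame flow of claim 15; returns the number of textures drawn."""
    # 1. obtain the time point of the current video frame (frame_time_ms)
    # 2. check whether a subtitle segment covers that time point (inclusive)
    segment = next((s for s in segments
                    if s["start_ms"] <= frame_time_ms <= s["end_ms"]), None)
    # 3. generate the video frame texture, plus a subtitle texture if needed
    textures = [upload_texture("video", frame_time_ms)]
    if segment is not None:
        image = render_subtitle_image(segment["content"])
        textures.append(upload_texture("subtitle", image))
    # 4. render all textures through the GPU
    gpu_render(textures)
    return len(textures)
```

When no segment covers the frame's time point, only the video texture is uploaded and drawn, so frames without subtitles skip the subtitle path entirely.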
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710066207.2A CN106899875A (en) | 2017-02-06 | 2017-02-06 | The display control method and device of plug-in captions |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106899875A true CN106899875A (en) | 2017-06-27 |
Family
ID=59198773
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710066207.2A Pending CN106899875A (en) | 2017-02-06 | 2017-02-06 | The display control method and device of plug-in captions |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106899875A (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1172764A2 (en) * | 2000-05-31 | 2002-01-16 | Sony Computer Entertainment Inc. | Image drawing |
US20070262995A1 (en) * | 2006-05-12 | 2007-11-15 | Available For Licensing | Systems and methods for video editing |
CN101483723A (en) * | 2008-01-11 | 2009-07-15 | 新奥特(北京)视频技术有限公司 | Method for performance guarantee of television subtitle playing apparatus based on diversity application |
CN101500127A (en) * | 2008-01-28 | 2009-08-05 | 德信智能手机技术(北京)有限公司 | Method for synchronously displaying subtitle in video telephone call |
US20100259676A1 (en) * | 2009-04-09 | 2010-10-14 | Ati Technologies Ulc | Detection and enhancement of in-video text |
CN103700385A (en) * | 2012-09-27 | 2014-04-02 | 深圳市快播科技有限公司 | Media player, playing method, and video post-processing method in hardware acceleration mode |
CN103905744A (en) * | 2014-04-10 | 2014-07-02 | 中央电视台 | Rendering synthetic method and system |
CN104113727A (en) * | 2013-04-17 | 2014-10-22 | 华为技术有限公司 | Monitoring video playing method, device and system |
CN104768075A (en) * | 2015-04-16 | 2015-07-08 | 福建升腾资讯有限公司 | External subtitle redirection method and system based on DirectShow |
CN105828165A (en) * | 2016-04-29 | 2016-08-03 | 维沃移动通信有限公司 | Method and terminal for acquiring caption |
CN105933724A (en) * | 2016-05-23 | 2016-09-07 | 福建星网视易信息系统有限公司 | Video producing method, device and system |
CN106303659A (en) * | 2016-08-22 | 2017-01-04 | 暴风集团股份有限公司 | The method and system of picture and text captions are loaded in player |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109729420A (en) * | 2017-10-27 | 2019-05-07 | 腾讯科技(深圳)有限公司 | Image processing method and device, mobile terminal and computer readable storage medium |
CN109729420B (en) * | 2017-10-27 | 2021-04-20 | 腾讯科技(深圳)有限公司 | Picture processing method and device, mobile terminal and computer readable storage medium |
CN107995440A (en) * | 2017-12-13 | 2018-05-04 | 北京奇虎科技有限公司 | A kind of video caption textures generation method and device |
CN108449651A (en) * | 2018-05-24 | 2018-08-24 | 腾讯科技(深圳)有限公司 | Subtitle adding method and device |
CN108449651B (en) * | 2018-05-24 | 2021-11-02 | 腾讯科技(深圳)有限公司 | Subtitle adding method, device, equipment and storage medium |
CN113473045A (en) * | 2020-04-26 | 2021-10-01 | 海信集团有限公司 | Subtitle adding method, device, equipment and medium |
CN113455012A (en) * | 2020-09-03 | 2021-09-28 | 深圳市大疆创新科技有限公司 | Rendering method, rendering device, mobile terminal and storage medium |
WO2022047686A1 (en) * | 2020-09-03 | 2022-03-10 | 深圳市大疆创新科技有限公司 | Rendering method, apparatus, mobile terminal, and storage medium |
CN114900736A (en) * | 2022-03-28 | 2022-08-12 | 网易(杭州)网络有限公司 | Video generation method and device and electronic equipment |
CN117336563A (en) * | 2023-10-23 | 2024-01-02 | 书行科技(北京)有限公司 | Externally hung subtitle display method and related products |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106899875A (en) | The display control method and device of plug-in captions | |
CN106792170A (en) | Method for processing video frequency and device | |
CN104731688B (en) | Point out the method and device of reading progress | |
CN107948708A (en) | Barrage methods of exhibiting and device | |
CN106804000A (en) | Direct playing and playback method and device | |
CN106993229A (en) | Interactive attribute methods of exhibiting and device | |
CN105389296A (en) | Information partitioning method and apparatus | |
CN108259991A (en) | Method for processing video frequency and device | |
CN104166689A (en) | Presentation method and device for electronic book | |
CN104035995A (en) | Method and device for generating group tags | |
CN113691833B (en) | Virtual anchor face changing method and device, electronic equipment and storage medium | |
CN108924644A (en) | Video clip extracting method and device | |
CN108737891A (en) | Video material processing method and processing device | |
CN108174269A (en) | Visualize audio frequency playing method and device | |
CN107807762A (en) | Method for showing interface and device | |
CN109146789A (en) | Picture splicing method and device | |
CN104461348A (en) | Method and device for selecting information | |
CN107943550A (en) | Method for showing interface and device | |
CN107797741A (en) | Method for showing interface and device | |
CN106991018A (en) | The method and device of changing an interface skin | |
CN108156506A (en) | The progress adjustment method and device of barrage information | |
CN110209877A (en) | Video analysis method and device | |
CN108062364A (en) | Information displaying method and device | |
CN110121106A (en) | Video broadcasting method and device | |
CN106896915A (en) | Input control method and device based on virtual reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170627 |