CN103905744A - Rendering synthesis method and system - Google Patents
- Publication number: CN103905744A (application CN201410145373.8A)
- Authority: CN (China)
- Legal status: Granted (the status is an assumption, not a legal conclusion)
Landscapes
- Television Systems (AREA)
- Studio Circuits (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The invention provides a rendering synthesis method and system. In the method, a video-audio file is loaded; when a subtitle file is selected, loaded and opened, the CPU is called to decode the frame pictures of the video-audio file into frame data, obtaining video-audio picture data, while a GPU render engine is called in parallel to render the information contained in the subtitle file. The GPU render engine then composites the video-audio picture data with the rendered subtitle data, superimposing the subtitles onto the video-audio picture. The invention also provides a rendering synthesis system corresponding to the method. With the method and system, the subtitle picture adapts to changes of the video player window such as maximization and minimization, the sharpness of the subtitle picture is preserved, and the user's browsing experience is improved.
Description
Technical field
The present invention relates to the field of broadcast television, and more particularly to a rendering synthesis method and system for video-audio files and subtitle files.
Background art
With the rapid development of information technology, broadcast media channels and audiences have become increasingly broad and diverse. As the core force of broadcast media, a TV station must consider the target channel and audience during program production. For example, when the same news program is broadcast on different channels or to different audiences, different subtitles must be shown, or the same caption content must be shown in different languages, such as identical captions rendered in different local dialects. In production links such as material storage, retrieval and preview, re-editing, and playout, different subtitles in different languages often need to be handled separately. The same video picture therefore tends to have multiple different subtitles during production, or the subtitles need to be swapped at replay time. How to manage these video files and subtitle files efficiently is thus a problem the TV station must solve in media asset management.
At present, TV stations in the radio, film and television industry manage media audio/video programs through media asset management systems. In managing multi-version subtitles and video, however, the common practice is unified management: the video-audio file and the subtitle file are bound together, and during program production the subtitles are superimposed directly onto the video picture, transcoded and packaged. The composited file saved into the media asset management system is then a single program file, and the subtitle file is no longer independent, which hinders later re-editing. Meanwhile, because the media asset system must later provide retrieval and browsing of video-audio for users, and in order to reduce the transmitted data volume and bandwidth, the high-bitrate high/standard-definition video-audio file must be converted to a low bitrate when it is saved; the resulting low-bitrate file likewise has the video picture and the subtitles composited together.
Fig. 3 is a schematic diagram of the prior-art subtitle management flow for video/audio. As shown in Fig. 3, during retrieval and browsing only the low-bitrate video file with superimposed text can be browsed in the media asset system. Moreover, because the subtitle file has already been composited onto the video, only the CPU can decode and render the picture, and computing resources such as the GPU cannot be fully utilized. When a program has multiple subtitle files, no subtitle file can be selected either: the subtitle must be changed in the program production system, re-composited, converted to a low bitrate again, and saved back to the media asset system.
One function of the media asset system, however, is precisely to let business departments search the stored video-audio material for reuse. When the media asset system wants to reuse a program for re-editing or rebroadcast, the video picture already has subtitles burned in, and the producer cannot obtain the pure original picture without subtitles. This is why some programs on TV show blurred regions such as mosaics at the bottom of the frame, with new subtitles overlaid on top of the blurred region.
Of course, some TV station business departments have also adopted a scheme that separates the video material from the subtitles: the video-audio file is composited without subtitles during program production, the subtitle file is likewise left out of the low-bitrate conversion, and the media asset system then stores the high/standard-definition video-audio file, the subtitle file, and the low-bitrate file separately. But when programs are retrieved and browsed in the media asset system, the subtitle file cannot be browsed at the same time; it can only be stored as an independent attachment and provides no effective reference information to the user. The video picture is again decoded and rendered by the CPU alone, so GPU computing resources still cannot be fully utilized.
The above are the problems encountered by prior-art schemes. If a program is composited and converted to a low bitrate once for every subtitle file, production efficiency and management efficiency drop, and labor cost in the production process rises. In addition, the original video picture cannot be preserved, which is regrettable for later reuse of the video-audio material. If the material picture is instead stored separately from the subtitles in the media asset system, the user cannot be given real-time, intuitive subtitle reference information; the user can only open the subtitle file with third-party software to inspect the caption content, which cannot guarantee synchronization between the caption information and the video-audio picture, and brings inconvenience and confusion in use.
Summary of the invention
In view of the above problems, the present invention proposes a rendering synthesis method and system that make full use of CPU and GPU computing resources. For a user retrieving and browsing a low-bitrate file in a media asset system, they render the picture content in real time as the player window changes during browsing, while guaranteeing strict synchronization of subtitles and picture and high fidelity of the rendered picture.
In one aspect, the invention provides a rendering synthesis method comprising the following steps: load the video-audio file; when a subtitle file is selected, loaded and opened, call the CPU to decode the frame pictures of the video-audio file into frame data, obtaining video-audio picture data, while calling the GPU render engine to render the information contained in the subtitle file; then have the GPU render engine composite the video-audio picture data with the rendered subtitle data, superimposing the subtitles onto the video-audio picture.
Preferably, in the rendering synthesis method provided by the invention, when no subtitle file is selected for loading, the CPU is called directly to render the decoded video-audio picture, completing the processing of the video-audio picture.
In another aspect, the invention also provides a rendering synthesis system comprising a CPU video-audio decoder and a GPU render engine, wherein the CPU video-audio decoder decodes the frame pictures of the video-audio file into frame data, and the GPU render engine renders the information contained in the subtitle file and composites the decoded video-audio picture data with the rendered subtitle data.
Preferably, in the rendering synthesis system provided by the invention, the CPU video-audio decoder is further used, when no subtitle file is loaded, to decode the video-audio and render it on the CPU, completing the processing of the video-audio picture.
The rendering synthesis method and system provided by the invention make full use of CPU and GPU computing resources, combining computation with rendering. Because the subtitle file is rendered separately, the video file retains the most original picture, providing more complete original video information when the program is later reused by business systems, and simplifying the program production workflow. The method and system can also composite and browse new subtitles superimposed directly on the original video. Furthermore, because the invention uses a render engine and takes full advantage of GPU computing resources, the size, position, color, font and transparency of the subtitles can be adjusted in real time during rendering according to the size, background, contrast and brightness of the video picture. The subtitle picture thus adapts to changes of the video player window such as maximization and minimization, the sharpness of the subtitle picture is guaranteed, and the user's viewing experience is improved.
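The window-adaptive sizing described above can be illustrated with a toy calculation; the base font size, reference width and minimum clamp below are hypothetical values chosen for illustration, not taken from the patent.

```python
def scaled_font_px(base_font_px, base_width_px, window_width_px):
    # Scale the subtitle font linearly with the player window width,
    # clamped to a readable minimum (all constants are illustrative).
    return max(10, round(base_font_px * window_width_px / base_width_px))

# A 24 px caption authored against a 640 px wide picture:
print(scaled_font_px(24, 640, 1920))  # player maximized to 1920 px wide -> 72
print(scaled_font_px(24, 640, 320))   # player shrunk to 320 px wide     -> 12
```

In a real render engine the same ratio would also drive caption position and line wrapping; only the font size is shown here.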
Brief description of the drawings
Specific embodiments of the invention are described below with reference to accompanying drawings, wherein:
Fig. 1 is a schematic diagram of the separated management flow for video-audio pictures and subtitles according to the present invention.
Fig. 2 is a flow chart of the steps of the method provided by the invention.
Fig. 3 is a schematic diagram of the prior-art subtitle management flow for video/audio.
Detailed description of embodiments
To make the technical solution and advantages of the present invention clearer, exemplary embodiments of the invention are described in more detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not an exhaustive list of all embodiments.
The method and system provided by the invention are mainly used in a media asset management system while the user is browsing. Because the invention adopts a technique in which the CPU and GPU jointly compute and render, the subtitle file is rendered separately, the video file retains the most original picture, more complete original video information is available when the program is later reused by business systems, and the program production workflow is simplified. To this end, in the early compositing stage of program production, the invention manages the video-audio picture of the material separately from the subtitles, so that new subtitles can be superimposed directly on the original video, composited and browsed. The separated management of video-audio pictures and subtitles is described in detail below with reference to Fig. 1.
Fig. 1 is a schematic diagram of the separated management flow for video-audio pictures and subtitles according to the present invention. As shown in Fig. 1, in step S101 the corresponding video file and audio file are composited into a video-audio file. In this case the video-audio file is a video file with audio. In this document, however, the video-audio file may also be a video file without audio, for example when no audio is present. Then, in step S102, the video-audio file is transcoded to generate a low-bitrate file.
A low-bitrate file is generated in step S102 because the media asset system browses at a low bitrate with 640×360 resolution. This avoids the situation where a video with directly composited subtitles is displayed full-screen on a high-definition display such as 1920×1080: information such as the subtitles would look blurred and distorted, and in severe cases mosaics may appear, degrading the user's viewing experience.
Finally, step S103 is executed: the low-bitrate file generated from the video-audio file is saved into the media asset management system, while the subtitle file is saved into the media asset system directly, without processing. The subtitle file must retain the corresponding timecode information; the timecode information of typical subtitles mostly adopts the organization "in point, out point, text". The invention supports subtitle files in multiple formats, such as SRT, XML, TXT and DLG. For example, an XML subtitle file supported by the invention adopts the following organization.
For example, an SRT subtitle file supported by the invention adopts the following organization.
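The patent's own sample files are not reproduced here. As a hedged illustration, a standard SRT entry follows the "in point, out point, text" organization described above and can be parsed into timecoded entries as sketched below; the snippet contents and caption wording are hypothetical.

```python
import re

# Hypothetical SRT snippet: index line, "in --> out" timecode line, caption text.
SRT_SAMPLE = """\
1
00:10:13,119 --> 00:10:22,439
What they paint is mostly free creation.
"""

TIMECODE = re.compile(r"(\d+):(\d+):(\d+),(\d+)")

def parse_timecode(tc):
    # "HH:MM:SS,mmm" -> milliseconds
    h, m, s, ms = map(int, TIMECODE.match(tc.strip()).groups())
    return ((h * 60 + m) * 60 + s) * 1000 + ms

def parse_srt(text):
    # Each blank-line-separated block: index, timecode line, caption lines.
    entries = []
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        start, end = (parse_timecode(t) for t in lines[1].split("-->"))
        entries.append((start, end, "\n".join(lines[2:])))
    return entries

def subtitle_at(entries, t_ms):
    # Return the caption whose [in, out) span contains the frame timecode.
    for start, end, caption in entries:
        if start <= t_ms < end:
            return caption
    return None
```

Timecodes falling in no entry's span simply yield no caption, which is the no-subtitle case handled later in the embodiment.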
The resource objects now managed in the media asset system include the high-bitrate high/standard-definition video file, the low-bitrate video-audio file and the subtitle file, for reuse and browsing.
The method and system provided by the invention are mainly used in a media asset management system while the user is browsing: when the user retrieves and browses a video-audio program, by choosing to load and open a subtitle file or not, the CPU video-audio decoding and GPU render engine provided by the invention complete the display of the final browsed picture. The steps of the method are described in detail below with reference to Fig. 2, the flow chart of the steps of the method provided by the invention.
In step S201, the video-audio file is checked and loaded, and it is determined whether a subtitle file is selected for loading. In this document, the video-audio file may be a composite of corresponding video and audio files, or a video file without audio. In general, a video-audio file composited from a video file and an audio file usually has subtitles in the media asset management system; when the user retrieves and browses such a file and needs the matching subtitles, loading the subtitle file is selected. A video file without audio, such as some scenery clips or documentaries, has no corresponding subtitles to load, so loading a subtitle file need not be considered. Of course, when the user retrieves and browses a video-audio file without needing subtitles, the subtitle file also need not be loaded. The two cases after checking and loading the video-audio file, selecting or not selecting to load a subtitle file, are described in detail below.
If a subtitle file is selected, loaded and opened, steps S202 and S203 are executed in parallel: the CPU is called to decode the frame pictures of the video-audio file into frame data, obtaining video-audio picture data, and the GPU render engine is called to render the information contained in the subtitle file. Step S204 is then executed: the GPU render engine composites the decoded video-audio picture data with the rendered subtitle data, completing the video-audio picture. The information contained in the subtitle file includes the caption text corresponding to each frame of the video picture and the font, color, size, position, transparency and timecode of the captions.
Specifically, when a subtitle file is selected, loaded and opened, CPU and GPU computing resources are called, and the CPU video-audio decoder and the GPU render engine are initialized simultaneously, e.g. setting the file path, initializing the buffer, and loading the file. The CPU video-audio decoder decodes the video-audio frame pictures on the CPU and obtains the "hour, minute, second, frame" timecode information of the video picture in real time during decoding; whether the player is in variable-speed playback or fast seek, the decoder obtains accurate timecode information in real time while decoding. The render engine then looks up the corresponding timecode information in the subtitle file. After obtaining the corresponding timecode information, it first queries, according to the timecode position of the video, whether there are captions at this timecode. If there are, the GPU renders the queried caption text with its font, color, size, position and transparency, then composites the CPU-decoded frame video-audio picture data on the GPU, superimposing the captions onto the video picture, completing the frame and finally displaying it.
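The per-frame flow just described — decode, timecode lookup, optional GPU caption rendering and compositing — can be sketched with a stub in place of the real decoder and render engine. `StubRenderEngine`, `present_frame` and the tuple layout of `entries` are assumptions made for illustration, not the patent's implementation.

```python
class StubRenderEngine:
    # Hypothetical stand-in for the GPU render engine.
    def render_text(self, caption):
        # Stands in for GPU glyph rendering (font/color/size/position/alpha).
        return f"glyphs[{caption}]"

    def composite(self, frame, glyphs):
        # Stands in for the GPU overlay of rendered captions onto the picture.
        return f"{frame}+{glyphs}"

def present_frame(frame, t_ms, entries, engine):
    # Query the subtitle file at the frame's timecode;
    # "entries" is a list of (in_ms, out_ms, caption) tuples.
    caption = next((c for lo, hi, c in entries if lo <= t_ms < hi), None)
    if caption is None:
        return frame                          # no caption at this timecode
    glyphs = engine.render_text(caption)      # GPU renders the caption
    return engine.composite(frame, glyphs)    # GPU superimposes it

engine = StubRenderEngine()
entries = [(613119, 622439, "free creation")]
print(present_frame("frame", 608039, entries, engine))  # frame
print(present_frame("frame", 614033, entries, engine))  # frame+glyphs[free creation]
```

In the patent the decode step (S202) and the caption rendering (S203) run in parallel; they are shown sequentially here for brevity.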
The application of the invention in a practical scenario is illustrated below. For example, in a media asset management system a user wants to retrieve and browse a video-audio file recording a baby's growth, together with its subtitles. In this embodiment the subtitle file is organized as follows:
The user retrieves and browses the relevant video-audio. While loading and opening the video-audio file, loading and opening the corresponding subtitle file is selected, and the CPU video-audio decoder and the GPU render engine perform initialization such as setting the file path, initializing the buffer, and loading the files. The CPU video-audio decoder decodes the video-audio frame pictures on the CPU; for example, when decoding reaches the frame in which the baby is scribbling a drawing on the wall, the timecode 00:10:08,039 is obtained in real time. The render engine looks up the corresponding timecode in the subtitle file (an SRT subtitle file in this embodiment).
After the render engine obtains the timecode 00:10:08,039, it queries, according to the timecode position of the video, whether there are captions at this timecode. Because the baby in this frame is still scribbling on the wall, there is no corresponding dialogue or narration caption; as shown in the subtitle file, the moment 00:10:08,039 falls in the span 00:10:07,039 --> 00:10:13,119, which contains no captions. The render engine renders the CPU-decoded frame video-audio picture data on the GPU, completes the frame, and displays it.
When decoding reaches the frame in which the baby has finished the drawing on the wall, the timecode 00:10:14,033 is obtained in real time. After the render engine obtains the timecode 00:10:14,033, it queries, according to the timecode position of the video, whether there are captions at this timecode. In this frame the baby has completed the scribbled drawing on the wall, and there is a corresponding narration. As shown in the subtitle file, the moment 00:10:14,033 falls in the span 00:10:13,119 --> 00:10:22,439, whose narration caption reads, roughly, that what they paint is mostly free creation. The render engine renders the queried caption text on the GPU with its font, color, size, position and transparency, then composites the CPU-decoded frame video-audio picture data on the GPU, superimposing the caption onto the video picture, completing the frame and displaying it.
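The span-membership checks in this worked example reduce to millisecond arithmetic; a minimal sketch using the timecodes above:

```python
def to_ms(h, m, s, ms):
    # "HH:MM:SS,mmm" components -> milliseconds
    return ((h * 60 + m) * 60 + s) * 1000 + ms

# The caption span 00:10:13,119 --> 00:10:22,439 from the example:
span_in, span_out = to_ms(0, 10, 13, 119), to_ms(0, 10, 22, 439)

first_frame = to_ms(0, 10, 8, 39)     # 00:10:08,039, still scribbling
print(span_in <= first_frame < span_out)   # False: no caption rendered

second_frame = to_ms(0, 10, 14, 33)   # 00:10:14,033, drawing finished
print(span_in <= second_frame < span_out)  # True: caption is rendered
```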
If no subtitle file is selected for loading, step S205 is executed: the CPU is called directly to decode the video-audio, and the decoded video-audio picture is rendered.
Correspondingly, in combination with the above method, the invention also proposes a rendering synthesis system comprising a CPU video-audio decoder and a GPU render engine. The CPU video-audio decoder decodes each frame picture of the video-audio into frame data; the GPU render engine renders the information contained in the subtitle file and composites it with the video-audio picture data decoded by the CPU video-audio decoder. In addition, after decoding the video-audio frame data, the CPU video-audio decoder also renders the decoded video-audio picture.
The method and system provided by the invention adopt a render engine and take full advantage of GPU computing resources: during rendering, the size, position, color, font and transparency of the subtitles can be adjusted in real time according to the size, background, contrast and brightness of the video picture, so that the subtitle picture adapts to changes of the video player window such as maximization and minimization, the sharpness of the subtitle picture is guaranteed, and the user's viewing experience is improved.
The above embodiments are intended only to illustrate the technical solution of the invention, not to limit it. Those skilled in the art may make various changes, substitutions and modifications without departing from the spirit and essence of the invention; obviously, such changes, substitutions and modifications shall all fall within the protection scope of the claims of the present invention.
Claims (10)
1. A rendering synthesis method, characterized by comprising the following steps:
loading a video-audio file;
when a subtitle file is selected, loaded and opened,
calling a CPU to decode the frame pictures of the video-audio file into frame data, obtaining video-audio picture data;
meanwhile, calling a GPU render engine to render the information contained in the subtitle file;
then compositing, by said GPU render engine, said video-audio picture data with the rendered subtitle data, superimposing the subtitles onto the video-audio picture.
2. The rendering synthesis method of claim 1, characterized in that the video-audio file is a video file with audio, or a video file without audio.
3. The rendering synthesis method of claim 1 or 2, characterized in that the information contained in the subtitle file comprises the caption text corresponding to each frame of the video-audio picture and the font, color, size, position, transparency and timecode of the captions.
4. The rendering synthesis method of claim 1 or 2, characterized in that the CPU obtains timecode information in real time after decoding the frame pictures of the video-audio file into frame data; and
the render engine looks up the corresponding timecode information in the subtitle file, queries according to the timecode position whether there are captions at this timecode, and, if so, performs GPU caption rendering.
5. The rendering synthesis method of claim 1 or 2, characterized in that the method further comprises:
when no subtitle file is selected for loading, directly calling the CPU to render the decoded video-audio picture, completing the processing of the video-audio picture.
6. A rendering synthesis system, characterized by comprising a CPU video-audio decoder and a GPU render engine, wherein
the CPU video-audio decoder is configured to decode the frame pictures of a video-audio file into frame data; and
the GPU render engine is configured to render the information contained in a subtitle file, and to composite the decoded video-audio picture data with the rendered subtitle data.
7. The rendering synthesis system of claim 6, characterized in that the video-audio file is a video file with audio, or a video file without audio.
8. The rendering synthesis system of claim 6 or 7, characterized in that the information contained in the subtitle file comprises the caption text corresponding to each frame of the video-audio picture and the font, color, size, position, transparency and timecode of the captions.
9. The rendering synthesis system of claim 6 or 7, characterized in that the CPU video-audio decoder obtains timecode information in real time after decoding the frame pictures of the video-audio file into frame data; and
the GPU render engine looks up the corresponding timecode information in the subtitle file, queries according to the timecode position whether there are captions at this timecode, and, if so, performs GPU caption rendering.
10. The rendering synthesis system of claim 6 or 7, characterized in that the CPU video-audio decoder is further configured, when no subtitle file is loaded, to perform CPU rendering of the video-audio picture it has decoded, completing the processing of the video-audio picture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410145373.8A CN103905744B (en) | 2014-04-10 | 2014-04-10 | Rendering synthesis method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103905744A true CN103905744A (en) | 2014-07-02 |
CN103905744B CN103905744B (en) | 2017-07-11 |
Family
ID=50996874
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410145373.8A Active CN103905744B (en) | 2014-04-10 | 2014-04-10 | Rendering synthesis method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103905744B (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106210838A (en) * | 2016-07-14 | 2016-12-07 | 腾讯科技(深圳)有限公司 | Caption presentation method and device |
CN106303659A (en) * | 2016-08-22 | 2017-01-04 | 暴风集团股份有限公司 | The method and system of picture and text captions are loaded in player |
CN106303579A (en) * | 2016-09-20 | 2017-01-04 | 上海斐讯数据通信技术有限公司 | Video play device and method |
CN106600527A (en) * | 2016-12-19 | 2017-04-26 | 广东威创视讯科技股份有限公司 | Method and device for embedding adaptive-color text into image |
CN106899875A (en) * | 2017-02-06 | 2017-06-27 | 合网络技术(北京)有限公司 | The display control method and device of plug-in captions |
CN107211169A (en) * | 2015-02-03 | 2017-09-26 | 索尼公司 | Dispensing device, sending method, reception device and method of reseptance |
WO2017161768A1 (en) * | 2016-03-22 | 2017-09-28 | 乐视控股(北京)有限公司 | Method and device for generating caption background in video frame |
CN107995440A (en) * | 2017-12-13 | 2018-05-04 | 北京奇虎科技有限公司 | A kind of video caption textures generation method and device |
CN110460889A (en) * | 2019-09-16 | 2019-11-15 | 深圳市迅雷网络技术有限公司 | A kind of video throws screen method, apparatus, system and storage medium |
CN111133763A (en) * | 2017-09-26 | 2020-05-08 | Lg 电子株式会社 | Superposition processing method and device in 360 video system |
CN112312196A (en) * | 2020-11-13 | 2021-02-02 | 深圳市前海手绘科技文化有限公司 | Video subtitle making method |
CN115334348A (en) * | 2021-05-10 | 2022-11-11 | 腾讯科技(北京)有限公司 | Video subtitle adjusting method and device, electronic equipment and storage medium |
CN116916094A (en) * | 2023-09-12 | 2023-10-20 | 联通在线信息科技有限公司 | Dual-video mixed-stream playing method, player and storage medium |
CN117676053A (en) * | 2024-01-31 | 2024-03-08 | 成都华栖云科技有限公司 | Dynamic subtitle rendering method and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101188681A (en) * | 2007-11-19 | 2008-05-28 | 新奥特(北京)视频技术有限公司 | A video and audio and image separation playing system |
CN101188696A (en) * | 2007-11-19 | 2008-05-28 | 新奥特(北京)视频技术有限公司 | A system for separation, preparation and playing of TV station caption and video |
CN101188695A (en) * | 2007-11-19 | 2008-05-28 | 新奥特(北京)视频技术有限公司 | A method for separation, preparation and playing of TV station caption and video |
CN101540847A (en) * | 2008-03-21 | 2009-09-23 | 株式会社康巴思 | Caption creation system and caption creation method |
CN102572298A (en) * | 2010-12-29 | 2012-07-11 | 新奥特(北京)视频技术有限公司 | System and method for rendering subtitle in advance |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101188681A (en) * | 2007-11-19 | 2008-05-28 | 新奥特(北京)视频技术有限公司 | A video and audio and image separation playing system |
CN101188696A (en) * | 2007-11-19 | 2008-05-28 | 新奥特(北京)视频技术有限公司 | A system for separation, preparation and playing of TV station caption and video |
CN101188695A (en) * | 2007-11-19 | 2008-05-28 | 新奥特(北京)视频技术有限公司 | A method for separation, preparation and playing of TV station caption and video |
CN101540847A (en) * | 2008-03-21 | 2009-09-23 | 株式会社康巴思 | Caption creation system and caption creation method |
CN102572298A (en) * | 2010-12-29 | 2012-07-11 | 新奥特(北京)视频技术有限公司 | System and method for rendering subtitle in advance |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107211169A (en) * | 2015-02-03 | 2017-09-26 | 索尼公司 | Transmission device, transmission method, reception device and reception method |
WO2017161768A1 (en) * | 2016-03-22 | 2017-09-28 | 乐视控股(北京)有限公司 | Method and device for generating caption background in video frame |
CN106210838B (en) * | 2016-07-14 | 2019-05-24 | 腾讯科技(深圳)有限公司 | Caption presentation method and device |
CN106210838A (en) * | 2016-07-14 | 2016-12-07 | 腾讯科技(深圳)有限公司 | Caption presentation method and device |
CN106303659A (en) * | 2016-08-22 | 2017-01-04 | 暴风集团股份有限公司 | Method and system for loading graphic and text captions in a player |
CN106303579A (en) * | 2016-09-20 | 2017-01-04 | 上海斐讯数据通信技术有限公司 | Video play device and method |
CN106600527A (en) * | 2016-12-19 | 2017-04-26 | 广东威创视讯科技股份有限公司 | Method and device for embedding adaptive-color text into image |
CN106899875A (en) * | 2017-02-06 | 2017-06-27 | 合网络技术(北京)有限公司 | Display control method and device for plug-in captions |
CN111133763A (en) * | 2017-09-26 | 2020-05-08 | Lg 电子株式会社 | Superposition processing method and device in 360 video system |
CN111133763B (en) * | 2017-09-26 | 2022-05-10 | Lg 电子株式会社 | Superposition processing method and device in 360 video system |
US11575869B2 (en) | 2017-09-26 | 2023-02-07 | Lg Electronics Inc. | Overlay processing method in 360 video system, and device thereof |
CN107995440A (en) * | 2017-12-13 | 2018-05-04 | 北京奇虎科技有限公司 | Video caption texture generation method and device |
CN110460889A (en) * | 2019-09-16 | 2019-11-15 | 深圳市迅雷网络技术有限公司 | Video screen-casting method, apparatus, system and storage medium |
CN112312196A (en) * | 2020-11-13 | 2021-02-02 | 深圳市前海手绘科技文化有限公司 | Video subtitle making method |
CN115334348A (en) * | 2021-05-10 | 2022-11-11 | 腾讯科技(北京)有限公司 | Video subtitle adjusting method and device, electronic equipment and storage medium |
CN116916094A (en) * | 2023-09-12 | 2023-10-20 | 联通在线信息科技有限公司 | Dual-video mixed-stream playing method, player and storage medium |
CN116916094B (en) * | 2023-09-12 | 2024-01-19 | 联通在线信息科技有限公司 | Dual-video mixed-stream playing method, player and storage medium |
CN117676053A (en) * | 2024-01-31 | 2024-03-08 | 成都华栖云科技有限公司 | Dynamic subtitle rendering method and system |
CN117676053B (en) * | 2024-01-31 | 2024-04-16 | 成都华栖云科技有限公司 | Dynamic subtitle rendering method and system |
Also Published As
Publication number | Publication date |
---|---|
CN103905744B (en) | 2017-07-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103905744A (en) | Rendering synthesis method and system | |
US11770589B2 (en) | Using text data in content presentation and content search | |
EP2677738B1 (en) | Display control method, recording medium, and display control device | |
CN1997153B (en) | A method and device for computer multi-video playing | |
JP4518194B2 (en) | Generating apparatus, generating method, and program | |
US20080279535A1 (en) | Subtitle data customization and exposure | |
KR102310241B1 (en) | Source device, controlling method thereof, sink device and processing method for improving image quality thereof | |
CN105830456B (en) | The method and apparatus of transmission and receiving media data | |
CN104065979A (en) | Method for dynamically displaying information related with video content and system thereof | |
EP2041974A1 (en) | Method and apparatus for encoding/decoding signal | |
US10972809B1 (en) | Video transformation service | |
US8699846B2 (en) | Reproducing device, reproducing method, program, and data structure | |
JP2013046198A (en) | Television viewing apparatus | |
US20160322080A1 (en) | Unified Processing of Multi-Format Timed Data | |
CN108769806B (en) | Media content display method and device | |
JP2020017945A (en) | Automated media publishing | |
US10390107B2 (en) | Apparatus and method for transceiving scene composition information in multimedia communication system | |
CN101594476A (en) | Method for processing ultra-long caption rendering |
CN101594477A (en) | System for processing ultra-long caption rendering |
Jamil et al. | Overview of JPEG Snack: a novel international standard for the snack culture | |
CN102088568B (en) | Subtitle production system |
CN101594479B (en) | System for processing ultra-long caption data |
US20090161012A1 (en) | Dynamic multilayer video processing method | |
JP2009010846A (en) | Digital broadcasting receiver | |
CN104301791A (en) | Method for optimizing starting advertisements of set top box |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||