CN113014984A - Method and device for adding subtitles in real time, computer equipment and computer storage medium

Method and device for adding subtitles in real time, computer equipment and computer storage medium

Info

Publication number
CN113014984A
Authority
CN
China
Prior art keywords
video
audio stream
stream data
character
real time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911310259.5A
Other languages
Chinese (zh)
Inventor
周雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oneplus Technology Shenzhen Co Ltd
Original Assignee
Oneplus Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oneplus Technology Shenzhen Co Ltd filed Critical Oneplus Technology Shenzhen Co Ltd
Priority to CN201911310259.5A
Publication of CN113014984A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/485 End-user interface for client configuration
    • H04N21/4856 End-user interface for client configuration for language selection, e.g. for the menu or subtitles

Abstract

The invention provides a method and a device for adding subtitles in real time, a computer device and a computer storage medium. The method for adding subtitles in real time comprises the following steps: when a video is played, the system acquires audio stream data in real time; the audio stream data is processed by a preset translation module to obtain a text string corresponding to the audio stream data; and the text string is displayed on the video picture by using a dot matrix font library. With this method, when the computer device plays a video, the audio stream data can be extracted directly from the decoding service at the bottom layer of the device's system and translated, so that real-time subtitles are obtained. Real-time subtitles can therefore be added to any video or audio decoded by the bottom layer of the system during playback, which improves the user experience.

Description

Method and device for adding subtitles in real time, computer equipment and computer storage medium
Technical Field
The invention relates to the technical field of real-time subtitles, in particular to a method and a device for adding subtitles in real time, computer equipment and a computer storage medium.
Background
When playing a video, existing video applications generally decode it with a system decoding service pre-installed on the computer device, while subtitles are generally displayed by synthesizing a subtitle stream into the video stream with a video production tool. Subtitles therefore have to be prepared and synthesized into the video before it is released, or added after release, which degrades the user experience.
A method for adding subtitles in real time while a video is being played is lacking.
Disclosure of Invention
In view of the above problems, the present invention provides a method, an apparatus, a computer device and a computer storage medium for adding subtitles in real time, so that when playing a video the computer device can add subtitles in real time to any video or audio decoded by the bottom layer of the system, thereby improving the user experience.
To achieve this purpose, the invention adopts the following technical solution:
A method for adding subtitles in real time comprises the following steps:
when a video is played, the system acquires audio stream data in real time;
processing the audio stream data with a preset translation module to obtain a text string corresponding to the audio stream data;
and displaying the text string on the video picture by using a dot matrix font library.
Preferably, in the method for adding subtitles in real time, the step of acquiring audio stream data in real time by the system when a video is played comprises:
when the video is played, controlling, by the video decoding service, a multimedia extractor to extract the audio stream data from the video stream being decoded in real time.
Preferably, in the method for adding subtitles in real time, the preset translation module is a system translation module;
the step of processing the audio stream data by using a preset translation module to obtain a character string corresponding to the audio stream data comprises the following steps:
calling the system translation module through a pre-established callback function, and transmitting the audio stream data to the system translation module;
and acquiring the text string output by the system translation module, and transmitting the text string to a video decoding service through a hardware abstraction layer interface, so that the video decoding service displays the text string on the video picture according to a dot matrix font library.
Preferably, in the method for adding subtitles in real time, the preset translation module is a cloud translation service;
the step of processing the audio stream data by using a preset translation module to obtain a character string corresponding to the audio stream data comprises the following steps:
transmitting the audio stream data to the cloud translation service through a network;
and receiving the text string transmitted by the cloud translation service, and transmitting the text string to a video decoding service through a hardware abstraction layer interface, so that the video decoding service displays the text string on the video picture according to a dot matrix font library.
Preferably, in the method for adding subtitles in real time, the step of displaying the text string on the video picture by using the dot matrix font library comprises:
calling, by a video decoding service, the dot matrix font library to calculate the character display coordinates of the text string on the video picture;
and replacing pixel points corresponding to the character display coordinates in the video picture with pixel points of a preset color, so as to display the text string.
Preferably, in the method for adding subtitles in real time, when the video is in a YUV coding format, the step of replacing pixel points corresponding to the character display coordinates in the video picture with pixel points of a preset color so as to display the text string comprises:
setting the Y value of the pixel points corresponding to the character display coordinates in the video picture to 255, so as to display the text string.
The invention also provides a device for adding subtitles in real time, which comprises:
an audio stream acquisition module, used for acquiring audio stream data in real time through the system when a video is played;
a text string acquisition module, used for processing the audio stream data with a preset translation module to obtain a text string corresponding to the audio stream data;
and a text string display module, used for displaying the text string on the video picture by using a dot matrix font library.
The invention also provides a computer device, which comprises a memory and a processor, wherein the memory is used for storing a computer program, and the processor runs the computer program to make the computer device execute the method for adding subtitles in real time.
Preferably, the computer device comprises a mobile terminal.
The invention also provides a computer storage medium which stores a computer program that, when run on a processor, performs the method for adding subtitles in real time.
The invention provides a method for adding subtitles in real time, which comprises the following steps: when a video is played, the system acquires audio stream data in real time; the audio stream data is processed by a preset translation module to obtain a text string corresponding to the audio stream data; and the text string is displayed on the video picture by using a dot matrix font library. With this method, when the computer device plays a video, the audio stream data can be extracted directly from the decoding service at the bottom layer of the device's system and translated, so that real-time subtitles are obtained. Real-time subtitles can therefore be added to any video or audio decoded by the bottom layer of the system during playback, which improves the user experience.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings required to be used in the embodiments will be briefly described below, and it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of the present invention. Like components are numbered similarly in the various figures.
Fig. 1 is a flowchart of a method for adding subtitles in real time according to embodiment 1 of the present invention;
fig. 2 is a flowchart of acquiring a text string according to embodiment 2 of the present invention;
fig. 3 is another flowchart for acquiring a text string according to embodiment 2 of the present invention;
Fig. 4 is a flowchart of displaying text strings according to embodiment 3 of the present invention;
Fig. 5 is a flowchart of another method for displaying text strings according to embodiment 3 of the present invention;
fig. 6 is a schematic structural diagram of a real-time subtitle adding apparatus according to embodiment 4 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Hereinafter, the terms "including", "having" and their derivatives, which may be used in various embodiments of the present invention, are only intended to indicate specific features, numbers, steps, operations, elements, components or combinations of the foregoing, and should not be construed as excluding the existence of, or the possibility of adding, one or more other features, numbers, steps, operations, elements, components or combinations of the foregoing.
Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which various embodiments of the present invention belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in various embodiments of the present invention.
Example 1
Fig. 1 is a flowchart of a method for adding subtitles in real time according to embodiment 1 of the present invention, where the method includes the following steps:
step S11: and when the video is played, the system acquires audio stream data in real time.
In the embodiment of the invention, when the computer device plays video or audio, the video or audio is decoded in real time at the bottom layer of the system, and the audio stream data can then be extracted from the decoded video or audio. For example, when a user plays a video with a video playing program, the program calls a decoder pre-stored on the computer device to decode it, and during this real-time decoding the audio stream data of the decoded video or audio can be extracted by an algorithm or application program pre-stored on the computer device.
In the embodiment of the invention, when the video is played, the video decoding service controls a multimedia extractor to extract the audio stream data from the video stream being decoded in real time. In the Android system, the video decoding service is the OMX service; OMX (OpenMAX) is an open multimedia standard used for encoding and decoding in Android.
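The extraction step can be pictured with a minimal sketch based on the public NDK media APIs (AMediaExtractor). This is an illustrative analogue only: the embodiment taps the OMX decoding service inside the system, for which there is no public interface, and the file path and buffer size below are assumptions rather than part of the invention.

```cpp
// Minimal sketch (assumption: NDK media APIs, API level 21+). Selects the
// first audio track of a media file and reads its encoded samples in a loop.
#include <media/NdkMediaExtractor.h>
#include <media/NdkMediaFormat.h>
#include <cstdint>
#include <cstring>
#include <vector>

std::vector<std::vector<uint8_t>> extract_audio_samples(const char* path /* hypothetical path */) {
    std::vector<std::vector<uint8_t>> samples;
    AMediaExtractor* ex = AMediaExtractor_new();
    if (AMediaExtractor_setDataSource(ex, path) != AMEDIA_OK) {
        AMediaExtractor_delete(ex);
        return samples;
    }
    size_t tracks = AMediaExtractor_getTrackCount(ex);
    for (size_t i = 0; i < tracks; ++i) {
        AMediaFormat* fmt = AMediaExtractor_getTrackFormat(ex, i);
        const char* mime = nullptr;
        if (AMediaFormat_getString(fmt, AMEDIAFORMAT_KEY_MIME, &mime) &&
            std::strncmp(mime, "audio/", 6) == 0) {
            AMediaExtractor_selectTrack(ex, i);   // take the first audio track
            AMediaFormat_delete(fmt);
            break;
        }
        AMediaFormat_delete(fmt);
    }
    std::vector<uint8_t> buf(64 * 1024);          // illustrative buffer size
    ssize_t n;
    while ((n = AMediaExtractor_readSampleData(ex, buf.data(), buf.size())) >= 0) {
        // Samples are still encoded (e.g. AAC); they would need decoding to PCM
        // before being handed to a speech-translation component.
        samples.emplace_back(buf.begin(), buf.begin() + n);
        AMediaExtractor_advance(ex);
    }
    AMediaExtractor_delete(ex);
    return samples;
}
```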
Step S12: and processing the audio stream data by utilizing a preset translation module to obtain a character string corresponding to the audio stream data.
In the embodiment of the present invention, after the audio stream data is obtained, it can be transmitted to the preset translation module. Specifically, a callback function can be pre-established in the computer device, and a connection with the preset translation module can be established through this callback function. The preset translation module may be a third-party translation module such as Google Translate or Youdao Translate, which can be pre-stored on the computer device or can be an online translation module. The audio stream data is transmitted to the third-party audio translation service in real time, and at the same time the text string into which the preset translation module translates the audio content is received.
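The callback arrangement can be sketched as follows. The TranslationModule interface, the function names and the types are hypothetical placeholders for whatever system or third-party translation component is actually registered; only the flow (push audio in, receive the translated text string through a pre-established callback) follows the description above.

```cpp
// Hypothetical interface names: nothing here corresponds to a real Android or
// Google/Youdao API; it only illustrates "register a callback, push audio,
// receive the translated text string asynchronously".
#include <cstdint>
#include <functional>
#include <string>

using SubtitleCallback = std::function<void(const std::string& text)>;

class TranslationModule {                    // stands in for the preset translation module
public:
    virtual ~TranslationModule() = default;
    virtual void setCallback(SubtitleCallback cb) = 0;          // pre-established callback
    virtual void pushAudio(const int16_t* pcm, size_t frames,   // real-time audio stream data
                           int sampleRate, int channels) = 0;
};

// Wiring done once when playback starts: every string the translator produces
// is forwarded to whatever will render it onto the video picture.
void attach_translator(TranslationModule& module,
                       const std::function<void(const std::string&)>& renderSubtitle) {
    module.setCallback([renderSubtitle](const std::string& text) {
        renderSubtitle(text);                // e.g. hand the string to the decoding service
    });
}
```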
Step S13: and displaying the character strings on the video picture by utilizing the dot matrix character library.
In the embodiment of the invention, after the text string is received from the preset translation module, it is displayed on the video picture in real time by using the dot matrix font library. The dot matrix font library is a set of dot matrix (bitmap) fonts pre-stored on the computer device, for example the HZK16 or HZK12 font libraries.
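For reference, an HZK16 library is a plain file of 16×16 bitmaps indexed by GB2312 code: each character occupies 32 bytes, at offset ((area - 1) * 94 + (position - 1)) * 32. A minimal lookup sketch follows; the file name and the absence of error handling are simplifications.

```cpp
// Sketch of reading one 16x16 glyph bitmap from an HZK16 dot matrix font file.
// Assumes the two bytes are a GB2312-encoded character; the file name "HZK16"
// is illustrative.
#include <array>
#include <cstdint>
#include <cstdio>

std::array<uint8_t, 32> load_glyph_hzk16(uint8_t hi, uint8_t lo) {
    std::array<uint8_t, 32> glyph{};               // 16 rows x 2 bytes per row
    int area     = hi - 0xA0;                      // GB2312 "qu"  (area) code
    int position = lo - 0xA0;                      // GB2312 "wei" (position) code
    long offset  = ((area - 1) * 94L + (position - 1)) * 32L;
    FILE* f = std::fopen("HZK16", "rb");
    if (f) {
        std::fseek(f, offset, SEEK_SET);
        std::fread(glyph.data(), 1, glyph.size(), f);
        std::fclose(f);
    }
    return glyph;                                  // set bit = pixel of the character
}
```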
In the embodiment of the invention, when the computer device plays a video, the audio stream data is extracted directly from the bottom-layer decoding service of the device's system and translated, so real-time subtitles can be added to any video or audio decoded by the bottom layer of the system while it is being played. For example, real-time subtitles can be added to a live video in this way, which improves the user experience.
Example 2
Fig. 2 is a flowchart for acquiring a text string according to embodiment 2 of the present invention, including the following steps:
step S21: and calling the system translation module through a pre-established callback function, and transmitting the audio stream data to the system translation module.
In the embodiment of the present invention, the preset translation module is a system translation module, that is, a translation module preset on the computer device or downloaded by the user, for example a Youdao or Google translation program.
Step S22: and acquiring the character strings output by the system translation module, and transmitting the character strings to a video decoding service through a hardware abstraction layer interface so that the video decoding service displays the character strings on a video picture according to a dot matrix word stock.
In the embodiment of the present invention, after the text string output by the system translation module is obtained, it can be transmitted in real time to the video decoding service of the computer device through the hardware abstraction layer interface, that is, passed back to the video or audio decoding service so that text subtitles are embedded while the video or audio is decoded. For example, when a video is played, the content of the text string can be embedded into the decoded video so that the video carries real-time subtitles.
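There is no public hardware abstraction layer interface for this hand-off, so the following sketch only illustrates the idea with a hypothetical thread-safe holder: the translation side publishes the latest text string, and the decoding side reads it once per output frame before the frame is displayed.

```cpp
// Hypothetical glue between the translation path and the decoding service:
// not a real HAL interface, just a thread-safe "latest subtitle" slot that the
// per-frame rendering step (Example 3) can consult.
#include <mutex>
#include <string>

class SubtitleChannel {
public:
    void publish(const std::string& text) {        // called from the translation callback
        std::lock_guard<std::mutex> lock(mutex_);
        latest_ = text;
    }
    std::string latest() const {                   // called by the decoder for each frame
        std::lock_guard<std::mutex> lock(mutex_);
        return latest_;
    }
private:
    mutable std::mutex mutex_;
    std::string latest_;
};
```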
Fig. 3 is another flowchart for acquiring a text string according to embodiment 2 of the present invention, which includes the following steps:
step S31: and transmitting the audio stream data to the cloud translation service through a network.
In the embodiment of the present invention, the preset translation module is a cloud translation service, that is, a translation service that the computer device can reach over the Internet once connected, for example a Youdao or Google online translation service.
Step S32: and receiving the character strings transmitted by the cloud translation service, and transmitting the character strings to a video decoding service through a hardware abstraction layer interface so that the video decoding service displays the character strings on a video picture according to a dot matrix word stock.
In the embodiment of the invention, after the text string transmitted by the cloud translation service is received, it can be transmitted in real time to the video decoding service of the computer device through the hardware abstraction layer interface, that is, passed back to the video or audio decoding service so that text subtitles are embedded while the video or audio is decoded. For example, when a video is played, the content of the text string can be embedded into the decoded video so that the video carries real-time subtitles.
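The network leg can be sketched with libcurl as below. The endpoint URL, the request format (raw audio in the request body) and the response handling are placeholders, since the embodiment does not specify which cloud translation service or protocol is used.

```cpp
// Illustrative only: posts one chunk of audio to a hypothetical cloud
// speech-translation endpoint and collects the text it returns.
#include <curl/curl.h>
#include <cstdint>
#include <string>
#include <vector>

static size_t collect(char* ptr, size_t size, size_t nmemb, void* userdata) {
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

std::string translate_chunk(const std::vector<uint8_t>& audio) {
    std::string text;
    CURL* curl = curl_easy_init();
    if (!curl) return text;
    // Placeholder endpoint, not a real service URL.
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.invalid/speech-translate");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, reinterpret_cast<const char*>(audio.data()));
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, static_cast<long>(audio.size()));
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &text);
    curl_easy_perform(curl);                       // returned text = subtitle content
    curl_easy_cleanup(curl);
    return text;
}
```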
Example 3
Fig. 4 is a flowchart for displaying a text string according to embodiment 3 of the present invention, which includes the following steps:
step S41: and utilizing the video decoding service to call a dot matrix character library to calculate the character display coordinates of the character strings on the video picture.
Step S42: and replacing pixel points corresponding to the character display coordinates in the video picture with preset color pixel points so as to display the character string.
In the embodiment of the invention, after the video decoding service of the computer device obtains the text string, it calls the dot matrix font library stored in advance on the device and calculates, on the video picture, the display coordinates of the characters of the string as defined in that font library. The coordinate calculation can be implemented by an algorithm or an application program. For example, an application program for calculating text coordinates, together with stored font size and position parameters, can be preset on the computer device; after obtaining the text string, the application program calculates the display coordinates from the corresponding glyphs in the dot matrix font library, the font size, the position parameters, the size of the video frame and similar parameters.
In the embodiment of the invention, after the character display coordinates have been calculated, the pixel points corresponding to those coordinates in the video picture are replaced with pixel points of the preset color, so that the text content corresponding to the string is displayed at the corresponding position of the video picture.
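One way to place the string is sketched below, assuming 16×16 glyphs and a bottom-centred subtitle line; the margin and centring rule are illustrative choices, since the layout parameters are left to the preset application program.

```cpp
// Computes the top-left origin of a bottom-centred subtitle line, assuming
// 16x16 dot matrix glyphs; the ~5% bottom margin is an illustrative choice.
#include <cstddef>

struct TextOrigin { int x; int y; };

TextOrigin subtitle_origin(int frameWidth, int frameHeight,
                           std::size_t glyphCount, int glyphSize = 16) {
    int textWidth = static_cast<int>(glyphCount) * glyphSize;
    TextOrigin o;
    o.x = (frameWidth - textWidth) / 2;               // centre horizontally
    if (o.x < 0) o.x = 0;                             // string wider than frame: clip at left
    o.y = frameHeight - glyphSize - frameHeight / 20; // margin above the bottom edge
    return o;
}
```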
In the embodiment of the present invention, as shown in Fig. 5, when the video is in a YUV coding format, the process of displaying the text string comprises:
Step S43: And setting the Y value of pixel points corresponding to the character display coordinates in the video picture to 255, so as to display the text string.
In the embodiment of the invention, when the video coding format of the video is a YUV format, after the character display coordinates are obtained, the Y (luma) value of the pixel points corresponding to those coordinates can be set to 255; that is, the corresponding pixel points in the video picture are replaced with white pixel points, so that the text content corresponding to the string is displayed at the corresponding position of the video picture.
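The pixel replacement can be sketched for an I420/NV12-style frame, whose Y plane is a width × height byte array: for every set bit of a 16×16 glyph bitmap (for example one read from HZK16 as above), the corresponding Y sample is set to 255, which renders the character in white. The tightly packed Y plane (stride equal to width) is an assumption.

```cpp
// Draws one 16x16 glyph into the Y (luma) plane of a YUV frame by forcing the
// covered samples to 255 (white). Assumes stride == width; U/V planes untouched.
#include <array>
#include <cstdint>

void draw_glyph_y255(uint8_t* yPlane, int width, int height,
                     int originX, int originY,
                     const std::array<uint8_t, 32>& glyph) {   // 16 rows x 2 bytes
    for (int row = 0; row < 16; ++row) {
        int y = originY + row;
        if (y < 0 || y >= height) continue;
        for (int col = 0; col < 16; ++col) {
            int x = originX + col;
            if (x < 0 || x >= width) continue;
            uint8_t bits = glyph[row * 2 + col / 8];
            if (bits & (0x80 >> (col % 8))) {                  // set bit = character pixel
                yPlane[y * width + x] = 255;                   // preset "colour": white
            }
        }
    }
}
```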
Example 4
Fig. 6 is a schematic structural diagram of a real-time subtitle adding apparatus according to embodiment 4 of the present invention.
The real-time subtitle adding apparatus 600 includes:
the audio stream acquiring module 610 is configured to acquire audio stream data in real time by the system when playing a video;
a character string obtaining module 620, configured to process the audio stream data by using a preset translation module, and obtain a character string corresponding to the audio stream data;
and a character string display module 630, configured to display the text character string on the video picture by using the dot matrix font library.
In the embodiment of the present invention, for more detailed description of functions of the modules, reference may be made to contents of corresponding parts in the foregoing embodiment, which are not described herein again.
In addition, the invention also provides computer equipment which can comprise mobile terminals such as smart phones, tablet computers, vehicle-mounted computers and intelligent wearable equipment. The computer device comprises a memory and a processor, wherein the memory can be used for storing a computer program, and the processor executes the computer program so as to enable the computer device to execute the functions of the method or the modules in the real-time caption adding device.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio stream data, a phonebook, etc.) created according to the use of the computer device, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The embodiment also provides a computer storage medium for storing a computer program used in the computer device.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or a part of the technical solution that contributes to the prior art in essence can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a smart phone, a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for adding subtitles in real time is characterized by comprising the following steps:
when the video is played, the system acquires audio stream data in real time;
processing the audio stream data with a preset translation module to obtain a text string corresponding to the audio stream data;
and displaying the text string on the video picture by using a dot matrix font library.
2. The method for adding subtitles in real time according to claim 1, wherein the acquiring audio stream data in real time by a system when playing video comprises:
when the video is played, controlling, by the video decoding service, a multimedia extractor to extract the audio stream data from the video stream being decoded in real time.
3. The real-time subtitle adding method of claim 1, wherein the preset translation module is a system translation module;
the step of processing the audio stream data by using a preset translation module to obtain a character string corresponding to the audio stream data comprises the following steps:
calling the system translation module through a pre-established callback function, and transmitting the audio stream data to the system translation module;
and acquiring the text string output by the system translation module, and transmitting the text string to a video decoding service through a hardware abstraction layer interface, so that the video decoding service displays the text string on the video picture according to a dot matrix font library.
4. The real-time subtitle adding method of claim 1, wherein the preset translation module is a cloud translation service;
the step of processing the audio stream data by using a preset translation module to obtain a character string corresponding to the audio stream data comprises the following steps:
transmitting the audio stream data to the cloud translation service through a network;
and receiving the text string transmitted by the cloud translation service, and transmitting the text string to a video decoding service through a hardware abstraction layer interface, so that the video decoding service displays the text string on the video picture according to a dot matrix font library.
5. The method for adding subtitles in real time according to claim 1, wherein the step of displaying the text string on the video picture by using a dot matrix font library comprises:
calling, by a video decoding service, the dot matrix font library to calculate the character display coordinates of the text string on the video picture;
and replacing pixel points corresponding to the character display coordinates in the video picture with pixel points of a preset color, so as to display the text string.
6. The method according to claim 5, wherein, when the video is in a YUV coding format, the step of replacing pixel points corresponding to the character display coordinates in the video picture with pixel points of a preset color so as to display the text string comprises:
setting the Y value of the pixel points corresponding to the character display coordinates in the video picture to 255, so as to display the text string.
7. A real-time subtitle adding apparatus, comprising:
an audio stream acquisition module, used for acquiring audio stream data in real time through the system when a video is played;
a text string acquisition module, used for processing the audio stream data with a preset translation module to obtain a text string corresponding to the audio stream data;
and a text string display module, used for displaying the text string on the video picture by using a dot matrix font library.
8. A computer device comprising a memory for storing a computer program and a processor that executes the computer program to cause the computer device to perform the real-time subtitling method of any of claims 1-6.
9. The computer device of claim 8, wherein the computer device comprises a mobile terminal.
10. A computer storage medium, characterized in that it stores a computer program which, when run on a processor, performs the real-time subtitling method of any of claims 1-6.
CN201911310259.5A 2019-12-18 2019-12-18 Method and device for adding subtitles in real time, computer equipment and computer storage medium Pending CN113014984A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911310259.5A CN113014984A (en) 2019-12-18 2019-12-18 Method and device for adding subtitles in real time, computer equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN113014984A true CN113014984A (en) 2021-06-22

Family

ID=76381111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911310259.5A Pending CN113014984A (en) 2019-12-18 2019-12-18 Method and device for adding subtitles in real time, computer equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN113014984A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1859565A (en) * 2005-05-01 2006-11-08 腾讯科技(深圳)有限公司 Method for broadcastin stream media caption and its stream media player
CN107172351A (en) * 2017-06-16 2017-09-15 福建星网智慧科技股份有限公司 A kind of method of the real-time subtitle superposition of camera
WO2019000721A1 (en) * 2017-06-30 2019-01-03 联想(北京)有限公司 Video file recording method, audio file recording method, and mobile terminal
CN108401192A (en) * 2018-04-25 2018-08-14 腾讯科技(深圳)有限公司 Video stream processing method, device, computer equipment and storage medium
CN109379628A (en) * 2018-11-27 2019-02-22 Oppo广东移动通信有限公司 Method for processing video frequency, device, electronic equipment and computer-readable medium
CN110134973A (en) * 2019-04-12 2019-08-16 深圳壹账通智能科技有限公司 Video caption real time translating method, medium and equipment based on artificial intelligence

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115988169A (en) * 2023-03-20 2023-04-18 全时云商务服务股份有限公司 Method and device for rapidly displaying real-time video screen-combination characters in cloud conference
CN115988169B (en) * 2023-03-20 2023-08-18 全时云商务服务股份有限公司 Method and device for rapidly displaying real-time video on-screen text in cloud conference


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination