CN103051945A - Method and system for translating subtitles of video playing terminal - Google Patents

Method and system for translating subtitles of a video playing terminal

Info

Publication number
CN103051945A
CN103051945A
Authority
CN
China
Prior art keywords
caption
translated
languages
area
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012105945573A
Other languages
Chinese (zh)
Other versions
CN103051945B (en)
Inventor
张培凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201210594557.3A
Publication of CN103051945A
Application granted
Publication of CN103051945B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Machine Translation (AREA)

Abstract

The invention belongs to the technical field of video signal processing and provides a method and a system for translating subtitles of a video playing terminal. After the subtitle area to be translated is found to output subtitles, the subtitles of that area are translated into text in a designated language and the translated text is overlaid on a designated area, so that local, real-time translation of a video file is achieved, the needs of users of different languages are met, the adaptability is high and the cost is low.

Description

Method and system for translating subtitles of a video playing terminal
Technical field
The invention belongs to the field of video signal processing and relates in particular to a method and system for translating subtitles of a video playing terminal.
Background technology
With the development of network video technology, people can access video content from different countries or regions more and more easily. When a user watches a foreign-language video or live stream, the video file may provide subtitles, but not necessarily in a language the user is familiar with; because the user's foreign-language proficiency is limited, the meaning of these subtitles is difficult to grasp, the user cannot understand the video content, and the interest in watching is reduced.
To address this, in the prior art the producer or distributor of a foreign-language video can add subtitles in different languages. However, the production cost of this approach is high and the added languages are limited, generally only English, so the viewing needs of users in countries speaking other languages cannot be met and the applicability is low.
Summary of the invention
The purpose of the embodiments of the invention is to provide a subtitle translation method for a video playing terminal, intended to solve the prior-art problems that adding subtitles in different languages through the producer or distributor is costly and has low applicability.
An embodiment of the invention is implemented as a subtitle translation method for a video playing terminal, the method comprising:
identifying whether a subtitle area to be translated in the current video frame has subtitle output;
if the subtitle area to be translated is identified as having subtitle output, translating the subtitle content of the subtitle area to be translated into text in a designated language;
overlaying the text in the designated language on a designated area of the current video frame.
Another purpose of the embodiments of the invention is to provide a subtitle translation system for a video playing terminal, the system comprising:
an image recognition unit, configured to identify whether a subtitle area to be translated in the current video frame has subtitle output;
a translation unit, configured to translate the subtitle content of the subtitle area to be translated into text in a designated language when the image recognition unit identifies that the subtitle area to be translated has subtitle output;
an overlay display unit, configured to overlay the text in the designated language produced by the translation unit on a designated area of the current video frame.
With the subtitle translation method and system for a video playing terminal proposed by the present invention, when the subtitle area to be translated has subtitle output, the subtitle content of that area is translated into text in a designated language and the translated text is overlaid on a designated area. Local, real-time translation of a video file is thereby achieved, the needs of users of different languages are met, and the approach offers strong applicability at low cost.
Brief description of the drawings
Fig. 1 is a flowchart of the subtitle translation method for a video playing terminal provided by Embodiment 1 of the invention;
Fig. 2 is a detailed flowchart, in the present invention, of identifying whether the subtitle area to be translated has subtitle output;
Fig. 3 is a detailed flowchart, in the present invention, of translating the subtitle content into text in a designated language;
Fig. 4 is a flowchart of the subtitle translation method for a video playing terminal provided by Embodiment 2 of the invention;
Fig. 5 is a structural diagram of the subtitle translation system for a video playing terminal provided by Embodiment 3 of the invention;
Fig. 6 is a structural diagram of the image recognition unit in Fig. 5;
Fig. 7 is a structural diagram of the translation unit in Fig. 5;
Fig. 8 is a structural diagram of the subtitle translation system for a video playing terminal provided by Embodiment 4 of the invention.
Detailed description of the embodiments
In order to make the purposes, technical solutions and advantages of the present invention clearer, the present invention is further described below in conjunction with the drawings and the embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
To address the problems of the prior art, the subtitle translation method and system for a video playing terminal proposed by the present invention can achieve local translation of a video file and satisfy the needs of users of different languages at low cost. The implementation of the present invention is described in detail below in conjunction with the embodiments.
Embodiment 1
Embodiment 1 of the invention provides a subtitle translation method for a video playing terminal which, as shown in Fig. 1, comprises:
Step S11: identify whether the subtitle area to be translated in the current video frame has subtitle output.
Specifically, Embodiment 1 identifies whether the subtitle area to be translated in the current video frame has subtitle output on the basis of a frame-image-sequence algorithm, which uses the change in pixel brightness between different video frames to determine whether subtitles are present. As shown in Fig. 2, step S11 then comprises:
Step S111: extract the picture of the subtitle area to be translated from the current video frame.
Step S112: convert the extracted picture into a grayscale picture, and divide the grayscale picture into a plurality of subregions.
Step S113: for each subregion, calculate the pixel brightness difference between the grayscale picture of the subtitle area in the current frame and the corresponding subregion in the preceding video frame.
Step S114: set the detection result bit of a subregion to 1 if its pixel brightness difference is greater than a first threshold, and to 0 if its pixel brightness difference is not greater than the first threshold.
Step S115: add up the detection result bits of all subregions, and judge whether the sum is greater than a second threshold.
Step S116: if the sum is greater than the second threshold, identify that the subtitle area to be translated in the current video frame has subtitle output.
For example, suppose the grayscale picture of the current video frame is divided into n subregions with pixel brightness values A1, ..., An, the pixel brightness values of the corresponding n subregions in the grayscale picture of the preceding video frame are B1, ..., Bn, the first threshold is α and the second threshold is β. First, the pixel brightness differences δi = |Ai − Bi|, 1 ≤ i ≤ n, are calculated. Each δi is then compared with the first threshold α: if δi > α, the pixel brightness of subregion i is considered to have changed significantly and its detection result bit is set to μi = 1; otherwise μi = 0. Finally, the sum of the detection result bits μi is calculated; if this sum is greater than the second threshold β, the subtitle area of the current video frame is considered to contain subtitle output. A code sketch of this check is given below.
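The following Python sketch illustrates one possible implementation of this frame-difference check. It is a minimal sketch, not the patent's reference implementation: the use of each subregion's mean brightness, the grid size and the concrete threshold values are assumptions made here for illustration.

```python
import numpy as np

def has_subtitle_output(curr_region, prev_region, grid=(2, 8),
                        alpha=10.0, beta=4):
    """Frame-difference check for subtitle output in a region (steps S111-S116).

    curr_region, prev_region: grayscale pictures (2-D uint8 arrays) of the
    subtitle area in the current and preceding video frames.
    grid: number of subregions (rows, cols) the area is divided into.
    alpha: first threshold, applied to each subregion's brightness difference.
    beta: second threshold, applied to the sum of detection result bits.
    """
    rows, cols = grid
    h, w = curr_region.shape
    changed = 0                                   # sum of detection result bits
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            a = curr_region[ys, xs].astype(np.float32).mean()
            b = prev_region[ys, xs].astype(np.float32).mean()
            if abs(a - b) > alpha:                # detection result bit = 1
                changed += 1
    return changed > beta                         # subtitles present if sum > beta
```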
Step S12: if the subtitle area to be translated is identified as having subtitle output, translate the subtitle content of the subtitle area into text in the designated language. Further, as shown in Fig. 3, translating the subtitle content of the subtitle area to be translated into text in the designated language comprises:
Step S121: use an optical character recognition (OCR) engine to extract the subtitle content from the grayscale picture of the subtitle area to be translated.
Step S122: identify the language of the subtitle content.
Step S123: if the identified language differs from the designated language, segment the subtitle content into words and/or phrases.
Step S124: look up a language library to obtain the text corresponding to each word and/or phrase in the designated language.
In Embodiment 1, the language library can be a bilingual dictionary or a parallel corpus. To improve the applicability of the translation method, the language library can be updated online over a wireless network such as the Internet or a mobile network, or updated through a peripheral device.
Step S125: combine the obtained text fragments into text in the designated language. A sketch of this translation pipeline is given below.
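The Python sketch below strings steps S121–S125 together. The OCR engine, the language detector and the word-level language library are passed in as placeholders; their names and interfaces are assumptions introduced here for illustration and are not part of the patent's disclosure.

```python
def translate_subtitle(gray_region, target_lang, ocr, detect_lang, lang_library):
    """Extract and translate the subtitle content of a region (steps S121-S125).

    ocr(image) -> str, detect_lang(text) -> str, and
    lang_library[(source, target)] -> dict mapping words/phrases to their
    translations are assumed external helpers supplied by the caller.
    """
    text = ocr(gray_region)                       # S121: extract subtitle content
    source_lang = detect_lang(text)               # S122: identify its language
    if source_lang == target_lang:                # nothing to translate
        return text
    tokens = text.split()                         # S123: naive word segmentation
    dictionary = lang_library[(source_lang, target_lang)]
    translated = [dictionary.get(tok, tok)        # S124: look up each word/phrase,
                  for tok in tokens]              #       keep the original if absent
    return " ".join(translated)                   # S125: combine into target text
```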
Step S13: overlay the text in the designated language on the designated area of the current video frame.
In Embodiment 1, the designated area can be above or below the subtitle area to be translated within the current video frame. For example, the translated text can preferentially be displayed above or below the subtitle area in a font of the same size as the subtitles, and in a smaller font when the area above or below is not large enough; alternatively, the translated text is preferentially displayed below the subtitle area, and is displayed above it when the space below the subtitle area is of insufficient height. One possible placement rule is sketched below.
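The sketch below implements the second placement strategy described above (prefer the space below the subtitle area, fall back to the space above it). The rectangle representation and the pixel-based coordinates are assumptions for illustration.

```python
def choose_overlay_position(frame_height, subtitle_box, text_height):
    """Pick the top row at which to overlay the translated text.

    subtitle_box is (top, bottom) of the subtitle area in pixel rows.
    Prefers the space below the subtitle area; falls back to the space
    above it when the space below is not tall enough.
    """
    top, bottom = subtitle_box
    space_below = frame_height - bottom
    if space_below >= text_height:
        return bottom                        # display below the subtitle area
    return max(0, top - text_height)         # otherwise display above it
```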
Embodiment 2
Embodiment 2 of the invention provides a subtitle translation method for a video playing terminal, as shown in Fig. 4. Unlike the method shown in Fig. 1, the method of Embodiment 2 further comprises, before step S11:
Step S14: according to the user's operation, select the subtitle area to be translated, the designated language and/or the designated area.
In Embodiment 2, the subtitle area to be translated can be the whole area of the current video frame or, to improve recognition efficiency, a selected subregion of the current video frame; when the subtitle area to be translated is a selected subregion, the user operates within the range of the displayed video frame. For example, the user can use a touch panel or a mouse to drag from the upper-left corner to the lower-right corner, and the selected rectangular area becomes the subtitle area to be translated, as in the sketch below.
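A minimal sketch of turning such a drag gesture into a rectangular subtitle region. The event format (two (x, y) points) and the clamping to the frame boundaries are assumptions, not details specified by the patent.

```python
def region_from_drag(start, end, frame_width, frame_height):
    """Convert a drag gesture into a (left, top, right, bottom) rectangle.

    start and end are the (x, y) points of the press and release events.
    The result is clamped to the frame and normalised so that it is a
    valid rectangle regardless of the drag direction.
    """
    (x0, y0), (x1, y1) = start, end
    clamp_x = lambda v: max(0, min(v, frame_width))
    clamp_y = lambda v: max(0, min(v, frame_height))
    left, right = sorted((clamp_x(x0), clamp_x(x1)))
    top, bottom = sorted((clamp_y(y0), clamp_y(y1)))
    return (left, top, right, bottom)
```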
The other steps of Embodiment 2 are identical to those of Embodiment 1 and are not repeated here.
Embodiment 3
Embodiment 3 of the invention provides a subtitle translation system for a video playing terminal which, as shown in Fig. 5, comprises: an image recognition unit 11, configured to identify whether the subtitle area to be translated in the current video frame has subtitle output; a translation unit 12, configured to translate the subtitle content of the subtitle area to be translated into text in the designated language when the image recognition unit 11 identifies that the subtitle area has subtitle output; and an overlay display unit 13, configured to overlay the text in the designated language produced by the translation unit 12 on the designated area of the current video frame.
Further, as shown in Fig. 6, the image recognition unit 11 can comprise: a first extraction module 111, configured to extract the picture of the subtitle area to be translated from the current video frame; a first division module 112, configured to convert the picture extracted by the first extraction module 111 into a grayscale picture and divide the grayscale picture into a plurality of subregions; a calculation module 113, configured to calculate, for each subregion produced by the first division module 112, the pixel brightness difference with respect to the corresponding subregion in the grayscale picture of the subtitle area in the preceding video frame; a setting module 114, configured to set the detection result bit of a subregion to 1 if its pixel brightness difference is greater than the first threshold and to 0 if it is not; a judging module 115, configured to add up the detection result bits of all subregions after the setting module 114 has set them and to judge whether the sum is greater than the second threshold; and a first recognition module 116, configured to identify that the subtitle area to be translated in the current video frame has subtitle output when the judging module 115 judges that the sum is greater than the second threshold.
Further, as shown in Fig. 7, the translation unit 12 can comprise: a second extraction module 121, configured to use an optical character recognition engine to extract the subtitle content from the grayscale picture of the subtitle area to be translated; a second recognition module 122, configured to identify the language of the subtitle content; a second division module 123, configured to segment the subtitle content into words and/or phrases when the language identified by the second recognition module 122 differs from the designated language; a lookup module 124, configured to look up the language library and obtain the text corresponding to each word and/or phrase in the designated language; and a combination module 125, configured to combine the text fragments obtained by the lookup module 124 into text in the designated language.
In Embodiment 3, the language library can be a bilingual dictionary or a parallel corpus. To improve the applicability of the translation method, the language library can be updated online over a wireless network such as the Internet or a mobile network, or updated through a peripheral device. A sketch of how these units can be wired together is given below.
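A minimal structural sketch of the system of Embodiment 3, reusing the helper functions sketched in Embodiment 1 (has_subtitle_output, translate_subtitle, choose_overlay_position). The class layout, constructor arguments and the assumed text height are illustrative assumptions, not the patent's reference design.

```python
class SubtitleTranslationSystem:
    """Ties the image recognition, translation and overlay display units together."""

    def __init__(self, ocr, detect_lang, lang_library, target_lang, overlay):
        self.ocr = ocr                    # OCR engine used by the translation unit
        self.detect_lang = detect_lang    # language identifier
        self.lang_library = lang_library  # bilingual dictionary / parallel corpus
        self.target_lang = target_lang    # designated language
        self.overlay = overlay            # callback that draws text on the frame

    def process_frame(self, curr_region, prev_region, frame_height, subtitle_box):
        # Image recognition unit (11): is there subtitle output in the region?
        if not has_subtitle_output(curr_region, prev_region):
            return None
        # Translation unit (12): extract the subtitles and translate them.
        text = translate_subtitle(curr_region, self.target_lang, self.ocr,
                                  self.detect_lang, self.lang_library)
        # Overlay display unit (13): draw the text in the designated area.
        row = choose_overlay_position(frame_height, subtitle_box,
                                      text_height=24)   # assumed text height (px)
        self.overlay(text, row)
        return text
```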
Embodiment 4
Embodiment 4 of the invention provides a subtitle translation system for a video playing terminal, as shown in Fig. 8. Unlike the system shown in Fig. 5, the system of Embodiment 4 can further comprise a selection module 14, configured to select, according to the user's operation, the subtitle area to be translated, the designated language and/or the designated area. The other parts of the system are identical to those of Embodiment 3 and are not repeated here.
With the subtitle translation method and system for a video playing terminal proposed by the present invention, when the subtitle area to be translated has subtitle output, the subtitle content of that area is translated into text in a designated language and the translated text is overlaid on a designated area. Local, real-time translation of a video file is thereby achieved, the needs of users of different languages are met, and the approach offers strong applicability at low cost.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments can be implemented by a program controlling the relevant hardware, and that the program can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disc.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A subtitle translation method for a video playing terminal, characterized in that the method comprises:
identifying whether a subtitle area to be translated in the current video frame has subtitle output;
if the subtitle area to be translated is identified as having subtitle output, translating the subtitle content of the subtitle area to be translated into text in a designated language;
overlaying the text in the designated language on a designated area of the current video frame.
2. The subtitle translation method for a video playing terminal according to claim 1, characterized in that identifying whether the subtitle area to be translated in the current video frame has subtitle output comprises:
extracting the picture of the subtitle area to be translated from the current video frame;
converting the extracted picture into a grayscale picture, and dividing the grayscale picture into a plurality of subregions;
calculating, for each subregion, the pixel brightness difference with respect to the corresponding subregion in the grayscale picture of the subtitle area to be translated in the preceding video frame;
setting the detection result bit of a subregion to 1 if its pixel brightness difference is greater than a first threshold, and to 0 if its pixel brightness difference is not greater than the first threshold;
adding up the detection result bits of all subregions, and judging whether the sum is greater than a second threshold;
if the sum is greater than the second threshold, identifying that the subtitle area to be translated in the current video frame has subtitle output.
3. The subtitle translation method for a video playing terminal according to claim 2, characterized in that translating the subtitle content of the subtitle area to be translated into text in the designated language comprises:
using an optical character recognition engine to extract the subtitle content from the grayscale picture of the subtitle area to be translated;
identifying the language of the subtitle content;
if the identified language differs from the designated language, segmenting the subtitle content into words and/or phrases;
looking up a language library to obtain the text corresponding to each word and/or phrase in the designated language;
combining the obtained text fragments into text in the designated language.
4. The subtitle translation method for a video playing terminal according to claim 3, characterized in that the language library is a bilingual dictionary or a parallel corpus.
5. The subtitle translation method for a video playing terminal according to any one of claims 1 to 4, characterized in that, before identifying whether the subtitle area to be translated in the current video frame has subtitle output, the method further comprises:
selecting, according to the user's operation, the subtitle area to be translated, the designated language and/or the designated area.
6. A subtitle translation system for a video playing terminal, characterized in that the system comprises:
an image recognition unit, configured to identify whether a subtitle area to be translated in the current video frame has subtitle output;
a translation unit, configured to translate the subtitle content of the subtitle area to be translated into text in a designated language when the image recognition unit identifies that the subtitle area to be translated has subtitle output;
an overlay display unit, configured to overlay the text in the designated language produced by the translation unit on a designated area of the current video frame.
7. The subtitle translation system for a video playing terminal according to claim 6, characterized in that the image recognition unit comprises:
a first extraction module, configured to extract the picture of the subtitle area to be translated from the current video frame;
a first division module, configured to convert the picture extracted by the first extraction module into a grayscale picture and divide the grayscale picture into a plurality of subregions;
a calculation module, configured to calculate, for each subregion produced by the first division module, the pixel brightness difference with respect to the corresponding subregion in the grayscale picture of the subtitle area to be translated in the preceding video frame;
a setting module, configured to set the detection result bit of a subregion to 1 if its pixel brightness difference is greater than a first threshold, and to 0 if its pixel brightness difference is not greater than the first threshold;
a judging module, configured to add up the detection result bits of all subregions after they have been set by the setting module, and to judge whether the sum is greater than a second threshold;
a first recognition module, configured to identify that the subtitle area to be translated in the current video frame has subtitle output when the judging module judges that the sum is greater than the second threshold.
8. The subtitle translation system for a video playing terminal according to claim 7, characterized in that the translation unit comprises:
a second extraction module, configured to use an optical character recognition engine to extract the subtitle content from the grayscale picture of the subtitle area to be translated;
a second recognition module, configured to identify the language of the subtitle content;
a second division module, configured to segment the subtitle content into words and/or phrases when the language identified by the second recognition module differs from the designated language;
a lookup module, configured to look up a language library and obtain the text corresponding to each word and/or phrase in the designated language;
a combination module, configured to combine the text fragments obtained by the lookup module into text in the designated language.
9. The subtitle translation system for a video playing terminal according to claim 8, characterized in that the language library is a bilingual dictionary or a parallel corpus.
10. The subtitle translation system for a video playing terminal according to any one of claims 6 to 9, characterized in that the system further comprises:
a selection module, configured to select, according to the user's operation, the subtitle area to be translated, the designated language and/or the designated area.
CN201210594557.3A 2012-12-31 2012-12-31 Method and system for translating subtitles of a video playing terminal Expired - Fee Related CN103051945B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210594557.3A CN103051945B (en) 2012-12-31 2012-12-31 Method and system for translating subtitles of a video playing terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210594557.3A CN103051945B (en) 2012-12-31 2012-12-31 Method and system for translating subtitles of a video playing terminal

Publications (2)

Publication Number Publication Date
CN103051945A true CN103051945A (en) 2013-04-17
CN103051945B CN103051945B (en) 2016-02-03

Family

ID=48064426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210594557.3A Expired - Fee Related CN103051945B (en) 2012-12-31 2012-12-31 Method and system for translating subtitles of a video playing terminal

Country Status (1)

Country Link
CN (1) CN103051945B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1697515A (en) * 2004-05-14 2005-11-16 创新科技有限公司 Captions translation engine
CN2772159Y (en) * 2005-01-20 2006-04-12 英业达股份有限公司 Caption translating device
JP2009016910A (en) * 2007-06-29 2009-01-22 Toshiba Corp Video reproducing device and video reproducing method
CN101674420A (en) * 2008-09-10 2010-03-17 英业达股份有限公司 System and method for translating captured image characters

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIE GUANGYI (谢光艺): "Research on Algorithms for Video Subtitle Detection and Extraction (视频字幕检测与提取的算法研究)", China Master's Theses Full-text Database, Information Science and Technology, 15 November 2005 (2005-11-15) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104378692A (en) * 2014-11-17 2015-02-25 天脉聚源(北京)传媒科技有限公司 Method and device for processing video captions
CN104469523B (en) * 2014-12-25 2018-04-10 杨海 The foreign language video broadcasting method clicked on word and show lexical or textual analysis for mobile device
CN104469523A (en) * 2014-12-25 2015-03-25 杨海 Word clicking and paraphrase displaying foreign language video playing method for mobile equipment
WO2017008241A1 (en) * 2015-07-14 2017-01-19 张阳 Subtitle control method and system for ktv song selection system
WO2017015908A1 (en) * 2015-07-29 2017-02-02 张阳 Karaoke lyric capturing method and system
CN106303303A (en) * 2016-08-17 2017-01-04 北京金山安全软件有限公司 Method and device for translating subtitles of media file and electronic equipment
CN106340294A (en) * 2016-09-29 2017-01-18 安徽声讯信息技术有限公司 Synchronous translation-based news live streaming subtitle on-line production system
CN107484002A (en) * 2017-08-25 2017-12-15 四川长虹电器股份有限公司 The method of intelligent translation captions
CN108681393A (en) * 2018-04-16 2018-10-19 优视科技有限公司 Translation display methods, device, computing device and medium based on augmented reality
CN111356025A (en) * 2018-12-24 2020-06-30 深圳Tcl新技术有限公司 Multi-subtitle display method, intelligent terminal and storage medium
CN110134973A (en) * 2019-04-12 2019-08-16 深圳壹账通智能科技有限公司 Video caption real time translating method, medium and equipment based on artificial intelligence
CN110276349A (en) * 2019-06-24 2019-09-24 腾讯科技(深圳)有限公司 Method for processing video frequency, device, electronic equipment and storage medium
CN110276349B (en) * 2019-06-24 2023-08-18 腾讯科技(深圳)有限公司 Video processing method, device, electronic equipment and storage medium
CN114885197A (en) * 2022-04-26 2022-08-09 中山亿联智能科技有限公司 Multi-language translation system and method applied to set top box subtitles

Also Published As

Publication number Publication date
CN103051945B (en) 2016-02-03

Similar Documents

Publication Publication Date Title
CN103051945B (en) Method and system for translating subtitles of a video playing terminal
US9082035B2 (en) Camera OCR with context information
US11475588B2 (en) Image processing method and device for processing image, server and storage medium
US11551027B2 (en) Object detection based on a feature map of a convolutional neural network
CN102342124A (en) Method and apparatus for providing information related to broadcast programs
CN103052953A (en) Information processing device, method of processing information, and program
CN103069414A (en) Information processing device, information processing method, and program
CN103984772A (en) Method and device for generating text retrieval subtitle library and video retrieval method and device
US9129177B2 (en) Image cache
CN111832403A (en) Document structure recognition method, and model training method and device for document structure recognition
CN112825561A (en) Subtitle display method, system, computer device and readable storage medium
CN109558513A (en) A kind of content recommendation method, device, terminal and storage medium
WO2011153392A2 (en) Semantic enrichment by exploiting top-k processing
CN111309200B (en) Method, device, equipment and storage medium for determining extended reading content
US20220230274A1 (en) Method and system for displaying a video poster based on artificial intelligence
KR20210047467A (en) Method and System for Auto Multiple Image Captioning
CN112036373B (en) Method for training video text classification model, video text classification method and device
CN111709431B (en) Instant translation method and device, computer equipment and storage medium
CN111353532A (en) Image generation method and device, computer-readable storage medium and electronic device
CN116644246A (en) Search result display method and device, computer equipment and storage medium
CN116320659A (en) Video generation method and device
CN105631917A (en) Subtitle translation method in digital animation production process
US20220198158A1 (en) Method for translating subtitles, electronic device, and non-transitory storage medium
CN103997657A (en) Converting method and device of audio in video
CN103139635A (en) System and method used for providing subtitle translation during playing of video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Patentee after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523841

Patentee before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160203