CN113420627A - System and method capable of generating English dubbing materials - Google Patents

System and method capable of generating English dubbing materials

Info

Publication number
CN113420627A
Authority
CN
China
Prior art keywords
subtitle
module
dubbing
playing
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110663529.1A
Other languages
Chinese (zh)
Inventor
林青荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Readboy Education Technology Co Ltd
Original Assignee
Readboy Education Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2021-06-15
Publication date: 2021-09-21
Application filed by Readboy Education Technology Co Ltd filed Critical Readboy Education Technology Co Ltd
Priority to CN202110663529.1A
Publication of CN113420627A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Management Or Editing Of Information On Record Carriers (AREA)

Abstract

The invention discloses a system and a method capable of generating English dubbing materials. English dubbing materials can be customized according to the user's preferences, and the user dubs with material they actually like, which improves both learning interest and learning efficiency.

Description

System and method capable of generating English dubbing materials
Technical Field
The invention relates to the technical field of education systems, in particular to a system and a method capable of generating English dubbing materials.
Background
When learning English, practicing dubbing over English videos is a common way to raise interest. However, every user likes different English videos, and the practice material in existing dubbing apps is limited to what the apps provide by default and cannot be customized.
Disclosure of Invention
To address the shortcomings of the prior art, the invention aims to provide a system and a method capable of generating English dubbing materials.
To achieve this purpose, the invention adopts the following technical solution:
A system capable of generating English dubbing materials, comprising:
a video playing module: used for playing the English video from which English dubbing materials are to be collected;
a screen recording module: used for recording the screen while the English video is playing;
a subtitle region selection module: provides a tool with which the user delimits the subtitle region in the English video, and records the position of that region;
a subtitle detection module: performs subtitle detection on the subtitle region;
a screenshot recognition module: when the subtitle detection module detects that a subtitle has appeared in the subtitle region, captures a screenshot of that region and recognizes the subtitle content (a minimal detection-and-recognition sketch is given after this list);
a recording module: records the start time of a subtitle when the subtitle detection module detects that it has appeared in the subtitle region, and records its end time when the module detects that it has disappeared;
an association module: associates the screen-recorded video, the recognized subtitles, and the start and end time of each subtitle;
a dubbing module: plays the screen-recorded video and displays the recognized subtitles in chronological order at a set position of the playing interface; when the user triggers a start-dubbing event, it plays the video with the sound muted, highlights a subtitle when playback reaches that subtitle's start time and stops highlighting it when playback reaches its end time, records the user's voice synchronously while the video plays, and, after the user triggers an end-dubbing event, merges the recorded sound with the video to obtain the dubbed video.
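As an illustration of how the subtitle detection module and the screenshot recognition module could cooperate, the following Python sketch checks the user-delimited region of each frame for subtitle pixels and runs OCR on the region whenever a subtitle is present. This is only a minimal sketch under assumed tooling: the invention does not prescribe any particular detection or recognition technique, and the OpenCV and pytesseract calls, the brightness threshold, the region coordinates, and the file name english_video.mp4 are illustrative assumptions.

```python
# Minimal sketch (not the patented implementation): detect whether a subtitle is
# present in the user-delimited region of each frame and OCR it when it appears.
# Assumes OpenCV (cv2) and pytesseract are installed; all thresholds are illustrative.
import cv2
import pytesseract

SUBTITLE_REGION = (50, 620, 1180, 700)   # (x1, y1, x2, y2) recorded by the region selection module
BRIGHT_PIXEL_RATIO = 0.02                # assumed ratio of bright pixels that signals a subtitle

def subtitle_present(frame) -> bool:
    """Heuristic: subtitles are rendered as bright text inside the region."""
    x1, y1, x2, y2 = SUBTITLE_REGION
    roi = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(roi, 200, 255, cv2.THRESH_BINARY)
    return (binary > 0).mean() > BRIGHT_PIXEL_RATIO

def recognize_subtitle(frame) -> str:
    """Screenshot the region and recognize its text content."""
    x1, y1, x2, y2 = SUBTITLE_REGION
    return pytesseract.image_to_string(frame[y1:y2, x1:x2], lang="eng").strip()

if __name__ == "__main__":
    cap = cv2.VideoCapture("english_video.mp4")   # hypothetical input file
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if subtitle_present(frame):
            t = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0
            print(f"{t:.2f}s  {recognize_subtitle(frame)}")
    cap.release()
```

The transitions of subtitle_present() from false to true and back are also the moments at which the recording module would log the start and end times of each subtitle.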
The invention also provides a method using the above system, with the following specific process:
the video playing module plays the English video from which English dubbing materials are to be collected; the user delimits the subtitle region in the English video with the tool provided by the subtitle region selection module, and the module records the position of that region; after the user triggers the start of screen recording, the screen recording module records the screen while the English video plays, and the subtitle detection module performs subtitle detection on the subtitle region; when the subtitle detection module detects that a subtitle has appeared in the subtitle region, the screenshot recognition module captures a screenshot of the region and recognizes the subtitle content, and the recording module records the start time of the subtitle; when the subtitle detection module detects that the subtitle has disappeared, the recording module records its end time;
after the English video finishes playing, the association module associates the screen-recorded video, the recognized subtitles, and the start and end time of each subtitle to form the English dubbing material (one possible record format is sketched after this process);
when the user wants to use the English dubbing material, the dubbing module plays the screen-recorded video and displays the recognized subtitles in chronological order at a set position of the playing interface; when the user triggers a start-dubbing event, the dubbing module plays the video with the sound muted, highlights a subtitle when playback reaches that subtitle's start time, and stops highlighting it when playback reaches its end time; while a subtitle is highlighted the user speaks the dubbing, and the dubbing module records the sound synchronously; after the user triggers an end-dubbing event, the dubbing module merges the recorded sound with the video to obtain the dubbed video.
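The English dubbing material produced by the association module is, in essence, the screen-recorded video bound to the recognized subtitles and their start and end times. One possible record format is sketched below in Python; the field and file names are assumptions chosen for illustration, not something specified by the invention.

```python
# Minimal sketch of one possible "English dubbing material" record: the
# screen-recorded video associated with each recognized subtitle and its
# start/end time. Field names and file names are illustrative assumptions.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class SubtitleCue:
    text: str      # recognized subtitle content
    start: float   # time in seconds at which the subtitle appeared
    end: float     # time in seconds at which the subtitle disappeared

@dataclass
class DubbingMaterial:
    video_path: str                                  # video obtained by screen recording
    cues: list[SubtitleCue] = field(default_factory=list)

    def save(self, path: str) -> None:
        with open(path, "w", encoding="utf-8") as f:
            json.dump(asdict(self), f, ensure_ascii=False, indent=2)

# Example: associating two recognized subtitles with the recorded video.
material = DubbingMaterial("screen_recording.mp4")
material.cues.append(SubtitleCue("Nice to meet you.", 3.2, 5.0))
material.cues.append(SubtitleCue("Where are you from?", 6.1, 8.4))
material.save("dubbing_material.json")
```

Storing the material as a plain video file plus a small metadata file of this kind keeps it easy to replay in the dubbing module.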
The invention has the beneficial effects that: the invention can customize English dubbing materials according to the user's favor, and dubs by using the favorite materials, thereby improving the learning interest of the user and the learning efficiency.
Detailed Description
The present invention is further described below. It should be noted that the embodiments are based on the above technical solution and provide a detailed description and specific implementations, but the scope of protection of the present invention is not limited to these embodiments.
Example 1
This embodiment provides a system capable of generating English dubbing materials, comprising:
a video playing module: used for playing the English video from which English dubbing materials are to be collected;
a screen recording module: used for recording the screen while the English video is playing;
a subtitle region selection module: provides a tool with which the user delimits the subtitle region in the English video, and records the position of that region;
a subtitle detection module: performs subtitle detection on the subtitle region;
a screenshot recognition module: when the subtitle detection module detects that a subtitle has appeared in the subtitle region, captures a screenshot of that region and recognizes the subtitle content;
a recording module: records the start time of a subtitle when the subtitle detection module detects that it has appeared in the subtitle region, and records its end time when the module detects that it has disappeared;
an association module: associates the screen-recorded video, the recognized subtitles, and the start and end time of each subtitle;
a dubbing module: plays the screen-recorded video and displays the recognized subtitles in chronological order at a set position of the playing interface; when the user triggers a start-dubbing event, it plays the video with the sound muted, highlights a subtitle when playback reaches that subtitle's start time and stops highlighting it when playback reaches its end time, records the user's voice synchronously while the video plays, and, after the user triggers an end-dubbing event, merges the recorded sound with the video to obtain the dubbed video (a sketch of the highlight decision is given below).
The system can be applied to intelligent terminals such as student tablets.
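The core of the dubbing module's highlighting behaviour is deciding, from the current playback position, which subtitle (if any) should be highlighted. The following UI-agnostic Python sketch shows that decision; the cue objects with start and end fields follow the record format assumed earlier, and the function names are illustrative.

```python
# Minimal sketch of the dubbing module's highlight decision: given the current
# playback time, return the index of the subtitle to highlight, or None if no
# subtitle is active. Intended to be called on every playback tick of the player.

def active_cue_index(cues, playback_time: float):
    """cues: sequence of objects with .start and .end times in seconds."""
    for i, cue in enumerate(cues):
        if cue.start <= playback_time < cue.end:
            return i      # highlight this subtitle
    return None           # between subtitles: nothing highlighted

# Example with two cues at 3.2-5.0 s and 6.1-8.4 s.
class Cue:
    def __init__(self, start, end):
        self.start, self.end = start, end

cues = [Cue(3.2, 5.0), Cue(6.1, 8.4)]
assert active_cue_index(cues, 4.0) == 0     # first subtitle highlighted
assert active_cue_index(cues, 5.5) is None  # no subtitle highlighted
```

In an actual player the returned index would drive the style change of the displayed subtitle while the muted video plays and the microphone records.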
Example 2
This embodiment provides a method using the system described in Example 1, which includes the following specific steps:
the video playing module plays the English video from which English dubbing materials are to be collected; the user delimits the subtitle region in the English video with the tool provided by the subtitle region selection module, and the module records the position of that region; after the user triggers the start of screen recording, the screen recording module records the screen while the English video plays, and the subtitle detection module performs subtitle detection on the subtitle region; when the subtitle detection module detects that a subtitle has appeared in the subtitle region, the screenshot recognition module captures a screenshot of the region and recognizes the subtitle content, and the recording module records the start time of the subtitle; when the subtitle detection module detects that the subtitle has disappeared, the recording module records its end time (this timing logic is sketched after these steps);
after the English video finishes playing, the association module associates the screen-recorded video, the recognized subtitles, and the start and end time of each subtitle to form the English dubbing material;
when the user wants to use the English dubbing material, the dubbing module plays the screen-recorded video and displays the recognized subtitles in chronological order at a set position of the playing interface; when the user triggers a start-dubbing event, the dubbing module plays the video with the sound muted, highlights a subtitle when playback reaches that subtitle's start time, and stops highlighting it when playback reaches its end time; while a subtitle is highlighted the user speaks the dubbing, and the dubbing module records the sound synchronously; after the user triggers an end-dubbing event, the dubbing module merges the recorded sound with the video to obtain the dubbed video (one possible fusion step is sketched at the end of this example).
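The timing logic referred to in the first step above, recording a start time when a subtitle appears and an end time when it disappears, can be expressed as a small state machine driven by the per-frame detection result. The sketch below is a minimal illustration; the class and method names are assumptions, and the visibility input is taken to come from the subtitle detection module.

```python
# Minimal sketch of the recording module's timing logic: track appear/disappear
# transitions of the detection result and collect (start, end) pairs.

class SubtitleTimer:
    def __init__(self):
        self.current_start = None   # start time of the subtitle currently on screen
        self.intervals = []         # completed (start, end) pairs in seconds

    def update(self, subtitle_visible: bool, timestamp: float) -> None:
        if subtitle_visible and self.current_start is None:
            self.current_start = timestamp                            # subtitle just appeared
        elif not subtitle_visible and self.current_start is not None:
            self.intervals.append((self.current_start, timestamp))    # subtitle just disappeared
            self.current_start = None

# Example: detection results sampled every 0.5 s.
timer = SubtitleTimer()
for t, visible in [(0.0, False), (0.5, True), (1.0, True), (1.5, False)]:
    timer.update(visible, t)
print(timer.intervals)   # [(0.5, 1.5)]
```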
It should be noted that the various events may be triggered in conventional ways, such as key press, voice, or gesture.
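After the end-dubbing event is triggered (by key press, voice, or gesture as noted above), the remaining work is to merge the recorded voice with the muted screen-recorded video. The invention does not name a tool for this fusion; as one possible realization, assuming an ffmpeg binary is available on the system and using hypothetical file names, the merge could be performed as follows:

```python
# Minimal sketch of the fusion step: replace the screen recording's audio track
# with the user's recorded dubbing. Assumes ffmpeg is installed and on PATH;
# file names are hypothetical.
import subprocess

def fuse_dubbing(video_path: str, audio_path: str, output_path: str) -> None:
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", video_path,    # screen-recorded video (played muted during dubbing)
            "-i", audio_path,    # user's recorded voice
            "-map", "0:v:0",     # take the video stream from the first input
            "-map", "1:a:0",     # take the audio stream from the second input
            "-c:v", "copy",      # keep the video stream unchanged
            "-shortest",         # stop at the shorter of the two streams
            output_path,
        ],
        check=True,
    )

fuse_dubbing("screen_recording.mp4", "user_dubbing.wav", "dubbed_video.mp4")
```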
Various other changes and modifications to the above-described embodiments and concepts will become apparent to those skilled in the art from the above description, and all such changes and modifications are intended to be included within the scope of the present invention as defined in the appended claims.

Claims (2)

1. A system capable of generating English dubbing materials, comprising:
a video playing module: used for playing the English video from which English dubbing materials are to be collected;
a screen recording module: used for recording the screen while the English video is playing;
a subtitle region selection module: provides a tool with which the user delimits the subtitle region in the English video, and records the position of that region;
a subtitle detection module: performs subtitle detection on the subtitle region;
a screenshot recognition module: when the subtitle detection module detects that a subtitle has appeared in the subtitle region, captures a screenshot of that region and recognizes the subtitle content;
a recording module: records the start time of a subtitle when the subtitle detection module detects that it has appeared in the subtitle region, and records its end time when the module detects that it has disappeared;
an association module: associates the screen-recorded video, the recognized subtitles, and the start and end time of each subtitle;
a dubbing module: plays the screen-recorded video and displays the recognized subtitles in chronological order at a set position of the playing interface; when the user triggers a start-dubbing event, it plays the video with the sound muted, highlights a subtitle when playback reaches that subtitle's start time and stops highlighting it when playback reaches its end time, records the user's voice synchronously while the video plays, and, after the user triggers an end-dubbing event, merges the recorded sound with the video to obtain the dubbed video.
2. A method using the system of claim 1, comprising the steps of:
the video playing module plays the English video from which English dubbing materials are to be collected; the user delimits the subtitle region in the English video with the tool provided by the subtitle region selection module, and the module records the position of that region; after the user triggers the start of screen recording, the screen recording module records the screen while the English video plays, and the subtitle detection module performs subtitle detection on the subtitle region; when the subtitle detection module detects that a subtitle has appeared in the subtitle region, the screenshot recognition module captures a screenshot of the region and recognizes the subtitle content, and the recording module records the start time of the subtitle; when the subtitle detection module detects that the subtitle has disappeared, the recording module records its end time;
after the English video finishes playing, the association module associates the screen-recorded video, the recognized subtitles, and the start and end time of each subtitle to form the English dubbing material;
when the user wants to use the English dubbing material, the dubbing module plays the screen-recorded video and displays the recognized subtitles in chronological order at a set position of the playing interface; when the user triggers a start-dubbing event, the dubbing module plays the video with the sound muted, highlights a subtitle when playback reaches that subtitle's start time, and stops highlighting it when playback reaches its end time; while a subtitle is highlighted the user speaks the dubbing, and the dubbing module records the sound synchronously; after the user triggers an end-dubbing event, the dubbing module merges the recorded sound with the video to obtain the dubbed video.
CN202110663529.1A 2021-06-15 2021-06-15 System and method capable of generating English dubbing materials Pending CN113420627A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110663529.1A CN113420627A (en) 2021-06-15 2021-06-15 System and method capable of generating English dubbing materials

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110663529.1A CN113420627A (en) 2021-06-15 2021-06-15 System and method capable of generating English dubbing materials

Publications (1)

Publication Number Publication Date
CN113420627A    2021-09-21

Family

ID=77788574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110663529.1A Pending CN113420627A (en) 2021-06-15 2021-06-15 System and method capable of generating English dubbing materials

Country Status (1)

Country Link
CN (1) CN113420627A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021903A (en) * 2006-10-10 2007-08-22 鲍东山 Video caption content analysis system
US20160021334A1 (en) * 2013-03-11 2016-01-21 Video Dubber Ltd. Method, Apparatus and System For Regenerating Voice Intonation In Automatically Dubbed Videos
CN106293347A (en) * 2016-08-16 2017-01-04 广东小天才科技有限公司 Human-computer interaction learning method and device, and user terminal
CN110149548A (en) * 2018-09-26 2019-08-20 腾讯科技(深圳)有限公司 Video dubbing method, electronic device and readable storage medium

Similar Documents

Publication Publication Date Title
US7734148B2 (en) Method for reproducing sub-picture data in optical disc device, and method for displaying multi-text in optical disc device
JP4584250B2 (en) Video processing device, integrated circuit of video processing device, video processing method, and video processing program
JP5135024B2 (en) Apparatus, method, and program for notifying content scene appearance
CN102750962B (en) A kind of back method of video file and device
CN102291542B (en) Multilingual subtitle setting method and device thereof
CN110149548B (en) Video dubbing method, electronic device and readable storage medium
CN105828101A (en) Method and device for generation of subtitles files
US20020006266A1 (en) Record/play apparatus and method for extracting and searching index simultaneously
US20030190148A1 (en) Displaying multi-text in playback of an optical disc
CN1218597A (en) Combination of VCR index and EPG
CN107277646A (en) A kind of captions configuration system of audio and video resources
US20030030852A1 (en) Digital visual recording content indexing and packaging
CN106412645A (en) Method and apparatus for uploading video file to multimedia server
EP2089820A1 (en) Method and apparatus for generating a summary of a video data stream
JP5079817B2 (en) Method for creating a new summary for an audiovisual document that already contains a summary and report and receiver using the method
CN108551587A (en) Method, apparatus, computer equipment and the medium of television set automatic data collection
JP2008276340A (en) Retrieving device
CN113420627A (en) System and method capable of generating English dubbing materials
US20080285957A1 (en) Information processing apparatus, method, and program
Girgensohn et al. Facilitating Video Access by Visualizing Automatic Analysis.
CN116708916A (en) Data processing method, system, storage medium and electronic equipment
KR101869905B1 (en) Video playing method and video player
US7444068B2 (en) System and method of manual indexing of image data
CN114546939A (en) Conference summary generation method and device, electronic equipment and readable storage medium
JPH11331761A (en) Method and device for automatically summarizing image and recording medium with the method recorded therewith

Legal Events

Code and title:
PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20210921)