WO2019074145A1 - System and method for editing subtitle data on a single screen - Google Patents
System and method for editing subtitle data on a single screen
- Publication number
- WO2019074145A1 (international application PCT/KR2017/011862)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- subtitle
- image
- unit
- editing unit
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/278—Subtitling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47205—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
Definitions
- The present invention relates to a system and method for editing subtitle data on a single screen. More particularly, it relates to a technique for creating a single set of subtitle data by inserting translated text into tabs provided for each language, instead of generating separate subtitle data for each language.
- Subtitles in TV programs serve two broad purposes: auxiliary subtitles that convey facts such as a speaker's name, title, location, or time, and entertainment subtitles, such as speech-balloon text, added simply for fun regardless of the speaker's actual intention.
- Subtitles in a PC environment are output on the basis of synchronization time information, either for a file in which video stream data is recorded or for video stream data provided over a network.
- One prior-art approach retrieves caption layer data including graphic caption elements from a storage medium, extracts crop information (RHC, RVC, RCH, RCW) from the retrieved caption layer data, and enables automatic cropping of the portions of the subtitle elements to be displayed.
- The prior-art subtitle production process consists of: 1) dividing the video into segments by time unit, then inputting, editing, and storing subtitles for each segment; and 2) creating the final subtitled video. Processes 1) and 2) are therefore not performed in parallel but are separated in time order.
- Because the segment-division step and the subtitle creation and modification steps are incompatible with each other, a wrongly divided segment cannot be corrected at the same time as the subtitles are being written.
- Moreover, subtitles can be placed only at a limited position and in a simple text format; once a subtitle is displayed on the screen, neither its position nor its typeface can be flexibly modified, which makes an aesthetically pleasing presentation difficult.
- In addition, since separate subtitle data must be produced and modified for each language, creating a distinct subtitle file per language is cumbersome.
- An object of the present invention is to eliminate the inconvenience of hiring separate personnel for this work.
- A further object of the present invention is to set the tone of the dubbed voice by applying a rendering effect to a voice file converted through a TTS engine and by allowing selection between a male and a female voice, and to adjust equalizer effects so as to produce a more natural voice, thereby making it easy to convey a specific situation or emotion.
- To this end, the subtitle data editing system for a single screen comprises: a video index unit that indexes video data stored on a local PC or uploaded to an online platform and outputs the indexed video data to a screen configuration unit; a video division unit that divides the video data into segments whose length corresponds to an input value and outputs the segmented video to the screen configuration unit; and a subtitle editing unit that receives the texts to be inserted into each segmented video, synchronizes them to generate subtitle data, superimposes the subtitle data on the segmented video at the dragged-and-dropped coordinates, and outputs the result to the screen configuration unit.
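The cooperation of these three units can be sketched in code. This is an illustrative model only; every class, field, and method name below is an assumption, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float          # seconds from the start of the video
    end: float
    text: str = ""
    x: int = 0            # drag-and-drop overlay coordinates
    y: int = 0

class VideoIndexer:
    """Indexes video files from a local folder or an online platform."""
    def index(self, paths):
        return sorted(paths)

class VideoDivider:
    """Splits a video of `duration` seconds into fixed-length segments."""
    def divide(self, duration, seg_len):
        t, segments = 0.0, []
        while t < duration:
            segments.append(Segment(t, min(t + seg_len, duration)))
            t += seg_len
        return segments

class SubtitleEditor:
    """Attaches synchronized text and overlay coordinates to a segment."""
    def set_text(self, seg, text, x=0, y=0):
        seg.text, seg.x, seg.y = text, x, y
        return seg
```

For example, a 10-second video split into 2.4-second segments yields five segments, the last one shortened to end exactly at the video's end.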
- the caption editing unit controls the output of the caption data to be superimposed on the segmented image so as to correspond to the input event value.
- The event value is a value for outputting an animation effect, such as the text in the subtitle data flying in, disappearing, appearing, or shaking within the segmented video, together with a formatting effect covering any one of the font, size, or color of the text.
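An event value of this kind can be modeled as a small structure. The key names and animation identifiers below are assumptions for illustration; the patent does not specify an encoding.

```python
# Animation identifiers corresponding to the effects named in the text.
ANIMATIONS = {"fly_in", "disappear", "appear", "shake"}

def make_event_value(animation, font="Sans", size=24, color="#FFFFFF"):
    """Bundle one animation effect with formatting for a subtitle."""
    if animation not in ANIMATIONS:
        raise ValueError(f"unknown animation: {animation}")
    return {"animation": animation,
            "format": {"font": font, "size": size, "color": color}}
```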
- The subtitle editing unit receives the text to be inserted into the segmented video and, by accepting text for each of a plurality of languages in a preset per-language library, generates a single set of subtitle data.
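The single subtitle data with per-language tabs might be modeled as follows; the structure is assumed purely for illustration.

```python
class MultiLangSubtitle:
    """One subtitle cue holding translated text in per-language tabs."""
    def __init__(self, start, end):
        self.start, self.end = start, end
        self.tabs = {}                       # language code -> text

    def set_text(self, lang, text):
        self.tabs[lang] = text               # one tab per language

    def render(self, lang):
        """Return the text for the viewer's chosen language,
        falling back to English if that language is absent."""
        return self.tabs.get(lang, self.tabs.get("en", ""))
```

A viewer can then select a display language at playback time without a separate subtitle file per language, which is the point made in the surrounding description.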
- The subtitle editing unit may further provide a dubbing function that inserts preset sound data or input sound data into the segmented video so that the inserted sound is output when the video data is played back.
- The subtitle editing unit also outputs the sound data to be inserted into the segmented video, or already inserted sound data, so that it can be previewed.
- The system further comprises a subtitle merging unit that merges the subtitle data into the video data in time series to generate a single file, and a subtitle transmission unit that uploads the single file to a local PC or an online platform.
- the subtitle editing unit converts the input text into an audio file, and inserts a predetermined rendering effect into the converted audio file to perform dubbing.
- The method of editing subtitle data on a single screen of the present invention comprises the steps of: (a) a video index unit indexing video data stored on a local PC or uploaded to an online platform; (b) a video division unit dividing the video data into segments whose length corresponds to an input value; (c) a subtitle editing unit receiving and synchronizing the texts to be inserted into each segmented video to generate subtitle data; (d) a subtitle merging unit merging the video data and the subtitle data in time series to generate a single file; and (e) a subtitle transmission unit uploading the single file to a local PC or an online platform.
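Steps (a) through (e) can be sketched as one pipeline. All helper names and the returned structure are illustrative assumptions.

```python
def edit_pipeline(video_paths, duration, seg_len, texts):
    """Run steps (a)-(d) and return a structure ready for upload (e)."""
    indexed = sorted(video_paths)                         # (a) index
    bounds, t = [], 0.0                                   # (b) divide
    while t < duration:
        bounds.append((t, min(t + seg_len, duration)))
        t += seg_len
    subtitles = [                                         # (c) synchronize
        {"start": s, "end": e, "text": txt}
        for (s, e), txt in zip(bounds, texts)
    ]
    merged = {"video": indexed, "subtitles": subtitles}   # (d) merge
    return merged                                         # (e) ready to upload
```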
- After step (c), the method may further comprise: (f) the subtitle editing unit superimposing the subtitle data on the segmented video at the dragged-and-dropped coordinates; and (g) the subtitle editing unit controlling the output of the subtitle data superimposed on the segmented video in accordance with the input event value.
- The method may also comprise, after step (c): (h) the subtitle editing unit determining whether to insert preset sound data or received sound data into the segmented video; (i) when the determination in step (h) is to insert preset sound data, the subtitle editing unit indexing a pre-stored audio file and inserting it into the segmented video; and (j) when the determination in step (h) is to insert received sound data, the subtitle editing unit inserting an audio file recorded in real time into the segmented video.
- The method may further comprise, after step (c): (k) the subtitle editing unit converting the input text into a voice file through the TTS engine.
- With this arrangement, the division of video data into segments of a given size and the creation and correction of subtitle data can be carried out simultaneously on a single screen, so that segmentation and subtitle generation proceed in parallel rather than sequentially.
- A drag function for the subtitle data superimposed on the segmented video is also provided, so that the position of the subtitle data within the video can be corrected instantly.
- Furthermore, subtitle data synchronized with the video can be merged with it and uploaded to, or downloaded from, a local PC or an online platform, so that a single video with embedded subtitle data can be viewed without a separate subtitle file.
- FIG. 1 is a diagram showing a conventional subtitle creation method and apparatus.
- FIG. 2 is a block diagram showing a caption data editing system in a single screen according to the present invention.
- FIG. 3 is a detailed functional diagram of a video segmenting unit of a caption data editing system in a single screen according to the present invention.
- FIG. 4 is a diagram illustrating an example in which caption data generated by a caption editing unit of a caption data editing system in a single screen according to the present invention is dragged and dropped onto a segmented image.
- FIG. 5 is a diagram illustrating an example in which a caption editing unit of a caption data editing system in a single screen according to the present invention receives texts for a plurality of languages in a predetermined language library.
- FIG. 6 is a diagram illustrating an example in which a subtitle editing unit of a caption data editing system in a single screen according to the present invention performs search, recording, and preview functions for audio to be inserted into segmented video.
- FIG. 7 is a block diagram illustrating a caption merging unit and a caption transmitting unit of a caption data editing system in a single screen according to the present invention.
- FIG. 8 is a flowchart showing a method of editing caption data in a single screen according to the present invention.
- FIG. 9 is a flowchart showing additional steps performed after step S30 and before step S40 of the method of editing caption data in a single screen according to the present invention.
- FIG. 10 is a flowchart showing still another set of steps performed after step S30 and before step S40 of the method of editing caption data in a single screen according to the present invention.
- As shown in FIG. 2, the subtitle data editing system S for a single screen includes a screen configuration unit 10, a video index unit 20, a video division unit 30, and a subtitle editing unit 40.
- the screen configuration unit 10 displays the image index unit 20, the image division unit 30, and the caption editing unit 40 in a predetermined area.
- the image indexing unit 20 indexes the image data stored in the local PC or the image data uploaded to the online platform, and outputs the indexed data to the screen configuration unit 10.
- the image divider 30 divides the image data received from the image index unit 20 into segments each having a length corresponding to the input value, and outputs the segmented image to the screen configuration unit 10.
- As shown in FIG. 3, the video data corresponding to segment #1 is divided at the starting point, and the video data corresponding to segment #2 is divided at the point 2.4 seconds after the starting point.
- The segment length is either a preset unit or a length corresponding to the input drag value, with each end point set between 0.1 second and 5 seconds after its start point.
- The subtitle editing unit 40 generates subtitle data by receiving and synchronizing the texts to be inserted into each segmented video and, as shown in FIG. 4, superimposes the subtitle data on the segmented video and outputs it to the screen configuration unit 10.
- the subtitle editing unit 40 controls the output of the subtitle data to be superimposed on the segmented image so as to correspond to the input event value as shown in FIG.
- The event value specifies an animation effect for the text contained in the subtitle data within the segmented video, such as flying in, disappearing, appearing, or shaking, and a formatting effect covering the font, size, or color of the text.
- As shown in FIG. 5, the subtitle editing unit 40 receives the text to be inserted into the segmented video, accepting text for each of a plurality of languages in the preset per-language library, to generate a single set of subtitle data.
- the user can select a language desired to be displayed in one subtitle data, and can view an image including the selected subtitle.
- the subtitle editing unit 40 inserts predetermined sound data or input sound data into the segmented image to provide a dubbing function so that the sound inserted when reproducing the image data is output.
- The input sound data includes any one of a pre-stored sound source, background music, a sound effect, or audio input through real-time recording.
- The subtitle editing unit 40 converts the received text into a voice file through the TTS engine and applies a preset rendering effect to the converted voice file for dubbing.
- After receiving a selection between a male and a female voice, the subtitle editing unit 40 adjusts effects such as the pitch and speed of the voice and the echo effect through frequency-band quality correction of the equalizer function, so that the dubbed voice sounds more natural.
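As a rough numeric illustration of this kind of adjustment, a toy echo and speed change on a raw sample list might look like the following. Real equalization works on frequency bands, which this sketch does not attempt; it only shows the shape of the controls described above.

```python
def apply_echo(samples, delay, decay=0.5):
    """Mix each sample with a decayed copy `delay` samples earlier
    (a simple feedback echo)."""
    out = list(samples)
    for i in range(delay, len(out)):
        out[i] += decay * out[i - delay]
    return out

def change_speed(samples, factor):
    """Crude speed-up by decimation: keep every `factor`-th sample."""
    return samples[::factor]
```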
- The subtitle editing unit 40 indexes a pre-stored audio file in response to a click signal on the audio search button of the screen configuration unit 10 and inserts it into the segmented video.
- the subtitle editing unit 40 is configured to insert an audio file recorded in real time in a segmented image according to a click signal for a recording button provided in the screen configuration unit 10.
- On receiving a click signal on the preview button of the screen configuration unit 10, the subtitle editing unit 40 outputs the sound data to be inserted into the segmented video, or already inserted sound data, for preview.
- As shown in FIG. 7, the subtitle data editing system S for a single screen further includes a subtitle merging unit 50 that merges the subtitle data generated by the subtitle editing unit 40 into the video data indexed by the video index unit 20, and a subtitle transmission unit 60 that uploads or downloads the single file generated by the subtitle merging unit 50 to or from a local PC or an online platform.
- Specifically, upon receiving a click signal on the merge button of the screen configuration unit 10, the subtitle merging unit 50 generates a single file from the video data and the subtitle data, and the subtitle transmission unit 60 uploads or downloads that single file to or from a local PC or an online platform.
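A minimal sketch of the time-series merge into one self-contained file, assuming a JSON container purely for illustration (the patent does not specify a file format):

```python
import json

def merge_to_single_file(video_name, subtitles, out_path):
    """Write one self-contained file holding the video reference and
    its subtitles ordered by start time."""
    ordered = sorted(subtitles, key=lambda s: s["start"])
    payload = {"video": video_name, "subtitles": ordered}
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(payload, f, ensure_ascii=False)
    return out_path
```

The returned path is what a transmission unit would then upload to a local PC or an online platform.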
- the image indexing unit 20 indexes the image data stored in the local PC or the image data uploaded to the online platform (S10).
- the image divider 30 divides the image data into segments each having a length corresponding to the input value (S20).
- the subtitle editing unit 40 receives and synchronizes the texts to be inserted into the segmented images, respectively, to generate subtitle data (S30).
- the subtitle merging unit 50 merges the video data and the caption data in a time series to generate a single file (S40).
- the subtitle transmission unit 60 uploads a single file to the local PC or online platform (S50).
- Additional steps between step S30 and step S40 of the subtitle data editing method for a single screen according to the present invention are described below with reference to FIG. 9.
- the subtitle editing unit 40 superimposes the subtitle data on the segmented image so as to correspond to the dragged and dropped coordinates (S60).
- the subtitle editing unit 40 controls the output of the subtitle data to be superimposed on the segmented video so as to correspond to the input event value (S70).
- Still other steps between step S30 and step S40 of the method of editing subtitle data on a single screen according to the present invention are described below with reference to FIG. 10.
- After step S30, the subtitle editing unit 40 determines whether to insert preset sound data or received sound data into the segmented video (S80).
- If the determination in step S80 is to insert preset sound data, the subtitle editing unit 40 indexes the pre-stored audio file and inserts it into the segmented video (S81).
- If the determination is to insert received sound data, the subtitle editing unit 40 inserts the audio file recorded in real time into the segmented video (S82).
- After step S30, the subtitle editing unit converts the input text into a voice file through the TTS engine.
- the subtitle editing unit receives an equalizer value for adjusting any one of the pitch, the speed, and the echo of the voice.
- the subtitle editing unit corrects the sound quality of the converted audio file and inserts the generated audio file into the segmented image.
- As described above, the system and method for editing subtitle data on a single screen divide video data into segments of a given size and provide a working environment in which subtitle data can be generated and modified simultaneously on a single screen, so that segmentation and subtitle generation are performed in parallel rather than in time-separated stages. A drag function for subtitle data superimposed on the segmented video allows the position of the subtitle data to be corrected instantly within the video, and sound effects can be produced and inserted alongside the subtitle data through audio insertion into the segmented video and dubbing via audio recording.
- Video divider 40 Subtitle editor
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Studio Circuits (AREA)
- Television Signal Processing For Recording (AREA)
Claims (12)
- A subtitle editing system comprising: a video index unit that indexes video data stored on a local PC or uploaded to an online platform and outputs it to a screen configuration unit; a video division unit that divides the video data into segments whose length corresponds to an input value and outputs the segmented video to the screen configuration unit; and a subtitle editing unit that receives and synchronizes the texts to be inserted into each segmented video to generate subtitle data, superimposes the subtitle data on the segmented video at dragged-and-dropped coordinates, and outputs the result to the screen configuration unit.
- The system of claim 1, wherein the subtitle editing unit controls the output of the subtitle data to be superimposed on the segmented video in accordance with an input event value.
- The system of claim 2, wherein the event value is a value for outputting an animation effect comprising any one of flying in, disappearing, appearing, or shaking of the text contained in the subtitle data within the segmented video, and a formatting effect comprising any one of the font, size, or color of the text.
- The system of claim 1, wherein the subtitle editing unit receives the text to be inserted into the segmented video and generates a single set of subtitle data by receiving text for each of a plurality of languages in a preset per-language library.
- The system of claim 1, wherein the subtitle editing unit provides a dubbing function by inserting preset sound data or received sound data into the segmented video so that the inserted sound is output when the video data is played back.
- The system of claim 1, wherein the subtitle editing unit outputs sound data to be inserted into the segmented video, or already inserted sound data, so that it can be previewed.
- The system of claim 1, further comprising: a subtitle merging unit that merges the subtitle data into the video data in time series to generate a single file; and a subtitle transmission unit that uploads the single file to a local PC or an online platform.
- The system of claim 1, wherein the subtitle editing unit converts input text into a voice file and performs dubbing by inserting a preset rendering effect into the converted voice file.
- A subtitle editing method comprising the steps of: (a) a video index unit indexing video data stored on a local PC or uploaded to an online platform; (b) a video division unit dividing the video data into segments whose length corresponds to an input value; (c) a subtitle editing unit receiving and synchronizing the texts to be inserted into each segmented video to generate subtitle data; (d) a subtitle merging unit merging the video data and the subtitle data in time series to generate a single file; and (e) a subtitle transmission unit uploading the single file to a local PC or an online platform.
- The method of claim 9, comprising, after step (c): (f) the subtitle editing unit superimposing the subtitle data on the segmented video at dragged-and-dropped coordinates; and (g) the subtitle editing unit controlling the output of the subtitle data to be superimposed on the segmented video in accordance with an input event value.
- The method of claim 9, comprising, after step (c): (h) the subtitle editing unit determining whether to insert preset sound data or received sound data into the segmented video; (i) when the determination in step (h) is to insert preset sound data, the subtitle editing unit indexing a pre-stored audio file and inserting it into the segmented video; and (j) when the determination in step (h) is to insert received sound data, the subtitle editing unit inserting an audio file recorded in real time into the segmented video.
- The method of claim 9, comprising, after step (c): (k) the subtitle editing unit converting input text into a voice file through a TTS engine; (l) the subtitle editing unit receiving an equalizer value for adjusting any one of the pitch, speed, or echo of the voice; and (m) the subtitle editing unit correcting the sound quality of the converted voice file and inserting the generated audio file into the segmented video.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2017-0129845 | 2017-10-11 | ||
KR20170129845 | 2017-10-11 | ||
KR1020170139033A KR101961750B1 (ko) | 2017-10-11 | 2017-10-25 | System for editing subtitle data on a single screen |
KR10-2017-0139033 | 2017-10-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019074145A1 true WO2019074145A1 (ko) | 2019-04-18 |
Family
ID=65907850
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2017/011862 WO2019074145A1 (ko) | 2017-10-11 | 2017-10-25 | System and method for editing subtitle data on a single screen |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR101961750B1 (ko) |
WO (1) | WO2019074145A1 (ko) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN113905267B (zh) * | 2021-08-27 | 2023-06-20 | 北京达佳互联信息技术有限公司 | Subtitle editing method and apparatus, electronic device, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- KR20050118733A (ko) * | 2003-04-14 | 2005-12-19 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | System and method for performing automatic dubbing on an audiovisual stream |
- KR100650410B1 (ko) * | 2005-03-15 | 2006-11-27 | 이현무 | Method for processing translated subtitles of video works using non-linear editing |
- KR100957244B1 (ko) * | 2008-02-20 | 2010-05-11 | (주)아이유노글로벌 | Method for processing subtitles of edited video using subtitle data synchronization |
- JP2011155329A (ja) * | 2010-01-26 | 2011-08-11 | Nippon Telegr & Teleph Corp <Ntt> | Video content editing apparatus, video content editing method, and video content editing program |
- KR101576094B1 (ko) * | 2014-04-22 | 2015-12-09 | 주식회사 뱁션 | System and method for inserting subtitles using animation |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2003279350B2 (en) | 2002-11-15 | 2008-08-07 | Interdigital Ce Patent Holdings | Method and apparatus for composition of subtitles |
-
2017
- 2017-10-25 WO PCT/KR2017/011862 patent/WO2019074145A1/ko active Application Filing
- 2017-10-25 KR KR1020170139033A patent/KR101961750B1/ko active IP Right Grant
Also Published As
Publication number | Publication date |
---|---|
KR101961750B1 (ko) | 2019-03-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- CN111538851B (zh) | Method, system, device, and storage medium for automatically generating presentation videos | |
- EP1425736B1 (en) | Method for processing audiovisual data using speech recognition | |
- EP1295482B1 (en) | Generation of subtitles or captions for moving pictures | |
- US6185538B1 (en) | System for editing digital video and audio information | |
- US7362946B1 (en) | Automated visual image editing system | |
- US20060285654A1 (en) | System and method for performing automatic dubbing on an audio-visual stream | |
- WO2004040576A8 (en) | Methods and apparatus for use in sound replacement with automatic synchronization to images | |
- US20190096407A1 (en) | Caption delivery system | |
- JP2003009096A (ja) | Method and apparatus for providing improved user-selectable closed captions | |
- WO2020091431A1 (ko) | Subtitle generation system using graphic objects | |
- KR950034155A (ko) | Sound re-recording system and re-recording method for audiovisual media | |
- JPH07261652A (ja) | Language learning method and recording medium for language learning | |
- WO2019074145A1 (ko) | System and method for editing subtitle data on a single screen | |
- JP2021090172A (ja) | Subtitle data generation device, content distribution system, video playback device, program, and subtitle data generation method | |
- JPH0991928A (ja) | Video editing method | |
- WO2017051955A1 (ko) | Apparatus and method for applying video effects | |
- WO2022065537A1 (ko) | Video playback device providing subtitle synchronization and operating method thereof | |
- US7430564B2 (en) | Performance information reproducing apparatus and method and performance information reproducing program | |
- JP4206445B2 (ja) | Subtitled program production method and subtitled program production system | |
- JP2584070B2 (ja) | Data editing apparatus and data editing method | |
- JP4500957B2 (ja) | Subtitle production system | |
- JP2008134686A (ja) | Drawing program, programmable display, and display system | |
- JPH0792938A (ja) | Guidance device | |
- AU745436B2 (en) | Automated visual image editing system | |
- KR102463283B1 (ko) | Automatic video content translation system for both hearing-impaired and non-impaired viewers | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17928646 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17928646 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26.01.2021) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17928646 Country of ref document: EP Kind code of ref document: A1 |