WO2018120821A1 - Method and device for producing a presentation - Google Patents

Method and device for producing a presentation

Info

Publication number
WO2018120821A1
WO2018120821A1 (application PCT/CN2017/094600)
Authority
WO
WIPO (PCT)
Prior art keywords
presentation
audio data
focus
speech
web page
Prior art date
Application number
PCT/CN2017/094600
Other languages
English (en)
Chinese (zh)
Inventor
吴亮
黄薇
高峰
钟恒
Original Assignee
北京奇虎科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京奇虎科技有限公司 filed Critical 北京奇虎科技有限公司
Publication of WO2018120821A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/12Use of codes for handling textual entities
    • G06F40/14Tree-structured documents
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/12Use of codes for handling textual entities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting

Definitions

  • The present application relates to the field of web technologies, and in particular to a method and a device for creating a presentation.
  • To enable distance learning, a user typically records the on-screen operation of a presentation while speaking, keeping the speech synchronized with the presentation.
  • However, the video data obtained by recording the operation of the presentation is bulky and takes up a lot of storage space.
  • To mitigate this, the video data is often compressed, which reduces its resolution and leaves the content of the presentation blurry.
  • The present application has been made in order to provide a method for creating a presentation, and a corresponding device for creating a presentation, that overcome the above problems or at least partially solve them.
  • According to one aspect of the present application, a method of creating a presentation is provided, including:
  • According to another aspect of the present application, a production device for a presentation is provided, including:
  • a web page loading module, adapted to load a web page generated for the presentation;
  • a presentation element configuration module, adapted to configure a presentation element in the web page;
  • an audio data adding module, adapted to add audio data to the presentation element on a time axis, so that the audio data is played synchronously when the presentation element is played along the time axis;
  • a speech focus action setting module, adapted to set a speech focus action on the presentation element, so that the presentation is focused in accordance with the speech focus action.
  • According to another aspect, a computer program is provided, comprising computer readable code which, when run on a terminal device, causes the terminal device to perform any of the aforementioned methods of creating a presentation.
  • According to another aspect, a computer readable medium is provided, storing the computer program for the method of creating a presentation described above.
  • In the embodiment of the present application, a web page generated for a presentation is loaded in a client, presentation elements are configured in the web page, and audio data is added to the presentation elements on the timeline, so that the audio data plays synchronously when the presentation elements are played along the timeline. Audio data can also be re-added within a selected target time interval on the timeline. With the web page as the carrier of the presentation, the presentation elements and the audio data are played synchronously for the user.
  • FIG. 1 is a flow chart showing the steps of an embodiment of a method for creating a presentation according to an embodiment of the present application
  • FIGS. 2A-2C illustrate example diagrams of a configuration presentation element in accordance with one embodiment of the present application
  • FIGS. 3A-3D illustrate example diagrams of editing the playback order of presentation elements and audio data, in accordance with one embodiment of the present application;
  • FIGS. 4A-4D illustrate example diagrams of playing presentation elements and audio data in accordance with one embodiment of the present application
  • FIGS. 5A-5B illustrate example diagrams of recording audio data in accordance with one embodiment of the present application
  • FIGS. 6A-6B illustrate example diagrams of a focus element in accordance with one embodiment of the present application;
  • FIG. 7 is a structural block diagram of a device for fabricating a presentation according to an embodiment of the present application.
  • FIG. 8 schematically shows a block diagram of a terminal device for performing the method according to the present application; and
  • FIG. 9 schematically shows a storage unit for holding or carrying program code implementing the method according to the present application.
  • FIG. 1 is a flow chart showing the steps of an embodiment of a method for creating a presentation according to an embodiment of the present application. Specifically, the method may include the following steps:
  • Step 101: Load a web page generated for the presentation.
  • When creating a presentation, the user can log in to the server with a user account on a client such as a browser, and send a request to the server to generate a presentation.
  • In response, the server can create a new presentation and assign it a unique presentation identifier, such as slide_id (slide ID). The identifier is used to generate a unique editing URL (Uniform Resource Locator) for the presentation, and the editing URL is returned to the client.
  • The client accesses the editing URL to load a web page, which is the carrier of the presentation; that is, the content of the presentation can be edited in the web page.
  • Information about the presentation can be displayed in an area such as the user center.
  • The client can also load the web page directly by using the editing URL, which is not limited in this embodiment of the present application.
  • For playback, the presentation ID is likewise used to generate a unique playback URL for the presentation, which is returned to the client.
  • The client can access the playback URL to load the web page, which is the carrier of the presentation; that is, the presentation can be played in the web page.
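The URL flow above — a unique slide_id producing distinct editing and playback URLs — can be sketched in JavaScript. The base URL, path shapes, and function name here are invented for illustration; the patent does not specify them:

```javascript
// Hypothetical sketch: derive per-presentation editing and playback URLs
// from the unique slide_id assigned by the server.
function makePresentationUrls(slideId, base = "https://slides.example.com") {
  // Reject identifiers that would not be safe in a URL path segment.
  if (!/^[A-Za-z0-9_-]+$/.test(slideId)) {
    throw new Error("invalid slide_id");
  }
  return {
    // URL the client loads to edit the presentation's web page.
    editUrl: `${base}/edit/${slideId}`,
    // URL the client loads to play the presentation.
    playUrl: `${base}/play/${slideId}`,
  };
}
```

A client would request one of these URLs and receive the web page that carries the presentation, in edit or play mode respectively.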
  • Step 102: Configure a presentation element in the web page.
  • the presentation elements can include one or more of the following:
  • Text, images, graphics of specified shapes, lines, tables, frames, and code.
  • The user can put a presentation element into the editing state by clicking it or by a similar action.
  • When a presentation element enters the editing state, an editing toolbar for that element pops up in the web page, displaying the element's parameters for the user to adjust.
  • For example, for a text box, the editing toolbar popped up in the web page lets the user set element parameters such as font alignment, playback multiplier, font color, line spacing, and letter spacing.
  • For a table, the editing toolbar popped up in the web page lets the user set element parameters such as the number of rows, the number of columns, cell margins, border width, and border color.
  • After editing, the user can save manually, or a script executed by the client in the web page can save automatically.
  • The parameters configured for the presentation elements in the web page can be synchronized to the server on save; the server stores them under the presentation (identified by the presentation ID) for subsequent loading.
  • When the client later loads the web page with the editing URL, it loads the appropriate presentation elements according to the previously set element parameters, so the user can continue editing; this embodiment of the application does not limit this.
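The save/load cycle just described — element parameters synchronized to the server and stored under the presentation ID — can be sketched as pure logic. The in-memory store, function names, and parameter names are illustrative assumptions, not the patent's implementation:

```javascript
// Minimal sketch of the save/sync step: element parameters configured in the
// web page are stored under the presentation's slide_id, so a later load of
// the editing URL can restore them.
const serverStore = new Map(); // slide_id -> { elementId -> parameters }

function saveElementParams(slideId, elementId, params) {
  const doc = serverStore.get(slideId) || {};
  // Merge new parameters over any previously saved ones for this element.
  doc[elementId] = { ...(doc[elementId] || {}), ...params };
  serverStore.set(slideId, doc);
}

function loadPresentation(slideId) {
  // Returns the saved element parameters for the edit page to apply on load.
  return serverStore.get(slideId) || {};
}
```

In this design, repeated saves are idempotent merges, which matches the manual-or-automatic saving described above: saving twice with partial parameter sets leaves the union stored.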
  • Step 103: Add audio data to the presentation element on a time axis, so that the audio data is played synchronously when the presentation element is played along the time axis.
  • To control the playing of the presentation, the client can configure a timeline and set the playing time of each presentation element on the timeline.
  • The user can record audio data, such as a speech, and the client adds it to the presentation elements, so that the presentation elements and the audio data are played together along the time axis and remain synchronized.
  • The user can set the playing time of each presentation element so that, as time passes and the audio data plays, the presentation elements are switched in order.
  • For example, for the poem "Quiet Night Thoughts" by Li Bai, the text "Quiet Night Thoughts", "Li Bai", and "Before my bed, the bright moonlight" can be displayed in sequence.
  • During playback, a timing control is displayed in the lower left corner; as time passes, the audio data plays and the presentation elements switch in order, displaying the text "Quiet Night Thoughts", "Li Bai", "Before my bed, the bright moonlight".
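The timeline behavior in this example can be sketched as a small scheduling function: given elements with start times on the time axis, it returns which elements should be visible at a given playback instant. The element texts follow the "Quiet Night Thoughts" example; the specific start times are assumptions:

```javascript
// Given a timeline of { text, startTime } entries, return the texts of all
// elements whose start time has been reached at playback time tSeconds,
// in the order they appear on the time axis.
function elementsVisibleAt(timeline, tSeconds) {
  return timeline
    .filter((e) => e.startTime <= tSeconds)
    .sort((a, b) => a.startTime - b.startTime)
    .map((e) => e.text);
}

// Illustrative timeline for the poem example; times in seconds are assumed.
const timeline = [
  { text: "Quiet Night Thoughts", startTime: 0.9 },
  { text: "Li Bai", startTime: 2.0 },
  { text: "Before my bed, the bright moonlight", startTime: 3.0 },
];
```

A player loop would call `elementsVisibleAt` with the audio's current playback position, so the displayed elements always follow the audio and the two stay synchronized.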
  • step 103 may include the following sub-steps:
  • Sub-step S11: Call the recorder to record audio data for the presentation element.
  • the microphone can be called to collect the original audio data, and the recorder is called to record the audio data.
  • A recording control can be loaded; after the user clicks it, recording starts, and a visualization element for the audio data is displayed on the time axis.
  • the sub-step S11 may include the following sub-steps:
  • Sub-step S111: Acquire the original audio stream data collected by the microphone.
  • Sub-step S112: Pass the original audio stream data to the recorder.
  • Sub-step S113: In the recorder, visualize the original audio stream data according to the recording parameters, and convert it into audio data of a specified format.
  • The client can acquire the original audio stream data collected by the microphone through the getUserMedia interface provided by WebRTC (Web Real-Time Communication).
  • A script processing node is created with the createScriptProcessor method of the Web Audio API and is used to process the raw audio stream data in JavaScript.
  • the audio source node is connected to the processing node, and the processing node is connected to the audio output node to form a complete processing flow.
  • The processing node can listen for the AudioProcessingEvent via its onaudioprocess handler; at regular intervals, the event delivers a chunk of the original audio stream data for processing.
  • The original audio stream data is visualized by a drawAudioWave method (the visualization elements are generated from attributes of the original audio stream data such as frequency and waveform), and the audio data is passed to a Web Worker for audio processing.
  • When recording stops, audio processing is paused and a file in a format such as WAV is requested from the Web Worker; the Web Worker converts the accumulated original audio stream data into audio data in that format and returns it.
  • Because the computing power of the client (such as a browser) is usually limited, while temporarily storing and processing the original audio stream data generally requires substantial computation, a Web Worker is introduced to run this work on another thread, ensuring that the client's other processing can proceed normally.
  • In another embodiment, step 103 may include the following sub-steps:
  • Sub-step S21: Input text information for the presentation element.
  • Sub-step S22: Convert the text information into audio data.
  • If the terminal where the client is located is not equipped with a microphone, the user can input text information for the presentation element, and the text information can be converted into audio data by speech synthesis.
  • Speech synthesis is also known as Text to Speech (TTS) technology.
  • Segment characteristics such as pitch, duration, and intensity are determined so that the synthesized speech expresses the semantics correctly and sounds more natural.
  • Then, the speech primitives for the individual words or phrases of the processed text are extracted from a speech synthesis library, their prosodic characteristics are adjusted and modified using a specific speech synthesis technique, and finally speech data meeting the requirements is synthesized.
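As an illustration of the concatenative flow just described — primitive lookup, prosody adjustment, synthesis — here is a toy sketch in which "primitives" are short sample arrays and the only prosody control is a gain for intensity. The library contents, unit names, and gain model are entirely invented; real TTS systems are far more elaborate:

```javascript
// Toy "speech synthesis library": each text unit maps to a short sample array.
// The units and samples are invented for illustration only.
const primitiveLibrary = {
  "ni": [0.1, 0.2, 0.1],
  "hao": [0.3, 0.2, 0.3, 0.1],
};

// Look up the primitive for each processed text unit, apply a prosody
// adjustment (here just a gain standing in for intensity), and concatenate
// the results into one output signal.
function synthesize(units, prosody = { gain: 1.0 }) {
  const out = [];
  for (const unit of units) {
    const primitive = primitiveLibrary[unit];
    if (!primitive) continue; // unknown unit: skipped in this sketch
    for (const sample of primitive) out.push(sample * prosody.gain);
  }
  return out;
}
```

The shape of the pipeline is the point: text units in, per-unit primitives retrieved, prosody applied, samples concatenated out.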
  • The above manners of adding audio data are only examples. When implementing the embodiment of the present application, other manners may be adopted according to actual conditions, for example, directly importing existing audio data; this is not limited here. Those skilled in the art may also adopt other manners of adding audio data according to actual needs, and the embodiment of the present application does not limit this.
  • the audio data on the time axis can be uploaded to the server.
  • Specifically, the audio data can be retrieved from the Web Worker and compressed with the amrnb.js library into a specified format such as AMR, then uploaded to the server, which stores it under the presentation (identified by the presentation ID) for subsequent loading.
  • Step 104: Set a speech focus action on the presentation element, so that the presentation is focused according to the speech focus action.
  • For a presentation element that should be emphasized, a speech focus action may be configured on it, so that when the presentation is played, a focusing operation is performed and the viewing user's attention is drawn to that presentation element.
  • step 104 may include the following sub-steps:
  • Sub-step S31: Determine the presentation element to be focused.
  • Sub-step S32: Add a focus element for the presentation element to be focused.
  • Sub-step S33: Record the focus element and the corresponding time point, so that when the presentation is played to that time point, the presentation element to be focused is focused.
  • The user can select a presentation element on the current web page, for example by a mouse click, and add a focus element to it.
  • The presentation element to be focused is at a first position in the web page; a second position is calculated in the web page based on the first position, for example a position near the upper-left or upper-right corner of the first position, and the focus element is added at the second position.
  • the text "still night thinking” is a presentation element to be focused, with the upper left corner of the first position as a second position, and an arrow-shaped focusing element is added at the second position, Point to the text “Quiet Nights” and set its time point to 4 seconds, that is, you can play the text "Quiet Nights” in 0.9 seconds, play the arrow-shaped focus elements in 4 seconds, and focus on the text "Quiet Nights”.
  • In the embodiment of the present application, a web page generated for a presentation is loaded in a client, presentation elements are configured in the web page, and audio data is added to the presentation elements on the timeline, so that the audio data plays synchronously when the presentation elements are played along the timeline. Audio data can also be re-added within a selected target time interval on the timeline. With the web page as the carrier of the presentation, the presentation elements and the audio data are played synchronously for the user.
  • FIG. 7 is a structural block diagram of an embodiment of a device for creating a presentation according to an embodiment of the present application, which may specifically include the following modules:
  • a web page loading module 701, configured to load a web page generated for the presentation
  • a presentation element configuration module 702 adapted to configure a presentation element in the web page
  • an audio data adding module 703, adapted to add audio data to the presentation element on a time axis, so that the audio data is played synchronously when the presentation element is played along the time axis;
  • a speech focus action setting module 704, adapted to set a speech focus action on the presentation element, so that the presentation is focused in accordance with the speech focus action.
  • the audio data adding module 703 includes:
  • a recording sub-module adapted to call the recorder to record audio data to the presentation element.
  • the recording submodule includes:
  • an original audio stream data acquiring unit, adapted to acquire the original audio stream data collected by the microphone;
  • a recorder incoming unit adapted to transmit the raw audio stream data to the recorder
  • a recorder processing unit adapted to visualize the original audio stream data in the recorder according to recording parameters, and convert the original audio stream data into audio data of a specified format.
  • the audio data adding module 703 includes:
  • a text information input submodule adapted to input text information to the presentation element
  • a text information conversion sub-module adapted to convert the text information into audio data.
  • the speech focus action setting module 704 includes:
  • a presentation element determination sub-module adapted to determine a presentation element to be focused
  • a focus element adding submodule, adapted to add a focus element for the presentation element to be focused;
  • an information recording sub-module, adapted to record the focus element and a corresponding time point, so that when the presentation is played to that time point, the presentation element to be focused is focused.
  • the focused element adding submodule includes:
  • a first location determining unit configured to determine that the presentation element to be focused is at a first location of the web page
  • a second location determining unit configured to calculate a second location based on the first location in the web page
  • a position adding unit adapted to add a focusing element at the second position.
  • In one embodiment, the device further includes:
  • An audio uploading module adapted to upload audio data on the timeline to a server.
  • For the device embodiment, since it is basically similar to the method embodiment, the description is relatively simple; for relevant parts, refer to the description of the method embodiment.
  • the various component embodiments of the present invention may be implemented in hardware, or in a software module running on one or more processors, or in a combination thereof.
  • a microprocessor or digital signal processor may be used in practice to implement some or all of the functionality of some or all of the components of the presentation device in accordance with embodiments of the present invention.
  • the invention can also be implemented as a device or device program (e.g., a computer program and a computer program product) for performing some or all of the methods described herein.
  • a program implementing the invention may be stored on a computer readable medium or may be in the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
  • Figure 8 shows a terminal that can implement the production of a presentation in accordance with the present invention.
  • the terminal device conventionally includes a processor 810 and a computer program product or computer readable medium in the form of a memory 820.
  • the memory 820 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read Only Memory), an EPROM, a hard disk, or a ROM.
  • Memory 820 has a memory space 830 for program code 831 for performing any of the method steps described above.
  • storage space 830 for program code may include various program code 831 for implementing various steps in the above methods, respectively.
  • the program code can be read from or written to one or more computer program products.
  • Such computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • Such a computer program product is typically a portable or fixed storage unit as described with reference to FIG. 9.
  • The storage unit may have storage segments, storage space, and the like arranged similarly to the memory 820 in the terminal device of FIG. 8.
  • the program code can be compressed, for example, in an appropriate form.
  • The storage unit includes computer readable code 831', i.e., code readable by a processor such as 810, which, when executed by the terminal device, causes the terminal device to perform each step of the methods described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention relates to a method and a device for producing a presentation. The method comprises the steps of: loading a web page generated for a presentation (101); configuring a presentation element in the web page (102); adding audio data to the presentation element on a time axis, and playing the audio data synchronously when the presentation element is played along the time axis (103); and setting a speech focus action for the presentation element, and focusing on the presentation according to the speech focus action (104). Compared with video data, the method and device use a web element as the presentation element, giving a considerably smaller volume and reduced storage-space usage; moreover, since the web element is rendered and loaded directly in the web page, no compression processing is needed and the clarity of the web element can be guaranteed.
PCT/CN2017/094600 2016-12-26 2017-07-27 Procédé et dispositif de production d'une présentation WO2018120821A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611219546.1A CN108241596A (zh) 2016-12-26 2016-12-26 一种演示文稿的制作方法和装置
CN201611219546.1 2016-12-26

Publications (1)

Publication Number Publication Date
WO2018120821A1 true WO2018120821A1 (fr) 2018-07-05

Family

ID=62701965

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/094600 WO2018120821A1 (fr) 2016-12-26 2017-07-27 Procédé et dispositif de production d'une présentation

Country Status (2)

Country Link
CN (1) CN108241596A (fr)
WO (1) WO2018120821A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183249A (zh) * 2020-09-14 2021-01-05 北京神州泰岳智能数据技术有限公司 一种视频处理方法和装置
CN112233669A (zh) * 2019-07-15 2021-01-15 珠海金山办公软件有限公司 一种演讲内容提示方法及系统
CN112533054A (zh) * 2019-09-19 2021-03-19 腾讯科技(深圳)有限公司 在线视频的播放方法、装置及存储介质
CN115396404A (zh) * 2022-08-08 2022-11-25 深圳乐播科技有限公司 云会议场景中主讲人讲解位置的同步投屏方法及相关装置

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110765284B (zh) * 2019-10-25 2023-03-24 天津车之家数据信息技术有限公司 一种生成演示文稿的方法、系统、计算设备及存储介质
CN112987921B (zh) * 2021-02-19 2024-03-15 车智互联(北京)科技有限公司 一种vr场景讲解方案生成方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105450944A (zh) * 2015-11-13 2016-03-30 北京自由坊科技有限责任公司 一种幻灯片和现场讲演语音同步录制与重现的方法及装置
CN105744340A (zh) * 2016-02-26 2016-07-06 上海卓越睿新数码科技有限公司 直播视频和演示文稿实时画面融合方法
CN106021334A (zh) * 2016-05-06 2016-10-12 亿瑞互动科技(北京)有限公司 一种在线教学中ppt的实时标注方法、装置及相关设备
CN106210841A (zh) * 2016-07-06 2016-12-07 深圳市矽伟智科技有限公司 一种视频同步播放方法、装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7312803B2 (en) * 2004-06-01 2007-12-25 X20 Media Inc. Method for producing graphics for overlay on a video source
CN101344883A (zh) * 2007-07-09 2009-01-14 宇瞻科技股份有限公司 记录演示文稿的方法
US20120317486A1 (en) * 2011-06-07 2012-12-13 Microsoft Corporation Embedded web viewer for presentation applications
CN105373524A (zh) * 2015-10-13 2016-03-02 百度在线网络技术(北京)有限公司 演示稿的编辑方法和装置


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233669A (zh) * 2019-07-15 2021-01-15 珠海金山办公软件有限公司 一种演讲内容提示方法及系统
CN112533054A (zh) * 2019-09-19 2021-03-19 腾讯科技(深圳)有限公司 在线视频的播放方法、装置及存储介质
CN112183249A (zh) * 2020-09-14 2021-01-05 北京神州泰岳智能数据技术有限公司 一种视频处理方法和装置
CN115396404A (zh) * 2022-08-08 2022-11-25 深圳乐播科技有限公司 云会议场景中主讲人讲解位置的同步投屏方法及相关装置
CN115396404B (zh) * 2022-08-08 2023-09-05 深圳乐播科技有限公司 云会议场景中主讲人讲解位置的同步投屏方法及相关装置

Also Published As

Publication number Publication date
CN108241596A (zh) 2018-07-03

Similar Documents

Publication Publication Date Title
WO2018120821A1 Method and device for producing a presentation
WO2018120819A1 Method and device for producing presentations
US9552807B2 (en) Method, apparatus and system for regenerating voice intonation in automatically dubbed videos
JP5030617B2 Method, system, and program for RSS content management for rendering RSS content on a digital audio player
US9203877B2 (en) Method for mobile terminal to process text, related device, and system
WO2016037440A1 Method and device for converting video speech, and server
US20090006965A1 (en) Assisting A User In Editing A Motion Picture With Audio Recast Of A Legacy Web Page
US20200058288A1 (en) Timbre-selectable human voice playback system, playback method thereof and computer-readable recording medium
US20180226101A1 (en) Methods and systems for interactive multimedia creation
US20090326948A1 (en) Automated Generation of Audiobook with Multiple Voices and Sounds from Text
WO2018120820A1 Method and apparatus for producing presentations
JP2015517684A (ja) コンテンツのカスタマイズ
Mitchel et al. Visual speech segmentation: using facial cues to locate word boundaries in continuous speech
WO2022184055A1 Speech playback method and apparatus for an article, and device, storage medium and program product
JP2023548008A (ja) 音声およびビデオ組立てのためのテキスト駆動型エディタ
US20110311201A1 (en) Recasting a legacy web page as a motion picture with audio
US20220300250A1 (en) Audio messaging interface on messaging platform
US20080243510A1 (en) Overlapping screen reading of non-sequential text
TW201331930A (zh) 用於電子系統的語音合成方法及裝置
CN113870833A (zh) 语音合成相关系统、方法、装置及设备
KR102020341B1 (ko) 악보 구현 및 음원 재생 시스템 및 그 방법
JP2020204683A (ja) 電子出版物視聴覚システム、視聴覚用電子出版物作成プログラム、及び利用者端末用プログラム
JP2010230948A (ja) コンテンツ配信システムおよびテキスト表示方法
KR20210050410A (ko) 영상 컨텐츠에 대한 합성음 실시간 생성에 기반한 컨텐츠 편집 지원 방법 및 시스템
CN113709551B (zh) 基于剧本的视频展示方法、装置和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17888058

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17888058

Country of ref document: EP

Kind code of ref document: A1