WO2020209647A1 - Method and system for generating synthetic speech for text through user interface - Google Patents
- Publication number
- WO2020209647A1 (PCT application PCT/KR2020/004857)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- speech
- sentences
- text
- style
- voice
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 61
- 238000013528 artificial neural network Methods 0.000 claims abstract description 43
- 230000000694 effects Effects 0.000 claims description 46
- 238000004458 analytical method Methods 0.000 claims description 30
- 230000000007 visual effect Effects 0.000 claims description 30
- 230000004044 response Effects 0.000 claims description 21
- 238000003058 natural language processing Methods 0.000 claims description 15
- 238000004590 computer program Methods 0.000 claims description 6
- 238000012545 processing Methods 0.000 claims description 5
- 238000007689 inspection Methods 0.000 claims description 2
- 230000015572 biosynthetic process Effects 0.000 description 49
- 238000003786 synthesis reaction Methods 0.000 description 49
- 238000004891 communication Methods 0.000 description 42
- 230000015654 memory Effects 0.000 description 34
- 230000008859 change Effects 0.000 description 25
- 239000013598 vector Substances 0.000 description 25
- 238000010586 diagram Methods 0.000 description 22
- 230000008451 emotion Effects 0.000 description 11
- 238000005516 engineering process Methods 0.000 description 8
- 230000006870 function Effects 0.000 description 8
- 238000010801 machine learning Methods 0.000 description 7
- 238000013473 artificial intelligence Methods 0.000 description 6
- 238000012805 post-processing Methods 0.000 description 6
- 230000008569 process Effects 0.000 description 5
- 230000014509 gene expression Effects 0.000 description 4
- 238000011176 pooling Methods 0.000 description 4
- 230000000306 recurrent effect Effects 0.000 description 4
- 238000004422 calculation algorithm Methods 0.000 description 3
- 238000013527 convolutional neural network Methods 0.000 description 3
- 238000000926 separation method Methods 0.000 description 3
- 238000013135 deep learning Methods 0.000 description 2
- 239000000203 mixture Substances 0.000 description 2
- 238000010295 mobile communication Methods 0.000 description 2
- 230000003595 spectral effect Effects 0.000 description 2
- 230000002194 synthesizing effect Effects 0.000 description 2
- 230000001133 acceleration Effects 0.000 description 1
- 230000006978 adaptation Effects 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 230000003190 augmentative effect Effects 0.000 description 1
- 230000002457 bidirectional effect Effects 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 230000001066 destructive effect Effects 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 230000005291 magnetic effect Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 230000001755 vocal effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
- G10L13/033—Voice editing, e.g. manipulating the voice of the synthesiser
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/08—Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
Definitions
- The present disclosure relates to a method and system for generating synthesized speech for text through a user interface, and more specifically, to a method of providing a user interface that reflects a speaker, style, speed, emotion, and context, as well as prosody and voice changes according to the context, in the output speech.
- Broadcast programs including audio content are produced and released not only for existing broadcast channels such as TV and radio, but also for web-based video services such as YouTube and podcasts provided online.
- Accordingly, applications for creating or editing such content including audio are widely used.
- For this purpose, voice synthesis technology, also called TTS (Text-To-Speech), may be used.
- A typical speech synthesis method is concatenative TTS, which stores speech in very short units such as phonemes and synthesizes speech by combining the phonemes that constitute the sentence to be synthesized.
- While such conventional speech synthesis technology can be used to produce broadcast programs, audio content generated through it does not reflect the speaker's personality or emotions, so its usefulness as audio content for broadcast production is limited.
- Accordingly, a technique is required that reflects, for each line in audio content created by speech synthesis, the style of the speaker who utters that line.
- In addition, a user interface technology is required that enables a user to intuitively and easily create and edit style-reflecting audio content based on text.
- Embodiments of the present disclosure relate to a method for generating and editing natural, realistic synthesized speech for input text through a user interface capable of reflecting the style, emotion, and context of the input text, as well as prosody and voice changes according to the context, in the synthesized speech or audio content.
- The present disclosure may be implemented in a variety of ways, including as a method, a system, an apparatus, or a computer program stored on a computer-readable storage medium.
- According to an embodiment of the present disclosure, a method of generating synthesized speech for text through a user interface includes receiving one or more sentences, determining a speech style characteristic for the received one or more sentences, and outputting synthesized speech for the one or more sentences in which the determined speech style characteristic is reflected, wherein the synthesized speech is generated based on speech data output from an artificial neural network text-to-speech model when the one or more sentences and the determined speech style characteristic are input into the model.
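To make the claimed flow concrete, here is a minimal sketch. All names (`NeuralTTSModel`, `SpeechStyle`, the field names) are hypothetical placeholders for illustration, not an API defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class SpeechStyle:
    # a few of the characteristics named in the disclosure: speaker/cast,
    # speech speed, and emotion; real models may expose many more
    speaker_id: str
    speed: float = 1.0
    emotion: str = "neutral"

class NeuralTTSModel:
    """Stand-in for the artificial neural network text-to-speech model."""
    def synthesize(self, sentences: list[str], style: SpeechStyle) -> bytes:
        # a real model would return output speech data (e.g., a waveform);
        # this stub returns empty audio so the sketch runs end to end
        return b""

def generate_synthetic_speech(model: NeuralTTSModel,
                              sentences: list[str],
                              style: SpeechStyle) -> bytes:
    # 1) receive one or more sentences, 2) determine the speech style,
    # 3) input both into the model and output the synthesized speech
    return model.synthesize(sentences, style)

audio = generate_synthetic_speech(NeuralTTSModel(),
                                  ["Today is the day to meet customers."],
                                  SpeechStyle(speaker_id="Jinhyuk"))
```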
- According to an embodiment, changing the setting information for at least a portion of the output one or more sentences includes changing setting information for visually displaying that portion of the sentences.
- the step of receiving one or more sentences includes receiving a plurality of sentences, and the method further comprises adding a visual indication indicating characteristics of an effect to be inserted between the plurality of sentences.
- the synthesized voice includes sound effects generated based on the characteristics of the effects included in the added visual display.
- According to an embodiment, the effect to be inserted between the plurality of sentences includes silence, and adding a visual indication indicating the characteristics of the effect includes adding a visual indication indicating the duration of the silence to be inserted between the plurality of sentences.
- According to an embodiment, receiving one or more sentences includes receiving a plurality of sentences, the method further includes separating the plurality of sentences into one or more sentence sets, and determining the speech style characteristic for the received sentences includes determining a cast corresponding to the separated sentence sets and setting a predetermined speech style characteristic corresponding to the determined cast.
- According to an embodiment, the separated sentence sets are analyzed using natural language processing, and determining a cast corresponding to the separated sentence sets includes outputting one or more cast candidates recommended based on the analysis result of the sentence sets and selecting at least some of the output cast candidates.
- According to an embodiment, the separated sentence sets are grouped based on the analysis result, and determining a cast corresponding to the separated sentence sets includes outputting, for each grouped sentence set, one or more cast candidates recommended based on the analysis result and selecting at least some of the output cast candidates.
- According to an embodiment, determining the speech style characteristic for the received sentences includes outputting one or more speech style characteristic candidates recommended based on the analysis result of the sentence sets and selecting at least some of the output candidates.
- According to an embodiment, the synthesized speech for the one or more sentences is inspected, and the method further includes changing the speech style characteristic applied to the synthesized speech based on the inspection result.
- audio content including synthesized speech is generated.
- the method further comprises receiving the generated audio content in response to a request for downloading the generated audio content.
- According to an embodiment, the method further includes playing the generated audio content in real time.
- According to an embodiment, the method further includes mixing the generated audio content with video content.
- According to an embodiment, the method further includes outputting the received one or more sentences, and determining the speech style characteristic includes selecting at least some of the output sentences, outputting an interface for changing the speech style characteristic for the selected portion, and changing a value representing the speech style characteristic for that portion through the interface, wherein the synthesized speech is changed based on speech data output from the artificial neural network text-to-speech model after the selected portion and the values representing the changed speech style characteristic are input into the model.
- According to an embodiment of the present disclosure, a computer program stored in a computer-readable recording medium is provided for executing the method of generating synthesized speech for text on a computer.
- According to some embodiments of the present disclosure, the user interface for generating and editing audio content operates like a document editor (e.g., a word processor), so that when a user opens a document and edits its content, audio content can be automatically generated according to the look and feel of the document.
- a voice style may be proposed, and the proposed style may be easily selected by a user.
- a speech style characteristic for text is automatically determined using natural language processing or the like.
- According to some embodiments, the user interface device for generating and editing audio content enables a user to adjust the pitch and/or speed of each word, phoneme, or syllable for detailed styling of the voice.
- According to some embodiments, the user interface for creating and editing audio content visually displays the style selected for the text so that the user can intuitively recognize it, making it easy for the user to edit the style.
- According to some embodiments, a synthesized voice reflecting the speaker or style determined for the text may be generated, and audio content including the generated synthesized voice may be provided.
- FIG. 1 is a diagram illustrating an exemplary screen of a user interface for providing a speech synthesis service according to an embodiment of the present disclosure.
- FIG. 2 is a schematic diagram illustrating a configuration in which a plurality of user terminals and a synthesized voice generating system are connected to enable communication in order to provide a service for generating a synthesized voice for text according to an embodiment of the present disclosure.
- FIG. 3 is a block diagram illustrating an internal configuration of a user terminal and a synthesized voice generating system according to an embodiment of the present disclosure.
- FIG. 4 is a block diagram illustrating an internal configuration of a processor of a user terminal according to an embodiment of the present disclosure.
- FIG. 5 is a block diagram illustrating an internal configuration of a processor of a synthesized speech generation system according to an embodiment of the present disclosure.
- FIG. 6 is a flowchart illustrating a method of generating synthesized speech according to an embodiment of the present disclosure.
- FIG. 7 is a flowchart illustrating a method of generating a synthesized speech for changing setting information according to an embodiment of the present disclosure.
- FIG. 8 is a diagram illustrating a configuration of an artificial neural network-based text-speech synthesis apparatus and a network for extracting an embedding vector capable of distinguishing each of a plurality of speakers according to an embodiment of the present disclosure.
- FIG. 9 is a diagram illustrating an exemplary screen of a user interface providing a speech synthesis service according to an embodiment of the present disclosure.
- FIG. 10 is a diagram illustrating an exemplary screen of a user interface providing a speech synthesis service according to an embodiment of the present disclosure.
- FIG. 11 is a diagram illustrating an exemplary screen of a user interface providing a speech synthesis service according to an embodiment of the present disclosure.
- FIG. 12 is a diagram illustrating an exemplary screen of a user interface providing a speech synthesis service according to an embodiment of the present disclosure.
- FIG. 13 is a diagram illustrating an exemplary screen of a user interface providing a speech synthesis service according to an embodiment of the present disclosure.
- The term 'module' refers to a software or hardware component, and a 'module' performs certain roles. However, a 'module' is not limited to software or hardware.
- A 'module' may be configured to reside in an addressable storage medium, or may be configured to execute on one or more processors.
- Thus, as an example, a 'module' includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, properties, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. Components and 'modules' may be combined into a smaller number of components and 'modules', or further separated into additional components and 'modules'.
- According to an embodiment, a 'module' may be implemented with a processor and a memory.
- The term 'processor' should be interpreted broadly to include general-purpose processors, central processing units (CPUs), microprocessors, digital signal processors (DSPs), controllers, microcontrollers, state machines, and the like.
- In some circumstances, a 'processor' may refer to an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable gate array (FPGA), and the like.
- 'Processor' may also refer to a combination of processing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- The term 'memory' should be interpreted broadly to include any electronic component capable of storing electronic information.
- 'Memory' may refer to various types of processor-readable media, such as random access memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable PROM (EEPROM), flash memory, magnetic or optical data storage, registers, and the like.
- A memory is said to be in electronic communication with a processor if the processor can read information from and/or write information to the memory.
- the memory integrated in the processor is in electronic communication with the processor.
- In the present disclosure, a 'voice style feature' may include a component or identifying element of a voice.
- For example, the voice style features may include a manner of speaking (e.g., tone, timbre, etc.), speech speed, accent, intonation, pitch, loudness, frequency, and the like.
- In the present disclosure, a 'cast' may include a speaker or character who utters the text.
- In addition, a 'cast' may include a predetermined voice style feature corresponding to each role. Although 'cast' and 'voice style feature' are used separately herein, the 'cast' may be included in the 'voice style feature'.
- In the present disclosure, 'setting information' may include visually recognizable information for distinguishing the voice style features set for one or more sentences through the user interface.
- For example, it may refer to information such as the font, font style, font color, font size, font effect, underline, and underline style applied to one or more sentences.
- As another example, setting information such as "#3", "slow", and "1.5s", indicating a voice style, sound effect, or silence, may be displayed through the user interface.
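As an illustration only, such markers could be interpreted as follows; the marker grammar ('#' plus a preset number, a duration ending in 's', a bare style word) is an assumption inferred from the examples above, not a format the patent defines.

```python
import re

def parse_setting(marker: str) -> dict:
    """Map a displayed setting marker to a style preset, silence, or style word."""
    if re.fullmatch(r"#\d+", marker):               # e.g. "#3" -> style/emotion preset
        return {"kind": "style_preset", "id": int(marker[1:])}
    if re.fullmatch(r"\d+(\.\d+)?s", marker):       # e.g. "1.5s" -> silence duration
        return {"kind": "silence", "seconds": float(marker[:-1])}
    return {"kind": "style_word", "value": marker}  # e.g. "slow"

print(parse_setting("#3"))    # {'kind': 'style_preset', 'id': 3}
print(parse_setting("1.5s"))  # {'kind': 'silence', 'seconds': 1.5}
print(parse_setting("slow"))  # {'kind': 'style_word', 'value': 'slow'}
```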
- In the present disclosure, a 'sentence' may mean a unit of text separated from continuous text based on punctuation marks such as periods, exclamation marks, question marks, and quotation marks. For example, the text 'Today is the day to meet customers and listen to and answer questions.' can be separated from the surrounding continuous text as one sentence based on the period.
- In addition, text may be divided into sentences by a user input for sentence separation. That is, one sentence formed by separating text based on punctuation marks may be divided into at least two sentences by such user input. For example, by pressing Enter after 'Eat' in the sentence 'Eat and go home.', the user can separate it into the two sentences 'Eat' and 'and go home.'
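The two-stage separation described above (punctuation first, then explicit user line breaks) can be sketched as follows; the regular expression is an assumption, not one given in the patent.

```python
import re

def split_sentences(text: str) -> list[str]:
    """Split on sentence-final punctuation, honoring user-inserted newlines
    (the Enter key) as additional split points."""
    sentences = []
    for chunk in text.split("\n"):               # user-driven separation first
        parts = re.split(r"(?<=[.!?])\s+", chunk.strip())
        sentences.extend(p for p in parts if p)  # keep non-empty pieces
    return sentences

print(split_sentences("Eat. Then go home."))  # ['Eat.', 'Then go home.']
print(split_sentences("Eat\nand go home."))   # ['Eat', 'and go home.']
```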
- In the present disclosure, a 'sentence set' may be composed of one or more sentences, and a group formed by grouping sentence sets may be composed of one or more sentence sets.
- Although 'sentence set' and 'sentence' are used separately, a 'sentence' may include a 'sentence set'.
- a user interface providing a speech synthesis service may be provided to a user terminal operable by a user.
- the user terminal may refer to any electronic device having one or more processors and memories.
- the user interface may be displayed on an output device (eg, a display) connected to or included in the user terminal.
- According to an embodiment, the user interface may be configured to receive text information (e.g., one or more sentences, one or more phrases, one or more words, one or more phonemes, etc.) through an input device (e.g., a keyboard) connected to or included in the user terminal, and to provide synthesized speech corresponding to the received text information.
- the input text information may be provided to a synthesized speech generating system configured to provide a synthesized speech corresponding to the text.
- According to an embodiment, the synthesized speech generation system may be configured to input one or more sentences and speech style features into an artificial neural network text-to-speech model to generate output speech data for the one or more sentences reflecting the speech style features.
- Such a synthesized speech generating system may be executed by an arbitrary computing device, such as a user terminal or a system accessible to the user terminal.
- one or more sentences may be received through a user interface.
- a plurality of sentences 110 on which speech synthesis is to be performed may be received and displayed through a display.
- an input for a plurality of sentences may be received through an input device (eg, a keyboard), and the input plurality of sentences 110 may be displayed.
- a document type file including a plurality of sentences may be uploaded through a user interface, and a plurality of sentences included in the document file may be output.
- the document type file may refer to an arbitrary document type file supported by the synthetic speech generation system, for example, a project file, a text file, etc. that can be edited through a user interface.
- a plurality of sentences received through the user interface may be divided into one or more sentence sets.
- a user may edit a plurality of sentences displayed through a user interface and divide them into one or more sentence sets.
- a plurality of sentences received through the user interface may be analyzed through natural language processing or the like, and may be separated into one or more sentence sets.
- One or more separated sentence sets may be displayed through the user interface. For example, as shown on the user interface screen 100, the sentence 'Today is the day to meet with customers and listen to and answer questions.' and the sentence 'Today the representative will describe an artificial intelligence voice actor service that gives emotion to text.' can be grouped into one sentence set (hereinafter, 'A set', 112_1).
- a cast corresponding to one or more separate sets of sentences may be determined.
- different roles may be determined for each of a plurality of different sentence sets, or, alternatively, the same roles may be determined.
- For example, 'Jinhyuk' 114_1 may be determined as the cast for the A set 112_1, 'Beomsu' 114_2 as a different cast for the B set 112_2, and the C set 112_3 may be assigned 'Jinhyuk' 114_1, the same cast as the A set 112_1.
- a predetermined speech style characteristic corresponding to the determined role may be set or determined in each sentence set. Voice style characteristics corresponding to these roles may also be changed according to user input.
- For example, 'Jinhyuk' 114_1, the role corresponding to the A set 112_1 and the C set 112_3, may be changed to another role (e.g., 'Changu') provided through the user interface.
- one or more roles may be displayed through the user interface.
- In response to a user input selecting one of the displayed roles, the role 'Jinhyuk' 114_1 may be changed to the selected one.
- That is, the previous role 'Jinhyuk' corresponding to the A set 112_1 and the C set 112_3 may be changed to the selected role.
- In this case, a predetermined voice style characteristic corresponding to the selected role may be set for the A set 112_1 and the C set 112_3.
- the separated one or more sentence sets may be analyzed using natural language processing or the like, and some sentence sets among the plurality of different sentence sets may be grouped.
- the same roles may be determined in a plurality of different sentence sets grouped into one group.
- the A set 112_1 and the C set 112_3 correspond to the sentence set of the same speaker and may be grouped into one group.
- one or more cast candidates may be recommended for the A set 112_1 and the C set 112_3.
- the same role may be selected or determined in the A set 112_1 and the C set 112_3.
- a role of'Jinhyuk' 114_1 may be determined in the A set 112_1 and the C set 112_3.
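The grouping behavior described above (the A set and the C set are attributed to the same speaker and therefore receive the same cast) can be sketched as below. `predict_speaker` stands in for the natural language processing step; the toy implementation in the usage example keys on a quotation cue and is purely illustrative.

```python
from collections import defaultdict
from typing import Callable

def assign_casts(sentence_sets: dict[str, list[str]],
                 predict_speaker: Callable[[list[str]], str],
                 cast_for_speaker: dict[str, str]) -> dict[str, str]:
    # group sentence sets attributed to the same speaker
    groups: dict[str, list[str]] = defaultdict(list)
    for set_id, sentences in sentence_sets.items():
        groups[predict_speaker(sentences)].append(set_id)
    # every sentence set in a group gets the same cast
    return {set_id: cast_for_speaker[speaker]
            for speaker, set_ids in groups.items()
            for set_id in set_ids}

toy = lambda sents: "guest" if sents[0].startswith('"') else "narrator"
sets = {"A": ["Today is the day to meet customers."],
        "B": ['"Hello, thank you for having me."'],
        "C": ["The representative will describe the service."]}
print(assign_casts(sets, toy, {"narrator": "Jinhyuk", "guest": "Beomsu"}))
# {'A': 'Jinhyuk', 'C': 'Jinhyuk', 'B': 'Beomsu'}
```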
- Speech style characteristics for the received one or more sentences may be determined. These voice style characteristics may be determined or changed based on the setting information for the sentences. In one embodiment, such setting information may be determined or changed according to user input. For example, the user may input or change setting information through the plurality of icons 136 located at the lower left of the user interface screen 100. According to another embodiment, the synthesized speech generation system may automatically determine setting information for one or more sentences by analyzing them. For example, as shown on the user interface screen 100, '#3' 116 may be determined and displayed as the setting information for the sentence 'I am the representative.', and the voice style feature for that sentence may be determined as the emotion style feature corresponding to '#3' 116.
- As another example, 'slowly' may be determined and displayed as the setting information for the sentence 'I'm glad to meet you.', and the voice style feature for that sentence may be determined as a slow speech speed style feature.
- the synthesized speech for one or more sentences in which the speech style characteristics determined as described above are reflected may be output through a user interface.
- That is, the synthesized speech generation system may input the one or more sentences and the speech style characteristics into the artificial neural network text-to-speech model to generate output speech data reflecting the speech style characteristics, and provide it through the user interface.
- the synthesized speech may be generated based on the output speech data.
- audio content including the generated synthetic voice may be generated and provided through a user interface.
- the audio content may include any sound and/or silence in addition to the generated synthetic voice.
- According to an embodiment, streaming of the audio content may be output through a speaker connected to or included in the user terminal.
- For example, a streaming bar 134 disposed on the right side of the bar 122 displayed at the bottom of the user interface screen 100 may indicate the position of the voice currently being output within the entire synthesized voice.
- the audio content may be downloaded to the user terminal.
- According to an embodiment, a user may create a new speech synthesis job file by clicking the 'new file' icon 132 disposed at the upper left of the user interface screen 100.
- For example, a 'test file', a file created through the 'new file' icon 132, may be displayed in the bar 122 at the bottom of the user interface screen 100.
- In addition, the user can edit text and/or generate a synthesized voice for the synthesized voice service, and click the 'save' icon 130 to save the file being worked on.
- the user can share the synthesized voice corresponding to the input text with other users.
- the user interface for generating a synthesized speech for text of the present disclosure may be provided to a user in various ways executed by a user terminal, and may be provided to a user through, for example, a web browser or an application.
- the bar and/or icon are illustrated to be disposed at a specific location, but the present invention is not limited thereto, and may be disposed at any location on the user interface screen 100.
- FIG. 2 is a schematic diagram showing a configuration 200 in which a plurality of user terminals 210_1, 210_2, 210_3 and a synthesized voice generation system 230 are communicatively connected to provide a service for generating synthesized voice for text according to an embodiment of the present disclosure.
- the plurality of user terminals 210_1, 210_2, and 210_3 may communicate with the synthesized speech generating system 230 through the network 220.
- the network 220 may be configured to enable communication between the plurality of user terminals 210_1, 210_2, and 210_3 and the synthesized speech generating system 230.
- The network 220 may consist of a wired network such as Ethernet, power line communication, telephone line communication, or RS-serial communication; a mobile communication network; a wireless network such as WLAN (Wireless LAN), Wi-Fi, Bluetooth, or ZigBee; or a combination thereof.
- The communication method is not limited, and may include not only communication using a communication network that the network 220 may include (e.g., a mobile communication network, wired Internet, wireless Internet, a broadcasting network, a satellite network, etc.), but also short-range wireless communication between the user terminals 210_1, 210_2, 210_3.
- For example, the network 220 may include a personal area network (PAN), a local area network (LAN), a campus area network (CAN), a metropolitan area network (MAN), a wide area network (WAN), a broadband network (BBN), the Internet, and the like.
- In addition, the network 220 may include any one or more of network topologies including a bus network, a star network, a ring network, a mesh network, a star-bus network, a tree or hierarchical network, and the like, but is not limited thereto.
- a mobile phone or a smart phone 210_1, a tablet computer 210_2, and a laptop or desktop computer 210_3 are illustrated as examples of user terminals that execute or operate a user interface providing a speech synthesis service, but are not limited thereto.
- The user terminals 210_1, 210_2, 210_3 may be any computing devices that are capable of wired and/or wireless communication and that can run the user interface providing the speech synthesis service by installing a web browser, a mobile browser application, or a speech synthesis application.
- For example, the user terminal 210 may include a smartphone, a mobile phone, a navigation terminal, a desktop computer, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a tablet computer, a game console, a wearable device, an internet of things (IoT) device, a virtual reality (VR) device, an augmented reality (AR) device, and the like.
- In FIG. 2, three user terminals 210_1, 210_2, 210_3 are shown communicating with the synthesized speech generation system 230 through the network 220, but the present disclosure is not limited thereto, and a different number of user terminals may be configured to communicate with the synthesized speech generation system 230 through the network 220.
- According to an embodiment, the user terminals 210_1, 210_2, 210_3 may receive one or more sentences through the user interface providing the speech synthesis service. For example, one or more sentences may be received through an input device (e.g., a keyboard) connected to or included in the user terminals 210_1, 210_2, 210_3.
- one or more sentences included in a document type file uploaded through the user interface may be received.
- One or more sentences thus received may be provided to the synthesized speech generating system 230.
- the user terminals 210_1, 210_2, and 210_3 may determine or change setting information for at least some of one or more sentences.
- For example, the user terminal may select at least a part of the sentences output through the user interface and designate a predetermined value and/or term representing a specific voice style for the selected sentences, thereby determining or changing their setting information. Such determination or change of setting information may be performed in response to a user input.
- According to an embodiment, the user terminals 210_1, 210_2, 210_3 may change setting information for visually displaying at least some of the output sentences (e.g., font, font style, font color, font size, font effect, underline or underline style, etc.).
- For example, the user terminals 210_1, 210_2, 210_3 may change the setting information for at least a portion of the output sentences by changing their font size from 10 to 12.
- As another example, the user terminals 210_1, 210_2, 210_3 may change the setting information for at least a portion of the output sentences by changing their font color from black to red.
- the user terminal may determine or change a speech style for a corresponding sentence in response to setting information determined or changed for one or more sentences.
- the modified speech style may be provided to the synthesized speech generation system 230.
- Alternatively, the user terminal may provide the determined or changed setting information for the sentences to the synthesized speech generation system 230, and the synthesized speech generation system 230 may determine or change the speech style corresponding to that setting information.
- the user terminals 210_1, 210_2, and 210_3 may add a visual display indicating characteristics of an effect to be inserted between a plurality of sentences in response to a user input.
- For example, the user terminals 210_1, 210_2, 210_3 may receive an input adding '#2', a visual indication of a predetermined sound effect to be inserted between two of the sentences output through the user interface.
- As another example, the user terminals 210_1, 210_2, 210_3 may receive an input adding '1.5s', a visual indication of the duration of silence to be inserted between two of the sentences output through the user interface.
- The added visual indication may be provided to the synthesized speech generation system 230, and a sound effect (including silence) corresponding to the added visual indication may be included or reflected in the generated synthesized voice.
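One plausible way to reflect such indications in the final audio is to splice silence or effect clips between the per-sentence synthesized clips, as in the sketch below. The raw 16-bit mono PCM handling and the sample rate are assumptions for illustration.

```python
SAMPLE_RATE = 22050  # assumed output rate of the TTS model

def silence(seconds: float) -> bytes:
    """Generate 16-bit mono PCM silence, e.g. for a '1.5s' indication."""
    return b"\x00\x00" * int(SAMPLE_RATE * seconds)

def assemble_audio(clips: list[bytes], effects: dict[int, bytes]) -> bytes:
    """effects[i] is audio inserted after clip i: a sound effect clip for a
    marker like '#2', or silence(1.5) for a marker like '1.5s'."""
    out = bytearray()
    for i, clip in enumerate(clips):
        out += clip
        out += effects.get(i, b"")
    return bytes(out)

# e.g. 1.5 seconds of silence between the first and second sentence clips
combined = assemble_audio([b"...clip0...", b"...clip1..."], {0: silence(1.5)})
```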
- the user terminals 210_1, 210_2, and 210_3 may determine a role corresponding to one or more sentences, one or more sentence sets, and/or a grouped sentence set output through the user interface.
- For example, the synthesized speech generation system 230 may receive, from the user terminals 210_1, 210_2, 210_3, an input determining the cast corresponding to one sentence set, and determine that cast for the sentence set. The user terminals 210_1, 210_2, 210_3 may then set a voice style corresponding to the determined cast (e.g., a predetermined voice style corresponding to the determined cast) and provide the set voice style to the synthesized voice generation system 230.
- Alternatively, the user terminals 210_1, 210_2, 210_3 may provide the cast determined according to a user input to the synthesized speech generation system 230, and the synthesized speech generation system 230 may set a predetermined voice style corresponding to the determined cast.
- the synthesized speech generation system 230 may analyze the received one or more sentences or sentence sets and recommend a cast candidate and/or a speech style feature candidate to the corresponding character or sentence set based on the analyzed result.
- any processing method capable of recognizing and processing the input language may be used.
- a natural language processing method may be used.
- the recommended cast candidate or voice style feature candidate may be transmitted to the user terminals 210_1, 210_2, and 210_3, and may be output in association with a corresponding sentence through a user interface.
- The user terminals 210_1, 210_2, 210_3 may receive a user input selecting at least some of the output cast candidates and/or at least some of the output voice style feature candidates, and based on such input, set the selected cast candidate and/or style candidate for the corresponding sentence.
- the synthesized voice generating system 230 may transmit output voice data reflecting the determined or changed voice style characteristics and/or the synthesized voice generated based on the output voice data to the user terminals 210_1, 210_2, and 210_3.
- According to an embodiment, the synthesized voice generation system 230 may receive a request for audio content including the synthesized voice from the user terminals 210_1, 210_2, 210_3, and transmit the audio content to the user terminals 210_1, 210_2, 210_3 according to the received request.
- For example, the synthesized voice generation system 230 may receive a streaming request for audio content including the synthesized voice from the user terminals 210_1, 210_2, 210_3, and the user terminal that made the streaming request may receive the audio content from the synthesized speech generation system 230.
- As another example, the synthesized voice generation system 230 may receive a download request for audio content including the synthesized voice from the user terminals 210_1, 210_2, 210_3, and the user terminal that made the download request may receive the audio content from the synthesized speech generation system 230.
- As yet another example, the synthesized voice generation system 230 may receive a request for sharing audio content including the synthesized voice from the user terminals 210_1, 210_2, 210_3, and the audio content may be transmitted to a user terminal designated by the user terminal that made the sharing request.
- In FIG. 2, the user terminals 210_1, 210_2, 210_3 and the synthesized voice generation system 230 are illustrated as separate elements, but the present disclosure is not limited thereto, and the synthesized voice generation system 230 may be configured to be included in each of the user terminals 210_1, 210_2, 210_3.
- The user terminal 210 may refer to any computing device capable of wired/wireless communication, and may include, for example, the mobile phone terminal 210_1, the tablet terminal 210_2, and the PC terminal 210_3 of FIG. 2. As shown, the user terminal 210 may include a memory 312, a processor 314, a communication module 316, and an input/output interface 318. Similarly, the synthesized speech generation system 230 may include a memory 332, a processor 334, a communication module 336, and an input/output interface 338.
- As shown in FIG. 3, the user terminal 210 and the synthesized speech generation system 230 may be configured to communicate information and/or data through the network 220 using their respective communication modules 316 and 336.
- the input/output device 320 may be configured to input information and/or data to the user terminal 210 through the input/output interface 318 or to output information and/or data generated from the user terminal 210.
- The memories 312 and 332 may include any non-transitory computer-readable recording medium. According to an embodiment, the memories 312 and 332 may include permanent mass storage devices such as random access memory (RAM), read-only memory (ROM), a disk drive, a solid state drive (SSD), and flash memory. As another example, a permanent mass storage device such as a ROM, an SSD, flash memory, or a disk drive may be included in the user terminal 210 or the synthesized voice generation system 230 as a separate permanent storage device distinct from the memory.
- The memories 312 and 332 may store an operating system and at least one program code (e.g., code for providing the synthesized voice service through the user interface, code for the artificial neural network text-to-speech model, etc.).
- These software components may be loaded from a computer-readable recording medium separate from the memories 312 and 332.
- Such a separate computer-readable recording medium may include a recording medium that can be directly connected to the user terminal 210 and the synthesized voice generation system 230, for example, a floppy drive, a disk, a tape, a DVD/CD-ROM drive, or a memory card.
- software components may be loaded into the memories 312 and 332 through a communication module other than a computer-readable recording medium.
- At least one program may be loaded into the memories 312 and 332 based on a computer program (e.g., an artificial neural network text-to-speech model program) installed from files provided through the network 220 by a file distribution system that distributes installation files of developers or applications.
- the processors 314 and 334 may be configured to process instructions of a computer program by performing basic arithmetic, logic, and input/output operations. Instructions may be provided to the processors 314 and 334 by the memories 312 and 332 or the communication modules 316 and 336. For example, the processors 314 and 334 may be configured to execute commands received according to program code stored in a recording device such as the memories 312 and 332.
- The communication modules 316 and 336 may provide a configuration or function for the user terminal 210 and the synthesized speech generation system 230 to communicate with each other through the network 220, and may provide a configuration or function for the user terminal 210 and/or the synthesized speech generation system 230 to communicate with another user terminal or another system (e.g., a separate cloud system, a separate audio content sharing support system, etc.).
- For example, a request generated by the processor 314 of the user terminal 210 (e.g., an audio content download request or an audio content streaming request) may be transmitted to the synthesized speech generation system 230 through the network 220 under the control of the communication module 316.
- Conversely, a control signal or command provided under the control of the processor 334 of the synthesized speech generation system 230 may be received by the user terminal 210 through the communication module 336, the network 220, and the communication module 316 of the user terminal 210.
- the input/output interface 318 may be a means for an interface with the input/output device 320.
- the input device may include a device such as a keyboard, a microphone, a mouse, and a camera including an image sensor
- the output device may include a device such as a display, a speaker, and a haptic feedback device.
- the input/output interface 318 may be a means for an interface with a device in which a component or function for performing input and output is integrated into one, such as a touch screen.
- While processing commands of the computer program loaded in the memory 312, the processor 314 of the user terminal 210 may display, through the input/output interface 318, a service screen or content configured using information and/or data provided by the synthesized speech generation system 230 or another user terminal 210.
- In FIG. 3, the input/output device 320 is illustrated as not being included in the user terminal 210, but the present disclosure is not limited thereto, and the input/output device 320 may be configured as one device with the user terminal 210.
- In addition, the input/output interface 338 of the synthesized speech generation system 230 may be a means for interfacing with a device (not shown) for input or output that is connected to, or may be included in, the synthesized speech generation system 230.
- In FIG. 3, the input/output interfaces 318 and 338 are illustrated as elements configured separately from the processors 314 and 334, but the present disclosure is not limited thereto, and the input/output interfaces 318 and 338 may be configured to be included in the processors 314 and 334.
- The user terminal 210 and the synthesized speech generation system 230 may include more components than those shown in FIG. 3; however, most prior art components need not be clearly illustrated. According to an embodiment, the user terminal 210 may be implemented to include at least some of the input/output devices 320 described above. In addition, the user terminal 210 may further include other components such as a transceiver, a global positioning system (GPS) module, a camera, various sensors, and a database. For example, when the user terminal 210 is a smartphone, it may include the components generally included in a smartphone; for example, various components such as an acceleration sensor, a gyro sensor, a camera module, various physical buttons, buttons using a touch panel, input/output ports, and a vibrator for vibration may be implemented to be further included in the user terminal 210.
- According to an embodiment, the processor 314 may receive text, images, etc. input or selected through an input device 320 such as a touch screen or a keyboard connected to the input/output interface 318, and store the received text and/or images in the memory 312 or provide them to the synthesized speech generation system 230 through the communication module 316 and the network 220.
- For example, the processor 314 may receive, through an input device such as a touch screen or a keyboard, text information constituting one or more sentences, a request to change a voice style characteristic, a request to stream audio content, a request to download audio content, and the like. The received request and/or the result of processing the request may then be provided to the synthesized speech generation system 230 through the communication module 316 and the network 220.
- The processor 314 may receive input of text information (e.g., one or more paragraphs, sentences, phrases, words, phonemes, etc.) through the input device 320. According to an embodiment, the processor 314 may receive text input constituting one or more sentences through the input device 320 and the input/output interface 318. According to another embodiment, the processor 314 may receive, through the input device 320 and the input/output interface 318, an input for uploading a document type file including one or more sentences through the user interface. In response to this input, the processor 314 may retrieve the file in the corresponding document format from the memory 312 and receive the one or more sentences included in the file.
- the received one or more sentences may be provided to the synthesized speech generating system 230 through the communication module 316.
- Alternatively, the processor 314 may be configured to provide the uploaded file to the synthesized speech generation system 230 through the communication module 316 and to receive the one or more sentences included in the file from the synthesized speech generation system 230.
- the processor 314 may receive an input for the voice style feature of one or more sentences through the input device 320 and determine the voice style feature of the one or more sentences.
- The received input for the voice style feature and/or the determined voice style feature may be provided to the synthesized voice generation system 230 through the communication module 316.
- the input of the voice style feature of one or more sentences may include an arbitrary operation of selecting a portion to which the voice style feature is to be changed.
- Here, the portion whose voice style characteristic is to be changed may include one or more sentences, at least a part of one or more sentences, a portion between a plurality of sentences, a set of one or more sentences, a grouped set of sentences, and the like.
- the processor 314 may receive an input for determining or changing setting information for at least some of one or more sentences through the input device 320.
- the processor 314 may receive an input for changing setting information on a speech style or speech speed.
- For example, the processor 314 may receive an input changing setting information for visual display, such as the font, font style, font color, font size, font effect, underline, or underline style, of some of the one or more sentences.
- the processor 314 may receive an input for selecting at least some of the one or more speech style feature candidates received from the synthesized speech generation system 230.
- As another example, the processor 314 may receive, through an interface for changing the speech style characteristic of at least a part of one or more sentences, an input changing a value representing the speech style characteristic. Based on the input received in this way, the processor 314 may determine or change the setting information for at least some of the sentences. Alternatively, the processor 314 may provide the received input to the synthesized speech generation system 230 through the communication module 316 and receive from it the speech style characteristic determined or changed according to the setting information.
- the processor 314 may receive an input through the input device 320 for adding a visual indication indicating characteristics of an effect to be inserted between a plurality of sentences.
- the processor 314 may receive an input for adding a visual indication indicating sound effects to be inserted between a plurality of sentences.
- the processor 314 may receive an input for adding a visual indication indicating a silence time to be inserted between a plurality of sentences.
- For example, the processor 314 may provide an input adding a visual indication of a sound effect to the synthesized speech generation system 230 through the communication module 316, and receive from it a synthesized speech including or reflecting the sound effect.
- The processor 314 may receive an input for the cast corresponding to one or more sentences or sentence sets through the input device 320, and determine the cast for the sentences or sentence sets based on the received input. For example, the processor 314 may receive an input selecting at least a part of a list including one or more casts. As another example, the processor 314 may receive an input selecting at least some of the cast candidates received from the synthesized speech generation system 230. The processor 314 may then be configured to set, for the sentence or sentence set, a predetermined speech style characteristic corresponding to the determined cast. The voice style feature set in this way may be provided to the synthesized voice generation system 230 through the communication module 316.
- Alternatively, the processor 314 may provide the cast determined for the sentence or sentence set to the synthesized speech generation system 230 through the communication module 316, receive from it the predetermined voice style feature corresponding to the determined cast, and determine the voice style feature for the corresponding sentence or sentence set.
- According to an embodiment, the processor 314 may receive an input indicating a request for audio content through the input device 320 and the input/output interface 318, and provide a request corresponding to the received input to the synthesized speech generation system 230 through the communication module 316. According to an embodiment, the processor 314 may receive an input for an audio content download request through the input device 320. In another embodiment, the processor 314 may receive an input for an audio content streaming request through the input device 320. In still another embodiment, the processor 314 may receive an input for an audio content sharing request through the input device 320. In response to such an input, the processor 314 may receive the audio content including the synthesized voice from the synthesized voice generation system 230 through the communication module 316.
- The processor 314 may be configured to output processed information and/or data through an output device of the user terminal 210, such as a device capable of display output (e.g., a touch screen or a display) or a device capable of audio output (e.g., a speaker). According to an embodiment, the processor 314 may display one or more sentences through the display-capable device. For example, the processor 314 may output one or more sentences received from the input device 320 on the screen of the user terminal 210. As another example, the processor 314 may output, on the screen of the user terminal 210, one or more sentences included in a document format file retrieved from the memory 312. In this case, the processor 314 may output visual indications or setting information together with the received sentences, or output the sentences with the setting information reflected.
- According to an embodiment, the processor 314 may output, on the screen of the user terminal 210, an interface for determining or changing the voice style characteristic of at least a part of one or more sentences. For example, the processor 314 may output, on the screen of the user terminal 210, an interface for setting or changing a speech style characteristic including the speaking style, speech speed, sound effects, silence duration, and the like, for at least a portion of the sentences. As another example, the processor 314 may output, on the screen of the user terminal 210, the recommended cast candidates or recommended voice style feature candidates received from the synthesized speech generation system 230.
- the processor 314 may output the synthesized speech or audio content including the synthesized speech through an audio output capable device.
- the processor 314 may output the synthesized voice received from the synthesized voice generating system 230 or audio content including the synthesized voice through a speaker.
- The processor 334 of the synthesized speech generation system 230 may be configured to manage, process, and/or store information and/or data received from a plurality of user terminals including the user terminal 210 and/or a plurality of external systems. The information and/or data processed by the processor 334 may be provided to the user terminal 210 through the communication module 336; for example, it may be provided in real time or later in the form of a history. For example, the processor 334 may receive one or more sentences from the user terminal 210 through the communication module 336.
- the processor 334 may receive an input for the voice style feature of one or more sentences from the user terminal 210 through the communication module 336, and determine the voice style feature corresponding to the received input for the received one or more sentences. According to an embodiment, the processor 334 may determine a voice style characteristic corresponding to an input for changing setting information for at least a part of the one or more sentences received from the user terminal 210. For example, the processor 334 may determine a speech style or speech speed according to an input for changing the received setting information. As another example, the processor 334 may determine a speech style characteristic according to an input for changing setting information for visual display, such as a font, font style, font color, font size, font effect, underline, or underline style.
- the processor 334 may determine a voice style feature corresponding to an input for selecting at least some of one or more voice style feature candidates received from the user terminal 210.
- the processor 334 may determine a voice style feature corresponding to an input for changing a value representing the voice style feature received from the user terminal 210.
- the value representing the voice style characteristic may include the pitch, speed, and loudness of the sound corresponding to units such as phonemes, letters, and words; a minimal data structure for such per-unit values is sketched below.
- the processor 334 may provide the determined voice style characteristic to the processor 314 of the user terminal 210 through the communication module 336, and the processor 314 may determine the voice style characteristic for the corresponding sentences based on the received characteristic.
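- The per-unit style values described above can be captured in a small data structure. The sketch below is a minimal illustration only; the `StyleValue` and `VoiceStyleFeature` names and their fields are hypothetical, since the disclosure does not prescribe a concrete schema.

```python
from dataclasses import dataclass, field

@dataclass
class StyleValue:
    """Per-unit prosody values for one unit (phoneme, letter, or word)."""
    unit: str              # the text unit this value applies to
    pitch: float = 0.0     # relative pitch offset
    speed: float = 1.0     # speed multiplier
    loudness: float = 1.0  # loudness multiplier

@dataclass
class VoiceStyleFeature:
    """A voice style characteristic expressed as a sequence of per-unit values."""
    values: list[StyleValue] = field(default_factory=list)

    def update_unit(self, index: int, **changes: float) -> None:
        """Apply a user edit (e.g., a changed slider value) to one unit."""
        for name, value in changes.items():
            setattr(self.values[index], name, value)

# Example: raise the pitch of the second phoneme of an utterance.
style = VoiceStyleFeature([StyleValue("h"), StyleValue("e"), StyleValue("l")])
style.update_unit(1, pitch=2.0)
```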
- the processor 334 may determine a speech style characteristic corresponding to an input for adding a visual indication indicating a characteristic of an effect to be inserted between a plurality of sentences received from the user terminal 210.
- the visual indication indicating the characteristics of the effect to be inserted may include a visual indication indicating a sound effect to be inserted or a visual indication indicating a time of silence to be inserted.
- the processor 334 may provide the determined voice style characteristic to the processor 314 of the user terminal 210 through the communication module 336, and the processor 314 may determine the speech style characteristic for the corresponding part based on the received characteristic.
- the processor 334 may divide a plurality of sentences received from the processor 314 into one or more sentence sets, and determine a cast or speech style characteristic corresponding to the separated one or more sentence sets.
- the processor may set a predetermined voice style characteristic corresponding to the determined role.
- the processor 334 may analyze the one or more separated sentence sets using natural language processing, and recommend one or more cast candidates or voice style feature candidates based on the analysis result. For example, the processor 334 may transmit the recommended one or more cast candidates or voice style feature candidates to the processor 314 of the user terminal 210, and the processor 314 may receive a selection of at least some of the recommended candidates and thereby determine a cast or speech style feature corresponding to the corresponding sentence set.
- the processor 334 may analyze the one or more separated sentence sets using natural language processing, automatically determine one or more cast or speech style features corresponding to the one or more sentence sets based on the analysis result, and provide them to the processor 314 of the user terminal 210. In response, the processor 314 may determine or set one or more cast or speech style features corresponding to the one or more sentence sets.
- the processor 334 may analyze and group the one or more separated sentence sets using natural language processing, and recommend one or more cast candidates corresponding to each of the grouped sentence sets based on the analysis result.
- the processor 334 may transmit the one or more recommended cast candidates to the processor 314 of the user terminal 210, and the processor 314 may receive a selection of at least some of the recommended cast candidates and determine a cast corresponding to the grouped sentence set. A rough sketch of this recommendation flow follows.
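- As an illustration only, the sketch below splits a script into sentence sets and ranks cast candidates with a placeholder scoring function. The candidate pool and the `analyze_speaker` heuristic are hypothetical stand-ins for the natural language analysis, which the disclosure leaves unspecified.

```python
import re

CAST_CANDIDATES = ["Beomsu", "Jinhyuk", "Sunyoung"]  # hypothetical cast pool

def split_into_sentence_sets(text: str) -> list[str]:
    """Split on sentence-ending punctuation; newlines (enter input) also separate sets."""
    parts = re.split(r"(?<=[.?!])\s+|\n+", text.strip())
    return [p for p in parts if p]

def analyze_speaker(sentence_set: str) -> dict[str, float]:
    """Placeholder NLP analysis: score each cast candidate for a sentence set."""
    return {
        cast: (1.0 if ("?" in sentence_set) == (cast == "Sunyoung") else 0.5)
        for cast in CAST_CANDIDATES  # toy heuristic: questions suit one voice better
    }

def recommend_casts(text: str, top_k: int = 2) -> dict[str, list[str]]:
    """Return, for each sentence set, cast candidates in recommendation order."""
    recommendations = {}
    for sentence_set in split_into_sentence_sets(text):
        scores = analyze_speaker(sentence_set)
        recommendations[sentence_set] = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return recommendations
```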
- the processor 334 may input one or more sentences and the determined or changed speech style characteristics into the artificial neural network text-synthetic speech model to generate output speech data for the one or more sentences reflecting the determined or changed speech style characteristics.
- the artificial neural network text-synthetic speech model may be trained using a plurality of reference sentences and a plurality of reference speech style characteristics, so that speech data corresponding to the input text and the input speech style characteristic is output or a synthesized speech is generated.
- the processor 334 may generate a synthesized voice based on the generated output voice data, and may generate audio content including the synthesized voice.
- the processor 334 may be configured to input the generated output voice data to a post-processing processor and/or a vocoder to output a synthesized voice.
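- Put together, the generation path amounts to a two-stage call: the neural model maps text plus a style feature to speech data (e.g., a mel spectrogram), and a vocoder maps that to a waveform. The sketch below assumes hypothetical `tts_model` and `vocoder` objects exposing an `infer` method; the disclosure does not fix these interfaces.

```python
import numpy as np

def synthesize(tts_model, vocoder, sentences, style_feature):
    """Generate a waveform from text and a speech style feature.

    tts_model.infer(...) is assumed to return speech data such as a mel
    spectrogram, and vocoder.infer(...) to turn it into audio samples.
    """
    mel = tts_model.infer(sentences, style_feature)  # (frames, n_mels)
    waveform = vocoder.infer(mel)                    # 1-D array of samples
    return np.asarray(waveform, dtype=np.float32)
```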
- the processor 334 may store the generated audio content in the memory 332 of the synthesized speech generation system 230.
- the processor 334 may transmit the generated synthetic voice or audio content to a plurality of user terminals 210 or other systems through the communication module 336.
- the processor 334 may transmit the generated audio content through the communication module 336 to the user terminal 210 that made the streaming request, and the generated audio content may be streamed on the user terminal 210.
- the processor 334 may transmit the generated audio content through the communication module 336 to the user terminal 210 that requested the download, and the generated audio content may be saved in the memory 312 of the user terminal 210.
- the processor 334 may mix the generated audio content with video content.
- the video content may be received from the plurality of user terminals 210, other systems, or the memory 332 of the synthesized voice generating system 230.
- the processor 334 may inspect the output voice data for the one or more sentences or the generated synthesized voice. According to an embodiment, the processor 334 may be configured to operate a voice recognizer to determine whether the output voice data or the synthesized voice has been properly generated. For example, such a speech recognizer may be configured not only to check the text information recognized from the synthesized speech, but also to check whether the emotion and prosody of the synthesized speech are appropriate. Based on the checked result, the processor 334 may determine the appropriateness of the speech style feature and/or the cast set for the one or more sentences. In addition, the processor 334 may recommend new cast candidates or voice style feature candidates for the one or more sentences and provide them to the user terminal 210, and the processor 314 of the user terminal 210 may select one of the recommended cast candidates or voice style feature candidates to determine a cast or voice style feature for the corresponding sentence. A sketch of such an automated check follows.
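- One way to realize such a check, sketched under the assumption of an ASR model with a `transcribe` method (the disclosure does not name a recognizer), is to run the synthesized audio back through speech recognition and compare the transcript with the input text:

```python
import difflib

def verify_synthesis(asr_model, waveform, expected_text, threshold=0.9):
    """Check that a synthesized utterance matches the text it was generated from.

    asr_model is an assumed interface with transcribe(waveform) -> str.
    Returns (ok, similarity); when ok is False, the caller can recommend
    new cast or voice style feature candidates for the sentence.
    """
    transcript = asr_model.transcribe(waveform)
    similarity = difflib.SequenceMatcher(
        None, transcript.lower(), expected_text.lower()
    ).ratio()
    return similarity >= threshold, similarity
```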
- the processor 314 may include a sentence editing module 410, a role determination module 420, a style determination module 430, and a voice output module 440.
- the sentence editing module 410 may divide a plurality of sentences into one or more sentence sets. According to an embodiment, the sentence editing module 410 may divide a plurality of sentences into one or more sentence sets by receiving an input for sentence separation (eg, an enter input after text input) through a user interface.
- the cast determination module 420 may determine a cast corresponding to one or more divided sentence sets. According to an embodiment, the cast determination module 420 may determine or change a cast corresponding to one or more sentence sets based on an input for selecting a cast corresponding to one or more sentence sets received through the user interface. In this case, a predetermined speech style characteristic corresponding to the determined or changed role may be determined in one or more sentence sets.
- the style determination module 430 may determine speech style characteristics corresponding to the one or more received sentences. According to an embodiment, the style determination module 430 may determine or change a speech style characteristic corresponding to one or more sentence sets based on an input, received through the user interface, for selecting a speech style characteristic corresponding to the one or more sentences.
- the role determination module 420 and the style determination module 430 are shown to be included in the processor 314, but the present disclosure is not limited thereto, and they may be configured to be included in the processor 334 of the synthesized speech generation system 230.
- the role determination module 420 and the style determination module 430 are illustrated as separate modules, but are not limited thereto.
- the role determination module 420 may be implemented to be included in the style determination module 430.
- the speech style characteristics determined through the cast determination module 420 and the style determination module 430 may be provided to the synthesized speech generation system together with the corresponding one or more sentences.
- the synthesized speech generating system may output speech data from the artificial neural network text-synthetic speech model by inputting the received one or more sentences and the speech style features corresponding thereto into the artificial neural network text-synthetic speech model. Then, a synthesized speech may be generated based on the output speech data. The generated synthesized voice may be output through the voice output module 440.
- the user may listen to the output synthesized voice in advance, and edit or change the corresponding sentence, the role of the sentence, and/or the speech style characteristics of the sentence.
- the sentence editing module 410 may receive an input indicating editing of an inappropriate sentence among the output synthesized speech.
- the cast determination module 420 may change the set cast by selecting at least a part of one or more sentence sets for which cast selection is not suitable among the output synthesized voices.
- the style determination module 430 may change the set voice style feature by selecting one or more sentences of which voice style features are not suitable among the output voices.
- the processor 334 may include a speech synthesis module 510, a script analysis module 520, a role recommendation module 530, a style recommendation module 540, and an image synthesis module 550.
- Each of the modules operated by the processor 334 may be configured to communicate with each of the modules operated by the processor 314 of FIG. 4.
- the speech synthesis module 510 may input one or more sentences and the determined or changed speech style characteristics into an artificial neural network text synthesis speech model to generate output speech data reflecting the determined or changed speech style characteristics.
- the speech synthesis module 510 may generate a synthesized speech based on the generated output speech data.
- the generated synthesized voice may be provided to the user terminal and output to the user.
- the script analysis module 520 may receive one or more sentences and analyze one or more sentences using natural language processing or the like. According to an embodiment, the script analysis module 520 may divide a plurality of sentences received based on the analysis result into one or more sentence sets. In addition, the script analysis module 520 may analyze one or more divided sentence sets, and group one or more divided sentence sets based on the analysis result. The divided sentence set and/or the grouped sentence set may be provided to the user terminal and output through the user interface.
- the role recommendation module 530 may recommend a role candidate corresponding to each of one or more sentence sets or grouped sentence sets based on the analysis result of the script analysis module 520.
- the role recommendation module 530 may output role candidates corresponding to each of one or more sentence sets or grouped sentence sets through a user interface, and receive a user's response thereto.
- the role recommendation module 530 may determine a role corresponding to each of one or more divided sentence sets or grouped sentence sets according to the user's response to the role candidate received through the user interface.
- the role recommendation module 530 may automatically select a role corresponding to each of one or more sentence sets or grouped sentence sets based on the analysis result of the script analysis module 520. The automatically selected role may be output to the user through the user interface.
- the style recommendation module 540 may recommend a speech style feature candidate for one or more sentences or one or more sentence sets based on the analysis result of the script analysis module 520.
- the style recommendation module 540 may output the recommended voice style feature candidates through a user interface and receive a user's response thereto.
- the style recommendation module 540 may determine a speech style characteristic corresponding to each of one or more divided sentence sets or grouped sentence sets according to the user's response to the voice style feature candidate received through the user interface.
- the style recommendation module 540 may automatically determine a speech style characteristic corresponding to the received one or more sentences, the one or more sentence sets, or the grouped sentence set based on the analysis result of the script analysis module 520.
- the image synthesis module 550 may mix or dub the synthesized speech generated by the speech synthesis module 510, and/or audio content including the synthesized speech, into the video content.
- the video content may be received from the user terminal 210, another system, or the memory 332 of the synthesized voice generating system 230.
- the audio content is content related to the received video content, and may be generated according to a playback speed of the video content. For example, audio content may be mixed or dubbed according to a timing at which a person in video content speaks.
- the synthesized speech generation method 600 may be performed by a user terminal and/or a synthesized speech generation system. As shown, the synthesized speech generating method 600 may be initiated by receiving one or more sentences (S610).
- a speech style characteristic for one or more received sentences may be determined.
- the synthesized speech generation system may recommend or determine a speech style characteristic for the one or more sentences and provide it to the user terminal, and the user terminal may determine or set the speech style for the corresponding sentence based on the received speech style characteristic.
- a synthesized voice for one or more sentences reflecting the voice style characteristic may be output.
- the synthesized speech may be generated based on speech data output from the artificial neural network text-synthetic speech model by inputting the one or more sentences and the speech style characteristics into an artificial neural network text-synthetic speech model.
- the synthesized voice may be output through a speaker included in or connected to the user terminal.
- the synthesized speech generation method 700 for changing the setting information may be performed by the user terminal and/or the synthesized speech generation system. As shown, the method 700 for generating a synthesized speech for changing the setting information may be initiated with an operation S710 of receiving one or more sentences through a user interface.
- In step S720, the received one or more sentences may be output through the user interface.
- In step S730, setting information for at least some of the output one or more sentences may be changed.
- setting information for visually displaying at least a portion of one or more sentences may be changed. For example, by changing the font, font style, font color, font size, font effect, underline, underline style, etc. of some of one or more sentences, setting information for a part of one or more sentences may be changed.
- In step S740, the voice style feature applied to at least some of the one or more sentences may be changed based on the changed setting information. That is, the voice style feature corresponding to the setting information may be applied to at least some of the one or more sentences.
- In step S750, the synthesized speech for the one or more sentences reflecting the changed speech style characteristics may be output.
- the synthesized speech may be changed based on speech data output from the artificial neural network text-synthetic speech model by inputting the one or more sentences and the changed speech style characteristics into an artificial neural network text-synthetic speech model.
- FIG. 8 is a diagram illustrating a configuration of an artificial neural network-based text-to-speech synthesis apparatus, and of a network for extracting an embedding vector 822 capable of distinguishing each of a plurality of speakers and/or speech style features, according to an embodiment of the present disclosure.
- the text-to-speech synthesis apparatus may be configured to include an encoder 810, a decoder 820, and a post-processing processor 830.
- Such a text-to-speech synthesis device may be configured to be included in a synthesized speech generating system.
- the encoder 810 may receive character embeddings for input text, as shown in FIG. 8.
- the input text may include at least one of words, phrases, or sentences used in one or more languages.
- the encoder 810 may receive one or more sentences as input text through a user interface.
- the encoder 810 may separate the received input text into units such as graphemes, letters, and phonemes.
- the encoder 810 may receive input text separated into units such as graphemes, letters, and phonemes. Then, the encoder 810 may convert the separated input text into character embeddings.
- the encoder 810 may be configured to convert the input text into pronunciation information.
- the encoder 810 may pass the generated character embeddings through a pre-net including a fully-connected layer.
- the encoder 810 may provide an output from a pre-net to the CBHG module to output the encoder hidden states e i as shown in FIG. 8.
- the CBHG module may include a 1D convolution bank, max pooling, a highway network, and a bidirectional gated recurrent unit (GRU).
- when the encoder 810 receives the input text or the separated input text, the encoder 810 may be configured to generate at least one embedding layer.
- at least one embedding layer of the encoder 810 may generate character embeddings based on the input text divided into units such as graphemes, letters, and phonemes.
- the encoder 810 may use a previously learned machine learning model (for example, a probability model or an artificial neural network) to obtain character embeddings based on the separated input text.
- the encoder 810 may update the machine learning model while performing machine learning. When the machine learning model is updated, the character embedding of the separated input text may also be changed.
- the encoder 810 may pass the character embedding through a deep neural network (DNN) module composed of a fully-connected layer.
- the DNN may include a general feedforward layer or a linear layer.
- the encoder 810 may provide the output of the DNN to a module including at least one of a convolutional neural network (CNN) or a recurrent neural network (RNN), and may generate hidden states of the encoder 810.
- The CNN can capture local characteristics according to the size of its convolution kernel, while the RNN can capture long-term dependencies. A minimal encoder sketch consistent with this pipeline follows.
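- The following is a minimal PyTorch sketch of an encoder consistent with the embedding, pre-net, and convolution/recurrence pipeline described above. Layer sizes are illustrative, and the single convolution plus bidirectional GRU here is a simplified stand-in for the full CBHG module.

```python
import torch
import torch.nn as nn

class EncoderSketch(nn.Module):
    """Character IDs -> embeddings -> pre-net (FC) -> conv + BiGRU -> hidden states e_i."""

    def __init__(self, n_symbols=256, emb_dim=256, hidden=128):
        super().__init__()
        self.embedding = nn.Embedding(n_symbols, emb_dim)
        self.prenet = nn.Sequential(  # fully-connected pre-net
            nn.Linear(emb_dim, hidden), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.5),
        )
        # Simplified stand-in for the CBHG module (conv bank, pooling, highway, BiGRU).
        self.conv = nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)
        self.gru = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)

    def forward(self, char_ids):                            # (batch, time)
        x = self.prenet(self.embedding(char_ids))           # (batch, time, hidden)
        x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        hidden_states, _ = self.gru(x)                      # encoder hidden states e_i
        return hidden_states                                # (batch, time, 2 * hidden)
```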
- the hidden states of the encoder 810, that is, pronunciation information on the input text are provided to the decoder 820 including the attention module, and the decoder 820 may be configured to generate the pronunciation information as a voice.
- the decoder 820 may receive hidden states e i of the encoder from the encoder 810.
- the decoder 820 may include an attention module, a pre-net composed of fully-connected layers, an attention RNN (recurrent neural network) composed of gated recurrent units (GRUs), and a decoder RNN including residual GRUs.
- the attention RNN may output information to be used in the attention module.
- the decoder RNN may receive location information of the input text from the attention module. That is, the location information may include information indicating which part of the input text the decoder 820 is currently converting into speech.
- the decoder RNN may receive information from the attention RNN.
- the information received from the attention RNN may include information on which speech the decoder 820 has generated up to a previous time-step.
- the decoder RNN can generate the next output voice following the voice generated so far.
- the output voice may have a mel spectrogram form, and the output voice may include r frames.
- the pre-net included in the decoder 820 may be replaced with a DNN composed of fully-connected layers.
- the DNN may include at least one of a general feedforward layer or a linear layer.
- to train the artificial neural network text-to-speech synthesis model, the decoder 820 may use an existing database consisting of pairs of input text, information related to speaker and/or speech style characteristics, and speech signals corresponding to the input text.
- the decoder 820 may train the artificial neural network by using the input text and the information related to the speaker and/or speech style characteristics as inputs, and the speech signal corresponding to the input text as the ground truth.
- the decoder 820 may apply the input text and the information related to the speaker and/or speech style characteristic to the updated single artificial neural network text-to-speech synthesis model, and output a speech corresponding to the speaker and/or speech style characteristic.
- the output of the decoder 820 may be provided to the post-processor 830.
- the CBHG of the post-processing processor 830 may be configured to convert the mel scale spectrogram of the decoder 820 into a linear-scale spectrogram.
- the output signal of CBHG of the post-processing processor 830 may include a magnitude spectrogram.
- the phase of the output signal of the CBHG of the post-processing processor 830 may be restored through the Griffin-Lim algorithm and subjected to an inverse short-time Fourier transform (ISTFT).
- the post-processing processor 830 may output a voice signal in a time domain.
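- The mel-to-linear conversion and Griffin-Lim phase reconstruction described above can be approximated with standard signal-processing routines. The sketch below uses librosa; the sample rate and FFT parameters are illustrative assumptions.

```python
import librosa

def mel_to_audio(mel_db, sr=22050, n_fft=1024, hop_length=256):
    """Mel-scale spectrogram -> linear magnitude spectrogram -> waveform.

    Phase is restored with the Griffin-Lim algorithm, which internally
    applies the inverse short-time Fourier transform.
    """
    mel_power = librosa.db_to_power(mel_db)            # undo log/dB scaling
    linear = librosa.feature.inverse.mel_to_stft(      # mel -> linear magnitude
        mel_power, sr=sr, n_fft=n_fft)
    return librosa.griffinlim(linear, hop_length=hop_length)
```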
- the output of the decoder 820 may be provided to a vocoder (not shown).
- operations of the DNN, the attention RNN, and the decoder RNN may be repeatedly performed for text-to-speech synthesis. For example, the r frames acquired in the first time-step may become the input of the next time-step, and the r frames output in that time-step may in turn become the input of the time-step after it. Voices for all units of the text may be generated through this process.
- the text-to-speech synthesis apparatus may acquire the mel spectrogram for the entire text by concatenating the mel spectrograms generated at each time-step in chronological order, as sketched below.
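- The repeated time-steps can be written as a simple loop in which each step's r frames become the next step's input, and the per-step outputs are concatenated in time order. `decoder_step` below is an assumed callable standing in for the attention RNN and decoder RNN.

```python
import torch

@torch.no_grad()
def autoregressive_mels(decoder_step, encoder_states, n_steps, r=3, n_mels=80):
    """Run the decoder step by step; the r frames from each step feed the next.

    decoder_step: assumed callable (prev_frames, encoder_states) -> r new frames.
    Returns the mel spectrogram for the whole utterance in chronological order.
    """
    frames = torch.zeros(1, r, n_mels)  # <GO>-style all-zero frames for the first step
    outputs = []
    for _ in range(n_steps):
        frames = decoder_step(frames, encoder_states)  # (1, r, n_mels)
        outputs.append(frames)
    return torch.cat(outputs, dim=1)                   # (1, n_steps * r, n_mels)
```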
- The vocoder can predict the phase of the spectrogram through the Griffin-Lim algorithm.
- the vocoder may output a speech signal in a time domain using an Inverse Short-Time Fourier Transform.
- the vocoder may generate a speech signal from a mel spectrogram based on a machine learning model.
- the machine learning model may include a machine learning model of a correlation between a mel spectrogram and a speech signal.
- the vocoder may receive a mel spectrogram, or features such as linear prediction coefficients (LPC), line spectral pairs (LSP), line spectral frequencies (LSF), and the pitch period, as inputs, and output a speech signal.
- the artificial neural network-based text-to-speech synthesis apparatus may be trained using a large database of text and speech signal pairs.
- a loss function may be defined by providing text as input and comparing the generated output with the corresponding ground-truth speech signal.
- the text-to-speech synthesis apparatus may minimize the loss function through the error backpropagation algorithm, finally obtaining a single artificial neural network text-to-speech synthesis model that outputs the desired speech when arbitrary text is input; a minimal training step is sketched below.
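- A minimal training step consistent with this description might look as follows, assuming a PyTorch module that maps text IDs plus speaker/style information to a mel spectrogram; the L1 loss is an illustrative choice, not mandated by the disclosure.

```python
import torch.nn.functional as F

def train_step(model, optimizer, text_ids, speaker_info, target_mel):
    """One step: predict speech features from text, compare with ground truth,
    and update the model by error backpropagation."""
    optimizer.zero_grad()
    predicted_mel = model(text_ids, speaker_info)
    loss = F.l1_loss(predicted_mel, target_mel)
    loss.backward()   # error backpropagation
    optimizer.step()
    return loss.item()
```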
- the decoder 820 may receive hidden states e i of the encoder from the encoder 810. According to an embodiment, the decoder 820 of FIG. 8 may receive speech data 821 corresponding to a specific speaker and/or a specific speech style characteristic.
- the voice data 821 may include data representing voice input from a speaker within a predetermined time period (a short time period, for example, several seconds, tens of seconds, or tens of minutes).
- the speaker's voice data 821 may include voice spectrogram data (eg, log-mel-spectrogram).
- the decoder 820 may obtain an embedding vector 822 representing the speaker and/or speech style characteristics based on the speaker's speech data.
- the obtained embedding vector may be stored in advance, and when a specific speaker and/or speech style feature is requested through the user interface, a synthesized speech may be generated using the embedding vector corresponding to the requested information from among the previously stored embedding vectors, as in the sketch below.
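- Storing embeddings in advance and looking them up per request can be as simple as a keyed table. The sketch below is hypothetical (`EmbeddingStore` and its keys are illustrative names), and the random vector merely stands in for one extracted from real speech data.

```python
import numpy as np

class EmbeddingStore:
    """Pre-computed speaker/style embedding vectors, looked up at request time."""

    def __init__(self) -> None:
        self._vectors: dict[str, np.ndarray] = {}

    def add(self, key: str, vector: np.ndarray) -> None:
        self._vectors[key] = vector  # e.g., extracted once from a few seconds of speech

    def get(self, key: str) -> np.ndarray:
        return self._vectors[key]    # reused for every later synthesis request

store = EmbeddingStore()
store.add("speaker_A/calm", np.random.randn(256).astype(np.float32))
embedding = store.get("speaker_A/calm")  # fed to the decoder RNN and attention RNN
```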
- the decoder 820 may provide the obtained embedding vector 822 to the attention RNN and the decoder RNN.
- the text-to-speech synthesis apparatus illustrated in FIG. 8 may store in advance a plurality of embedding vectors corresponding to a plurality of speakers and/or a plurality of speech style features.
- a synthesized speech may be generated using a corresponding embedding vector.
- the text-to-speech synthesis apparatus can provide a TTS system that immediately, that is, adaptively, generates speech for a new speaker without additionally training the TTS model or manually searching for a speaker embedding vector. In other words, the text-to-speech synthesis apparatus may generate voices adaptively changed for a plurality of speakers.
- an embedding vector 822 extracted from speech data 821 of a specific speaker may be input to the decoder RNN and the attention RNN.
- a synthesized voice reflecting at least one of a vocal feature, a prosody feature, an emotion feature, or a tone and pitch feature included in the embedding vector 822 of a specific speaker may be generated.
- the network shown in FIG. 8 includes a convolutional network and max-over-time pooling, and may receive a log-mel-spectrogram of a speech sample or speech signal to extract a fixed-dimensional speaker embedding vector.
- the voice sample or the voice signal need not be voice data corresponding to the input text, and an arbitrarily selected voice signal may be used.
- an embedding vector 822 representing a new speaker and/or a new voice style characteristic may be generated through immediate adaptation of the network.
- the input spectrogram may have various lengths; the max-over-time pooling layer located at the end of the convolutional layers may, for example, output a fixed-dimensional vector having a length of 1 with respect to the time axis.
- a network including various layers can be constructed to extract speaker and/or voice style features.
- a network may be implemented to extract features using a recurrent neural network (RNN).
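- A speaker encoder of the kind described, with a convolutional stack followed by max-over-time pooling so that inputs of any length yield a fixed-size vector, might be sketched in PyTorch as follows; channel counts and kernel sizes are illustrative assumptions.

```python
import torch.nn as nn

class SpeakerEncoderSketch(nn.Module):
    """Log-mel-spectrogram -> conv layers -> max-over-time pooling -> embedding."""

    def __init__(self, n_mels=80, channels=128, emb_dim=256):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv1d(n_mels, channels, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.proj = nn.Linear(channels, emb_dim)

    def forward(self, log_mel):          # (batch, n_mels, time), any time length
        h = self.convs(log_mel)          # (batch, channels, time)
        pooled, _ = h.max(dim=2)         # max over time: length 1 on the time axis
        return self.proj(pooled)         # (batch, emb_dim) speaker/style embedding
```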
- Speech style characteristics for the received one or more sentences 910 may be determined. This voice style characteristic may be determined or changed based on setting information for at least some of one or more sentences.
- the speech style setting interface 920 may be displayed. According to the user's selection of one of a plurality of speech styles included in the speech style setting interface 920, the speech style selected for a given sentence may be determined. For example, when the user selects the sentence 922 of 'I am representative' and clicks the icon 912 associated with the speech style list, the speech style setting interface 920 may be displayed. When the user selects the part corresponding to '3' in the speech style setting interface 920, '#3' may be determined as the setting information for the sentence 922 of 'I am representative'.
- the voice style feature of the sentence 922 of 'I am representative' may be determined or set as 'emoticy', a predetermined voice style feature corresponding to '#3'.
- Similarly, when the user selects the sentence 924 of 'What is this service?' and clicks the icon 912, the speech style setting interface 920 may be displayed. By selecting the portion corresponding to '5' in the speech style setting interface 920, '#5' may be determined as the setting information for the sentence 924 of 'What is this service?'.
- the voice style feature of the sentence 924 of 'What is this service?' may be determined as the voice style feature of 'dangerously' corresponding to '#5'.
- the speech speed setting interface 930 may be displayed, and the speech speed selected for the selected sentence may be determined. For example, when the user selects the sentence 932 of 'I am happy to meet you' and clicks the icon 914 associated with speech speed, the speech speed setting interface 930 may be displayed.
- By selecting 'slow' in the speech speed setting interface 930, 'slowly' may be determined as the setting information for the sentence 932 of 'I am glad to meet you.', and the voice style feature for the sentence 932 may be determined as a predetermined slow-speed style feature.
- Similarly, when the user selects the sentence 934 of 'We are constantly improving and upgrading sound quality for better quality.' and clicks the icon 914 associated with speech speed, the speech speed setting interface 930 may be displayed. By selecting 'fast' in the speech speed setting interface 930, 'fast' may be determined as the setting information for the sentence 934, and the voice style feature for the sentence 934 may be determined as a predetermined fast-speed style feature. In addition, the speed of a selected sentence and/or part may be changed by the user, and a synthesized voice may be generated accordingly; the configuration for this will be described in detail with reference to FIG. 13.
- In FIG. 9, the voice style feature is determined according to an input through the user interface, but the present disclosure is not limited thereto; the voice style feature may be automatically determined according to the analysis result obtained using natural language processing in the synthesized voice generation system.
- For example, the synthesized speech generation system may recognize the sentence 'Well...' and determine a 'hesitating' speech style characteristic for the next sentence 932, 'I am glad to meet you.'. In this case, unlike what is shown in FIG. 9, 'hesitantly' may be displayed in front of the sentence 932 of 'I am glad to meet you.'
- Speech style characteristics for the received one or more sentences 1010 may be determined. These voice style characteristics may be determined or changed based on setting information for visually displaying at least a portion of one or more sentences.
- the setting information for visually displaying may include a font, a font style, a font color, a font size, a font effect, an underline, and an underline style.
- the setting information for visual display may be determined or changed according to a user input.
- the synthesized speech generation system may automatically determine setting information for visually displaying one or more sentences by analyzing one or more sentences.
- the font weight of the sentence 1014 of 'Emotion to text' may be set to bold, and the voice style feature for the sentence 1014 of 'Emotion to text' may be determined as a bold voice style feature.
- an underline may be added to the sentence 1016 of 'artificial intelligence voice actor service', and the voice style characteristic of the sentence 1016 may be determined as an emphasized voice style feature.
- the letter spacing of the sentence 1018 of 'I am glad to meet you' may be set wide, and the voice style characteristic of the sentence 1018 may be determined as a slow-speed style feature.
- the sentence 1022 of 'What is this service?' may be set in italics, and the voice style feature of the sentence 1022 may be determined as a sharp-tone voice style feature.
- the font of the sentence 1024 of 'We are constantly improving and upgrading the sound quality for better quality' may be set to a formal font, and the voice style characteristic for the sentence 1024 may be determined as a serious voice style feature.
- Silence may be inserted between the plurality of received sentences 1010.
- the time of silence to be inserted may be determined or changed based on a visual indication indicating the time of silence added between a plurality of received sentences.
- the visual display indicating the time of silence may correspond to the spacing between two of the plurality of sentences. For example, as shown in the figure, a wide spacing 1020 may be set after the sentence of 'If you have a question, please raise your hand and ask a question.', and silence for an amount of time corresponding to the spacing interval 1020 may be added between the two sentences. One illustrative mapping from spacing to silence time is sketched below.
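- The mapping from a visual gap to a silence duration is not specified in the disclosure; one plausible, purely illustrative rule is a linear conversion from spacing width to seconds:

```python
def silence_from_spacing(spacing_px: float, px_per_second: float = 40.0) -> float:
    """Map the visual gap between two sentences to a silence duration in seconds.

    px_per_second is a hypothetical UI calibration constant.
    """
    return max(0.0, spacing_px / px_per_second)
```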
- FIG. 11 is a diagram illustrating an exemplary screen 1100 of a user interface providing a speech synthesis service according to an embodiment of the present disclosure.
- An effect may be inserted into one or more received sentences 1110.
- This to-be-inserted effect may be determined or changed based on a visual indication indicating the characteristics of the to-be-inserted effect.
- the effects to be inserted may include sound effects, background music, and silence.
- a visual indication may be inserted between a plurality of sentences 1112 received through a user interface.
- FIG. 11 illustrates an operation in which an effect is inserted between a plurality of sentences, but the present disclosure is not limited thereto.
- an effect may be inserted before, after, or in the middle of one selected sentence.
- When at least one of the plurality of sentences 1112 received through the user interface, or a position between them, is selected and an icon associated with a sound effect (not shown) or an icon associated with silence (not shown) is clicked, a sound effect setting interface 1114 or a silence time setting interface 1118 may be displayed.
- the icon associated with a sound effect (not shown) or the icon associated with silence (not shown) may be disposed at an arbitrary position in the user interface. For example, when the user selects the position between the sentence of 'Hello,' and the sentence of 'I am representative' and clicks the icon (not shown) associated with a sound effect, the sound effect setting interface 1114 may be displayed.
- '#1' may be determined as a visual indication between the sentence of 'Hello,' and the sentence of 'I am representative.' Then, a sound effect corresponding to '#1' may be inserted between the two sentences.
- the silence time setting interface 1118 may be displayed.
- the slide bar may be moved to '1.5s' by a user input, so that '1.5s' is determined as the visual display after the sentence of 'ummm...'.
- silence for a time corresponding to '1.5s' may be inserted after the sentence of 'ummm...'.
- In FIG. 11, an operation in which an effect is inserted according to a user's input through the user interface is shown, but the present disclosure is not limited thereto; an effect may be automatically inserted according to the analysis result obtained using natural language processing in the synthesized speech generation system.
- FIG. 12 is a diagram illustrating an exemplary screen 1200 of a user interface providing a speech synthesis service according to an embodiment of the present disclosure.
- a list of roles can be displayed through the user interface.
- each role may include a predetermined voice style characteristic.
- when any one cast in the cast list displayed through the user interface is selected by the user, the selected cast (i.e., a cast to be used) may be determined for one or more sentence sets.
- a character list 1202 including 'Young-hee', 'Ji-young', and 'Kook-hee' may be displayed as a cast list through the user interface.
- By selecting Sunyoung 1204_1 from the character list and clicking the character application icon, the user may add Sunyoung to the cast to be used, together with Jinhyuk 1204_2 and Beomsu 1204_3 already included in the cast to be used.
- a cast list including the recommended cast candidates may be displayed through a user interface, and at least one of the one or more cast candidates may be determined as a cast for one or more sentence sets or grouped sentence sets.
- the roles in the cast list may be listed in the order of recommendation.
- the synthesized speech generation system analyzes one or more sentence sets or grouped sentence sets, recommends a cast list including a plurality of roles, and the recommended cast list may be output through a user interface. For example, by selecting one of the recommended cast candidates output from the user interface, the user may determine the selected cast candidate as a cast for one or more sentence sets or grouped sentence sets.
- In FIG. 12, an operation in which the cast to be used is determined according to a user's input through the user interface is shown, but the present disclosure is not limited thereto; the cast to be used may be automatically determined according to the analysis result obtained using natural language processing in the synthesized speech generation system.
- a cast or voice style characteristic corresponding to one or more sentences 1310 received from the user interface may be determined or changed.
- Such a determination or change may be referred to as a global style determination or change.
- the cast may be determined or changed for one or more separated sentence sets. For example, the user may change the cast corresponding to the sentence set including the sentence 'Hello, I'm the CEO', the sentence 'Well...', the sentence 'I'm glad to meet you', and the sentence 'This service uses artificial intelligence deep learning technology to learn the voice styles and characteristics of a specific person, allowing anyone to create audio content with individuality and emotion.' from 'Beomsu' to 'Jinhyuk' included in the cast to be used. To this end, when the user selects the area corresponding to the cast displayed on the user interface, a list of changeable role candidates 1312, including Beomsu, Jinhyuk, and Sunyoung, may be displayed.
- the order of the roles displayed in the role candidate list 1312 may be arranged in the order of role recommendation.
- the voice style characteristic may be changed from the voice style characteristic included in the cast of 'Beomsu' to the voice style characteristic included in the cast of 'Jinhyuk'.
- FIG. 13 illustrates an operation in which a cast to be used for a sentence set is determined according to a user's input through the user interface, but the present disclosure is not limited thereto; a cast to be used for the sentence set may be automatically determined according to the result analyzed using natural language processing in the synthesized speech generation system.
- voice style characteristics of at least some of the one or more sentences 1310 may be changed.
- This change can be referred to as a local style change.
- Here, 'some' may include not only sentences but also smaller units into which the sentences are divided, such as phonemes, letters, words, and syllables.
- An interface for changing a speech style characteristic for at least some of the selected one or more sentences may be output. For example, when the user selects the sentence 1314 of 'What is this service?', the interface 1320 for changing values representing voice style characteristics may be output.
- In the interface 1320, a loudness setting graph 1324, a pitch setting graph 1326, and a speed setting graph 1328 are shown, but the interface is not limited thereto, and arbitrary information indicating voice style characteristics may be displayed.
- In each graph, the x-axis may represent the units for which the user can change the voice style (for example, phonemes, letters, words, syllables, sentences, etc.), and the y-axis may indicate the style value of each unit.
- the voice style feature may include a sequential prosody feature including prosody information corresponding to at least one unit of a frame, a phoneme, a letter, a syllable, a word, or a sentence in chronological order.
- the prosody information may include at least one of information on the loudness of the sound, information on the height of the sound, information on the length of the sound, information on the pause period of the sound, or information on the speed of the sound.
- the style of sound may include any form, manner, or nuance represented by the sound or voice, and may include, for example, a tone, intonation, emotion, etc. inherent in the sound or voice.
- the sequential prosody feature may be expressed by a plurality of embedding vectors, and each of the plurality of embedding vectors may correspond to prosody information included in chronological order.
- the user may modify the y-axis value at a specific point on the x-axis within at least one of the graphs shown in the interface 1320.
- For example, to increase the loudness of a specific phoneme or letter, the user may adjust the loudness setting graph 1324 so as to increase the y-axis value at the x-axis point corresponding to that phoneme or letter.
- the synthesized speech generation system may receive the changed y-axis value corresponding to the corresponding phoneme or letter, input the speech style feature including the changed y-axis value and the one or more sentences including the corresponding phoneme or letter into the artificial neural network text-synthetic speech model, and generate a synthesized speech based on the speech data output from the artificial neural network text-synthetic speech model.
- the synthesized voice thus generated may be provided to a user through a user interface.
- the speech synthesis system may change the value of at least one embedding vector corresponding to the edited x-axis point, from among the plurality of embedding vectors corresponding to the speech style feature, with reference to the changed y-axis value, as sketched below.
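- Writing a changed y-axis value back into the per-unit embedding can be sketched as below; the assumption that particular embedding dimensions encode the edited quantity (loudness, pitch, or speed) is illustrative, since the disclosure does not fix the embedding layout.

```python
import numpy as np

def apply_graph_edit(prosody_embeddings: np.ndarray, unit_index: int,
                     new_y: float, dims: slice = slice(0, 1)) -> np.ndarray:
    """Write one edited y-axis value into the embedding of the matching unit.

    prosody_embeddings: (n_units, emb_dim), one vector per phoneme/letter/word.
    dims: which embedding dimensions encode the edited quantity (assumed layout).
    """
    edited = prosody_embeddings.copy()
    edited[unit_index, dims] = new_y
    return edited
```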
- the user may provide a voice in which the user reads the given sentence in a manner desired by the user to the synthesized voice generating system through the user interface.
- the synthesized speech generating system may input the received speech to an artificial neural network configured to infer the input speech as a sequential prosody characteristic, and output sequential prosody characteristics corresponding to the received speech.
- the output sequential prosody features may be represented by one or more embedding vectors. One or more of these embedding vectors may be reflected in a graph provided through the interface 1320.
- a loudness setting graph 1324, a pitch setting graph 1326, and a speed setting graph 1328 may be included in the interface 1320 for changing voice style characteristics, but the present invention is not limited thereto.
- a graph of the mel-scale spectrogram corresponding to the user's voice data may be shown together.
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Machine Translation (AREA)
Claims (16)
- 1. A method of generating a synthetic speech for text through a user interface, the method comprising: receiving one or more sentences; determining a speech style characteristic for the received one or more sentences; and outputting a synthesized speech for the one or more sentences in which the determined speech style characteristic is reflected, wherein the synthesized speech is generated based on speech data output from an artificial neural network text-synthetic speech model by inputting the one or more sentences and the determined speech style characteristic into the artificial neural network text-synthetic speech model.
- 2. The method of claim 1, further comprising outputting the received one or more sentences, wherein determining the speech style characteristic for the received one or more sentences comprises changing setting information for at least a part of the output one or more sentences, wherein the speech style characteristic applied to at least a part of the one or more sentences is changed based on the changed setting information, and wherein the synthesized speech is changed based on speech data output from the artificial neural network text-synthetic speech model by inputting the at least a part of the one or more sentences and the changed speech style characteristic into the artificial neural network text-synthetic speech model.
- 3. The method of claim 2, wherein changing the setting information for at least a part of the output one or more sentences comprises changing setting information for visually displaying a part of the output one or more sentences.
- 4. The method of claim 2, wherein receiving the one or more sentences comprises receiving a plurality of sentences, the method further comprising adding a visual indication indicating a characteristic of an effect to be inserted between the plurality of sentences, wherein the synthesized speech includes a sound effect generated based on the characteristic of the effect included in the added visual indication.
- 5. The method of claim 4, wherein the effect to be inserted between the plurality of sentences includes silence, and wherein adding the visual indication indicating the characteristic of the effect to be inserted between the plurality of sentences comprises adding a visual indication indicating a time of silence to be inserted between the plurality of sentences.
- 6. The method of claim 1, wherein receiving the one or more sentences comprises receiving a plurality of sentences, the method comprising separating the plurality of sentences into one or more sentence sets, and wherein determining the speech style characteristic for the received one or more sentences comprises: determining a cast corresponding to the separated one or more sentence sets; and setting a predetermined speech style characteristic corresponding to the determined cast.
- 7. The method of claim 6, wherein the separated one or more sentence sets are analyzed using natural language processing, and wherein determining the cast corresponding to the separated one or more sentence sets comprises: outputting one or more cast candidates recommended based on an analysis result of the one or more sentence sets; and selecting at least some of the output one or more cast candidates.
- 8. The method of claim 7, wherein the separated one or more sentence sets are grouped based on the analysis result, and wherein determining the cast corresponding to the separated one or more sentence sets comprises: outputting one or more cast candidates corresponding to each of the grouped sentence sets, recommended based on the analysis result; and selecting at least some of the output one or more cast candidates.
- 9. The method of claim 7, wherein determining the speech style characteristic for the received one or more sentences comprises: outputting one or more speech style feature candidates recommended based on the analysis result of the one or more sentence sets; and selecting at least some of the output one or more speech style feature candidates.
- 10. The method of claim 1, wherein the synthesized speech for the one or more sentences is inspected, the method further comprising changing, based on the inspection result, the speech style characteristic applied to the synthesized speech.
- 11. The method of claim 1, wherein audio content including the synthesized speech is generated.
- 12. The method of claim 11, further comprising receiving the generated audio content in response to a request for downloading the generated audio content.
- 13. The method of claim 11, further comprising playing the generated audio content in real time in response to a streaming request for the generated audio content.
- 14. The method of claim 11, further comprising mixing the generated audio content with video content.
- 15. The method of claim 1, further comprising outputting the received one or more sentences, wherein determining the speech style characteristic for the received one or more sentences comprises: selecting at least some of the output one or more sentences; outputting an interface for changing a speech style characteristic of at least a part of the selected one or more sentences; and changing, through the interface, a value representing the speech style characteristic of the at least a part, wherein the synthesized speech is changed based on speech data output from the artificial neural network text-synthetic speech model by inputting the at least a part of the one or more sentences and the value representing the changed speech style characteristic into the artificial neural network text-synthetic speech model.
- 16. A computer program stored in a computer-readable recording medium for executing, on a computer, the method of generating a synthesized speech for text through a user interface according to any one of claims 1 to 15.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/152,913 US20210142783A1 (en) | 2019-04-09 | 2021-01-20 | Method and system for generating synthetic speech for text through user interface |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2019-0041620 | 2019-04-09 | ||
KR20190041620 | 2019-04-09 | ||
KR1020200043362A KR20200119217A (en) | 2019-04-09 | 2020-04-09 | Method and system for generating synthesis voice for text via user interface |
KR10-2020-0043362 | 2020-04-09 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/152,913 Continuation US20210142783A1 (en) | 2019-04-09 | 2021-01-20 | Method and system for generating synthetic speech for text through user interface |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020209647A1 true WO2020209647A1 (en) | 2020-10-15 |
Family
ID=72751126
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/004857 WO2020209647A1 (en) | 2019-04-09 | 2020-04-09 | Method and system for generating synthetic speech for text through user interface |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020209647A1 (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008046951A (en) * | 2006-08-18 | 2008-02-28 | Xing Inc | System and method for generating electronic document, server device, terminal device, program for server device, and program for terminal device |
KR100768127B1 (en) * | 2007-04-10 | 2007-10-17 | (주)올라웍스 | Method for inferring personal relations by using readable data and method and system for tagging person identification information to digital data by using readable data |
KR20170092603A (en) * | 2014-12-04 | 2017-08-11 | 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 | Emotion type classification for interactive dialog system |
KR20180078197A (en) * | 2018-06-27 | 2018-07-09 | 조은형 | E-voice book editor and player |
Non-Patent Citations (1)
Title |
---|
YOUNGGUN LEE: "Emotional End-to-End Neural Speech Synthesizer", ARXIV:1711.05447, 28 November 2017 (2017-11-28), pages 1 - 5, XP081148145, Retrieved from the Internet <URL:https://arxiv.org/abs/1711.05447v2> [retrieved on 20200720] * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112509552A (en) * | 2020-11-27 | 2021-03-16 | 北京百度网讯科技有限公司 | Speech synthesis method, speech synthesis device, electronic equipment and storage medium |
CN112509552B (en) * | 2020-11-27 | 2023-09-26 | 北京百度网讯科技有限公司 | Speech synthesis method, device, electronic equipment and storage medium |
CN112786009A (en) * | 2021-02-26 | 2021-05-11 | 平安科技(深圳)有限公司 | Speech synthesis method, apparatus, device and storage medium |
CN113010138A (en) * | 2021-03-04 | 2021-06-22 | 腾讯科技(深圳)有限公司 | Article voice playing method, device and equipment and computer readable storage medium |
WO2022184055A1 (en) * | 2021-03-04 | 2022-09-09 | 腾讯科技(深圳)有限公司 | Speech playing method and apparatus for article, and device, storage medium and program product |
CN113539236A (en) * | 2021-07-13 | 2021-10-22 | 网易(杭州)网络有限公司 | Speech synthesis method and device |
CN113539236B (en) * | 2021-07-13 | 2024-03-15 | 网易(杭州)网络有限公司 | Speech synthesis method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210142783A1 (en) | Method and system for generating synthetic speech for text through user interface | |
WO2020209647A1 (en) | Method and system for generating synthetic speech for text through user interface | |
WO2019139430A1 (en) | Text-to-speech synthesis method and apparatus using machine learning, and computer-readable storage medium | |
CN111566656B (en) | Speech translation method and system using multi-language text speech synthesis model | |
WO2020027619A1 (en) | Method, device, and computer readable storage medium for text-to-speech synthesis using machine learning on basis of sequential prosody feature | |
WO2019139431A1 (en) | Speech translation method and system using multilingual text-to-speech synthesis model | |
WO2020190054A1 (en) | Speech synthesis apparatus and method therefor | |
WO2020190050A1 (en) | Speech synthesis apparatus and method therefor | |
CN112309366B (en) | Speech synthesis method, speech synthesis device, storage medium and electronic equipment | |
WO2022045651A1 (en) | Method and system for applying synthetic speech to speaker image | |
WO2019139428A1 (en) | Multilingual text-to-speech synthesis method | |
WO2022260432A1 (en) | Method and system for generating composite speech by using style tag expressed in natural language | |
EP4343755A1 (en) | Method and system for generating composite speech by using style tag expressed in natural language | |
KR20190109651A (en) | Voice imitation conversation service providing method and sytem based on artificial intelligence | |
WO2021085661A1 (en) | Intelligent voice recognition method and apparatus | |
WO2022034982A1 (en) | Method for performing synthetic speech generation operation on text | |
US20240038251A1 (en) | Audio data processing method and apparatus, electronic device, medium and program product | |
JP2006284645A (en) | Speech reproducing device, and reproducing program and reproducing method therefor | |
Lobanov et al. | A prototype of the computer system for speech intonation training | |
WO2024090997A1 (en) | Electronic device for acquiring synthesized speech by considering emotion and control method therefor | |
KR20240099120A (en) | Method and system for generating synthesis voice reflecting timeing information | |
WO2022196087A1 (en) | Information procesing device, information processing method, and information processing program | |
KR20220147554A (en) | Method for providing personalized voice contents | |
KR20220085257A (en) | Method and system for generating synthesis voice reflecting timeing information | |
d'Alessandro et al. | Is this guitar talking or what!? |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20787272; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 20787272; Country of ref document: EP; Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29.04.2022) |