CN110232908B - Distributed speech synthesis system - Google Patents

Distributed speech synthesis system

Info

Publication number
CN110232908B
CN110232908B
Authority
CN
China
Prior art keywords
module
unit
time
processing unit
sharing
Prior art date
Legal status
Active
Application number
CN201910693618.3A
Other languages
Chinese (zh)
Other versions
CN110232908A (en)
Inventor
许阿义
陈跃鸿
庄少波
Current Assignee
Xiamen Taieam Artificial Intelligence Technology Co ltd
Original Assignee
Xiamen Taieam Artificial Intelligence Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xiamen Taieam Artificial Intelligence Technology Co ltd
Priority to CN201910693618.3A
Publication of CN110232908A
Application granted
Publication of CN110232908B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 - Speech synthesis; Text to speech systems
    • G10L 13/02 - Methods for producing synthetic speech; Speech synthesisers
    • G10L 13/04 - Details of speech synthesis systems, e.g. synthesiser structure or memory management
    • G10L 13/047 - Architecture of speech synthesisers

Abstract

The invention discloses a distributed speech synthesis system comprising a speech synthesis layer and a resolution layer that are bidirectionally connected. The speech synthesis layer comprises a segmentation module, a combination module and a database; the output end of the segmentation module is connected with the input end of the combination module, and the combination module is bidirectionally connected with the database. The resolution layer comprises a central processing unit, a block processing unit, a grade division unit and a reward unit, and the central processing unit is bidirectionally connected with the speech synthesis layer and the block processing unit respectively. By having the central processing unit drive multiple terminals to share their processing capacity, the distributed speech synthesis system puts idle processors to use, builds a distributed speech synthesis system through the block processing unit, and pays a reasonable usage fee through the reward unit, achieving mutual benefit and win-win cooperation.

Description

Distributed speech synthesis system
Technical Field
The invention relates to the technical field of speech synthesis, and in particular to a distributed speech synthesis system.
Background
Speech synthesis, also known as text-to-speech technology, converts arbitrary text into standard, fluent speech in real time and reads it out, which is equivalent to fitting a machine with an artificial mouth. It draws on acoustics, linguistics, digital signal processing, computer science and other disciplines, and is a leading-edge technology in the field of Chinese information processing. The main problem it solves is how to convert textual information into audible sound, i.e. how to make a machine speak like a human. This differs essentially from traditional sound playback equipment: a traditional device such as a tape recorder makes a machine "speak" by prerecording sound and playing it back, which is severely limited in content, storage, transmission, convenience and timeliness. Computer speech synthesis, by contrast, can convert any text into speech of high naturalness at any time, truly allowing a machine to speak like a human.
Computer ownership has become very common, yet because working hours are limited, these computers cannot be used to full capacity and many processors sit idle. As computers continue to spread and personal machines are replaced ever more quickly, this idle waste of processing power grows. A speech synthesis system, on the other hand, requires a large amount of computation, and building a dedicated computing system for it is expensive. The idle processors of personal computers can therefore be harnessed to perform this computation effectively.
Disclosure of Invention
(I) Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a distributed speech synthesis system, which solves the problems that computers cannot be fully utilized so that many processors sit idle, that the rapid replacement of personal computers as they become ever more widespread directly wastes idle processing power, and that a speech synthesis system requires a large amount of computation, so that building a dedicated computing system is costly.
(II) Technical solution
In order to achieve the above purpose, the invention is realized by the following technical solution: a distributed speech synthesis system comprises a speech synthesis layer and a resolution layer that are bidirectionally connected. The speech synthesis layer comprises a segmentation module, a combination module and a database; the output end of the segmentation module is connected with the input end of the combination module, and the combination module is bidirectionally connected with the database. The resolution layer comprises a central processing unit, a block processing unit, a grade division unit and a reward unit; the central processing unit is bidirectionally connected with the speech synthesis layer and the block processing unit respectively, and the reward unit is bidirectionally connected with the grade division unit. The resolution layer is bidirectionally connected with a sharing layer, which comprises a sharing unit, an intelligent contract module and a real-time transmission module; the intelligent contract module is bidirectionally connected with the sharing unit, the real-time transmission module and the grade division unit respectively, and the real-time transmission module is bidirectionally connected with the block processing unit.
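For illustration only, the connectivity just described can be written out as a small adjacency table and queried; the snake_case labels and the Python representation below are editorial assumptions and form no part of the claimed system.

    # Hypothetical sketch of the described topology as bidirectional/directed links.
    BIDIRECTIONAL = [
        ("speech_synthesis_layer", "resolution_layer"),
        ("combination_module", "database"),
        ("central_processing_unit", "speech_synthesis_layer"),
        ("central_processing_unit", "block_processing_unit"),
        ("reward_unit", "grade_division_unit"),
        ("resolution_layer", "sharing_layer"),
        ("intelligent_contract_module", "sharing_unit"),
        ("intelligent_contract_module", "real_time_transmission_module"),
        ("intelligent_contract_module", "grade_division_unit"),
        ("real_time_transmission_module", "block_processing_unit"),
    ]
    DIRECTED = [("segmentation_module", "combination_module")]   # output end -> input end

    def neighbours(node):
        # All modules reachable in one hop, treating BIDIRECTIONAL links both ways.
        links = BIDIRECTIONAL + [(b, a) for a, b in BIDIRECTIONAL] + DIRECTED
        return sorted({b for a, b in links if a == node})

    print(neighbours("central_processing_unit"))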
Preferably, the block processing unit comprises a performance sorting module, a task issuing module and a waveform splicing module, wherein the output end of the performance sorting module is connected with the input end of the task issuing module, and the output end of the task issuing module is connected with the input end of the waveform splicing module.
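As a reading aid, the following is a minimal sketch of how the performance sorting module and the task issuing module could rank sharing terminals and hand out text segments. The single numeric performance score and the round-robin-over-ranked-terminals rule are assumptions for illustration, not the patent's prescribed scheduler.

    # Hypothetical sketch: rank sharing terminals by performance and issue
    # text segments to the fastest ones first.
    def issue_tasks(terminals, segments):
        # terminals: list of (terminal_id, performance_score)
        ranked = sorted(terminals, key=lambda t: t[1], reverse=True)  # performance sorting module
        assignments = {tid: [] for tid, _ in ranked}
        for i, segment in enumerate(segments):                        # task issuing module
            tid, _ = ranked[i % len(ranked)]
            assignments[tid].append(segment)
        return assignments            # results are later spliced by the waveform splicing module

    # Example: three terminals, four text segments
    print(issue_tasks([("t1", 9.5), ("t2", 4.0), ("t3", 7.2)],
                      ["seg1", "seg2", "seg3", "seg4"]))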
Preferably, the grade division unit comprises a performance transmission module, a data analysis module, a data comparison module and a grade determination module, wherein the output end of the performance transmission module is connected with the input end of the data analysis module, the output end of the data analysis module is connected with the input end of the data comparison module, and the output end of the data comparison module is connected with the input end of the grade determination module.
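One possible reading of this grading pipeline, sketched with made-up numbers: the performance data reported by a terminal is reduced to a single score (data analysis), compared against stored grading standards (data comparison), and mapped to a grade (grade determination). The scoring formula and thresholds below are assumptions.

    # Hypothetical grading sketch: data analysis -> comparison with stored standards -> grade.
    GRADING_STANDARDS = [(8.0, "A"), (5.0, "B"), (2.0, "C")]   # assumed thresholds

    def analyze(performance_data):
        # Data analysis module: reduce reported CPU/memory figures to one score (assumed formula).
        return (0.7 * performance_data["cpu_ghz"] * performance_data["cores"]
                + 0.3 * performance_data["ram_gb"] / 4)

    def determine_grade(performance_data):
        score = analyze(performance_data)
        for threshold, grade in GRADING_STANDARDS:              # grade determination module
            if score >= threshold:
                return grade
        return "D"

    print(determine_grade({"cpu_ghz": 3.2, "cores": 4, "ram_gb": 16}))   # -> "A"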
Preferably, the reward unit comprises a contribution time counting module, a time charging module, an actual time counting module, a work charging module and a weighted counting analysis module, wherein the output end of the contribution time counting module is connected with the input end of the time charging module, the output end of the actual time counting module is connected with the input end of the work charging module, and the output ends of the time charging module and the work charging module are connected with the input end of the weighted counting analysis module.
Preferably, the reward unit performs statistics using a reward algorithm: an on-hook unit price a is input into the time charging module and a working unit price b is input into the work charging module; the shared time counted by the contribution time counting module is recorded as T1, and the actual working time counted by the actual time counting module is recorded as T2; the weighted statistical analysis module then computes the total bonus as total bonus = a × T1 + b × T2.
Preferably, the on-hook unit price a of the time charging module and the working unit price b of the work charging module are determined by the grade division unit, and the higher the grade, the higher the unit price.
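The reward rule above reduces to total bonus = a × T1 + b × T2, with the unit prices a and b chosen according to the terminal's grade. A short sketch follows; the grade-to-unit-price table and the example hours are assumed values, not figures from the patent.

    # Hypothetical reward sketch: total_bonus = a * T1 + b * T2,
    # with on-hook price a and working price b looked up from the terminal's grade.
    UNIT_PRICES = {"A": (0.50, 2.00), "B": (0.30, 1.20), "C": (0.10, 0.50)}  # assumed (a, b)

    def total_bonus(grade, shared_hours_t1, working_hours_t2):
        a, b = UNIT_PRICES[grade]                   # time charging module / work charging module
        return a * shared_hours_t1 + b * working_hours_t2   # weighted statistical analysis module

    # Example: grade-A terminal on-hook for 10 h, actually computing for 3 h
    print(total_bonus("A", 10, 3))                  # 0.50*10 + 2.00*3 = 11.0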
Preferably, the combination module is a processing module that performs waveband combination comparison using the database.
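The combination module is described only as comparing waveband combinations against the database. A minimal concatenative sketch, assuming the database simply maps text units to stored waveform arrays, could look like this:

    # Hypothetical sketch of database-backed combination: look up each text unit's
    # stored waveform and splice the pieces end to end.
    import numpy as np

    def combine(units, database):
        # units: list of text units produced by the segmentation module
        # database: dict mapping a text unit to a 1-D waveform array
        pieces = [database[u] for u in units if u in database]   # waveband comparison/lookup
        return np.concatenate(pieces) if pieces else np.array([], dtype=np.float32)

    db = {"ni": np.zeros(160, dtype=np.float32), "hao": np.ones(160, dtype=np.float32)}
    print(combine(["ni", "hao"], db).shape)   # (320,)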
Preferably, the sharing unit is composed of a first sharing terminal, a second sharing terminal and so on up to an Nth sharing terminal.
(III) Advantageous effects
The invention provides a distributed speech synthesis system. The system has the following beneficial effects:
(1) In this distributed speech synthesis system, the speech synthesis layer comprises a segmentation module, a combination module and a database; the output end of the segmentation module is connected with the input end of the combination module, and the combination module is bidirectionally connected with the database. The resolution layer comprises a central processing unit, a block processing unit, a grade division unit and a reward unit; the central processing unit is bidirectionally connected with the speech synthesis layer and the block processing unit respectively, and the reward unit is bidirectionally connected with the grade division unit. The resolution layer is bidirectionally connected with a sharing layer comprising a sharing unit, an intelligent contract module and a real-time transmission module; the intelligent contract module is bidirectionally connected with the sharing unit, the real-time transmission module and the grade division unit respectively, and the real-time transmission module is bidirectionally connected with the block processing unit. Through the cooperation of the speech synthesis layer, the resolution layer, the segmentation module, the combination module, the database, the central processing unit, the block processing unit, the grade division unit, the reward unit, the sharing layer, the sharing unit, the intelligent contract module and the real-time transmission module, the central processing unit drives multiple terminals to share their processing capacity, idle processors are put to use, a distributed speech synthesis system is built through the block processing unit, and a reasonable usage fee is paid through the reward unit, achieving mutual benefit and win-win cooperation.
(2) In this distributed speech synthesis system, the block processing unit comprises a performance sorting module, a task issuing module and a waveform splicing module; the output end of the performance sorting module is connected with the input end of the task issuing module, and the output end of the task issuing module is connected with the input end of the waveform splicing module. Through the cooperation of the performance sorting module, the task issuing module and the waveform splicing module, the processing performance of the sharing terminals is ranked and processing tasks are, as far as possible, assigned to the fastest processors, ensuring the processing efficiency of speech synthesis.
(3) In this distributed speech synthesis system, the reward unit comprises a contribution time counting module, a time charging module, an actual time counting module, a work charging module and a weighted statistical analysis module; the output end of the contribution time counting module is connected with the input end of the time charging module, the output end of the actual time counting module is connected with the input end of the work charging module, and the output ends of the time charging module and the work charging module are both connected with the input end of the weighted statistical analysis module. The reward unit performs statistics with a reward algorithm: an on-hook unit price a is input into the time charging module and a working unit price b is input into the work charging module; the shared time counted by the contribution time counting module is recorded as T1 and the actual working time is recorded as T2; the weighted statistical analysis module then computes the total bonus as total bonus = a × T1 + b × T2. Through the cooperation of the contribution time counting module, the time charging module, the actual time counting module, the work charging module and the weighted statistical analysis module, sharing users receive a clear bonus incentive and can earn income during their computers' idle time, which greatly increases the number of sharing users and effectively guarantees fast processing by the system.
Drawings
FIG. 1 is a schematic block diagram of the system of the present invention;
FIG. 2 is a system schematic block diagram of the block processing unit of the present invention;
FIG. 3 is a system schematic block diagram of the grade division unit of the present invention;
FIG. 4 is a system schematic block diagram of the reward unit of the present invention;
FIG. 5 is a system schematic block diagram of the sharing unit of the present invention.
In the figures: 1-speech synthesis layer, 2-resolution layer, 3-segmentation module, 4-combination module, 5-database, 6-central processing unit, 7-block processing unit, 8-grade division unit, 9-reward unit, 10-sharing layer, 11-sharing unit, 111-first sharing terminal, 112-second sharing terminal, 11N-Nth sharing terminal, 12-intelligent contract module, 13-real-time transmission module, 14-performance sorting module, 15-task issuing module, 16-waveform splicing module, 17-performance transmission module, 18-data analysis module, 19-data comparison module, 20-grade determination module, 21-contribution time counting module, 22-time charging module, 23-actual time counting module, 24-work charging module, 25-weighted statistical analysis module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIGS. 1-5, an embodiment of the present invention provides the following technical solution: a distributed speech synthesis system comprises a speech synthesis layer 1 and a resolution layer 2 that are bidirectionally connected. The speech synthesis layer 1 comprises a segmentation module 3, a combination module 4 and a database 5; the combination module 4 is a processing module that performs waveband combination comparison using the database 5; the output end of the segmentation module 3 is connected with the input end of the combination module 4, and the combination module 4 is bidirectionally connected with the database 5.
The resolution layer 2 comprises a central processing unit 6, a block processing unit 7, a grade division unit 8 and a reward unit 9; the central processing unit 6 is an ARM9-series processor. The block processing unit 7 comprises a performance sorting module 14, a task issuing module 15 and a waveform splicing module 16; the output end of the performance sorting module 14 is connected with the input end of the task issuing module 15, and the output end of the task issuing module 15 is connected with the input end of the waveform splicing module 16. Through the cooperation of the performance sorting module 14, the task issuing module 15 and the waveform splicing module 16, the processing performance of the sharing terminals is ranked and processing tasks are, as far as possible, assigned to the fastest processors, ensuring the processing efficiency of speech synthesis. The grade division unit 8 comprises a performance transmission module 17, a data analysis module 18, a data comparison module 19 and a grade determination module 20; the performance transmission module 17 transmits the relevant data of a processor to the system, and the grade is determined by evaluating that data with the relevant software; the output end of the performance transmission module 17 is connected with the input end of the data analysis module 18, the output end of the data analysis module 18 is connected with the input end of the data comparison module 19, and the output end of the data comparison module 19 is connected with the input end of the grade determination module 20.
The reward unit 9 comprises a contribution time counting module 21, a time charging module 22, an actual time counting module 23, a work charging module 24 and a weighted statistical analysis module 25; the contribution time counting module 21 counts the total time for which an idle processor is made available, and the actual time counting module 23 counts the total time of actual auxiliary computation; the output end of the contribution time counting module 21 is connected with the input end of the time charging module 22, the output end of the actual time counting module 23 is connected with the input end of the work charging module 24, and the output ends of the time charging module 22 and the work charging module 24 are both connected with the input end of the weighted statistical analysis module 25. The reward unit 9 performs statistics with a reward algorithm: an on-hook unit price a is input into the time charging module 22 and a working unit price b is input into the work charging module 24; the shared time counted by the contribution time counting module 21 is recorded as T1 and the actual working time counted by the actual time counting module 23 is recorded as T2; the weighted statistical analysis module 25 then computes the total bonus as total bonus = a × T1 + b × T2. Through the cooperation of the contribution time counting module 21, the time charging module 22, the actual time counting module 23, the work charging module 24 and the weighted statistical analysis module 25, sharing users receive a clear bonus incentive and can earn income during their computers' idle time, which greatly increases the number of sharing users and effectively guarantees fast processing by the system. The on-hook unit price a of the time charging module 22 and the working unit price b of the work charging module 24 are determined by the grade division unit 8, and the higher the grade, the higher the unit price.
The central processing unit 6 is bidirectionally connected with the speech synthesis layer 1 and the block processing unit 7 respectively, and the reward unit 9 is bidirectionally connected with the grade division unit 8. The resolution layer 2 is bidirectionally connected with the sharing layer 10, which comprises a sharing unit 11, an intelligent contract module 12 and a real-time transmission module 13; the sharing unit 11 is composed of a first sharing terminal 111, a second sharing terminal 112 and so on up to an Nth sharing terminal 11N; the intelligent contract module 12 is bidirectionally connected with the sharing unit 11, the real-time transmission module 13 and the grade division unit 8 respectively, and the real-time transmission module 13 is bidirectionally connected with the block processing unit 7. Through the cooperation of the speech synthesis layer 1, the resolution layer 2, the segmentation module 3, the combination module 4, the database 5, the central processing unit 6, the block processing unit 7, the grade division unit 8, the reward unit 9, the sharing layer 10, the sharing unit 11, the intelligent contract module 12 and the real-time transmission module 13, the central processing unit 6 drives multiple terminals to share their processing capacity, idle processors are put to use, a distributed speech synthesis system is built through the block processing unit 7, and a reasonable usage fee is paid through the reward unit 9, achieving mutual benefit and win-win cooperation.
In operation, the sharing unit 11 connects to the system through the intelligent contract module 12, and a sharing user transmits its processor information to the grade division unit 8. The grading standards set by the system are stored in the data comparison module 19; the sharing user's processor information is processed and analyzed by the data analysis module 18, and the grade determination module 20 determines the grade. The performance sorting module 14 in the block processing unit 7 then ranks the terminals according to grade, and the task issuing module 15 assigns information processing preferentially to the higher-ranked terminals, i.e. they assist the waveform splicing module 16 with splicing calculations. At the same time, the corresponding unit prices are entered into the time charging module 22 and the work charging module 24, and after processing by the reward algorithm the bonus is obtained. After receiving an instruction, the speech synthesis layer 1 segments the text through the segmentation module 3 and retrieves and compares the entries stored in the database 5 through the combination module 4. The segmented text is issued to the sharing users' processors through the block processing unit 7, the computed results are collected through the real-time transmission module 13 during calculation, and the central processing unit 6 then aggregates them to realize speech synthesis.
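Tying the steps above together, the following compact sketch walks through the described working flow (grading, sorting, task issuing, distributed splicing, aggregation and reward settlement). Every function name, the trivial whitespace segmentation, and the fixed T1/T2 hours are editorial assumptions, not the patent's mandated implementation.

    # Hypothetical end-to-end flow of the described system.
    import numpy as np

    def synthesize(text, terminals, database, unit_prices):
        # terminals: {terminal_id: grade}, database: {text unit: waveform array},
        # unit_prices: {grade: (on_hook_price_a, working_price_b)}
        segments = text.split()                                   # segmentation module (assumed trivial)
        ranked = sorted(terminals, key=lambda t: terminals[t])     # performance sorting: grade "A" first
        assignments = {t: segments[i::len(ranked)] for i, t in enumerate(ranked)}  # task issuing
        # each sharing terminal splices its own share and returns it via real-time transmission
        returned = {}
        for t, segs in assignments.items():
            returned.update({s: database[s] for s in segs if s in database})
        # central processing unit aggregates the returned pieces in original segment order
        pieces = [returned[s] for s in segments if s in returned]
        waveform = np.concatenate(pieces) if pieces else np.array([], dtype=np.float32)
        # reward unit settles total bonus = a*T1 + b*T2 per terminal (T1=8 h, T2=1 h assumed)
        bonuses = {t: unit_prices[g][0] * 8 + unit_prices[g][1] * 1 for t, g in terminals.items()}
        return waveform, bonuses

    wave, pay = synthesize("ni hao", {"t1": "A", "t2": "B"},
                           {"ni": np.zeros(160), "hao": np.ones(160)},
                           {"A": (0.5, 2.0), "B": (0.3, 1.2)})
    print(wave.shape, pay)    # (320,) {'t1': 6.0, 't2': 3.6}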
It is noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (8)

1. A distributed speech synthesis system comprising a speech synthesis layer (1) and a resolution layer (2), said speech synthesis layer (1) and resolution layer (2) being bidirectionally connected, characterized in that: the speech synthesis layer (1) comprises a segmentation module (3), a combination module (4) and a database (5); the output end of the segmentation module (3) is connected with the input end of the combination module (4), and the combination module (4) is bidirectionally connected with the database (5); the resolution layer (2) comprises a central processing unit (6), a block processing unit (7), a grade division unit (8) and a reward unit (9); the central processing unit (6) is bidirectionally connected with the speech synthesis layer (1) and the block processing unit (7) respectively, and the reward unit (9) is bidirectionally connected with the grade division unit (8); the resolution layer (2) is bidirectionally connected with a sharing layer (10), the sharing layer (10) comprising a sharing unit (11), an intelligent contract module (12) and a real-time transmission module (13); the intelligent contract module (12) is bidirectionally connected with the sharing unit (11), the real-time transmission module (13) and the grade division unit (8) respectively, and the real-time transmission module (13) is bidirectionally connected with the block processing unit (7); the grade division unit (8) comprises a performance transmission module (17), a data analysis module (18), a data comparison module (19) and a grade determination module (20); the block processing unit (7) comprises a performance sorting module (14), a task issuing module (15) and a waveform splicing module (16); the reward unit (9) comprises a contribution time counting module (21), a time charging module (22), an actual time counting module (23), a work charging module (24) and a weighted statistical analysis module (25); the sharing unit (11) is connected to the system over a network through the intelligent contract module (12), and a sharing user transmits processor information to the grade division unit (8); the grading standards set by the system are stored in the data comparison module (19), the sharing user's processor information is processed and analyzed by the data analysis module (18), and the grade determination module (20) determines the grade; according to the grade, the performance sorting module (14) in the block processing unit (7) ranks the terminals, and the task issuing module (15) assigns information processing preferentially, i.e. the higher-ranked terminals assist the waveform splicing module (16) with splicing calculations; at the same time the corresponding unit prices are input into the time charging module (22) and the work charging module (24), and the bonus is obtained after processing by a reward algorithm; after receiving an instruction, the speech synthesis layer (1) segments the text through the segmentation module (3), and retrieves and compares the entries stored in the database (5) through the combination module (4); the segmented text is issued to the sharing users' processors through the block processing unit (7), the computed results are collected through the real-time transmission module (13) during calculation, and the central processing unit (6) then aggregates them to realize speech synthesis.
2. A distributed speech synthesis system according to claim 1, wherein: the output end of the performance sorting module (14) is connected with the input end of the task issuing module (15), and the output end of the task issuing module (15) is connected with the input end of the waveform splicing module (16).
3. A distributed speech synthesis system according to claim 1, wherein: the output end of the performance transmission module (17) is connected with the input end of the data analysis module (18), the output end of the data analysis module (18) is connected with the input end of the data comparison module (19), and the output end of the data comparison module (19) is connected with the input end of the grade determination module (20).
4. A distributed speech synthesis system according to claim 1, wherein: the output end of the contribution time counting module (21) is connected with the input end of the time charging module (22), the output end of the actual time counting module (23) is connected with the input end of the work charging module (24), and the output ends of the time charging module (22) and the work charging module (24) are connected with the input end of the weighted statistical analysis module (25).
5. A distributed speech synthesis system according to claim 4, wherein: the reward unit (9) performs statistics with a reward algorithm, specifically as follows: an on-hook unit price a is input into the time charging module (22) and a working unit price b is input into the work charging module (24); the shared time counted by the contribution time counting module (21) is recorded as T1, and the actual working time counted by the actual time counting module (23) is recorded as T2; the weighted statistical analysis module (25) then computes the total bonus as total bonus = a × T1 + b × T2.
6. A distributed speech synthesis system according to claim 5, wherein: the on-hook unit price a of the time charging module (22) and the working unit price b of the work charging module (24) are determined by the grade division unit (8), and the higher the grade, the higher the unit price.
7. A distributed speech synthesis system according to claim 1, wherein: the combination module (4) is a processing module that performs waveband combination comparison using the database (5).
8. A distributed speech synthesis system according to claim 1, wherein: the sharing unit (11) is composed of a first sharing terminal (111), a second sharing terminal (112) and an Nth sharing terminal (11N).
CN201910693618.3A 2019-07-30 2019-07-30 Distributed speech synthesis system Active CN110232908B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910693618.3A CN110232908B (en) 2019-07-30 2019-07-30 Distributed speech synthesis system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910693618.3A CN110232908B (en) 2019-07-30 2019-07-30 Distributed speech synthesis system

Publications (2)

Publication Number Publication Date
CN110232908A CN110232908A (en) 2019-09-13
CN110232908B (en) 2022-02-18

Family

ID=67855228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910693618.3A Active CN110232908B (en) 2019-07-30 2019-07-30 Distributed speech synthesis system

Country Status (1)

Country Link
CN (1) CN110232908B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040054534A1 (en) * 2002-09-13 2004-03-18 Junqua Jean-Claude Client-server voice customization

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1384489A (en) * 2002-04-22 2002-12-11 安徽中科大讯飞信息科技有限公司 Distributed voice synthesizing system
CN1384490A (en) * 2002-04-23 2002-12-11 安徽中科大讯飞信息科技有限公司 Distributed voice synthesizing method
CN101405731A (en) * 2006-01-23 2009-04-08 查查搜索公司 A scalable search system using human searchers
CN102568471A (en) * 2011-12-16 2012-07-11 安徽科大讯飞信息科技股份有限公司 Voice synthesis method, device and system
CN104538024A (en) * 2014-12-01 2015-04-22 百度在线网络技术(北京)有限公司 Speech synthesis method, apparatus and equipment
CN105096934A (en) * 2015-06-30 2015-11-25 百度在线网络技术(北京)有限公司 Method for constructing speech feature library as well as speech synthesis method, device and equipment
CN109074806A (en) * 2016-02-12 2018-12-21 亚马逊技术公司 Distributed audio output is controlled to realize voice output
CN108090052A (en) * 2018-01-05 2018-05-29 深圳市沃特沃德股份有限公司 Voice translation method and device
CN108447473A (en) * 2018-03-06 2018-08-24 深圳市沃特沃德股份有限公司 Voice translation method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Study on Distributed Speech Synthesis System; Tang Hao et al.; IEEE ICASSP 2003; 2003-12-31; full text *
Research on Cloud Computing Systems for Voice Interaction; Jia Yuhui; China Master's Theses Full-text Database (Information Science and Technology); 2014-03-15; full text *

Also Published As

Publication number Publication date
CN110232908A (en) 2019-09-13

Similar Documents

Publication Publication Date Title
TWI711967B (en) Method, device and equipment for determining broadcast voice
CN110782335B (en) Method, device and storage medium for processing credit data based on artificial intelligence
CN110245240A (en) A kind of determination method and device of problem data answer
CN101923855A (en) Test-irrelevant voice print identifying system
CN104462600A (en) Method and device for achieving automatic classification of calling reasons
CN106250400A (en) A kind of audio data processing method, device and system
CN111694940A (en) User report generation method and terminal equipment
CN109509010A (en) A kind of method for processing multimedia information, terminal and storage medium
CN103366784A (en) Multimedia playing method and device with function of voice controlling and humming searching
CN112700781A (en) Voice interaction system based on artificial intelligence
CN113254840B (en) Artificial intelligence application service pushing method, pushing platform and terminal equipment
CN111667284A (en) Customer service switching method and device
CN107908796A (en) E-Government duplicate checking method, apparatus and computer-readable recording medium
CN108154311A (en) Top-tier customer recognition methods and device based on random forest and decision tree
US11157916B2 (en) Systems and methods for detecting complaint interactions
CN112016327A (en) Intelligent structured text extraction method and device based on multiple rounds of conversations and electronic equipment
CN113807103B (en) Recruitment method, device, equipment and storage medium based on artificial intelligence
CN109147146B (en) Voice number taking method and terminal equipment
CN110232908B (en) Distributed speech synthesis system
CN117520503A (en) Financial customer service dialogue generation method, device, equipment and medium based on LLM model
CN105745679A (en) Fluoropolymer coatings comprising aziridine compounds and non-fluorinated polymer
CN108257600A (en) Method of speech processing and device
CN115878768A (en) NLP-based vehicle insurance service call-back clue recommendation method and related equipment thereof
CN111985231B (en) Unsupervised role recognition method and device, electronic equipment and storage medium
CN107437414A (en) Parallelization visitor's recognition methods based on embedded gpu system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant