JPS60220652A - Speech synthesizing system of exchange - Google Patents

Speech synthesizing system of exchange

Info

Publication number
JPS60220652A
Authority
JP
Japan
Prior art keywords
speech
voice
subscriber
exchange
translated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP59077738A
Other languages
Japanese (ja)
Inventor
Mitsuaki Ishikura
石倉 光明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Nippon Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp, Nippon Electric Co Ltd filed Critical NEC Corp
Priority to JP59077738A priority Critical patent/JPS60220652A/en
Publication of JPS60220652A publication Critical patent/JPS60220652A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M3/00: Automatic or semi-automatic exchanges
    • H04M3/42: Systems providing special services or facilities to subscribers
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00: Speech synthesis; Text to speech systems
    • G10L13/08: Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)

Abstract

PURPOSE: To enable the receiving subscriber, on hearing the translated synthesized speech from the exchange, to identify and recognize the transmitting subscriber, by adding voice-individuality information indicating the transmitting subscriber's personal voice characteristics to the translated spoken-language information before speech synthesis.

CONSTITUTION: The speech from a transmitting subscriber 1 is A/D-converted by an A/D converter 5 in an exchange 2 and automatically translated from a language A into a language B by an automatic speech translation device 9; the translated code is synthesized by a speech synthesis device 10, D/A-converted by a D/A converter 6 via a switching network 4, and output as a speech signal that reaches a receiving subscriber 3. In the process, an operator at a switchboard 15 registers the sex and age of the transmitting subscriber 1 in a transmitting-subscriber individuality information registration device 12 as voice-individuality parameters of that subscriber, based on a conversation with the subscriber. At the same time, a voice-individuality information extraction device 11 derives partial autocorrelation (PARCOR) coefficients from the voice information; the parameter information from the device 12 is added to these coefficients, and the combination is synthesized by the device 10 and transmitted to the receiving subscriber 3. The receiving subscriber 3 thus receives the speech in the translated language B while identifying and recognizing it as the speech of the transmitting subscriber 1.
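The dataflow summarized in this abstract can be sketched compactly. The Python fragment below is only an illustration of how the pieces fit together; the callables passed in stand for devices 9, 11 and 10 of FIG. 1 and are assumptions, not functions defined in the patent.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class VoiceIndividuality:
    parcor: Sequence[float]   # frame-wise partial autocorrelation coefficients (device 11)
    sex: str                  # registered by the switchboard operator (device 12)
    age: str                  # registered by the switchboard operator (device 12)

def exchange_pipeline(pcm: Sequence[float],
                      registered_sex: str,
                      registered_age: str,
                      translate: Callable,       # stand-in for automatic translation device 9
                      extract_parcor: Callable,  # stand-in for extraction device 11
                      synthesize: Callable):     # stand-in for speech synthesis device 10
    """Return translated speech in language B carrying the caller's voice individuality."""
    translated_code = translate(pcm)                          # language A -> language B
    traits = VoiceIndividuality(parcor=extract_parcor(pcm),
                                sex=registered_sex, age=registered_age)
    return synthesize(translated_code, traits)                # pseudo voice of the caller
```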

Description

DETAILED DESCRIPTION OF THE INVENTION

[Technical Field]

The present invention relates to a speech synthesis system in an exchange, and more particularly to a speech synthesis system in an exchange having an automatic speech translation function.

[Prior Art]

Conventionally, in exchanges of this kind, the speech synthesis method for voice information translated from a meaningful language A into a language B has been to generate, in the exchange, synthesized speech by editing pre-recorded specific voice information or speech segments, and to transmit it to the receiving party. The receiving party therefore always hears the same fixed translated synthesized voice without being able to recognize the individuality of the sender's voice, that is, without knowing who the sender is.

[Object of the Invention]

The object of the present invention is to eliminate the above drawback by adding voice-characteristic information indicating the sender's vocal individuality to the translated spoken-language information and then performing speech synthesis, thereby providing a speech synthesis system in an exchange that lets the receiving party identify and recognize the sender on hearing the translated synthesized voice from the exchange.

[Structure of the Invention]

According to the present invention, there is provided a speech synthesis system in an exchange that has an automatic speech translation device and a speech synthesis device coupled to it, the exchange further comprising a voice-individuality information extraction device and a voice-individuality information registration device, wherein sender voice-characteristic information from the voice-individuality information extraction device and the voice-individuality information registration device is added to the spoken-language information translated by the automatic speech translation device, and the result is speech-synthesized and transmitted to the receiving party.

[Embodiment]

The present invention will now be described with reference to the drawings.

FIG. 1 is a block diagram showing one embodiment of the speech synthesis system in an exchange according to the present invention; it consists of a sender 1, a receiver 3, and an exchange 2 connecting the two. The switching network 4 of the exchange 2 is connected to analog-to-digital conversion circuits 5 and 6, which serve the sender 1 and the receiver 3 respectively, to a signal distribution trunk circuit 7, to a synthesized-speech output trunk circuit 8, and to a switchboard connection trunk circuit 14. A switchboard 15 is connected to the switchboard connection trunk circuit 14, and the switchboard 15 is further connected to the exchange central processing unit (hereinafter, CPU) 13 via a data communication control bus 16. The automatic speech translation device 9 is connected to the speech synthesis device 10, and the two are connected to the signal distribution trunk circuit 7 and to the synthesized-speech output trunk circuit 8, respectively. The signal distribution trunk circuit 7 is also connected to the data reception buffer 11a of the voice-individuality information extraction device 11. The voice-individuality information extraction device 11 is connected to the speech synthesis device 10 via a data transmission buffer 11d, and to the speaker-individuality information registration device 12 via a central processing unit 11b. The voice-individuality information extraction device 11 is built around the central processing unit 11b and consists of the data reception buffer 11a, a storage device 11c for storing the individuality-information extraction program, and the data transmission buffer 11d. The speaker-individuality information registration device 12 consists of a data input/output control circuit 12a and a storage device 12b connected to it, and is connected via the data input/output control circuit 12a to the voice-individuality information extraction device 11 and to the data communication control bus 16 of the exchange 2.
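For readability, the wiring just described can be restated as a small connection map. The Python dictionary below merely re-encodes the connections listed in this paragraph, labeled by the reference numerals of FIG. 1, and adds nothing beyond them.

```python
# Connections of FIG. 1 as stated in the text; each key connects to every listed value.
FIG1_CONNECTIONS = {
    "switching network 4": ["A/D circuit 5", "A/D circuit 6",
                            "signal distribution trunk 7",
                            "synthesized-speech output trunk 8",
                            "switchboard trunk 14"],
    "switchboard trunk 14": ["switchboard 15"],
    "switchboard 15": ["data communication control bus 16"],
    "data communication control bus 16": ["exchange CPU 13",
                                          "registration device 12 (via I/O control 12a)"],
    "signal distribution trunk 7": ["translation device 9",
                                    "reception buffer 11a of extraction device 11"],
    "translation device 9": ["speech synthesis device 10"],
    "speech synthesis device 10": ["synthesized-speech output trunk 8"],
    "extraction device 11": ["speech synthesis device 10 (via transmission buffer 11d)",
                             "registration device 12 (via CPU 11b)"],
}
```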

Assume now that the sender 1 is speaking to the receiver 3 in language A. The voice signal of the sender 1 is encoded by the analog-to-digital conversion circuit 5, passes over the ordinary network path 4a, is decoded back into a voice signal by the analog-to-digital conversion circuit 6, and is delivered to the receiver 3. If the sender 1 or the receiver 3 now requests that the sender's spoken language A be translated into language B, the CPU 13 releases network path 4a and sets up network path 4b, so that the sender 1 is connected to the switchboard 15 via the switchboard connection trunk circuit 14. Based on the conversation with the sender 1, the operator of the switchboard 15 enters the sender's sex and age into the speaker-individuality information registration device 12 as voice-individuality parameters of the sender 1. This input is written into the storage device 12b of the speaker-individuality information registration device 12 by means of a registration command provided by the exchange 2.
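As a rough sketch of this registration step, the fragment below models the registration device 12 holding the operator-entered sex and age under a registration command. The class, field names and encodings are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class RegistrationDevice12:
    """Minimal model of the speaker-individuality registration device:
    storage 12b is represented as a dictionary keyed by the calling line."""
    storage_12b: Dict[str, dict] = field(default_factory=dict)

    def register_command(self, subscriber_line: str, sex: str, age_band: str) -> None:
        # The operator's entries become the voice-individuality parameters of the caller.
        self.storage_12b[subscriber_line] = {"sex": sex, "age_band": age_band}

# Example: the switchboard operator judges the caller to be an adult male.
device12 = RegistrationDevice12()
device12.register_command(subscriber_line="subscriber-1", sex="male", age_band="adult")
```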

Next, the exchange 2 releases network path 4b, sets up network paths 4c and 4d, and begins the speech translation processing. That is, the voice signal of the sender 1 encoded by the analog-to-digital conversion circuit 5 is input to the automatic speech translation device 9 via network path 4c and the signal distribution trunk circuit 7. The automatic speech translation device 9 recognizes and analyzes the input code sequence, translates it into language B, converts it into a code sequence corresponding to language B, and hands the processing over to the speech synthesis device 10. At the same time, the voice information of the sender 1 that was sampled and encoded by the analog-to-digital conversion circuit 5 is also fed from the signal distribution trunk circuit 7 into the voice-individuality information extraction device 11. Based on this input voice information, the voice-individuality information extraction device 11 computes the well-known partial autocorrelation (PARCOR) coefficients, adds to them the parameter information indicating the aforementioned sex and age of the sender 1 stored beforehand in the storage device 12b, and feeds the result to the speech synthesis device 10 via the data transmission buffer 11d as the characteristic parameters of the sender 1.
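The partial autocorrelation coefficients referred to here are conventionally obtained with the Levinson-Durbin recursion. The sketch below shows that computation for one windowed, non-silent frame and the bundling with the registered sex/age parameters; the function names and the numeric encoding of sex and age are assumptions for illustration only.

```python
import numpy as np

def parcor_coefficients(frame: np.ndarray, order: int = 10) -> np.ndarray:
    """Partial autocorrelation (reflection) coefficients of one speech frame,
    computed with the standard Levinson-Durbin recursion (frame assumed non-silent)."""
    r = np.array([frame[: len(frame) - lag] @ frame[lag:] for lag in range(order + 1)])
    a = np.zeros(order + 1)          # linear-prediction coefficients (a[0] implicitly 1)
    k = np.zeros(order)              # PARCOR coefficients
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k[i - 1] = -acc / err
        a_prev = a.copy()
        a[i] = k[i - 1]
        for j in range(1, i):
            a[j] = a_prev[j] + k[i - 1] * a_prev[i - j]
        err *= 1.0 - k[i - 1] ** 2
    return k

def sender_feature_parameters(frame: np.ndarray, sex_code: float, age_code: float) -> np.ndarray:
    """Characteristic parameters of the sender: PARCOR coefficients of the frame
    plus the operator-registered sex/age parameters read from storage 12b."""
    return np.concatenate([parcor_coefficients(frame), [sex_code, age_code]])
```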

This series of processing is carried out by the central processing unit 11b, which reads out the voice-individuality information extraction program stored in the program storage device 11c. The flow of this processing is explained with reference to FIG. 2.

FIG. 2 is a flowchart showing an example of the voice-individuality information extraction procedure of FIG. 1. In the figure, steps 1 to 3 are preprocessing for obtaining the partial autocorrelation coefficients in step 4; steps 1 to 4 are an example of a conventionally known speech-analysis feature-extraction flow, and step 5 is the part added by the present invention. That is, following the speech-analysis feature extraction, step 5 (extraction of sex information and extraction of age information) is performed. The speech synthesis device 10 shown in FIG. 1 is the means for generating and synthesizing a pseudo voice of the sender 1 on the basis of this information.
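The text does not spell out steps 1 to 3 beyond calling them conventional preprocessing for the PARCOR computation of step 4; a typical choice would be pre-emphasis, framing and Hamming windowing, as in the hedged sketch below (frame length, hop and pre-emphasis factor are illustrative values, not taken from the patent).

```python
import numpy as np

def preprocess_for_parcor(signal: np.ndarray, frame_len: int = 256,
                          hop: int = 128, preemph: float = 0.97) -> np.ndarray:
    """A conventional stand-in for steps 1-3 of FIG. 2: pre-emphasis, framing and
    Hamming windowing. Each returned row would then go to the PARCOR computation
    of step 4."""
    emphasized = np.append(signal[0], signal[1:] - preemph * signal[:-1])  # pre-emphasis
    window = np.hamming(frame_len)
    frames = [emphasized[s:s + frame_len] * window
              for s in range(0, len(emphasized) - frame_len + 1, hop)]
    return np.array(frames)
```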

In the exchange 2, the digital voice information generated and synthesized by the speech synthesis device 10 is taken from the synthesized-speech output trunk circuit 8, converted back into an analog voice signal via network path 4d and the analog-to-digital conversion circuit 6, and delivered to the receiver 3. Through this series of operations, the receiver 3 can receive the speech that was translated from language A into language B and pseudo-synthesized in the exchange 2 while identifying and recognizing it as the voice of the sender 1.
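The patent does not describe the internals of the speech synthesis device 10. One common way to consume PARCOR coefficients on the synthesis side is an all-pole lattice filter driven by an excitation signal; the sketch below shows that standard construction purely as an assumption about how such coefficients are typically used, not as the device's actual implementation.

```python
import numpy as np

def lattice_synthesize(excitation: np.ndarray, k: np.ndarray) -> np.ndarray:
    """All-pole lattice synthesis filter driven by PARCOR (reflection) coefficients k.
    The excitation would be a pulse train for voiced sounds or noise for unvoiced ones."""
    p = len(k)
    b = np.zeros(p)                      # delayed backward prediction errors
    out = np.zeros(len(excitation))
    for n, e in enumerate(excitation):
        f = e                            # top of the lattice (prediction residual)
        new_b = np.zeros(p)
        for i in range(p, 0, -1):
            f = f - k[i - 1] * b[i - 1]  # forward error one stage down
            if i < p:
                new_b[i] = b[i - 1] + k[i - 1] * f
        new_b[0] = f                     # b_0 equals the synthesized sample
        b = new_b
        out[n] = f
    return out
```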

[Effect of the Invention]

As explained above, the speech synthesis system in an exchange according to the present invention adds a voice-individuality information extraction device and a voice-individuality information registration device to an exchange having an automatic speech translation device and a speech synthesis device, with the effect that the receiver can identify and recognize the sender as if the sender himself were speaking in the translated language.

[Brief Description of the Drawings]

FIG. 1 is a block diagram showing one embodiment of the speech synthesis system in an exchange according to the present invention, and FIG. 2 is a flowchart showing an example of the voice-individuality information extraction procedure of FIG. 1. In the drawings, reference numeral 1 denotes the sender; 2, the exchange; 3, the receiver; 4, the switching network; 4a to 4d, network paths; 5 and 6, analog-to-digital conversion circuits; 7, the signal distribution trunk circuit; 8, the synthesized-speech output trunk circuit; 9, the automatic speech translation device; 10, the speech synthesis device; 11, the voice-individuality information extraction device; 11a, the data reception buffer; 11b, the central processing unit; 11c, the storage device for the individuality-information extraction program; 11d, the data transmission buffer; 12, the speaker-individuality information registration device; 12a, the data input/output control circuit; 12b, the storage device; 13, the exchange central processing unit; 14, the switchboard connection trunk circuit; 15, the switchboard; and 16, the data communication control bus.

Claims (1)

[Claims] In an exchange having an automatic speech translation device and a speech synthesis device coupled to it, a speech synthesis system in an exchange comprising a voice-individuality information extraction device and a voice-individuality information registration device, characterized in that sender voice-characteristic information from the voice-individuality information extraction device and the voice-individuality information registration device is added to the spoken-language information translated by the automatic speech translation device, and the result is speech-synthesized and transmitted to the receiving party.
JP59077738A 1984-04-18 1984-04-18 Speech synthesizing system of exchange Pending JPS60220652A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP59077738A JPS60220652A (en) 1984-04-18 1984-04-18 Speech synthesizing system of exchange

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP59077738A JPS60220652A (en) 1984-04-18 1984-04-18 Speech synthesizing system of exchange

Publications (1)

Publication Number Publication Date
JPS60220652A true JPS60220652A (en) 1985-11-05

Family

ID=13642246

Family Applications (1)

Application Number Title Priority Date Filing Date
JP59077738A Pending JPS60220652A (en) 1984-04-18 1984-04-18 Speech synthesizing system of exchange

Country Status (1)

Country Link
JP (1) JPS60220652A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1988005238A1 (en) * 1986-12-29 1988-07-14 Kazuo Hashimoto Answering machine equipped with automatic translation function
EP0307137A2 (en) * 1987-09-11 1989-03-15 Hashimoto Corporation Multiple language telephone answering machine
US5392343A (en) * 1992-11-10 1995-02-21 At&T Corp. On demand language interpretation in a telecommunications system
US9093713B2 (en) 2007-11-05 2015-07-28 The Furukawa Battery Co., Ltd. Method for producing lead-base alloy grid for lead-acid battery
CN111770235A (en) * 2020-07-03 2020-10-13 重庆智者炎麒科技有限公司 Intelligent voice access method and system

Similar Documents

Publication Publication Date Title
US6385585B1 (en) Embedded data in a coded voice channel
US6937977B2 (en) Method and apparatus for processing an input speech signal during presentation of an output audio signal
EP2207335B1 (en) Method and apparatus for storing and forwarding voice signals
CN108141498B (en) Translation method and terminal
EP0378694A1 (en) Response control system
JP2006099124A (en) Automatic voice/speaker recognition on digital radio channel
TW200304638A (en) Network-accessible speaker-dependent voice models of multiple persons
JP3473204B2 (en) Translation device and portable terminal device
JPS60220652A (en) Speech synthesizing system of exchange
JP2006319598A (en) Voice communication system
JPH10341256A (en) Method and system for extracting voiced sound from speech signal and reproducing speech signal from extracted voiced sound
US6498834B1 (en) Speech information communication system
KR0175251B1 (en) Automated response system using phoneme conversion method
JPH0422062B2 (en)
KR100553437B1 (en) wireless telecommunication terminal and method for transmitting voice message using speech synthesizing
JPH0220148A (en) Voice data packet transmitter
US6094628A (en) Method and apparatus for transmitting user-customized high-quality, low-bit-rate speech
JPH03241399A (en) Voice transmitting/receiving equipment
JPH07175495A (en) Voice recognition system
JPS60201393A (en) Voice message register system
KR102125447B1 (en) Data Generating Method And Apparatus for Improving Speech Recognition Performance
JPS58191068A (en) Communication device
JPH04355555A (en) Voice transmission method
KR100606676B1 (en) Apparatus and method for voice conversion in mobile communication system
KR100251000B1 (en) Speech recognition telephone information service system