JPH0434339B2 - Google Patents

Info

Publication number
JPH0434339B2
JPH0434339B2 JP57145874A JP14587482A JPH0434339B2
Authority
JP
Japan
Prior art keywords
signal
syllable
encoded
voice
signals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
JP57145874A
Other languages
Japanese (ja)
Other versions
JPS5936434A (en)
Inventor
Masao Tokura
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to JP57145874A priority Critical patent/JPS5936434A/en
Publication of JPS5936434A publication Critical patent/JPS5936434A/en
Publication of JPH0434339B2 publication Critical patent/JPH0434339B2/ja
Granted legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B1/00Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B1/66Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission for reducing bandwidth of signals; for improving efficiency of transmission

Abstract

PURPOSE: To allow the simultaneous conversation of plural persons to be transmitted and received on one line, by transmitting and receiving voice as a digital signal with a small number of bits and attaching an address to the signal. CONSTITUTION: The syllable sounds of everyday conversational speech are divided into about 50 categories, converted into syllable code signals in binary code, and listed in a table; the input voice is matched to a syllable code signal selected from the table, and in addition the strength, pitch, and length of the syllable are detected and represented as voice-accompanying signals in binary code. An address signal 5 and a sequence signal 6 are then attached to the syllable code signal 1 and its voice-accompanying signals, and the voice is transmitted between a transmitter 7 and a receiver 8 connected to one line. Thus, even if the voices of plural speakers A-D occur on one line at the same time, each voice is delivered to the listeners A'-D' as a signal with a small number of bits identified by its address signal.

Description

DETAILED DESCRIPTION OF THE INVENTION (Field of Industrial Application) The present invention relates to a voice transmission/reception method. More specifically, it relates to a voice communication method in which everyday speech, including the length of each sound, is converted into an encoded (digital) signal, transmitted, and restored to the original speech on the receiving side.

(Prior Art) In telephone communication, a method is known in which the voice frequency is analyzed, converted into an analog signal, transmitted, and restored to conversational speech on the receiving side. The voice digitizer used in this transmission method divides the time axis of the speech waveform into extremely fine intervals, captures the amplitude of the waveform in each interval, takes it out as an analog speech waveform signal, and transmits it as a digital signal. For example, the waveform representing the single sound "a" is divided into 2400 bits along the time axis, the amplitude signal of each bit is output in sequence, and the speech waveform is restored on the receiving side to obtain conversational speech. This is explained in more detail with reference to FIG. 1.

FIG. 1 is a diagram showing the principle of converting a speech waveform into an analog signal. For example, if section A in the figure represents the sound "a", section B the sound "i", and section C the prolonged sound "ū", then the time axis of section A is finely divided into 2400 bits, the value of the speech amplitude (T1) is obtained for each divided time (t1), these values are transmitted one after another, and the amplitudes for all bits of section A are assembled to obtain the sound "a" as an analog speech waveform signal.
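
As a rough illustration of this prior-art digitizer, the following is a hypothetical Python sketch, not part of the patent; the waveform, section boundaries, and function names are assumed for illustration. One syllable-long section is split into 2400 time slots and the amplitude of each slot is taken in order, which is why so many bits are needed per syllable.

```python
import math

SLOTS_PER_SYLLABLE = 2400  # time-axis divisions for one sound, per the text

def digitize_section(waveform, start, end):
    """Sample the amplitude T_i at 2400 evenly spaced times t_i in [start, end)."""
    step = (end - start) / SLOTS_PER_SYLLABLE
    return [waveform(start + i * step) for i in range(SLOTS_PER_SYLLABLE)]

def restore_section(amplitudes):
    """Receiving side: collect every amplitude of the section to replay the waveform."""
    return list(amplitudes)

# A made-up 200 Hz tone standing in for the sound "a" over a 0.3 s section
samples = digitize_section(lambda t: math.sin(2 * math.pi * 200 * t), 0.0, 0.3)
print(len(samples))                   # 2400 amplitude values for this single sound
print(len(restore_section(samples)))  # the receiver simply reassembles them
```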

(Problems to Be Solved by the Invention) In the conventional voice transmission/reception system described above, 2400 bits are used to represent a unit of speech, that is, one syllable, so an enormous amount of information is needed for a whole conversation. The amount of data required for voice communication therefore becomes large and the line becomes congested. Conversely, if the number of time-axis divisions, that is, the number of bits, is reduced in the conventional system, a clear speech signal cannot be obtained, and in some cases one sound can no longer be distinguished from another.

An object of the present invention is to encode the digital speech signal and thereby provide a voice transmission/reception method capable of transmitting the speech signal with an extremely small number of bits compared with the conventional system described above, and of easily converting it back into the original speech on the receiving side.

Another object of the present invention is to provide a voice transmission/reception method in which, even when several people communicate simultaneously over a single line between transmitting and receiving sides, the speech can be sorted by speaker and reproduced without confusion on the receiving side, so that the number of lines between transmitting and receiving sides can be reduced, or many people can carry out voice communication simultaneously over a small number of lines.

(Means for Solving the Problems) The above object of the present invention is achieved as follows. On the transmitting side, the input speech is sampled at a fixed period while the individual syllables are segmented and encoded into syllable-encoded signals; the strength (speech amplitude), pitch (speech frequency), and timbre (waveform of the speech frequency) of the sound accompanying each syllable of the speech are detected and converted into syllable-accompanying encoded signals, which are attached to the syllable-encoded signal and held in memory in sequence. When the syllable-encoded signal or a syllable-accompanying encoded signal changes, the time that elapsed before the change is detected and encoded, and this time-information encoded signal is attached to the syllable-encoded signal and the syllable-accompanying encoded signals immediately preceding the change. The syllable-encoded signal, the syllable-accompanying encoded signals, and the time-information encoded signal are transmitted in sequence as one speech-encoding element signal each time such a change occurs, and in this transmission order the receiving side converts them into analog signals and restores the speech in sequence.

According to another aspect of the present invention, there is provided a voice transmission/reception method in which, on the transmitting side, the input speech is sampled at a fixed period while the individual syllables are segmented and encoded into syllable-encoded signals; the strength (speech amplitude), pitch (speech frequency), and timbre (waveform of the speech frequency) of the sound accompanying each syllable of the speech are detected and converted into syllable-accompanying encoded signals, which are attached to the syllable-encoded signal and held in memory in sequence; when the syllable-encoded signal or a syllable-accompanying encoded signal changes, the time that elapsed before the change is detected and encoded, and this time-information encoded signal is attached to the syllable-encoded signal and the syllable-accompanying encoded signals immediately preceding the change; the syllable-encoded signal, the syllable-accompanying encoded signals, and the time-information encoded signal form one unit of speech-encoding element signal; an address signal corresponding to each of a plurality of speakers, and a sequence code signal indicating the order of the signal within the speech-encoding element signals of the speaker identified by that address signal, are further added to the speech-encoding element signal, which is transmitted in sequence together with them each time the syllable-encoded signal or a syllable-accompanying encoded signal changes; and the receiving side sorts the speech-encoding element signals according to the address signal and the sequence code signal, converts them into analog signals, and restores the speech of each of the plural speakers in sequence.

(Embodiments) Next, the present invention will be described more concretely with reference to the drawings.

FIG. 2 shows the principle by which human speech is represented, speech unit by speech unit, in digital code according to the present invention. First, the sounds of the syllables of everyday conversational speech are divided into roughly 50 categories, each represented by a binary code, and these are tabulated in advance. The codes need only distinguish the syllables from one another, so each syllable can be represented by at most about 8 bits. For example, to represent the utterance "āru" ("R") in digital code, the speech (syllable) code signal identifying the sound of the first syllable "a" is defined in advance as (00000001), and this syllable code signal is entered in the item (block) labeled 1 in FIG. 2. Next, the length of the first sound "a" in the pronunciation of "āru" is detected, and a signal of a few bits representing the degree of prolongation is entered in the item labeled 2 in the figure. In the same way, the strength, pitch, and timbre of the sound "a" are detected, converted into binary code, and entered in items 3 and 4. Here, these signals 2, 3, and 4 are called the signals accompanying the syllable code signal of "a", that is, the syllable-accompanying encoded signals.
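
A minimal sketch of this tabulation and of the items 1-4 of FIG. 2 might look as follows (hypothetical Python; only the code (00000001) for "a", the rough table size of about 50 entries, and the 8-bit budget come from the text, while the remaining codes and field values are assumptions):

```python
# Hypothetical sketch of the syllable code table and one FIG. 2 element.
SYLLABLE_TABLE = {
    "a": 0b00000001,   # code given in the text for the first syllable of "aaru"
    "i": 0b00000010,   # the remaining codes are illustrative assumptions
    "u": 0b00000011,
    "ru": 0b00110010,
    # ... roughly 50 entries in total, each fitting in about 8 bits
}

def encode_syllable(syllable, length, strength, pitch_timbre):
    """Build one column of FIG. 2: the syllable code plus its accompanying codes."""
    return {
        "item1_syllable_code": SYLLABLE_TABLE[syllable],   # item 1
        "item2_length": length,              # item 2: degree of prolongation, a few bits
        "item3_strength": strength,          # item 3: strength of the sound
        "item4_pitch_timbre": pitch_timbre,  # item 4: pitch / timbre
    }

print(encode_syllable("a", length=0b11, strength=0b10, pitch_timbre=0b01))
```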

An example of detecting the length of a sound and turning it into an encoded signal will now be described. The input speech signal is sampled at a fixed period and each sound is encoded, but as long as the encoded signal of a sound does not change, that encoded signal is simply held (accumulated) inside the transmitting apparatus and no transmission takes place. When a change (in strength, pitch, or the like) occurs in the next input speech signal, the next encoding is performed, and at the same time the time interval from the previous encoding to the present one is detected and encoded; this encoded time information is appended to the immediately preceding speech-encoded signal and transmitted. In this way, each time a change appears in the input speech signal, the previously encoded signal is transmitted together with its accompanying time-information encoded signal (which can be fixed only after the next change, for example after an input speech signal of a different sound arrives). Although this method makes real-time two-way conversation difficult, the amount of data transmitted is only a very small number of bits, and the amount of memory needed for accumulation is extremely small. On the receiving side, the time-information encoded signal indicating the time interval is restored into speech having its actual long and short durations, so a more natural restored voice is obtained.
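
The change-driven holding and transmission described above can be pictured with the following sketch (hypothetical Python under assumed names; the sampling period, the `encode` stand-in, and the message format are illustrative, and only the hold-until-change behaviour and the appended time information follow the text):

```python
# Hypothetical sketch of change-driven transmission with time information.
# A newly encoded value is held until the input changes; the elapsed time is
# then encoded and sent together with the held (previous) value.
SAMPLING_PERIOD = 0.01  # seconds per sample (assumed value)

def transmit_on_change(samples, encode, send):
    held = None        # encoded signal currently held in the transmitter
    held_ticks = 0     # how many sampling periods it has lasted so far
    for sample in samples:
        code = encode(sample)
        if held is None:
            held, held_ticks = code, 1
        elif code == held:
            held_ticks += 1          # no change: keep holding, transmit nothing
        else:
            # change detected: send the previous code with its time information
            send({"code": held, "time_info": held_ticks * SAMPLING_PERIOD})
            held, held_ticks = code, 1
    if held is not None:             # flush the last held code at end of input
        send({"code": held, "time_info": held_ticks * SAMPLING_PERIOD})

# Example with toy "encoded" values standing in for syllable/accompanying codes
transmit_on_change([1, 1, 1, 2, 2, 3], encode=lambda s: s, send=print)
```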

The syllable-accompanying encoded signals include, besides the length, strength, and pitch of the syllable, signals representing geminate (doubled) consonants as in "koppu", semi-voiced sounds such as "pu", and voiced sounds such as "go". Next, for the syllable "ru", the syllable code signal identifying "ru" is likewise selected from the table and entered in item 1′, its length, strength, pitch, and so on are detected, and the binary-coded syllable-accompanying encoded signals are entered in items 2′, 3′, and 4′. Here, one syllable code signal together with its corresponding syllable-accompanying encoded signals is called a speech-encoding element signal. Combining the speech-encoding element signals shown in the figure identifies the spoken word "āru". To transmit the utterance "āru", it suffices to send the two speech-encoding element signals described above one after the other. On the receiving side, each digital code signal sent from the transmitting side is analyzed by a code recognition circuit, converted to analog form, and the conversational sound "āru" is output by a synthesized-sound generator. FIG. 3 shows the transmitting and receiving system in this case. The transmitting side 7 is provided with a microphone 10, a speech analyzer 11, an analog-to-digital converter 12, and a digital code signal transmitter 16; the receiving side 8 is provided with a code recognition circuit 13, a digital-to-analog converter 14, and a synthesized-sound generator (receiver, speaker) 15.
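
The FIG. 3 chain can be summarized, purely as an illustrative sketch with stub stages (hypothetical Python; the stub behaviour and table contents are invented, and only the numbered blocks 11-16 and their roles come from the figure):

```python
# Hypothetical sketch of the FIG. 3 transmit/receive chain, with stub stages.
def speech_analyzer(voice):            # 11: segment into syllables with accompanying info
    return [{"syllable": s, "length": 1, "strength": 1} for s in voice]

def analog_to_digital(analyzed):       # 12: map each analyzed syllable to binary codes
    table = {"a": 0b00000001, "ru": 0b00110010}
    return [(table[x["syllable"]], x["length"], x["strength"]) for x in analyzed]

def digital_code_transmitter(codes):   # 16: serialize the codes onto the line
    return list(codes)

def code_recognition(line_signal):     # 13: recover the element signals from the line
    return list(line_signal)

def digital_to_analog(elements):       # 14: look the syllable codes back up
    inverse = {0b00000001: "a", 0b00110010: "ru"}
    return [inverse[code] for code, _, _ in elements]

def synthesized_sound(analog):         # 15: output the conversational voice
    return "".join(analog)

# Transmitting side 7 -> one line -> receiving side 8
line = digital_code_transmitter(analog_to_digital(speech_analyzer(["a", "ru"])))
print(synthesized_sound(digital_to_analog(code_recognition(line))))  # -> "aru"
```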

FIG. 4 is a system diagram of a communication network in which several subscriber lines are connected on the transmitting side and/or the receiving side and several people talk at the same time, and FIG. 5 shows an example of a speech-encoding element signal for such simultaneous multi-party conversation. In this case, following the syllable code signal 1 and the syllable-accompanying encoded signals 2, 3, and 4 described above, an address signal 5 and a sequence code signal 6, which indicates the order of the signal within the speech-encoding element signals of the speaker identified by the address signal, are encoded (in binary) and appended. Even if several speakers talk simultaneously on their respective transmitters, a speech-encoding element signal is created for each address by means of the address signal, these element signals are transmitted in an order determined on the transmitting side, and the receiving side sorts them according to the address signal 5 into received speech-encoding element signals corresponding to the individual speakers on the transmitting side. At the same time, the receiving side arranges the speech-encoding element signals of each speaker according to the sequence code signal 6 carried in the element signal, and connects the synthesized, restored speech for transmitting-side speakers A, B, ... to the desired listeners on the receiving side, for example to A′, B′, ..., respectively. In this way the speech-encoding element signals of all speakers can be carried over the same line between the transmitting and receiving sides, and the number of lines can be reduced markedly compared with ordinary telephone communication. If the sequence code signal is made a binary sequence code signal, then, for example, when some of a speaker's speech-encoding element signals arrive missing because of some fault during transmission, the receiving side can immediately notify the transmitting side and instruct it to retransmit only the missing speech-encoding element signals.
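
The address/sequence mechanism of FIGS. 4 and 5 might be sketched as follows (hypothetical Python; the field names and the gap-detection detail are illustrative assumptions, while the sorting by address signal 5, the ordering by sequence code signal 6, and the retransmission request for missing elements follow the text):

```python
# Hypothetical sketch of multi-speaker transmission over one line (FIGS. 4 and 5).
from collections import defaultdict

def tag_elements(speaker_address, elements):
    """Attach address signal 5 and sequence code signal 6 to each element signal."""
    return [{"address": speaker_address, "seq": i, "element": e}
            for i, e in enumerate(elements)]

def receive(line):
    """Sort received element signals by address, order them by sequence number,
    and report any missing sequence numbers so they can be retransmitted."""
    per_speaker = defaultdict(list)
    for item in line:
        per_speaker[item["address"]].append(item)
    restored, missing = {}, {}
    for addr, items in per_speaker.items():
        items.sort(key=lambda it: it["seq"])
        expected = range(items[-1]["seq"] + 1)
        missing[addr] = sorted(set(expected) - {it["seq"] for it in items})
        restored[addr] = [it["element"] for it in items]
    return restored, missing

# Speakers A and B share the line; one of B's elements is lost in transit.
line = tag_elements("A", ["a", "ru"]) + tag_elements("B", ["i", "e"])
line = [it for it in line if not (it["address"] == "B" and it["seq"] == 0)]
restored, missing = receive(line)
print(restored)  # {'A': ['a', 'ru'], 'B': ['e']}
print(missing)   # {'A': [], 'B': [0]} -> ask the transmitting side to resend these
```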

In the embodiment described above, the speech-encoding element signal was described in the form of a syllable code signal followed by the syllable-accompanying encoded signals, but the present invention is not limited to this form; the conversational speech may be stored in memory and the speech-encoding element signal may be composed with the corresponding syllable code signal following the syllable-accompanying encoded signals. Likewise, the address signal and the sequence code signal may be placed before the speech-encoding element signal. In the above description, "syllable" means the vocal sound of the syllable portion of conversational speech, that is, the syllable sound.

(Effects of the Invention) As explained above, according to the present invention, the length information of each sound is transmitted as well, so a more natural voice can be reproduced. Although the method is somewhat inconvenient for real-time conversation with the other party, it is extremely effective for compressing the amount of data transmitted, keeping the number of bits very small, and reducing the amount of memory needed when the code signals are accumulated. Furthermore, even if many people carry out voice communication at the same time over a small number of lines, the voices can be sorted by speaker and reproduced without confusion on the receiving side. The present invention is not limited to long-distance telephone communication; it can be applied to voice mail systems and to any other voice transmission and reception system, and it can also be incorporated into and applied in, for example, document preparation equipment using voice input.

[Brief Description of the Drawings]

FIG. 1 is a diagram showing a speech waveform represented as an analog signal; FIG. 2 is a diagram showing an example of a speech-encoding element signal; FIG. 3 is a diagram showing an example of a transmitting/receiving system for carrying out the present invention; FIG. 4 is a diagram showing the transmitting/receiving system for a simultaneous multi-party conversation; and FIG. 5 is a diagram showing an example of a speech-encoding element signal applied to a simultaneous multi-party conversation. 1 ... syllable code signal; 2, 3, 4 ... syllable-accompanying encoded signals; 5 ... address signal; 6 ... sequence code signal; 7 ... transmitting side; 8 ... receiving side; 11 ... speech analyzer; 12 ... analog-to-digital converter; 13 ... code recognition circuit; 14 ... digital-to-analog converter; 15 ... synthesized-sound generator; 16 ... digital code signal transmitter; ... speech-encoding element signals.

Claims (1)

[Claims]

1. A voice transmission/reception method characterized in that, on the transmitting side, input speech is sampled at a fixed period while the individual syllables are segmented and encoded into syllable-encoded signals; the strength, pitch, and timbre of the sound accompanying each syllable of the speech are detected and converted into syllable-accompanying encoded signals, which are attached to the syllable-encoded signal and held in memory in sequence; when the syllable-encoded signal or a syllable-accompanying encoded signal changes, the time that elapsed before the change is detected and encoded, and this time-information encoded signal is attached to the syllable-encoded signal and the syllable-accompanying encoded signals immediately preceding the change; the syllable-encoded signal, the syllable-accompanying encoded signals, and the time-information encoded signal are transmitted in sequence as one speech-encoding element signal each time such a change occurs; and, following this transmission order, the receiving side converts them into analog signals and restores the speech in sequence.

2. A voice transmission/reception method characterized in that, on the transmitting side, input speech is sampled at a fixed period while the individual syllables are segmented and encoded into syllable-encoded signals; the strength, pitch, and timbre of the sound accompanying each syllable of the speech are detected and converted into syllable-accompanying encoded signals, which are attached to the syllable-encoded signal and held in memory in sequence; when the syllable-encoded signal or a syllable-accompanying encoded signal changes, the time that elapsed before the change is detected and encoded, and this time-information encoded signal is attached to the syllable-encoded signal and the syllable-accompanying encoded signals immediately preceding the change; the syllable-encoded signal, the syllable-accompanying encoded signals, and the time-information encoded signal form one unit of speech-encoding element signal; an address signal corresponding to each of a plurality of speakers, and a sequence code signal indicating the order of the signal within the speech-encoding element signals of the speaker identified by that address signal, are further added to the speech-encoding element signal, which is transmitted in sequence together with them each time the syllable-encoded signal or a syllable-accompanying encoded signal changes; and the receiving side sorts the speech-encoding element signals according to the address signal and the sequence code signal, converts them into analog signals, and restores the speech of each of the plural speakers in sequence.
JP57145874A 1982-08-23 1982-08-23 System for transmitting and receiving conversation Granted JPS5936434A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP57145874A JPS5936434A (en) 1982-08-23 1982-08-23 System for transmitting and receiving conversation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP57145874A JPS5936434A (en) 1982-08-23 1982-08-23 System for transmitting and receiving conversation

Publications (2)

Publication Number Publication Date
JPS5936434A JPS5936434A (en) 1984-02-28
JPH0434339B2 true JPH0434339B2 (en) 1992-06-05

Family

ID=15395040

Family Applications (1)

Application Number Title Priority Date Filing Date
JP57145874A Granted JPS5936434A (en) 1982-08-23 1982-08-23 System for transmitting and receiving conversation

Country Status (1)

Country Link
JP (1) JPS5936434A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5320705A (en) * 1988-06-08 1994-06-14 Nippondenso Co., Ltd. Method of manufacturing a semiconductor pressure sensor
US5095349A (en) * 1988-06-08 1992-03-10 Nippondenso Co., Ltd. Semiconductor pressure sensor and method of manufacturing same

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS55103436A (en) * 1979-02-01 1980-08-07 Matsushima Kogyo Co Ltd Voice transmission system
JPS574640A (en) * 1980-06-11 1982-01-11 Hitachi Ltd Telephone communication system using encoded voice

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS55103436A (en) * 1979-02-01 1980-08-07 Matsushima Kogyo Co Ltd Voice transmission system
JPS574640A (en) * 1980-06-11 1982-01-11 Hitachi Ltd Telephone communication system using encoded voice

Also Published As

Publication number Publication date
JPS5936434A (en) 1984-02-28

Similar Documents

Publication Publication Date Title
US5533112A (en) Volume control in digital teleconferencing
US5317567A (en) Multi-speaker conferencing over narrowband channels
US5457685A (en) Multi-speaker conferencing over narrowband channels
JPH11503275A (en) Method and apparatus for detecting and avoiding tandem boding
US5272698A (en) Multi-speaker conferencing over narrowband channels
US7233893B2 (en) Method and apparatus for transmitting wideband speech signals
JPH0434339B2 (en)
EP1159738B1 (en) Speech synthesizer based on variable rate speech coding
US6498834B1 (en) Speech information communication system
JP2958726B2 (en) Apparatus for coding and decoding a sampled analog signal with repeatability
EP1298647B1 (en) A communication device and a method for transmitting and receiving of natural speech, comprising a speech recognition module coupled to an encoder
US6044147A (en) Telecommunications system
CN111199747A (en) Artificial intelligence communication system and communication method
JP3549611B2 (en) Multipoint communication device
JP3133353B2 (en) Audio coding device
JPH1188549A (en) Voice coding/decoding device
JPS6171730A (en) Voice data transfer system
JP2630307B2 (en) Channel test equipment
US7603270B2 (en) Method of prioritizing transmission of spectral components of audio signals
JPS6033751A (en) Coder and decoder
JPS5866440A (en) Waveform coding system
JPH03241399A (en) Voice transmitting/receiving equipment
KR960003626B1 (en) Decoding method of transformed coded audio signal for people hard of hearing
KR100233532B1 (en) Audio codec of voice communication system
JP3217237B2 (en) Loop type band division audio conference circuit