GB2257282A - Speech recognizer, processor and responder - Google Patents
- Publication number
- GB2257282A
- Application numbers GB9110335A, GB9110335D0
- Authority
- GB
- United Kingdom
- Prior art keywords
- machine
- natural language
- instructions
- languages
- recognize
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/285—Memory allocation or algorithm optimisation to reduce hardware requirements
Abstract
A speech recognizer, processor, and responder based on orthogonal logic, memory and filters. The orthogonality factor greatly simplifies the input data structure by making it vector data and correspondingly enables the invention of orthogonal gates, memories, shift registers, decoders and filters.
Description
SPEECH RECOGNIZER, PROCESSOR AND RESPONDER
This invention relates to a speech recognizer, processor and responder.
Speech recognizers, processors and responders are modern machines capable of responding to human speech in many languages of the world. They consist of elements such as decoders, registers, controllers, arithmetic units, logic units, clocks, transducers, memories, quantizers, adaptive filters, coders, channels, and input-output facilities.
The current machines on the market have limited facilities. They are highly expensive, remain mostly idle, do not respond to shades of meaning in speech, and therefore cannot be adapted to English, European, and other languages.
According to the present invention there is provided a chassis, 86 specially designed units, means for mounting the units on the chassis, the units connected with each other, and input-output facilities.
Referring to Figures 1 and 2, a speech recognizer 60 recognizes speech input 55 and 56, which is processed in the speech processor 63 with its data input 51 and 52, control output 49 and 50, memory 61, address outputs 53 and 54, and address decoder 59. A speech responder 62 has outputs 57 and 58. The speech processor consists of special accumulators 9, 10 and 11; special registers 12, 13 and 14; 27, 28 and 29; 30, 31 and 32; 33, 34 and 35; 36, 37 and 38; special arithmetic and logic units 15, 16 and 17; special controllers 18, 19 and 20; special buffers 40, 39 and 41; 43, 42 and 44 with external input-output 5, 6, 7 and 8; control output 3 and 4; and internal input-output 1, 2, 45, 46, 47 and 48.
Referring to Figure 3, a transducer 64 is used to convert human speech to electrical signals, a controller 65 controls the performance of the recognizer through special memories 66 and 67, another transducer 68, special quantizers 69 and 70, special adaptive filters 71 and 72, special coders 73, 74, 75 and 76, special decoders 77, 78, 79 and 80, inverse adaptive filters 81 and 82, input facility 83, output facility 84, clock 85, and channel 86.
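The quantize-code path just described can be illustrated in outline. The following Python sketch is not from the patent: a uniform scalar quantizer stands in for the special quantizers 69 and 70, and its inverse stands in for the decoder side (units 77 to 80); the function names, 8-bit depth and 8 kHz sample rate are assumptions of this sketch.

```python
import math

def quantize(samples, levels=256, full_scale=1.0):
    """Uniform scalar quantizer: map each sample in
    [-full_scale, +full_scale] to an integer code in [0, levels)."""
    step = 2.0 * full_scale / levels
    codes = []
    for s in samples:
        s = max(-full_scale, min(full_scale, s))  # clip to full scale
        codes.append(min(int((s + full_scale) / step), levels - 1))
    return codes

def dequantize(codes, levels=256, full_scale=1.0):
    """Inverse mapping (decoder side): reconstruct the midpoint
    of each quantization cell."""
    step = 2.0 * full_scale / levels
    return [(c + 0.5) * step - full_scale for c in codes]

# A 100 Hz test tone sampled at 8 kHz stands in for the transducer output.
signal = [math.sin(2 * math.pi * 100 * n / 8000) for n in range(80)]
codes = quantize(signal)
recovered = dequantize(codes)
# Midpoint reconstruction bounds the error by half a quantization step.
max_error = max(abs(a - b) for a, b in zip(signal, recovered))
```

With 256 levels the quantization step is 2/256 ≈ 0.0078, so `max_error` stays within half a step (≈ 0.0039); a real coder would follow this stage with the adaptive filtering and channel coding named in the description.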
In order to respond to speech, various levels of meaning of words must be identified. The speech recognizer, processor and responder according to the invention identifies the various meanings of words and responds to them accordingly. In order to recognize human speech in any language, the synthesizer according to the invention characterises human speech into various classes such as vowel, semi-vowel, diphthong, nasal, fricative, affricate, plosive, aspirate, etc. The machine also analyses the input speech as regards time, frequency, and other parameters.
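The two analyses named in this paragraph, sorting speech sounds into classes and frame-wise frequency analysis, can be sketched as follows. This Python fragment is purely illustrative: the phoneme symbols in the lookup table and the naive DFT-based frequency estimator are assumptions of this sketch, not details given in the patent.

```python
import cmath
import math

# Illustrative lookup from phoneme symbols to the classes named in the
# text; the symbol set here is an assumption, not the patent's.
PHONEME_CLASSES = {
    "a": "vowel", "i": "vowel", "w": "semi-vowel", "ai": "diphthong",
    "m": "nasal", "s": "fricative", "ch": "affricate", "p": "plosive",
    "h": "aspirate",
}

def classify(phoneme):
    return PHONEME_CLASSES.get(phoneme, "unknown")

def dominant_frequency(frame, sample_rate):
    """Naive DFT over one analysis frame: return the frequency of the
    strongest bin, i.e. a crude 'frequency parameter' for the frame."""
    n = len(frame)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):
        acc = sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n))
        if abs(acc) > best_mag:
            best_bin, best_mag = k, abs(acc)
    return best_bin * sample_rate / n

# A 20 ms frame (160 samples at 8 kHz) of a 440 Hz tone: with 50 Hz bin
# spacing, the estimator picks the nearest bin, 450 Hz.
frame = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(160)]
peak = dominant_frequency(frame, 8000)
```

A practical recognizer would use windowed FFTs and many more features per frame, but the shape of the computation, one spectral estimate per short time frame, is what the "time, frequency, and other parameters" analysis refers to.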
Claims (11)
1. The machine recognizes English, French, German, Spanish and
other European languages, as well as other languages of the
world.
2. The machine can compute by instruction, recognize and
understand with expert knowledge, and produce an output in the
aforesaid and other natural languages.
3. The machine will be able to recognize life preservation signals,
communicate them to the relevant life preservation authorities,
recognize their instructions and relay them in natural language
to the person whose life is in danger.
4. The machine will be able to take orders from its customers in
natural language for various consumer products, invoice their
accounts and update their statements of account in natural language.
5. The machine will be able to forecast in natural language the
activities of shipping, marketing and banking, weather details,
stock exchange prices, etc.
6. The machine will be able to recognize instructions of accountants,
carry out bookkeeping and various other activities, and report to
them in natural language on the financial activities of their
customers.
7. The machine will be able to carry out the reconstruction of visual
patterns stored in its memory by recognizing instructions given
in natural language.
8. The machine will be able to produce, in natural language, the
aids necessary to carry out various industrial processes.
9. The machine will advance high quality natural language
communication all over the world.
10. The machine will be indispensable in defence, where commanders
on the battlefield, in the air or in the navy issue
instructions to their troops with maximum efficiency and utmost
urgency in natural language.
11. The machine will raise the quality of office automation by
introducing natural language recognition through phonetic
typewriters and other devices.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9110335A GB2257282A (en) | 1991-05-13 | 1991-05-13 | Speech recognizer, processor and responder |
Publications (2)
Publication Number | Publication Date |
---|---|
GB9110335D0 GB9110335D0 (en) | 1991-07-03 |
GB2257282A true GB2257282A (en) | 1993-01-06 |
Family
ID=10694914
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB9110335A Withdrawn GB2257282A (en) | 1991-05-13 | 1991-05-13 | Speech recognizer, processor and responder |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2257282A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2126393A (en) * | 1982-08-20 | 1984-03-21 | Asulab Sa | Speech-controlled apparatus |
GB2145551A (en) * | 1983-08-23 | 1985-03-27 | David Thurston Griggs | Speech-controlled phonetic typewriter or display device |
EP0184032A1 (en) * | 1984-11-30 | 1986-06-11 | International Business Machines Corporation | Speech recognition system |
1991
- 1991-05-13 GB GB9110335A patent/GB2257282A/en not_active Withdrawn
Also Published As
Publication number | Publication date |
---|---|
GB9110335D0 (en) | 1991-07-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
ATE203119T1 (en) | LANGUAGE RECOGNITION SYSTEM FOR COMPOUND WORD LANGUAGES | |
DE3275779D1 (en) | Recognition of speech or speech-like sounds | |
Howson | Upper Sorbian | |
Ghannay et al. | Where are we in semantic concept extraction for Spoken Language Understanding? | |
GB2257282A (en) | Speech recognizer, processor and responder | |
Morgan et al. | Stochastic perceptual auditory-event-based models for speech recognition. | |
JPS57111658A (en) | Schedule controller | |
Wang et al. | An experimental analysis on integrating multi-stream spectro-temporal, cepstral and pitch information for mandarin speech recognition | |
Lyberg | Some consequences of a model for segment duration based on F0-dependence | |
O'Shaughnessy | Design of a real-time French text-to-speech system | |
DE3279549D1 (en) | Apparatus and method for articulatory speech recognition | |
Olabe et al. | Real time text-to-speech conversion system for spanish | |
Chung et al. | The interaction of polar question and declarative intonation with lexical tone in Moro | |
Fushikida et al. | A text to speech synthesizer for the personal computer | |
Fujimura | Comment: Beyond the segment | |
Zdaranok | Linguistic and Acoustic Resources of the Computer-Based System for Analysis and Interpretation of Speech Intonation | |
Dixon et al. | Strategic compromise and modeling in automatic recognition of continuous speech: A hierarchical approach | |
Koushtuev et al. | Voice announcement for vehicle operators | |
Fox | The IPA alphabet: remarks on some proposals for reform | |
KR20000051760A (en) | method for selecting unit of similar morpheme in continue voice cognition system | |
Martin | Study of Sounds: Articles Contributed in Commemoration of the 30th Anniversary of the Founding of the Phonetic Society of Japan | |
Harris et al. | Effect of Learning on Perception: Discrimination of Relative Onset Time of the Formants in Certain Speech and Non‐Speech Patterns | |
Ditang | Parallel neural networks for speaker-independent all-Chinese-syllable speech recognition | |
Ifukube et al. | Speech Recognition Systems for the Hearing Impaired and the Elderly | |
Carterette et al. | Repetition and Confirmation of Messages Received by Ear and by Eye |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |