WO2003085640A1 - Dispositif, systeme, procede et programme de selection de conversation a reconnaissance vocale - Google Patents
Speech recognition dialogue selection device, speech recognition dialogue system, speech recognition dialogue selection method, and program
- Publication number
- WO2003085640A1 (PCT/JP2003/002952)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- dialogue
- data
- voice
- transmission means
- capability
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/226—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
- G10L2015/228—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of application context
Definitions
- Speech recognition dialogue selection device, speech recognition dialogue system, speech recognition dialogue selection method, and program. Technical field
- The present invention relates to a system in which voice data input to a terminal (client terminal) such as a mobile phone or an in-vehicle terminal is transmitted to a recognition dialogue server through a network, and a voice dialogue is carried out by speech recognition and response on the recognition dialogue server side.
- More specifically, the present invention relates to a speech recognition dialogue apparatus, a speech recognition dialogue selection method and apparatus, and a recording medium storing a speech recognition dialogue selection program.
- Conventionally, the voice data output from the client terminal side is transmitted to the recognition dialogue server side via a packet network, and the speech recognition dialogue processing is performed on the recognition dialogue server side.
- Such a speech recognition dialogue system using VoIP (Voice over Internet Protocol) is known. This speech recognition dialogue system is described in detail, for example, in Nikkei Internet Technology, pp. 130-137, March 1998.
- In such a system, voice recognition, or voice recognition and response (speech synthesis, recorded voice, etc.), is performed within a framework in which the IP addresses of the client terminal side and the recognition dialogue server side are known in advance. That is, the client terminal and the recognition dialogue server are connected so that packet communication is possible using each other's IP address, and in this state voice data packets are transmitted from the client terminal side to the recognition dialogue server side.
- This constitutes a framework for conducting the speech recognition dialogue: voice data is transmitted from the client terminal to the voice recognition server via the packet network, and the system is built so that the server recognizes that voice data.
- However, the above-described system using conventional VoIP performs voice recognition and voice dialogue only within a framework in which the IP addresses of the client terminal side and the recognition dialogue server side are known. When a plurality of recognition dialogue servers exist, it is therefore necessary to select the recognition dialogue server that is most suitable for the client terminal and to provide a new mechanism that links that recognition dialogue server to the client terminal.
- The object of the present invention is to provide a speech recognition dialogue apparatus, a speech recognition dialogue selection method and apparatus, and a recording medium storing a speech recognition dialogue selection program that, when there are a plurality of recognition dialogue servers, can select the optimum recognition dialogue server by referring to the capability of the client terminal side and the capabilities of the recognition dialogue server side, and carry out a speech recognition dialogue between the selected recognition dialogue server and the client terminal. Disclosure of the invention
- A speech recognition dialogue apparatus according to the present invention includes a plurality of dialogue means for performing speech recognition dialogue, a transmission means for transmitting voice information to the dialogue means, a network linking the transmission means and the dialogue means, and a sorting means for selecting one dialogue means from the plurality of dialogue means according to the capability of the transmission means and the capabilities of the dialogue means.
- The speech recognition dialogue apparatus may also be configured to include a plurality of dialogue means for performing speech recognition dialogue, a request means for requesting a service from the dialogue means, a transmission means for transmitting voice information to the dialogue means,
- a network linking the transmission means, the request means, and the dialogue means, and a sorting means for selecting one dialogue means from the plurality of dialogue means according to the capabilities of the request means, the transmission means, and the dialogue means.
- Alternatively, the speech recognition dialogue apparatus may be configured to include a plurality of dialogue means for performing speech recognition dialogue, a service holding means for holding the contents of the service requested of the dialogue means, a transmission means for transmitting voice information to the dialogue means,
- a network linking the service holding means, the transmission means, and the dialogue means, and a sorting means for selecting one dialogue means from the plurality of dialogue means according to the capabilities of the service holding means, the transmission means, and the dialogue means.
- The sorting means used in the speech recognition dialogue apparatus described above desirably has a function of sending information specifying the selected dialogue means to the transmission means and of exchanging the voice information necessary for the speech recognition dialogue between the dialogue means and the transmission means. Instead, a sorting means may be used that sends information specifying the selected dialogue means to the request means and the transmission means and exchanges the voice information and the service contents among the dialogue means, the request means, and the transmission means. Further, a sorting means having a function of changing the one selected dialogue means to another dialogue means may be used.
- The sorting means may also be one that compares the capability of the transmission means with the capabilities of the plurality of dialogue means and, based on the comparison result, determines a dialogue means having the desired capability, such that the voice information input format of the dialogue means matches the voice information output format of the transmission means.
- Likewise, it may be one that compares the capabilities of the request means and the transmission means with the capabilities of the plurality of dialogue means and, based on the comparison result, determines a dialogue means having the desired capability whose voice information input format matches the output formats of the request means and the transmission means.
- The voice information output from the transmission means may be formed from digitized voice data, compressed voice data, or feature vector data.
- The data for judging the capability of the transmission means desirably include CODEC capability, voice data format, and recorded/synthesized voice input/output function data.
- The data for judging the capability of the dialogue means desirably include CODEC capability, voice data format, recorded/synthesized voice output function, service contents, recognition capability, and operation information data.
- The speech recognition dialogue apparatus may also be configured as a system including a plurality of speech recognition dialogue servers that perform speech recognition dialogue, a client terminal that transmits the voice information and the contents of the service requested of the speech recognition dialogue servers,
- a speech recognition dialogue selection server that selects one speech recognition dialogue server from the plurality of speech recognition dialogue servers for the client terminal, and a network linking the client terminal, the speech recognition dialogue servers, and the speech recognition dialogue selection server.
- In this case, the client terminal includes a data input unit for inputting voice information and service content data, a terminal information storage unit for storing data on the capability of the client terminal,
- a data communication unit that communicates with the speech recognition dialogue servers and the speech recognition dialogue selection server via the network and transmits the voice information to the selected speech recognition dialogue server,
- and a control unit that controls the operation of the client terminal.
- The speech recognition dialogue selection server has a data communication unit that communicates with the client terminal and the speech recognition dialogue servers via the network, a recognition dialogue server information storage unit that stores the capability of each speech recognition dialogue server, and a recognition dialogue server determination unit that reads out the capability data of the client terminal stored in the terminal information storage unit, compares it with the capability data of the speech recognition dialogue servers in the recognition dialogue server information storage unit, determines at least one speech recognition dialogue server from the plurality of speech recognition dialogue servers, and sends the information necessary for specifying the determined speech recognition dialogue server to the client terminal.
- Each speech recognition dialogue server may be constructed to have a speech recognition dialogue execution unit that executes the speech recognition dialogue based on the voice information input from the client terminal, a data communication unit that communicates with the client terminal and the speech recognition dialogue selection server via the network, and a control unit that controls the operation of the speech recognition dialogue server.
- A service content holding server that is linked to the network and holds the contents of the service requested from the client terminal may further be provided, and a reading unit for reading the service contents held on the service content holding server may be added to the speech recognition dialogue server.
- A process transfer unit may also be provided in the speech recognition dialogue server that outputs to the speech recognition dialogue selection server a request to transfer the speech recognition dialogue processing to a speech recognition dialogue server different from itself.
- the audio information output from the client terminal is formed from digitized audio data, compressed audio data, or feature vector data.
- The data for judging the capabilities of the client terminal desirably include CODEC capability, voice data format, and recorded/synthesized voice input/output function data.
- The data for judging the capability of the speech recognition dialogue server desirably include CODEC capability, voice data format, recorded/synthesized voice output function, service contents, recognition capability, and operation information data.
- The speech recognition dialogue selection method according to the present invention is a process that performs data communication through a network between a transmission means and a plurality of dialogue means and sorts the voice information data output from the transmission means to a specific dialogue means, and includes:
- a first step of receiving voice information data from the transmission means; a second step of requesting the transmission means for capability data of the transmission means;
- and a step in which the capability data from the transmission means is compared with the capability data of the plurality of dialogue means and the specific dialogue means is uniquely determined based on the comparison result.
- A seventh step may further be added in which the dialogue means sends a request to transfer the destination of the transmission means to another dialogue means,
- together with an eighth step of requesting the transmission means for capability data of the transmission means.
- The speech recognition dialogue selection method may also be a process that performs data communication through a network among the transmission means, the plurality of dialogue means, and the service holding means, and that sorts the voice information data output from the transmission means to a specific dialogue means. It includes:
- Service contents including voice recognition dialogue processing output from the transmission means
- a second step of requesting the transmission means for capability data of the transmission means
- A step may be added in which the dialogue means requests that the destination of the transmission means be transferred to another dialogue means.
- a first step of requesting the transmission means for capability data of the transmission means
- a step of transmitting the capability data of the transmission means from the transmission means;
- a 16th step for performing a speech recognition dialogue process may be added between the dialogue means determined in the 14th step and the transmission means.
- audio information including digitized audio data, compressed audio data, or feature vector data
- The data for judging the capability of the transmission means include CODEC capability, voice data format, recorded/synthesized voice input/output function, and service content data.
- The data for judging the capability of the dialogue means include CODEC capability, voice data format, recorded/synthesized voice output function, service contents, recognition capability, and operation information data.
- The speech recognition dialogue selection apparatus according to the present invention performs data communication through a network between a transmission means and a plurality of dialogue means, and includes a sorting means that sorts the voice information data output from the transmission means to a specific dialogue means. When performing the sorting, the sorting means may specify the dialogue means according to the capability of the transmission means and the capabilities of the dialogue means.
- The speech recognition dialogue selection apparatus may also be constructed as one that performs data communication through a network between a transmission means and a plurality of dialogue means and sorts the voice information data output from the transmission means to a specific dialogue means, and that includes:
- a second means for requesting the transmission means for capability data of the transmission means a third means for transmitting the capability data from the transmission means in response to a request from the second means;
- It may be constructed as a configuration having a fifth means for notifying the transmitting means of information for specifying the dialogue means determined by the fourth means.
- the audio information includes digitized audio data, compressed audio data, or feature vector data.
- the data for judging the capability of the transmission means include the CODEC capability, voice data format, recording / synthetic voice input / output function, and service content data.
- The data for judging the capability of the dialogue means desirably include CODEC capability, voice data format, recorded/synthesized voice output function, service contents, recognition capability, and operation information data.
- The present invention may also be configured to store a speech recognition dialogue selection program on a recording medium. That is, the recording medium for a speech recognition dialogue selection program according to the present invention records a program for a process that performs data communication through a network between a transmission means and a plurality of dialogue means and sorts the voice information data output from the transmission means to a specific dialogue means, the program including a first step of receiving voice information data from the transmission means, a second step of requesting the transmission means for capability data of the transmission means, and a third step of transmitting the capability data of the transmission means from the transmission means.
- a voice recognition dialog selection program having a sixth step for performing voice recognition dialog processing between the transmission means and the uniquely determined dialog means may be recorded.
- the dialogue unit sends a request to transfer the destination of the transmission unit to another dialogue unit.
- An eighth step of requesting the transmission means for capability data of the transmission means
- A speech recognition dialogue selection program may also be recorded that adds a further step for performing speech recognition dialogue processing between the dialogue means determined in the tenth step and the transmission means.
- The speech recognition dialogue selection program recorded on the recording medium may also be one for a process in which data communication is performed through the network among the transmission means, the plurality of dialogue means, and the service holding means, and the voice information data output from the transmission means is sorted to a specific dialogue means.
- a second step of requesting the transmission means for capability data of the transmission means
- It is desirable to use a speech recognition dialogue selection program having a tenth step for performing speech recognition dialogue processing, based on the read service contents, between the transmission means and the dialogue means determined in the fourth step. In this case, an eleventh step may be added in which, while the speech recognition dialogue processing is being performed between the transmission means and the dialogue means, a request to transfer the destination of the transmission means from the dialogue means to another dialogue means is transmitted,
- together with a further step of requesting the transmission means for capability data of the transmission means.
- the speech recognition dialogue system is a system in which a client terminal and a plurality of recognition dialogue servers are connected to each other through a network. It is possible to select and determine the appropriate recognition dialogue server and execute the voice recognition dialogue on the optimum recognition dialogue server.
- The data used to determine the capabilities of the client terminal include CODEC capability (CODEC type, CODEC compression mode, etc.), voice data format (compressed voice data, feature vectors, etc.), recorded voice input/output function, synthesized voice input/output function (no synthesis engine, intermediate-representation input engine, character-string input engine, etc.), service contents, and the like.
- The data used to determine the capabilities of the recognition dialogue server include CODEC capability (CODEC type, CODEC decompression mode, etc.), recorded voice output function, synthesized voice output function (no synthesis engine, intermediate-representation output engine, waveform output engine, etc.), service contents, recognition engine capability (task-specific engine, dictation engine, command recognition engine, etc.), and operation information.
- Examples of CODECs include AMR-NB and AMR-WB.
- An example of the intermediate representation of synthesized speech is the representation after converting a character string to a phonetic symbol string.
- Service contents include services such as address recognition, name recognition, song name recognition for incoming ringtones, phone number recognition, and credit number recognition. An illustrative way of representing such capability data is sketched below.
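- The following is a minimal sketch in Python of how the terminal-side and server-side capability records described above might be represented and matched; the field names, value strings, and the matching rule are illustrative assumptions, not a format prescribed by the patent.

```python
from dataclasses import dataclass

@dataclass
class TerminalCapability:
    """Capability data reported by a client terminal (illustrative fields)."""
    codec: str              # e.g. "AMR-NB" or "AMR-WB"
    audio_format: str       # "compressed", "pcm", or "feature_vector"
    synthesis_input: str    # "none", "intermediate", or "string"
    service: str            # requested service, e.g. "address_recognition"

@dataclass
class ServerCapability:
    """Capability data registered for a recognition dialogue server (illustrative fields)."""
    name: str
    codecs: list            # CODECs the server can decompress
    audio_formats: list     # input formats the server accepts
    synthesis_output: str   # "none", "intermediate", or "waveform"
    services: list          # services the server can execute
    engines: list           # e.g. ["task", "dictation", "command"]
    in_operation: bool = True   # operation information

def is_compatible(term: TerminalCapability, srv: ServerCapability) -> bool:
    """A server is a candidate when it is running, understands the terminal's
    CODEC and audio output format, and offers the requested service."""
    return (srv.in_operation
            and term.codec in srv.codecs
            and term.audio_format in srv.audio_formats
            and term.service in srv.services)
```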
- The processing unit that determines the recognition dialogue server may be included in a web server or the recognition dialogue selection server, in the recognition dialogue server, or in both the recognition dialogue selection server and the recognition dialogue server.
- the terminal can automatically access another appropriate recognition server even during the dialogue.
- FIG. 1 is a diagram showing a configuration of a speech recognition dialogue system according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing the configuration of the client terminal 10 of the present invention.
- FIG. 3 is a block diagram showing the configuration of the recognition dialogue server 30 according to the embodiment of the present invention.
- FIG. 4 is a block diagram showing the configuration of the recognition dialogue selection server 20 according to the present invention.
- FIG. 5 is a flowchart showing the processing when the recognition dialogue selection server 20 determines the recognition dialogue server in the speech recognition dialogue system according to the embodiment of the present invention.
- FIG. 6 is a flowchart showing a speech recognition dialogue process in the speech recognition dialogue method according to the embodiment of the present invention.
- FIG. 7 is a flowchart showing the processing when the recognition dialogue selection server 20 determines the new recognition dialogue server 80 during the recognition dialogue processing in the recognition dialogue server 30 in the speech recognition dialogue system according to the embodiment of the present invention.
- FIG. 8 is a block diagram showing the configuration of the recognition dialogue representative server 40 according to the embodiment of the present invention.
- FIG. 9 is a flowchart showing processing when the recognition dialogue representative server 40 determines a new recognition dialogue server 80 during the recognition dialogue processing in the speech recognition dialogue method according to the embodiment of the present invention.
- FIG. 10 is a diagram showing the recognition dialogue server C 50 according to the embodiment of the present invention, in which a speech recognition dialogue activation unit and a service content reading unit are added to the recognition dialogue representative server 40 of FIG. 8.
- FIG. 11 is a flowchart showing processing when the recognition dialogue server C 50 reads service contents from the service content holding server 60 in the speech recognition dialogue method according to the embodiment of the present invention.
- FIG. 12 is a diagram showing a program for executing the speech recognition dialogue method according to the embodiment of the present invention on the server computer 91 and a recording medium 900 on which the program is recorded.
- The present invention provides, in a speech recognition dialogue system that offers a speech recognition dialogue service over a network, a mechanism for uniquely selecting and determining the optimum recognition dialogue server when there are a plurality of recognition dialogue servers.
- FIG. 1 is a diagram showing the configuration of a speech recognition dialogue system according to an embodiment of the present invention.
- The client terminal 10 is connected via the network 1 to the recognition dialogue selection server 20, the recognition dialogue server 30, the recognition dialogue representative server 40, the recognition dialogue server C 50, the new recognition dialogue server 80, and the service content holding server 60.
- the client terminal 10 functions as a transmission means for transmitting voice information and a request means for requesting service contents.
- Types of the network 1 include the Internet and other wired or wireless networks.
- FIG. 2 is a block diagram showing the configuration of the client terminal 10 of the present invention.
- the client terminal 10 is a mobile terminal, PDA, in-vehicle terminal, personal computer, or home terminal.
- The client terminal 10 is composed of a data input unit 110 for inputting voice and other data, a control unit 120 that controls the client terminal 10, a terminal information storage unit 140 that holds the capabilities of the client terminal 10, and a data communication unit 130 that communicates via the network 1.
- The data used to determine the capabilities of the client terminal 10 include CODEC capability (CODEC type, CODEC compression mode, etc.), voice data format (compressed voice data, feature vectors, etc.), recorded voice input/output function, synthesized voice input/output function (no synthesis engine, intermediate-representation input engine, character-string input engine, etc.), and service contents.
- service contents include service data such as address recognition, name recognition, song name recognition of incoming melody, phone number recognition, credit number recognition and so on.
- FIG. 3 is a block diagram showing the configuration of the recognition dialogue server 30 according to the embodiment of the present invention.
- The recognition dialogue server 30 includes a control unit 320 that controls the recognition dialogue server 30, a speech recognition dialogue execution unit 330 that performs speech recognition and dialogue, and a data communication unit 310 that communicates via the network 1.
- FIG. 4 is a block diagram showing the configuration of the recognition dialogue selection server 20 according to the present invention.
- The recognition dialogue selection server 20 comprises a data communication unit 210 that communicates via the network 1, a recognition dialogue server determination unit 220 that uniquely selects and determines the optimum recognition dialogue server when there are multiple recognition dialogue servers, and a recognition dialogue server information storage unit 230 that stores the capability information of the recognition dialogue servers to be selected from.
- The recognition dialogue selection server 20 thus constitutes the sorting means that selects a specific dialogue means from the plurality of dialogue means according to the capability of the client terminal 10, which serves as the transmission means and request means, and the capabilities of the recognition dialogue servers.
- The data used to determine the capabilities of the recognition dialogue server include CODEC capability (CODEC type, CODEC decompression mode, etc.), audio data format (compressed audio data, feature vectors, etc.), recorded audio output function, synthesized voice output function (no synthesis engine, intermediate-representation output engine, waveform output engine, etc.), service contents, recognition engine capability (task-specific engine, dictation engine, command recognition engine, etc.), and operation information.
- the new recognition dialogue server 80 is the same as any one of the recognition dialogue server 30, the recognition dialogue representative server 40, and the recognition dialogue server C 50.
- The recognition dialogue selection server 20, the recognition dialogue server 30, the recognition dialogue representative server 40, the recognition dialogue server C 50, and the new recognition dialogue server 80 are servers such as computers equipped with Windows (registered trademark) NT, Windows (registered trademark) 2000, or Solaris (registered trademark).
- the configuration of the recognition dialogue representative server 40 and the recognition dialogue server C 50 will be described later.
- the recognition dialogue selection server 20, the recognition dialogue server 30, the recognition dialogue representative server 40, the recognition dialogue server C 50, the new recognition dialogue server 80, and the like function as the above-described dialogue means.
- FIG. 5 is a flowchart showing processing when the recognition dialogue selection server 20 determines the recognition dialogue server 30 in the speech recognition dialogue system according to the embodiment of the present invention.
- A request for a service including speech recognition dialogue processing is made from the client terminal 10 to the recognition dialogue selection server 20 (step 501). Specifically, the URL of the CGI program that executes the service and the arguments required for the processing are transmitted from the data communication unit 130 on the client terminal 10 side to the recognition dialogue selection server 20 side using an HTTP command or the like.
- The recognition dialogue selection server 20 side receives the service request from the client terminal 10 side, and requests the capability information of the client terminal 10 (step 502).
- The client terminal 10 receives the capability information request from the recognition dialogue selection server 20, and the capability information of the client terminal 10 stored in the terminal information storage unit 140 is transmitted, through the control unit 120, from the data communication unit 130 to the recognition dialogue selection server 20 (step 503).
- This capability information includes CODEC capability (CODEC type, CODEC compression mode, etc.), audio data format (compressed audio data, feature vectors, etc.), recorded audio input/output function, synthesized speech input/output function (no synthesis engine, intermediate-representation input engine, character-string input engine, etc.), service contents, and the like.
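- One way the terminal might serialize this capability information for step 503 is sketched below; the JSON field names mirror the illustrative record shown earlier and are assumptions rather than a format defined by the patent.

```python
import json

def capability_message(codec, audio_format, synthesis_input, service):
    """Build the capability-information payload that the client terminal returns to
    the recognition dialogue selection server (step 503). All field names are illustrative."""
    return json.dumps({
        "codec": codec,                      # e.g. "AMR-NB"
        "audio_format": audio_format,        # "compressed", "pcm", or "feature_vector"
        "synthesis_input": synthesis_input,  # "none", "intermediate", or "string"
        "service": service,                  # requested service contents
    })

# Example:
# capability_message("AMR-NB", "feature_vector", "intermediate", "address_recognition")
```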
- The recognition dialogue selection server 20 receives the capability information transmitted from the client terminal 10, reads out the capability information of the plurality of recognition dialogue servers stored in advance in the recognition dialogue server information storage unit 230, and the recognition dialogue server determination unit 220 compares the capability information of the client terminal 10 side with the capabilities of the multiple recognition dialogue servers (step 504). Considering also the service content information requested by the client terminal 10 side, the optimum recognition dialogue server is uniquely determined (step 505).
- The capability information of the recognition dialogue servers includes CODEC capability (CODEC type, CODEC decompression mode, etc.), audio data format (compressed audio data, feature vectors, etc.), recorded audio output function, synthesized audio output function (no synthesis engine, intermediate-representation output engine, waveform output engine, etc.), service contents, recognition engine capability (task-specific engine, dictation engine, command recognition engine, etc.), and operation information.
- As an example of the determination method, when dedicated recognition dialogue servers 30 exist, such as an address task server, a name task server, a phone number task server, and a card number task server, a recognition dialogue server that can execute the service content requested from the client terminal 10 is selected, as sketched below.
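- As one way to read steps 504 and 505, the determination unit could first filter the registered servers for compatibility with the terminal and then prefer a task-specific server for the requested service; the dictionary keys and the ranking rule below are assumptions for illustration only.

```python
def choose_server(terminal, servers):
    """Pick one recognition dialogue server for a terminal (steps 504-505).

    `terminal` and each entry of `servers` are plain dicts with illustrative keys;
    preferring a task-specific engine over a dictation engine is an assumption."""
    def compatible(srv):
        return (srv.get("in_operation", True)
                and terminal["codec"] in srv["codecs"]
                and terminal["audio_format"] in srv["audio_formats"]
                and terminal["service"] in srv["services"])

    candidates = [s for s in servers if compatible(s)]
    if not candidates:
        return None  # no suitable server; the service request would be rejected
    # Put servers with an engine dedicated to the requested task first.
    candidates.sort(key=lambda s: 0 if "task" in s.get("engines", []) else 1)
    return candidates[0]  # e.g. an address task server for "address_recognition"
```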
- The recognition dialogue selection server 20 notifies the client terminal 10 side of the information of the recognition dialogue server determined by the recognition dialogue server determination unit 220 (step 506).
- As an example of the notification method, the address of the recognition dialogue server 30, or the address of the execution program that executes the recognition dialogue on the recognition dialogue server 30, is embedded in a screen such as an HTML page and sent to the client terminal.
- The client terminal 10 receives the notification of the information of the recognition dialogue server 30 from the recognition dialogue selection server 20 and requests the notified recognition dialogue server 30 to start the speech recognition dialogue (step 507).
- As a method of requesting the start of the speech recognition dialogue, there is a method of sending the URL of the address of the execution program that executes the recognition dialogue and the arguments necessary for executing the speech recognition dialogue using the HTTP POST command.
- the arguments mentioned above include documents describing service contents (VoiceXML, etc.), service names, voice recognition dialogue execution commands, and so on.
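- A minimal sketch of such a start request using Python's standard library is shown below; the host name, program path, and parameter names are hypothetical, and a real terminal would use whatever address the selection server embedded in its notification.

```python
import urllib.parse
import urllib.request

def request_dialogue_start(program_url, service_name, service_document_url):
    """Send an HTTP POST asking the notified recognition dialogue server to start a
    speech recognition dialogue (step 507). All names below are illustrative."""
    params = urllib.parse.urlencode({
        "command": "start_recognition_dialogue",
        "service": service_name,                   # e.g. "address_recognition"
        "service_document": service_document_url,  # e.g. a VoiceXML document describing the service
    }).encode("ascii")
    with urllib.request.urlopen(program_url, data=params) as resp:  # data= makes this a POST
        return resp.read()  # the server's acknowledgement or first prompt

# Hypothetical usage, with the address taken from the selection server's notification:
# request_dialogue_start("http://recognition-server.example/cgi-bin/dialogue",
#                        "address_recognition",
#                        "http://content.example/address.vxml")
```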
- The recognition dialogue server 30 receives the request for starting the speech recognition dialogue from the client terminal 10 and executes the speech recognition dialogue (step 508).
- The dotted line connecting step 508 and step 509 indicates that data is exchanged several times between the terminal and the recognition dialogue server.
- the speech recognition dialogue process will be described in detail later using FIG.
- A recognition dialogue termination request is then made from the client terminal 10 side (step 509).
- As examples of the termination method, the address of the execution program that terminates the recognition dialogue is sent using the HTTP POST command, or the address of the execution program that executes the recognition dialogue and a command that terminates the recognition dialogue are sent with the HTTP POST command.
- The recognition dialogue server receives the speech recognition dialogue termination request from the client terminal 10 side, and terminates the speech recognition dialogue (step 510).
- FIG. 6 is a flowchart showing speech recognition dialogue processing in the speech recognition dialogue method according to the embodiment of the present invention.
- the voice input to the data input unit 110 of the client terminal 10 is transmitted to the control unit 120, and the control unit 120 performs data processing.
- Examples of the data processing include digitization, voice detection, and voice analysis.
- The processed voice data is transmitted from the data communication unit of the client terminal 10 to the recognition dialogue server (step 601).
- audio data include digitized audio data, compressed audio data, and feature vectors.
- The speech recognition dialogue execution unit 330 has the recognition engine, recognition dictionary, synthesis engine, synthesis dictionary, and other resources required for the speech recognition dialogue, and performs the speech recognition dialogue processing step by step (step 603).
- The processing contents vary depending on the type of voice data transmitted from the client terminal 10. For example, if the transmitted audio data is compressed audio data, decompression of the compressed data, audio analysis, and recognition processing are performed, and if a feature vector is transmitted, only the recognition processing is performed. After the recognition process is completed, the output recognition result is transmitted to the client terminal 10 (step 604).
- Examples of the recognition result format include text, synthesized speech / recorded speech that matches the content of the text, and the URL of the screen reflecting the recognition content.
- The client terminal 10 processes the recognition result received from the recognition dialogue server 30 according to the recognition result format (step 605). For example, if the recognition result format is synthesized speech or recorded speech, a voice is output, and if the recognition result format is a screen URL, the screen is displayed.
- The process from step 601 to step 605 is repeated several times, and the voice dialogue proceeds.
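- The branch described for step 603 can be pictured as in the sketch below; the engine callables stand in for the server's CODEC, acoustic analysis, and recognition engines and are placeholders, not actual engine APIs.

```python
def process_utterance(audio, audio_format, decompress, analyze, recognize):
    """One pass of the recognition step (step 603), dispatching on the kind of voice
    data the client terminal sent. `decompress`, `analyze`, and `recognize` are
    placeholder callables for the server's engines."""
    if audio_format == "compressed":
        pcm = decompress(audio)      # CODEC decompression of compressed audio data
        features = analyze(pcm)      # acoustic analysis into feature vectors
    elif audio_format == "pcm":
        features = analyze(audio)    # digitized audio: analysis only
    elif audio_format == "feature_vector":
        features = audio             # feature vectors: recognition only
    else:
        raise ValueError("unsupported audio format: " + audio_format)
    return recognize(features)       # result sent back to the terminal at step 604
```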
- The recognition dialogue server 30 that is performing the speech recognition dialogue processing can also hand the processing over so that the speech recognition dialogue processing is continued with another, new recognition dialogue server 80.
- FIG. 7 is a flowchart showing the processing when the recognition dialogue selection server 20 determines a new recognition dialogue server 80 during the recognition dialogue processing in the recognition dialogue server 30 in the speech recognition dialogue system according to the embodiment of the present invention.
- When processing in a new recognition dialogue server 80 becomes necessary after multiple exchanges between the client terminal 10 and the recognition dialogue server 30, the recognition dialogue server 30 requests the recognition dialogue selection server 20 to transfer the processing to the new recognition dialogue server 80 (step 703).
- the dotted line connecting Step 702 and Step 703 indicates that data is exchanged several times between the terminal and the recognition dialogue server.
- Examples of triggers for the server migration request include a change of the service content during the dialogue, a mismatch between the service content and the server capability, or a fault in the recognition dialogue server; such a transfer request might look like the sketch below.
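- The transfer request of step 703 might look like the following sketch, in which the running recognition dialogue server asks the selection server to re-run the selection for the terminal; the endpoint and field names are hypothetical.

```python
import json
import urllib.request

def request_process_transfer(selection_server_url, terminal_id, reason):
    """Ask the recognition dialogue selection server to move an ongoing dialogue to a
    different recognition dialogue server (step 703). Endpoint and fields are illustrative."""
    body = json.dumps({
        "terminal": terminal_id,  # which client terminal's dialogue should move
        "reason": reason,         # e.g. "service_changed", "capability_mismatch", "server_fault"
    }).encode("utf-8")
    req = urllib.request.Request(
        selection_server_url,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # the selection server then re-queries the terminal's capability (step 704)
```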
- a capability information request of the client terminal 10 is made from the recognition dialogue selection server 20 to the client terminal 10 (step 704).
- The client terminal 10 receives the capability information request from the recognition dialogue selection server 20, and the capability information of the client terminal 10 stored in the terminal information storage unit 140 is transmitted from the data communication unit 130, through the control unit 120, to the recognition dialogue selection server 20 (step 705).
- The recognition dialogue selection server 20 receives the capability information transmitted from the client terminal 10, reads out the capability information of the plurality of recognition dialogue servers stored in advance in the recognition dialogue server information storage unit 230, and the recognition dialogue server determination unit 220 compares the capability information of the client terminal 10 side with the capabilities of the plurality of recognition dialogue servers (step 706). Taking into account also the information on the service content that triggered the transfer request, the optimum recognition dialogue server is uniquely determined (step 707).
- the capability information of the client terminal 10, the capability information of the recognition dialogue server, and the method for determining the recognition dialogue server are the same as described above.
- The recognition dialogue selection server 20 notifies the client terminal 10 side of the information of the new recognition dialogue server 80 determined by the recognition dialogue server determination unit 220 (step 708).
- As an example of the notification method, the address of the new recognition dialogue server 80, or the address of the execution program that executes the recognition dialogue on the new recognition dialogue server 80, is embedded in a screen such as an HTML page.
- The client terminal 10 side receives the address notification of the new recognition dialogue server 80, and requests the notified new recognition dialogue server 80 to start the speech recognition dialogue (step 709).
- An example of a method for requesting the start of the speech recognition dialogue is to send the URL of the address of the execution program that executes the recognition dialogue and the arguments required to execute the speech recognition dialogue using the HTTP POST command.
- The recognition dialogue representative server 40 is obtained by mounting the above-described recognition dialogue selection server 20 and recognition dialogue server 30 on the same server, so that it can both perform the speech recognition dialogue and select an appropriate speech recognition dialogue server.
- FIG. 8 is a block diagram showing the configuration of the recognition dialogue representative server 40 according to the embodiment of the present invention.
- The recognition dialogue representative server 40 has a recognition dialogue server determination unit 440 and a recognition dialogue server information storage unit 450 added to the recognition dialogue server 30 shown in FIG. 3.
- The other components, namely the data communication unit 410, the control unit 420, and the speech recognition dialogue execution unit 430, are the same as the corresponding components in FIG. 3.
- That is, the control unit 420, the speech recognition dialogue execution unit 430 for executing speech recognition and dialogue, and the data communication unit 410 for communicating via the network 1 are the same as the control unit 320, the speech recognition dialogue execution unit 330 for executing the dialogue, and the data communication unit 310 for communicating via the network 1, respectively.
- The recognition dialogue server determination unit 440 uniquely selects and determines the optimum recognition dialogue server when there are a plurality of recognition dialogue servers.
- The recognition dialogue server information storage unit 450 stores the capability information of the recognition dialogue servers to be selected and determined.
- This capability information includes CODEC capability (CODEC type, CODEC decompression mode, etc.), audio data format (compressed audio data, feature vectors, etc.), recorded voice output function, synthesized voice output function (no synthesis engine, intermediate-representation output engine, waveform output engine, etc.), service contents, recognition engine capability (task-specific engine, dictation engine, command recognition engine, etc.), operation information, and the like.
- FIG. 9 is a flow chart showing processing when the recognition dialogue representative server 40 determines the new recognition dialogue server 80 during the recognition dialogue processing in the speech recognition dialogue method according to the embodiment of the present invention.
- The recognition dialogue representative server 40 requests the client terminal 10 for the capability information of the client terminal 10 (step 903).
- The dotted line connecting step 902 and step 903 indicates that data is exchanged several times between the terminal and the recognition dialogue server.
- Triggers for requesting the capability information of the client terminal 10 include a change of the service content during the dialogue, an inconsistency between the service content and the server capabilities, or a failure in the recognition dialogue server.
- The client terminal 10 receives the capability information request from the recognition dialogue representative server 40, and the capability information of the client terminal 10 stored in the terminal information storage unit 140 is transmitted, through the control unit 120, from the data communication unit 130 to the recognition dialogue representative server 40 (step 904).
- The recognition dialogue representative server 40 receives the capability information transmitted from the client terminal 10 side, reads out the capability information of the plurality of recognition dialogue servers stored in advance in the recognition dialogue server information storage unit 450, and the recognition dialogue server determination unit 440 compares the capability information of the client terminal 10 with the capabilities of the plurality of recognition dialogue servers; the optimum recognition dialogue server is then uniquely determined (step 906).
- the capability information of the client terminal 10, the capability information of the recognition dialogue server, and the method for determining the recognition dialogue server are the same as described above.
- The recognition dialogue representative server 40 notifies the client terminal 10 of the information of the new recognition dialogue server 80 determined by the recognition dialogue server determination unit 440 (step 907).
- As an example of the notification method, the address of the new recognition dialogue server 80, or the address of the execution program that executes the recognition dialogue on the new recognition dialogue server 80, is embedded in a screen such as an HTML page and notified.
- The client terminal 10 side receives the address notification of the new recognition dialogue server 80 and requests the notified new recognition dialogue server 80 to start the speech recognition dialogue (step 908).
- An example of a method for requesting the start of the speech recognition dialogue is to send the URL of the address of the execution program that executes the recognition dialogue and the arguments required to execute the speech recognition dialogue using the HTTP POST command.
- The recognition dialogue server C 50 reads the service content from the service content holding server 60, which may be provided, for example, by a content provider.
- The service content holding server 60 may be mounted on the recognition dialogue selection server 20, and may be a web server that uses the web as an interface for providing the service to the user.
- the web browser may be mounted on the client terminal 10 as an interface for selecting and inputting service contents.
- FIG. 10 is a diagram showing a recognition dialogue server C (recognition dialogue server side device) 50 according to the embodiment of the present invention.
- A speech recognition dialogue activation unit 530 and a service content reading unit 540 are added to the recognition dialogue representative server 40 shown in FIG. 8.
- The other components, such as the data communication unit 510, the control unit 520, the speech recognition dialogue execution unit, the recognition dialogue server determination unit 560, and the recognition dialogue server information storage unit 570, are the same as the corresponding components in FIG. 8.
- The speech recognition dialogue activation unit 530 activates the speech recognition dialogue processing and, based on the service information transmitted from the client terminal 10 side, requests the service content from the server holding the service content.
- Services include address recognition, name recognition, incoming song name recognition, phone number recognition, credit number recognition, and other services.
- The service content reading unit 540 reads the service content from the service content holding server 60.
- The speech recognition dialogue execution unit, the control unit 520, and the data communication unit 510 are the same as the speech recognition dialogue execution unit 430, the control unit 420, and the data communication unit 410, respectively.
- The recognition dialogue server information storage unit 570 and the recognition dialogue server determination unit 560 need not be implemented. In this case, one recognition dialogue server is determined by the recognition dialogue selection server 20.
- When the recognition dialogue server information storage unit 570 and the recognition dialogue server determination unit 560 are implemented, they are the same as the recognition dialogue server information storage unit 450 and the recognition dialogue server determination unit 440, respectively.
- FIG. 11 is a flowchart showing processing when the recognition dialogue server C 50 reads the service content from the service content holding server 60 in the speech recognition dialogue method according to the embodiment of the present invention.
- The processing from step 1101 to step 1105 in FIG. 11 is the same as the processing from step 501 to step 506 described above.
- The client terminal 10 makes a speech recognition dialogue start request to the recognition dialogue server C 50 based on the information of the recognition dialogue server C 50 notified from the recognition dialogue selection server 20 (step 1106). At this time, service information is also transmitted.
- An example of a method for requesting the start of the speech recognition dialogue is to transmit the URL of the address of the execution program that executes the recognition dialogue and the service content information using the HTTP POST command.
- Service content information includes documents describing the service content (VoiceXML, etc.) and service names.
- The recognition dialogue server C 50 receives the request from the client terminal 10 at the data communication unit 510 and starts the speech recognition dialogue processing at the speech recognition dialogue activation unit 530. Based on the service information sent from the client terminal 10 side, a service content request is made to the service content holding server 60 (step 1107).
- As an example, when the service information sent from the client terminal 10 is an address, that address is accessed directly; when the service information is a service name, the address paired with the service name is searched for and then accessed.
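- Steps 1107 to 1109 can be pictured as below: the service content reading unit maps the service name sent by the terminal to the address that holds its content and fetches the service description; the lookup table and URLs are assumptions for illustration.

```python
import urllib.request

# Hypothetical pairing of service names with the addresses holding their content.
SERVICE_CONTENT_ADDRESSES = {
    "address_recognition": "http://content.example/services/address.vxml",
    "name_recognition": "http://content.example/services/name.vxml",
}

def read_service_content(service_name):
    """Look up the address paired with the service name and read the service
    description from the service content holding server (steps 1107-1109)."""
    url = SERVICE_CONTENT_ADDRESSES[service_name]
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")  # e.g. a VoiceXML document describing the dialogue
```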
- The service content holding server 60 receives the request from the recognition dialogue server C 50 and transmits the service content (step 1108).
- The recognition dialogue server C 50 receives the transmitted service content at the data communication unit 510, reads it at the service content reading unit 540 (step 1109), and starts the speech recognition dialogue processing (step 1110).
- The processing from step 1110 to step 1112 is the same as the processing from step 507 to step 510 described above.
- The dotted line connecting step 1110 and step 1111 indicates that data is exchanged several times between the terminal and the recognition dialogue server.
- FIG. 12 is a diagram showing a program for executing the speech recognition dialogue method of the embodiment of the present invention on the server computer 911, and a recording medium 9002 on which the program is recorded.
- In this way, the client terminal can automatically access another appropriate recognition dialogue server, and the dialogue can be continued.
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Telephonic Communication Services (AREA)
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP03708563A EP1394771A4 (en) | 2002-04-04 | 2003-03-12 | VOICE-RECOGNIZING CONVERSATION SELECTION DEVICE, SYSTEM, METHOD, AND PROGRAM |
US10/476,638 US20040162731A1 (en) | 2002-04-04 | 2003-03-12 | Speech recognition conversation selection device, speech recognition conversation system, speech recognition conversation selection method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002102274A JP2003295890A (ja) | 2002-04-04 | 2002-04-04 | 音声認識対話選択装置、音声認識対話システム、音声認識対話選択方法、プログラム |
JP2002-102274 | 2002-04-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2003085640A1 true WO2003085640A1 (fr) | 2003-10-16 |
Family
ID=28786256
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/002952 WO2003085640A1 (fr) | 2002-04-04 | 2003-03-12 | Dispositif, systeme, procede et programme de selection de conversation a reconnaissance vocale |
Country Status (6)
Country | Link |
---|---|
US (1) | US20040162731A1 (ja) |
EP (1) | EP1394771A4 (ja) |
JP (1) | JP2003295890A (ja) |
CN (1) | CN1282946C (ja) |
TW (1) | TWI244065B (ja) |
WO (1) | WO2003085640A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11210082B2 (en) | 2009-07-23 | 2021-12-28 | S3G Technology Llc | Modification of terminal and service provider machines using an update server machine |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3885523B2 (ja) * | 2001-06-20 | 2007-02-21 | 日本電気株式会社 | サーバ・クライアント型音声認識装置及び方法 |
FR2853126A1 (fr) * | 2003-03-25 | 2004-10-01 | France Telecom | Procede de reconnaissance de parole distribuee |
US8311822B2 (en) | 2004-11-02 | 2012-11-13 | Nuance Communications, Inc. | Method and system of enabling intelligent and lightweight speech to text transcription through distributed environment |
GB2427500A (en) * | 2005-06-22 | 2006-12-27 | Symbian Software Ltd | Mobile telephone text entry employing remote speech to text conversion |
CA2618623C (en) * | 2005-08-09 | 2015-01-06 | Mobilevoicecontrol, Inc. | Control center for a voice controlled wireless communication device system |
EP1938310A2 (en) * | 2005-10-21 | 2008-07-02 | Callminer, Inc. | Method and apparatus for processing heterogeneous units of work |
US9330668B2 (en) * | 2005-12-20 | 2016-05-03 | International Business Machines Corporation | Sharing voice application processing via markup |
US20080154612A1 (en) * | 2006-12-26 | 2008-06-26 | Voice Signal Technologies, Inc. | Local storage and use of search results for voice-enabled mobile communications devices |
US20080154870A1 (en) * | 2006-12-26 | 2008-06-26 | Voice Signal Technologies, Inc. | Collection and use of side information in voice-mediated mobile search |
US20080154608A1 (en) * | 2006-12-26 | 2008-06-26 | Voice Signal Technologies, Inc. | On a mobile device tracking use of search results delivered to the mobile device |
US20080153465A1 (en) * | 2006-12-26 | 2008-06-26 | Voice Signal Technologies, Inc. | Voice search-enabled mobile device |
CN101079885B (zh) * | 2007-06-26 | 2010-09-01 | 中兴通讯股份有限公司 | 一种提供自动语音识别统一开发平台的系统和方法 |
DE102008033056A1 (de) | 2008-07-15 | 2010-01-21 | Volkswagen Ag | Kraftfahrzeug mit einem Mikrofon zur akustischen Eingabe eines Befehls zur Bedienung der Funktion des Kraftfahrzeuges |
CN102237087B (zh) * | 2010-04-27 | 2014-01-01 | 中兴通讯股份有限公司 | 语音控制方法和语音控制装置 |
US20120059655A1 (en) * | 2010-09-08 | 2012-03-08 | Nuance Communications, Inc. | Methods and apparatus for providing input to a speech-enabled application program |
WO2014020835A1 (ja) * | 2012-07-31 | 2014-02-06 | 日本電気株式会社 | エージェント制御システム、方法およびプログラム |
CN103024169A (zh) * | 2012-12-10 | 2013-04-03 | 深圳市永利讯科技股份有限公司 | 一种通讯终端应用程序的语音启动方法和装置 |
US9413891B2 (en) | 2014-01-08 | 2016-08-09 | Callminer, Inc. | Real-time conversational analytics facility |
CN103870547A (zh) * | 2014-02-26 | 2014-06-18 | 华为技术有限公司 | Method and device for grouping processing of contacts |
JP2018037819A (ja) * | 2016-08-31 | 2018-03-08 | 京セラ株式会社 | Electronic device, control method, and program |
US11663535B2 (en) | 2016-10-03 | 2023-05-30 | Google Llc | Multi computational agent performance of tasks |
CN109844855B (zh) * | 2016-10-03 | 2023-12-05 | 谷歌有限责任公司 | Multi computational agent performance of tasks |
CN106998359A (zh) * | 2017-03-24 | 2017-08-01 | 百度在线网络技术(北京)有限公司 | Network access method and device for artificial-intelligence-based speech recognition services |
JP6843388B2 (ja) * | 2017-03-31 | 2021-03-17 | 株式会社アドバンスト・メディア | Information processing system, information processing device, information processing method, and program |
JP7119218B2 (ja) * | 2018-05-03 | 2022-08-16 | グーグル エルエルシー | Coordination of overlapping processing of audio queries |
JP6555838B1 (ja) * | 2018-12-19 | 2019-08-07 | Jeインターナショナル株式会社 | Voice inquiry system, voice inquiry processing method, smart speaker operation server device, chatbot portal server device, and program |
CN109949817B (zh) * | 2019-02-19 | 2020-10-23 | 一汽-大众汽车有限公司 | Voice arbitration method and device based on dual operating systems and dual speech recognition engines |
CN110718219B (zh) | 2019-09-12 | 2022-07-22 | 百度在线网络技术(北京)有限公司 | Voice processing method, apparatus, device, and computer storage medium |
JP7377668B2 (ja) * | 2019-10-04 | 2023-11-10 | エヌ・ティ・ティ・コミュニケーションズ株式会社 | Control device, control method, and computer program |
CN113450785B (zh) * | 2020-03-09 | 2023-12-19 | 上海擎感智能科技有限公司 | Implementation method, system, medium, and cloud server for in-vehicle voice processing |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001142488A (ja) * | 1999-11-17 | 2001-05-25 | Oki Electric Ind Co Ltd | Speech recognition communication system |
JP2001222292A (ja) * | 2000-02-08 | 2001-08-17 | Atr Interpreting Telecommunications Res Lab | Speech processing system and computer-readable recording medium storing a speech processing program |
EP1255193A2 (en) * | 2001-05-04 | 2002-11-06 | Microsoft Corporation | Servers for web enabled speech recognition |
Family Cites Families (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5708697A (en) * | 1996-06-27 | 1998-01-13 | Mci Communications Corporation | Communication network call traffic manager |
US6292782B1 (en) * | 1996-09-09 | 2001-09-18 | Philips Electronics North America Corp. | Speech recognition and verification system enabling authorized data transmission over networked computer systems |
US6078886A (en) * | 1997-04-14 | 2000-06-20 | At&T Corporation | System and method for providing remote automatic speech recognition services via a packet network |
WO1998050907A1 (en) * | 1997-05-06 | 1998-11-12 | Speechworks International, Inc. | System and method for developing interactive speech applications |
US7251315B1 (en) * | 1998-09-21 | 2007-07-31 | Microsoft Corporation | Speech processing for telephony API |
US7003463B1 (en) * | 1998-10-02 | 2006-02-21 | International Business Machines Corporation | System and method for providing network coordinated conversational services |
US6408272B1 (en) * | 1999-04-12 | 2002-06-18 | General Magic, Inc. | Distributed voice user interface |
US6363349B1 (en) * | 1999-05-28 | 2002-03-26 | Motorola, Inc. | Method and apparatus for performing distributed speech processing in a communication system |
US6792086B1 (en) * | 1999-08-24 | 2004-09-14 | Microstrategy, Inc. | Voice network access provider system and method |
US6937977B2 (en) * | 1999-10-05 | 2005-08-30 | Fastmobile, Inc. | Method and apparatus for processing an input speech signal during presentation of an output audio signal |
US6633846B1 (en) * | 1999-11-12 | 2003-10-14 | Phoenix Solutions, Inc. | Distributed realtime speech recognition system |
US6396898B1 (en) * | 1999-12-24 | 2002-05-28 | Kabushiki Kaisha Toshiba | Radiation detector and x-ray CT apparatus |
US6505161B1 (en) * | 2000-05-01 | 2003-01-07 | Sprint Communications Company L.P. | Speech recognition that adjusts automatically to input devices |
JP3728177B2 (ja) * | 2000-05-24 | 2005-12-21 | キヤノン株式会社 | Speech processing system, apparatus, method, and storage medium |
US6934756B2 (en) * | 2000-11-01 | 2005-08-23 | International Business Machines Corporation | Conversational networking via transport, coding and control conversational protocols |
GB2376394B (en) * | 2001-06-04 | 2005-10-26 | Hewlett Packard Co | Speech synthesis apparatus and selection method |
US6996525B2 (en) * | 2001-06-15 | 2006-02-07 | Intel Corporation | Selecting one of multiple speech recognizers in a system based on performance predections resulting from experience |
US20030078777A1 (en) * | 2001-08-22 | 2003-04-24 | Shyue-Chin Shiau | Speech recognition system for mobile Internet/Intranet communication |
US7146321B2 (en) * | 2001-10-31 | 2006-12-05 | Dictaphone Corporation | Distributed speech recognition system |
US6785654B2 (en) * | 2001-11-30 | 2004-08-31 | Dictaphone Corporation | Distributed speech recognition system with speech recognition engines offering multiple functionalities |
US6898567B2 (en) * | 2001-12-29 | 2005-05-24 | Motorola, Inc. | Method and apparatus for multi-level distributed speech recognition |
GB2389217A (en) * | 2002-05-27 | 2003-12-03 | Canon Kk | Speech recognition system |
US6834265B2 (en) * | 2002-12-13 | 2004-12-21 | Motorola, Inc. | Method and apparatus for selective speech recognition |
US7076428B2 (en) * | 2002-12-30 | 2006-07-11 | Motorola, Inc. | Method and apparatus for selective distributed speech recognition |
US20050177371A1 (en) * | 2004-02-06 | 2005-08-11 | Sherif Yacoub | Automated speech recognition |
2002
- 2002-04-04 JP JP2002102274A patent/JP2003295890A/ja active Pending
2003
- 2003-03-12 EP EP03708563A patent/EP1394771A4/en not_active Withdrawn
- 2003-03-12 CN CNB038003465A patent/CN1282946C/zh not_active Expired - Fee Related
- 2003-03-12 WO PCT/JP2003/002952 patent/WO2003085640A1/ja active Application Filing
- 2003-03-12 US US10/476,638 patent/US20040162731A1/en not_active Abandoned
- 2003-04-03 TW TW092107581A patent/TWI244065B/zh not_active IP Right Cessation
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001142488A (ja) * | 1999-11-17 | 2001-05-25 | Oki Electric Ind Co Ltd | Speech recognition communication system |
JP2001222292A (ja) * | 2000-02-08 | 2001-08-17 | Atr Interpreting Telecommunications Res Lab | Speech processing system and computer-readable recording medium storing a speech processing program |
EP1255193A2 (en) * | 2001-05-04 | 2002-11-06 | Microsoft Corporation | Servers for web enabled speech recognition |
Non-Patent Citations (1)
Title |
---|
See also references of EP1394771A4 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11210082B2 (en) | 2009-07-23 | 2021-12-28 | S3G Technology Llc | Modification of terminal and service provider machines using an update server machine |
Also Published As
Publication number | Publication date |
---|---|
EP1394771A1 (en) | 2004-03-03 |
US20040162731A1 (en) | 2004-08-19 |
CN1282946C (zh) | 2006-11-01 |
TW200307908A (en) | 2003-12-16 |
JP2003295890A (ja) | 2003-10-15 |
CN1514995A (zh) | 2004-07-21 |
TWI244065B (en) | 2005-11-21 |
EP1394771A4 (en) | 2005-10-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2003085640A1 (fr) | Dispositif, systeme, procede et programme de selection de conversation a reconnaissance vocale | |
US9761241B2 (en) | System and method for providing network coordinated conversational services | |
CA2345660C (en) | System and method for providing network coordinated conversational services | |
US6801604B2 (en) | Universal IP-based and scalable architectures across conversational applications using web services for speech and audio processing resources | |
US7421390B2 (en) | Method and system for voice control of software applications | |
US8239204B2 (en) | Inferring switching conditions for switching between modalities in a speech application environment extended for interactive text exchanges | |
US8521527B2 (en) | Computer-implemented system and method for processing audio in a voice response environment | |
US8296139B2 (en) | Adding real-time dictation capabilities for speech processing operations handled by a networked speech processing system | |
EP1311102A1 (en) | Streaming audio under voice control | |
JP2002528804A (ja) | Voice control of a user interface for service applications | |
US8175084B2 (en) | Data device to speech service bridge | |
JP2001503236A (ja) | Personal voice message processor and method | |
KR100826778B1 (ko) | Browser-based wireless terminal for multimodal use, browser-based multimodal server and system for the wireless terminal, and operating method thereof | |
US6501751B1 (en) | Voice communication with simulated speech data | |
JP2005151553A (ja) | Voice portal | |
US8706501B2 (en) | Method and system for sharing speech processing resources over a communication network | |
JP2000285063A (ja) | Information processing device, information processing method, and medium | |
JP4224305B2 (ja) | Dialogue information processing system | |
JP2003271376A (ja) | Information providing system | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A1; Designated state(s): CN KR US |
| AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): DE FR GB IT |
| WWE | Wipo information: entry into national phase | Ref document number: 10476638; Country of ref document: US |
| DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | |
| WWE | Wipo information: entry into national phase | Ref document number: 2003708563; Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 038003465; Country of ref document: CN |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| WWP | Wipo information: published in national office | Ref document number: 2003708563; Country of ref document: EP |