US20130124197A1 - Multi-layered speech recognition apparatus and method - Google Patents
- Publication number: US20130124197A1
- Application number: US 13/732,576
- Authority: United States
- Legal status: Granted
Classifications
- G—PHYSICS; G10—MUSICAL INSTRUMENTS; ACOUSTICS; G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition; G10L15/08—Speech classification or search
- G10L15/00—Speech recognition; G10L15/28—Constructional details of speech recognition systems; G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
- G10L15/00—Speech recognition; G10L15/28—Constructional details of speech recognition systems; G10L15/32—Multiple recognisers used in sequence or in parallel; Score combination systems therefor, e.g. voting systems
Definitions
- the present invention relates to speech recognition, and more particularly, to a speech recognition apparatus and method using a terminal and at least one server.
- a conventional speech recognition method by which speech recognition is performed only at a terminal is disclosed in U.S. Pat. No. 6,594,630.
- all procedures of speech recognition are performed only at a terminal.
- speech cannot be recognized with high quality.
- a conventional speech recognition method by which speech is recognized using only a server when a terminal and the server are connected to each other is disclosed in U.S. Pat. No. 5,819,220.
- the terminal simply receives the speech and transmits the received speech to the server, and the server recognizes the speech transmitted from the terminal.
- the load on the server gets very high, and since the speech should be transmitted to the server so that the server can recognize the speech, the speed of speech recognition is reduced.
- a conventional speech recognition method by which speech recognition is performed by both a terminal and a server, is disclosed in U.S. Pat. No. 6,487,534.
- an Internet search domain is targeted, an applied range thereof is narrow, and the speech recognition method cannot be embodied.
- a multi-layered speech recognition apparatus to recognize speech in a multi-layered manner using a client and at least one server, which are connected to each other in a multi-layered manner via a network.
- a multi-layered speech recognition method by which speech is recognized in a multi-layered manner using a client and at least one server, which are connected to each other in a multi-layered manner via a network.
- a multi-layered speech recognition apparatus including a client extracting a characteristic of speech to be recognized, checking whether the client recognizes the speech using the extracted characteristic of the speech and recognizing the speech or transmitting the characteristic of the speech, according to a checked result; and first through N-th (where N is a positive integer equal to or greater than 1) servers, wherein the first server receives the characteristic of the speech transmitted from the client, checks whether the first server recognizes the speech, using the received characteristic of the speech, and recognizes the speech or transmits the characteristic according to a checked result, and wherein the n-th (2≦n≦N) server receives the characteristic of the speech transmitted from an (n−1)-th server, checks whether the n-th server recognizes the speech, using the received characteristic of the speech, and recognizes the speech or transmits the characteristic according to a checked result.
- a multi-layered speech recognition method performed in a multi-layered speech recognition apparatus having a client and first through N-th (where N is a positive integer equal to or greater than 1) servers, the method including extracting a characteristic of speech to be recognized, checking whether the client recognizes the speech using the extracted characteristic of the speech, and recognizing the speech or transmitting the characteristic of the speech according to a checked result; and receiving the characteristic of the speech transmitted from the client, checking whether the first server recognizes the speech, using the received characteristic of the speech, and recognizing the speech or transmitting the characteristic according to a checked result, and receiving the characteristic of the speech transmitted from an (n−1)-th (2≦n≦N) server, checking whether the n-th server recognizes the speech, using the received characteristic of the speech, and recognizing the speech or transmitting the characteristic according to a checked result, wherein the extracting of the characteristic of the speech to be recognized is performed by the client, the receiving of the characteristic of the speech transmitted from the client is performed by the first server, and the receiving of the characteristic of the speech transmitted from the (n−1)-th server is performed by the n-th server.
- FIG. 1 is a schematic block diagram of a multi-layered speech recognition apparatus according to an embodiment of the present invention
- FIG. 2 is a flowchart illustrating a multi-layered speech recognition method performed in the multi-layered speech recognition apparatus shown in FIG. 1 ;
- FIG. 3 is a block diagram of the client shown in FIG. 1 according to an embodiment of the present invention.
- FIG. 4 is a flowchart illustrating operation 40 shown in FIG. 2 according to an embodiment of the present invention.
- FIG. 5 is a block diagram of the client adjustment unit shown in FIG. 3 according to an embodiment of the present invention.
- FIG. 6 is a flowchart illustrating operation 84 shown in FIG. 4 according to an embodiment of the present invention.
- FIG. 7 is a block diagram of the client speech recognition unit shown in FIG. 3 according to an embodiment of the present invention.
- FIG. 8 is a block diagram of a q-th server according to an embodiment of the present invention.
- FIG. 9 is a flowchart illustrating operation 42 shown in FIG. 2 according to an embodiment of the present invention.
- FIG. 10 is a flowchart illustrating operation 88 shown in FIG. 4 according to an embodiment of the present invention.
- FIG. 11 is a block diagram of the server adjustment unit shown in FIG. 8 according to an embodiment of the present invention
- FIG. 12 is a flowchart illustrating operation 200 shown in FIG. 9 according to an embodiment of the present invention.
- FIG. 13 is a block diagram of the server speech recognition unit shown in FIG. 8 according to an embodiment of the present invention.
- FIG. 14 is a block diagram of the client topic-checking portion shown in FIG. 5 or the server topic-checking portion shown in FIG. 11 according to an embodiment of the present invention
- FIG. 15 is a flowchart illustrating operation 120 shown in FIG. 6 or operation 240 shown in FIG. 12 according to an embodiment of the present invention
- FIG. 16 is a block diagram of an n-th server according to an embodiment of the present invention.
- FIG. 17 is a flowchart illustrating operation 204 according to an embodiment of the present invention when the flowchart shown in FIG. 9 illustrates an embodiment of operation 44 shown in FIG. 2 .
- FIG. 1 is a schematic block diagram of a multi-layered speech recognition apparatus according to an embodiment of the present invention.
- the multi-layered speech recognition apparatus of FIG. 1 includes a client 10 and N servers 20 , 22 , . . . , and 24 (where N is a positive integer equal to or greater than 1).
- FIG. 2 is a flowchart illustrating a multi-layered speech recognition method performed in the multi-layered speech recognition apparatus shown in FIG. 1 .
- the multi-layered speech recognition method of FIG. 2 includes the client 10 recognizing speech or transmitting a characteristic of the speech (operation 40 ) and at least one server recognizing the speech that is not recognized by the client 10 itself (operations 42 and 44 ).
- the client 10 shown in FIG. 1 inputs the speech to be recognized through an input terminal IN 1 , extracts a characteristic of the input speech, checks whether the client 10 itself can recognize the speech using the extracted characteristic of the speech, and recognizes the speech or transmits the characteristic of the speech to one of the servers 20 , 22 , . . . , and 24 according to a checked result.
- the client 10 has a small capacity of resources, like in a mobile phone, a remote controller, or a robot, and can perform word speech recognition and/or connected word speech recognition.
- the resources may be a processing speed of a central processing unit (CPU) and the size of a memory that stores data for speech recognition.
- word speech recognition recognizes a single word, for example a command such as ‘cleaning’ sent to a robot.
- connected word speech recognition recognizes two or more simply connected words required by a mobile phone or the like, such as ‘send message.’
- a server of the servers 20 , 22 , . . . , and 24 which directly receives a characteristic of the speech transmitted from the client 10 is referred to as a first server
- a server which directly receives a characteristic of the speech transmitted from the first server or from a certain server is referred to as a different server
- the different server is also referred to as an n-th server (2≦n≦N).
- the first server 20, 22, . . . , or 24 receives a characteristic of the speech transmitted from the client 10, checks whether the first server 20, 22, . . . , or 24 itself can recognize the speech using the received characteristic of the speech, and recognizes the speech or transmits the characteristic of the speech to the n-th server according to a checked result.
- the n-th server receives a characteristic of the speech transmitted from an (n−1)-th server, checks whether the n-th server itself can recognize the speech using the received characteristic of the speech, and recognizes the speech or transmits the characteristic of the speech to an (n+1)-th server according to a checked result. For example, when the n-th server itself cannot recognize the speech, the (n+1)-th server performs operation 44, and when the (n+1)-th server itself cannot recognize the speech, an (n+2)-th server performs operation 44. In this way, several servers try to perform speech recognition until the speech is recognized by one of the servers.
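The recognize-or-forward cascade described above is easy to picture in code. The sketch below is a minimal illustration under assumed names (RecognitionLayer, recognize_multilayer, and the toy layers and scores are all hypothetical), not the patent's implementation:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class RecognitionLayer:
    """One node of the cascade: the client or one of the first through
    N-th servers, each with its own threshold, topic scorer, and decoder."""
    name: str
    threshold: float
    score: Callable[[List[float]], float]     # score of the most similar topic
    recognize: Callable[[List[float]], str]   # speech recognition decoder

def recognize_multilayer(feature: List[float],
                         layers: List[RecognitionLayer]) -> Optional[str]:
    """Try each layer in order; only the extracted characteristic (never
    the raw speech) is forwarded to the next, larger-resource layer."""
    for layer in layers:
        if layer.score(feature) > layer.threshold:   # "checked result"
            return layer.recognize(feature)
    return None  # no layer recognized the speech

# Toy usage: a client, a home server, and a URC-style service server.
layers = [
    RecognitionLayer("client", 0.9, lambda f: 0.4, lambda f: "cleaning"),
    RecognitionLayer("home", 0.7, lambda f: 0.8, lambda f: "turn on a golf channel"),
    RecognitionLayer("service", 0.5, lambda f: 0.9, lambda f: "what movie is showing"),
]
print(recognize_multilayer([0.1, 0.2], layers))  # -> 'turn on a golf channel'
```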
- a recognized result may be outputted via an output terminal (OUT 1 , OUT 2 , . . . , or OUT N ) but may also be outputted to the client 10 .
- the client 10 can also use the result of speech recognition even though the client 10 has not recognized the speech.
- the client 10 may also inform a user of the client 10 via an output terminal OUT N+1 whether the speech is recognized by the server.
- Each of the servers 20 , 22 , . . . , and 24 of FIG. 1 does not extract the characteristic of the speech but receives the characteristic of the speech extracted from the client 10 and can perform speech recognition immediately.
- Each of the servers 20 , 22 , . . . , and 24 can retain more resources than the client 10 , and the servers 20 , 22 , . . . , and 24 retain different capacities of resources.
- the servers are connected to one another via networks 13 , 15 , . . . , and 17 as shown in FIG. 1 , regardless of having small or large resource capacity.
- a home server having a small capacity of resources exists.
- the home server can recognize speech conversation for controlling household appliances, composed of a comparatively simple natural language, such as ‘please turn on a golf channel’.
- a service server having a large capacity of resources, such as a ubiquitous robot companion (URC), exists.
- the service server can recognize a composite command composed of a natural language in a comparatively long sentence, such as ‘please let me know what movie is now showing’.
- the client 10 tries to perform speech recognition (operation 40 ).
- the first server having a larger capacity of resource than the client 10 tries to perform speech recognition (operation 42 ).
- when the first server does not recognize the speech, servers having larger capacities of resources than the first server try to perform speech recognition, one after another (operation 44).
- when the speech is a comparatively simple natural language, the speech can be recognized by the first server; in this case, the multi-layered speech recognition method shown in FIG. 2 may include operations 40 and 42 and may not include operation 44.
- however, when the speech is a natural language in a comparatively long sentence, the speech can be recognized by a different server having a large capacity of resources.
- in this case, the multi-layered speech recognition method shown in FIG. 2 includes operations 40, 42, and 44.
- FIG. 3 is a block diagram of the client 10 shown in FIG. 1 according to an embodiment 10 A of the present invention.
- the client 10 A of FIG. 3 includes a speech input unit 60 , a speech characteristic extraction unit 62 , a client adjustment unit 64 , a client speech recognition unit 66 , a client application unit 68 , and a client compression transmission unit 70 .
- FIG. 4 is a flowchart illustrating operation 40 shown in FIG. 2 according to an embodiment 40 A of the present invention.
- Operation 40 A of FIG. 4 includes extracting a characteristic of speech using a detected valid speech section (operations 80 and 82 ) and recognizing the speech or transmitting the extracted characteristic of the speech depending on whether the speech can be recognized by a client itself (operations 84 through 88 ).
- the speech input unit 60 shown in FIG. 3 receives speech from the outside through an input terminal IN 2, for example via a microphone, detects a valid speech section from the input speech, and outputs the detected valid speech section to the speech characteristic extraction unit 62.
- the speech characteristic extraction unit 62 extracts a characteristic of the speech to be recognized from the valid speech section and outputs the extracted characteristic of the speech to the client adjustment unit 64 .
- the speech characteristic extraction unit 62 can also extract the characteristic of the speech in a vector format from the valid speech section.
- the client 10 A shown in FIG. 3 may not include the speech input unit 60
- operation 40 A shown in FIG. 4 may also not include operation 80
- the speech characteristic extraction unit 62 directly inputs the speech to be recognized through the input terminal IN 2 , extracts the characteristic of the input speech, and outputs the extracted characteristic to the client adjustment unit 64 .
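As a concrete and purely illustrative picture of the two steps above, the sketch below detects a valid speech section by frame energy and emits a per-frame characteristic in vector format. The frame sizes, the energy floor, and the log-spectrum feature are assumptions; the patent does not fix a feature type:

```python
import numpy as np

def detect_valid_section(speech: np.ndarray, frame: int = 400, hop: int = 160,
                         energy_floor: float = 1e-4) -> np.ndarray:
    """Energy-based stand-in for the speech input unit 60: keep only the
    frames whose mean energy clears the floor (the valid speech section)."""
    frames = [speech[i:i + frame] for i in range(0, len(speech) - frame + 1, hop)]
    voiced = [f for f in frames if float(np.mean(f ** 2)) > energy_floor]
    return np.array(voiced)

def extract_characteristic(frames: np.ndarray, dim: int = 13) -> np.ndarray:
    """Toy stand-in for the speech characteristic extraction unit 62: one
    log-spectrum row of `dim` coefficients per frame (a real system would
    use e.g. MFCCs)."""
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    return np.log(spectra[:, :dim] + 1e-8)
```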
- the client adjustment unit 64 checks whether the client 10 itself can recognize speech using the characteristic extracted by the speech characteristic extraction unit 62 and transmits the characteristic of the speech to a first server through an output terminal OUT N+3 or outputs the characteristic of the speech to the client speech recognition unit 66 , according to a checked result.
- the client adjustment unit 64 determines whether the client 10 itself can recognize the speech, using the characteristic of the speech extracted by the speech characteristic extraction unit 62 . If it is determined that the client 10 itself cannot recognize the speech, in operation 88 , the client adjustment unit 64 transmits the extracted characteristic of the speech to the first server through the output terminal OUT N+3 . However, if it is determined that the client 10 itself can recognize the speech, the client adjustment unit 64 outputs the characteristic of the speech extracted by the speech characteristic extraction unit 62 to the client speech recognition unit 66 .
- the client speech recognition unit 66 recognizes the speech from the characteristic input from the client adjustment unit 64 .
- the client speech recognition unit 66 can output a recognized result in a text format to the client application unit 68 .
- the client application unit 68 shown in FIG. 3 performs the same function as the client 10 using a result recognized by the client speech recognition unit 66 and outputs a result through an output terminal OUT N+2 .
- the function performed by the client application unit 68 may be a function of controlling the operation of the robot.
- the client 10 shown in FIG. 3 may not include the client application unit 68 .
- the client speech recognition unit 66 directly outputs a recognized result to the outside.
- FIG. 5 is a block diagram of the client adjustment unit 64 shown in FIG. 3 according to an embodiment 64 A of the present invention.
- the client adjustment unit 64 A of FIG. 5 includes a client topic-checking portion 100 , a client comparison portion 102 , and a client output-controlling portion 104 .
- FIG. 6 is a flowchart illustrating operation 84 shown in FIG. 4 according to an embodiment 84 A of the present invention.
- Operation 84 A includes calculating a score of a topic, which is most similar to an extracted characteristic of speech (operation 120 ), and comparing the calculated score with a client threshold value (operation 122 ).
- the client topic-checking portion 100 detects a topic, which is most similar to the characteristic of the speech extracted by the speech characteristic extraction unit 62 and input through an input terminal IN 3 , calculates a score of the detected most similar topic, outputs a calculated score to the client comparison portion 102 , and outputs the most similar topic to the client speech recognition unit 66 through an output terminal OUT N+5 .
- the client comparison portion 102 compares the detected score with the client threshold value and outputs a compared result to the client output-controlling portion 104 and to the client speech recognition unit 66 through an output terminal OUT N+6 (operation 122 ).
- the client threshold value is a predetermined value and may be determined experimentally.
- if the score is larger than the client threshold value, the method proceeds to operation 86 and the client speech recognition unit 66 recognizes the speech. However, if the score is not larger than the client threshold value, the method proceeds to operation 88 and transmits the extracted characteristic of the speech to the first server.
- the client output-controlling portion 104 outputs the extracted characteristic of the speech input from the speech characteristic extraction unit 62 through an input terminal IN 3 , to the client speech recognition unit 66 through an output terminal OUT N+4 according to a result compared by the client comparison portion 102 or transmits the characteristic of the speech to the first server through the output terminal OUT N+4 (where OUT N+4 corresponds to an output terminal OUT N+3 shown in FIG. 3 ).
- more specifically, if the result compared by the client comparison portion 102 indicates that the score is larger than the client threshold value, the client output-controlling portion 104 outputs the extracted characteristic of the speech input from the speech characteristic extraction unit 62 through the input terminal IN 3, to the client speech recognition unit 66 through the output terminal OUT N+4.
- however, if the score is not larger than the client threshold value, the client output-controlling portion 104 transmits the extracted characteristic of the speech input from the speech characteristic extraction unit 62 through the input terminal IN 3 to the first server through the output terminal OUT N+4.
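A compact sketch of this client adjustment path (operations 84 through 88); all function and parameter names here are hypothetical:

```python
def client_adjustment(feature, check_topic, client_threshold, recognize, transmit):
    """Compare the most similar topic's score with the client threshold
    value, then recognize locally or forward the characteristic."""
    topic, score = check_topic(feature)      # client topic-checking portion 100
    if score > client_threshold:             # client comparison portion 102
        return recognize(topic, feature)     # client speech recognition unit 66
    transmit(feature)                        # to the first server (OUT N+4)
    return None
```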
- FIG. 7 is a block diagram of the client speech recognition unit 66 shown in FIG. 3 according to an embodiment 66 A of the present invention.
- the client speech recognition unit 66 A of FIG. 7 includes a client decoder selection portion 160 and first through P-th speech recognition decoders 162 , 164 , . . . , and 166 .
- P is the number of topics checked by the client topic-checking portion 100 shown in FIG. 5 . That is, the client speech recognition unit 66 shown in FIG. 3 may include a speech recognition decoder according to each topic, as shown in FIG. 7 .
- the client decoder selection portion 160 selects a speech recognition decoder corresponding to a detected most similar topic input from the client topic-checking portion 100 through an input terminal IN 4 , from the first through P-th speech recognition decoders 162 , 164 , . . . , and 166 . In this case, the client decoder selection portion 160 outputs the characteristic of the speech input from the client output-controlling portion 104 through the input terminal IN 4 to the selected speech recognition decoder. To perform this operation, the client decoder selection portion 160 should be activated in response to a compared result input from the client comparison portion 102 .
- when activated, the client decoder selection portion 160 selects a speech recognition decoder and outputs the characteristic of the speech to the selected speech recognition decoder, as previously described.
- the p-th (1≦p≦P) speech recognition decoder shown in FIG. 7 recognizes speech from the characteristic output from the client decoder selection portion 160 and outputs a recognized result through an output terminal OUT N+6+p.
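The per-topic decoders of FIG. 7 can be sketched as a small dispatch table; the class name and the toy decoder are assumptions:

```python
from typing import Callable, Dict, List

class TopicDecoders:
    """One speech recognition decoder per topic, selected by the decoder
    selection portion from the detected most similar topic."""
    def __init__(self) -> None:
        self._decoders: Dict[str, Callable[[List[float]], str]] = {}

    def register(self, topic: str, decoder: Callable[[List[float]], str]) -> None:
        self._decoders[topic] = decoder          # p-th speech recognition decoder

    def recognize(self, topic: str, feature: List[float]) -> str:
        # decoder selection portion 160: route the characteristic to the
        # decoder corresponding to the most similar topic
        return self._decoders[topic](feature)

decoders = TopicDecoders()
decoders.register("robot_command", lambda f: "cleaning")
print(decoders.recognize("robot_command", [0.1, 0.2]))  # -> 'cleaning'
```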
- FIG. 8 is a block diagram of a q-th server according to an embodiment of the present invention.
- the first server of FIG. 8 includes a client restoration receiving unit 180 , a server adjustment unit 182 , a server speech recognition unit 184 , a server application unit 186 , and a server compression transmission unit 188 .
- here, 1≦q≦N. For explanatory convenience, the q-th server is assumed to be the first server. However, the present invention is not limited to this assumption.
- FIG. 9 is a flowchart illustrating operation 42 shown in FIG. 2 according to an embodiment of the present invention. Operation 42 of FIG. 9 includes recognizing speech or transmitting a received characteristic of the speech depending on whether a first server itself can recognize the speech (operations 200 through 204 ).
- the network via which the characteristic of the speech is transmitted from the client 10 to the first server may be a loss channel or a lossless channel.
- the loss channel is a channel via which a loss occurs when data or a signal is transmitted and may be a wire/wireless speech channel, for example.
- a lossless channel is a channel via which a loss does not occur when data or a signal is transmitted and may be a wireless LAN data channel such as a transmission control protocol (TCP).
- the client 10 A may further include the client compression transmission unit 70 .
- the client compression transmission unit 70 compresses the characteristic of the speech according to a result obtained by the client comparison portion 102 of the client adjustment unit 64 A and in response to a transmission format signal and transmits the compressed characteristic of the speech to the first server through an output terminal OUT N+4 via the loss channel.
- the transmission format signal is a signal generated by the client adjustment unit 64 when a network via which the characteristic of the speech is transmitted is a loss channel.
- the client compression transmission unit 70 compresses the characteristic of the speech and transmits the compressed characteristic of the speech in response to the transmission format signal input from the client adjustment unit 64 .
- FIG. 10 is a flowchart illustrating operation 88 shown in FIG. 4 according to an embodiment of the present invention.
- Operation 88 of FIG. 10 includes compressing and transmitting a characteristic of speech depending on whether the characteristic of the speech is to be transmitted via a loss channel or a lossless channel (operations 210 through 214 ).
- the client adjustment unit 64 determines whether the characteristic extracted by the speech characteristic extraction unit 62 is to be transmitted via the loss channel or the lossless channel. That is, the client adjustment unit 64 determines whether the network via which the characteristic of the speech is transmitted is a loss channel or a lossless channel.
- if it is determined that the extracted characteristic of the speech is to be transmitted via the lossless channel, the client adjustment unit 64 transmits the characteristic of the speech extracted by the speech characteristic extraction unit 62 to the first server through an output terminal OUT N+3 via the lossless channel.
- however, if it is determined that the extracted characteristic of the speech is to be transmitted via the loss channel, the client adjustment unit 64 generates a transmission format signal and outputs the transmission format signal to the client compression transmission unit 70.
- the client compression transmission unit 70 then compresses the characteristic of the speech extracted by the speech characteristic extraction unit 62 and input from the client adjustment unit 64 and transmits the compressed characteristic of the speech to the first server through an output terminal OUT N+4 via the loss channel.
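The channel-dependent transmission of FIG. 10 can be sketched as follows; zlib stands in for the unspecified compression scheme, and the function names are hypothetical:

```python
import pickle
import zlib

def transmit_characteristic(feature, channel_is_lossy: bool, send) -> None:
    """Operation 88 sketch: send the characteristic as-is over a lossless
    channel; compress it first when a loss channel is used."""
    payload = pickle.dumps(feature)       # characteristic of the speech
    if channel_is_lossy:                  # transmission format signal generated
        send(zlib.compress(payload))      # client compression transmission unit 70
    else:
        send(payload)                     # direct transmission, lossless channel

def restore_characteristic(blob: bytes, was_compressed: bool):
    """Counterpart of the restoration receiving unit on the receiving side."""
    return pickle.loads(zlib.decompress(blob) if was_compressed else blob)
```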
- the client restoration receiving portion 180 shown in FIG. 8 receives the compressed characteristic of the speech transmitted from the client compression transmission unit 70 shown in FIG. 3 through an input terminal IN 5 , restores the received compressed characteristic of the speech, and outputs the restored characteristic of the speech to the server adjustment portion 182 .
- the first server shown in FIG. 8 performs operation 42 shown in FIG. 2 using the restored characteristic of the speech.
- the client 10 A shown in FIG. 3 may not include the client compression transmission unit 70 .
- the first server shown in FIG. 8 may not include the client restoration receiving portion 180 , the server adjustment portion 182 directly receives the characteristic of the speech transmitted from the client 10 through an input terminal IN 6 , and the first server shown in FIG. 8 performs operation 42 shown in FIG. 2 using the received characteristic of the speech.
- the present invention is not limited to this.
- the server adjustment portion 182 receives the characteristic of the speech transmitted from the client 10 through the input terminal IN 6 , checks whether the first server itself can recognize the speech, using the received characteristic of the speech, and transmits the received characteristic of the speech to a different server or outputs the received characteristic of the speech to the server speech recognition unit 184 , according to a checked result (operations 200 through 204 ).
- the server adjustment unit 182 determines whether the first server itself can recognize the speech, using the received characteristic of the speech. If it is determined by the server adjustment unit 182 that the first server itself can recognize the speech, in operation 202 , the server speech recognition unit 184 recognizes the speech using the received characteristic of the speech input from the server adjustment unit 182 and outputs a recognized result. In this case, the server speech recognition unit 184 can output the recognized result in a textual format.
- the server application unit 186 performs a function of the first server using the recognized result and outputs a performed result through an output terminal OUT N+P+7.
- for example, a function performed by the server application unit 186 may be a function of controlling household appliances or searching for information.
- however, if it is determined that the first server itself cannot recognize the speech, in operation 204, the server adjustment unit 182 transmits the received characteristic of the speech to a different server through an output terminal OUT N+P+8.
- FIG. 11 is a block diagram of the server adjustment unit 182 shown in FIG. 8 according to an embodiment 182 A of the present invention.
- the server adjustment unit 182 A includes a server topic-checking portion 220 , a server comparison portion 222 , and a server output-controlling portion 224 .
- FIG. 12 is a flowchart illustrating operation 200 shown in FIG. 9 according to an embodiment 200 A of the present invention.
- Operation 200 A of FIG. 12 includes calculating a score of a topic that is most similar to a received characteristic of speech (operation 240) and comparing the score with a server threshold value (operation 242).
- the server topic-checking portion 220 detects the topic that is most similar to the characteristic of the speech, which is transmitted from the client 10 and received through an input terminal IN 7 , calculates a score of the detected most similar topic, outputs the calculated score to the server comparison portion 222 , and outputs the most similar topic to the server speech recognition unit 184 through an output terminal OUT N+P+11 .
- the server comparison portion 222 compares the score detected by the server topic-checking portion 220 with the server threshold value and outputs a compared result to the server output-controlling portion 224 and to the server speech recognition unit 184 through an output terminal OUT N+P+12 .
- the server threshold value is a predetermined value and may be determined experimentally.
- if the score is larger than the server threshold value, the method proceeds to operation 202 and the server speech recognition unit 184 recognizes the speech.
- however, if the score is not larger than the server threshold value, the method proceeds to operation 204 and the server adjustment unit 182 transmits the received characteristic of the speech to a different server.
- the server output-controlling portion 224 outputs the characteristic of the speech received through an input terminal IN 7 to the server speech recognition unit 184 through an output terminal OUT N+P+10 or transmits the received characteristic of the speech to a different server through the output terminal OUT N+P+10 (where OUT N+P+10 corresponds to an output terminal OUT N+P+8 shown in FIG. 8 ) in response to a result compared by the server comparison portion 222 . More specifically, if it is recognized by the result compared by the server comparison portion 222 that the score is larger than the server threshold value, the server output-controlling portion 224 outputs the characteristic of the speech received through the input terminal IN 7 to the server speech recognition unit 184 through the output terminal OUT N+P+10 .
- however, if it is recognized by the result compared by the server comparison portion 222 that the score is not larger than the server threshold value, the server output-controlling portion 224 transmits the characteristic of the speech received through the input terminal IN 7 to a different server through the output terminal OUT N+P+10.
- FIG. 13 is a block diagram of the server speech recognition unit 184 shown in FIG. 8 according to an embodiment 184 A of the present invention.
- the server speech recognition unit 184 A of FIG. 13 includes a server decoder selection portion 260 and first through R-th speech recognition decoders 262 , 264 , . . . , and 266 .
- R is the number of topics checked by the server topic-checking portion 220 shown in FIG. 11 . That is, the server speech recognition unit 184 shown in FIG. 8 may include a speech recognition decoder according to each topic, as shown in FIG. 13 .
- the server decoder selection portion 260 selects a speech recognition decoder corresponding to a detected most similar topic input from the server topic-checking portion 220 through an input terminal IN 8 , from the first through R-th speech recognition decoders 262 , 264 , . . . , and 266 .
- the server decoder selection portion 260 outputs the characteristic of the speech input from the server output-controlling portion 224 through the input terminal IN 8 to the selected speech recognition decoder.
- the server decoder selection portion 260 should be activated in response to a compared result input from the server comparison portion 222.
- when activated, the server decoder selection portion 260 selects a speech recognition decoder and outputs the characteristic of the speech to the selected speech recognition decoder, as previously described.
- the r-th (1≦r≦R) speech recognition decoder shown in FIG. 13 recognizes speech from the received characteristic input from the server decoder selection portion 260 and outputs a recognized result through an output terminal OUT N+P+r+12.
- FIG. 14 is a block diagram of the client topic-checking portion 100 shown in FIG. 5 or the server topic-checking portion 220 shown in FIG. 11 according to an embodiment of the present invention.
- the client topic-checking portion 100 or the server topic-checking portion 220 of FIG. 14 includes a keyword storage portion 280 , a keyword search portion 282 , and a score calculation portion 284 .
- FIG. 15 is a flowchart illustrating operation 120 shown in FIG. 6 or operation 240 shown in FIG. 12 according to an embodiment of the present invention.
- Operation 120 or 240 includes searching keywords (operation 300 ) and determining a score of a most similar topic (operation 302 ).
- the keyword search portion 282 searches keywords having a characteristic of speech similar to a characteristic of speech input through an input terminal IN 9 , from a plurality of keywords that have been previously stored in the keyword storage portion 280 , and outputs the searched keywords in a list format to the score calculation portion 284 .
- the keyword storage portion 280 stores a plurality of keywords.
- Each of the keywords stored in the keyword storage portion 280 has its own speech characteristic and scores according to each topic. That is, an i-th keyword Keyword i stored in the keyword storage portion 280 has a format such as [a speech characteristic of Keyword i , Topic 1i , Score 1i , Topic 2i , Score 2i , . . . ].
- Topic ki is a k-th topic for Keyword i
- Score ki is a score of Topic ki .
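The stored keyword format and the keyword search of operation 300 can be modeled directly. In the sketch below the KeywordEntry shape mirrors the format above, while the cosine similarity and the 0.7 cutoff are illustrative assumptions; the patent does not specify the similarity measure:

```python
import math
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class KeywordEntry:
    """Mirrors [a speech characteristic of Keyword_i, Topic_1i, Score_1i, ...]."""
    word: str
    characteristic: List[float]
    topic_scores: Dict[str, float] = field(default_factory=dict)  # Topic_ki -> Score_ki

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search_keywords(feature: List[float], storage: List[KeywordEntry],
                    min_similarity: float = 0.7) -> List[KeywordEntry]:
    """Keyword search portion 282: return, in list format, the stored
    keywords whose speech characteristic is similar to the input one."""
    return [e for e in storage if cosine(feature, e.characteristic) > min_similarity]
```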
- the score calculation portion 284 calculates scores according to each topic from the searched keywords having the list format input from the keyword search portion 282 , selects a largest score from the calculated scores according to each topic, outputs the selected largest score as a score of a most similar topic through an output terminal OUT N+P+R+13 , and outputs a topic having the selected largest score as a most similar topic through an output terminal OUT N+P+R+14 .
- the score calculation portion 284 can calculate scores according to each topic using Equation 1:
- Score(Topic k ) = Score k1 × Score k2 × . . . × Score k#   (1)
- here, Score(Topic k ) is a score for a k-th topic Topic k , and # is a total number of searched keywords having the list format input from the keyword search portion 282. Consequently, as shown in Equation 1, Score(Topic k ) is the product of the scores Score k1 through Score k# for Topic k over the keywords Keyword 1 through Keyword # .
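And the Equation 1 product itself, in the same toy terms (one dict of Topic_ki to Score_ki per searched keyword); keywords that do not list a given topic are simply skipped here, which is one possible reading:

```python
from typing import Dict, List, Tuple

def score_topics(searched: List[Dict[str, float]]) -> Tuple[str, float]:
    """Score(Topic_k) = Score_k1 x Score_k2 x ... x Score_k# (Equation 1);
    return the most similar topic and its score."""
    scores: Dict[str, float] = {}
    for topic_scores in searched:
        for topic, s in topic_scores.items():
            scores[topic] = scores.get(topic, 1.0) * s
    best = max(scores, key=scores.get)
    return best, scores[best]

print(score_topics([{"home_control": 0.9, "movies": 0.3},
                    {"home_control": 0.8}]))  # ('home_control', 0.72...)
```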
- FIG. 16 is a block diagram of an n-th server according to an embodiment of the present invention.
- the n-th server includes a server restoration receiving unit 320 , a server adjustment unit 322 , a server speech recognition unit 324 , a server application unit 326 , and a server compression transmission unit 328 .
- the n-th server may be a server that receives a characteristic of speech transmitted from a first server or a server that receives a characteristic of speech transmitted from a certain server excluding the first server and recognizes the speech.
- the flowchart shown in FIG. 9 may also be a flowchart illustrating operation 44 shown in FIG. 2 according to an embodiment of the present invention.
- in operation 200 shown in FIG. 9, it is determined whether the n-th server itself, instead of the first server itself, can recognize the speech.
- the network via which the characteristic of the speech is transmitted from one server to a different server may be a loss channel or a lossless channel.
- since a loss occurs when the characteristic of the speech is transmitted via the loss channel, a characteristic to be transmitted from the first server (or the n-th server) is compressed, and the n-th server (or the (n+1)-th server) should restore the compressed characteristic of the speech.
- hereinafter, assuming that the characteristic of the speech is transmitted from the first server to the n-th server, FIG. 16 will be described. However, the following description may be applied to a case where the characteristic of the speech is transmitted from the n-th server to the (n+1)-th server.
- the first server may further include the server compression transmission unit 188 .
- the server compression transmission unit 188 compresses the characteristic of the speech according to a result compared by the server comparison portion 222 of the server adjustment unit 182 A and in response to a transmission format signal and transmits the compressed characteristic of the speech to the n-th server through an output terminal OUT N+P+9 via the loss channel.
- the transmission format signal is a signal generated by the server adjustment unit 182 when a network via which the characteristic of the speech is transmitted is a loss channel.
- the server compression transmission unit 188 compresses the characteristic of the speech and transmits the compressed characteristic of the speech in response to the transmission format signal input from the server adjustment unit 182 .
- FIG. 17 is a flowchart illustrating operation 204 according to an embodiment of the present invention when the flowchart shown in FIG. 9 illustrates an embodiment of operation 44 shown in FIG. 2 .
- Operation 204 of FIG. 17 includes compressing and transmitting a characteristic of speech depending on whether the characteristic of the speech is to be transmitted via a loss channel or a lossless channel (operations 340 through 344 ).
- the server adjustment unit 182 determines whether a characteristic of speech is transmitted via the loss channel or the lossless channel. If it is determined that the received characteristic of the speech is to be transmitted via the lossless channel, in operation 342 , the server adjustment unit 182 transmits the received characteristic of the speech to the n-th server through an output terminal OUT N+P+8 via the lossless channel.
- however, if it is determined that the received characteristic of the speech is to be transmitted via the loss channel, the server adjustment unit 182 generates a transmission format signal and outputs the transmission format signal to the server compression transmission unit 188.
- in operation 344, the server compression transmission unit 188 compresses the characteristic of the speech input from the server adjustment unit 182 when the transmission format signal is input from the server adjustment unit 182 and transmits the compressed characteristic of the speech to the n-th server through an output terminal OUT N+P+9 via the loss channel.
- the server restoration receiving unit 320 shown in FIG. 16 receives the characteristic of the speech transmitted from the compression transmission unit 188 shown in FIG. 8 through an input terminal IN 10 , restores the received compressed characteristic of the speech, and outputs the restored characteristic of the speech to the server adjustment unit 322 .
- the n-th server performs operation 44 shown in FIG. 2 using the restored characteristic of the speech.
- the first server shown in FIG. 8 may not include the server compression transmission unit 188 .
- the n-th server shown in FIG. 16 may not include the server restoration receiving unit 320 .
- the server adjustment unit 322 directly receives the characteristic of the speech transmitted from the first server through an input terminal IN 11 , and the n-th server performs operation 44 shown in FIG. 2 using the received characteristic of the speech.
- the server adjustment unit 322 , the server speech recognition unit 324 , the server application unit 326 , and the server compression transmission unit 328 of FIG. 16 perform the same functions as those of the server adjustment unit 182 , the server speech recognition unit 184 , the server application unit 186 , and the server compression transmission unit 188 of FIG. 8 , and thus, a detailed description thereof will be omitted.
- the output terminals OUT N+P+R+15 , OUT N+P+R+16 , and OUT N+P+R+17 shown in FIG. 16 correspond to the output terminals OUT N+P+R+7 , OUT N+P+R+8 , and OUT N+P+R+9 , respectively, shown in FIG. 8 .
- as described above, in the multi-layered speech recognition apparatus and method, since speech recognition is performed in a multi-layered manner using a client and at least one server, which are connected to each other in a multi-layered manner via a network, speech of a user of the client can be recognized with high quality.
- in addition, the client can recognize speech continuously, and the load of speech recognition is optimally dispersed between the client and at least one server, such that the speed of speech recognition can be improved.
Abstract
A multi-layered speech recognition apparatus and method, the apparatus includes a client checking whether the client recognizes the speech using a characteristic of speech to be recognized and recognizing the speech or transmitting the characteristic of the speech according to a checked result; and first through N-th servers, wherein the first server checks whether the first server recognizes the speech using the characteristic of the speech transmitted from the client, and recognizes the speech or transmits the characteristic according to a checked result, and wherein an n-th (2≦n≦N) server checks whether the n-th server recognizes the speech using the characteristic of the speech transmitted from an (n−1)-th server, and recognizes the speech or transmits the characteristic according to a checked result.
Description
- This application is a continuation of U.S. application Ser. No. 11/120,983, filed May 4, 2005, which claims the benefit of Korean Patent Application No. 2004-80352, filed on Oct. 8, 2004 in the Korean Intellectual Property Office, the disclosures of which are herein incorporated by reference.
- 1. Field of the Invention
- The present invention relates to speech recognition, and more particularly, to a speech recognition apparatus and method using a terminal and at least one server.
- 2. Description of the Related Art
- A conventional speech recognition method by which speech recognition is performed only at a terminal is disclosed in U.S. Pat. No. 6,594,630. In the disclosed conventional method, all procedures of speech recognition are performed only at a terminal. Thus, in the conventional method, due to limitation of resources of the terminal, speech cannot be recognized with high quality.
- A conventional speech recognition method by which speech is recognized using only a server when a terminal and the server are connected to each other, is disclosed in U.S. Pat. No. 5,819,220. In the disclosed conventional method, the terminal simply receives the speech and transmits the received speech to the server, and the server recognizes the speech transmitted from the terminal. In the conventional method, since all speech input is directed to the server, the load on the server gets very high, and since the speech should be transmitted to the server so that the server can recognize the speech, the speed of speech recognition is reduced.
- A conventional speech recognition method, by which speech recognition is performed by both a terminal and a server, is disclosed in U.S. Pat. No. 6,487,534. In the disclosed conventional method, since an Internet search domain is targeted, an applied range thereof is narrow, and the speech recognition method cannot be embodied.
- According to an aspect of the present invention, there is provided a multi-layered speech recognition apparatus to recognize speech in a multi-layered manner using a client and at least one server, which are connected to each other in a multi-layered manner via a network.
- According to another aspect of the present invention, there is also provided a multi-layered speech recognition method by which speech is recognized in a multi-layered manner using a client and at least one server, which are connected to each other in a multi-layered manner via a network.
- According to an aspect of the present invention, there is provided a multi-layered speech recognition apparatus, the apparatus including a client extracting a characteristic of speech to be recognized, checking whether the client recognizes the speech using the extracted characteristic of the speech and recognizing the speech or transmitting the characteristic of the speech, according to a checked result; and first through N-th (where N is a positive integer equal to or greater than 1) servers, wherein the first server receives the characteristic of the speech transmitted from the client, checks whether the first server recognizes the speech, using the received characteristic of the speech, and recognizes the speech or transmits the characteristic according to a checked result, and wherein the n-th (2≦n≦N) server receives the characteristic of the speech transmitted from an (n−1)-th server, checks whether the n-th server recognizes the speech, using the received characteristic of the speech, and recognizes the speech or transmits the characteristic according to a checked result.
- According to another aspect of the present invention, there is provided a multi-layered speech recognition method performed in a multi-layered speech recognition apparatus having a client and first through N-th (where N is a positive integer equal to or greater than 1) servers, the method including extracting a characteristic of speech to be recognized, checking whether the client recognizes the speech using the extracted characteristic of the speech, and recognizing the speech or transmitting the characteristic of the speech according to a checked result; and receiving the characteristic of the speech transmitted from the client, checking whether the first server recognizes the speech, using the received characteristic of the speech, and recognizing the speech or transmitting the characteristic according to a checked result, and receiving the characteristic of the speech transmitted from a (n−1)-th (2≦n≦N) server, checking whether the n-th server recognizes the speech, using the received characteristic of the speech, and recognizing the speech or transmitting the characteristic according to a checked result, wherein the extracting of the characteristic of the speech to be recognized is performed by the client, the receiving of the characteristic of the speech transmitted from the client is performed by the first server, and the receiving of the characteristic of the speech transmitted from a (n−1)-th server is performed by the n-th server.
- Additional aspects and/or advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
- These and/or other aspects and advantages of the invention will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
- FIG. 1 is a schematic block diagram of a multi-layered speech recognition apparatus according to an embodiment of the present invention;
- FIG. 2 is a flowchart illustrating a multi-layered speech recognition method performed in the multi-layered speech recognition apparatus shown in FIG. 1;
- FIG. 3 is a block diagram of the client shown in FIG. 1 according to an embodiment of the present invention;
- FIG. 4 is a flowchart illustrating operation 40 shown in FIG. 2 according to an embodiment of the present invention;
- FIG. 5 is a block diagram of the client adjustment unit shown in FIG. 3 according to an embodiment of the present invention;
- FIG. 6 is a flowchart illustrating operation 84 shown in FIG. 4 according to an embodiment of the present invention;
- FIG. 7 is a block diagram of the client speech recognition unit shown in FIG. 3 according to an embodiment of the present invention;
- FIG. 8 is a block diagram of a q-th server according to an embodiment of the present invention;
- FIG. 9 is a flowchart illustrating operation 42 shown in FIG. 2 according to an embodiment of the present invention;
- FIG. 10 is a flowchart illustrating operation 88 shown in FIG. 4 according to an embodiment of the present invention;
- FIG. 11 is a block diagram of the server adjustment unit shown in FIG. 8 according to an embodiment of the present invention;
- FIG. 12 is a flowchart illustrating operation 200 shown in FIG. 9 according to an embodiment of the present invention;
- FIG. 13 is a block diagram of the server speech recognition unit shown in FIG. 8 according to an embodiment of the present invention;
- FIG. 14 is a block diagram of the client topic-checking portion shown in FIG. 5 or the server topic-checking portion shown in FIG. 11 according to an embodiment of the present invention;
- FIG. 15 is a flowchart illustrating operation 120 shown in FIG. 6 or operation 240 shown in FIG. 12 according to an embodiment of the present invention;
- FIG. 16 is a block diagram of an n-th server according to an embodiment of the present invention; and
- FIG. 17 is a flowchart illustrating operation 204 according to an embodiment of the present invention when the flowchart shown in FIG. 9 illustrates an embodiment of operation 44 shown in FIG. 2.
- Reference will now be made in detail to the present embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below in order to explain the present invention by referring to the figures.
- FIG. 1 is a schematic block diagram of a multi-layered speech recognition apparatus according to an embodiment of the present invention. The multi-layered speech recognition apparatus of FIG. 1 includes a client 10 and N servers 20, 22, . . . , and 24 (where N is a positive integer equal to or greater than 1).
- FIG. 2 is a flowchart illustrating a multi-layered speech recognition method performed in the multi-layered speech recognition apparatus shown in FIG. 1. The multi-layered speech recognition method of FIG. 2 includes the client 10 recognizing speech or transmitting a characteristic of the speech (operation 40) and at least one server recognizing the speech that is not recognized by the client 10 itself (operations 42 and 44).
- In operation 40, the client 10 shown in FIG. 1 inputs the speech to be recognized through an input terminal IN 1, extracts a characteristic of the input speech, checks whether the client 10 itself can recognize the speech using the extracted characteristic of the speech, and recognizes the speech or transmits the characteristic of the speech to one of the servers 20, 22, . . . , and 24 according to a checked result. The client 10 has a small capacity of resources, like in a mobile phone, a remote controller, or a robot, and can perform word speech recognition and/or connected word speech recognition. The resources may be a processing speed of a central processing unit (CPU) and the size of a memory that stores data for speech recognition. Word speech recognition recognizes a single word, for example a command such as ‘cleaning’ sent to a robot, and connected word speech recognition recognizes two or more simply connected words required by a mobile phone or the like, such as ‘send message.’
- Hereinafter, a server of the servers 20, 22, . . . , and 24 which directly receives a characteristic of the speech transmitted from the client 10 is referred to as a first server, a server which directly receives a characteristic of the speech transmitted from the first server or from a certain server is referred to as a different server, and the different server is also referred to as an n-th server (2≦n≦N).
- After operation 40, in operation 42, the first server 20, 22, . . . , or 24 receives a characteristic of the speech transmitted from the client 10, checks whether the first server 20, 22, . . . , or 24 itself can recognize the speech using the received characteristic of the speech, and recognizes the speech or transmits the characteristic of the speech to the n-th server according to a checked result.
- After operation 42, in operation 44, the n-th server receives a characteristic of the speech transmitted from an (n−1)-th server, checks whether the n-th server itself can recognize the speech using the received characteristic of the speech, and recognizes the speech or transmits the characteristic of the speech to an (n+1)-th server according to a checked result. For example, when the n-th server itself cannot recognize the speech, the (n+1)-th server performs operation 44, and when the (n+1)-th server itself cannot recognize the speech, an (n+2)-th server performs operation 44. In this way, several servers try to perform speech recognition until the speech is recognized by one of the servers.
- In this case, when one of the first through N-th servers recognizes the speech, a recognized result may be outputted via an output terminal (OUT 1, OUT 2, . . . , or OUT N) but may also be outputted to the client 10. This is because the client 10 can also use the result of speech recognition even though the client 10 has not recognized the speech.
- According to an embodiment of the present invention, when the result of speech recognized by one of the first through N-th servers is outputted to the client 10, the client 10 may also inform a user of the client 10 via an output terminal OUT N+1 whether the speech is recognized by the server.
- Each of the servers 20, 22, . . . , and 24 of FIG. 1 does not extract the characteristic of the speech but receives the characteristic of the speech extracted from the client 10 and can perform speech recognition immediately. Each of the servers 20, 22, . . . , and 24 can retain more resources than the client 10, and the servers 20, 22, . . . , and 24 retain different capacities of resources. The servers are connected to one another via networks 13, 15, . . . , and 17 as shown in FIG. 1, regardless of having small or large resource capacity. Likewise, the servers 20, 22, . . . , and 24 and the client 10 may be connected to one another via networks.
- For example, a home server having a small capacity of resources exists. The home server can recognize speech conversation for controlling household appliances, composed of a comparatively simple natural language, such as ‘please turn on a golf channel’. Also, a service server having a large capacity of resources, such as a ubiquitous robot companion (URC), exists. The service server can recognize a composite command composed of a natural language in a comparatively long sentence, such as ‘please let me know what movie is now showing’.
- In the multi-layered speech recognition apparatus and method shown in FIGS. 1 and 2, the client 10 tries to perform speech recognition (operation 40). When the client 10 does not recognize the speech, the first server having a larger capacity of resources than the client 10 tries to perform speech recognition (operation 42). When the first server does not recognize the speech, servers having larger capacities of resources than the first server try to perform speech recognition, one after another (operation 44).
- When the speech is a comparatively simple natural language, the speech can be recognized by the first server having a small resource capacity. In this case, the multi-layered speech recognition method shown in FIG. 2 may include operations 40 and 42 and may not include operation 44. However, when the speech is a natural language in a comparatively long sentence, the speech can be recognized by a different server having a large capacity of resources. In this case, the multi-layered speech recognition method shown in FIG. 2 includes operations 40, 42, and 44.
- Hereinafter, the configuration and operation of the multi-layered speech recognition apparatus according to embodiments of the present invention and the multi-layered speech recognition method performed in the multi-layered speech recognition apparatus will be described with reference to the accompanying drawings.
-
FIG. 3 is a block diagram of theclient 10 shown inFIG. 1 according to anembodiment 10A of the present invention. Theclient 10A ofFIG. 3 includes aspeech input unit 60, a speechcharacteristic extraction unit 62, aclient adjustment unit 64, a clientspeech recognition unit 66, aclient application unit 68, and a clientcompression transmission unit 70. -
FIG. 4 is aflowchart illustrating operation 40 shown inFIG. 2 according to anembodiment 40A of the present invention.Operation 40A ofFIG. 4 includes extracting a characteristic of speech using a detected valid speech section (operations 80 and 82) and recognizing the speech or transmitting the extracted characteristic of the speech depending on whether the speech can be recognized by a client itself (operations 84 through 88). - In
operation 80, thespeech input unit 60 shown inFIG. 3 inputs speech through, for example, a microphone, through an input terminal IN2 from the outside, detects a valid speech section from the input speech, and outputs the detected valid speech section to the speechcharacteristic extraction unit 62. - After
operation 80, in operation 82, the speech characteristic extraction unit 62 extracts a characteristic of the speech to be recognized from the valid speech section and outputs the extracted characteristic of the speech to the client adjustment unit 64. Here, the speech characteristic extraction unit 62 can also extract the characteristic of the speech in a vector format from the valid speech section. - According to another embodiment of the present invention, the
client 10A shown in FIG. 3 may not include the speech input unit 60, and operation 40A shown in FIG. 4 may also not include operation 80. In this case, in operation 82, the speech characteristic extraction unit 62 directly receives the speech to be recognized through the input terminal IN2, extracts the characteristic of the input speech, and outputs the extracted characteristic to the client adjustment unit 64.
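- A minimal sketch of this front end (operations 80 and 82) is given below. It assumes NumPy; the frame length, the energy threshold, and the particular feature computed are illustrative choices, since the embodiments do not prescribe them.

```python
import numpy as np

# Illustrative front end for operations 80 and 82: detect a valid speech
# section by frame energy, then extract one characteristic vector per frame.
# Frame length, threshold, and feature type are assumptions.

def detect_valid_section(samples, frame_len=160, energy_threshold=1e-4):
    usable = len(samples) // frame_len * frame_len
    frames = samples[:usable].reshape(-1, frame_len)
    energies = (frames ** 2).mean(axis=1)
    voiced = np.flatnonzero(energies > energy_threshold)
    if voiced.size == 0:
        return np.empty((0, frame_len))               # no valid speech section found
    return frames[voiced[0]:voiced[-1] + 1]           # operation 80

def extract_characteristic(section):
    # Operation 82: a characteristic of the speech in a vector format.
    spectrum = np.abs(np.fft.rfft(section, axis=1))[:, :13]
    log_energy = np.log((section ** 2).mean(axis=1, keepdims=True) + 1e-10)
    return np.hstack([log_energy, spectrum])
```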
- After operation 82, in operations 84 through 88, the client adjustment unit 64 checks whether the client 10 itself can recognize the speech using the characteristic extracted by the speech characteristic extraction unit 62 and, according to a checked result, transmits the characteristic of the speech to a first server through an output terminal OUTN+3 or outputs the characteristic of the speech to the client speech recognition unit 66. - For example, in
operation 84, the client adjustment unit 64 determines whether the client 10 itself can recognize the speech, using the characteristic of the speech extracted by the speech characteristic extraction unit 62. If it is determined that the client 10 itself cannot recognize the speech, in operation 88, the client adjustment unit 64 transmits the extracted characteristic of the speech to the first server through the output terminal OUTN+3. However, if it is determined that the client 10 itself can recognize the speech, the client adjustment unit 64 outputs the characteristic of the speech extracted by the speech characteristic extraction unit 62 to the client speech recognition unit 66. - Thus, in
operation 86, the client speech recognition unit 66 recognizes the speech from the characteristic input from the client adjustment unit 64. In this case, the client speech recognition unit 66 can output a recognized result in a text format to the client application unit 68. - The
client application unit 68 shown in FIG. 3 performs the function of the client 10 using a result recognized by the client speech recognition unit 66 and outputs a performed result through an output terminal OUTN+2. For example, when the client 10 is a robot, the function performed by the client application unit 68 may be a function of controlling the operation of the robot. - The
client 10 shown in FIG. 3 may not include the client application unit 68. In this case, the client speech recognition unit 66 directly outputs a recognized result to the outside. -
FIG. 5 is a block diagram of the client adjustment unit 64 shown in FIG. 3 according to an embodiment 64A of the present invention. The client adjustment unit 64A of FIG. 5 includes a client topic-checking portion 100, a client comparison portion 102, and a client output-controlling portion 104. -
FIG. 6 is a flowchart illustrating operation 84 shown in FIG. 4 according to an embodiment 84A of the present invention. Operation 84A includes calculating a score of the topic that is most similar to an extracted characteristic of speech (operation 120) and comparing the calculated score with a client threshold value (operation 122). - After
operation 82, in operation 120, the client topic-checking portion 100 detects the topic that is most similar to the characteristic of the speech extracted by the speech characteristic extraction unit 62 and input through an input terminal IN3, calculates a score of the detected most similar topic, outputs the calculated score to the client comparison portion 102, and outputs the most similar topic to the client speech recognition unit 66 through an output terminal OUTN+5. - After
operation 120, in operation 122, the client comparison portion 102 compares the detected score with the client threshold value and outputs a compared result to the client output-controlling portion 104 and to the client speech recognition unit 66 through an output terminal OUTN+6. Here, the client threshold value is a predetermined value and may be determined experimentally. - For a better understanding of the present invention, assume that the score is larger than the client threshold value when the speech can be recognized by the client itself. If the score is larger than the client threshold value, the method proceeds to
operation 86, and the client speech recognition unit 66 recognizes the speech. However, if the score is not larger than the client threshold value, the method proceeds to operation 88, and the client adjustment unit 64 transmits the extracted characteristic of the speech to the first server. - To this end, the client output-controlling
portion 104 outputs the extracted characteristic of the speech input from the speech characteristic extraction unit 62 through an input terminal IN3 to the client speech recognition unit 66 through an output terminal OUTN+4, or transmits the characteristic of the speech to the first server through the output terminal OUTN+4 (where OUTN+4 corresponds to the output terminal OUTN+3 shown in FIG. 3), according to a result compared by the client comparison portion 102. More specifically, if it is recognized from the result compared by the client comparison portion 102 that the score is larger than the client threshold value, the client output-controlling portion 104 outputs the extracted characteristic of the speech input from the speech characteristic extraction unit 62 through the input terminal IN3 to the client speech recognition unit 66 through the output terminal OUTN+4. However, if it is recognized from the result compared by the client comparison portion 102 that the score is not larger than the client threshold value, in operation 88, the client output-controlling portion 104 transmits the extracted characteristic of the speech input from the speech characteristic extraction unit 62 through the input terminal IN3 to the first server through the output terminal OUTN+4.
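- In code, the whole of operations 84 through 88 reduces to one threshold test. The sketch below uses assumed names (check_topic, recognize_locally, send_to_first_server, CLIENT_THRESHOLD) for the portions described above.

```python
# Sketch of the client adjustment unit 64 (operations 84 through 88).
# check_topic stands in for the client topic-checking portion 100; the two
# callbacks stand in for the client speech recognition unit 66 and for the
# transmission toward the first server. All names are assumed.

CLIENT_THRESHOLD = 0.5  # predetermined and, per the description, set experimentally

def route_on_client(characteristic, check_topic,
                    recognize_locally, send_to_first_server):
    topic, score = check_topic(characteristic)           # operation 120 of FIG. 6
    if score > CLIENT_THRESHOLD:                         # operation 122
        return recognize_locally(characteristic, topic)  # proceed to operation 86
    send_to_first_server(characteristic)                 # operation 88
    return None
```
-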
FIG. 7 is a block diagram of the client speech recognition unit 66 shown in FIG. 3 according to an embodiment 66A of the present invention. The client speech recognition unit 66A of FIG. 7 includes a client decoder selection portion 160 and first through P-th speech recognition decoders, where P corresponds to the number of topics checked by the client topic-checking portion 100 shown in FIG. 5. That is, the client speech recognition unit 66 shown in FIG. 3 may include a speech recognition decoder according to each topic, as shown in FIG. 7. - The client
decoder selection portion 160 selects a speech recognition decoder corresponding to a detected most similar topic input from the client topic-checking portion 100 through an input terminal IN4, from among the first through P-th speech recognition decoders. In addition, the client decoder selection portion 160 outputs the characteristic of the speech input from the client output-controlling portion 104 through the input terminal IN4 to the selected speech recognition decoder. To perform this operation, the client decoder selection portion 160 should be activated in response to a compared result input from the client comparison portion 102. More specifically, if it is recognized from the compared result input from the client comparison portion 102 that the score is larger than the client threshold value, the client decoder selection portion 160 selects a speech recognition decoder and outputs the characteristic of the speech to the selected speech recognition decoder, as previously described. -
FIG. 7 recognizes speech from the characteristic output from the clientdecoder selection portion 160 and outputs a recognized result through an output terminal OUTN+6+p. -
FIG. 8 is a block diagram of a q-th server (1≦q≦N) according to an embodiment of the present invention. For explanatory convenience, the q-th server is assumed to be the first server; however, the present invention is not limited to this assumption. The first server of FIG. 8 includes a client restoration receiving unit 180, a server adjustment unit 182, a server speech recognition unit 184, a server application unit 186, and a server compression transmission unit 188. -
FIG. 9 is a flowchart illustrating operation 42 shown in FIG. 2 according to an embodiment of the present invention. Operation 42 of FIG. 9 includes recognizing speech or transmitting a received characteristic of the speech depending on whether the first server itself can recognize the speech (operations 200 through 204). - Before describing the apparatus shown in
FIG. 8 and the method shown in FIG. 9, an environment of a network via which the characteristic of the speech is transmitted from the client 10 to the first server will now be described. - According to an embodiment of the present invention, the network via which the characteristic of the speech is transmitted from the
client 10 to the first server may be a loss channel or a lossless channel. Here, the loss channel is a channel via which a loss occurs when data or a signal is transmitted, and may be, for example, a wire/wireless speech channel. In contrast, a lossless channel is a channel via which a loss does not occur when data or a signal is transmitted, and may be a wireless LAN data channel using, for example, the transmission control protocol (TCP). Since a loss occurs when the characteristic of the speech is transmitted via the loss channel, in order to transmit the characteristic of the speech via the loss channel, the characteristic to be transmitted from the client 10 is compressed, and the first server should restore the compressed characteristic of the speech. - For example, as shown in
FIG. 3, the client 10A may further include the client compression transmission unit 70. Here, the client compression transmission unit 70 compresses the characteristic of the speech according to a result compared by the client comparison portion 102 of the client adjustment unit 64A and in response to a transmission format signal, and transmits the compressed characteristic of the speech to the first server through an output terminal OUTN+4 via the loss channel. The transmission format signal is a signal generated by the client adjustment unit 64 when the network via which the characteristic of the speech is transmitted is a loss channel. More specifically, if it is recognized from the result compared by the client comparison portion 102 that the score is not larger than the client threshold value, the client compression transmission unit 70 compresses the characteristic of the speech and transmits the compressed characteristic of the speech in response to the transmission format signal input from the client adjustment unit 64. -
FIG. 10 is a flowchart illustrating operation 88 shown in FIG. 4 according to an embodiment of the present invention. Operation 88 of FIG. 10 includes compressing and transmitting a characteristic of speech depending on whether the characteristic of the speech is to be transmitted via a loss channel or a lossless channel (operations 210 through 214). - In
operation 210, the client adjustment unit 64 determines whether the characteristic extracted by the speech characteristic extraction unit 62 is to be transmitted via the loss channel or the lossless channel. That is, the client adjustment unit 64 determines whether the network via which the characteristic of the speech is transmitted is a loss channel or a lossless channel. - If it is determined that the extracted characteristic of the speech is to be transmitted via the lossless channel, in
operation 212, the client adjustment unit 64 transmits the characteristic of the speech extracted by the speech characteristic extraction unit 62 to the first server through an output terminal OUTN+3 via the lossless channel. - However, if it is determined that the extracted characteristic of the speech is to be transmitted via the loss channel, the
client adjustment unit 64 generates a transmission format signal and outputs the transmission format signal to the client compression transmission unit 70. In this case, in operation 214, the client compression transmission unit 70 compresses the characteristic of the speech extracted by the speech characteristic extraction unit 62 and input from the client adjustment unit 64, and transmits the compressed characteristic of the speech to the first server through an output terminal OUTN+4 via the loss channel.
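- The branch of operations 210 through 214, together with the matching restoration step described next, can be sketched as follows. The zlib compression and pickle serialization are stand-ins chosen for the example; the embodiments require only that the characteristic be compressed before a loss channel and restored on receipt.

```python
import pickle
import zlib

# Sketch of operations 210 through 214 and of the matching restoration step.
# The channel object, pickle serialization, and zlib compression are all
# assumptions made for this example.

def transmit_characteristic(characteristic, channel):
    payload = pickle.dumps(characteristic)
    if channel.is_lossless:                     # operation 210: inspect the network
        channel.send(payload)                   # operation 212: send as-is
    else:
        channel.send(zlib.compress(payload))    # operation 214: compress, then send

def receive_characteristic(channel):
    payload = channel.recv()
    if not channel.is_lossless:
        payload = zlib.decompress(payload)      # restore the compressed characteristic
    return pickle.loads(payload)
```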
- The client restoration receiving unit 180 shown in FIG. 8 receives the compressed characteristic of the speech transmitted from the client compression transmission unit 70 shown in FIG. 3 through an input terminal IN5, restores the received compressed characteristic of the speech, and outputs the restored characteristic of the speech to the server adjustment unit 182. In this case, the first server shown in FIG. 8 performs operation 42 shown in FIG. 2 using the restored characteristic of the speech. - According to another embodiment of the present invention, the
client 10A shown in FIG. 3 may not include the client compression transmission unit 70. In this case, the first server shown in FIG. 8 may not include the client restoration receiving unit 180; the server adjustment unit 182 directly receives the characteristic of the speech transmitted from the client 10 through an input terminal IN6, and the first server shown in FIG. 8 performs operation 42 shown in FIG. 2 using the received characteristic of the speech. - Hereinafter, for a better understanding of the present invention, assuming that the first server shown in
FIG. 8 does not include the client restoration receiving unit 180, the operation of the first server shown in FIG. 8 will be described. However, the present invention is not limited to this. - The
server adjustment unit 182 receives the characteristic of the speech transmitted from the client 10 through the input terminal IN6, checks whether the first server itself can recognize the speech using the received characteristic of the speech, and transmits the received characteristic of the speech to a different server or outputs the received characteristic of the speech to the server speech recognition unit 184, according to a checked result (operations 200 through 204). - In
operation 200, the server adjustment unit 182 determines whether the first server itself can recognize the speech, using the received characteristic of the speech. If it is determined by the server adjustment unit 182 that the first server itself can recognize the speech, in operation 202, the server speech recognition unit 184 recognizes the speech using the received characteristic of the speech input from the server adjustment unit 182 and outputs a recognized result. In this case, the server speech recognition unit 184 can output the recognized result in a textual format. - The
server application unit 186 performs the function of the first server using the recognized result and outputs a performed result through an output terminal OUTN+P+7. For example, when the first server is a home server, a function performed by the server application unit 186 may be a function of controlling household appliances or searching for information. - However, if it is determined by the
server adjustment unit 182 that the first server itself cannot recognize the speech, in operation 204, the server adjustment unit 182 transmits the received characteristic of the speech to a different server through an output terminal OUTN+P+8. -
FIG. 11 is a block diagram of the server adjustment unit 182 shown in FIG. 8 according to an embodiment 182A of the present invention. The server adjustment unit 182A includes a server topic-checking portion 220, a server comparison portion 222, and a server output-controlling portion 224. -
FIG. 12 is a flowchart illustrating operation 200 shown in FIG. 9 according to an embodiment 200A of the present invention. Operation 200A of FIG. 12 includes calculating a score of the topic that is most similar to a received characteristic of speech (operation 240) and comparing the score with a server threshold value (operation 242). - In
operation 240, the server topic-checking portion 220 detects the topic that is most similar to the characteristic of the speech, which is transmitted from the client 10 and received through an input terminal IN7, calculates a score of the detected most similar topic, outputs the calculated score to the server comparison portion 222, and outputs the most similar topic to the server speech recognition unit 184 through an output terminal OUTN+P+11. - After
operation 240, in operation 242, the server comparison portion 222 compares the score detected by the server topic-checking portion 220 with the server threshold value and outputs a compared result to the server output-controlling portion 224 and to the server speech recognition unit 184 through an output terminal OUTN+P+12. Here, the server threshold value is a predetermined value and may be determined experimentally. -
operation 202, the serverspeech recognition unit 184 recognizes the speech. However, if the score is not larger than the server threshold value, inoperation 204, theserver adjustment unit 182 transmits the received characteristic of the speech to a different server. - To this end, the server output-controlling
portion 224 outputs the characteristic of the speech received through an input terminal IN7 to the server speech recognition unit 184 through an output terminal OUTN+P+10, or transmits the received characteristic of the speech to a different server through the output terminal OUTN+P+10 (where OUTN+P+10 corresponds to the output terminal OUTN+P+8 shown in FIG. 8), in response to a result compared by the server comparison portion 222. More specifically, if it is recognized from the result compared by the server comparison portion 222 that the score is larger than the server threshold value, the server output-controlling portion 224 outputs the characteristic of the speech received through the input terminal IN7 to the server speech recognition unit 184 through the output terminal OUTN+P+10. However, if it is recognized from the result compared by the server comparison portion 222 that the score is not larger than the server threshold value, in operation 204, the server output-controlling portion 224 transmits the characteristic of the speech received through the input terminal IN7 to a different server through the output terminal OUTN+P+10.
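- The server-side routing mirrors the client sketch given earlier, with a per-server threshold and a hop to a different server on failure (assumed names).

```python
# Sketch of the server adjustment unit 182 (operations 200 through 204):
# the same shape as the client routing, except that a failed check forwards
# the characteristic to a different, larger server. All names are assumed.

def route_on_server(characteristic, check_topic, server_threshold,
                    recognize_here, forward_to_next_server):
    topic, score = check_topic(characteristic)          # operation 240 of FIG. 12
    if score > server_threshold:                        # operation 242
        return recognize_here(characteristic, topic)    # operation 202
    forward_to_next_server(characteristic)              # operation 204
    return None
```
-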
FIG. 13 is a block diagram of the server speech recognition unit 184 shown in FIG. 8 according to an embodiment 184A of the present invention. The server speech recognition unit 184A of FIG. 13 includes a server decoder selection portion 260 and first through R-th speech recognition decoders, where R corresponds to the number of topics checked by the server topic-checking portion 220 shown in FIG. 11. That is, the server speech recognition unit 184 shown in FIG. 8 may include a speech recognition decoder according to each topic, as shown in FIG. 13. - The server
decoder selection portion 260 selects a speech recognition decoder corresponding to a detected most similar topic input from the server topic-checking portion 220 through an input terminal IN8, from among the first through R-th speech recognition decoders. In addition, the server decoder selection portion 260 outputs the characteristic of the speech input from the server output-controlling portion 224 through the input terminal IN8 to the selected speech recognition decoder. To perform this operation, the server decoder selection portion 260 should be activated in response to a compared result input from the server comparison portion 222. More specifically, if it is recognized from the compared result input from the server comparison portion 222 that the score is larger than the server threshold value, the server decoder selection portion 260 selects a speech recognition decoder and outputs the characteristic of the speech to the selected speech recognition decoder, as previously described. -
FIG. 13 recognizes speech from the received characteristic input from the serverdecoder selection portion 260 and outputs a recognized result through an output terminal OUTN+P+r+12. -
FIG. 14 is a block diagram of the client topic-checking portion 100 shown in FIG. 5 or the server topic-checking portion 220 shown in FIG. 11 according to an embodiment of the present invention. The client topic-checking portion 100 or the server topic-checking portion 220 of FIG. 14 includes a keyword storage portion 280, a keyword search portion 282, and a score calculation portion 284. -
FIG. 15 is a flowchart illustrating operation 120 shown in FIG. 6 or operation 240 shown in FIG. 12 according to an embodiment of the present invention. The illustrated operation includes searching for keywords similar to an input characteristic of speech (operation 300) and calculating scores according to each topic to determine the most similar topic and its score (operation 302). - In
operation 300, the keyword search portion 282 searches for keywords having a characteristic of speech similar to a characteristic of speech input through an input terminal IN9, from among a plurality of keywords that have been previously stored in the keyword storage portion 280, and outputs the searched keywords in a list format to the score calculation portion 284. To this end, the keyword storage portion 280 stores a plurality of keywords. Each of the keywords stored in the keyword storage portion 280 has its own speech characteristic and scores according to each topic. That is, an i-th keyword Keywordi stored in the keyword storage portion 280 has a format such as [a speech characteristic of Keywordi, Topic1i, Score1i, Topic2i, Score2i, . . . ]. Here, Topicki is a k-th topic for Keywordi and Scoreki is a score of Topicki. - After
operation 300, inoperation 302, thescore calculation portion 284 calculates scores according to each topic from the searched keywords having the list format input from thekeyword search portion 282, selects a largest score from the calculated scores according to each topic, outputs the selected largest score as a score of a most similar topic through an output terminal OUTN+P+R+13, and outputs a topic having the selected largest score as a most similar topic through an output terminal OUTN+P+R+14. For example, thescore calculation portion 284 can calculate scores according to each topic using Equation 1: -
Score(Topick) = Scorek1 × Scorek2 × . . . × Scorek# . . . (1)
- where Score(Topick) is the score for the k-th topic Topick, and # is the total number of searched keywords having the list format input from the
keyword search portion 282. Consequently, as shown in Equation 1, Score(Topick) is the result of multiplying the scores Scorek1 through Scorek# for Topick over the keywords Keyword1 through Keyword#.
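- A sketch of the keyword store and of Equation 1 follows (hypothetical Python). The similarity predicate over speech characteristics is abstracted away, and keywords carrying no score for a given topic are treated as contributing a neutral factor of 1.0, which is an assumption beyond the description.

```python
import math

# Sketch of operations 300 and 302 of FIG. 15. Each stored keyword mirrors the
# format [speech characteristic of Keywordi, Topic1i, Score1i, Topic2i, Score2i, ...].
# `similar` is an assumed predicate comparing two speech characteristics.

def most_similar_topic(characteristic, keyword_store, similar):
    matched = [kw for kw in keyword_store
               if similar(kw["characteristic"], characteristic)]  # operation 300
    if not matched:
        return None, 0.0
    topics = set().union(*(kw["scores"].keys() for kw in matched))
    # Equation 1: Score(Topick) is the product of Scoreki over the # matched keywords.
    totals = {topic: math.prod(kw["scores"].get(topic, 1.0) for kw in matched)
              for topic in topics}
    best = max(totals, key=totals.get)     # operation 302: the largest score wins
    return best, totals[best]
```
-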
FIG. 16 is a block diagram of an n-th server according to an embodiment of the present invention. The n-th server includes a server restoration receiving unit 320, a server adjustment unit 322, a server speech recognition unit 324, a server application unit 326, and a server compression transmission unit 328. As described above, the n-th server may be a server that receives a characteristic of speech transmitted from the first server, or a server that receives a characteristic of speech transmitted from a certain server other than the first server, and recognizes the speech. - Before describing the apparatus shown in
FIG. 16, an environment of a network via which a characteristic of speech is transmitted will now be described. - The flowchart shown in
FIG. 9 may also be a flowchart illustrating operation 44 shown in FIG. 2 according to an embodiment of the present invention. In this case, in operation 200 shown in FIG. 9, it is determined whether the n-th server itself, instead of the first server, can recognize the speech. - According to an embodiment of the present invention, the network via which the characteristic of the speech is transmitted from one server to a different server, for example, from the first server to the n-th server or from the n-th server to an (n+1)-th server, may be a loss channel or a lossless channel. In this case, since a loss occurs when the characteristic of the speech is transmitted via the loss channel, in order to transmit the characteristic of the speech via the loss channel, the characteristic to be transmitted from the first server (or the n-th server) is compressed, and the n-th server (or the (n+1)-th server) should restore the compressed characteristic of the speech.
- Hereinafter, for a better understanding of the present invention, assuming that the characteristic of the speech is transmitted from the first server to the n-th server,
FIG. 16 will be described. However, the following description may be applied to a case where the characteristic of the speech is transmitted from the n-th server to the (n+1)-th server. - As shown in
FIG. 8, the first server may further include the server compression transmission unit 188. Here, the server compression transmission unit 188 compresses the characteristic of the speech according to a result compared by the server comparison portion 222 of the server adjustment unit 182A and in response to a transmission format signal, and transmits the compressed characteristic of the speech to the n-th server through an output terminal OUTN+P+9 via the loss channel. The transmission format signal is a signal generated by the server adjustment unit 182 when the network via which the characteristic of the speech is transmitted is a loss channel. More specifically, if it is recognized from the result compared by the server comparison portion 222 that the score is not larger than the server threshold value, the server compression transmission unit 188 compresses the characteristic of the speech and transmits the compressed characteristic of the speech in response to the transmission format signal input from the server adjustment unit 182. -
FIG. 17 is a flowchart illustrating operation 204 according to an embodiment of the present invention when the flowchart shown in FIG. 9 illustrates an embodiment of operation 44 shown in FIG. 2. Operation 204 of FIG. 17 includes compressing and transmitting a characteristic of speech depending on whether the characteristic of the speech is to be transmitted via a loss channel or a lossless channel (operations 340 through 344). - In
operation 340, the server adjustment unit 182 determines whether the characteristic of the speech is to be transmitted via the loss channel or the lossless channel. If it is determined that the received characteristic of the speech is to be transmitted via the lossless channel, in operation 342, the server adjustment unit 182 transmits the received characteristic of the speech to the n-th server through an output terminal OUTN+P+8 via the lossless channel. - However, if it is determined that the received characteristic of the speech is to be transmitted via the loss channel, the
server adjustment unit 182 generates a transmission format signal and outputs the transmission format signal to the server compression transmission unit 188. In this case, in operation 344, the server compression transmission unit 188 compresses the characteristic of the speech input from the server adjustment unit 182 in response to the transmission format signal and transmits the compressed characteristic of the speech to the n-th server through an output terminal OUTN+P+9 via the loss channel. - Thus, the server
restoration receiving unit 320 shown in FIG. 16 receives the compressed characteristic of the speech transmitted from the server compression transmission unit 188 shown in FIG. 8 through an input terminal IN10, restores the received compressed characteristic of the speech, and outputs the restored characteristic of the speech to the server adjustment unit 322. In this case, the n-th server performs operation 44 shown in FIG. 2 using the restored characteristic of the speech. - According to another embodiment of the present invention, the first server shown in
FIG. 8 may not include the server compression transmission unit 188. In this case, the n-th server shown in FIG. 16 may not include the server restoration receiving unit 320. The server adjustment unit 322 directly receives the characteristic of the speech transmitted from the first server through an input terminal IN11, and the n-th server performs operation 44 shown in FIG. 2 using the received characteristic of the speech. - The
server adjustment unit 322, the server speech recognition unit 324, the server application unit 326, and the server compression transmission unit 328 of FIG. 16 perform the same functions as those of the server adjustment unit 182, the server speech recognition unit 184, the server application unit 186, and the server compression transmission unit 188 of FIG. 8, and thus a detailed description thereof will be omitted. Accordingly, the output terminals OUTN+P+R+15, OUTN+P+R+16, and OUTN+P+R+17 shown in FIG. 16 correspond to the output terminals OUTN+P+7, OUTN+P+8, and OUTN+P+9, respectively, shown in FIG. 8. - As described above, in the multi-layered speech recognition apparatus and method according to the present invention, since speech recognition is performed in a multi-layered manner using a client and at least one server, which are connected to one another in a multi-layered manner via a network, a user of the client can be provided with high-quality speech recognition. For example, the client can recognize speech continuously, and the speech recognition workload is optimally dispersed between the client and the at least one server, such that the speed of speech recognition can be improved.
- Although a few embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
Claims (3)
1. A method of speech recognition, the method comprising:
obtaining speech from a user;
characterizing the speech in a client; and
optimally distributing a workload of the speech recognition in the client and servers, based on the characterizing of the speech.
2. The method of claim 1, wherein the distributing of the workload of the speech recognition comprises determining whether the client recognizes the speech, and transmitting the speech that the client itself does not recognize.
3. The method of claim 1, wherein the distributing of the workload of the speech recognition in the client and server is performed by the client and a first server, the distributing of the workload of the speech recognition in servers is performed by an n-th server, and
wherein the first server has a larger resource capacity than the client and the n-th server has a larger resource capacity than the (n−1)-th server.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/732,576 US8892425B2 (en) | 2004-10-08 | 2013-01-02 | Multi-layered speech recognition apparatus and method |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020040080352A KR100695127B1 (en) | 2004-10-08 | 2004-10-08 | Multi-Layered speech recognition apparatus and method |
US11/120,983 US8370159B2 (en) | 2004-10-08 | 2005-05-04 | Multi-layered speech recognition apparatus and method |
US13/732,576 US8892425B2 (en) | 2004-10-08 | 2013-01-02 | Multi-layered speech recognition apparatus and method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/120,983 Continuation US8370159B2 (en) | 2004-10-08 | 2005-05-04 | Multi-layered speech recognition apparatus and method |
Publications (2)
Publication Number | Publication Date |
---|---|
US20130124197A1 true US20130124197A1 (en) | 2013-05-16 |
US8892425B2 US8892425B2 (en) | 2014-11-18 |
Family
ID=36146471
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/120,983 Active 2028-04-20 US8370159B2 (en) | 2004-10-08 | 2005-05-04 | Multi-layered speech recognition apparatus and method |
US13/478,656 Active US8380517B2 (en) | 2004-10-08 | 2012-05-23 | Multi-layered speech recognition apparatus and method |
US13/732,576 Active US8892425B2 (en) | 2004-10-08 | 2013-01-02 | Multi-layered speech recognition apparatus and method |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/120,983 Active 2028-04-20 US8370159B2 (en) | 2004-10-08 | 2005-05-04 | Multi-layered speech recognition apparatus and method |
US13/478,656 Active US8380517B2 (en) | 2004-10-08 | 2012-05-23 | Multi-layered speech recognition apparatus and method |
Country Status (5)
Country | Link |
---|---|
US (3) | US8370159B2 (en) |
EP (1) | EP1646038B1 (en) |
JP (1) | JP5058474B2 (en) |
KR (1) | KR100695127B1 (en) |
DE (1) | DE602005000628T2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10438590B2 (en) * | 2016-12-31 | 2019-10-08 | Lenovo (Beijing) Co., Ltd. | Voice recognition |
USRE48569E1 (en) | 2013-04-19 | 2021-05-25 | Panasonic Intellectual Property Corporation Of America | Control method for household electrical appliance, household electrical appliance control system, and gateway |
Families Citing this family (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5320064B2 (en) * | 2005-08-09 | 2013-10-23 | モバイル・ヴォイス・コントロール・エルエルシー | Voice-controlled wireless communication device / system |
US8996379B2 (en) * | 2007-03-07 | 2015-03-31 | Vlingo Corporation | Speech recognition text entry for software applications |
US8886545B2 (en) | 2007-03-07 | 2014-11-11 | Vlingo Corporation | Dealing with switch latency in speech recognition |
US8886540B2 (en) * | 2007-03-07 | 2014-11-11 | Vlingo Corporation | Using speech recognition results based on an unstructured language model in a mobile communication facility application |
US10056077B2 (en) | 2007-03-07 | 2018-08-21 | Nuance Communications, Inc. | Using speech recognition results based on an unstructured language model with a music system |
US8838457B2 (en) * | 2007-03-07 | 2014-09-16 | Vlingo Corporation | Using results of unstructured language model based speech recognition to control a system-level function of a mobile communications facility |
US8949266B2 (en) * | 2007-03-07 | 2015-02-03 | Vlingo Corporation | Multiple web-based content category searching in mobile search application |
US20080221889A1 (en) * | 2007-03-07 | 2008-09-11 | Cerra Joseph P | Mobile content search environment speech processing facility |
US20090030691A1 (en) * | 2007-03-07 | 2009-01-29 | Cerra Joseph P | Using an unstructured language model associated with an application of a mobile communication facility |
US8635243B2 (en) * | 2007-03-07 | 2014-01-21 | Research In Motion Limited | Sending a communications header with voice recording to send metadata for use in speech recognition, formatting, and search mobile search application |
US8949130B2 (en) * | 2007-03-07 | 2015-02-03 | Vlingo Corporation | Internal and external speech recognition use with a mobile communication facility |
US8180641B2 (en) * | 2008-09-29 | 2012-05-15 | Microsoft Corporation | Sequential speech recognition with two unequal ASR systems |
US8515762B2 (en) * | 2009-01-22 | 2013-08-20 | Microsoft Corporation | Markup language-based selection and utilization of recognizers for utterance processing |
CN103038818B (en) * | 2010-06-24 | 2016-10-12 | 本田技研工业株式会社 | Communication system between the outer speech recognition system of vehicle-mounted voice identification system and car and method |
US8818797B2 (en) * | 2010-12-23 | 2014-08-26 | Microsoft Corporation | Dual-band speech encoding |
US9953643B2 (en) * | 2010-12-23 | 2018-04-24 | Lenovo (Singapore) Pte. Ltd. | Selective transmission of voice data |
US8996381B2 (en) * | 2011-09-27 | 2015-03-31 | Sensory, Incorporated | Background speech recognition assistant |
US8768707B2 (en) | 2011-09-27 | 2014-07-01 | Sensory Incorporated | Background speech recognition assistant using speaker verification |
JP5821639B2 (en) * | 2012-01-05 | 2015-11-24 | 株式会社デンソー | Voice recognition device |
US9093076B2 (en) * | 2012-04-30 | 2015-07-28 | 2236008 Ontario Inc. | Multipass ASR controlling multiple applications |
US9431012B2 (en) | 2012-04-30 | 2016-08-30 | 2236008 Ontario Inc. | Post processing of natural language automatic speech recognition |
US9583100B2 (en) * | 2012-09-05 | 2017-02-28 | GM Global Technology Operations LLC | Centralized speech logger analysis |
CN104769668B (en) * | 2012-10-04 | 2018-10-30 | 纽昂斯通讯公司 | The improved mixture control for ASR |
CN103730117A (en) * | 2012-10-12 | 2014-04-16 | 中兴通讯股份有限公司 | Self-adaptation intelligent voice device and method |
US9305554B2 (en) * | 2013-07-17 | 2016-04-05 | Samsung Electronics Co., Ltd. | Multi-level speech recognition |
KR102060661B1 (en) * | 2013-07-19 | 2020-02-11 | 삼성전자주식회사 | Method and divece for communication |
DE102014200570A1 (en) * | 2014-01-15 | 2015-07-16 | Bayerische Motoren Werke Aktiengesellschaft | Method and system for generating a control command |
KR102387567B1 (en) * | 2015-01-19 | 2022-04-18 | 삼성전자주식회사 | Method and apparatus for speech recognition |
US9966073B2 (en) | 2015-05-27 | 2018-05-08 | Google Llc | Context-sensitive dynamic update of voice to text model in a voice-enabled electronic device |
US10083697B2 (en) | 2015-05-27 | 2018-09-25 | Google Llc | Local persisting of data for selectively offline capable voice action in a voice-enabled electronic device |
KR101934280B1 (en) * | 2016-10-05 | 2019-01-03 | 현대자동차주식회사 | Apparatus and method for analyzing speech meaning |
US10971157B2 (en) | 2017-01-11 | 2021-04-06 | Nuance Communications, Inc. | Methods and apparatus for hybrid speech recognition processing |
JP6843388B2 (en) * | 2017-03-31 | 2021-03-17 | 株式会社アドバンスト・メディア | Information processing system, information processing device, information processing method and program |
DE102017206281A1 (en) * | 2017-04-12 | 2018-10-18 | Bayerische Motoren Werke Aktiengesellschaft | Processing a voice input |
KR20180118461A (en) | 2017-04-21 | 2018-10-31 | 엘지전자 주식회사 | Voice recognition module and and voice recognition method |
DE102017123443A1 (en) * | 2017-10-09 | 2019-04-11 | Lenze Automation Gmbh | System for controlling and / or diagnosing an electric drive system |
US10839809B1 (en) * | 2017-12-12 | 2020-11-17 | Amazon Technologies, Inc. | Online training with delayed feedback |
US11087766B2 (en) * | 2018-01-05 | 2021-08-10 | Uniphore Software Systems | System and method for dynamic speech recognition selection based on speech rate or business domain |
CN113921016A (en) * | 2021-10-15 | 2022-01-11 | 阿波罗智联(北京)科技有限公司 | Voice processing method, device, electronic equipment and storage medium |
Citations (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5224167A (en) * | 1989-09-11 | 1993-06-29 | Fujitsu Limited | Speech coding apparatus using multimode coding |
US5825830A (en) * | 1995-08-17 | 1998-10-20 | Kopf; David A. | Method and apparatus for the compression of audio, video or other data |
US5960399A (en) * | 1996-12-24 | 1999-09-28 | Gte Internetworking Incorporated | Client/server speech processor/recognizer |
US20010010711A1 (en) * | 1995-11-21 | 2001-08-02 | U.S. Philips Corporation | Digital transmission system for transmitting a digital audio signal being in the form of samples of a specific wordlength and occurring at a specific sampling rate |
US20010031092A1 (en) * | 2000-05-01 | 2001-10-18 | Zeck Norman W. | Method for compressing digital documents with control of image quality and compression rate |
US6327568B1 (en) * | 1997-11-14 | 2001-12-04 | U.S. Philips Corporation | Distributed hardware sharing for speech processing |
US20020046023A1 (en) * | 1995-08-18 | 2002-04-18 | Kenichi Fujii | Speech recognition system, speech recognition apparatus, and speech recognition method |
US20020045961A1 (en) * | 2000-10-13 | 2002-04-18 | Interactive Objects, Inc. | System and method for data transfer optimization in a portable audio device |
US20020059068A1 (en) * | 2000-10-13 | 2002-05-16 | At&T Corporation | Systems and methods for automatic speech recognition |
US20020091528A1 (en) * | 1997-04-14 | 2002-07-11 | Daragosh Pamela Leigh | System and method for providing remote automatic speech recognition and text to speech services via a packet network |
US20020137517A1 (en) * | 2000-05-31 | 2002-09-26 | Williams Bill G. | Wireless communication device with multiple external communication links |
US6487534B1 (en) * | 1999-03-26 | 2002-11-26 | U.S. Philips Corporation | Distributed client-server speech recognition system |
US6615171B1 (en) * | 1997-06-11 | 2003-09-02 | International Business Machines Corporation | Portable acoustic interface for remote access to automatic speech/speaker recognition server |
US6615172B1 (en) * | 1999-11-12 | 2003-09-02 | Phoenix Solutions, Inc. | Intelligent query engine for processing voice based queries |
US6633848B1 (en) * | 1998-04-03 | 2003-10-14 | Vertical Networks, Inc. | Prompt management method supporting multiple languages in a system having a multi-bus structure and controlled by remotely generated commands |
US6633846B1 (en) * | 1999-11-12 | 2003-10-14 | Phoenix Solutions, Inc. | Distributed realtime speech recognition system |
US6650773B1 (en) * | 2000-09-29 | 2003-11-18 | Hewlett-Packard Development Company, L.P. | Method including lossless compression of luminance channel and lossy compression of chrominance channels |
US20040148164A1 (en) * | 2003-01-23 | 2004-07-29 | Aurilab, Llc | Dual search acceleration technique for speech recognition |
US20040192384A1 (en) * | 2002-12-30 | 2004-09-30 | Tasos Anastasakos | Method and apparatus for selective distributed speech recognition |
US6804647B1 (en) * | 2001-03-13 | 2004-10-12 | Nuance Communications | Method and system for on-line unsupervised adaptation in speaker verification |
US20050010422A1 (en) * | 2003-07-07 | 2005-01-13 | Canon Kabushiki Kaisha | Speech processing apparatus and method |
US6898567B2 (en) * | 2001-12-29 | 2005-05-24 | Motorola, Inc. | Method and apparatus for multi-level distributed speech recognition |
US20050259566A1 (en) * | 2001-09-12 | 2005-11-24 | Jae-Hak Chung | Method and apparatus for transferring channel information in ofdm communications |
US20060009980A1 (en) * | 2004-07-12 | 2006-01-12 | Burke Paul M | Allocation of speech recognition tasks and combination of results thereof |
US20060080079A1 (en) * | 2004-09-29 | 2006-04-13 | Nec Corporation | Translation system, translation communication system, machine translation method, and medium embodying program |
US7085560B2 (en) * | 2000-05-31 | 2006-08-01 | Wahoo Communications Corporation | Wireless communications device with artificial intelligence-based distributive call routing |
US7120585B2 (en) * | 2000-03-24 | 2006-10-10 | Eliza Corporation | Remote server object architecture for speech recognition |
US7184957B2 (en) * | 2002-09-25 | 2007-02-27 | Toyota Infotechnology Center Co., Ltd. | Multiple pass speech recognition method and system |
US7343288B2 (en) * | 2002-05-08 | 2008-03-11 | Sap Ag | Method and system for the processing and storing of voice information and corresponding timeline information |
US7366673B2 (en) * | 2001-06-15 | 2008-04-29 | International Business Machines Corporation | Selective enablement of speech recognition grammars |
US7406413B2 (en) * | 2002-05-08 | 2008-07-29 | Sap Aktiengesellschaft | Method and system for the processing of voice data and for the recognition of a language |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH03203794A (en) | 1989-12-29 | 1991-09-05 | Pioneer Electron Corp | Voice remote controller |
US5819220A (en) | 1996-09-30 | 1998-10-06 | Hewlett-Packard Company | Web triggered word set boosting for speech interfaces to the world wide web |
US6418431B1 (en) | 1998-03-30 | 2002-07-09 | Microsoft Corporation | Information retrieval and speech recognition based on language models |
US6377922B2 (en) | 1998-12-29 | 2002-04-23 | At&T Corp. | Distributed recognition system having multiple prompt-specific and response-specific speech recognizers |
US6556970B1 (en) | 1999-01-28 | 2003-04-29 | Denso Corporation | Apparatus for determining appropriate series of words carrying information to be recognized |
JP2001034292A (en) | 1999-07-26 | 2001-02-09 | Denso Corp | Word string recognizing device |
US6606280B1 (en) | 1999-02-22 | 2003-08-12 | Hewlett-Packard Development Company | Voice-operated remote control |
DE19910236A1 (en) | 1999-03-09 | 2000-09-21 | Philips Corp Intellectual Pty | Speech recognition method |
DE60015531T2 (en) * | 1999-03-26 | 2005-03-24 | Scansoft, Inc., Peabody | CLIENT SERVER VOICE RECOGNITION SYSTEM |
US6513006B2 (en) | 1999-08-26 | 2003-01-28 | Matsushita Electronic Industrial Co., Ltd. | Automatic control of household activity using speech recognition and natural language |
KR100331465B1 (en) * | 1999-10-26 | 2002-04-09 | 서평원 | Apparatus for Unificating Multiple Connection Ports in the Network |
JP2001142488A (en) | 1999-11-17 | 2001-05-25 | Oki Electric Ind Co Ltd | Voice recognition communication system |
US6594630B1 (en) | 1999-11-19 | 2003-07-15 | Voice Signal Technologies, Inc. | Voice-activated control for electrical device |
US6397186B1 (en) | 1999-12-22 | 2002-05-28 | Ambush Interactive, Inc. | Hands-free, voice-operated remote control transmitter |
JP2001188787A (en) * | 1999-12-28 | 2001-07-10 | Sony Corp | Device and method for processing conversation and recording medium |
JP2001319045A (en) | 2000-05-11 | 2001-11-16 | Matsushita Electric Works Ltd | Home agent system using vocal man-machine interface and program recording medium |
JP3728177B2 (en) * | 2000-05-24 | 2005-12-21 | キヤノン株式会社 | Audio processing system, apparatus, method, and storage medium |
JP3567864B2 (en) | 2000-07-21 | 2004-09-22 | 株式会社デンソー | Voice recognition device and recording medium |
JP3477432B2 (en) | 2000-08-04 | 2003-12-10 | 旭化成株式会社 | Speech recognition method and server and speech recognition system |
US6785654B2 (en) * | 2001-11-30 | 2004-08-31 | Dictaphone Corporation | Distributed speech recognition system with speech recognition engines offering multiple functionalities |
JP4017887B2 (en) | 2002-02-28 | 2007-12-05 | 富士通株式会社 | Voice recognition system and voice file recording system |
EP1411497A1 (en) * | 2002-10-18 | 2004-04-21 | Koninklijke KPN N.V. | System and method for hierarchical voice activated dialling and service selection |
JP3862169B2 (en) | 2002-12-05 | 2006-12-27 | オムロン株式会社 | Speech recognition service mediation system and speech recognition master reference method used therefor |
-
2004
- 2004-10-08 KR KR1020040080352A patent/KR100695127B1/en not_active IP Right Cessation
-
2005
- 2005-04-18 DE DE602005000628T patent/DE602005000628T2/en active Active
- 2005-04-18 EP EP05252392A patent/EP1646038B1/en not_active Ceased
- 2005-05-04 US US11/120,983 patent/US8370159B2/en active Active
- 2005-10-07 JP JP2005294761A patent/JP5058474B2/en not_active Expired - Fee Related
-
2012
- 2012-05-23 US US13/478,656 patent/US8380517B2/en active Active
-
2013
- 2013-01-02 US US13/732,576 patent/US8892425B2/en active Active
Patent Citations (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5224167A (en) * | 1989-09-11 | 1993-06-29 | Fujitsu Limited | Speech coding apparatus using multimode coding |
US5825830A (en) * | 1995-08-17 | 1998-10-20 | Kopf; David A. | Method and apparatus for the compression of audio, video or other data |
US20020046023A1 (en) * | 1995-08-18 | 2002-04-18 | Kenichi Fujii | Speech recognition system, speech recognition apparatus, and speech recognition method |
US20010010711A1 (en) * | 1995-11-21 | 2001-08-02 | U.S. Philips Corporation | Digital transmission system for transmitting a digital audio signal being in the form of samples of a specific wordlength and occurring at a specific sampling rate |
US5960399A (en) * | 1996-12-24 | 1999-09-28 | Gte Internetworking Incorporated | Client/server speech processor/recognizer |
US20020091528A1 (en) * | 1997-04-14 | 2002-07-11 | Daragosh Pamela Leigh | System and method for providing remote automatic speech recognition and text to speech services via a packet network |
US6615171B1 (en) * | 1997-06-11 | 2003-09-02 | International Business Machines Corporation | Portable acoustic interface for remote access to automatic speech/speaker recognition server |
US6327568B1 (en) * | 1997-11-14 | 2001-12-04 | U.S. Philips Corporation | Distributed hardware sharing for speech processing |
US6633848B1 (en) * | 1998-04-03 | 2003-10-14 | Vertical Networks, Inc. | Prompt management method supporting multiple languages in a system having a multi-bus structure and controlled by remotely generated commands |
US6487534B1 (en) * | 1999-03-26 | 2002-11-26 | U.S. Philips Corporation | Distributed client-server speech recognition system |
US6633846B1 (en) * | 1999-11-12 | 2003-10-14 | Phoenix Solutions, Inc. | Distributed realtime speech recognition system |
US6615172B1 (en) * | 1999-11-12 | 2003-09-02 | Phoenix Solutions, Inc. | Intelligent query engine for processing voice based queries |
US7120585B2 (en) * | 2000-03-24 | 2006-10-10 | Eliza Corporation | Remote server object architecture for speech recognition |
US20010031092A1 (en) * | 2000-05-01 | 2001-10-18 | Zeck Norman W. | Method for compressing digital documents with control of image quality and compression rate |
US20020137517A1 (en) * | 2000-05-31 | 2002-09-26 | Williams Bill G. | Wireless communication device with multiple external communication links |
US7085560B2 (en) * | 2000-05-31 | 2006-08-01 | Wahoo Communications Corporation | Wireless communications device with artificial intelligence-based distributive call routing |
US6650773B1 (en) * | 2000-09-29 | 2003-11-18 | Hewlett-Packard Development Company, L.P. | Method including lossless compression of luminance channel and lossy compression of chrominance channels |
US20020045961A1 (en) * | 2000-10-13 | 2002-04-18 | Interactive Objects, Inc. | System and method for data transfer optimization in a portable audio device |
US20020059068A1 (en) * | 2000-10-13 | 2002-05-16 | At&T Corporation | Systems and methods for automatic speech recognition |
US6804647B1 (en) * | 2001-03-13 | 2004-10-12 | Nuance Communications | Method and system for on-line unsupervised adaptation in speaker verification |
US7366673B2 (en) * | 2001-06-15 | 2008-04-29 | International Business Machines Corporation | Selective enablement of speech recognition grammars |
US20050259566A1 (en) * | 2001-09-12 | 2005-11-24 | Jae-Hak Chung | Method and apparatus for transferring channel information in ofdm communications |
US6898567B2 (en) * | 2001-12-29 | 2005-05-24 | Motorola, Inc. | Method and apparatus for multi-level distributed speech recognition |
US7406413B2 (en) * | 2002-05-08 | 2008-07-29 | Sap Aktiengesellschaft | Method and system for the processing of voice data and for the recognition of a language |
US7343288B2 (en) * | 2002-05-08 | 2008-03-11 | Sap Ag | Method and system for the processing and storing of voice information and corresponding timeline information |
US7184957B2 (en) * | 2002-09-25 | 2007-02-27 | Toyota Infotechnology Center Co., Ltd. | Multiple pass speech recognition method and system |
US20040192384A1 (en) * | 2002-12-30 | 2004-09-30 | Tasos Anastasakos | Method and apparatus for selective distributed speech recognition |
US20040148164A1 (en) * | 2003-01-23 | 2004-07-29 | Aurilab, Llc | Dual search acceleration technique for speech recognition |
US20050010422A1 (en) * | 2003-07-07 | 2005-01-13 | Canon Kabushiki Kaisha | Speech processing apparatus and method |
US20060009980A1 (en) * | 2004-07-12 | 2006-01-12 | Burke Paul M | Allocation of speech recognition tasks and combination of results thereof |
US20060080079A1 (en) * | 2004-09-29 | 2006-04-13 | Nec Corporation | Translation system, translation communication system, machine translation method, and medium embodying program |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
USRE48569E1 (en) | 2013-04-19 | 2021-05-25 | Panasonic Intellectual Property Corporation Of America | Control method for household electrical appliance, household electrical appliance control system, and gateway |
US10438590B2 (en) * | 2016-12-31 | 2019-10-08 | Lenovo (Beijing) Co., Ltd. | Voice recognition |
Also Published As
Publication number | Publication date |
---|---|
KR100695127B1 (en) | 2007-03-14 |
JP5058474B2 (en) | 2012-10-24 |
US8892425B2 (en) | 2014-11-18 |
EP1646038B1 (en) | 2007-02-28 |
DE602005000628D1 (en) | 2007-04-12 |
JP2006106761A (en) | 2006-04-20 |
DE602005000628T2 (en) | 2007-10-31 |
US20060080105A1 (en) | 2006-04-13 |
EP1646038A1 (en) | 2006-04-12 |
KR20060031357A (en) | 2006-04-12 |
US20120232893A1 (en) | 2012-09-13 |
US8370159B2 (en) | 2013-02-05 |
US8380517B2 (en) | 2013-02-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8892425B2 (en) | Multi-layered speech recognition apparatus and method | |
US20210304770A1 (en) | Low power integrated circuit to analyze a digitized audio stream | |
US20060195323A1 (en) | Distributed speech recognition system | |
US7299186B2 (en) | Speech input system, speech portal server, and speech input terminal | |
US20140129223A1 (en) | Method and apparatus for voice recognition | |
WO2014208231A1 (en) | Voice recognition client device for local voice recognition | |
US20070061147A1 (en) | Distributed speech recognition method | |
KR20060077988A (en) | System and method for information providing service through retrieving of context in multimedia communication system | |
KR20170102930A (en) | Method, apparatus, storage medium and apparatus for processing Q & A information | |
CN105190746A (en) | Method and apparatus for detecting a target keyword | |
CN107544271A (en) | Terminal control method, device and computer-readable recording medium | |
US20200196013A1 (en) | Customized recommendations of multimedia content streams | |
US10755696B2 (en) | Speech service control apparatus and method thereof | |
US20170270909A1 (en) | Method for correcting false recognition contained in recognition result of speech of user | |
US11703343B2 (en) | Methods and systems for managing communication sessions | |
US20040254787A1 (en) | System and method for distributed speech recognition with a cache feature | |
KR20150097872A (en) | Interactive Server and Method for controlling server thereof | |
JP2003241788A (en) | Device and system for speech recognition | |
US20180315423A1 (en) | Voice interaction system and information processing apparatus | |
JP2002044610A (en) | Method and apparatus for detecting signal, its program, and recording medium | |
CN113535926B (en) | Active dialogue method and device and voice terminal | |
CN113254706B (en) | Video matching method, video processing device, electronic equipment and medium | |
US11594220B2 (en) | Electronic apparatus and controlling method thereof | |
CN109150408B (en) | Information processing method and device | |
KR20210065308A (en) | Electronic apparatus and the method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551) Year of fee payment: 4 |
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |