WO2016129188A1 - Speech recognition processing device, speech recognition processing method, and program - Google Patents

Speech recognition processing device, speech recognition processing method, and program

Info

Publication number
WO2016129188A1
Authority
WO
WIPO (PCT)
Prior art keywords
speech recognition
voice
permutation
speech
recognition result
Prior art date
Application number
PCT/JP2015/086000
Other languages
French (fr)
Japanese (ja)
Inventor
久 坂本
Original Assignee
Necソリューションイノベータ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Necソリューションイノベータ株式会社 filed Critical Necソリューションイノベータ株式会社
Priority to JP2016574636A priority Critical patent/JP6429294B2/en
Publication of WO2016129188A1 publication Critical patent/WO2016129188A1/en

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/04 Segmentation; Word boundary detection
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/28 Constructional details of speech recognition systems
    • G10L15/30 Distributed recognition, e.g. in client-server systems, for mobile phones or network applications

Definitions

  • the present invention relates to a speech recognition processing device for recognizing information by human speech, a speech recognition processing method, and a program for causing a computer to execute the method.
  • The speech recognition systems disclosed in Patent Literature 1 and Non-Patent Literature 1 contain the entire large volume of language-model and training data within the system itself.
  • Such speech recognition systems typically run on a personal computer (PC) or on a terminal device such as a smartphone or tablet, whose use has expanded in recent years.
  • However, even though the main and auxiliary storage of these terminal devices keeps growing in capacity, storing all of the large amount of data required by the speech recognition system on the terminal device remains difficult from the standpoint of processing speed and data management.
  • To address this problem, a cloud-type speech recognition service is provided (see Non-Patent Document 2).
  • In such a cloud-type speech recognition service, the large amount of data needed for speech recognition processing is stored not in the terminal device but on a cloud platform built in a data center. With this service, the terminal device connects to the data center over a network and obtains the result of speech recognition processing that uses that large body of data.
  • Because advances in network and communication technology have raised the speed of information processing, a user operating a terminal device can obtain a speech recognition result from the cloud platform almost immediately after speaking into the device. In this way, the user obtains highly accurate speech recognition results without accumulating large amounts of language models and training data in the terminal device.
  • the cloud-type voice recognition service disclosed in Non-Patent Document 2 solves the problems of the techniques disclosed in Patent Document 1 and Non-Patent Document 1.
  • However, because this cloud-type speech recognition service targets short inputs, such as word-level utterances and relatively short conversational sentences, it is not suited to recognizing long conversational passages made up of multiple sentences.
  • An object of the present invention is to provide a speech recognition processing device, a speech recognition processing method, and a program that enable recognition processing of a long conversation sentence.
  • A speech recognition processing apparatus according to one aspect of the present invention includes: voice sampling means for acquiring input speech as speech data; voice dividing means for dividing the speech data into a plurality of speech data pieces and assigning to each piece a permutation number that follows the order in which the speech was input to the voice sampling means; storage means for storing the permutation numbers; recognition request transmitting/receiving means for distributing the speech data pieces among a plurality of preset communication ports while associating the permutation numbers with the ports, transmitting the pieces over a network to a speech recognition server and, upon receiving from the server via a communication port a speech recognition result obtained by recognition processing of a piece, assigning to the received result the permutation number associated with that port and storing the result in the area of the storage means holding the matching permutation number; recognition result aggregating means for generating a recognition result sentence in which the speech recognition results stored in the storage means together with the permutation numbers are arranged according to the permutation numbers; and display means for displaying the generated recognition result sentence.
  • A speech recognition processing method according to one aspect of the present invention is a speech recognition processing method performed by an information processing apparatus. The method acquires input speech as speech data, divides the speech data into a plurality of speech data pieces, assigns to each piece a permutation number that follows the order in which the speech data was acquired, and stores the permutation numbers in storage means. It then distributes the speech data pieces among a plurality of preset communication ports while associating the permutation numbers with the ports and transmits the pieces over a network to a speech recognition server. When a speech recognition result, which is the result of recognition processing of a speech data piece by the speech recognition server, is received from the server via a communication port, the permutation number associated with that port is assigned to the received result, and the result is stored in the area of the storage means holding the matching permutation number. A recognition result sentence is then generated in which the speech recognition results stored in the storage means together with the permutation numbers are arranged according to the permutation numbers, and the generated recognition result sentence is displayed.
  • A program according to one aspect of the present invention causes a computer to execute: a procedure for acquiring input speech as speech data; a procedure for dividing the speech data into a plurality of speech data pieces and assigning to each piece a permutation number that follows the order in which the speech data was acquired; a procedure for storing the permutation numbers in storage means; a procedure for distributing the speech data pieces among a plurality of preset communication ports while associating the permutation numbers with the ports and transmitting the pieces over a network to a speech recognition server; a procedure for assigning, when a speech recognition result produced by the speech recognition server for a speech data piece is received via a communication port, the permutation number associated with that port to the received result; a procedure for storing the speech recognition result in the area of the storage means holding the matching permutation number; a procedure for generating a recognition result sentence in which the speech recognition results stored in the storage means together with the permutation numbers are arranged according to the permutation numbers; and a procedure for displaying the generated recognition result sentence.
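  • To make the relationships among the claimed means easier to follow, the sketch below outlines them as program components. This is a purely illustrative Python skeleton, not part of the disclosure; all class, method, and field names are assumptions.

```python
# Hypothetical outline of the claimed components; method bodies are omitted.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class SpeechPiece:
    permutation_number: int      # order in which the piece appeared in the input speech
    samples: bytes               # audio of one divided speech data piece

@dataclass
class RecognitionRecord:
    permutation_number: int      # corresponds to field T1311 described below
    result_text: Optional[str]   # corresponds to field T1312 (None until a result arrives)

class SpeechRecognitionProcessor:
    """Each method corresponds to one of the claimed means."""
    def collect_audio(self) -> bytes: ...                            # voice sampling means
    def divide(self, audio: bytes) -> List[SpeechPiece]: ...         # voice dividing means
    def send_requests(self, pieces: List[SpeechPiece]) -> None: ...  # recognition request tx/rx means
    def aggregate(self) -> str: ...                                  # recognition result aggregating means
    def display(self, sentence: str) -> None: ...                    # display means
```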
  • FIG. 1 is a block diagram for explaining the configuration of the speech recognition processing apparatus of the present embodiment.
  • FIG. 2 is a diagram showing a configuration example of data stored in the permutation number storage means shown in FIG.
  • FIG. 3 is a diagram showing another configuration example of data stored in the permutation number storage means shown in FIG.
  • FIG. 4 is a flowchart showing an operation procedure performed by the speech recognition processing apparatus according to the present embodiment.
  • FIG. 5 is a flowchart showing the detailed operation of step S02 shown in FIG.
  • FIG. 6 is a flowchart showing the detailed operation of step S05 shown in FIG.
  • FIG. 7 is a block diagram for explaining the configuration of the speech recognition processing apparatus according to the first embodiment.
  • FIG. 8 is a diagram illustrating a configuration of data stored in the permutation number storage unit in the first embodiment.
  • FIG. 9 is a diagram showing the transmission contents of the recognition request transmission / reception means in the first embodiment.
  • FIG. 10 is a diagram illustrating the contents received by the recognition request transmission / reception unit according to the first embodiment.
  • FIG. 11 is a diagram showing an example in which data is stored in the fields shown in FIG. 8. FIG. 12 is a diagram illustrating an example of the screen of the display unit illustrated in FIG. 7 in the first embodiment.
  • FIG. 13 is a block diagram showing another configuration example of the speech recognition processing apparatus of this embodiment.
  • FIG. 1 is a block diagram for explaining the configuration of the speech recognition processing apparatus of this embodiment.
  • the voice recognition processing device 1 is an information processing device that outputs information in which a conversation made by the speaker 4 is converted to text so that the viewer 5 can view it.
  • the speech recognition processing device 1 may be a desktop or notebook PC, or a portable information terminal such as a PDA (Personal Digital Assistant) smaller than the PC.
  • the number of speakers 4 and viewers 5 may be plural.
  • the voice recognition processing device 1 is connected via a network 6 to a voice recognition server 3 that provides a cloud type voice recognition service.
  • the cloud type speech recognition service is a cloud type speech recognition service disclosed in Non-Patent Document 2, for example.
  • the speech recognition processing device 1 includes a permutation number storage unit 13, a recognition request transmission / reception unit 14, and a control unit 30.
  • the control unit 30 is provided with a memory (not shown) that stores a computer program (hereinafter simply referred to as a program) and a CPU (Central Processing Unit) (not shown) that executes processing according to the program.
  • the control unit 30 includes a voice collection unit 11, a voice division unit 12, a recognition result aggregation unit 15, and a recognition result display unit 16.
  • By the CPU in the control unit 30 executing processing according to the program, the voice collection unit 11, the voice division unit 12, the recognition result aggregation unit 15, and the recognition result display unit 16 are virtually realized in the voice recognition processing device 1.
  • a microphone is connected to the voice sampling means 11 and a display unit is connected to the recognition result display means 16, but the illustration is omitted. Further, although a case where a display unit is connected to the recognition result display unit 16 as an output unit for the recognition result sentence will be described, a printer may be used.
  • Some or all of the voice collection unit 11, the voice division unit 12, the recognition result aggregation unit 15, and the recognition result display unit 16 shown in FIG. 1 may instead be implemented as dedicated integrated circuits, such as ASICs (Application Specific Integrated Circuits), specialized for each function.
  • In speech recognition in particular, recognition processing must keep pace with the speed at which speech is input, so processing speed matters. Providing a dedicated integrated circuit specialized for even some of these functions can improve the overall speed of information processing.
  • The voice sampling means 11 receives the speech uttered by one or more speakers 4 as digital audio data input continuously through a microphone (not shown), and acquires it as continuous, stream-like data.
  • In the present embodiment, the voice collection unit 11 is described as passing the acquired voice data to the voice division unit 12 without processing it, but the voice data may also be processed before being output.
  • Examples of such processing include noise-cancelling processing that removes noise and filtering processing that extracts only the frequency band of human speech.
  • the voice dividing unit 12 analyzes the voice data acquired by the voice sampling unit 11, and divides the voice data into voice data pieces that are smaller units.
  • The division method detects portions of the voice data in which no human speech is present (silent portions) or breathing pauses, and extracts the data between such portions as speech data pieces; the audio lying between two detected portions corresponds to one speech data piece.
  • One way to judge whether human speech is present is to check the target audio data for energy in the frequency band normally recognized as human speech (for example, about 200 Hz to about 4 kHz). Another is to record audio captured while no one is speaking as an environmental sound and judge that no speech information is present when the input matches that environmental sound.
  • The method of detecting the presence or absence of speech information is not limited to those described here; other methods may be used.
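  • As one concrete illustration of the division step, the sketch below splits 16-bit mono PCM audio wherever the level stays low long enough to be treated as a pause. It is only a rough stand-in for the band-limited check described above: the 200 Hz to 4 kHz measurement is approximated by a plain RMS level, the threshold and duration follow the figures given later for the first embodiment (below 60 dB for at least 0.5 s), and the function name and parameters are assumptions.

```python
# Minimal sketch: split PCM audio at sustained low-level spans
# (breathing pauses or silence).
import array
import math

def split_on_silence(pcm: bytes, sample_rate: int,
                     silence_db: float = 60.0,      # level regarded as silence (rough scale)
                     min_silence_sec: float = 0.5,  # pause length that triggers a split
                     frame_sec: float = 0.02) -> list:
    samples = array.array("h", pcm)                 # signed 16-bit mono samples
    frame_len = max(1, int(sample_rate * frame_sec))
    min_silent_frames = max(1, int(min_silence_sec / frame_sec))

    def is_silent(frame) -> bool:
        rms = math.sqrt(sum(s * s for s in frame) / len(frame))
        if rms <= 0:
            return True
        level = 20.0 * math.log10(rms / 32768.0) + 96.0   # crude absolute-style dB figure
        return level < silence_db

    flags = [is_silent(samples[i:i + frame_len])
             for i in range(0, len(samples), frame_len)]

    pieces, start, silent_run = [], None, 0
    for idx, silent in enumerate(flags):
        if silent:
            silent_run += 1
            if start is not None and silent_run >= min_silent_frames:
                end = (idx + 1 - silent_run) * frame_len   # cut just before the pause began
                pieces.append(samples[start:end].tobytes())
                start = None
        else:
            if start is None:
                start = idx * frame_len                    # a new speech data piece begins
            silent_run = 0
    if start is not None:
        pieces.append(samples[start:].tobytes())           # trailing piece with no final pause
    return pieces
```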
  • the voice dividing means 12 assigns a permutation number indicating the order of appearance of the voice data to the divided voice data pieces.
  • the voice dividing means 12 assigns permutation numbers in order from the first voice data piece of the voice data received from the voice sampling means 11. Therefore, the permutation numbers assigned to the audio data pieces are in the order of input to the audio sampling means 11.
  • When the recognition request transmission/reception means 14 receives from the voice division means 12 a pair consisting of a divided speech data piece and the permutation number assigned to it, it transmits a recognition request containing that speech data piece to the speech recognition server 3. In doing so, the recognition request transmission/reception means 14 sends multiple recognition requests to the speech recognition server 3 in parallel, as described specifically below.
  • the number of communication ports (communication channels) for transmitting / receiving data to / from the voice recognition server 3 is preset.
  • the number of communication ports is determined by the information processing capability of the voice recognition server 3 that is a data transmission / reception destination.
  • a plurality of communication ports are set to be usable in the recognition request transmission / reception means 14.
  • The recognition request transmission/reception unit 14 has a plurality of logically usable communication ports; it associates each permutation number and speech data piece pair passed from the voice division unit 12 with one of these communication ports and holds the information on which permutation number corresponds to which port.
  • the recognition request transmission / reception means 14 can transmit a plurality of voice recognition requests in parallel by transmitting a recognition request including a voice data fragment to the voice recognition server 3 via each communication port. At this time, there is no need to synchronize between communication ports, and transmission can be performed asynchronously.
  • the number of voice recognition requests that can be made at one time may be fixedly set in the recognition request transmission / reception means 14 or may be set freely by a setting file or the like.
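  • The sketch below illustrates this parallel, asynchronous transmission together with the result handling described in the following paragraphs. It is a hypothetical Python rendering: a thread pool with one worker per preset communication port stands in for the ports, recognize() stands in for one request/response exchange with the speech recognition server, and the store object is the PermutationStore sketched further below. None of these names come from the patent itself.

```python
# Minimal sketch: distribute speech data pieces over a fixed number of
# "ports" (thread-pool workers) and file each result under the permutation
# number that was associated with its request.
from concurrent.futures import ThreadPoolExecutor

def send_recognition_requests(pieces, store, recognize, num_ports=5):
    """pieces:    iterable of (permutation_number, audio_bytes) pairs.
    store:     object offering store_result(permutation_number, text).
    recognize: callable that sends one piece to the server and returns text.
    num_ports: number of preset communication ports (5 in the first embodiment).
    """
    def request(permutation_number, audio):
        # The permutation number stays bound to this request for the whole
        # exchange, so the reply can be matched even if replies arrive out
        # of order on different ports.
        text = recognize(audio)                        # blocking request to the server
        store.store_result(permutation_number, text)   # keep result keyed by its number

    with ThreadPoolExecutor(max_workers=num_ports) as pool:
        for permutation_number, audio in pieces:
            # Each request is handed to whichever port/worker is free next;
            # no synchronization between ports is needed.
            pool.submit(request, permutation_number, audio)
```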
  • When the recognition request transmission/reception unit 14 receives a speech recognition result from the speech recognition server 3 through a communication port, it assigns to the received result the permutation number that had been associated with that port, and stores the speech recognition result and the permutation number in the permutation number storage unit 13 in association with each other.
  • the permutation number storage means 13 records the permutation numbers assigned to the audio data pieces divided by the audio dividing means 12.
  • FIG. 2 is a diagram showing a configuration example of data stored in the permutation number storage means.
  • the storage area of T1301 shown in FIG. 2 is a field in which the maximum value of the permutation numbers assigned to the audio data pieces divided by the audio dividing means 12 is recorded.
  • 0 is recorded in the field T1301 of the permutation number storage means 13 as the initial value of the maximum permutation number.
  • the initial stage is when the speech recognition processing program of this embodiment is started.
  • When assigning a permutation number, the voice division means 12 reads the current maximum permutation number from the permutation number storage means 13, assigns that value plus 1 to the next speech data piece, and then records the updated maximum permutation number in the permutation number storage means 13.
  • the permutation number storage means 13 stores the speech recognition result received by the recognition request transmission / reception means 14 in pairs with the permutation number.
  • FIG. 3 is a diagram showing another configuration example of data stored in the permutation number storage means shown in FIG.
  • the storage area of T1311 shown in FIG. 3 is a field for storing the number (permutation number) assigned to the audio data pieces divided by the audio dividing means 12.
  • the storage area of T1312 shown in FIG. 3 is a field for storing the speech recognition result received by the recognition request transmission / reception means 14.
  • the permutation number storage means 13 is not limited to the data structure described above, and may be realized by a database or the like so that data can be referred to and recorded as described above.
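  • A minimal in-memory sketch of this storage is shown below, assuming a plain Python object rather than a database; the class and attribute names are invented. max_permutation plays the role of field T1301, and the results mapping holds the T1311/T1312 pairs (None until a result arrives).

```python
# Minimal sketch of the permutation number storage (a database could be used
# instead, as noted above).
import threading
from typing import Dict, Optional

class PermutationStore:
    def __init__(self):
        self.lock = threading.Lock()
        self.max_permutation = 0                      # T1301: starts at 0
        self.results: Dict[int, Optional[str]] = {}   # T1311 -> T1312

    def next_permutation_number(self) -> int:
        """Used by the voice dividing means: read the maximum, add 1, record it."""
        with self.lock:
            self.max_permutation += 1
            self.results[self.max_permutation] = None  # reserve a record for the new piece
            return self.max_permutation

    def store_result(self, permutation_number: int, text: str) -> None:
        """Used by the recognition request tx/rx means when a result is received."""
        with self.lock:
            self.results[permutation_number] = text
```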
  • The recognition result aggregating unit 15 reads from the permutation number storage unit 13 the speech recognition results received by the recognition request transmitting/receiving unit 14 from the speech recognition server 3, together with their associated permutation numbers, arranges the results in permutation-number order, and creates a recognition result sentence composed of a certain number of words or syllables.
  • Specifically, the recognition result aggregating unit 15 periodically searches the permutation number storage unit 13 to determine whether a predetermined number or more of speech recognition results are stored consecutively starting from the smallest permutation number. If they are, it creates a recognition result sentence by joining those speech recognition results in order, and the created recognition result sentence is output for display as the recognition result. The recognition result aggregating means 15 then deletes the joined speech recognition results and their permutation number records from the data stored in the permutation number storage means 13.
  • the number of speech recognition results for confirming a sentence may be fixedly set in the recognition result aggregating unit 15 or may be freely set by a setting file or the like.
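  • Building on the PermutationStore sketch above, the function below illustrates this aggregation step. The default of three consecutive results follows the first embodiment described later; everything else is an assumed, simplified rendering rather than the patented implementation.

```python
# Minimal sketch: join results only when they are stored consecutively from
# the smallest outstanding permutation number, then delete the used records.
from typing import Optional

def aggregate(store, confirm_count: int = 3) -> Optional[str]:
    with store.lock:
        if not store.results:
            return None
        smallest = min(store.results)              # smallest outstanding permutation number
        run, n = [], smallest
        while n in store.results and store.results[n] is not None:
            run.append(store.results[n])
            n += 1
        if len(run) < confirm_count:
            return None                            # not enough consecutive results yet
        for k in range(smallest, n):               # delete the records that were joined
            del store.results[k]
    return " ".join(run)                           # a blank character between results
```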
  • When the recognition result display unit 16 receives a recognition result sentence from the recognition result aggregation unit 15, it outputs the sentence as a character string to a display unit (not shown) so that the viewer 5 can read it.
  • The output method may be, for example, displaying a window through a GUI (Graphical User Interface), or the sentence may be written to a file or the like.
  • Processing that converts all output sentences into "Hiragana" or "Katakana" may also be performed, or part or all of the output may be converted into Roman characters or the like.
  • FIG. 4 is a flowchart showing the operation procedure of the speech recognition processing apparatus of this embodiment.
  • Step S01 The voice collecting means 11 continuously receives voice information from one or more speakers 4 from a microphone (not shown) as digital data, and acquires it as continuous information such as stream data.
  • Step S02: The voice dividing unit 12 detects breathing pauses and silent portions in the voice data collected by the voice collecting unit 11, and divides the voice data before and after them. Subsequently, the voice dividing means 12 assigns a permutation number to each divided speech data piece, registers the permutation numbers in the permutation number storage means 13, and passes each speech data piece together with its assigned permutation number, as a pair, to the recognition request transmission/reception means 14.
  • The operation of step S02 shown in FIG. 4 will be described in detail with reference to FIG. 5.
  • Step S0201 The voice dividing means 12 detects breathing and silent parts from the collected voice data.
  • Step S0202 The voice dividing unit 12 divides the voice data before and after the detected breathing and silence parts to create a voice data piece.
  • Step S0203 The voice dividing unit 12 assigns a permutation number to each of the voice data pieces in the divided order. Then, the voice dividing unit 12 acquires the current maximum value of the permutation number from the field T1301 of the permutation number storage unit 13, increments the value by 1, and records it in the field T1301 of the permutation number storage unit 13.
  • Step S0204 The voice dividing unit 12 assigns the permutation number assigned in step S0203 to the divided voice data piece and passes it to the recognition request transmitting / receiving unit 14.
  • Step S03 The recognition request transmission / reception means 14 requests voice recognition by transmitting a plurality of pieces of voice data divided by the voice division means 12 to the voice recognition server 3 asynchronously and in parallel.
  • the recognition request transmission / reception unit 14 has a plurality of communication ports, associates the communication port used for transmission of the recognition request with the permutation number passed from the voice division unit 12, and holds information on the correspondence.
  • Step S04: Upon receiving a speech recognition result from the speech recognition server 3, the recognition request transmission/reception means 14 searches field T1311 of the permutation number storage means 13 using the permutation number held in step S03 and stores the speech recognition result in field T1312 of the record whose permutation number matches.
  • Step S05: The recognition result aggregating unit 15 periodically checks how speech recognition results are stored in the permutation number storage unit 13 and, if results are stored consecutively up to a certain length, joins those results to create a recognition result sentence.
  • The operation of step S05 shown in FIG. 4 will be described in detail with reference to FIG. 6.
  • Step S0501: The recognition result aggregating unit 15 periodically examines the storage state of speech recognition results in the permutation number storage unit 13 and looks for a state in which a predetermined number of speech recognition results are registered consecutively starting from the smallest permutation number.
  • Step S0502: The recognition result aggregating means 15 acquires from the permutation number storage means 13 the permutation number (field T1311) and speech recognition result (field T1312) pairs found in step S0501, rearranges the speech recognition results in permutation-number order, and joins them to generate a recognition result sentence.
  • Step S0503 The recognition result aggregating unit 15 deletes the record storing the speech recognition result and the permutation number acquired in step S0502 from the permutation number storage unit 13.
  • Step S0504 The recognition result aggregating unit 15 passes the recognition result sentence generated in step S0502 to the recognition result display unit 16.
  • This completes the detailed description of the operation in step S05.
  • Step S06 The recognition result display means 16 displays the recognition result sentence generated by the recognition result aggregating means 15 so that the viewer 5 can view it.
  • the recognition result display means 16 outputs a recognition result sentence to a display unit (not shown).
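  • Tying steps S01 to S06 together, the sketch below shows one hypothetical pass through the whole procedure using the functions sketched earlier (split_on_silence, PermutationStore, send_recognition_requests, aggregate). capture_audio() and recognize() are stand-ins for the microphone input and the cloud recognition request, which are outside the scope of these sketches.

```python
# Hypothetical end-to-end run of steps S01-S06.
def run_pipeline(capture_audio, recognize, sample_rate=16000):
    store = PermutationStore()

    pcm = capture_audio()                                  # S01: collect speech from the microphone
    pieces = split_on_silence(pcm, sample_rate)            # S02: divide at pauses and silences
    numbered = [(store.next_permutation_number(), piece)   # S02: assign permutation numbers
                for piece in pieces]

    send_recognition_requests(numbered, store, recognize)  # S03/S04: parallel requests, store results

    sentence = aggregate(store, confirm_count=3)           # S05: join consecutive results in order
    if sentence is not None:
        print(sentence)                                    # S06: display to the viewer
    return sentence
```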
  • the speech recognition processing method by the speech recognition processing apparatus of the present embodiment will be specifically described using examples. Detailed description of the same configuration as that shown in FIG. 1 is omitted.
  • FIG. 7 is a block diagram showing a configuration example of the voice recognition processing apparatus of the present embodiment.
  • the voice recognition processing apparatus 1 of the present embodiment has a configuration in which a program for executing the above-described voice recognition processing method is stored in advance in a memory (not shown) in the control unit 30 in a general PC.
  • a microphone 21 is connected to the voice sampling means 11 of the voice recognition processing device 1 as a device for inputting voice.
  • a display unit 22 is connected to the recognition result display means 16 of the speech recognition processing device 1 as a device for displaying the recognition result sentence.
  • the network 6 is a network including the Internet.
  • the voice recognition processing device 1 and the voice recognition server 3 use TCP (Transmission Control Protocol) / IP (Internet Protocol) as a communication protocol.
  • Each of the speech recognition processing device 1 and the speech recognition server 3 stores in advance identification information for its own device and for the other device.
  • the recognition request transmission / reception means 14 has five communication ports that can simultaneously transmit a voice recognition request.
  • the recognition result aggregating unit 15 creates a recognition result sentence when three voice recognition results are continuously obtained.
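  • As noted earlier, both the number of simultaneous requests and the number of results needed to confirm a sentence may be fixed in the program or read from a setting file. A hypothetical setting file for the values used in this embodiment might look as follows; the file format and key names are assumptions.

```python
# Reading the first embodiment's parameters from an INI-style settings text.
import configparser

SETTINGS_TEXT = """
[recognition]
num_ports = 5
sentence_confirm_count = 3
"""

config = configparser.ConfigParser()
config.read_string(SETTINGS_TEXT)
NUM_PORTS = config.getint("recognition", "num_ports")                             # 5 communication ports
SENTENCE_CONFIRM_COUNT = config.getint("recognition", "sentence_confirm_count")   # 3 results per sentence
```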
  • In the notation of the following example, punctuation marks in the transcribed sentence indicate breathing pauses and silent portions in the actual utterance.
  • Step S01 The voice sampling means 11 continuously receives voice data from the microphone 21 as digital data, and obtains it as stream data.
  • Step S02: The voice dividing unit 12 detects breathing pauses and silent portions in the voice data ("Today is sunny.") collected by the voice collecting unit 11, and divides the voice data before and after them. Subsequently, the voice dividing means 12 assigns a permutation number to each divided speech data piece, registers the permutation numbers in the permutation number storage means 13, and passes each speech data piece together with its assigned permutation number, as a pair, to the recognition request transmission/reception means 14.
  • The operation of step S02 will be described in detail with reference to FIG. 5.
  • Step S0201: The voice dividing means 12 detects breathing pauses and silent portions in the collected voice data ("Today is sunny."). In this embodiment these correspond to the punctuation marks of the sentence representing the voice data. As the detection method, when the level of the 200 Hz to 4 kHz component of the voice data stays below 60 decibels for 0.5 seconds or longer, that span is judged to be a breathing pause or silence.
  • Step S0202: The voice dividing unit 12 divides the voice data before and after the detected breathing pauses and silent portions to create speech data pieces. In this embodiment, the data is divided into a speech data piece "Today", a speech data piece "sunny", and a speech data piece "is".
  • Step S0203 The voice dividing unit 12 assigns a permutation number to each of the voice data pieces in the divided order. Then, the voice dividing unit 12 acquires the current maximum value of the permutation number from the field T1301 of the permutation number storage unit 13, increments the value by 1, and records it in the field T1301 of the permutation number storage unit 13.
  • Step S0204: The voice dividing means 12 assigns the permutation number numbered in step S0203 to the divided speech data piece, passes the pair to the recognition request transmission/reception means 14, and records the permutation number in the permutation number storage means 13.
  • In this embodiment, the voice division means 12 assigns permutation number 1 to the speech data piece "Today", permutation number 2 to the speech data piece "sunny", and permutation number 3 to the speech data piece "is".
  • the state of the permutation number storage means 13 at this time is shown in FIG.
  • Step S03 The recognition request transmission / reception means 14 requests the voice recognition by transmitting a plurality of pieces of voice data divided by the voice dividing means 12 to the voice recognition server 3 asynchronously and in parallel.
  • At this time, the recognition request transmission/reception means 14 has five communication ports, and it holds the correspondence between each communication port used to transmit a recognition request and the permutation number passed from the voice division means 12, as shown in FIG. 9.
  • FIG. 9 is a diagram showing a state in which a recognition request is sent from the voice recognition processing device to the voice recognition server for each communication port.
  • ports 1 to 3 represent communication port numbers, and “port 1: permutation number 1” means that permutation number 1 is held in association with communication port 1.
  • In FIG. 9, communication ports 4 and 5 are omitted. Since the speech recognition processing device 1 is realized by executing a program on the PC, each communication port and its corresponding permutation number are recorded in memory (not shown) allocated by the PC.
  • Step S04: The recognition request transmission/reception means 14 receives the speech recognition results from the speech recognition server 3 as shown in FIG. 10.
  • FIG. 10 is a diagram illustrating a state in which the recognition result is returned from the voice recognition server to the voice recognition processing apparatus for each communication port. Comparing FIG. 9 and FIG. 10, it can be seen that the recognition result corresponding to the recognition request is returned from the voice recognition server 3 to the same communication port.
  • Then, the recognition request transmission/reception means 14 searches field T1311 of the permutation number storage means 13 using the permutation number held in step S03 and stores each speech recognition result in field T1312 of the record with the matching value, as shown in FIG. 11.
  • the order of arrival of the speech recognition results is the order of permutation number 2, permutation number 3, and permutation number 1, and the speech recognition results are stored in the permutation number storage means 13 in that order.
  • Step S05 The recognition result aggregating unit 15 periodically searches the speech recognition result storage state in the permutation number storage unit 13, and finds a data string in which three speech recognition results are stored continuously. Then, the recognition result aggregating unit 15 connects the results to create a recognition result sentence “Today is sunny”. At the time of joining, the recognition result aggregation means 15 inserts a blank character between the speech recognition results.
  • The operation of step S05 will be described in detail with reference to FIG. 6.
  • Step S0501: The recognition result aggregating means 15 periodically examines the storage state of speech recognition results in the permutation number storage means 13 and finds a state in which three consecutive speech recognition results are registered starting from the smallest permutation number, namely the records with permutation numbers 1, 2, and 3.
  • Step S0502 The recognition result aggregating unit 15 acquires a plurality of permutation number (field T1311) and speech recognition result (field T1312) pairs found in step S0501 from the permutation number storage unit 13.
  • Specifically, the recognition result aggregating unit 15 acquires "Today" from the record with permutation number 1, "sunny" from the record with permutation number 2, and "is" from the record with permutation number 3. It then rearranges the speech recognition results according to the order of the permutation numbers and joins them, inserting a space between each result, to generate the recognition result sentence "Today is sunny".
  • Step S0503 The recognition result aggregating unit 15 deletes the record storing the speech recognition result and the permutation number acquired in step S0502 from the permutation number storage unit 13. In this case, the records with permutation numbers 1, 2, and 3 correspond.
  • Step S0504: The recognition result aggregating unit 15 passes the recognition result sentence "Today is sunny" generated in step S0502 to the recognition result display unit 16.
  • This completes the detailed description of the operation of step S05.
  • Step S06: The recognition result display means 16 outputs the recognition result sentence "Today is sunny" generated by the recognition result aggregation means 15 to the result display area 2201 of the display unit 22, as shown in FIG. 12, so that the user can view it.
  • FIG. 12 is an example of a display screen.
  • In steps S03 to S05 of the first embodiment, the speech recognition result for a recognition request sent later may arrive before the result for a request sent earlier. A method for processing the next recognition request in such a case is described below.
  • For example, while port 1 is still waiting for its speech recognition result, the recognition request transmission/reception means 14 assigns permutation number 2 to the result received through port 2 and permutation number 3 to the result received through port 3, and stores each speech recognition result together with its permutation number in the permutation number storage means 13.
  • When the recognition request transmission/reception means 14 receives from the voice division means 12 the next speech data piece and permutation number pair to be recognized, it associates the pair with a communication port that is not currently waiting for a recognition result.
  • For example, the recognition request transmission/reception unit 14 associates the next four speech data pieces with ports 2 to 5, respectively. That is, it distributes the next speech data pieces to be recognized to the free ports 2 to 5 in turn, without waiting for port 1 to receive its speech recognition result.
  • In this way, each subsequent speech data piece to be recognized is associated in turn with a communication port that is not in use, so information processing proceeds efficiently (see the simulation sketch below).
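  • The small simulation below illustrates this behaviour using the earlier sketches: the result for the first piece is artificially delayed, yet the remaining ports keep being reused for later pieces, and every result still ends up under its own permutation number. The delay, piece names, and function names are invented purely for the demonstration.

```python
# Self-contained simulation of out-of-order results with port reuse.
import time
from concurrent.futures import ThreadPoolExecutor

def demo_out_of_order():
    store = PermutationStore()                        # sketch defined earlier
    pieces = [(store.next_permutation_number(), ("piece-%d" % i).encode())
              for i in range(1, 8)]                   # seven speech data pieces

    def recognize(audio: bytes) -> str:
        if audio == b"piece-1":
            time.sleep(0.5)                           # port 1 waits a long time for its result
        return "result for " + audio.decode()

    with ThreadPoolExecutor(max_workers=5) as pool:   # five communication ports
        for number, audio in pieces:
            pool.submit(lambda n=number, a=audio: store.store_result(n, recognize(a)))

    # Every result is present and can be read back in permutation-number order,
    # even though the first result arrived last.
    return [store.results[n] for n in sorted(store.results)]
```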
  • As described above, in the present embodiment the voice data of a long conversational passage is divided into word-level speech data pieces that the cloud-type speech recognition service can recognize, each speech data piece is recognized using the cloud-type speech recognition service, and the obtained speech recognition results are arranged in their original order to output a recognition result sentence for the long conversational passage.
  • A user can therefore obtain speech recognition results that convert a long conversational passage composed of multiple sentences into character information, without storing the large amount of data needed for recognition processing in a terminal device such as a PC, smartphone, or tablet terminal.
  • In the above embodiment, the speech recognition processing device of the present invention has been described concretely for ease of understanding, but the speech recognition processing device may also be an information processing device configured as shown in FIG. 13.
  • FIG. 13 is a block diagram showing another configuration example of the speech recognition processing apparatus of the present embodiment.
  • the speech recognition processing apparatus includes a storage unit 33, a communication unit 34, and a control unit 30.
  • Each of the communication unit 34 and the storage unit 33 illustrated in FIG. 13 corresponds to the recognition request transmission / reception unit 14 and the permutation number storage unit 13 illustrated in FIG. 1.
  • a program for causing a computer to execute the speech recognition processing method described in this embodiment may be stored in a computer-readable recording medium. In this case, by installing the program from the recording medium into another information processing apparatus, it is possible to cause the other information processing apparatus to execute the above information processing method.
  • a user can obtain a speech recognition result of a long conversation sentence without storing a large amount of data necessary for speech recognition processing in his / her terminal device.
  • The present invention can be applied, for example, to supporting communication by displaying what a speaker says as text in situations where a person with a hearing impairment needs to follow the surrounding conversation in everyday life. Moreover, by replacing the speech recognition processing with translation processing, the present invention can also be applied to supporting communication with speakers of other languages.

Abstract

A speech recognition processing device according to the present invention comprises: a speech collection means (11) for acquiring input speech as speech data; a speech segmentation means (12) for segmenting the speech data into a plurality of speech data pieces and allocating a sequence number to each speech data piece in the order of input; a storage means (13) for storing the sequence number; a recognition-request transmitting and receiving means (14) for sorting the speech data pieces while mapping the sequence numbers to a plurality of preliminarily established communication ports, and transmitting the speech data pieces to a speech recognition server via a network, and when receiving speech recognition results from the speech recognition server via the communication ports, the recognition-request transmitting and receiving means (14) allocating the sequence numbers mapped to the communication ports to the speech recognition results received, and storing the speech recognition results in the storage means together with sequence numbers matching the allocated sequence numbers; a recognition result integrating means (15) for generating a recognition result sentence composed of the speech recognition results stored together with the sequence numbers, arranged in order of the sequence number; and a display means (16) for displaying the generated recognition result sentence.

Description

Speech recognition processing apparatus, speech recognition processing method, and program
 The present invention relates to a speech recognition processing device for recognizing information by human speech, a speech recognition processing method, and a program for causing a computer to execute the method.
 The technology related to speech recognition has not changed significantly over the past decade or so, and it is said that more language models and training data must be accumulated in order to improve recognition accuracy. The speech recognition systems disclosed in Patent Literature 1 and Non-Patent Literature 1 contain the entire large volume of such language-model and training data within the system. Such speech recognition systems typically run on a personal computer (PC) or on a terminal device such as a smartphone or tablet, whose use has expanded in recent years. However, even though the main and auxiliary storage of these terminal devices keeps growing in capacity, storing all of the large amount of data required by the speech recognition system on the terminal device is difficult from the standpoint of processing speed and data management.
 To address this problem, a cloud-type speech recognition service is provided (see Non-Patent Document 2). In such a cloud-type speech recognition service, the large amount of data needed for speech recognition processing is stored not in the terminal device but on a cloud platform built in a data center. With this service, the terminal device connects to the data center over a network and obtains the result of speech recognition processing that uses that large body of data. Because advances in network and communication technology have raised the speed of information processing, a user operating a terminal device can obtain a speech recognition result from the cloud platform almost immediately after speaking into the device. In this way, the user obtains highly accurate speech recognition results without accumulating large amounts of language models and training data in the terminal device. Furthermore, since the cloud platform provides a large amount of storage, it can hold enormous language models and voice data for the patterns of many different speakers, realizing a further improvement in accuracy.
Japanese Patent No. 1603542
 The cloud-type speech recognition service disclosed in Non-Patent Document 2 solves the problems of the techniques disclosed in Patent Document 1 and Non-Patent Document 1. However, because this cloud-type speech recognition service targets short inputs, such as word-level utterances and relatively short conversational sentences, it is not suited to recognizing long conversational passages made up of multiple sentences.
 One object of the present invention is to provide a speech recognition processing device, a speech recognition processing method, and a program that enable recognition processing of long conversational passages.
 A speech recognition processing apparatus according to one aspect of the present invention includes: voice sampling means for acquiring input speech as speech data; voice dividing means for dividing the speech data into a plurality of speech data pieces and assigning to each piece a permutation number that follows the order in which the speech was input to the voice sampling means; storage means for storing the permutation numbers; recognition request transmitting/receiving means for distributing the speech data pieces among a plurality of preset communication ports while associating the permutation numbers with the ports, transmitting the pieces over a network to a speech recognition server and, upon receiving from the server via a communication port a speech recognition result obtained by recognition processing of a piece, assigning to the received result the permutation number associated with that port and storing the result in the area of the storage means holding the matching permutation number; recognition result aggregating means for generating a recognition result sentence in which the speech recognition results stored in the storage means together with the permutation numbers are arranged according to the permutation numbers; and display means for displaying the generated recognition result sentence.
 A speech recognition processing method according to one aspect of the present invention is a speech recognition processing method performed by an information processing apparatus. The method acquires input speech as speech data, divides the speech data into a plurality of speech data pieces, assigns to each piece a permutation number that follows the order in which the speech data was acquired, and stores the permutation numbers in storage means. It then distributes the speech data pieces among a plurality of preset communication ports while associating the permutation numbers with the ports and transmits the pieces over a network to a speech recognition server. When a speech recognition result, which is the result of recognition processing of a speech data piece by the speech recognition server, is received from the server via a communication port, the permutation number associated with that port is assigned to the received result, and the result is stored in the area of the storage means holding the matching permutation number. A recognition result sentence is then generated in which the speech recognition results stored in the storage means together with the permutation numbers are arranged according to the permutation numbers, and the generated recognition result sentence is displayed.
 A program according to one aspect of the present invention causes a computer to execute: a procedure for acquiring input speech as speech data; a procedure for dividing the speech data into a plurality of speech data pieces and assigning to each piece a permutation number that follows the order in which the speech data was acquired; a procedure for storing the permutation numbers in storage means; a procedure for distributing the speech data pieces among a plurality of preset communication ports while associating the permutation numbers with the ports and transmitting the pieces over a network to a speech recognition server; a procedure for assigning, when a speech recognition result produced by the speech recognition server for a speech data piece is received via a communication port, the permutation number associated with that port to the received result; a procedure for storing the speech recognition result in the area of the storage means holding the matching permutation number; a procedure for generating a recognition result sentence in which the speech recognition results stored in the storage means together with the permutation numbers are arranged according to the permutation numbers; and a procedure for displaying the generated recognition result sentence.
 FIG. 1 is a block diagram for explaining the configuration of the speech recognition processing apparatus of the present embodiment. FIG. 2 is a diagram showing a configuration example of data stored in the permutation number storage means shown in FIG. 1. FIG. 3 is a diagram showing another configuration example of data stored in the permutation number storage means shown in FIG. 1. FIG. 4 is a flowchart showing an operation procedure performed by the speech recognition processing apparatus according to the present embodiment. FIG. 5 is a flowchart showing the detailed operation of step S02 shown in FIG. 4. FIG. 6 is a flowchart showing the detailed operation of step S05 shown in FIG. 4. FIG. 7 is a block diagram for explaining the configuration of the speech recognition processing apparatus according to the first embodiment. FIG. 8 is a diagram illustrating the configuration of data stored in the permutation number storage unit in the first embodiment. FIG. 9 is a diagram showing the transmission contents of the recognition request transmission/reception means in the first embodiment. FIG. 10 is a diagram illustrating the contents received by the recognition request transmission/reception unit in the first embodiment. FIG. 11 is a diagram showing an example in which data is stored in the fields shown in FIG. 8. FIG. 12 is a diagram illustrating an example of the screen of the display unit illustrated in FIG. 7 in the first embodiment. FIG. 13 is a block diagram showing another configuration example of the speech recognition processing apparatus of this embodiment.
 The configuration of the speech recognition processing apparatus of this embodiment will be described.
 FIG. 1 is a block diagram for explaining the configuration of the speech recognition processing apparatus of this embodiment.
 The voice recognition processing device 1 is an information processing device that outputs information in which a conversation made by the speaker 4 is converted to text so that the viewer 5 can view it. The speech recognition processing device 1 may be a desktop or notebook PC, or a portable information terminal such as a PDA (Personal Digital Assistant), which is smaller than a PC. There may be more than one speaker 4 and more than one viewer 5.
 The voice recognition processing device 1 is connected via a network 6 to a voice recognition server 3 that provides a cloud-type voice recognition service. The cloud-type speech recognition service is, for example, the cloud-type speech recognition service disclosed in Non-Patent Document 2.
 As shown in FIG. 1, the speech recognition processing device 1 includes a permutation number storage unit 13, a recognition request transmission/reception unit 14, and a control unit 30. The control unit 30 is provided with a memory (not shown) that stores a computer program (hereinafter simply referred to as a program) and a CPU (Central Processing Unit) (not shown) that executes processing according to the program.
 The control unit 30 includes a voice collection unit 11, a voice division unit 12, a recognition result aggregation unit 15, and a recognition result display unit 16. When the CPU in the control unit 30 executes processing according to the program, the voice collection unit 11, the voice division unit 12, the recognition result aggregation unit 15, and the recognition result display unit 16 are virtually realized in the voice recognition processing device 1.
 A microphone is connected to the voice sampling means 11 and a display unit is connected to the recognition result display means 16, but these are omitted from the figure. Although the case where a display unit is connected to the recognition result display unit 16 as the output device for the recognition result sentence is described here, a printer may be used instead.
 Some or all of the voice collection unit 11, the voice division unit 12, the recognition result aggregation unit 15, and the recognition result display unit 16 shown in FIG. 1 may instead be implemented as dedicated integrated circuits, such as ASICs (Application Specific Integrated Circuits), specialized for each function. In speech recognition in particular, recognition processing must keep pace with the speed at which speech is input, so processing speed matters. Providing a dedicated integrated circuit specialized for even some of these functions can improve the overall speed of information processing.
 図1に示した音声認識処理装置1の各構成について詳しく説明する。 Each configuration of the speech recognition processing device 1 shown in FIG. 1 will be described in detail.
 The voice collection means 11 receives, as digital data, the speech of one or more speakers 4 continuously input through a microphone (not shown), and acquires it as continuous information such as stream data. In this embodiment the voice collection means 11 outputs the acquired voice data to the voice division means 12 without modification, but the voice data may be processed before output. Examples of such processing include noise canceling to remove noise and filtering to extract only the frequency band of human speech.
 The voice division means 12 analyzes the voice data acquired by the voice collection means 11 and divides it into voice data pieces, which are smaller units. The division method detects portions of the voice data in which no human speech is present (for example, silent portions) or breathing portions, and extracts the data delimited by them as fragments; the voice data in the region between two detected portions corresponds to one voice data piece. One way to decide whether human speech is present is to check the target voice data for energy in the frequency band normally recognized as human speech (for example, about 200 Hz to about 4 kHz). Another way is to record, in advance, voice data captured while no one is speaking as an environmental sound, and to judge that no speech information is present when the data matches that environmental sound. The method for detecting the presence or absence of speech is not limited to those described here; other methods may be used.
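 As a rough illustration of this splitting step, the following sketch divides a buffer of PCM samples at sustained low-energy regions. It is only an approximation of the division described above, under stated assumptions: 16-bit mono samples held in a NumPy array, per-frame RMS energy instead of the band-limited (about 200 Hz to 4 kHz) check, and thresholds chosen for illustration rather than taken from this disclosure.

import numpy as np

def split_at_silence(samples, rate=16000, frame_ms=20,
                     silence_rms=500, min_silence_ms=300):
    """Return a list of (start, end) sample indices for voiced segments."""
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    # RMS energy per fixed-length frame
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames.astype(np.float64) ** 2).mean(axis=1))
    silent = rms < silence_rms

    segments, start, run = [], None, 0
    min_run = max(1, int(min_silence_ms / frame_ms))
    for i, is_silent in enumerate(silent):
        if not is_silent:
            if start is None:
                start = i * frame_len       # a voiced region begins here
            run = 0
        else:
            run += 1
            if start is not None and run >= min_run:
                # enough consecutive silent frames: close the voiced segment
                segments.append((start, (i - run + 1) * frame_len))
                start = None
    if start is not None:
        segments.append((start, n_frames * frame_len))
    return segments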
 The voice division means 12 also assigns to each divided voice data piece a permutation number indicating the order in which the voice data appeared. It assigns permutation numbers in order, starting from the first voice data piece of the voice data received from the voice collection means 11, so the permutation numbers follow the order of input to the voice collection means 11.
 When the recognition request transmission/reception means 14 receives from the voice division means 12 a voice data piece paired with its permutation number, it transmits a recognition request containing the voice data piece to the speech recognition server 3. The recognition request transmission/reception means 14 sends multiple recognition requests to the speech recognition server 3 in parallel, as described concretely below.
 The recognition request transmission/reception means 14 is preconfigured with a number of communication ports (communication channels) for exchanging data with the speech recognition server 3; the number of ports is determined by the processing capacity of the speech recognition server 3. In this embodiment, multiple communication ports are available to the recognition request transmission/reception means 14. It has a set of logically usable communication ports, associates each port with a permutation-number/voice-data-piece pair passed from the voice division means 12, and retains the port-to-permutation-number mapping. By transmitting a recognition request containing a voice data piece through each port, it can issue multiple recognition requests in parallel; the ports need not be synchronized, so the requests can be sent asynchronously. The number of simultaneous recognition requests may be fixed inside the recognition request transmission/reception means 14 or made configurable through a settings file or the like.
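 As an informal sketch of issuing requests over several channels in parallel, the following Python fragment uses a thread pool whose worker count stands in for the number of communication ports; the recognize callable is a placeholder for the actual request to the speech recognition server and is an assumption, not part of this disclosure.

from concurrent.futures import ThreadPoolExecutor, as_completed

NUM_PORTS = 5  # number of logical communication channels; configurable

def send_in_parallel(pieces_with_numbers, recognize, results):
    """pieces_with_numbers: iterable of (permutation_number, piece_bytes).
    recognize: callable that sends one piece to the server and returns text.
    results: dict collecting recognized text keyed by permutation number."""
    with ThreadPoolExecutor(max_workers=NUM_PORTS) as pool:
        futures = {pool.submit(recognize, piece): number
                   for number, piece in pieces_with_numbers}
        # Responses may complete in any order; the permutation number kept in
        # `futures` plays the role of the port-to-number association.
        for future in as_completed(futures):
            results[futures[future]] = future.result()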
 When the recognition request transmission/reception means 14 receives, through a communication port, a speech recognition result, that is, the result of the recognition processing performed by the speech recognition server 3 on a transmitted voice data piece, it assigns to the received result the permutation number associated with that port. It then stores the speech recognition result and the permutation number in the permutation number storage means 13 in association with each other.
 The permutation number storage means 13 records the permutation numbers assigned to the voice data pieces produced by the voice division means 12. FIG. 2 shows an example of the structure of the data stored in the permutation number storage means.
 The structure of the data stored in the storage area of the permutation number storage means 13 will be described with reference to FIG. 2.
 The storage area T1301 shown in FIG. 2 is a field that records the maximum permutation number assigned to the voice data pieces by the voice division means 12. In the initial stage, before any permutation number has been issued, 0 is recorded in field T1301 as the initial maximum value. The initial stage is the point at which the speech recognition processing program of this embodiment is started.
 When issuing a permutation number, the voice division means 12 reads the current maximum permutation number from the permutation number storage means 13, assigns that value plus 1 to the next voice data piece, and then records the updated maximum back into the permutation number storage means 13. The permutation number storage means 13 also stores each speech recognition result received by the recognition request transmission/reception means 14 paired with its permutation number.
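 The numbering rule just described (read the stored maximum, add 1, write the new maximum back) can be sketched as follows; the dict standing in for the storage means and the key name are assumptions for illustration only.

def assign_permutation_number(storage, piece):
    """Assign the next permutation number to a voice data piece."""
    current_max = storage.get("max_permutation", 0)  # field T1301, initially 0
    number = current_max + 1
    storage["max_permutation"] = number              # write the new maximum back
    return number, piece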
 A schema with a data structure different from that shown in FIG. 2 will now be described for the permutation number storage means 13. FIG. 3 shows another example of the structure of the data stored in the permutation number storage means shown in FIG. 1.
 The storage area T1311 shown in FIG. 3 is a field that stores the number (permutation number) assigned to each voice data piece produced by the voice division means 12. The storage area T1312 shown in FIG. 3 is a field that stores the speech recognition result received by the recognition request transmission/reception means 14.
 The permutation number storage means 13 is not limited to the data structures described above; it may be realized by a database or the like, as long as data can be referenced and recorded as described.
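 As one possible database realization of the kind mentioned above, the following sketch keeps fields corresponding to T1311 and T1312 in a small SQLite table; the schema and column names are assumptions, not part of this disclosure.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE results (
        permutation_number INTEGER PRIMARY KEY,   -- corresponds to T1311
        recognition_result TEXT                   -- corresponds to T1312, NULL until received
    )
""")

def register_number(number):
    """Register a newly issued permutation number (no result yet)."""
    conn.execute("INSERT INTO results (permutation_number) VALUES (?)", (number,))

def store_result(number, text):
    """Store a received recognition result in the record whose number matches."""
    conn.execute("UPDATE results SET recognition_result = ? "
                 "WHERE permutation_number = ?", (text, number))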
 The recognition result aggregation means 15 reads from the permutation number storage means 13 the speech recognition results received by the recognition request transmission/reception means 14 and the permutation numbers associated with them, arranges the results in permutation-number order, and creates a recognition result sentence consisting of a fixed number of words or syllables. It periodically searches the permutation number storage means 13 to determine whether at least a fixed number of speech recognition results are stored consecutively, starting from the smallest permutation number. When it determines that this many results can be concatenated, it joins them in order to create a recognition result sentence and passes the sentence to the recognition result display means 16. It then deletes the concatenated speech recognition results and their permutation-number records from the permutation number storage means 13. The number of speech recognition results needed to finalize a sentence may be fixed inside the recognition result aggregation means 15 or made configurable through a settings file or the like.
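 The aggregation rule can be sketched against the hypothetical SQLite table from the previous fragment: take the smallest permutation numbers with results, confirm that they form a consecutive run starting from the smallest stored number, join them with spaces, and delete the consumed rows. The chunk size of 3 mirrors the working example given later; all names here are illustrative assumptions.

def aggregate(conn, chunk=3):
    """Return a joined sentence once `chunk` consecutive results are ready."""
    rows = conn.execute(
        "SELECT permutation_number, recognition_result FROM results "
        "WHERE recognition_result IS NOT NULL "
        "ORDER BY permutation_number LIMIT ?", (chunk,)).fetchall()
    if len(rows) < chunk:
        return None                      # not enough results yet
    min_stored = conn.execute(
        "SELECT MIN(permutation_number) FROM results").fetchone()[0]
    numbers = [n for n, _ in rows]
    if numbers[0] != min_stored:
        return None                      # an earlier piece is still pending
    if numbers != list(range(numbers[0], numbers[0] + chunk)):
        return None                      # results are not consecutive yet
    sentence = " ".join(text for _, text in rows)
    conn.executemany("DELETE FROM results WHERE permutation_number = ?",
                     [(n,) for n in numbers])
    return sentence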
 When the recognition result display means 16 receives a recognition result sentence from the recognition result aggregation means 15, it outputs the sentence as a character string to a display unit (not shown) so that the viewer 5 can read it. The sentence may be displayed in a window through a GUI (Graphical User Interface) or written to a file or the like. At display time, the entire output sentence may be converted to hiragana or katakana, or part or all of it may be converted to romaji or the like.
 Next, the operation procedure of the speech recognition processing device of this embodiment will be described.
 FIG. 4 is a flowchart showing the operation procedure of the speech recognition processing device of this embodiment.
 Step S01: The voice collection means 11 continuously receives, as digital data, the speech of one or more speakers 4 from a microphone (not shown) and acquires it as continuous information such as stream data.
 Step S02: The voice division means 12 detects breathing and silent portions in the voice data collected by the voice collection means 11 and divides the voice data at those points. It then assigns a permutation number to each voice data piece, registers the permutation number in the permutation number storage means 13, and passes each voice data piece together with its permutation number to the recognition request transmission/reception means 14.
 The operation of step S02 in FIG. 4 will now be described in detail with reference to FIG. 5.
 Step S0201: The voice division means 12 detects breathing and silent portions in the collected voice data.
 Step S0202: The voice division means 12 divides the voice data before and after the detected breathing and silent portions to create voice data pieces.
 Step S0203: The voice division means 12 assigns permutation numbers to the voice data pieces in the order in which they were divided. It obtains the current maximum permutation number from field T1301 of the permutation number storage means 13, increments it by 1, and records the new value in field T1301.
 Step S0204: The voice division means 12 assigns the permutation number issued in step S0203 to the divided voice data piece and passes the pair to the recognition request transmission/reception means 14.
 This completes the detailed description of the operation of step S02.
 Returning to the flowchart shown in FIG. 4.
 Step S03: The recognition request transmission/reception means 14 requests speech recognition by transmitting the voice data pieces produced by the voice division means 12 to the speech recognition server 3 asynchronously and in parallel. For transmission, it has multiple communication ports; it associates the port used for each recognition request with the permutation number passed from the voice division means 12 and retains that mapping.
 Step S04: On receiving a speech recognition result from the speech recognition server 3, the recognition request transmission/reception means 14 looks up the permutation number retained in step S03 in field T1311 of the permutation number storage means 13 and stores the speech recognition result in field T1312 of the record whose permutation number matches.
 Step S05: The recognition result aggregation means 15 periodically checks the storage state of speech recognition results in the permutation number storage means 13, and when results of a certain length have been stored consecutively, joins them to create a recognition result sentence.
 The operation of step S05 in FIG. 4 will now be described in detail with reference to FIG. 6.
 Step S0501: The recognition result aggregation means 15 periodically checks the storage state of speech recognition results in the permutation number storage means 13 and finds a state in which a fixed number of results are registered consecutively, starting from the smallest permutation number.
 Step S0502: The recognition result aggregation means 15 retrieves from the permutation number storage means 13 the permutation-number (field T1311) and speech-recognition-result (field T1312) pairs found in step S0501, rearranges the results in permutation-number order, and joins them to generate a recognition result sentence.
 Step S0503: The recognition result aggregation means 15 deletes from the permutation number storage means 13 the records holding the speech recognition results and permutation numbers retrieved in step S0502.
 Step S0504: The recognition result aggregation means 15 passes the recognition result sentence generated in step S0502 to the recognition result display means 16.
 This completes the detailed description of the operation of step S05.
 Returning to the flowchart shown in FIG. 4.
 Step S06: The recognition result display means 16 displays the recognition result sentence generated by the recognition result aggregation means 15 so that the viewer 5 can read it. In this embodiment, the recognition result display means 16 outputs the sentence to a display unit (not shown).
 The speech recognition processing method performed by the speech recognition processing device of this embodiment will now be described concretely using a working example. Detailed description of components identical to those shown in FIG. 1 is omitted.
 FIG. 7 is a block diagram showing a configuration example of the speech recognition processing device of this example.
 The speech recognition processing device 1 of this example is a general-purpose PC in which a program for executing the speech recognition processing method described above is stored in advance in the memory (not shown) of the control unit 30. A microphone 21 is connected to the voice collection means 11 of the speech recognition processing device 1 as the device for inputting speech, and a display unit 22 is connected to the recognition result display means 16 as the device for displaying the recognition result sentence.
 In this example, the network 6 is a network including the Internet. The speech recognition processing device 1 and the speech recognition server 3 use TCP (Transmission Control Protocol)/IP (Internet Protocol) as their communication protocol, and each stores in advance the terminal identifiers of itself and of the other device.
 In field T1301 of the permutation number storage means 13, 0 is recorded as the initial maximum permutation number. The recognition request transmission/reception means 14 has five communication ports over which recognition requests can be sent simultaneously. The recognition result aggregation means 15 creates a recognition result sentence once three consecutive speech recognition results are available.
 The operation of the speech recognition processing device 1 of this example will be described with reference to FIG. 4.
 After the viewer 5 operates the speech recognition processing device 1 and enters an instruction to start the speech recognition program, the speaker 4 says "今日は、晴れ、です。" ("Today is sunny."). In this sentence, the period corresponds to a breath in the actual utterance and the commas correspond to silent portions.
 Step S01: The voice collection means 11 continuously receives the speech of the speaker 4 ("今日は、晴れ、です。") from the microphone 21 as digital voice data and acquires it as stream data.
 Step S02: The voice division means 12 detects the breathing and silent portions in the collected voice data ("今日は、晴れ、です。") and divides the voice data at those points. It then assigns a permutation number to each voice data piece, registers the permutation number in the permutation number storage means 13, and passes each voice data piece together with its permutation number to the recognition request transmission/reception means 14.
 The operation of step S02 will now be described in detail with reference to FIG. 5.
 Step S0201: The voice division means 12 detects breathing and silent portions in the collected voice data ("今日は、晴れ、です。"); in this example, these correspond to the punctuation in the text representing the speech. A portion is judged to be a breath or silence when the level of the 200 Hz to 4 kHz component of the voice data stays below 60 decibels for 0.5 seconds or longer.
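 The concrete criterion of this example (the 200 Hz to 4 kHz component staying below 60 dB for at least 0.5 seconds) might be computed per frame roughly as follows; the decibel value here is relative to an arbitrary digital reference rather than calibrated sound pressure, and the frame length is an assumed parameter, so this is a sketch rather than the actual implementation.

import numpy as np

def band_level_db(frame, rate=16000, lo=200.0, hi=4000.0):
    """Level of the 200 Hz - 4 kHz band of one frame, in dB (relative)."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / rate)
    band = (freqs >= lo) & (freqs <= hi)
    power = np.mean(np.abs(spectrum[band]) ** 2) + 1e-12
    return 10.0 * np.log10(power)

def is_break(levels_db, frame_ms=20, threshold_db=60.0, min_ms=500):
    """True if the band level stays below the threshold for at least min_ms."""
    need = int(min_ms / frame_ms)
    run = 0
    for level in levels_db:
        run = run + 1 if level < threshold_db else 0
        if run >= need:
            return True
    return False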
 Step S0202: The voice division means 12 divides the voice data before and after the detected breathing and silent portions to create voice data pieces. In this example, the data is divided into the pieces "今日は" ("today"), "晴れ" ("sunny"), and "です" ("is").
 Step S0203: The voice division means 12 assigns permutation numbers to the voice data pieces in the order in which they were divided. It obtains the current maximum permutation number from field T1301 of the permutation number storage means 13, increments it by 1, and records the new value in field T1301.
 Step S0204: The voice division means 12 assigns the permutation numbers issued in step S0203 to the divided voice data pieces, passes them to the recognition request transmission/reception means 14, and records the permutation-number information in the permutation number storage means 13. In this example, the piece "今日は" is given permutation number 1, the piece "晴れ" permutation number 2, and the piece "です" permutation number 3. FIG. 8 shows the state of the permutation number storage means 13 at this point.
 This completes the detailed description of the operation of step S02.
 Returning to the flowchart shown in FIG. 4.
 Step S03: The recognition request transmission/reception means 14 requests speech recognition by transmitting the voice data pieces produced by the voice division means 12 to the speech recognition server 3 asynchronously and in parallel. For transmission it has five communication ports, and it associates each port used for a recognition request with the permutation number passed from the voice division means 12 as shown in FIG. 9. FIG. 9 shows recognition requests being sent from the speech recognition processing device to the speech recognition server on each communication port. In FIG. 9, ports 1 to 3 are communication port numbers, and "port 1: permutation number 1" means that permutation number 1 is held in association with communication port 1; ports 4 and 5 are omitted from the figure. Because the speech recognition processing device 1 is realized by a program running on the PC, each communication port and its corresponding permutation number are recorded in memory (not shown) allocated by the PC.
 Step S04: The recognition request transmission/reception means 14 receives speech recognition results from the speech recognition server 3 as shown in FIG. 10. FIG. 10 shows recognition results being returned from the speech recognition server to the speech recognition processing device on each communication port. Comparing FIG. 9 and FIG. 10, the recognition result corresponding to each recognition request is returned by the speech recognition server 3 on the same communication port.
 Using the permutation numbers retained in step S03, the recognition request transmission/reception means 14 looks up field T1311 of the permutation number storage means 13 and stores each speech recognition result in field T1312 of the record whose permutation number matches, as shown in FIG. 11.
 In this example, assume that the speech recognition results arrived in the order permutation number 2, permutation number 3, permutation number 1, and were stored in the permutation number storage means 13 in that order.
 Step S05: The recognition result aggregation means 15 periodically checks the storage state of speech recognition results in the permutation number storage means 13 and finds a data string in which three consecutive results are stored. It joins those results to create the recognition result sentence "今日は 晴れ です", inserting a space character between adjacent speech recognition results.
 The operation of step S05 will now be described in detail with reference to FIG. 6.
 Step S0501: The recognition result aggregation means 15 periodically checks the storage state of speech recognition results in the permutation number storage means 13 and finds that three results are registered consecutively, starting from the smallest permutation number, that is, the records with permutation numbers 1, 2, and 3.
 Step S0502: The recognition result aggregation means 15 retrieves from the permutation number storage means 13 the permutation-number (field T1311) and speech-recognition-result (field T1312) pairs found in step S0501. In this example it obtains "今日は" from the record with permutation number 1, "晴れ" from the record with permutation number 2, and "です" from the record with permutation number 3. It then rearranges the results in permutation-number order and joins them, inserting a space between adjacent results, to generate the recognition result sentence "今日は 晴れ です".
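 The joining rule of this step amounts to sorting by permutation number and inserting a space between results; a direct illustration with the three results of this example:

results = {2: "晴れ", 3: "です", 1: "今日は"}   # stored in arrival order 2, 3, 1
sentence = " ".join(results[n] for n in sorted(results))
print(sentence)  # -> 今日は 晴れ です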
 Step S0503: The recognition result aggregation means 15 deletes from the permutation number storage means 13 the records holding the speech recognition results and permutation numbers retrieved in step S0502; in this case, the records with permutation numbers 1, 2, and 3.
 Step S0504: The recognition result aggregation means 15 passes the recognition result sentence "今日は 晴れ です" generated in step S0502 to the recognition result display means 16.
 This completes the detailed description of the operation of step S05.
 Returning to the flowchart shown in FIG. 4.
 Step S06: The recognition result display means 16 outputs the recognition result sentence "今日は 晴れ です" generated by the recognition result aggregation means 15 to the result display area 2201 of the display unit 22, as shown in FIG. 12, and presents it to the viewer. FIG. 12 shows an example of the display screen.
 In this way, speech recognition processing is performed on long conversational speech, and the viewer can read the recognition result sentences corresponding to the conversation.
 As described in steps S03 to S05 of Example 1, the speech recognition result for a recognition request sent later may arrive before the result for a request sent earlier. How the next recognition requests are handled in this case is described below.
 While port 1 is still waiting for its speech recognition result, the recognition request transmission/reception means 14 assigns permutation number 2 to the result received on port 2 and permutation number 3 to the result received on port 3, and stores each result/permutation-number pair in the permutation number storage means 13.
 When the recognition request transmission/reception means 14 receives the next voice-data-piece/permutation-number pairs to be recognized from the voice division means 12, it associates them with communication ports that are not waiting for a recognition result. If there are four new voice data pieces, it associates them with ports 2 to 5. In other words, without waiting for the result on port 1, it distributes the next voice data pieces to the unused ports 2 to 5 in turn, as sketched below.
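 A sketch of this dispatch rule: idle ports are kept in a queue, a new piece is handed to whichever port is free, and a port returns to the queue as soon as its result arrives. The class, the send callable, and the result store are illustrative assumptions rather than the actual implementation.

import queue

class PortDispatcher:
    def __init__(self, num_ports=5):
        self.idle_ports = queue.SimpleQueue()
        for port in range(1, num_ports + 1):
            self.idle_ports.put(port)
        self.in_flight = {}  # port -> permutation number awaiting a result

    def dispatch(self, number, piece, send):
        port = self.idle_ports.get()          # blocks until some port is free
        self.in_flight[port] = number
        send(port, piece)                     # issue the recognition request

    def on_result(self, port, text, store):
        number = self.in_flight.pop(port)
        store[number] = text                  # keep the result with its number
        self.idle_ports.put(port)             # the port is reusable at once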
 In this way, even when the speech recognition result for a later recognition request arrives before that of an earlier one, assigning the next voice data pieces to idle communication ports keeps the information processing efficient.
 In this embodiment, as described above, the voice data of long conversational speech is divided into voice data pieces at the word level that a cloud speech recognition service can recognize, the recognition result for each piece is obtained through the cloud speech recognition service, and the results are arranged in their original order to output a recognition result sentence for the long conversation. The user can therefore obtain speech recognition results in which long conversations consisting of multiple sentences are converted into text, without storing the large amount of data required for recognition in a terminal device such as a PC, smartphone, or tablet.
 Moreover, because large language models and training data are not placed in the terminal device, no management such as updating or backing up that data is required, and speech recognition can be realized at a relatively low load without consuming the terminal's storage. This is because, in this embodiment, speech recognition is delegated to a cloud speech recognition service provided over the network, and the recognition result sentence is presented from the results it returns.
 Although the above embodiment and example have been described concretely to make the speech recognition processing device of the present invention easy to understand, the speech recognition processing device may be an information processing device as shown in FIG. 13.
 FIG. 13 is a block diagram showing another configuration example of the speech recognition processing device of this embodiment. As shown in FIG. 13, the speech recognition processing device has a storage unit 33, a communication unit 34, and a control unit 30. The communication unit 34 and the storage unit 33 in FIG. 13 correspond to the recognition request transmission/reception means 14 and the permutation number storage means 13 in FIG. 1, respectively.
 The device shown in FIG. 13 provides the same effects as the embodiment described above.
 A program for causing a computer to execute the speech recognition processing method described in this embodiment may be stored on a computer-readable recording medium. In that case, installing the program from the recording medium onto another information processing device enables that device to execute the information processing method described above.
 An example of the effect of the present invention: according to the present invention, a user can obtain speech recognition results for long conversational speech without storing the large amount of data required for speech recognition processing in the user's own terminal device.
 Although the present invention has been described with reference to the embodiment, the present invention is not limited to the above embodiment. Various changes that can be understood by those skilled in the art can be made to the configuration and details of the present invention within the scope of the present invention.
 The present invention can be applied to supporting communication in everyday situations in which a person with a hearing impairment follows surrounding conversation, by converting what a speaker says into text and displaying it. By replacing the speech recognition processing with translation processing, the present invention can also be applied to supporting communication with speakers of foreign languages.
 This application incorporates the entire contents of Japanese Patent Application No. 2015-023836, filed on February 10, 2015, and claims priority based on that Japanese application.
 DESCRIPTION OF SYMBOLS
 1  Speech recognition processing device
 3  Speech recognition server
 6  Network
 11  Voice collection means
 12  Voice division means
 13  Permutation number storage means
 14  Recognition request transmission/reception means
 15  Recognition result aggregation means
 16  Recognition result display means
 21  Microphone
 22  Display unit

Claims (9)

  1.  A speech recognition processing device comprising:
     voice collection means for acquiring input speech as voice data;
     voice division means for dividing the voice data into a plurality of voice data pieces and assigning to each of the plurality of voice data pieces a permutation number according to the order of input to the voice collection means;
     storage means for storing the permutation numbers;
     recognition request transmission/reception means for distributing the voice data pieces among a plurality of preset communication ports while associating each port with a permutation number, transmitting the pieces to a speech recognition server via a network, and, on receiving from the speech recognition server via a communication port a speech recognition result that is the result of recognition processing performed on a voice data piece by the speech recognition server, assigning to the received speech recognition result the permutation number associated with that communication port and storing the speech recognition result in the area of the storage means in which the matching permutation number is stored;
     recognition result aggregation means for generating a recognition result sentence in which the speech recognition results stored in the storage means together with the permutation numbers are arranged according to the permutation numbers; and
     display means for displaying the generated recognition result sentence.
  2.  The speech recognition processing device according to claim 1, wherein the recognition request transmission/reception means distributes the voice data pieces to be recognized next to those of the plurality of communication ports that are not waiting to receive a recognition result from the speech recognition server.
  3.  The speech recognition processing device according to claim 1 or 2, wherein, when dividing the voice data acquired by the voice collection means, the voice division means detects portions of the voice data in which no human speech information is present and breathing portions, and extracts the voice data in the regions between the detected portions as the voice data pieces.
  4.  A speech recognition processing method performed by an information processing device, comprising:
     acquiring input speech as voice data;
     dividing the voice data into a plurality of voice data pieces and assigning to each of the plurality of voice data pieces a permutation number according to the order in which the voice data was acquired;
     storing the permutation numbers in storage means;
     distributing the voice data pieces among a plurality of preset communication ports while associating each port with a permutation number, and transmitting the pieces to a speech recognition server via a network;
     on receiving from the speech recognition server via a communication port a speech recognition result that is the result of recognition processing performed on a voice data piece by the speech recognition server, assigning to the received speech recognition result the permutation number associated with that communication port;
     storing the speech recognition result in the area of the storage means in which the matching permutation number is stored;
     generating a recognition result sentence in which the speech recognition results stored in the storage means together with the permutation numbers are arranged according to the permutation numbers; and
     displaying the generated recognition result sentence.
  5.  The speech recognition processing method according to claim 4, wherein the voice data pieces to be recognized next are distributed to those of the plurality of communication ports that are not waiting to receive a recognition result from the speech recognition server.
  6.  The speech recognition processing method according to claim 4 or 5, wherein, when dividing the acquired voice data, portions of the voice data in which no human speech information is present and breathing portions are detected, and the voice data in the regions between the detected portions is extracted as the voice data pieces.
  7.  A computer-readable recording medium recording a program for causing a computer to execute:
     a procedure of acquiring input speech as voice data;
     a procedure of dividing the voice data into a plurality of voice data pieces and assigning to each of the plurality of voice data pieces a permutation number according to the order in which the voice data was acquired;
     a procedure of storing the permutation numbers in storage means;
     a procedure of distributing the voice data pieces among a plurality of preset communication ports while associating each port with a permutation number, and transmitting the pieces to a speech recognition server via a network;
     a procedure of, on receiving from the speech recognition server via a communication port a speech recognition result that is the result of recognition processing performed on a voice data piece by the speech recognition server, assigning to the received speech recognition result the permutation number associated with that communication port;
     a procedure of storing the speech recognition result in the area of the storage means in which the matching permutation number is stored;
     a procedure of generating a recognition result sentence in which the speech recognition results stored in the storage means together with the permutation numbers are arranged according to the permutation numbers; and
     a procedure of displaying the generated recognition result sentence.
  8.  The recording medium according to claim 7, recording a program further having a procedure of distributing the voice data pieces to be recognized next to those of the plurality of communication ports that are not waiting to receive a recognition result from the speech recognition server.
  9.  The recording medium according to claim 7 or 8, wherein the procedure of dividing the acquired voice data includes processing of detecting portions of the voice data in which no human speech information is present and breathing portions, and extracting the voice data in the regions between the detected portions as the voice data pieces.
PCT/JP2015/086000 2015-02-10 2015-12-24 Speech recognition processing device, speech recognition processing method, and program WO2016129188A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2016574636A JP6429294B2 (en) 2015-02-10 2015-12-24 Speech recognition processing apparatus, speech recognition processing method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015023836 2015-02-10
JP2015-023836 2015-02-10

Publications (1)

Publication Number Publication Date
WO2016129188A1 true WO2016129188A1 (en) 2016-08-18

Family

ID=56614333

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/086000 WO2016129188A1 (en) 2015-02-10 2015-12-24 Speech recognition processing device, speech recognition processing method, and program

Country Status (2)

Country Link
JP (1) JP6429294B2 (en)
WO (1) WO2016129188A1 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006011066A (en) * 2004-06-25 2006-01-12 Nec Corp Voice recognition/synthesis system, synchronous control method, synchronous control program and synchronous controller
JP2008107624A (en) * 2006-10-26 2008-05-08 Kddi Corp Transcription system
JP2014056258A (en) * 2008-08-29 2014-03-27 Mmodal Ip Llc Distributed speech recognition with the use of one-way communication
JP2012190088A (en) * 2011-03-09 2012-10-04 Nec Corp Audio recording device and method, and program
JP2013015726A (en) * 2011-07-05 2013-01-24 Yamaha Corp Voice recording server device and voice recording system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019090917A (en) * 2017-11-14 2019-06-13 株式会社情報環境デザイン研究所 Voice-to-text conversion device, method and computer program
JP2020184007A (en) * 2019-05-07 2020-11-12 株式会社チェンジ Information processing device, voice-to-text conversion system, voice-to-text conversion method and voice-to-text conversion program
CN113053380A (en) * 2021-03-29 2021-06-29 海信电子科技(武汉)有限公司 Server and voice recognition method
CN113053380B (en) * 2021-03-29 2023-12-01 海信电子科技(武汉)有限公司 Server and voice recognition method

Also Published As

Publication number Publication date
JPWO2016129188A1 (en) 2017-11-09
JP6429294B2 (en) 2018-11-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15882061; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2016574636; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15882061; Country of ref document: EP; Kind code of ref document: A1)