WO2019142230A1 - Voice analysis device, voice analysis method, voice analysis program, and voice analysis system - Google Patents
- Publication number
- WO2019142230A1 (PCT/JP2018/000941)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- participant
- voice
- unit
- transition
- participants
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/10—Speech classification or search using distance or distortion measures between unknown speech and reference templates
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
Definitions
- The present invention relates to a voice analysis device for analyzing voice, a voice analysis method, a voice analysis program, and a voice analysis system.
- The Harkness method is known as a method for analyzing discussions in group learning and meetings (see, for example, Non-Patent Document 1).
- In the Harkness method, the transitions between each participant's utterances are recorded as lines. In this way, it is possible to analyze each participant's contribution to the discussion and their relationships with the other participants.
- The Harkness method can also be effectively applied to active learning, in which students take the initiative in their own learning.
- In the Harkness method, however, the burden on the recorder is large because the recorder needs to observe and record the discussion continuously. Moreover, analyzing several groups requires assigning a recorder to each group. Implementing the Harkness method is therefore costly.
- The present invention has been made in view of these points, and an object of the present invention is to provide a voice analysis device, a voice analysis method, a voice analysis program, and a voice analysis system that can analyze discussions at low cost.
- A voice analysis device according to the present invention comprises: an acquisition unit that acquires voices uttered by a plurality of participants; an analysis unit that detects, in the voices, a transition from an utterance by a first participant among the plurality of participants to an utterance by a second participant among the plurality of participants; and an output unit that causes a display unit to display information indicating the timing at which the transition occurred.
- The output unit may display the information indicating the timing on the display unit as a line connecting a position corresponding to the first participant and a position corresponding to the second participant.
- The output unit may display the time change of the transitions as the information indicating the timing, by generating the line on the display unit at the time the transition occurred and erasing the line after a predetermined time has elapsed from that time.
- The output unit may change the display mode of the line according to the combination of the first participant and the second participant.
- The output unit may change the display mode of the line according to the number of times the transition has occurred.
- The analysis unit may identify, based on the voices, a period during which each of the plurality of participants is speaking, and detect the transition when the period during which the first participant is speaking switches to a period during which the second participant is speaking.
- The output unit may cause the display unit to display the amount of speech of each of the plurality of participants in addition to the time change of the transitions.
- A voice analysis method according to the present invention causes a processor to execute: a step of acquiring voices uttered by a plurality of participants; a step of detecting, in the voices, a transition from an utterance by a first participant among the plurality of participants to an utterance by a second participant among the plurality of participants; and a step of causing a display unit to display information indicating the timing at which the transition occurred.
- A voice analysis program according to the present invention causes a computer to execute: a step of acquiring voices uttered by a plurality of participants; a step of detecting, in the voices, a transition from an utterance by a first participant among the plurality of participants to an utterance by a second participant among the plurality of participants; and a step of causing a display unit to display information indicating the timing at which the transition occurred.
- A voice analysis system according to the present invention comprises a voice analysis device and a communication terminal capable of communicating with the voice analysis device. The communication terminal has a display unit that displays information, and the voice analysis device includes: an acquisition unit that acquires voices uttered by a plurality of participants; an analysis unit that detects, in the voices, a transition from an utterance by a first participant among the plurality of participants to an utterance by a second participant among the plurality of participants; and an output unit that causes the display unit to display information indicating the timing at which the transition occurred.
- FIG. 1 is a schematic view of the voice analysis system S according to the present embodiment.
- The voice analysis system S includes a voice analysis device 100, a sound collection device 10, and a communication terminal 20.
- The number of sound collection devices 10 and communication terminals 20 included in the voice analysis system S is not limited.
- The voice analysis system S may also include other devices such as servers and terminals.
- The voice analysis device 100, the sound collection device 10, and the communication terminal 20 are connected via a network N such as a local area network or the Internet. At least some of the voice analysis device 100, the sound collection device 10, and the communication terminal 20 may be connected directly, without the network N.
- The sound collection device 10 includes a microphone array having a plurality of sound collection units (microphones) arranged to face different directions.
- The microphone array includes eight microphones arranged at equal intervals on the same circumference in a plane horizontal to the ground.
- The sound collection device 10 transmits the voice acquired using the microphone array to the voice analysis device 100 as data.
- The communication terminal 20 is a communication device capable of wired or wireless communication.
- The communication terminal 20 is, for example, a portable terminal such as a smartphone or a computer terminal such as a personal computer.
- The communication terminal 20 receives the settings of the analysis conditions from the analyst and displays the analysis results produced by the voice analysis device 100.
- The voice analysis device 100 is a computer that analyzes the voice acquired by the sound collection device 10 using the voice analysis method described later. The voice analysis device 100 also transmits the results of the voice analysis to the communication terminal 20.
- FIG. 2 is a block diagram of the voice analysis system S according to the present embodiment. Arrows in FIG. 2 indicate the main data flows, and there may be data flows not shown in FIG. 2. Each block in FIG. 2 represents a functional unit, not a hardware (device) unit. As such, the blocks shown in FIG. 2 may be implemented in a single device or implemented separately across multiple devices. Data may be exchanged between the blocks via any means, such as a data bus, a network, or a portable storage medium.
- The communication terminal 20 has a display unit 21 that displays various information and an operation unit 22 that receives operations by the analyst.
- The display unit 21 includes a display device such as a liquid crystal display or an organic light emitting diode (OLED) display.
- The operation unit 22 includes operation members such as buttons, switches, and dials.
- The display unit 21 and the operation unit 22 may be integrated by using, as the display unit 21, a touch screen capable of detecting the position touched by the analyst.
- The voice analysis device 100 includes a control unit 110, a communication unit 120, and a storage unit 130.
- The control unit 110 includes a setting unit 111, a voice acquisition unit 112, a sound source localization unit 113, an analysis unit 114, and an output unit 115.
- The storage unit 130 includes a setting information storage unit 131, a voice storage unit 132, and an analysis result storage unit 133.
- The communication unit 120 is a communication interface for communicating with the sound collection device 10 and the communication terminal 20 via the network N.
- The communication unit 120 includes a processor, connectors, and electric circuits for performing communication.
- The communication unit 120 performs predetermined processing on communication signals received from the outside to obtain data, and inputs the obtained data to the control unit 110. The communication unit 120 also performs predetermined processing on data input from the control unit 110 to generate communication signals, and transmits the generated signals to the outside.
- The storage unit 130 is a storage medium including a read-only memory (ROM), a random access memory (RAM), a hard disk drive, and the like.
- The storage unit 130 stores in advance the programs to be executed by the control unit 110.
- The storage unit 130 may be provided outside the voice analysis device 100; in that case, it exchanges data with the control unit 110 via the communication unit 120.
- The setting information storage unit 131 stores setting information indicating the analysis conditions set by the analyst on the communication terminal 20.
- The voice storage unit 132 stores the voice acquired by the sound collection device 10.
- The analysis result storage unit 133 stores the analysis results obtained by analyzing the voice.
- The setting information storage unit 131, the voice storage unit 132, and the analysis result storage unit 133 may each be a storage area on the storage unit 130 or a database configured on the storage unit 130.
- The control unit 110 is, for example, a processor such as a central processing unit (CPU), and functions as the setting unit 111, the voice acquisition unit 112, the sound source localization unit 113, the analysis unit 114, and the output unit 115 by executing the programs stored in the storage unit 130.
- The functions of the setting unit 111, the voice acquisition unit 112, the sound source localization unit 113, the analysis unit 114, and the output unit 115 will be described later with reference to FIGS. 3 to 8. At least some of the functions of the control unit 110 may be implemented by electric circuits, and at least some of them may be implemented by a program executed via a network.
- The voice analysis system S is not limited to the specific configuration shown in FIG. 2.
- The voice analysis device 100 is not limited to a single device and may be configured by connecting two or more physically separated devices by wire or wirelessly.
- FIG. 3 is a schematic view of the voice analysis method performed by the voice analysis system S according to the present embodiment.
- First, the analyst sets the analysis conditions by operating the operation unit 22 of the communication terminal 20.
- The analysis conditions are, for example, information indicating the number of participants in the discussion to be analyzed and the direction in which each participant (that is, each of the plurality of participants) is located relative to the sound collection device 10.
- The communication terminal 20 receives the settings of the analysis conditions from the analyst and transmits them to the voice analysis device 100 as setting information (a).
- The setting unit 111 of the voice analysis device 100 acquires the setting information from the communication terminal 20 and stores it in the setting information storage unit 131.
- FIG. 4 is a front view of the display unit 21 of the communication terminal 20 displaying the setting screen A.
- The communication terminal 20 displays the setting screen A on the display unit 21 and receives the analysis condition settings from the analyst.
- The setting screen A includes a position setting area A1, a start button A2, and an end button A3.
- The position setting area A1 is an area for setting the direction in which each participant U is actually located relative to the sound collection device 10 in the discussion to be analyzed.
- As shown in FIG. 4, the position setting area A1 represents a circle centered on the position of the sound collection device 10, and further represents angles relative to the sound collection device 10 along the circle.
- The analyst sets the position of each participant U in the position setting area A1 by operating the operation unit 22 of the communication terminal 20.
- Identification information (here, U1 to U4) for identifying each participant U is assigned and displayed.
- In the example of FIG. 4, four participants U1 to U4 are set.
- The portion of the position setting area A1 corresponding to each participant U is displayed in a different color for each participant. This allows the analyst to easily recognize the direction set for each participant U.
- The start button A2 and the end button A3 are virtual buttons displayed on the display unit 21.
- The communication terminal 20 transmits a start instruction signal to the voice analysis device 100 when the analyst presses the start button A2.
- The communication terminal 20 transmits an end instruction signal to the voice analysis device 100 when the analyst presses the end button A3.
- The period from the analyst's start instruction to the end instruction corresponds to one discussion.
- When the voice acquisition unit 112 of the voice analysis device 100 receives the start instruction signal from the communication terminal 20, it transmits a signal instructing the acquisition of voice to the sound collection device 10 (b). When the sound collection device 10 receives the signal instructing the acquisition of voice from the voice analysis device 100, it starts collecting voice. Likewise, when the voice acquisition unit 112 of the voice analysis device 100 receives the end instruction signal from the communication terminal 20, it transmits a signal instructing the end of voice acquisition to the sound collection device 10. When the sound collection device 10 receives the signal instructing the end of voice acquisition from the voice analysis device 100, it ends the acquisition of voice.
- The sound collection device 10 acquires voice at each of the plurality of sound collection units and internally records it as the voice of the channel corresponding to each sound collection unit. The sound collection device 10 then transmits the acquired voices of the plurality of channels to the voice analysis device 100 (c). The sound collection device 10 may transmit the acquired voice sequentially, may transmit a predetermined amount or a predetermined duration of voice at a time, or may transmit the voice from the start to the end of the acquisition all at once.
- The voice acquisition unit 112 of the voice analysis device 100 receives the voice from the sound collection device 10 and stores it in the voice storage unit 132.
- The voice analysis device 100 analyzes the voice acquired from the sound collection device 10 at predetermined timings.
- The voice analysis device 100 may analyze the voice when the analyst gives an analysis instruction on the communication terminal 20 by a predetermined operation. In this case, the analyst selects the voice corresponding to the discussion to be analyzed from the voices stored in the voice storage unit 132.
- Alternatively, the voice analysis device 100 may analyze the voice when the voice acquisition ends; in this case, the voice from the start to the end of the acquisition corresponds to the discussion to be analyzed. The voice analysis device 100 may also analyze the voice sequentially during acquisition (that is, in real-time processing); in this case, the voice over a predetermined length of time in the past (for example, the last 30 seconds) from the current time corresponds to the discussion to be analyzed.
- When analyzing the voice, the sound source localization unit 113 first performs sound source localization based on the voices of the plurality of channels acquired by the voice acquisition unit 112 (d). Sound source localization is a process of estimating the direction of the sound source contained in the acquired voice for each time interval (for example, every 10 to 100 milliseconds). The sound source localization unit 113 associates the direction of the sound source estimated for each time with the direction of a participant indicated by the setting information stored in the setting information storage unit 131, as sketched below.
- As long as the sound source localization unit 113 can identify the direction of the sound source from the sound acquired by the sound collection device 10, any known sound source localization method can be used, such as the Multiple Signal Classification (MUSIC) method or beamforming.
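- The patent does not spell out how an estimated direction is matched to a configured participant direction. The following is a minimal sketch of one plausible approach, a nearest-angle lookup with a tolerance; the function names, data layout, and the 20-degree tolerance are illustrative assumptions, not part of the disclosure.

```python
# Sketch: assign an estimated sound-source angle (degrees, 0-360) to the
# participant whose configured direction is closest, within a tolerance.

def angular_distance(a: float, b: float) -> float:
    """Smallest absolute difference between two angles on a circle."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def localize_to_participant(estimated_angle: float,
                            participant_angles: dict[str, float],
                            tolerance: float = 20.0):
    """Return the nearest participant's ID, or None if none is within tolerance."""
    best_id, best_dist = None, tolerance
    for participant_id, angle in participant_angles.items():
        d = angular_distance(estimated_angle, angle)
        if d <= best_dist:
            best_id, best_dist = participant_id, d
    return best_id

# Example: four participants placed around the sound collection device as in FIG. 4.
participants = {"U1": 0.0, "U2": 90.0, "U3": 180.0, "U4": 270.0}
print(localize_to_participant(95.0, participants))  # -> "U2"
```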
- The analysis unit 114 analyzes the voice based on the voice acquired by the voice acquisition unit 112 and the direction of the sound source estimated by the sound source localization unit 113 (e).
- The analysis unit 114 may analyze an entire completed discussion, or a part of a discussion in the case of real-time processing.
- The analysis unit 114 first determines, for each time interval (for example, every 10 to 100 milliseconds) in the discussion to be analyzed, which participant is speaking.
- The analysis unit 114 identifies each continuous period from the start to the end of one participant's speech as a speech period and stores it in the analysis result storage unit 133. When a plurality of participants speak at the same time, the analysis unit 114 identifies a speech period for each participant.
- The analysis unit 114 also calculates the amount of speech of each participant for each time and stores it in the analysis result storage unit 133. Specifically, the analysis unit 114 calculates, for a given time window (for example, 5 seconds), the length of time the participant spoke divided by the length of the time window as the amount of speech for that time. The analysis unit 114 then repeats this calculation for each participant while shifting the time window by a predetermined time (for example, one second) from the start time of the discussion to the end time (or to the current time in the case of real-time processing), as sketched below.
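- As an illustration of the sliding-window calculation above, here is a minimal sketch that computes one participant's amount of speech per time from that participant's speech periods. The data layout and function names are assumptions; the 5-second window and 1-second step follow the example values in the text.

```python
# Sketch: amount of speech per time = (seconds spoken inside the window) / (window length).

def overlap(period: tuple[float, float], window: tuple[float, float]) -> float:
    """Length in seconds of the overlap between a speech period and a window."""
    return max(0.0, min(period[1], window[1]) - max(period[0], window[0]))

def speech_amounts(speech_periods: list[tuple[float, float]],
                   start: float, end: float,
                   window: float = 5.0, step: float = 1.0) -> list[float]:
    """Amount of speech (0.0 to 1.0) for each window position, for one participant."""
    amounts = []
    t = start
    while t + window <= end:
        spoken = sum(overlap(p, (t, t + window)) for p in speech_periods)
        amounts.append(spoken / window)
        t += step
    return amounts

# Example: a participant spoke during 2-6 s and 9-11 s of a 15-second discussion.
print(speech_amounts([(2.0, 6.0), (9.0, 11.0)], start=0.0, end=15.0))
```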
- Next, the analysis unit 114 detects transitions of the speaker.
- After one participant (the first participant) finishes speaking, another participant (the second participant) may speak next, or the same participant may speak again.
- Two or more switches of the speech period may also be detected as a single transition. For example, one participant (the first participant) finishing speaking, another participant (the second participant) speaking, and then yet another participant (the third participant) speaking may be detected as one transition.
- The analysis unit 114 records the occurrence time of each transition detected in the discussion to be analyzed, the transition source participant, and the transition destination participant in association with one another in the analysis result storage unit 133.
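- The following minimal sketch shows the simplest form of this detection, recording one transition whenever a speech period is followed by the next; handling of simultaneous speech and of multi-switch transitions is omitted, and the data layout is an assumption.

```python
# Sketch: derive (occurrence time, source participant, destination participant)
# triples from chronologically ordered speech periods.

from typing import NamedTuple

class SpeechPeriod(NamedTuple):
    participant: str
    start: float
    end: float

def detect_transitions(periods: list[SpeechPeriod]) -> list[tuple[float, str, str]]:
    """Record one transition per switch of the speech period."""
    ordered = sorted(periods, key=lambda p: p.start)
    transitions = []
    for prev, curr in zip(ordered, ordered[1:]):
        # Transitions back to the same participant are recorded too; they become
        # the diagonal components of the matrix B described below.
        transitions.append((curr.start, prev.participant, curr.participant))
    return transitions

periods = [SpeechPeriod("U1", 0.0, 4.0), SpeechPeriod("U4", 4.5, 9.0),
           SpeechPeriod("U1", 9.5, 12.0)]
print(detect_transitions(periods))  # [(4.5, 'U1', 'U4'), (9.5, 'U4', 'U1')]
```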
- FIG. 5 is a schematic view of the matrix B in which the analysis unit 114 aggregates the speaker transitions. Although FIG. 5 represents the matrix B as a table of character strings for visibility, it may be represented in any other form recognizable by a computer, such as binary data.
- The matrix B represents the number of transitions from each transition source participant to each transition destination participant in the discussion to be analyzed.
- In the example of FIG. 5, the number of transitions from participant U1 back to the same participant U1 is two, and the number of transitions from participant U1 to another participant U4 is eight.
- The diagonal components of the matrix B indicate that the speaker did not change, and the off-diagonal components indicate that the speaker changed. The analysis unit 114 can therefore assess the atmosphere of the group by comparing the diagonal and off-diagonal components of the matrix B.
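- A minimal sketch of the matrix B and the diagonal/off-diagonal comparison follows. The direction of the ratio (off-diagonal over diagonal) and all names are assumptions; the text only says the two averages are compared.

```python
# Sketch: build the transition-count matrix B and compare its diagonal
# (same speaker continues) with its off-diagonal (speaker changes).

from collections import Counter

def transition_matrix(transitions: list[tuple[float, str, str]],
                      participants: list[str]) -> dict[tuple[str, str], int]:
    """Number of transitions from each source participant to each destination."""
    counts = Counter((src, dst) for _, src, dst in transitions)
    return {(s, d): counts.get((s, d), 0) for s in participants for d in participants}

def atmosphere_ratio(matrix: dict[tuple[str, str], int],
                     participants: list[str]) -> float:
    """Mean off-diagonal count over mean diagonal count; larger values mean
    speakers alternated more often."""
    n = len(participants)
    diagonal = sum(matrix[(p, p)] for p in participants) / n
    off_diagonal = sum(v for (s, d), v in matrix.items() if s != d) / (n * (n - 1))
    return off_diagonal / diagonal if diagonal else float("inf")

participants = ["U1", "U2", "U3", "U4"]
b = transition_matrix([(4.5, "U1", "U4"), (9.5, "U4", "U1")], participants)
print(b[("U1", "U4")], atmosphere_ratio(b, participants))  # 1 inf (no diagonal transitions)
```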
- The output unit 115 controls the display of the analysis results produced by the analysis unit 114 on the display unit 21 by transmitting display information to the communication terminal 20 (f).
- The method by which the output unit 115 controls the display of the analysis results is described below with reference to FIGS. 6 to 8.
- The output unit 115 of the voice analysis device 100 reads the analysis results for the discussion to be displayed from the analysis result storage unit 133.
- The output unit 115 may display a discussion immediately after the analysis unit 114 completes its analysis, or may display a discussion specified by the analyst.
- FIG. 6 is a front view of the display unit 21 of the communication terminal 20 displaying the speaker transition screen C.
- The speaker transition screen C includes a circle C1 indicating the arrangement of the participants U, lines C2 indicating speaker transitions, and bars C3 indicating the amount of speech of each participant U.
- The output unit 115 displays the time change of the speaker transitions as information indicating the timing of the speaker transitions, based on the analysis results read from the analysis result storage unit 133.
- Specifically, the output unit 115 generates display information for displaying, for a predetermined period (for example, 5 seconds) from the time a transition occurred, a line connecting the position of the transition source participant and the position of the transition destination participant.
- The circle C1 is a circular area that schematically represents the arrangement of the participants U.
- The output unit 115 displays the identification information of each participant U (that is, U1 to U4) near the position on the circle C1 corresponding to the position of that participant U set in FIG. 4.
- A line C2 connects the position of the transition source participant U on the circle C1 and the position of the transition destination participant U on the circle C1 when a speaker transition occurs.
- The line C2 is displayed in a predetermined color and with a predetermined thickness.
- The line C2 may be a straight line segment, a bent line, or a broken line such as a dotted line.
- The output unit 115 causes the display unit 21 to display the line C2 connecting the position of the transition source participant U and the position of the transition destination participant U for a predetermined period (here, five seconds) from the time the transition occurred. The output unit 115 then causes the display unit 21 to erase the line C2 once the predetermined period has elapsed.
- The output unit 115 repeats this generation and erasure of lines representing speaker transitions from the start time to the end time of the discussion to be displayed. In this way, the output unit 115 can cause the display unit 21 to display the time change of the speaker transitions.
- The output unit 115 may advance the displayed time automatically (that is, display the transitions as a moving image) or advance it according to operations by the user; a sketch of the underlying timing rule follows.
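- The generation-and-erasure behavior above amounts to a simple visibility rule: at display time t, a line is on screen exactly when its transition occurred within the last few seconds. A minimal sketch, assuming the 5-second display period from the example:

```python
# Sketch: which transition lines C2 should be visible at a given display time.

def visible_lines(transitions: list[tuple[float, str, str]],
                  display_time: float, duration: float = 5.0) -> list[tuple[str, str]]:
    """Transitions whose line appeared at most `duration` seconds ago."""
    return [(src, dst) for t, src, dst in transitions
            if display_time - duration <= t <= display_time]

print(visible_lines([(4.5, "U1", "U4"), (9.5, "U4", "U1")], display_time=8.0))
# -> [('U1', 'U4')]: the 4.5 s transition is still within its 5-second display period.
```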
- By displaying the time change of the speaker transitions as the information indicating their timing, the output unit 115 can show how the transition tendency changes along the time series of the discussion. As a result, the analyst can efficiently grasp the role of each participant U and the relationships between the participants U along the time series of the discussion.
- The output unit 115 may shift the end positions of multiple lines C2 by a predetermined amount before displaying them on the display unit 21. In this way, the output unit 115 can prevent multiple lines C2 from coinciding even when multiple transitions occur between the same participants U at the same time.
- Alternatively, the output unit 115 may change the display mode of the line C2, such as its thickness or color, based on the number of transitions that occurred. For example, the output unit 115 causes the display unit 21 to display a thicker line C2 as the number of transitions increases, or displays the line C2 in a different color according to the number of transitions.
- In this way, the output unit 115 can show the analyst, in an easily understandable manner, that multiple transitions occurred between the same participants U at the same time.
- The output unit 115 may also change the display mode of the line C2, such as its thickness or color, based on the cumulative number of transitions for the same combination of participants U from the start time of the discussion to the displayed time.
- For example, the output unit 115 causes the display unit 21 to display a thicker line C2 as the cumulative number of transitions increases, or displays the line C2 in a different color according to the cumulative number of transitions.
- In this way, the output unit 115 can show the analyst, in an easily understandable manner, how high or low the cumulative number of transitions is for each combination of participants U.
- The output unit 115 may also change the display mode of the line C2, such as its thickness or color, depending on the combination of participants U.
- For example, the output unit 115 causes the display unit 21 to display the line C2 with a different thickness or color for each combination of participants U.
- In this way, the output unit 115 can show the analyst, in an easily understandable manner, which combination of participants U each line C2 corresponds to.
- A bar C3 is a bar-shaped area that represents the amount of speech of a participant U.
- The output unit 115 obtains the amount of speech of each participant U at the displayed time from the analysis results read from the analysis result storage unit 133. The output unit 115 then displays a bar C3 with a length or size corresponding to that amount of speech at the position on the circle C1 corresponding to the position of the participant U. For example, the output unit 115 causes the display unit 21 to display the bar C3 so that it extends farther from the circumference of the circle C1 toward the center as the amount of speech of the participant U increases. As a result, the output unit 115 can show the analyst the amount of speech of each participant at the displayed time, in addition to the time change of the speaker transitions, in an easily understandable manner.
- The output unit 115 is not limited to the amount of speech for each time; it may display a bar C3 with a length or size corresponding to the cumulative amount of speech from the start time of the discussion to the displayed time.
- The output unit 115 may also change the display mode of the bar C3, such as its color or pattern, for each participant U.
- The output unit 115 is not limited to displaying the time change of transitions from one participant U to another; it may display the time change of the combinations of participants U in which transitions occurred. In this case, the output unit 115 displays identification information indicating the combinations of participants U (for example, "U1-U2", "U1-U3", and so on) on the circle C1.
- For example, the output unit 115 causes the display unit 21 to display a line C2 connecting the position of "U1-U2" and the position of "U1-U3".
- The output unit 115 causes the display unit 21 to erase the line C2 a predetermined time after it is displayed.
- In this way, the output unit 115 can represent how the combinations of participants U in which transitions occurred change along the time series of the discussion.
- FIG. 7 is a front view of the display unit 21 of the communication terminal 20 displaying the speech order screen D.
- The speech order screen D includes areas D1 indicating the amount of speech of the participants U and arrows D2 indicating the number of transitions between speakers.
- When displaying the speech order screen D, the output unit 115 obtains the amount of speech of each participant U for each time in the discussion to be displayed from the analysis results read from the analysis result storage unit 133. The output unit 115 then calculates the total amount of speech of each participant U by summing the amounts of speech for each time from the start time to the end time of the discussion. The output unit 115 also obtains, from the analysis results, the number of transitions that occurred in the discussion for each combination of participants U (that is, the matrix B illustrated in FIG. 5).
- An area D1 is a figure representing the total amount of speech of a participant U.
- The output unit 115 causes the display unit 21 to display each area D1 with a size corresponding to the total amount of speech.
- For example, as the area D1, the output unit 115 causes the display unit 21 to display a circle whose radius is larger the greater the total amount of speech of the participant U.
- The area D1 is not limited to a circle and may be another figure such as a polygon.
- An arrow D2 is a figure representing the direction and number of transitions from one participant U to another participant U.
- The output unit 115 causes the display unit 21 to display an arrow D2 with a thickness corresponding to the number of transitions, pointing from the area D1 corresponding to the transition source participant U to the area D1 corresponding to the transition destination participant U.
- The arrow D2 may be a straight arrow, a curved arrow, or a broken arrow such as a dotted one.
- For example, the output unit 115 causes the display unit 21 to display a thicker arrow D2 as the number of transitions from the transition source participant U to the transition destination participant U increases.
- The output unit 115 may omit the arrow D2 for combinations of participants U whose number of transitions is equal to or less than a predetermined threshold.
- The output unit 115 may adjust the arrangement of the areas D1 based on the number of transitions between the participants U. In this case, the output unit 115 places the two areas D1 corresponding to a pair of participants U with many transitions close together and the two areas D1 corresponding to a pair with few transitions far apart. Alternatively, the output unit 115 may arrange the areas D1 based on the physical positions of the participants U; in this case, the output unit 115 arranges the areas D1 to match the positions of the participants U set in FIG. 4.
- In this way, the output unit 115 simultaneously shows the amount of speech of each participant U and the number of transitions between the participants. As a result, the analyst can grasp at a glance the flow of utterances among the participants U and which participants U spoke more or less.
- FIG. 8 is a front view of the display unit 21 of the communication terminal 20 displaying the analysis report screen E.
- The analysis report screen E includes the main utterance order E1, the group atmosphere E2, and the participant classification E3.
- When displaying the analysis report screen E, the output unit 115 obtains the amount of speech of each participant U for each time in the discussion to be displayed from the analysis results read from the analysis result storage unit 133. The output unit 115 then calculates the total amount of speech of each participant U by summing the amounts of speech for each time from the start time to the end time of the discussion. The output unit 115 also obtains, from the analysis results, the number of transitions that occurred in the discussion for each combination of participants U (that is, the matrix B illustrated in FIG. 5).
- The main utterance order E1 is information indicating the speaker transitions that occurred frequently in the discussion.
- The output unit 115 sums the number of transitions for each series of transitions that leads from one participant U through one or more other participants U and then returns to the first participant U.
- For example, such a series of transitions consists of a transition from participant U1 to participant U4, then a transition from participant U4 to participant U3, and then a transition from participant U3 back to the first participant U1.
- The output unit 115 determines the combination of participants U indicated by the series of transitions with the largest count as the main utterance order E1 and displays it on the analysis report screen E.
- The output unit 115 may determine two or more main utterance orders E1 in descending order of transition count. This allows the analyst to grasp which participants U were at the center of the discussion. A sketch of this counting follows.
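- A minimal sketch of this counting, restricted for brevity to series that pass through exactly two other participants (like the U1 -> U4 -> U3 -> U1 example); that restriction, and all names, are assumptions:

```python
# Sketch: count cycles in the chronological speaker sequence that leave one
# participant, pass through two others, and return to the first.

from collections import Counter

def main_utterance_orders(speakers: list[str], top: int = 1):
    """Most frequent speaker cycles, e.g. U1 -> U4 -> U3 -> U1."""
    cycles = Counter()
    for i in range(len(speakers) - 3):
        a, b, c, d = speakers[i:i + 4]
        if a == d and len({a, b, c}) == 3:  # returns to the first speaker
            cycles[(a, b, c, d)] += 1
    return cycles.most_common(top)

sequence = ["U1", "U4", "U3", "U1", "U4", "U3", "U1", "U2"]
print(main_utterance_orders(sequence))  # [(('U1', 'U4', 'U3', 'U1'), 2)]
```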
- The group atmosphere E2 is information indicating whether speaker changes were frequent or infrequent in the discussion.
- The output unit 115 calculates, from the matrix B, the average number of transitions of the diagonal components (that is, between the same participant U) and the average number of transitions of the off-diagonal components (that is, between different participants U).
- The output unit 115 then causes the analysis report screen E to display the ratio between the average value of the diagonal components and the average value of the off-diagonal components as the group atmosphere E2.
- For example, the output unit 115 displays an arrow at a position corresponding to this ratio on a scale extending in the left-right direction. The output unit 115 may also display the average value of the diagonal components and the average value of the off-diagonal components as numbers. This enables the analyst to grasp the atmosphere of the whole group that held the discussion.
- The participant classification E3 is information that classifies each participant U based on the amount of speech and the transitions of that participant U in the discussion.
- The output unit 115 classifies each participant U along two axes: an axis indicating the amount of speech of the participant U and an axis indicating whether the participant U was at the center of the discussion.
- On the axis indicating the amount of speech, the output unit 115 places participants U whose amount of speech is equal to or greater than a predetermined threshold on the positive side of the origin (to the right in FIG. 8) and participants U whose amount of speech is below the threshold on the negative side of the origin (to the left in FIG. 8).
- On the axis indicating whether the participant U was at the center of the discussion, the output unit 115 places participants U included in the main utterance order E1 on the positive side of the origin (upward in FIG. 8) and participants U not included in the main utterance order E1 on the negative side of the origin (downward in FIG. 8).
- The output unit 115 displays a predetermined label for each of the four areas (quadrants) divided by the two axes.
- The labels for the respective areas are preset in the voice analysis device 100.
- For example, the output unit 115 displays "leader type" for the upper right area (participants U with a large amount of speech who were at the center of the discussion), "participant type" for the upper left area (participants U with a small amount of speech who were nevertheless at the center of the discussion), "one more step type" for the lower right area (participants U with a large amount of speech who were not at the center of the discussion), and "non-participatory type" for the lower left area (participants U with a small amount of speech who were not at the center of the discussion).
- In this way, the analyst can grasp the standing of each participant U in the discussion as a whole.
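- A minimal sketch of the quadrant classification, assuming the threshold rule and labels described above; the function name and data layout are illustrative assumptions:

```python
# Sketch: classify participants by total speech amount (one axis) and by whether
# they appear in the main utterance order E1 (the other axis).

def classify_participants(total_speech: dict[str, float],
                          central_participants: set[str],
                          threshold: float) -> dict[str, str]:
    """Assign each participant one of the four quadrant labels."""
    labels = {}
    for participant, amount in total_speech.items():
        talkative = amount >= threshold
        central = participant in central_participants
        if talkative and central:
            labels[participant] = "leader type"
        elif central:
            labels[participant] = "participant type"
        elif talkative:
            labels[participant] = "one more step type"
        else:
            labels[participant] = "non-participatory type"
    return labels

speech = {"U1": 120.0, "U2": 30.0, "U3": 80.0, "U4": 95.0}
print(classify_participants(speech, {"U1", "U3"}, threshold=90.0))
```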
- The output unit 115 may also determine the compatibility of the participants U based on the speaker transitions and display it on the analysis report screen E.
- In this case, the output unit 115 sums the number of transitions for every combination of two participants U.
- The output unit 115 determines that a combination of participants U whose number of transitions is equal to or greater than a predetermined threshold has good compatibility, and that a combination whose number of transitions is below the threshold has poor compatibility.
- The output unit 115 causes the analysis report screen E to display the compatibility determined for each combination of participants U. This lets the analyst see which combinations of participants U had more or fewer transitions.
- The output unit 115 switches among the speaker transition screen C, the speech order screen D, and the analysis report screen E on the display unit 21 in response to operations by the analyst.
- The output unit 115 may also cause the display unit 21 to display only some of the speaker transition screen C, the speech order screen D, and the analysis report screen E.
- The output unit 115 is not limited to displaying the analysis results on the display unit; it may output them by other methods, such as printing on a printer or recording data in a storage device.
- FIG. 9 is a sequence diagram of the voice analysis method performed by the voice analysis system S according to the present embodiment.
- First, the communication terminal 20 receives the settings of the analysis conditions from the analyst and transmits them to the voice analysis device 100 as setting information (S11).
- The setting unit 111 of the voice analysis device 100 acquires the setting information from the communication terminal 20 and stores it in the setting information storage unit 131.
- The voice acquisition unit 112 of the voice analysis device 100 transmits a signal instructing the acquisition of voice to the sound collection device 10 (S12).
- When the sound collection device 10 receives the signal instructing the acquisition of voice from the voice analysis device 100, it starts recording voice using the plurality of sound collection units and transmits the recorded voices of the plurality of channels to the voice analysis device 100.
- The voice acquisition unit 112 of the voice analysis device 100 receives the voice from the sound collection device 10 and stores it in the voice storage unit 132.
- The voice analysis device 100 starts the voice analysis at one of the following timings: when the analyst gives an instruction, when the voice acquisition ends, or during the voice acquisition (that is, real-time processing).
- The sound source localization unit 113 performs sound source localization based on the voice acquired by the voice acquisition unit 112 (S14).
- The analysis unit 114 determines, based on the voice acquired by the voice acquisition unit 112 and the direction of the sound source estimated by the sound source localization unit 113, which participant spoke at each time, and specifies the speech period and the amount of speech of each participant (S15).
- The analysis unit 114 stores the speech period and the amount of speech of each participant in the analysis result storage unit 133.
- Next, the analysis unit 114 detects the speaker transitions (S16).
- The analysis unit 114 records the occurrence time of each transition, the transition source participant, and the transition destination participant in association with one another in the analysis result storage unit 133.
- The output unit 115 controls the display of the analysis results on the display unit 21 of the communication terminal 20 (S17). Specifically, the output unit 115 transmits to the communication terminal 20 the display information for displaying the speaker transition screen C, the speech order screen D, and the analysis report screen E described above.
- The communication terminal 20 displays the analysis results on the display unit 21 according to the display information received from the voice analysis device 100 (S18).
- The voice analysis device 100 according to the present embodiment automatically analyzes a discussion among a plurality of participants based on the voice acquired using the sound collection device 10, which has a plurality of sound collection units. There is therefore no need for a recorder to monitor the discussion as in the Harkness method described in Non-Patent Document 1, and no need to assign a recorder to each group, so the cost is low.
- The method of Non-Patent Document 1 represents the speech transitions over the entire period from the start to the end of the discussion. The analyst therefore could not grasp changes in the transition tendency along the time series of the discussion.
- In contrast, the voice analysis device 100 according to the present embodiment displays the time change of the transitions as information indicating the timing of the transitions between the participants' utterances in the discussion. The analyst can thereby grasp the role of each participant U and the relationships between the participants U along the time series of the discussion.
- The voice analysis device 100 also simultaneously displays, based on the acquired voice, the amount of speech of each participant U and the number of transitions between the participants. As a result, the analyst can grasp at a glance the flow of utterances among the participants U and which participants U spoke more or less.
- Furthermore, the voice analysis device 100 displays, based on the acquired voice, the main utterance order in the discussion, the group atmosphere, and the classification of the participants. This enables the analyst to understand which participants were at the center of the discussion, the atmosphere of the whole group, and the standing of each participant in the discussion as a whole.
- The processors of the voice analysis device 100, the sound collection device 10, and the communication terminal 20 are the agents of the steps (processes) included in the voice analysis method shown in FIG. 9. That is, the processors of the voice analysis device 100, the sound collection device 10, and the communication terminal 20 read the program for executing the voice analysis method shown in FIG. 9 from their storage units and execute it, thereby controlling the respective parts of the voice analysis device 100, the sound collection device 10, and the communication terminal 20 to perform the voice analysis method shown in FIG. 9.
- Some of the steps included in the voice analysis method shown in FIG. 9 may be omitted, the order of the steps may be changed, and multiple steps may be performed in parallel.
Abstract
Description
Reference signs
100 voice analysis device
110 control unit
112 voice acquisition unit
114 analysis unit
115 output unit
10 sound collection device
20 communication terminal
21 display unit
S voice analysis system
Claims (10)
- 1. A voice analysis device comprising: an acquisition unit that acquires voices uttered by a plurality of participants; an analysis unit that detects, in the voices, a transition from an utterance by a first participant among the plurality of participants to an utterance by a second participant among the plurality of participants; and an output unit that causes a display unit to display information indicating the timing at which the transition occurred.
- 2. The voice analysis device according to claim 1, wherein the output unit displays the information indicating the timing on the display unit as a line connecting a position corresponding to the first participant and a position corresponding to the second participant.
- 3. The voice analysis device according to claim 2, wherein the output unit displays the time change of the transition as the information indicating the timing, by generating the line on the display unit at the time the transition occurred and erasing the line after a predetermined time has elapsed from that time.
- 4. The voice analysis device according to claim 3, wherein the output unit changes the display mode of the line according to the combination of the first participant and the second participant.
- 5. The voice analysis device according to claim 3 or 4, wherein the output unit changes the display mode of the line according to the number of times the transition has occurred.
- 6. The voice analysis device according to any one of claims 1 to 5, wherein the analysis unit identifies, based on the voices, a period during which each of the plurality of participants is speaking, and detects the transition when the period during which the first participant is speaking switches to a period during which the second participant is speaking.
- 7. The voice analysis device according to any one of claims 1 to 6, wherein the output unit causes the display unit to display the amount of speech of each of the plurality of participants in addition to the time change of the transition.
- 8. A voice analysis method in which a processor executes: a step of acquiring voices uttered by a plurality of participants; a step of detecting, in the voices, a transition from an utterance by a first participant among the plurality of participants to an utterance by a second participant among the plurality of participants; and a step of causing a display unit to display information indicating the timing at which the transition occurred.
- 9. A voice analysis program that causes a computer to execute: a step of acquiring voices uttered by a plurality of participants; a step of detecting, in the voices, a transition from an utterance by a first participant among the plurality of participants to an utterance by a second participant among the plurality of participants; and a step of causing a display unit to display information indicating the timing at which the transition occurred.
- 10. A voice analysis system comprising a voice analysis device and a communication terminal capable of communicating with the voice analysis device, wherein the communication terminal has a display unit that displays information, and the voice analysis device includes: an acquisition unit that acquires voices uttered by a plurality of participants; an analysis unit that detects, in the voices, a transition from an utterance by a first participant among the plurality of participants to an utterance by a second participant among the plurality of participants; and an output unit that causes the display unit to display information indicating the timing at which the transition occurred.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018502278A JP6646134B2 (en) | 2018-01-16 | 2018-01-16 | Voice analysis device, voice analysis method, voice analysis program, and voice analysis system |
PCT/JP2018/000941 WO2019142230A1 (en) | 2018-01-16 | 2018-01-16 | Voice analysis device, voice analysis method, voice analysis program, and voice analysis system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2018/000941 WO2019142230A1 (en) | 2018-01-16 | 2018-01-16 | Voice analysis device, voice analysis method, voice analysis program, and voice analysis system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019142230A1 true WO2019142230A1 (en) | 2019-07-25 |
Family
- Family ID: 67301369
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2018/000941 WO2019142230A1 (en) | 2018-01-16 | 2018-01-16 | Voice analysis device, voice analysis method, voice analysis program, and voice analysis system |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP6646134B2 (en) |
WO (1) | WO2019142230A1 (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004350134A (en) * | 2003-05-23 | 2004-12-09 | Nippon Telegr & Teleph Corp <Ntt> | Meeting outline grasp support method in multi-point electronic conference system, server for multi-point electronic conference system, meeting outline grasp support program, and recording medium with the program recorded thereon |
JP2013058221A (en) * | 2012-10-18 | 2013-03-28 | Hitachi Ltd | Conference analysis system |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2023021972A (en) * | 2019-10-28 | 2023-02-14 | ハイラブル株式会社 | Speech analysis device, speech analysis method, speech analysis program, and speech analysis system |
JP7427274B2 (en) | 2019-10-28 | 2024-02-05 | ハイラブル株式会社 | Speech analysis device, speech analysis method, speech analysis program and speech analysis system |
JP7530070B2 (en) | 2020-06-01 | 2024-08-07 | ハイラブル株式会社 | Audio conference device, audio conference system, and audio conference method |
WO2023210052A1 (en) * | 2022-04-27 | 2023-11-02 | ハイラブル株式会社 | Voice analysis device, voice analysis method, and voice analysis program |
WO2023209898A1 (en) * | 2022-04-27 | 2023-11-02 | ハイラブル株式会社 | Voice analysis device, voice analysis method, and voice analysis program |
Also Published As
Publication number | Publication date |
---|---|
JP6646134B2 (en) | 2020-02-14 |
JPWO2019142230A1 (en) | 2020-02-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10056094B2 (en) | Method and apparatus for speech behavior visualization and gamification | |
WO2019142230A1 (en) | Voice analysis device, voice analysis method, voice analysis program, and voice analysis system | |
JP7453714B2 (en) | Argument analysis device and method | |
CN111901627B (en) | Video processing method and device, storage medium and electronic equipment | |
CN110600033A (en) | Learning condition evaluation method and device, storage medium and electronic equipment | |
CN110473533A (en) | Speech dialogue system, speech dialog method and program | |
WO2024099359A1 (en) | Voice detection method and apparatus, electronic device and storage medium | |
JP6589042B1 (en) | Speech analysis apparatus, speech analysis method, speech analysis program, and speech analysis system | |
JP7427274B2 (en) | Speech analysis device, speech analysis method, speech analysis program and speech analysis system | |
JP6589040B1 (en) | Speech analysis apparatus, speech analysis method, speech analysis program, and speech analysis system | |
WO2022079777A1 (en) | Analysis device, analysis system, analysis method, and non-transitory computer-readable medium having program stored thereon | |
JP6733452B2 (en) | Speech analysis program, speech analysis device, and speech analysis method | |
JP7452299B2 (en) | Conversation support system, conversation support method and program | |
US20210012791A1 (en) | Image representation of a conversation to self-supervised learning | |
JP6975755B2 (en) | Voice analyzer, voice analysis method, voice analysis program and voice analysis system | |
JP6589041B1 (en) | Speech analysis apparatus, speech analysis method, speech analysis program, and speech analysis system | |
JP7414319B2 (en) | Speech analysis device, speech analysis method, speech analysis program and speech analysis system | |
JP6975756B2 (en) | Voice analyzer, voice analysis method, voice analysis program and voice analysis system | |
JP7149019B2 (en) | Speech analysis device, speech analysis method, speech analysis program and speech analysis system | |
WO2022079767A1 (en) | Analysis device, system, method, and non-transitory computer-readable medium storing program | |
JP7449577B2 (en) | Information processing device, information processing method, and program | |
WO2022079773A1 (en) | Analysis device, system, method, and non-transitory computer-readable medium having program stored therein | |
JP2022144417A (en) | Hearing support device, hearing support method and hearing support program | |
CN115440231A (en) | Speaker recognition method, device, storage medium, client and server |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2018502278 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18900613 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02.10.2020) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18900613 Country of ref document: EP Kind code of ref document: A1 |