WO2016157650A1 - Information processing apparatus, control method, and program - Google Patents
Information processing apparatus, control method, and program
- Publication number
- WO2016157650A1 (PCT/JP2015/085845; JP2015085845W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- score
- information processing
- processing apparatus
- utterance
- display
- Prior art date
Links
- 230000010365 information processing Effects 0.000 title claims abstract description 72
- 238000000034 method Methods 0.000 title claims abstract description 45
- 238000004458 analytical method Methods 0.000 claims abstract description 82
- 230000004044 response Effects 0.000 claims abstract description 72
- 238000004364 calculation method Methods 0.000 claims abstract description 35
- 230000006870 function Effects 0.000 claims description 17
- 230000000875 corresponding effect Effects 0.000 description 129
- 238000012545 processing Methods 0.000 description 17
- 230000009471 action Effects 0.000 description 16
- 238000010586 diagram Methods 0.000 description 13
- 230000008569 process Effects 0.000 description 9
- 238000004891 communication Methods 0.000 description 7
- 230000000694 effects Effects 0.000 description 5
- 238000005516 engineering process Methods 0.000 description 5
- 230000001276 controlling effect Effects 0.000 description 4
- 230000004913 activation Effects 0.000 description 2
- 238000004590 computer program Methods 0.000 description 2
- 238000003058 natural language processing Methods 0.000 description 2
- 230000001151 other effect Effects 0.000 description 2
- 230000005236 sound signal Effects 0.000 description 2
- 230000004397 blinking Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 239000003795 chemical substances by application Substances 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000002250 progressing effect Effects 0.000 description 1
Images
Classifications
- G10L15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
- G06F3/04817 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, using icons
- G06F3/167 — Audio in a user interface, e.g. using voice commands for navigating, audio feedback
- G10L15/1815 — Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
- G06F40/30 — Semantic analysis
- G10L15/1822 — Parsing for meaning understanding
- G10L2015/221 — Announcement of recognition results
- G10L2015/223 — Execution procedure of a spoken command
Definitions
- the present disclosure relates to an information processing device, a control method, and a program.
- For example, voice UI applications mounted on smartphones and tablet terminals respond to user utterances by voice.
- Patent Document 1 proposes a system in which input speech is converted in real time and displayed as text.
- However, such a system does not assume an interactive voice UI. That is, the displayed content is only text obtained by converting the input voice, and no feedback is given about the response (also referred to as a corresponding action) determined by semantic analysis, as in a voice dialogue. Therefore, the user cannot confirm what specific action his or her utterance will trigger until the system activates that action.
- In view of this, the present disclosure proposes an information processing apparatus, a control method, and a program capable of notifying the user of response candidates while the utterance is still in progress in a voice UI.
- According to the present disclosure, an information processing apparatus is proposed that includes: a semantic analysis unit that performs semantic analysis on an utterance text recognized by a voice recognition unit during the utterance; a score calculation unit that calculates scores of response candidates based on the analysis result of the semantic analysis unit; and a notification control unit that controls notification of the response candidates during the utterance according to the scores calculated by the score calculation unit.
- According to the present disclosure, a control method is proposed that includes: performing semantic analysis on an utterance text recognized by a voice recognition unit during the utterance; calculating, by a score calculation unit, scores of response candidates based on the result of the semantic analysis; and controlling notification of the response candidates during the utterance according to the calculated scores.
- According to the present disclosure, a program is proposed that causes a computer to function as: a semantic analysis unit that performs semantic analysis on an utterance text recognized by a voice recognition unit during the utterance; a score calculation unit that calculates scores of response candidates based on the analysis result of the semantic analysis unit; and a notification control unit that controls notification of the response candidates during the utterance according to the scores calculated by the score calculation unit.
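As a rough, self-contained sketch of how these three units could be wired together (all names, data, and logic here are illustrative assumptions, not the disclosed implementation):

```python
# Illustrative sketch of the pipeline summarized above; everything is hypothetical.
ACTIONS = {"weather": "weather_app", "schedule": "calendar_app", "video": "video_app"}

def semantic_analysis(partial_text: str) -> list[str]:
    """Semantic analysis unit: map a partial utterance to candidate actions."""
    return [app for kw, app in ACTIONS.items() if kw in partial_text]

def calculate_scores(candidates: list[str]) -> dict[str, float]:
    """Score calculation unit: assign each candidate a (uniform, toy) score."""
    return {c: 1.0 / len(candidates) for c in candidates} if candidates else {}

def notify(scores: dict[str, float]) -> None:
    """Notification control unit: surface candidates ordered by score."""
    for action, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"  candidate: {action} (score={score:.2f})")

# Partial recognition results arriving while the user is still speaking.
for partial in ["tell me", "tell me the weather", "tell me the weather and my schedule"]:
    print(f"partial utterance: {partial!r}")
    notify(calculate_scores(semantic_analysis(partial)))
```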
- A speech recognition system according to an embodiment of the present disclosure has a basic function of performing speech recognition and semantic analysis on a user's utterance and responding by voice.
- FIG. 1 An outline of a speech recognition system according to an embodiment of the present disclosure will be described with reference to FIG.
- FIG. 1 is a diagram for describing an overview of a speech recognition system according to an embodiment of the present disclosure.
- the information processing apparatus 1 shown in FIG. 1 has a voice UI agent function that can perform voice recognition and semantic analysis on a user's utterance and output a response to the user by voice.
- the external appearance of the information processing apparatus 1 is not particularly limited, but may be, for example, a cylindrical shape as shown in FIG. 1 and is installed on the floor of a room or on a table.
- The information processing apparatus 1 is provided with a light emitting unit 18 formed of light emitting elements such as LEDs (light emitting diodes), arranged so as to surround the horizontal central region of its side surface.
- The information processing apparatus 1 can notify the user of its state by illuminating the entire light emitting unit 18 or a part of it. For example, when interacting with the user, the information processing apparatus 1 partially illuminates the light emitting unit 18 in the user's direction, that is, the speaker's direction, so that it appears to direct its line of sight toward the user as shown in FIG. 1. Further, the information processing apparatus 1 can notify the user that processing is in progress by controlling the light emitting unit 18 so that light travels around the side surface during response generation or data search.
- FIG. 2 is a diagram illustrating the timing of speech and response in a general voice UI. As shown in FIG. 2, in the utterance section in which the user performs the utterance 100 "Tell me about the weather", the system gives no feedback; voice recognition and semantic analysis are executed after the utterance ends.
- After the processing is completed, the system outputs a response voice 102 such as "Today's weather is sunny" and a response image 104 indicating the weather information as a confirmed response.
- FIG. 3 is a diagram for explaining the timing of speech and response in the voice UI according to the present embodiment.
- In contrast, in the utterance section in which the user performs the utterance 200 "Tell me about the weather", voice recognition and semantic analysis are performed sequentially on the system side, and response candidates based on the recognition results are notified to the user.
- the icon 201 indicating the weather application is displayed based on the speech recognition up to “Today's weather”.
- the system After the utterance ends, the system outputs a response voice 202 such as “Today's weather is sunny” and a response image 204 indicating the weather information as a confirmed response.
- For example, while the user is uttering the utterance 30 "This week's weather...", the information processing apparatus 1 performs voice recognition and semantic analysis on "this week's weather" and, based on the result, acquires activation of a video application, a weather forecast application, and a calendar application as corresponding action candidates. The information processing apparatus 1 then projects a video application icon 21a, a weather forecast application icon 21b, and a calendar application icon 21c onto the wall 20 to notify the user of the response candidates.
- the shape of the information processing apparatus 1 is not limited to the cylindrical shape shown in FIG. 1 and may be a cube, a sphere, a polyhedron, or the like.
- the basic configuration and operation processing of the information processing apparatus 1 that implements the speech recognition system according to the embodiment of the present disclosure will be sequentially described.
- FIG. 4 is a diagram illustrating an example of the configuration of the information processing apparatus 1 according to the present embodiment.
- the information processing apparatus 1 includes a control unit 10, a communication unit 11, a microphone 12, a speaker 13, a camera 14, a distance measuring sensor 15, a projection unit 16, a storage unit 17, and a light emitting unit 18.
- Control unit 10 The control unit 10 controls each configuration of the information processing apparatus 1.
- The control unit 10 is realized by a microcomputer including a central processing unit (CPU), a read only memory (ROM), a random access memory (RAM), and a nonvolatile memory. Further, as shown in FIG. 4, the control unit 10 according to the present embodiment also functions as a voice recognition unit 10a, a semantic analysis unit 10b, a corresponding action acquisition unit 10c, a score calculation unit 10d, a display control unit 10e, and an execution unit 10f.
- the voice recognition unit 10a recognizes the user's voice picked up by the microphone 12 of the information processing apparatus 1, converts it into a character string, and acquires the utterance text.
- the voice recognition unit 10a can also identify a person who is speaking based on the characteristics of the voice, and can estimate the direction of the voice source, that is, the speaker.
- the speech recognition unit 10a sequentially performs speech recognition in real time after the user's utterance is started, and outputs a speech recognition result during the utterance to the semantic analysis unit 10b.
- the semantic analysis unit 10b performs semantic analysis on the utterance text acquired by the speech recognition unit 10a using natural language processing or the like.
- the result of the semantic analysis is output to the corresponding action acquisition unit 10c.
- the semantic analysis unit 10b can sequentially perform the semantic analysis based on the speech recognition result during the utterance output from the speech recognition unit 10a.
- the semantic analysis unit 10b outputs the result of the sequential semantic analysis to the corresponding action acquisition unit 10c.
- the corresponding action acquisition unit 10c acquires a corresponding action for the user's utterance based on the semantic analysis result.
- the corresponding action acquisition unit 10c can acquire the current corresponding action candidate based on the semantic analysis result during the utterance.
- For example, the corresponding action acquisition unit 10c compares the utterance text recognized by the voice recognition unit 10a with example sentences registered for semantic analysis learning, and acquires the action corresponding to the example sentence with the highest similarity.
- At this time, since the utterance is still in progress, the corresponding action acquisition unit 10c may compare the utterance with only the first part of each example sentence, according to the length of the utterance so far.
- the corresponding action acquisition unit 10c can also acquire a corresponding action candidate using the occurrence probability in units of words included in the utterance text.
- Here, a semantic analysis engine using natural language processing can be built on a learning basis: a large number of utterance examples assumed by the system are collected in advance, each is correctly assigned to a corresponding action of the system (also referred to as labeling), and the result is learned as a data set. The target corresponding action can then be acquired by comparing this data set with the speech-recognized utterance text. Note that this embodiment does not depend on the type of semantic analysis engine, and the data set learned by the semantic analysis engine may be personalized for each user.
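A minimal sketch of this example-sentence matching, using a hand-made data set and a generic string-similarity measure as stand-ins for a real semantic-analysis engine (all entries are invented for illustration):

```python
from difflib import SequenceMatcher

# Hypothetical labeled data set: example sentence -> corresponding action.
DATASET = {
    "tell me this week's weather": "weather_app.launch",
    "show my schedule for this week": "calendar_app.launch",
    "play this week's new videos": "video_app.launch",
}

def candidate_actions(partial_utterance: str) -> dict[str, float]:
    """Compare the partial utterance against the head of each example
    sentence (trimmed to the utterance's current length) and keep the
    best similarity per corresponding action."""
    best: dict[str, float] = {}
    for example, action in DATASET.items():
        head = example[: len(partial_utterance)]
        sim = SequenceMatcher(None, partial_utterance, head).ratio()
        best[action] = max(best.get(action, 0.0), sim)
    return best

print(candidate_actions("tell me this we"))  # the weather example scores highest
```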
- the corresponding action acquisition unit 10c outputs the acquired corresponding action candidate to the score calculation unit 10d.
- Based on the semantic analysis result after the end of the utterance, the corresponding action acquisition unit 10c determines the final corresponding action and outputs it to the execution unit 10f.
- the score calculation unit 10d calculates the score of the corresponding action candidate acquired by the corresponding action acquisition unit 10c, and outputs the calculated score of each corresponding action candidate to the display control unit 10e. For example, the score calculation unit 10d calculates a score according to the degree of similarity in comparison with an example sentence registered for semantic analysis learning performed when the corresponding action candidate is acquired.
- The score calculation unit 10d can also calculate the score in consideration of the user environment. When the voice UI according to this embodiment is operated, the user environment is continuously acquired and stored as the user's history, so that, when the user can be identified, the score can be calculated taking into account the user's operation history and current situation. As the user environment, for example, the time zone, the day of the week, who the user is with, the state of external devices present nearby (for example, whether the TV is on), the noise environment, and the brightness of the room (that is, the illuminance environment) can be acquired. Basically, weighting according to the user environment is performed in combination with the score calculation according to the similarity with the example sentences described above.
- For example, after learning such a data set, the information processing apparatus 1 may perform score weighting according to the current user environment.
- For instance, in a user environment where the user is alone in the room on a weekend night, the score calculation unit 10d calculates the scores by weighting the video application candidate more heavily.
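Such environment-dependent weighting might be combined with the similarity score roughly as follows (the environment features and weights are invented for illustration):

```python
# Hypothetical weights learned from the user's operation history:
# (time_zone, day_type, alone?) -> per-action score multiplier.
ENV_WEIGHTS = {
    ("night", "weekend", True): {"video_app": 1.5, "calendar_app": 0.8},
    ("morning", "weekday", True): {"calendar_app": 1.4},
}

def weighted_score(action: str, similarity: float,
                   env: tuple[str, str, bool]) -> float:
    """Combine example-sentence similarity with user-environment weighting."""
    return similarity * ENV_WEIGHTS.get(env, {}).get(action, 1.0)

env = ("night", "weekend", True)  # the user is alone in the room on a weekend night
for action, sim in {"video_app": 0.4, "calendar_app": 0.4}.items():
    print(action, weighted_score(action, sim, env))  # the video app is boosted
```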
- Since the acquisition of the utterance text by the speech recognition unit 10a is performed sequentially and the semantic analysis by the semantic analysis unit 10b is also performed sequentially, the corresponding action candidates acquired by the corresponding action acquisition unit 10c are also updated sequentially.
- The score calculation unit 10d sequentially updates the score of each corresponding action candidate as the candidates are updated, and outputs the scores to the display control unit 10e.
- the display control unit 10e functions as a notification control unit that controls the corresponding action candidate to be notified to the user during the utterance according to the score of each corresponding action candidate calculated by the score calculation unit 10d. For example, the display control unit 10e controls the projection unit 16 to project and display an icon indicating each corresponding action candidate on the wall 20. Further, when the score is updated by the score calculation unit 10d, the display control unit 10e updates the display so as to notify each corresponding action candidate to the user according to the new score.
- FIG. 5 is a diagram illustrating a display example of corresponding action candidates according to the score according to the present embodiment.
- For example, the score of the weather application is calculated as "0.5", the score of the video application as "0.3", and the score of the calendar application as "0.2".
- the display control unit 10e controls to project and display an icon 21a indicating a weather application, an icon 21b indicating a moving image application, and an icon 21c indicating a calendar application, as shown on the left in FIG.
- At this time, the display control unit 10e may animate the icons 21a to 21c so that they slide into the display area from outside. Accordingly, during the utterance the user can intuitively grasp which corresponding action candidates the system has currently acquired through its voice recognition processing. The display control unit 10e may also correlate the image area (size) of each projected icon with its score.
- As the utterance advances and the scores are updated, the display control unit 10e updates the projection screen so that, for example, corresponding actions whose scores fall below a predetermined threshold are hidden and the icons of the remaining corresponding actions are displayed larger. Specifically, as shown in the center of FIG. 5, the display control unit 10e controls to project and display only the icon 21c-1 indicating the calendar application. Note that sliding or fading an icon out of the display area may be used to hide it.
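One way such score-linked display control could work is sketched below; the threshold and icon sizes are arbitrary illustrative choices:

```python
HIDE_THRESHOLD = 0.05   # hide candidates whose score falls below this value
MAX_ICON_PX = 240       # the icon area is correlated with the score

def layout_icons(scores: dict[str, float]) -> dict[str, int]:
    """Keep candidates at or above the threshold; size each remaining icon
    in proportion to its share of the surviving scores."""
    visible = {a: s for a, s in scores.items() if s >= HIDE_THRESHOLD}
    total = sum(visible.values()) or 1.0
    return {a: int(MAX_ICON_PX * s / total) for a, s in visible.items()}

print(layout_icons({"weather_app": 0.5, "video_app": 0.3, "calendar_app": 0.2}))
print(layout_icons({"weather_app": 0.00, "video_app": 0.02, "calendar_app": 0.98}))
# second call: only calendar_app survives and is displayed large
```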
- Then, as shown in the score table 42, the score of the weather application is updated to "0.00", the score of the video application to "0.02", and the score of the calendar application to "0.98".
- The display control unit 10e then performs display control so that the displayed icon 21c-2 indicating the calendar application is hidden (for example, using a fade-out).
- the corresponding action acquisition unit 10c determines to start the calendar application as the corresponding action based on the utterance text and the semantic analysis result determined after the utterance is finished, and the execution unit 10f starts the calendar application.
- the display control unit 10e displays the scheduled month image 22 generated by the calendar application activated by the execution unit 10f.
- In this way, voice recognition is performed sequentially during the utterance and corresponding action candidates are fed back to the user; as the utterance advances the candidates are updated, and when the utterance ends the finally determined corresponding action is executed.
- the execution unit 10f executes the corresponding action determined by the corresponding action acquisition unit 10c when the utterance is finished and the utterance text is confirmed (that is, the voice recognition is finished).
- For example, the following are assumed as corresponding actions.
- the communication unit 11 transmits / receives data to / from an external device.
- the communication unit 11 is connected to a predetermined server on the network, and receives various kinds of information necessary for executing the corresponding action by the execution unit 10f.
- the microphone 12 has a function of picking up surrounding sound and outputting it as a sound signal to the control unit 10.
- the microphone 12 may be realized by an array microphone.
- the speaker 13 has a function of converting an audio signal into sound according to the control of the control unit 10 and outputting the sound.
- the camera 14 has a function of capturing the periphery with an imaging lens provided in the information processing apparatus 1 and outputting the captured image to the control unit 10.
- the camera 14 may be realized by a 360-degree camera, a wide-angle camera, or the like.
- the distance measuring sensor 15 has a function of measuring the distance between the information processing apparatus 1 and the user or a person around the user.
- the distance measuring sensor 15 is realized by, for example, an optical sensor (a sensor that measures a distance to an object based on phase difference information of light emission / light reception timing).
- the projection unit 16 is an example of a display device, and has a function of displaying an image by projecting (enlarged) an image on a wall or a screen.
- the storage unit 17 stores a program for each component of the information processing apparatus 1 to function. Further, the storage unit 17 stores various parameters used when the score calculation unit 10d calculates the score of the corresponding action candidate, and application programs executed by the execution unit 10f.
- the storage unit 17 stores user registration information.
- User registration information includes personal identification information (voice feature quantities, face images, feature quantities of person images (including body images), name, identification number, etc.), age, gender, hobbies and preferences, attributes (housewife, office worker, student, etc.), and information on communication terminals owned by the user.
- The light emitting unit 18 is realized by light emitting elements such as LEDs, and full lighting, partial lighting, blinking, and control of the lighting position are possible. For example, under the control of the control unit 10, the light emitting unit 18 can appear to direct its line of sight toward the speaker by partially lighting up in the direction of the speaker recognized by the voice recognition unit 10a.
- the configuration of the information processing apparatus 1 according to the present embodiment has been specifically described above.
- the configuration shown in FIG. 4 is an example, and the present embodiment is not limited to this.
- the information processing apparatus 1 may further include an IR (infrared) camera, a depth camera, a stereo camera, a human sensor, or the like in order to acquire information related to the surrounding environment.
- the installation positions of the microphone 12, the speaker 13, the camera 14, the light emitting unit 18, and the like provided in the information processing apparatus 1 are not particularly limited.
- the projection unit 16 is an example of a display device, and the information processing device 1 may perform display by other means.
- the information processing apparatus 1 may be connected to an external display device to display a predetermined screen.
- Each function of the control unit 10 according to the present embodiment may be on a cloud connected via the communication unit 11.
- FIG. 6 is a flowchart showing an operation process of the voice recognition system according to the present embodiment.
- the control unit 10 of the information processing apparatus 1 determines whether or not there is an utterance from the user. Specifically, the control unit 10 performs voice recognition by the voice recognition unit 10a on the voice signal picked up by the microphone 12, and determines whether or not it is a user's utterance to the system.
- In step S106, the voice recognition unit 10a acquires the utterance text through the voice recognition process.
- In step S109, the control unit 10 determines whether or not the speech recognition is finished, that is, whether the utterance text has been confirmed.
- If the speech recognition has not been completed, that is, if the utterance text has not been confirmed, the process proceeds to step S112.
- In step S112, the semantic analysis unit 10b acquires the utterance text recognized by the speech recognition unit 10a up to the present time.
- In step S115, the semantic analysis unit 10b performs semantic analysis processing based on the utterance text in the middle of the utterance.
- In step S118, the corresponding action acquisition unit 10c acquires corresponding action candidates for the user's utterance based on the semantic analysis result of the semantic analysis unit 10b, and the score calculation unit 10d calculates the score of each current corresponding action candidate.
- the display control unit 10e determines a display method of corresponding action candidates.
- Possible display methods for the corresponding action candidates include, for example, display as icons, display as text, display in a sub display area, or, when the user is watching a movie in the main display area, display in a special footer area provided below the display area. Specific methods for displaying the corresponding action candidates will be described later with reference to FIGS.
- the display control unit 10e may determine the display method according to the number and score of each corresponding action candidate.
- In step S124, the display control unit 10e controls to display the top N corresponding action candidates.
- For example, the display control unit 10e controls the projection unit 16 so that icons indicating the corresponding action candidates are projected onto the wall 20.
- On the other hand, when the utterance text has been confirmed, in step S127 the semantic analysis unit 10b performs semantic analysis processing based on the confirmed utterance text.
- In step S130, the corresponding action acquisition unit 10c determines the corresponding action to the user's utterance based on the semantic analysis result of the semantic analysis unit 10b. Note that when the user explicitly selects a corresponding action, the corresponding action acquisition unit 10c can confirm the selected action.
- In step S133, the execution unit 10f executes the corresponding action determined by the corresponding action acquisition unit 10c.
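The flow of steps S106 to S133 can be summarized schematically as follows (a sketch with toy stand-ins, not the actual implementation):

```python
def run_voice_ui(partials: list[str], analyze, score, display, execute) -> None:
    """Schematic of the flow in FIG. 6. `partials` stands in for the sequence
    of partial recognition results; the last element is the confirmed text."""
    for text in partials[:-1]:                # S106: utterance text (not final)
        candidates = score(analyze(text))     # S112-S118: analysis and scoring
        display(candidates)                   # S121-S124: show top-N candidates
    final = analyze(partials[-1])             # S109/S127: confirmed text analyzed
    execute(final)                            # S130-S133: run the decided action

# Toy stand-ins that make the flow executable:
run_voice_ui(
    ["this week", "this week's sche", "this week's schedule"],
    analyze=lambda t: {"calendar_app": 0.9} if "sche" in t else {"weather_app": 0.5},
    score=lambda c: c,
    display=lambda c: print("display:", c),
    execute=lambda c: print("execute:", max(c, key=c.get)),
)
```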
- FIG. 7 is a diagram illustrating a case where the utterance text is displayed together with the corresponding action candidate according to the present embodiment.
- In the examples shown in FIGS. 1 and 5, only the corresponding action candidates are displayed, but the present embodiment is not limited to this, and the recognized utterance text may be displayed together with them.
- Specifically, as shown in FIG. 7, the recognized mid-utterance text 300 "This week's weather..." is displayed together with the icon indicating a corresponding action candidate. Thereby, the user can grasp how his or her utterance has been recognized by the system.
- The displayed utterance text changes sequentially in conjunction with the utterance.
- In the example shown in FIG. 5 described above, the difference in the scores of the corresponding action candidates is fed back by correlating the display area of each candidate's icon with its score, but the present embodiment is not limited to this. For example, even if the display area of the icon images is the same, the difference in the scores of the corresponding action candidates can be fed back.
- Hereinafter, a specific description will be given with reference to FIG. 8.
- FIG. 8 is a diagram for explaining a display method for feeding back the difference in score of each corresponding action candidate by changing the display granularity.
- For example, while the score of a corresponding action candidate is below a predetermined threshold (for example, "0.5"), only a plain icon indicating the candidate is displayed.
- The score is updated in conjunction with the utterance and, as shown on the right of FIG. 8, when the score of the weather application candidate becomes "0.8" and exceeds the predetermined threshold, an icon 21b-1 is displayed that also includes information presented when the corresponding action is executed (for example, the date and the maximum and minimum temperatures).
- the display granularity can be changed according to the height of the score.
- the display area and information amount of the corresponding action candidate can be dynamically changed according to the score.
- FIG. 9 is a diagram illustrating a display method for changing the display area and the information amount according to the score of the corresponding action candidate. Like the icon 23 shown in FIG. 9, the display area and the information amount can be increased according to the score, and more information can be presented to the user.
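A sketch of such score-dependent display granularity; the tiers and displayed contents are illustrative assumptions:

```python
def render_candidate(action: str, score: float) -> str:
    """Raise the display granularity (icon area and information amount)
    as the candidate's score grows; the tiers here are illustrative."""
    if score < 0.5:
        return f"[{action}]"                            # small icon only
    if score < 0.8:
        return f"[{action} | Sat: sunny, 20C/9C]"       # icon plus brief info
    return f"[{action} | full weekly forecast ...]"     # large icon, full details

for s in (0.3, 0.6, 0.9):
    print(s, render_candidate("weather_app", s))
```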
- FIG. 10 is a diagram for explaining gray-out display of corresponding action candidates according to the present embodiment.
- For example, as shown in FIG. 10, icons 24a to 24e of corresponding action candidates are displayed with the same display area according to voice recognition and semantic analysis during the user's utterance, and the scores are updated as the utterance progresses further.
- Then, the icons 24b' and 24e' are grayed out when their scores fall.
- Thereby, the user can intuitively understand that the scores of the corresponding actions indicated by the icons 24b' and 24e' have fallen below a predetermined value.
- corresponding action candidates are displayed in a list, so that the user can immediately select the desired corresponding action even during the utterance. That is, the displayed corresponding action candidate can be used as a shortcut for the action. At this time, the user can also select a corresponding action candidate displayed in gray out.
- For selection, the user can designate a candidate by an utterance such as "The left icon!" or "The third one!".
- The designation can be performed not only by voice but also by gesture, touch operation, remote controller, or the like. Such designation by the user may be used not only for determining the action to be activated but also for canceling a function. For example, if the user says "This week's weather... oh, not that", the corresponding action candidate that had been displayed larger (with an increased score) in conjunction with "This week's weather..." can be canceled (hidden) and its score lowered.
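Cancellation by designation might be handled roughly as in the sketch below (the phrases and score handling are assumptions for illustration):

```python
CANCEL_PHRASES = ("not that", "cancel")

def handle_designation(utterance: str, scores: dict[str, float],
                       leading: str) -> dict[str, float]:
    """If the utterance cancels the currently leading candidate, hide it
    by forcing its score down; otherwise leave the scores untouched."""
    if any(phrase in utterance for phrase in CANCEL_PHRASES):
        scores = dict(scores)
        scores[leading] = 0.0   # canceled: hidden, and the score is lowered
    return scores

scores = {"weather_app": 0.9, "calendar_app": 0.1}
print(handle_designation("oh, not that", scores, leading="weather_app"))
```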
- The voice recognition system according to the present embodiment can also be used by a plurality of users. For example, it is assumed that the positions of users (speakers) are recognized using an array microphone or a camera, and that the display area is divided according to the user positions to display action candidates for each user. At this time, the real-time speech recognition, semantic analysis, and corresponding action acquisition processing shown in the flow of FIG. 6 run in parallel, one instance per user. Hereinafter, a specific description will be given with reference to FIG. 11.
- FIG. 11 is a diagram for explaining a display method of corresponding action candidates when there are multiple users, according to the present embodiment.
- As shown in FIG. 11, the corresponding action candidates for the utterance 33 "This week's weather" of user AA are displayed on the left side of the display area, according to user AA's position relative to the display area; for example, icons 25a to 25c are displayed.
- Likewise, the corresponding action candidates for the utterance 34 "Concert" of user BB are displayed on the right side of the display area, according to user BB's position relative to the display area; for example, an icon 26 is displayed.
- Alternatively, the information processing apparatus 1 may run the real-time speech recognition, semantic analysis, and corresponding action acquisition processing for the users in an integrated manner and feed back the result without dividing the display area for each user.
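One possible sketch of dividing the display by speaker position, assuming positions estimated from an array microphone or camera (the geometry is invented for illustration):

```python
DISPLAY_WIDTH = 1920  # hypothetical projection width in pixels

def region_for(user_x: float, n_users: int,
               display_w: int = DISPLAY_WIDTH) -> tuple[int, int]:
    """Divide the display into equal columns and pick the column that best
    matches the speaker's relative horizontal position (0=left, 1=right)."""
    col = min(int(user_x * n_users), n_users - 1)
    width = display_w // n_users
    return col * width, (col + 1) * width

users = {"AA": 0.2, "BB": 0.8}  # positions estimated from microphone/camera
for name, x in users.items():
    print(name, "-> display region x:", region_for(x, len(users)))
```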
- the speech recognition system can also notify corresponding action candidates in the middle of speaking other than the main display area.
- the main display area refers to an area for projection display by the projection unit 16.
- As display areas other than the main display area, the information processing apparatus 1 can display corresponding action candidates on, for example, a sub-display (not shown) formed by a liquid crystal display or the like provided on the side surface of the information processing apparatus 1, or on an external display device such as a nearby TV, smartphone, or tablet terminal, or a wearable terminal worn by the user.
- the display method is not limited to the display method as shown in FIG. 5, and only the corresponding action candidate icon or character having the highest score may be displayed.
- the voice recognition system according to the present embodiment can also use light such as LED as feedback.
- the information processing apparatus 1 may feed back to the user in real time by causing the light emitting unit 18 to emit light in a color assigned in advance for each corresponding action.
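For example, such light-based feedback might look like the following sketch, assuming a hypothetical color assignment per corresponding action:

```python
# Hypothetical color assignment per corresponding action, used to feed
# candidates back through the light emitting unit instead of a display.
ACTION_COLORS = {"weather_app": "blue", "calendar_app": "green", "video_app": "red"}

def led_feedback(scores: dict[str, float], max_brightness: int = 255) -> None:
    """Light one segment per candidate; brightness follows the score."""
    for action, score in scores.items():
        color = ACTION_COLORS.get(action, "white")
        print(f"LED segment: color={color}, brightness={int(score * max_brightness)}")

led_feedback({"weather_app": 0.5, "video_app": 0.3, "calendar_app": 0.2})
```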
- the speech recognition system may change the display method of the corresponding action candidate according to the current screen state of the display area.
- a specific description will be given with reference to FIG.
- FIG. 12 is a diagram for explaining a display method of corresponding action candidates according to the screen state according to the present embodiment.
- For example, even while a movie is being played in the main display area, the user can speak to the voice recognition system and use the voice UI.
- In this case, volume adjustment and the like can be instructed by voice alone.
- However, if corresponding action candidate icons are superimposed large on the screen according to the user's utterance, they interfere with watching the movie.
- Therefore, as shown on the left of FIG. 12, when the moving image 50 is displayed, the display control unit 10e of the information processing apparatus 1 provides a special footer area 45 below the display area and displays the corresponding action candidate icons (for example, icons 27a to 27c) there.
- Alternatively, as shown on the right of FIG. 12, the display control unit 10e may display a reduced moving image screen 51 so that the moving image does not overlap with the footer area 45.
- the information processing apparatus 1 can adjust the number of icons to be displayed and the display size so as not to interfere with the viewing of the moving image.
- In this way, the display control unit 10e of the information processing apparatus 1 can change the display mode of the corresponding action candidates (icons, text, display location, and so on) according to the screen state (for example, the display content and the size of the display area).
- the information processing apparatus 1 may use a display method other than the main display area as described above during moving image reproduction.
- the corresponding action candidate can be notified to the user without causing any overlay on the moving picture screen reproduced in the main display area.
- FIG. 13 is a diagram showing examples of icons indicating more specific actions related to applications. As shown in FIG. 13, these include, for example, an icon 28a indicating reading e-mail aloud, an icon 28b indicating uninstalling the weather application, an icon 28c indicating the monthly schedule display of the calendar application, and an icon 28d indicating adding a schedule in the calendar application.
- FIG. 14 is a diagram illustrating an example of icons indicating actions related to volume adjustment.
- an icon 28e indicating volume adjustment is displayed in the footer area provided below the display area.
- Further, according to the utterance, an icon 28e-1 indicating volume-up adjustment or an icon 28e-2 indicating volume-down adjustment is displayed.
- As described above, in the speech recognition system according to the embodiment of the present disclosure, response candidates (corresponding action candidates) can be notified to the user while the utterance is still in progress in the voice UI; that is, semantic analysis is performed sequentially in real time and the response candidates are fed back to the user.
- The display control unit 10e may display a predetermined number of top-scoring corresponding action candidates, may display all corresponding action candidates whose scores exceed a predetermined threshold, or may display a predetermined number or more of corresponding action candidates until a score exceeds the threshold.
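These display policies could be combined as in the following sketch (the cut-off values are arbitrary):

```python
def select_for_display(scores: dict[str, float],
                       top_n: int = 3, threshold: float = 0.5) -> list[str]:
    """Show every candidate above the threshold; while none clears it yet,
    fall back to the top-N candidates by score."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    above = [a for a in ranked if scores[a] > threshold]
    return above if above else ranked[:top_n]

print(select_for_display({"weather": 0.4, "video": 0.3, "calendar": 0.2}))  # top-3
print(select_for_display({"weather": 0.1, "calendar": 0.9}))  # above threshold
```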
- the display control unit 10e may display the candidate score together with the corresponding action candidate.
- Additionally, the present technology may also be configured as below.
- (1) An information processing apparatus including: a semantic analysis unit that performs semantic analysis on an utterance text recognized by a voice recognition unit during the utterance; a score calculation unit that calculates scores of response candidates based on the analysis result of the semantic analysis unit; and a notification control unit that controls notification of the response candidates during the utterance according to the scores calculated by the score calculation unit.
- (2) The information processing apparatus according to (1), wherein the score calculation unit updates the scores according to the sequential semantic analysis of the utterance by the semantic analysis unit, and the notification control unit controls to update the display of the response candidates in conjunction with the update of the scores.
- (3) The information processing apparatus according to (1), wherein the notification control unit controls to notify a plurality of the response candidates in a display mode according to the scores.
- (11) The information processing apparatus according to any one of (1) to (10), further including an execution control unit that controls to execute a confirmed response.
- (12) The information processing apparatus according to (11), wherein control is performed so as to execute a response confirmed based on the semantic analysis result of the utterance text confirmed after the utterance ends.
- (13) The information processing apparatus according to (11), wherein control is performed so as to execute a response designated and confirmed by the user.
- (15) A program that causes a computer to function as: a semantic analysis unit that performs semantic analysis on an utterance text recognized by a voice recognition unit during the utterance; a score calculation unit that calculates scores of response candidates based on the analysis result of the semantic analysis unit; and a notification control unit that controls notification of the response candidates during the utterance according to the scores calculated by the score calculation unit.
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Acoustics & Sound (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
1. Overview of a speech recognition system according to an embodiment of the present disclosure
2. Configuration
3. Operation processing
4. Display examples of corresponding action candidates
4-1. Display of utterance text
4-2. Display methods according to the score
4-3. Display method when there are multiple speakers
4-4. Display outside the main display area
4-5. Display methods that differ according to the screen state
4-6. Other icon display examples
5. Conclusion
A speech recognition system according to an embodiment of the present disclosure has a basic function of performing speech recognition and semantic analysis on a user's utterance and responding by voice. An overview of the speech recognition system according to an embodiment of the present disclosure will be described below with reference to FIG. 1.
FIG. 4 is a diagram showing an example of the configuration of the information processing apparatus 1 according to the present embodiment. As shown in FIG. 4, the information processing apparatus 1 includes a control unit 10, a communication unit 11, a microphone 12, a speaker 13, a camera 14, a distance measuring sensor 15, a projection unit 16, a storage unit 17, and a light emitting unit 18.
The control unit 10 controls each component of the information processing apparatus 1. The control unit 10 is realized by a microcomputer including a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), and a nonvolatile memory. As shown in FIG. 4, the control unit 10 according to the present embodiment also functions as a voice recognition unit 10a, a semantic analysis unit 10b, a corresponding action acquisition unit 10c, a score calculation unit 10d, a display control unit 10e, and an execution unit 10f.
The communication unit 11 transmits and receives data to and from external devices. For example, the communication unit 11 connects to a predetermined server on a network and receives various kinds of information needed when the execution unit 10f executes a corresponding action.
The microphone 12 has a function of picking up surrounding sound and outputting it to the control unit 10 as an audio signal. The microphone 12 may be realized by an array microphone.
The speaker 13 has a function of converting an audio signal into sound and outputting it under the control of the control unit 10.
The camera 14 has a function of imaging the surroundings with an imaging lens provided on the information processing apparatus 1 and outputting the captured image to the control unit 10. The camera 14 may be realized by a 360-degree camera, a wide-angle camera, or the like.
The distance measuring sensor 15 has a function of measuring the distance between the information processing apparatus 1 and the user or people around the user. The distance measuring sensor 15 is realized by, for example, an optical sensor (a sensor that measures the distance to an object based on phase difference information between light emission and light reception timing).
The projection unit 16 is an example of a display device and has a function of displaying an image by projecting it (enlarged) onto a wall or a screen.
The storage unit 17 stores programs for the components of the information processing apparatus 1 to function. The storage unit 17 also stores various parameters used when the score calculation unit 10d calculates the scores of corresponding action candidates, and application programs executed by the execution unit 10f. The storage unit 17 further stores user registration information. User registration information includes personal identification information (voice feature quantities, face images, feature quantities of person images (including body images), name, identification number, etc.), age, gender, hobbies and preferences, attributes (housewife, office worker, student, etc.), and information on communication terminals owned by the user.
The light emitting unit 18 is realized by light emitting elements such as LEDs, and full lighting, partial lighting, blinking, and control of the lighting position are possible. For example, under the control of the control unit 10, the light emitting unit 18 can appear to direct its line of sight toward the speaker by partially lighting up in the direction of the speaker recognized by the voice recognition unit 10a.
Next, the operation processing of the speech recognition system according to the present embodiment will be specifically described with reference to FIG. 6.
<4-1. Display of utterance text>
FIG. 7 is a diagram showing a case in which the utterance text is displayed together with the corresponding action candidates according to the present embodiment. In the examples shown in FIGS. 1 and 5, only the corresponding action candidates are displayed, but the present embodiment is not limited to this, and the recognized utterance text may be displayed as well. Specifically, as shown in FIG. 7, the recognized mid-utterance text 300 "This week's weather..." is displayed together with the icon 21b indicating a corresponding action candidate. Thereby, the user can grasp how his or her utterance has been recognized. The displayed utterance text changes sequentially in conjunction with the utterance.
In the example shown in FIG. 5 described above, the difference in the scores of the corresponding action candidates is fed back by correlating the display area of each candidate's icon with its score, but the present embodiment is not limited to this. For example, even if the display area of the icon images is the same, the difference in the scores of the corresponding action candidates can be fed back. A specific description will be given below with reference to FIG. 8.
The voice recognition system according to the present embodiment can also be used by a plurality of users. For example, it is assumed that the positions of users (speakers) are recognized using an array microphone or a camera, and that the display area is divided according to the user positions to display action candidates for each user. At this time, the real-time speech recognition, semantic analysis, and corresponding action acquisition processing shown in the flow of FIG. 6 run in parallel, one instance per user. A specific description will be given below with reference to FIG. 11.
The voice recognition system according to the present embodiment can also notify corresponding action candidates during the utterance outside the main display area. Here, the main display area refers to the area of projection display by the projection unit 16. As display areas other than the main display area, the information processing apparatus 1 can display corresponding action candidates on, for example, a sub-display (not shown) formed by a liquid crystal display or the like provided on the side surface of the information processing apparatus 1, or on an external display device such as a nearby TV, smartphone, or tablet terminal, or a wearable terminal worn by the user.
The voice recognition system according to the present embodiment may also change the display method of the corresponding action candidates according to the current screen state of the display area. A specific description will be given below with reference to FIG. 12.
In each display screen example described above, icons indicating activation actions of various applications were shown as the icons of corresponding action candidates, but the present embodiment is not limited to this. Display examples of other corresponding action candidates will be described below with reference to FIGS. 13 and 14.
As described above, in the speech recognition system according to the embodiment of the present disclosure, response candidates (corresponding action candidates) can be notified to the user while the utterance is still in progress in the voice UI; that is, semantic analysis is performed sequentially in real time and the response candidates can be fed back to the user.
(1)
An information processing apparatus including:
a semantic analysis unit that performs semantic analysis on an utterance text recognized by a voice recognition unit during the utterance;
a score calculation unit that calculates scores of response candidates based on the analysis result of the semantic analysis unit; and
a notification control unit that controls notification of the response candidates during the utterance according to the scores calculated by the score calculation unit.
(2)
The information processing apparatus according to (1), wherein
the score calculation unit updates the scores according to the sequential semantic analysis of the utterance by the semantic analysis unit, and
the notification control unit controls to update the display of the response candidates in conjunction with the update of the scores.
(3)
The information processing apparatus according to (1), wherein the notification control unit controls to notify a plurality of the response candidates in a display mode according to the scores.
(4)
The information processing apparatus according to (3), wherein the notification control unit controls to display a predetermined number of top response candidates based on the scores.
(5)
The information processing apparatus according to (3) or (4), wherein the notification control unit controls to display the response candidates whose scores exceed a predetermined value.
(6)
The information processing apparatus according to any one of (3) to (4), wherein the notification control unit controls to display the response candidates with a display area according to the height of the score.
(7)
The information processing apparatus according to any one of (3) to (5), wherein the notification control unit controls to display icons of the response candidates including information at a display granularity according to the score.
(8)
The information processing apparatus according to any one of (3) to (6), wherein the notification control unit controls to gray out the response candidates whose scores fall below a predetermined value.
(9)
The information processing apparatus according to any one of (3) to (8), wherein the notification control unit controls to display the recognized utterance text together with the response candidates.
(10)
The information processing apparatus according to any one of (1) to (8), wherein the score calculation unit further calculates the scores in consideration of the current user environment.
(11)
The information processing apparatus according to any one of (1) to (10), further including an execution control unit that controls to execute a confirmed response.
(12)
The information processing apparatus according to (11), wherein control is performed so as to execute a response confirmed based on the semantic analysis result of the utterance text confirmed after the utterance ends.
(13)
The information processing apparatus according to (11), wherein control is performed so as to execute a response designated and confirmed by the user.
(14)
A control method including:
performing semantic analysis on an utterance text recognized by a voice recognition unit during the utterance;
calculating, by a score calculation unit, scores of response candidates based on the result of the semantic analysis; and
controlling notification of the response candidates during the utterance according to the calculated scores.
(15)
A program for causing a computer to function as:
a semantic analysis unit that performs semantic analysis on an utterance text recognized by a voice recognition unit during the utterance;
a score calculation unit that calculates scores of response candidates based on the analysis result of the semantic analysis unit; and
a notification control unit that controls notification of the response candidates during the utterance according to the scores calculated by the score calculation unit.
10 Control unit
10a Voice recognition unit
10b Semantic analysis unit
10c Corresponding action acquisition unit
10d Score calculation unit
10e Display control unit
10f Execution unit
11 Communication unit
12 Microphone
13 Speaker
14 Camera
15 Distance measuring sensor
16 Projection unit
17 Storage unit
18 Light emitting unit
20 Wall
Claims (15)
- 1. An information processing apparatus comprising: a semantic analysis unit that performs semantic analysis on an utterance text recognized by a voice recognition unit during the utterance; a score calculation unit that calculates scores of response candidates based on the analysis result of the semantic analysis unit; and a notification control unit that controls notification of the response candidates during the utterance according to the scores calculated by the score calculation unit.
- 2. The information processing apparatus according to claim 1, wherein the score calculation unit updates the scores according to the sequential semantic analysis of the utterance by the semantic analysis unit, and the notification control unit controls to update the display of the response candidates in conjunction with the update of the scores.
- 3. The information processing apparatus according to claim 1, wherein the notification control unit controls to notify a plurality of the response candidates in a display mode according to the scores.
- 4. The information processing apparatus according to claim 3, wherein the notification control unit controls to display a predetermined number of top response candidates based on the scores.
- 5. The information processing apparatus according to claim 3, wherein the notification control unit controls to display the response candidates whose scores exceed a predetermined value.
- 6. The information processing apparatus according to claim 3, wherein the notification control unit controls to display the response candidates with a display area according to the height of the score.
- 7. The information processing apparatus according to claim 3, wherein the notification control unit controls to display icons of the response candidates including information at a display granularity according to the score.
- 8. The information processing apparatus according to claim 3, wherein the notification control unit controls to gray out the response candidates whose scores fall below a predetermined value.
- 9. The information processing apparatus according to claim 3, wherein the notification control unit controls to display the recognized utterance text together with the response candidates.
- 10. The information processing apparatus according to claim 1, wherein the score calculation unit further calculates the scores in consideration of the current user environment.
- 11. The information processing apparatus according to claim 1, further comprising an execution control unit that controls to execute a confirmed response.
- 12. The information processing apparatus according to claim 11, wherein control is performed so as to execute a response confirmed based on the semantic analysis result of the utterance text confirmed after the utterance ends.
- 13. The information processing apparatus according to claim 11, wherein control is performed so as to execute a response designated and confirmed by the user.
- 14. A control method comprising: performing semantic analysis on an utterance text recognized by a voice recognition unit during the utterance; calculating, by a score calculation unit, scores of response candidates based on the result of the semantic analysis; and controlling notification of the response candidates during the utterance according to the calculated scores.
- 15. A program for causing a computer to function as: a semantic analysis unit that performs semantic analysis on an utterance text recognized by a voice recognition unit during the utterance; a score calculation unit that calculates scores of response candidates based on the analysis result of the semantic analysis unit; and a notification control unit that controls notification of the response candidates during the utterance according to the scores calculated by the score calculation unit.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP15887792.8A EP3282447B1 (en) | 2015-03-31 | 2015-12-22 | PROGRESSIVE UTTERANCE ANALYSIS FOR SUCCESSIVELY DISPLAYING EARLY SUGGESTIONS BASED ON PARTIAL SEMANTIC PARSES FOR VOICE CONTROL |
CN201580026858.8A CN106463114B (zh) | 2015-03-31 | 2015-12-22 | Information processing device, control method, and program storage unit |
JP2016554514A JP6669073B2 (ja) | 2015-03-31 | 2015-12-22 | Information processing apparatus, control method, and program |
US15/304,641 US20170047063A1 (en) | 2015-03-31 | 2015-12-22 | Information processing apparatus, control method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-073894 | 2015-03-31 | ||
JP2015073894 | 2015-03-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016157650A1 true WO2016157650A1 (ja) | 2016-10-06 |
Family
ID=57004067
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/085845 WO2016157650A1 (ja) | Information processing apparatus, control method, and program | 2015-03-31 | 2015-12-22 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20170047063A1 (ja) |
EP (1) | EP3282447B1 (ja) |
JP (1) | JP6669073B2 (ja) |
CN (1) | CN106463114B (ja) |
WO (1) | WO2016157650A1 (ja) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2018504623A (ja) * | 2015-09-03 | 2018-02-15 | Google LLC | Enhanced speech endpointing |
- CN108399526A (zh) * | 2018-01-31 | 2018-08-14 | 上海思愚智能科技有限公司 | Schedule reminder method and device |
- JP2018198058A (ja) * | 2017-05-24 | 2018-12-13 | NAVER Corporation | Information providing method, electronic device, computer program, and recording medium |
- JP2019079345A (ja) * | 2017-10-25 | 2019-05-23 | Alpine Electronics, Inc. | Information presentation device, information presentation system, and terminal device |
- KR20190068830A (ko) * | 2017-12-11 | 2019-06-19 | Hyundai Motor Company | Apparatus and method for determining recommendation reliability based on vehicle environment |
- US10339917B2 (en) | 2015-09-03 | 2019-07-02 | Google Llc | Enhanced speech endpointing |
- WO2019142447A1 (ja) * | 2018-01-17 | 2019-07-25 | Sony Corporation | Information processing device and information processing method |
- JP2020060809A (ja) * | 2018-10-04 | 2020-04-16 | Toyota Motor Corporation | Agent device |
- JP2020079921A (ja) * | 2018-11-13 | 2020-05-28 | Baidu Online Network Technology (Beijing) Co., Ltd. | Voice interaction realization method, apparatus, computer device, and program |
- JP2020112932A (ja) * | 2019-01-09 | 2020-07-27 | Canon Inc. | Information processing system, information processing apparatus, control method, and program |
- JP2020190587A (ja) * | 2019-05-20 | 2020-11-26 | Casio Computer Co., Ltd. | Robot control device, robot, robot control method, and program |
- WO2020240958A1 (ja) * | 2019-05-30 | 2020-12-03 | Sony Corporation | Information processing device, information processing method, and program |
Families Citing this family (77)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US8676904B2 (en) | 2008-10-02 | 2014-03-18 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10255566B2 (en) | 2011-06-03 | 2019-04-09 | Apple Inc. | Generating and processing task items that represent tasks to perform |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US10417037B2 (en) | 2012-05-15 | 2019-09-17 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
DE112014000709B4 (de) | 2013-02-07 | 2021-12-30 | Apple Inc. | Verfahren und vorrichtung zum betrieb eines sprachtriggers für einen digitalen assistenten |
US10652394B2 (en) | 2013-03-14 | 2020-05-12 | Apple Inc. | System and method for processing voicemail |
US10748529B1 (en) | 2013-03-15 | 2020-08-18 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
EP3937002A1 (en) | 2013-06-09 | 2022-01-12 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
DE112014003653B4 (de) | 2013-08-06 | 2024-04-18 | Apple Inc. | Automatisch aktivierende intelligente Antworten auf der Grundlage von Aktivitäten von entfernt angeordneten Vorrichtungen |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
- TWI566107B (zh) | 2014-05-30 | 2017-01-11 | Apple Inc. | Method for processing multi-part voice commands, non-transitory computer-readable storage medium, and electronic device |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US10460227B2 (en) | 2015-05-15 | 2019-10-29 | Apple Inc. | Virtual assistant in a communication session |
US10200824B2 (en) | 2015-05-27 | 2019-02-05 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device |
US20160378747A1 (en) | 2015-06-29 | 2016-12-29 | Apple Inc. | Virtual assistant for media playback |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10740384B2 (en) | 2015-09-08 | 2020-08-11 | Apple Inc. | Intelligent automated assistant for media search and playback |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10331312B2 (en) | 2015-09-08 | 2019-06-25 | Apple Inc. | Intelligent automated assistant in a media environment |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10956666B2 (en) | 2015-11-09 | 2021-03-23 | Apple Inc. | Unconventional virtual assistant interactions |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
WO2018013366A1 (en) | 2016-07-12 | 2018-01-18 | Proteq Technologies Llc | Intelligent software agent |
US10861436B1 (en) * | 2016-08-24 | 2020-12-08 | Gridspace Inc. | Audio call classification and survey system |
US11601552B2 (en) | 2016-08-24 | 2023-03-07 | Gridspace Inc. | Hierarchical interface for adaptive closed loop communication system |
US11715459B2 (en) | 2016-08-24 | 2023-08-01 | Gridspace Inc. | Alert generator for adaptive closed loop communication system |
US11721356B2 (en) | 2016-08-24 | 2023-08-08 | Gridspace Inc. | Adaptive closed loop communication system |
- JP6759445B2 (ja) * | 2017-02-24 | 2020-09-23 | Sony Mobile Communications Inc. | Information processing apparatus, information processing method, and computer program |
US10938767B2 (en) * | 2017-03-14 | 2021-03-02 | Google Llc | Outputting reengagement alerts by a computing device |
DK180048B1 (en) | 2017-05-11 | 2020-02-04 | Apple Inc. | MAINTAINING THE DATA PROTECTION OF PERSONAL INFORMATION |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | USER-SPECIFIC Acoustic Models |
DK201770428A1 (en) | 2017-05-12 | 2019-02-18 | Apple Inc. | LOW-LATENCY INTELLIGENT AUTOMATED ASSISTANT |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
DK201770411A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | MULTI-MODAL INTERFACES |
AU2018269238B2 (en) * | 2017-05-15 | 2021-03-25 | Apple Inc. | Hierarchical belief states for digital assistants |
US20180336892A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Detecting a trigger of a digital assistant |
US20180336275A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Intelligent automated assistant for media exploration |
- CN107291704B (zh) * | 2017-05-26 | 2020-12-11 | Beijing Sogou Technology Development Co., Ltd. | Processing method and apparatus, and apparatus for processing |
- CN107919130B (zh) * | 2017-11-06 | 2021-12-17 | Baidu Online Network Technology (Beijing) Co., Ltd. | Cloud-based voice processing method and apparatus |
- CN107919120B (zh) * | 2017-11-16 | 2020-03-13 | Baidu Online Network Technology (Beijing) Co., Ltd. | Voice interaction method and apparatus, terminal, server, and readable storage medium |
- JP6828667B2 (ja) * | 2017-11-28 | 2021-02-10 | Toyota Motor Corporation | Voice dialogue device, voice dialogue method, and program |
- CN108683937B (zh) * | 2018-03-09 | 2020-01-21 | Baidu Online Network Technology (Beijing) Co., Ltd. | Voice interaction feedback method and system for smart television, and computer-readable medium |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
DK201870355A1 (en) | 2018-06-01 | 2019-12-16 | Apple Inc. | VIRTUAL ASSISTANT OPERATION IN MULTI-DEVICE ENVIRONMENTS |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
DK179822B1 (da) | 2018-06-01 | 2019-07-12 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
DK180639B1 (en) | 2018-06-01 | 2021-11-04 | Apple Inc | DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT |
- CN109215679A (zh) * | 2018-08-06 | 2019-01-15 | Baidu Online Network Technology (Beijing) Co., Ltd. | Dialogue method and device based on user emotion |
- CN109117233A (zh) * | 2018-08-22 | 2019-01-01 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for processing information |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
DK201970509A1 (en) | 2019-05-06 | 2021-01-15 | Apple Inc | Spoken notifications |
US11140099B2 (en) * | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
DK201970510A1 (en) | 2019-05-31 | 2021-02-11 | Apple Inc | Voice identification in digital assistant systems |
DK180129B1 (en) | 2019-05-31 | 2020-06-02 | Apple Inc. | USER ACTIVITY SHORTCUT SUGGESTIONS |
US11468890B2 (en) | 2019-06-01 | 2022-10-11 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11810578B2 (en) | 2020-05-11 | 2023-11-07 | Apple Inc. | Device arbitration for digital assistant-based intercom systems |
US11061543B1 (en) | 2020-05-11 | 2021-07-13 | Apple Inc. | Providing relevant data items based on context |
US11038934B1 (en) | 2020-05-11 | 2021-06-15 | Apple Inc. | Digital assistant hardware abstraction |
US11610065B2 (en) | 2020-06-12 | 2023-03-21 | Apple Inc. | Providing personalized responses based on semantic context |
US11490204B2 (en) | 2020-07-20 | 2022-11-01 | Apple Inc. | Multi-device audio adjustment coordination |
US11438683B2 (en) | 2020-07-21 | 2022-09-06 | Apple Inc. | User identification using headphones |
US12112742B2 (en) * | 2021-03-03 | 2024-10-08 | Samsung Electronics Co., Ltd. | Electronic device for correcting speech input of user and operating method thereof |
CN113256751B (zh) * | 2021-06-01 | 2023-09-29 | Ping An Technology (Shenzhen) Co., Ltd. | Speech-based image generation method, apparatus, device, and storage medium |
Family Cites Families (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5734893A (en) * | 1995-09-28 | 1998-03-31 | Ibm Corporation | Progressive content-based retrieval of image and video with adaptive and iterative refinement |
US8301436B2 (en) * | 2003-05-29 | 2012-10-30 | Microsoft Corporation | Semantic object synchronous understanding for highly interactive interface |
JP3962763B2 (ja) * | 2004-04-12 | 2007-08-22 | Matsushita Electric Industrial Co., Ltd. | Dialogue support apparatus |
JP4471715B2 (ja) * | 2004-04-14 | 2010-06-02 | Fujitsu Limited | Information processing method and computer system |
JP4679254B2 (ja) * | 2004-10-28 | 2011-04-27 | Fujitsu Limited | Dialogue system, dialogue method, and computer program |
US20080167914A1 (en) * | 2005-02-23 | 2008-07-10 | Nec Corporation | Customer Help Supporting System, Customer Help Supporting Device, Customer Help Supporting Method, and Customer Help Supporting Program |
GB0513786D0 (en) * | 2005-07-05 | 2005-08-10 | Vida Software S L | User interfaces for electronic devices |
CN101008864A (zh) * | 2006-01-28 | 2007-08-01 | 北京优耐数码科技有限公司 | Multi-function, multilingual input system and method for a numeric keypad |
US9032430B2 (en) * | 2006-08-24 | 2015-05-12 | Rovi Guides, Inc. | Systems and methods for providing blackout support in video mosaic environments |
JP5294086B2 (ja) * | 2007-02-28 | 2013-09-18 | NEC Corporation | Weight coefficient learning system and speech recognition system |
US7596766B1 (en) * | 2007-03-06 | 2009-09-29 | Adobe Systems Inc. | Preview window including a storage context view of one or more computer resources |
US9483755B2 (en) * | 2008-03-04 | 2016-11-01 | Apple Inc. | Portable multifunction device, method, and graphical user interface for an email client |
US20100088097A1 (en) * | 2008-10-03 | 2010-04-08 | Nokia Corporation | User friendly speaker adaptation for speech recognition |
US8635237B2 (en) * | 2009-07-02 | 2014-01-21 | Nuance Communications, Inc. | Customer feedback measurement in public places utilizing speech recognition technology |
CN101697121A (zh) * | 2009-10-26 | 2010-04-21 | Harbin Institute of Technology | Code similarity detection method based on semantic analysis of program source code |
JP2012047924A (ja) * | 2010-08-26 | 2012-03-08 | Sony Corp | Information processing apparatus, information processing method, and program |
KR101208166B1 (ko) * | 2010-12-16 | 2012-12-04 | NHN Corporation | Speech recognition client system, speech recognition server system, and speech recognition method for processing online speech recognition |
KR20120088394A (ko) * | 2011-01-31 | 2012-08-08 | Samsung Electronics Co., Ltd. | Electronic book terminal, server, and method of providing services thereof |
US20130044111A1 (en) * | 2011-05-15 | 2013-02-21 | James VanGilder | User Configurable Central Monitoring Station |
JP5790238B2 (ja) * | 2011-07-22 | 2015-10-07 | Sony Corporation | Information processing apparatus, information processing method, and program |
US9047007B2 (en) * | 2011-07-28 | 2015-06-02 | National Instruments Corporation | Semantic zoom within a diagram of a system |
US8914288B2 (en) * | 2011-09-01 | 2014-12-16 | At&T Intellectual Property I, L.P. | System and method for advanced turn-taking for interactive spoken dialog systems |
US8909512B2 (en) * | 2011-11-01 | 2014-12-09 | Google Inc. | Enhanced stability prediction for incrementally generated speech recognition hypotheses based on an age of a hypothesis |
JP2013101450A (ja) * | 2011-11-08 | 2013-05-23 | Sony Corp | Information processing apparatus and method, and program |
JP2013135310A (ja) * | 2011-12-26 | 2013-07-08 | Sony Corp | Information processing apparatus, information processing method, program, recording medium, and information processing system |
US9361883B2 (en) * | 2012-05-01 | 2016-06-07 | Microsoft Technology Licensing, Llc | Dictation with incremental recognition of speech |
US9256349B2 (en) * | 2012-05-09 | 2016-02-09 | Microsoft Technology Licensing, Llc | User-resizable icons |
JP5846442B2 (ja) * | 2012-05-28 | 2016-01-20 | Sony Corporation | Information processing apparatus, information processing method, and program |
US20130325779A1 (en) * | 2012-05-30 | 2013-12-05 | Yahoo! Inc. | Relative expertise scores and recommendations |
US10346542B2 (en) * | 2012-08-31 | 2019-07-09 | Verint Americas Inc. | Human-to-human conversation analysis |
US20140122619A1 (en) * | 2012-10-26 | 2014-05-01 | Xiaojiang Duan | Chatbot system and method with interactive chat log |
JP2014109889A (ja) * | 2012-11-30 | 2014-06-12 | Toshiba Corp | Content search apparatus, content search method, and control program |
US9015048B2 (en) * | 2012-11-30 | 2015-04-21 | At&T Intellectual Property I, L.P. | Incremental speech recognition for dialog systems |
CN103064826B (zh) * | 2012-12-31 | 2016-01-06 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method, apparatus, and system for emoticon input |
CN103945044A (zh) * | 2013-01-22 | 2014-07-23 | ZTE Corporation | Information processing method and mobile terminal |
US10395651B2 (en) * | 2013-02-28 | 2019-08-27 | Sony Corporation | Device and method for activating with voice input |
US9269354B2 (en) * | 2013-03-11 | 2016-02-23 | Nuance Communications, Inc. | Semantic re-ranking of NLU results in conversational dialogue applications |
AU2014233517B2 (en) * | 2013-03-15 | 2017-05-25 | Apple Inc. | Training an at least partial voice command system |
PT2994908T (pt) * | 2013-05-07 | 2019-10-18 | Veveo Inc | Incremental speech input interface with real-time feedback |
CN104166462B (zh) * | 2013-05-17 | 2017-07-21 | Beijing Sogou Technology Development Co., Ltd. | Text input method and system |
US9640182B2 (en) * | 2013-07-01 | 2017-05-02 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and vehicles that provide speech recognition system notifications |
US9298811B2 (en) * | 2013-07-15 | 2016-03-29 | International Business Machines Corporation | Automated confirmation and disambiguation modules in voice applications |
US10102851B1 (en) * | 2013-08-28 | 2018-10-16 | Amazon Technologies, Inc. | Incremental utterance processing and semantic stability determination |
CN103794214A (zh) * | 2014-03-07 | 2014-05-14 | Lenovo (Beijing) Co., Ltd. | Information processing method, apparatus, and electronic device |
US9552817B2 (en) * | 2014-03-19 | 2017-01-24 | Microsoft Technology Licensing, Llc | Incremental utterance decoder combination for efficient and accurate decoding |
JP6346281B2 (ja) * | 2014-07-04 | 2018-06-20 | Clarion Co., Ltd. | In-vehicle interactive system and in-vehicle information device |
US9530412B2 (en) * | 2014-08-29 | 2016-12-27 | At&T Intellectual Property I, L.P. | System and method for multi-agent architecture for interactive machines |
US9378740B1 (en) * | 2014-09-30 | 2016-06-28 | Amazon Technologies, Inc. | Command suggestions during automatic speech recognition |
US20160162601A1 (en) * | 2014-12-03 | 2016-06-09 | At&T Intellectual Property I, L.P. | Interface for context based communication management |
EP3239975A4 (en) * | 2014-12-26 | 2018-08-08 | Sony Corporation | Information processing device, information processing method, and program |
2015
- 2015-12-22 US US15/304,641 patent/US20170047063A1/en not_active Abandoned
- 2015-12-22 WO PCT/JP2015/085845 patent/WO2016157650A1/ja active Application Filing
- 2015-12-22 JP JP2016554514A patent/JP6669073B2/ja active Active
- 2015-12-22 EP EP15887792.8A patent/EP3282447B1/en active Active
- 2015-12-22 CN CN201580026858.8A patent/CN106463114B/zh not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003208196A (ja) * | 2002-01-11 | 2003-07-25 | Matsushita Electric Ind Co Ltd | Speech dialogue method and apparatus |
JP2005283972A (ja) * | 2004-03-30 | 2005-10-13 | Advanced Media Inc | Speech recognition method, and information presentation method and information presentation apparatus using the speech recognition method |
JP2014203207A (ja) * | 2013-04-03 | 2014-10-27 | Sony Corporation | Information processing apparatus, information processing method, and computer program |
Non-Patent Citations (1)
Title |
---|
See also references of EP3282447A4 * |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018504623A (ja) * | 2015-09-03 | 2018-02-15 | Google LLC | Enhanced speech endpointing |
US10339917B2 (en) | 2015-09-03 | 2019-07-02 | Google Llc | Enhanced speech endpointing |
US11996085B2 (en) | 2015-09-03 | 2024-05-28 | Google Llc | Enhanced speech endpointing |
JP2018198058A (ja) * | 2017-05-24 | 2018-12-13 | NAVER Corporation | Information provision method, electronic device, computer program, and recording medium |
JP2019079345A (ja) * | 2017-10-25 | 2019-05-23 | Alpine Electronics, Inc. | Information presentation apparatus, information presentation system, and terminal device |
KR20190068830A (ko) * | 2017-12-11 | 2019-06-19 | Hyundai Motor Company | Apparatus and method for determining recommendation reliability based on the vehicle's environment |
KR102485342B1 (ko) * | 2017-12-11 | 2023-01-05 | Hyundai Motor Company | Apparatus and method for determining recommendation reliability based on the vehicle's environment |
WO2019142447A1 (ja) * | 2018-01-17 | 2019-07-25 | Sony Corporation | Information processing apparatus and information processing method |
CN108399526A (zh) * | 2018-01-31 | 2018-08-14 | 上海思愚智能科技有限公司 | Schedule reminder method and apparatus |
JP7028130B2 (ja) | 2018-10-04 | 2022-03-02 | Toyota Motor Corporation | Agent apparatus |
JP2020060809A (ja) * | 2018-10-04 | 2020-04-16 | Toyota Motor Corporation | Agent apparatus |
JP2020079921A (ja) * | 2018-11-13 | 2020-05-28 | Baidu Online Network Technology (Beijing) Co., Ltd. | Voice interaction implementation method, apparatus, computer device, and program |
JP2020112932A (ja) * | 2019-01-09 | 2020-07-27 | Canon Inc. | Information processing system, information processing apparatus, control method, and program |
JP7327939B2 (ja) | 2019-01-09 | 2023-08-16 | Canon Inc. | Information processing system, information processing apparatus, control method, and program |
JP2020190587A (ja) * | 2019-05-20 | 2020-11-26 | Casio Computer Co., Ltd. | Robot control apparatus, robot, robot control method, and program |
JP7342419B2 (ja) | 2019-05-20 | 2023-09-12 | Casio Computer Co., Ltd. | Robot control apparatus, robot, robot control method, and program |
WO2020240958A1 (ja) * | 2019-05-30 | 2020-12-03 | Sony Corporation | Information processing apparatus, information processing method, and program |
US12033630B2 (en) | 2019-05-30 | 2024-07-09 | Sony Group Corporation | Information processing device, information processing method, and program |
Also Published As
Publication number | Publication date |
---|---|
EP3282447B1 (en) | 2020-08-26 |
CN106463114A (zh) | 2017-02-22 |
EP3282447A1 (en) | 2018-02-14 |
JPWO2016157650A1 (ja) | 2018-01-25 |
CN106463114B (zh) | 2020-10-27 |
EP3282447A4 (en) | 2018-12-05 |
US20170047063A1 (en) | 2017-02-16 |
JP6669073B2 (ja) | 2020-03-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016157650A1 (ja) | Information processing apparatus, control method, and program | |
US11861061B2 (en) | Virtual sharing of physical notebook | |
JP6669162B2 (ja) | Information processing apparatus, control method, and program | |
JP7266619B2 (ja) | Controlling a device based on processing of image data that captures the device and/or its installation environment | |
JP2023120205A (ja) | Supplementing voice input to an automated assistant with selected suggestions | |
US20150177843A1 (en) | Device and method for displaying user interface of virtual input device based on motion recognition | |
KR20130125367A (ko) | Viewer-based provision and customization of content | |
WO2019107145A1 (ja) | Information processing apparatus and information processing method | |
WO2017141530A1 (ja) | Information processing apparatus, information processing method, and program | |
US11966518B2 (en) | Assistant device arbitration using wearable device data | |
WO2018105373A1 (ja) | Information processing apparatus, information processing method, and information processing system | |
JP6973380B2 (ja) | Information processing apparatus and information processing method | |
JPWO2018105373A1 (ja) | Information processing apparatus, information processing method, and information processing system | |
JP2017182275A (ja) | Information processing apparatus, information processing method, and program | |
EP2793105A1 (en) | Controlling a user interface of an interactive device | |
WO2020158218A1 (ja) | Information processing apparatus, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2016554514 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15304641 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15887792 Country of ref document: EP Kind code of ref document: A1 |
|
REEP | Request for entry into the european phase |
Ref document number: 2015887792 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015887792 Country of ref document: EP |