US20180074785A1 - Information processing device, control method, and program - Google Patents

Information processing device, control method, and program

Info

Publication number
US20180074785A1
Authority
US
United States
Prior art keywords: response, user, output, information processing, processing device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/559,940
Other languages
English (en)
Inventor
Junki OHMURA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Assigned to SONY CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: OHMURA, Junki
Publication of US20180074785A1
Legal status: Abandoned

Classifications

    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06F 3/04817: Interaction techniques based on graphical user interfaces [GUI] using icons
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G10L 13/00: Speech synthesis; Text to speech systems
    • G10L 15/10: Speech classification or search using distance or distortion measures between unknown speech and reference templates
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223: Execution procedure of a spoken command

Definitions

  • the present disclosure relates to information processing devices, control methods, and programs.
  • Patent Literature 1 listed below discloses a voice conversation control method in which, when a user interrupts while the system is responding (in other words, while the system is outputting speech) in a voice conversation with a single user, the system considers the importance level of the response contents to decide whether to continue or stop the response.
  • Patent Literature 2 listed below discloses a voice conversation device that enables users to easily recognize whose voice is being output when a plurality of users are talking with each other.
  • Patent Literature 1 JP 2004-325848A
  • Patent Literature 2 JP 2009-261010A
  • Conventionally, the voice UI is assumed to be used in one-to-one conversation between a system and a single user; it is not assumed to be used in conversation between the system and a plurality of users. Therefore, for example, when the voice UI system is used in a house or a public space, a certain user is likely to occupy the system.
  • Patent Literature 1 describes a response system to be used in voice conversation with a single user, and it is difficult for such a system to respond to a plurality of users at the same time.
  • Although Patent Literature 2 relates to a system used by a plurality of users, it does not assume that a plurality of users use a voice UI that automatically responds to a user's speech by voice.
  • Therefore, the present disclosure proposes an information processing device, control method, and program that can improve the convenience of a speech recognition system by outputting appropriate responses to the respective users when a plurality of users are talking.
  • According to the present disclosure, there is provided an information processing device including: a response generation unit configured to generate responses to speeches from a plurality of users; a decision unit configured to decide methods of outputting the responses to the respective users on the basis of priorities according to order of the speeches from the plurality of users; and an output control unit configured to perform control such that the generated responses are output by using the decided methods of outputting the responses.
  • In addition, according to the present disclosure, there is provided a control method including: generating responses to speeches from a plurality of users; deciding methods of outputting the responses to the respective users on the basis of priorities according to order of the speeches from the plurality of users; and performing control, by an output control unit, such that the generated responses are output by using the decided methods of outputting the responses.
  • FIG. 1 is a diagram illustrating an overview of a speech recognition system according to an embodiment of the present disclosure.
  • FIG. 2 is a diagram illustrating an example of a configuration of an information processing device according to the embodiment.
  • FIG. 3 is a flowchart illustrating an operation process of a speech recognition system according to the embodiment.
  • FIG. 4 is a diagram illustrating examples of outputting responses by voice and display to speeches from a plurality of users at the same time according to the embodiment.
  • FIG. 5A is a diagram illustrating notification indicating stand-by users by using a sub-display according to the embodiment.
  • FIG. 5B is a diagram illustrating notification indicating stand-by users by using a sub-display according to the embodiment.
  • FIG. 6 is a diagram illustrating an example of saving a display region by displaying an icon indicating a response to a non-target user.
  • FIG. 7 is a diagram illustrating simultaneous responses by using directional voices according to the embodiment.
  • FIG. 8 is a diagram illustrating an example of error display according to the embodiment.
  • Operation process
  • 4. Response output example
  • 4-1. Responses by voice and display
  • 4-2. Simultaneous response using directivity
  • 4-3. Response through cooperation with external device
  • 4-4. Response according to state of speaker
  • 4-5. Response according to contents of speech
  • 4-6. Error response
  • a speech recognition system has a basic function of performing speech recognition and semantic analysis on a speech from a user and responding by voice.
  • With reference to FIG. 1, an overview of the speech recognition system according to the embodiment of the present disclosure will be described.
  • FIG. 1 is a diagram illustrating the overview of the speech recognition system according to the embodiment of the present disclosure.
  • An information processing device 1 illustrated in FIG. 1 has a voice UI agent function capable of performing speech recognition and semantic analysis on a speech from a user and outputting a response to the user by voice.
  • the appearance of the information processing device 1 is not specifically limited.
  • the appearance of the information processing device 1 may be a circular cylindrical shape, and the device may be placed on a floor or a table in a room.
  • the information processing device 1 includes a band-like light emitting unit 18 constituted by light emitting elements such as light-emitting diodes (LEDs) such that the light emitting unit 18 surrounds a central region of a side surface of the information processing device 1 in a horizontal direction.
  • Thereby, the information processing device 1 can notify a user of the states of the information processing device 1 .
  • The information processing device 1 can operate as if it were looking at the user, as illustrated in FIG. 1 .
  • the information processing device 1 can notify the user that a process is ongoing.
  • Here, the voice UI has conventionally been assumed to be used in one-to-one conversation between a system and a single user; it has not been assumed to be used in conversation between the system and a plurality of users. Therefore, when the voice UI system is used in a house or a public space, for example, a certain user is likely to occupy the system.
  • In view of this, the speech recognition system according to an embodiment of the present disclosure makes it possible to improve the convenience of the speech recognition system by outputting appropriate responses to the respective users when a plurality of users are talking.
  • the information processing device 1 has a display function of projecting an image on a wall 20 as illustrated in FIG. 1 .
  • The information processing device 1 can output a response by display in addition to outputting a response by voice. Therefore, when another user speaks while the information processing device 1 is outputting a response by voice, the information processing device 1 can display an image with wording such as "just a moment" to prompt the other user to stand by. This prevents the information processing device 1 from ignoring the other user's speech or stopping the response while it is being output, and enables the information processing device 1 to operate flexibly.
  • the information processing device 1 outputs a response 31 “tomorrow will be sunny” by voice in response to a speech 30 “what will the weather be like tomorrow?” from a user AA, and displays a response image 21 b indicating an illustration of the sun on the wall 20 , for example.
  • In addition, the information processing device 1 displays a response image 21 a "just a moment" that prompts the user BB, who has spoken during the response to the user AA, to wait his/her turn.
  • It is also possible for the information processing device 1 to project a speech contents image 21 c "when is the concert?", obtained by converting the recognized speech contents of the user BB into text, on the wall 20 . Accordingly, the user BB can confirm that his/her speech has been correctly recognized by the information processing device 1 .
  • Subsequently, the information processing device 1 outputs a response to the stand-by user BB by voice.
  • As described above, with the speech recognition system according to the embodiment, it is possible for a plurality of users to use the system at the same time by causing occupation of the voice response output to transition in accordance with the order of speeches, for example.
  • the shape of the information processing device 1 is not limited to the circular cylindrical shape illustrated in FIG. 1 .
  • the shape of the information processing device 1 may be a cube, a sphere, a polyhedron, or the like.
  • FIG. 2 is a diagram illustrating an example of the configuration of the information processing device 1 according to the embodiment.
  • the information processing device 1 includes a control unit 10 , a communication unit 11 , a microphone 12 , a loudspeaker 13 , a camera 14 , a ranging sensor 15 , a projection unit 16 , a storage unit 17 , and a light emitting unit 18 .
  • the control unit 10 controls respective structural elements of the information processing device 1 .
  • the control unit 10 is implemented by a microcontroller including a central processing unit (CPU), a read only memory (ROM), a random access memory (RAM), and a non-volatile memory.
  • the control unit 10 according to the embodiment also functions as a speech recognition unit 10 a , a semantic analysis unit 10 b , a response generation unit 10 c , a target decision unit 10 d , a response output method decision unit 10 e , and an output control unit 10 f.
  • the speech recognition unit 10 a recognizes a voice of a user collected by the microphone 12 of the information processing device 1 , converts the voice to a character string, and acquires a speech text. In addition, it is also possible for the speech recognition unit 10 a to identify a person who is speaking on the basis of a feature of the voice, and to estimate a voice source (in other words, direction of speaker).
  • the semantic analysis unit 10 b performs semantic analysis on the speech text acquired by the speech recognition unit 10 a .
  • a result of the semantic analysis is output to the response generation unit 10 c.
  • the response generation unit 10 c generates a response to the speech of the user on the basis of the semantic analysis result. For example, in the case where the speech of the user requests “tomorrow's weather”, the response generation unit 10 c acquires information on “tomorrow's weather” from a weather forecast server on a network, and generates a response.
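As a rough illustration of how the response generation described above could be structured, the following Python sketch dispatches on a hypothetical intent label produced by the semantic analysis; the intent names, slot layout, and fetch_weather stand-in are assumptions for illustration, not part of the disclosure.

```python
# Hypothetical sketch of a response generation unit; intent labels, slots,
# and the weather lookup are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class SemanticResult:
    intent: str                                  # e.g. "weather.forecast"
    slots: dict = field(default_factory=dict)    # e.g. {"date": "tomorrow"}

def fetch_weather(date: str) -> str:
    # Stand-in for acquiring information from a weather forecast server on a network.
    return "sunny"

def generate_response(result: SemanticResult) -> str:
    if result.intent == "weather.forecast":
        date = result.slots.get("date", "today")
        return f"{date} will be {fetch_weather(date)}"
    return "Sorry, I could not understand that."
```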
  • When speeches from a plurality of users are recognized, the target decision unit 10 d decides priorities of the respective users on the basis of a predetermined condition, decides the user having the highest priority as the target user, and decides the other user(s) as non-target user(s).
  • the case where the speeches from the plurality of users are recognized means a case where a speech from a second user is recognized while a first user is speaking, or a case where a speech from the second user is recognized during output of a voice response to the speech from the first user.
  • the priorities of the respective users based on the predetermined condition may be priorities based on order of speeches, for example. Specifically, in the case where a speech from the second user other than the first user who is talking to the device is recognized, the target decision unit 10 d sets priorities such that the priority of the first user who starts conversation earlier becomes higher than the priority of the second user who starts conversation later.
  • In the case where an explicit interrupt process is performed, the target decision unit 10 d may reset the priorities such that the non-target user who has interrupted is changed to the target user.
  • The explicit interrupt process may be a voice speech of a predetermined command, a predetermined gesture operation, a predetermined situation of a user based on sensing data, or the like. Details of the interrupt process will be described later.
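Purely as a minimal sketch of the order-based priority rule and the interrupt override described above (the class and method names are invented; the actual units are 10 d and 10 e inside the control unit 10 ):

```python
# Minimal sketch, assuming priority is simply the order of speech onset and
# that an explicit interrupt promotes a user to the front; names are invented.
class TargetDecisionUnit:
    def __init__(self) -> None:
        self._order: list[str] = []          # user ids, highest priority first

    def on_speech(self, user_id: str, interrupt: bool = False) -> None:
        if interrupt:
            if user_id in self._order:
                self._order.remove(user_id)
            self._order.insert(0, user_id)   # interrupting user becomes the target
        elif user_id not in self._order:
            self._order.append(user_id)      # later speakers get lower priority

    def target(self) -> str | None:
        return self._order[0] if self._order else None

    def non_targets(self) -> list[str]:
        return self._order[1:]

    def on_response_finished(self) -> None:
        # When the voice response to the target finishes, occupation of the
        # voice output transitions to the next user in line.
        if self._order:
            self._order.pop(0)
```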
  • the response output method decision unit 10 e decides a method for outputting a response to each user on the basis of the priorities of the plurality of users. For example, the response output method decision unit 10 e decides that a response is output by voice or a response is output by display in accordance with whether a user is decided as the target user by the target decision unit 10 d . Specifically, for example, the response output method decision unit 10 e allocates different response output methods to the target user and the non-target user such that the target user occupies the response output using voice and response output using display is allocated to the non-target user. In addition, it is also possible for the response output method decision unit 10 e to allocate a part of a display region to the non-target user even in the case where the response output using display is allocated to the target user.
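The allocation policy itself could then be as simple as the sketch below, where "voice" and "display" stand in for the loudspeaker 13 and the projection unit 16; this is an illustration under assumed interfaces, not the disclosed implementation.

```python
# Illustrative allocation: the target user occupies voice output, every
# non-target user is answered on the display.
def decide_output_methods(target: str | None,
                          non_targets: list[str]) -> dict[str, str]:
    if target is None:
        return {}
    methods = {target: "voice"}
    for user in non_targets:
        methods[user] = "display"
    return methods
```

For two users with AA as the target, decide_output_methods("AA", ["BB"]) yields {"AA": "voice", "BB": "display"}, matching the behavior illustrated in FIG. 4.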
  • the output control unit 10 f performs control such that responses generated by the response generation unit 10 c are output in accordance with the response output methods decided by the response output method decision unit 10 e .
  • a specific response output example according to the embodiment will be described later.
  • the communication unit 11 exchanges data with an external device.
  • the communication unit 11 connects with a predetermined server on a network, and receives information necessary for the response generation unit 10 c to generate a response.
  • the communication unit 11 cooperates with peripheral devices and transmits response data to a target device under the control of the output control unit 10 f.
  • the microphone 12 has functions of collecting peripheral sounds and outputting the collected sound to the control unit 10 as a sound signal.
  • the microphone 12 may be implemented by array microphones.
  • the loudspeaker 13 has functions of converting the sound signal to a sound and outputting the sound under the control of the output control unit 10 f.
  • the camera 14 has functions of capturing an image of the periphery by using an imaging lens included in the information processing device 1 , and outputting the captured image to the control unit 10 .
  • the camera 14 may be implemented by a 360-degree camera, a wide angle camera, or the like.
  • the ranging sensor 15 has a function of measuring distances between the information processing device 1 and a user of the information processing device 1 or people around the user.
  • the ranging sensor 15 may be implemented by an optical sensor (a sensor configured to measure a distance from a target object on the basis of information on phase difference between a light emitting timing and a light receiving timing).
  • the projection unit 16 is an example of a display device, and has a display function of projecting an (enlarged) image on a wall or a screen.
  • the storage unit 17 stores a program for causing the respective structural elements of the information processing device 1 to function.
  • the storage unit 17 stores various parameters and various algorithms.
  • the various parameters are used when the target decision unit 10 d calculates priorities of the plurality of users.
  • the various algorithms are used when the response output method decision unit 10 e decides output methods in accordance with the priorities (or in accordance with target/non-target decided on the basis of priorities).
  • the storage unit 17 stores registration information of users.
  • the registration information of a user includes individual identification information (feature of voice, facial image, feature of person image (including image of body), name, identification number, or the like), age, sex, hobby/preference, an attribute (housewife, office worker, student, or the like), information on a communication terminal held by the user, and the like.
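Purely for illustration, such a registration record might be shaped as follows; the field names are assumptions derived from the items listed above.

```python
# Hypothetical shape of a user registration record kept in the storage
# unit 17; all field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class RegisteredUser:
    name: str
    identification_number: str
    voice_feature: bytes | None = None       # feature of the user's voice
    face_image: bytes | None = None          # facial image / person image
    age: int | None = None
    sex: str | None = None
    hobbies: list[str] = field(default_factory=list)
    attribute: str | None = None             # e.g. "housewife", "office worker"
    terminals: list[str] = field(default_factory=list)  # communication terminals held
```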
  • the light emitting unit 18 may be implemented by light emitting elements such as LEDs, and the lighting manners and lighting positions of the light emitting unit 18 are controlled such that all of the lights are turned on, a part of the lights is turned on, or the lights blink. For example, under the control of the control unit 10 , a part of the light emitting unit 18 in the direction of a speaker recognized by the speech recognition unit 10 a is turned on. This enables the information processing device 1 to operate as if it were looking in the direction of the speaker.
  • the information processing device 1 may further include an infrared (IR) camera, a depth camera, a stereo camera, a motion detector, or the like to acquire information on a surrounding environment.
  • Note that the positions of the microphone 12 , the loudspeaker 13 , the camera 14 , the light emitting unit 18 , and the like in the information processing device 1 are not specifically limited.
  • the respective functions of the control unit 10 according to the embodiment may be in a cloud connected via the communication unit 11 .
  • FIG. 3 is a flowchart illustrating the operation process of the speech recognition system according to the embodiment.
  • the control unit 10 of the information processing device 1 first determines whether a user is speaking in Step S 103 . Specifically, the control unit 10 performs speech recognition on a sound signal collected by the microphone 12 by using the speech recognition unit 10 a , performs semantic analysis on the sound signal by using the semantic analysis unit 10 b , and determines whether the sound signal is a speech from the user who is talking to the system.
  • Next, in Step S 106 , the control unit 10 determines whether a plurality of users are speaking. Specifically, the control unit 10 can determine whether two or more users are speaking on the basis of user (speaker) identification performed by the speech recognition unit 10 a.
  • In the case where a single user is speaking, the response output method decision unit 10 e in the control unit 10 decides to use the voice response output method (S 112 ), and the output control unit 10 f outputs a response generated by the response generation unit 10 c by voice (S 115 ).
  • On the other hand, in the case where a plurality of users are speaking, the target decision unit 10 d in the control unit 10 decides a target user and a non-target user on the basis of the priorities of the respective users in Step S 109 .
  • the target decision unit 10 d decides that a first user who has spoken first is a target user by increasing the priority of the first user, and decides that a second user who has spoken later is the non-target user by decreasing the priority of the second user in comparison with the priority of the first user.
  • Next, in Step S 112 , the response output method decision unit 10 e decides response output methods in accordance with the target/non-target decided by the target decision unit 10 d .
  • the response output method decision unit 10 e decides that the response output method using voice is allocated to the target user (in other words, target user occupies voice response output method), and decides that the response output method using display is allocated to the non-target user.
  • Then, in Step S 115 , the output control unit 10 f performs control such that the responses to the speeches from the respective users, generated by the response generation unit 10 c in accordance with the results of the semantic analysis performed on the speeches by the semantic analysis unit 10 b , are output by using the respective output methods decided by the response output method decision unit 10 e . Accordingly, for example, in the case where the second user speaks during voice output of a response to the speech of the first user, the output control unit 10 f can continue outputting the response without stopping it. This is because the first user is decided to be the target user and can occupy the voice output method.
  • In addition, since the second user who has spoken during the speech from the first user is decided to be the non-target user and the display output method is allocated to the second user, it is possible for the output control unit 10 f to output a response to the second user by display while outputting the response to the first user by voice. Specifically, the output control unit 10 f displays a response instructing the second user to wait his/her turn. After the voice response to the first user finishes, the output control unit 10 f outputs the response to the second user by voice. This is because, when the voice response to the first user finishes, the priority of the second user increases, the second user becomes the target user, and the second user can occupy the voice response output.
  • Note that, in the case where only a single user is talking, the response output method decision unit 10 e performs control such that the single user occupies the voice response output.
  • As described above, by using the voice UI system according to the embodiment, it is possible to flexibly respond to speeches from a plurality of users, which improves the convenience of the voice UI system. The overall flow is sketched below; specific examples of outputting responses to the plurality of users according to the embodiment will be described later.
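The flow of FIG. 3 (S 103 to S 115 ) could be sketched, under the same illustrative assumptions as the earlier snippets, roughly as below; the outputs object with speak()/display() methods is invented for the example.

```python
# Rough sketch of the S103-S115 loop, reusing the helpers sketched earlier.
def handle_turn(speeches, decider, outputs):
    """speeches: list of (user_id, SemanticResult) recognized this turn."""
    if not speeches:                           # S103: nobody is talking to the system
        return
    for user_id, _ in speeches:                # record speech order for priorities
        decider.on_speech(user_id)
    if len(speeches) == 1:                     # S106: a single user is speaking
        user_id, result = speeches[0]
        outputs.speak(user_id, generate_response(result))   # S112 and S115
        return
    # S109: plural speakers -> target/non-target from priorities
    methods = decide_output_methods(decider.target(), decider.non_targets())
    for user_id, result in speeches:           # S115: output with the decided methods
        if methods.get(user_id) == "voice":
            outputs.speak(user_id, generate_response(result))
        else:
            outputs.display(user_id, "just a moment")  # prompt to stand by
```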
  • In addition, in the case where an explicit interrupt process is performed, the target decision unit 10 d in the control unit 10 changes the target/non-target decision with respect to the plurality of users (S 109 ). Specifically, the target decision unit 10 d increases the priority of the interrupting user above that of the current target user, decides the interrupting user as the target user, and changes the current target user to a non-target user.
  • Subsequently, the control unit 10 controls the response such that the response output method is switched to a response output method that is re-decided in accordance with the change (S 112 and S 115 ). Examples of the explicit interrupt process include processes using voice, gesture, and the like, as described below.
  • For example, a priority of an interrupting user is increased in the voice interrupt process in the case where a system name is spoken, such as "SS (system name), what's the weather like?"; in the case where a predetermined interrupt command is spoken, such as "interrupt: what's the weather like?"; or in the case where wording indicating that the user is in a hurry or has an important request is spoken, such as "what's the weather like? Hurry up!"
  • The priority of the interrupting user is also increased in the case where the interrupting user speaks louder than his/her usual voice volume (or a general voice volume) or speaks faster than usual, since this is determined to be an explicit interrupt process.
  • As the gesture interrupt process, the priority of the interrupting user is also increased in the case where the interrupting user speaks while making a predetermined gesture, such as raising his/her hand.
  • an interrupt process function may be attached to a physical button provided on the information processing device 1 or a remote controller by which the information processing device 1 is operated.
  • Alternatively, an explicit interrupt process may be determined on the basis of contents detected by the camera 14 , the ranging sensor 15 , or the like. As an example, it is determined that there is an explicit interrupt process and the priority of a user is increased in the case where it is sensed that the user is in a hurry (for example, the user is approaching the information processing device 1 in a hurry), or in the case where the user speaks to the information processing device 1 from a position closer to the information processing device 1 than the current target user.
  • A priority of a user can also be increased in the case where schedule information is acquired from a predetermined server or the like and it is found that the interrupting user has a plan coming up shortly.
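A sketch of how such interrupt cues might be combined is shown below; the cue names, thresholds, and system name are invented for illustration and are not part of the disclosure.

```python
# Illustrative heuristic only; all constants and thresholds are assumptions.
SYSTEM_NAME = "SS"
INTERRUPT_COMMANDS = ("interrupt:",)
URGENT_WORDS = ("hurry up",)

def is_explicit_interrupt(text: str, volume: float, usual_volume: float,
                          raised_hand: bool, approaching_fast: bool) -> bool:
    t = text.lower()
    if t.startswith(SYSTEM_NAME.lower()) or t.startswith(INTERRUPT_COMMANDS):
        return True                    # system name or interrupt command spoken
    if any(w in t for w in URGENT_WORDS):
        return True                    # wording indicating the user is in a hurry
    if volume > 1.5 * usual_volume:    # louder than the user's usual voice volume
        return True
    return raised_hand or approaching_fast  # gesture / situation sensed by camera
```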
  • the explicit interrupt processes have been described above. However, according to the embodiment, it is also possible to perform an interrupt process according to an attitude of a target user in addition to the above described interrupt process.
  • Alternatively, static or dynamic priorities may be allocated to the respective users. Specifically, for example, in the case where the user AA is registered as a "son", the user BB is registered as a "mother", and the priority of the "mother" is set higher than the priority of the "son", the priority of the user BB is increased in comparison with the priority of the user AA when the user BB interrupts the conversation between the information processing device 1 and the user AA. Accordingly, the response to the user AA is switched from the voice output to the display output.
  • FIG. 4 is a diagram illustrating examples of outputting responses by voice and display to speeches from a plurality of users at the same time according to the embodiment.
  • In the case where the information processing device 1 recognizes a speech 32 from the user BB while outputting a response 31 by voice to a speech 30 from the user AA, the information processing device 1 decides the user AA, who started the conversation first, as the target user, and continues outputting the voice of the response 31 .
  • Meanwhile, the information processing device 1 decides the user BB, who started the conversation later, as a non-target user, and displays a response image 21 a that prompts the user BB to stand by.
  • After the voice response to the user AA finishes, the information processing device 1 outputs a response 33 "Thank you for waiting. It's next Friday" to the stand-by user BB by voice, as illustrated in the right side of FIG. 4 .
  • In addition, the information processing device 1 can also output display by projecting a response image 21 d on the wall 20 .
  • At this time, the information processing device 1 may be controlled such that a part of the light emitting unit 18 in the direction of the user BB is turned on, as if the information processing device 1 were looking at the user BB, as illustrated in the right side of FIG. 4 .
  • As described above, by using the speech recognition system according to the embodiment, it is possible for a plurality of users to use the system at the same time by causing occupation of the voice response output to transition in accordance with the order of the speeches from the users.
  • the way of instructing the non-target user to stand by is not limited to the projection of the response image 21 a as illustrated in FIG. 4 . Next, modifications of the instructions will be described.
  • the information processing device 1 can output the stand-by instruction to the non-target user by using a sub-display or the light emitting unit 18 provided on the information processing device 1 .
  • the information processing device 1 can output the stand-by instruction by using an icon or color information of light.
  • With reference to FIG. 5A and FIG. 5B , notification indicating stand-by users by using the sub-display will be described.
  • As illustrated in FIG. 5A , the output control unit 10 f can visualize non-target users who are currently waiting for responses as a queue. In the example illustrated in FIG. 5A , it is possible to intuitively recognize that two people are currently waiting for responses.
  • Alternatively, the output control unit 10 f can explicitly display the IDs or names of the users, in the colors registered for the respective users, to visualize the non-target users who are currently waiting for responses as a queue. In the example illustrated in FIG. 5B , it is possible to intuitively recognize who is currently waiting for a response.
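A toy sketch of such a stand-by queue rendering (the sub-display interface is an assumption):

```python
# Illustrative only: format the waiting queue as short text for a sub-display.
def render_standby_queue(non_targets: list[str], names: dict[str, str]) -> str:
    if not non_targets:
        return ""
    # e.g. "Waiting: BB, CC" shows who is waiting and in what order.
    return "Waiting: " + ", ".join(names.get(u, u) for u in non_targets)
```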
  • FIG. 6 is a diagram illustrating an example of saving a display region by displaying an icon indicating a response to a non-target user.
  • the information processing device 1 that has recognized a speech 34 “please display my calendar” from the user AA outputs a response 35 “sure”, and projects a corresponding calendar image 22 a on the wall 20 .
  • When the response to the non-target user BB relates to an e-mail, the information processing device 1 displays an icon image 22 b of the e-mail, as illustrated in FIG. 6 , instead of occupying the display region.
  • Accordingly, the user BB can intuitively understand that his/her speech has been recognized correctly and that he/she is in a response waiting state.
  • FIG. 7 is a diagram illustrating simultaneous responses using directional voices.
  • As illustrated in FIG. 7 , the information processing device 1 recognizes the positions of the respective speakers by using contents sensed by the camera 14 and the microphone 12 , directs the voice of a response 37 to the user AA and the voice of a response 38 to the user BB towards the respective positions of the users, and outputs the responses at the same time.
  • At this time, it is also possible for the information processing device 1 to divide the display region, allocate display areas to the respective users, and display a response image 23 a to the user AA and a response image 23 b to the user BB.
  • the information processing device 1 may enlarge the display region for the target user in comparison with the display region for the non-target user.
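Sketching the directional, simultaneous output described above under stated assumptions (estimated speaker azimuths and a directional loudspeaker array are taken as given), each response could simply be paired with its user's sensed direction:

```python
# Illustrative sketch: pair each user's response with that user's direction
# so a directional sound output unit could aim each response separately.
from dataclasses import dataclass

@dataclass
class DirectedResponse:
    user_id: str
    azimuth_deg: float   # speaker direction estimated via camera/microphones
    text: str

def plan_simultaneous_responses(responses: dict[str, str],
                                positions: dict[str, float]) -> list[DirectedResponse]:
    return [DirectedResponse(u, positions[u], text)
            for u, text in responses.items()]
```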
  • As described above, by using the speech recognition system according to the embodiment, it is possible to respond to a plurality of users at the same time by using directional voices, and to allow the plurality of users to use the system at the same time.
  • the information processing device 1 may cooperate with an external device and perform control such that the external device outputs a response to the non-target user.
  • For example, the information processing device 1 performs control such that a response to the non-target user is output from a mobile communication terminal or a wearable terminal held by the non-target user, a TV in the vicinity or in his/her own room, another voice UI system in another place, or the like.
  • At this time, the information processing device 1 may display, on the sub-display provided on the information processing device 1 , information indicating that the external device will output the response.
  • the information processing device 1 may cause the mobile communication terminal or the wearable terminal to output voice such as “a response will be output from here”. This enables the non-target user to be notified of the terminal from which the response is to be output.
  • As described above, by using the speech recognition system according to the embodiment, it is possible to respond to a plurality of users at the same time through cooperation with external devices, and to allow the plurality of users to use the system at the same time.
  • In the case where the target user is in a position away from the information processing device 1 , the information processing device 1 may decide to use a response output method by which the information processing device 1 cooperates with an external device, such as a mobile communication terminal or a wearable device held by the user.
  • Accordingly, it is possible to avoid the voice output or the display output of the information processing device 1 being occupied when the target user who has spoken first is in a position away from the information processing device 1 .
  • the voice output or the display output can be allocated to a non-target user in proximity.
  • It is also possible for the information processing device 1 to decide a response output method in accordance with response contents. For example, in the case where a response has a large amount of information, such as calendar display, the information processing device 1 preferentially allocates the display output method to such a response and allows another user to use the voice output method.
  • In the case of a simple confirmation (for example, the information processing device 1 outputs a simple response "no" to a speech "is the Yamanote Line delayed?" from a user), the response is output by voice and image display is not necessary.
  • In this case, the information processing device 1 allows another user to use the display output method.
  • Likewise, in the case where the speech from the user merely includes an instruction with regard to display, such as "please display my calendar", it is also possible for the information processing device 1 to allow another user to use the voice output method.
  • In the case where the information processing device 1 cannot respond to all of the speaking users, the information processing device 1 may display an error.
  • an example of the error display will be described with reference to FIG. 8 .
  • FIG. 8 is a diagram illustrating an example of the error display according to the embodiment.
  • the information processing device 1 that has recognized a speech 40 from the user AA outputs a response 41 by voice and projects a response image 24 d .
  • In this case, an error image 24 a is projected, as illustrated in FIG. 8 , when the user BB speaks a speech 42 "when is the concert?", a user CC speaks a speech 43 "please display TV listings!", a user DD speaks a speech 44 "what kind of news do you have today?", and the number of speakers exceeds the number of simultaneous speakers allowed by the information processing device 1 (for example, two people).
  • The error image 24 a may include a content that prompts the users to take measures to avoid the error, such as "please speak one by one!" Accordingly, the user BB, the user CC, and the user DD can understand that the error will disappear if they speak one by one.
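The capacity check behind this error handling could be as simple as the following sketch; the limit of two is the example given above, and the function name is invented.

```python
MAX_SIMULTANEOUS_SPEAKERS = 2   # example capacity from the description above

def capacity_error(speakers: list[str]) -> str | None:
    # Returns the error prompt when more users speak than can be handled.
    if len(speakers) > MAX_SIMULTANEOUS_SPEAKERS:
        return "please speak one by one!"
    return None
```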
  • the information processing device 1 may transfer the response contents to a device or the like associated with each of non-target users.
  • As described above, by using the speech recognition system according to the embodiment of the present disclosure, it is possible for a plurality of users to use the system at the same time, and to improve the convenience of the speech recognition system, by causing occupation of the voice response output to transition in accordance with the order of speeches, for example.
  • Additionally, the present technology may also be configured as below.
  • An information processing device including:
  • a response generation unit configured to generate responses to speeches from a plurality of users
  • a decision unit configured to decide methods of outputting the responses to the respective users on the basis of priorities according to order of the speeches from the plurality of users
  • an output control unit configured to perform control such that the generated responses are output by using the decided methods of outputting the responses.
  • the decision unit sets priorities such that a priority of the user who has started conversation earlier becomes higher than a priority of the user who has started conversation later.
  • the decision unit decides a user having the highest priority as a target user, and decides each of the other one or more users as a non-target user.
  • the decision unit causes the target user to occupy a response output method using voice, and allocates a response output method using display to the non-target user.
  • the response generation unit generates a response that prompts the non-target user to stand by
  • the output control unit performs control such that an image of a response that prompts the non-target user to stand by is displayed.
  • the response generation unit generates a response to the non-target user, the response indicating a result of speech recognition performed on a speech from the non-target user, and
  • the output control unit performs control such that an image of the response indicating the result of speech recognition performed on the speech from the non-target user is displayed.
  • the information processing device according to any one of (4) to (6),
  • the output control unit performs control such that the non-target user waiting for a response is explicitly shown.
  • the decision unit causes the response output method using voice that has been occupied by the target user to transition to the non-target user.
  • the information processing device according to any one of (4) to (8), in which the response output using display is display through projection.
  • the decision unit allocates a method of outputting a response through cooperation with an external device to the non-target user.
  • the decision unit allocates a response output method that is different from a response output method decided in accordance with contents of a response to the target user, to the non-target user.
  • the decision unit allocates the outputting method using voice to the non-target user.
  • the decision unit decides a method of outputting a response in accordance with a state of the target user.
  • the decision unit allocates a method of outputting a response through cooperation with an external device.
  • the decision unit changes the priorities in response to an explicit interrupt process.
  • the decision unit allocates a method of outputting a response from a directional sound output unit to a plurality of users.
  • the output control unit performs control such that error notification is issued.
  • A control method including:
  • generating responses to speeches from a plurality of users;
  • deciding methods of outputting the responses to the respective users on the basis of priorities according to order of the speeches from the plurality of users; and
  • performing control, by an output control unit, such that the generated responses are output by using the decided methods of outputting the responses.
US15/559,940 2015-03-31 2015-12-28 Information processing device, control method, and program Abandoned US20180074785A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2015073896 2015-03-31
JP2015-073896 2015-03-31
PCT/JP2015/086544 WO2016157662A1 (ja) 2015-03-31 2015-12-28 Information processing device, control method, and program

Publications (1)

Publication Number Publication Date
US20180074785A1 true US20180074785A1 (en) 2018-03-15

Family

ID=57005865

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/559,940 Abandoned US20180074785A1 (en) 2015-03-31 2015-12-28 Information processing device, control method, and program

Country Status (5)

Country Link
US (1) US20180074785A1 (de)
EP (1) EP3279790B1 (de)
JP (1) JP6669162B2 (de)
CN (1) CN107408027B (de)
WO (1) WO2016157662A1 (de)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180232563A1 (en) 2017-02-14 2018-08-16 Microsoft Technology Licensing, Llc Intelligent assistant
US20180322872A1 (en) * 2017-05-02 2018-11-08 Naver Corporation Method and system for processing user command to provide and adjust operation of electronic device by analyzing presentation of user speech
US20180367669A1 (en) * 2017-06-20 2018-12-20 Lenovo (Singapore) Pte. Ltd. Input during conversational session
US20190013021A1 (en) * 2017-07-05 2019-01-10 Baidu Online Network Technology (Beijing) Co., Ltd Voice wakeup method, apparatus and system, cloud server and readable medium
US20190088257A1 (en) * 2017-09-18 2019-03-21 Motorola Mobility Llc Directional Display and Audio Broadcast
US10438584B2 (en) * 2017-04-07 2019-10-08 Google Llc Multi-user virtual assistant for verbal device control
US20190369936A1 (en) * 2017-07-20 2019-12-05 Apple Inc. Electronic Device With Sensors and Display Devices
CN110992971A (zh) * 2019-12-24 2020-04-10 达闼科技成都有限公司 Method for determining a speech enhancement direction, electronic device, and storage medium
US10628570B2 (en) * 2017-05-15 2020-04-21 Fmr Llc Protection of data in a zero user interface environment
US20200152205A1 (en) * 2018-11-13 2020-05-14 Comcast Cable Communications,Llc Methods and systems for determining a wake word
CN112204655A (zh) * 2018-05-22 2021-01-08 三星电子株式会社 Electronic device for outputting response to speech input by using application, and operation method thereof
US11010601B2 (en) 2017-02-14 2021-05-18 Microsoft Technology Licensing, Llc Intelligent assistant device communicating non-verbal cues
US20210183371A1 (en) * 2018-08-29 2021-06-17 Alibaba Group Holding Limited Interaction method, device, storage medium and operating system
US11100384B2 (en) 2017-02-14 2021-08-24 Microsoft Technology Licensing, Llc Intelligent device user interactions
US11128636B1 (en) * 2020-05-13 2021-09-21 Science House LLC Systems, methods, and apparatus for enhanced headsets
US20210319790A1 (en) * 2018-07-20 2021-10-14 Sony Corporation Information processing device, information processing system, information processing method, and program
US11189270B2 (en) 2018-06-26 2021-11-30 Hitachi, Ltd. Method of controlling dialogue system, dialogue system, and data storage medium
CN113763968A (zh) * 2021-09-08 2021-12-07 北京百度网讯科技有限公司 Method, apparatus, device, medium, and product for recognizing speech
US11222060B2 (en) * 2017-06-16 2022-01-11 Hewlett-Packard Development Company, L.P. Voice assistants with graphical image responses
US20220084518A1 (en) * 2019-01-07 2022-03-17 Sony Group Corporation Information Processing Device And Information Processing Method
US11373643B2 (en) * 2018-03-30 2022-06-28 Lenovo (Beijing) Co., Ltd. Output method and electronic device for reply information and supplemental information
US11574632B2 (en) 2018-04-23 2023-02-07 Baidu Online Network Technology (Beijing) Co., Ltd. In-cloud wake-up method and system, terminal and computer-readable storage medium
US11935449B2 (en) 2018-01-22 2024-03-19 Sony Corporation Information processing apparatus and information processing method

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108447471B (zh) * 2017-02-15 2021-09-10 腾讯科技(深圳)有限公司 Speech recognition method and speech recognition apparatus
JP6901871B2 (ja) * 2017-03-01 2021-07-14 大和ハウス工業株式会社 Interface unit
JP7215417B2 (ja) * 2017-11-07 2023-01-31 ソニーグループ株式会社 Information processing device, information processing method, and program
CN107831903B (zh) * 2017-11-24 2021-02-02 科大讯飞股份有限公司 Human-computer interaction method and apparatus involving multiple participants
JP2019101264A (ja) * 2017-12-04 2019-06-24 シャープ株式会社 External control device, voice interactive control system, control method, and program
JP6693495B2 (ja) * 2017-12-15 2020-05-13 ソニー株式会社 Information processing device, information processing method, and recording medium
US20200411012A1 (en) * 2017-12-25 2020-12-31 Mitsubishi Electric Corporation Speech recognition device, speech recognition system, and speech recognition method
CN110096251B (zh) * 2018-01-30 2024-02-27 钉钉控股(开曼)有限公司 Interaction method and apparatus
EP4361777A2 (de) * 2018-05-04 2024-05-01 Google LLC Generation and/or adaptation of automated assistant content depending on a distance between user(s) and an automated assistant interface
CN109117737A (zh) * 2018-07-19 2019-01-01 北京小米移动软件有限公司 Control method and apparatus for a hand washer, and storage medium
CN109841207A (zh) * 2019-03-01 2019-06-04 深圳前海达闼云端智能科技有限公司 Interaction method, robot, server, and storage medium
EP3723354B1 (de) * 2019-04-09 2021-12-22 Sonova AG Prioritization and muting of participants in a hearing device system
JP7258686B2 (ja) * 2019-07-22 2023-04-17 Tis株式会社 Information processing system, information processing method, and program
KR20210042520A (ko) * 2019-10-10 2021-04-20 삼성전자주식회사 Electronic apparatus and method for controlling the same
JP7474058B2 2020-02-04 2024-04-24 株式会社デンソーテン Display device and control method of display device
JP6887035B1 (ja) * 2020-02-26 2021-06-16 株式会社サイバーエージェント Control system, control device, control method, and computer program
WO2021251107A1 (ja) * 2020-06-11 2021-12-16 ソニーグループ株式会社 Information processing device, information processing system, information processing method, and program
CN112863511A (zh) * 2021-01-15 2021-05-28 北京小米松果电子有限公司 Signal processing method, apparatus, and storage medium
WO2023090057A1 (ja) * 2021-11-17 2023-05-25 ソニーグループ株式会社 Information processing device, information processing method, and information processing program

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01216398A (ja) * 1988-02-25 1989-08-30 Toshiba Corp Speech recognition system
US6882974B2 (en) * 2002-02-15 2005-04-19 Sap Aktiengesellschaft Voice-control for a user interface
JP2006243555A (ja) * 2005-03-04 2006-09-14 Nec Corp Response determination system, robot, event output server, and response determination method
CN101282380B (zh) * 2007-04-02 2012-04-18 中国电信股份有限公司 Call connection method, server, and communication system for a "one number" service
CN101291469B (zh) * 2008-06-02 2011-06-29 中国联合网络通信集团有限公司 Method for implementing voice called service and calling service
KR20140004515A (ko) * 2012-07-03 2014-01-13 삼성전자주식회사 Display apparatus, interactive system, and response information providing method
US9576574B2 (en) * 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9098467B1 (en) * 2012-12-19 2015-08-04 Rawles Llc Accepting voice commands based on user identity
CN107003999B (zh) * 2014-10-15 2020-08-21 声钰科技 System and method for subsequent responses to a user's prior natural language input

Cited By (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10817760B2 (en) 2017-02-14 2020-10-27 Microsoft Technology Licensing, Llc Associating semantic identifiers with objects
US11194998B2 (en) 2017-02-14 2021-12-07 Microsoft Technology Licensing, Llc Multi-user intelligent assistance
US11100384B2 (en) 2017-02-14 2021-08-24 Microsoft Technology Licensing, Llc Intelligent device user interactions
US11010601B2 (en) 2017-02-14 2021-05-18 Microsoft Technology Licensing, Llc Intelligent assistant device communicating non-verbal cues
US11004446B2 (en) 2017-02-14 2021-05-11 Microsoft Technology Licensing, Llc Alias resolving intelligent assistant computing device
US10984782B2 (en) * 2017-02-14 2021-04-20 Microsoft Technology Licensing, Llc Intelligent digital assistant system
US10460215B2 (en) 2017-02-14 2019-10-29 Microsoft Technology Licensing, Llc Natural language interaction for smart assistant
US10467509B2 (en) 2017-02-14 2019-11-05 Microsoft Technology Licensing, Llc Computationally-efficient human-identifying smart assistant computer
US10467510B2 (en) 2017-02-14 2019-11-05 Microsoft Technology Licensing, Llc Intelligent assistant
US20180232563A1 (en) 2017-02-14 2018-08-16 Microsoft Technology Licensing, Llc Intelligent assistant
US10496905B2 (en) 2017-02-14 2019-12-03 Microsoft Technology Licensing, Llc Intelligent assistant with intent-based information resolution
US10957311B2 (en) 2017-02-14 2021-03-23 Microsoft Technology Licensing, Llc Parsers for deriving user intents
US10579912B2 (en) 2017-02-14 2020-03-03 Microsoft Technology Licensing, Llc User registration for intelligent assistant computer
US10824921B2 (en) 2017-02-14 2020-11-03 Microsoft Technology Licensing, Llc Position calibration for intelligent assistant computing device
US10628714B2 (en) 2017-02-14 2020-04-21 Microsoft Technology Licensing, Llc Entity-tracking computing system
US11817092B2 (en) 2017-04-07 2023-11-14 Google Llc Multi-user virtual assistant for verbal device control
US10891957B2 (en) 2017-04-07 2021-01-12 Google Llc Multi-user virtual assistant for verbal device control
US10438584B2 (en) * 2017-04-07 2019-10-08 Google Llc Multi-user virtual assistant for verbal device control
US10657963B2 (en) * 2017-05-02 2020-05-19 Naver Corporation Method and system for processing user command to provide and adjust operation of electronic device by analyzing presentation of user speech
US20180322872A1 (en) * 2017-05-02 2018-11-08 Naver Corporation Method and system for processing user command to provide and adjust operation of electronic device by analyzing presentation of user speech
US10628570B2 (en) * 2017-05-15 2020-04-21 Fmr Llc Protection of data in a zero user interface environment
US11222060B2 (en) * 2017-06-16 2022-01-11 Hewlett-Packard Development Company, L.P. Voice assistants with graphical image responses
US20180367669A1 (en) * 2017-06-20 2018-12-20 Lenovo (Singapore) Pte. Ltd. Input during conversational session
US11178280B2 (en) * 2017-06-20 2021-11-16 Lenovo (Singapore) Pte. Ltd. Input during conversational session
US20190013021A1 (en) * 2017-07-05 2019-01-10 Baidu Online Network Technology (Beijing) Co., Ltd Voice wakeup method, apparatus and system, cloud server and readable medium
US10964317B2 (en) * 2017-07-05 2021-03-30 Baidu Online Network Technology (Beijing) Co., Ltd. Voice wakeup method, apparatus and system, cloud server and readable medium
US11609603B2 (en) 2017-07-20 2023-03-21 Apple Inc. Electronic device with sensors and display devices
US20190369936A1 (en) * 2017-07-20 2019-12-05 Apple Inc. Electronic Device With Sensors and Display Devices
US11150692B2 (en) * 2017-07-20 2021-10-19 Apple Inc. Electronic device with sensors and display devices
US20190088257A1 (en) * 2017-09-18 2019-03-21 Motorola Mobility Llc Directional Display and Audio Broadcast
US10475454B2 (en) * 2017-09-18 2019-11-12 Motorola Mobility Llc Directional display and audio broadcast
US11935449B2 (en) 2018-01-22 2024-03-19 Sony Corporation Information processing apparatus and information processing method
US11373643B2 (en) * 2018-03-30 2022-06-28 Lenovo (Beijing) Co., Ltd. Output method and electronic device for reply information and supplemental information
US11900925B2 (en) 2018-03-30 2024-02-13 Lenovo (Beijing) Co., Ltd. Output method and electronic device
US11574632B2 (en) 2018-04-23 2023-02-07 Baidu Online Network Technology (Beijing) Co., Ltd. In-cloud wake-up method and system, terminal and computer-readable storage medium
US11508364B2 (en) * 2018-05-22 2022-11-22 Samsung Electronics Co., Ltd. Electronic device for outputting response to speech input by using application and operation method thereof
CN112204655A (zh) * 2018-05-22 2021-01-08 三星电子株式会社 Electronic device for outputting response to speech input by using application, and operation method thereof
US11189270B2 (en) 2018-06-26 2021-11-30 Hitachi, Ltd. Method of controlling dialogue system, dialogue system, and data storage medium
US20210319790A1 (en) * 2018-07-20 2021-10-14 Sony Corporation Information processing device, information processing system, information processing method, and program
US20210183371A1 (en) * 2018-08-29 2021-06-17 Alibaba Group Holding Limited Interaction method, device, storage medium and operating system
US10971160B2 (en) * 2018-11-13 2021-04-06 Comcast Cable Communications, Llc Methods and systems for determining a wake word
US11817104B2 (en) 2018-11-13 2023-11-14 Comcast Cable Communications, Llc Methods and systems for determining a wake word
US20200152205A1 (en) * 2018-11-13 2020-05-14 Comcast Cable Communications,Llc Methods and systems for determining a wake word
US20220084518A1 (en) * 2019-01-07 2022-03-17 Sony Group Corporation Information Processing Device And Information Processing Method
CN110992971A (zh) * 2019-12-24 2020-04-10 达闼科技成都有限公司 Method for determining a speech enhancement direction, electronic device, and storage medium
US20230293106A1 (en) * 2020-05-13 2023-09-21 Science House LLC Systems, methods, and apparatus for enhanced headsets
US11128636B1 (en) * 2020-05-13 2021-09-21 Science House LLC Systems, methods, and apparatus for enhanced headsets
US11957486B2 (en) * 2020-05-13 2024-04-16 Science House LLC Systems, methods, and apparatus for enhanced headsets
CN113763968A (zh) * 2021-09-08 2021-12-07 北京百度网讯科技有限公司 Method, apparatus, device, medium, and product for recognizing speech

Also Published As

Publication number Publication date
EP3279790B1 (de) 2020-11-11
JP6669162B2 (ja) 2020-03-18
JPWO2016157662A1 (ja) 2018-01-25
CN107408027A (zh) 2017-11-28
CN107408027B (zh) 2020-07-28
EP3279790A4 (de) 2018-12-19
WO2016157662A1 (ja) 2016-10-06
EP3279790A1 (de) 2018-02-07

Similar Documents

Publication Publication Date Title
EP3279790B1 (de) Information processing device, control method, and program
US10776070B2 (en) Information processing device, control method, and program
US11853648B2 (en) Cognitive and interactive sensor based smart home solution
US11812344B2 (en) Outputting notifications using device groups
EP3179474B1 (de) Benutzerfokusaktivierte spracherkennung
JP6669073B2 (ja) Information processing device, control method, and program
US20180188840A1 (en) Information processing device, information processing method, and program
EP2973543B1 (de) Bereitstellung von inhalt auf mehreren vorrichtungen
US11237794B2 (en) Information processing device and information processing method
KR102551715B1 (ko) Generating IoT-based notification(s) and provisioning of command(s) to cause automatic rendering of the IoT-based notification(s) by automated assistant client(s) of client device(s)
US11373650B2 (en) Information processing device and information processing method
KR102488285B1 (ko) Providing audio information using a digital assistant
US20210110790A1 (en) Information processing device, information processing method, and recording medium
CN115917477A (zh) Assistant device arbitration using wearable device data
KR102629796B1 (ko) Electronic device supporting improved speech recognition
KR20210116897A (ko) Method for controlling an external device based on voice, and electronic device therefor
JPWO2017175442A1 (ja) Information processing apparatus and information processing method
JP2016071192A (ja) Dialogue device and dialogue method

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OHMURA, JUNKI;REEL/FRAME:043921/0466

Effective date: 20170605

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION