WO2016052018A1 - Système de gestion d'appareil ménager, appareil ménager, dispositif de commande à distance et robot - Google Patents

Système de gestion d'appareil ménager, appareil ménager, dispositif de commande à distance et robot (Home appliance management system, home appliance, remote control device, and robot)

Info

Publication number
WO2016052018A1
Authority
WO
WIPO (PCT)
Prior art keywords
home appliance
unit
robot
user
voice
Prior art date
Application number
PCT/JP2015/074117
Other languages
English (en)
Japanese (ja)
Inventor
圭司 坂
実雄 阪本
俊介 山縣
前田 隆宏
Original Assignee
シャープ株式会社
Application filed by シャープ株式会社 (Sharp Corporation)
Publication of WO2016052018A1

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06F: ELECTRIC DIGITAL DATA PROCESSING
          • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F 3/16: Sound input; Sound output
          • G06F 13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
      • G10: MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
          • G10L 13/00: Speech synthesis; Text to speech systems
            • G10L 13/08: Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
          • G10L 15/00: Speech recognition
    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04Q: SELECTING
          • H04Q 9/00: Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station, in which substation desired apparatus is selected for applying a control signal thereto or for obtaining measured values therefrom

Definitions

  • the present invention relates to a home appliance management system, a home appliance, a remote control device, and a robot.
  • Home appliances that use voice recognition technology to perform operations corresponding to the user's voice commands, and that notify the user by voice of status information indicating the state of the device, can now also acquire the latest information via a communication network such as the Internet.
  • Network-compatible home appliances that reproduce messages from smartphones via a communication network have also been proposed (for example, Japanese Patent Application Laid-Open No. H10-228707).
  • Japanese Patent Laid-Open No. 2008-046424 (published February 28, 2008)
  • The network-compatible home appliances described above are useful, but they have the following problems, which reduce convenience for the user.
  • Because utterance content is set on a one-to-one basis for each home appliance, there is a problem that an utterance is missed when the user is not nearby, or cannot be heard at an appropriate timing.
  • For example, an error notification indicating that a recording start process has failed can only be given the next time the recorder is operated.
  • In addition, a notification sound (beep) sounds when the home appliance itself reports its state, but since it is not speech, a person cannot understand it intuitively without getting used to it.
  • The present invention has been made in view of the above problems, and an object of the present invention is to provide a home appliance management system, a home appliance, a remote control device, a robot, and a facial expression data distribution system that can improve convenience for the user.
  • A home appliance management system according to one aspect of the present invention is a system in which a plurality of home appliances and a management server that manages the home appliances are connected to each other via a communication network. Each home appliance includes a first state information output unit that outputs state information indicating the state of its own device to the management server via the communication network, and a state information notification unit that notifies the user of state information acquired from the management server via the communication network. The management server includes a second state information output unit that outputs state information acquired from a home appliance to at least one of the plurality of home appliances connected to the communication network.
  • FIG. 6 is a schematic configuration block diagram of the robot constituting the remote control system shown in FIG. 5.
  • A timing chart shows the flow of remote processing in the remote control system shown in FIG. 5.
  • Another timing chart shows the flow of remote processing for preventing malfunction of the robot in the remote control system shown in FIG. 5.
  • A schematic block diagram shows the remote control system according to Embodiment 3 of the present invention.
  • A schematic block diagram shows the robot constituting that remote control system.
  • A diagram explains the download of facial expression data for changing the facial expression of that robot.
  • A diagram shows an example of the normal state of the downloaded facial expression data.
  • FIG. 1 is a diagram illustrating a configuration of a home appliance management system 1 according to the present embodiment.
  • The home appliance management system 1 includes an utterance management server (management server) 10 in the cloud and a plurality of home appliances 20 (A, B, C, D, ...) on the local network, connected to each other via a communication network.
  • The utterance management server 10 is connected in the cloud to an external content server 30 that provides content such as weather forecasts, and is connected locally to a smartphone (terminal device) 40 via the communication network.
  • the Internet can be used as the communication network.
  • a telephone line network, a mobile communication network, a CATV (Cable TeleVision) communication network, a satellite communication network, or the like can be used.
  • The utterance management server 10 acquires, via the communication network, state information indicating the state (such as operating status) of a certain home appliance 20 (for example, home appliance A), and transmits it via the communication network to another home appliance 20 (for example, home appliance B), so that home appliance B notifies the user of the state information of home appliance A. Details of this mechanism are described later.
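  • The relay just described can be pictured with a minimal sketch, written here in Python purely for illustration; the class and method names are assumptions, not taken from the patent.

      # Minimal sketch: a management server that receives state information
      # from one appliance and forwards it to another for notification.
      class Appliance:
          def __init__(self, name):
              self.name = name

          def report_state(self, server, state):
              # first state information output unit: send own state to the server
              server.receive_state(self.name, state)

          def notify(self, text):
              # state information notification unit: speak or display the text
              print(f"[{self.name}] {text}")

      class UtteranceManagementServer:
          def __init__(self):
              self.appliances = {}

          def register(self, appliance):
              self.appliances[appliance.name] = appliance

          def receive_state(self, source, state):
              # second state information output unit: forward the state to
              # at least one other connected appliance
              for name, appliance in self.appliances.items():
                  if name != source:
                      appliance.notify(f"{source}: {state}")
                      break

      server = UtteranceManagementServer()
      fridge = Appliance("home appliance A (refrigerator)")
      tv = Appliance("home appliance B (television)")
      server.register(fridge)
      server.register(tv)
      fridge.report_state(server, "door is open")
      # -> [home appliance B (television)] home appliance A (refrigerator): door is open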
  • FIG. 2 shows a schematic block diagram of the utterance management server 10 and the home appliance 20.
  • the speech management server 10 includes a control unit 11, a storage unit 12, and a communication unit 13.
  • The control unit 11 is a block that controls the operation of each unit of the utterance management server 10. Specifically, the control unit 11 is, for example, a computer device including an arithmetic processing unit such as a CPU (Central Processing Unit) or a dedicated processor, and it comprehensively controls the operation of each unit of the utterance management server 10 by reading out and executing a program, stored in the storage unit 12, for performing various controls in the utterance management server 10.
  • control unit 11 has functions as a state information acquisition unit (information acquisition unit) 14, a state identification unit 15, an utterance content selection unit 16, an output destination selection unit 17, an output control unit 18, and a content acquisition unit 19.
  • The state information acquisition unit 14 is a block that acquires, from a home appliance 20, state information indicating the state of that appliance. Specifically, the state information acquisition unit 14 acquires, via the communication unit 13, state information transmitted from any one of the plurality of home appliances 20 connected to the communication network. This state information includes identification information for identifying the home appliance 20. The state information acquisition unit 14 thus sends the state specifying unit 15 state information whose identification information indicates which of the plurality of home appliances 20 it came from.
  • the state identification unit 15 is a block that identifies the state of the home appliance 20 from the state information sent from the state information acquisition unit 14.
  • The state of the home appliance 20 varies by appliance: for example, the door being open if the appliance is a refrigerator, a stop due to an error if it is an air conditioner, or the power being on if it is a television. The state specifying unit 15 therefore determines from the state information which such state the appliance is in, and sends specific state information indicating the specified state to the utterance content selection unit 16. The specific state information indicates states 1 to 3 described later.
  • the utterance content selection unit 16 is a block for selecting the utterance content corresponding to the specific state information sent from the state specification unit 15 from the utterance content storage unit 121 in the storage unit 12.
  • Utterance content can be classified into the following three types:
  • (1) Notifications from each home appliance 20: for example, operation completion, errors, timer start, etc.
  • (2) Cloud information from the external content server 30: weather forecasts, news, etc.
  • (3) Information from operations on the smartphone 40 or a PC: for example, setting changes, messages, timer settings, etc.
  • the utterance content is stored in the utterance content storage unit 121 in the storage unit 12.
  • In the utterance content storage unit 121, for example, as shown in FIG. 3, utterance content is stored in association with the state of the home appliance 20 specified by the state specifying unit 15.
  • The utterance content shown in FIG. 3 corresponds to the notifications from each home appliance of type (1) above, and states 1 to 3 are the specific state information specified by the state specifying unit 15. Specifically, state 1 indicates that a recorder has failed to record, state 2 indicates that an air conditioner has stopped abnormally, and state 3 indicates that a refrigerator door is open; the utterance content is associated with these states 1 to 3.
  • The utterance content is normally stored as text data, but it may also be stored as an audio file, in which case speech synthesis processing in the home appliance 20 becomes unnecessary.
  • the output destination selection unit 17 is a block for selecting the output destination of the utterance content selected by the utterance content selection unit 16. Specifically, the output destination selection unit 17 selects the home appliance 20 that is the output destination stored in the output destination database 122 in the storage unit 12 according to the utterance content. Details of the output destination selection criteria will be described later.
  • The output control unit (second state information output unit) 18 is a block that causes the communication unit 13 to transmit (output), as state information, the utterance content selected by the utterance content selection unit 16 to the output-destination home appliance 20 selected by the output destination selection unit 17.
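  • As a small illustration of this pipeline, the following Python sketch maps the specific state information (states 1 to 3) to utterance text in the style of FIG. 3 and hands the result to an output step; the table keys and function names are invented for the example.

      # Sketch of the FIG. 3-style association between specific states and
      # utterance content stored as text data.
      UTTERANCE_CONTENT = {
          "state1_recording_failed": "The recorder failed to start recording.",
          "state2_aircon_abnormal_stop": "The air conditioner has stopped abnormally.",
          "state3_fridge_door_open": "The refrigerator door is open.",
      }

      def select_and_output(specific_state: str, destination: str) -> None:
          text = UTTERANCE_CONTENT[specific_state]   # utterance content selection unit 16
          print(f"send to {destination}: {text}")    # output control unit 18 via communication unit 13

      select_and_output("state3_fridge_door_open", "television")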
  • the speech management server 10 includes a content acquisition unit 19 in the control unit 11.
  • the content acquisition unit 19 is a block that acquires external content from the external content server 30 shown in FIG. Specifically, when receiving an instruction for acquiring a weather forecast from the smartphone 40 or the like, the content acquisition unit 19 acquires weather information as external content from the external content server 30 connected to the utterance management server 10.
  • the home appliance 20 is an air conditioner, a television, a recorder, a refrigerator, a lighting device, or the like.
  • FIG. 2 shows only the components of the home appliance 20 that are required in this embodiment; the other components are omitted.
  • the home appliance 20 includes a control unit 21, an audio output unit 22, and a communication unit 23 as shown in FIG.
  • the control unit 21 is a block that controls the operation of each unit of the home appliance 20.
  • The control unit 21 is, for example, a computer device including an arithmetic processing unit such as a CPU (Central Processing Unit) or a dedicated processor, and it comprehensively controls the operation of each unit of the home appliance 20 by reading out and executing a program, stored in a data storage unit (not shown), for performing various controls in the home appliance 20.
  • control unit 21 has functions as a state information extraction unit 24, an utterance content acquisition unit 25, a speech synthesis unit 26, and an output control unit 27.
  • The state information extraction unit 24 is a block that extracts state information indicating the state of the home appliance 20. Specifically, the state information extraction unit 24 extracts, from sensors and the like, state information such as that the appliance has stopped abnormally, that a recording reservation has failed, or that a door is open. The state information extraction unit 24 transmits the extracted state information to the utterance management server 10 via the communication unit 23. The state information extraction unit 24 and the communication unit 23 therefore function as a first state information output unit that outputs state information indicating the state of the own device to the utterance management server 10 via the communication network.
  • the utterance content acquisition unit 25 is a block that acquires the utterance content transmitted from the utterance management server 10.
  • The utterance content acquired by the utterance content acquisition unit 25 derives from state information indicating the state of a home appliance 20 other than the own device. As shown in FIG. 3, this utterance content is text data, and it is sent to the speech synthesizer 26 to be output as speech.
  • the speech synthesizer 26 is a block that generates speech data (speech synthesis). Specifically, the speech synthesizer 26 generates utterance content composed of text data as speech data.
  • the output control unit 27 is a block that performs voice output by causing the voice output unit 22 to output the voice data generated by the voice synthesis unit 26.
  • The audio output unit 22 is, for example, a speaker, and it outputs the audio data to a user near the home appliance 20 to convey the utterance content.
  • the output control unit 27 and the voice output unit 22 function as a state information notification unit that notifies the state information acquired from the utterance management server 10 via the communication network.
  • When the home appliance performing the notification is a television, the content can also be displayed as an image.
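  • The appliance-side handling (utterance content acquisition unit 25, voice synthesis unit 26, output control unit 27) might look like the following sketch; the text-to-speech call is a stub standing in for whatever synthesis engine the device actually uses.

      def synthesize(text: str) -> bytes:
          # voice synthesis unit 26: turn text data into audio data (stubbed)
          return text.encode("utf-8")

      def handle_utterance(appliance_type: str, text: str) -> None:
          if appliance_type == "television":
              # a television may display the content as an image instead
              print(f"[TV screen] {text}")
          else:
              audio = synthesize(text)
              print(f"[speaker] playing {len(audio)} bytes of audio")  # audio output unit 22

      handle_utterance("television", "Washing is complete.")
      handle_utterance("air_conditioner", "Washing is complete.")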
  • (1) The utterance management server 10 selects an active home appliance as the home appliance (output destination) operated by the user.
  • the utterance management server 10 selects, as a home appliance (output destination) operated by the user, a home appliance in which the presence of the user near the home appliance is detected by a human sensor included in the home appliance.
  • the utterance management server 10 detects the timing when the home appliance is operated, and selects the home appliance as the home appliance (output destination) operated by the user. For example, when the user is watching television, the television is selected as an output destination, and for example, the fact that washing by the washing machine has been completed is displayed on the display screen of the television.
  • (2) A home appliance designated in advance by the user is preset as the output destination.
  • a home appliance installed in a room where the user is frequently present is preset as an output destination.
  • the air conditioner installed in the living room is set to speak. That is, the air conditioner is selected as the output destination.
  • Alternatively, each home appliance is set to speak the content most suitable for its product type. For example, weather-related content is assigned to home appliances such as an air conditioner and a washing machine, and ingredient-related content to home appliances such as a refrigerator and a microwave range. These home appliances are then selected as the output destinations for that utterance content.
  • The home appliance that speaks may also be set according to the user. For example, if the user is the mother, a kitchen appliance is selected as the output destination, and if the user is a child, the air conditioner in the child's room is selected as the output destination.
  • Methods of identifying the user include using a camera provided in the home appliance, using voice recognition by a voice recognition function, and using a wearable device or mobile phone (including a smartphone) carried by each user.
  • Communication with wearable devices and mobile phones is performed using Bluetooth (registered trademark).
  • All home appliances are set as output destinations.
  • For example, when the notified content is information indicating an abnormal stop of the air conditioner, all home appliances constituting the home appliance management system 1 are selected as output destinations, and the abnormal stop of the air conditioner is notified to the user from all of them.
  • Even when all home appliances are selected as output destinations, it is not necessary to broadcast the content from every appliance at once. For example, when the user is moving, notification may first be given from the home appliance currently closest to the user, and then from nearby appliances as the user moves.
  • The output destination may also be selected according to the content to be notified, distinguishing content that should be notified repeatedly (for example, recording reservation information) from content that need not be repeated (for example, a weather forecast). The home appliance used for re-notification may be a currently active home appliance as described in (1) above, or a preset home appliance as described in (2) above.
  • A home appliance may also notify content according to the time. For example, when the content to be notified is a morning alarm, the home appliance that performs the notification is set to the air conditioner in the bedroom. A sketch combining these selection criteria follows below.
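  • A hedged sketch of the selection criteria above; the priority order among the criteria and the helper fields are assumptions made for illustration.

      def select_output_destinations(appliances, content):
          # (1) prefer an appliance the user is operating or is detected near
          active = [a for a in appliances if a.get("active") or a.get("user_nearby")]
          if active:
              return active[:1]
          # content matched to product type (weather -> air conditioner, etc.)
          matched = [a for a in appliances if content["topic"] in a.get("topics", [])]
          if matched:
              return matched
          # urgent content such as an abnormal stop goes to every appliance
          if content.get("urgent"):
              return list(appliances)
          # (2) otherwise fall back to a preset appliance
          return [a for a in appliances if a.get("preset")] or appliances[:1]

      appliances = [
          {"name": "tv", "active": True, "topics": ["news"]},
          {"name": "aircon", "topics": ["weather"], "preset": True},
      ]
      print(select_output_destinations(appliances, {"topic": "weather"}))
      # -> the active TV wins under this assumed priority order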
  • The present invention is not limited to this configuration; for example, a home appliance 20 may itself have the functions of the utterance management server 10. A home appliance having the functions of the utterance management server 10 is described below.
  • FIG. 4 is a block diagram of the home appliance 50 having the function of the utterance management server 10.
  • parts having the same functions as the parts included in the speech management server 10 and the home appliance 20 shown in FIG. 2 are denoted by the same reference numerals, and detailed description thereof is omitted.
  • the home appliance 50 includes a control unit 51, an audio output unit 22, a communication unit 23, and a storage unit 12, as shown in FIG.
  • The control unit 51 functions as a state information acquisition unit 14, state specifying unit 15, utterance content selection unit 16, output destination selection unit 17, state information extraction unit 24, utterance content acquisition unit 25, speech synthesis unit 26, and output control unit 27. That is, the control unit 51 functions as the same state information acquisition unit 14, state specifying unit 15, utterance content selection unit 16, and output destination selection unit 17 as in the utterance management server 10 shown in FIG. 2, and also as the same state information extraction unit 24, utterance content acquisition unit 25, speech synthesis unit 26, and output control unit 27 as in the home appliance 20 shown in FIG. 2.
  • The home appliance 50 can send and receive data to and from a plurality of other home appliances via a communication network (not shown). Accordingly, the home appliance 50 transmits the state information of its own device to other home appliances 50 via the communication network, and receives the state information of other home appliances 50 via the communication network. In other words, the home appliance 50 does not transmit and receive data to and from other home appliances 50 via the utterance management server 10 as in FIG. 1, but directly.
  • The selection criterion for the transmission destination (output destination) is the same as that described in Embodiment 1. The home appliance 50 therefore selects the destination home appliance 50 by itself using that criterion, and transmits the state information to the selected home appliance 50.
  • the home appliance 50 itself may set the notification destination of the state information.
  • the setting of the notification destination is performed in the output destination selection unit 17. That is, in the home appliance 50, the output destination selection unit 17 functions as a notification destination setting unit that sets a notification destination of state information.
  • Examples of setting the notification destination are given below; the notification destination is not limited to these.
  • The home appliance that actually performs the notification is set by voice. That is, the output destination selection unit (notification destination setting unit) 17 sets the notification destination of the state information according to the user's voice.
  • the home appliance 50 realizes the above setting by recognizing a user's voice by a voice recognition function (not shown).
  • For example, by speaking, the user sets the end of washing to be notified from the refrigerator, which is another home appliance 50. The end of the washing cycle is then announced to the user from the refrigerator.
  • Alternatively, the notification destination is set according to the location of the home appliance that performs the notification. That is, the output destination selection unit (notification destination setting unit) 17 sets, as the notification destination of the state information, a room in which at least one home appliance connected to the communication network is installed.
  • the setting here may be set by the user by voice as in (6), but may be set manually.
  • For example, by speaking, the user sets the kitchen (a room) as the place where the end of washing is notified. The end of the washing cycle is then announced to the user from the refrigerator in the kitchen.
  • Since home appliances in the kitchen include not only the refrigerator but also, for example, a microwave oven and lighting, the microwave oven and lighting may be set together with the refrigerator as a group of home appliances that notify the user.
  • In this way, the state information of a home appliance that has no speech function can also be notified to the user. A sketch of this destination setting appears below.
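  • A minimal sketch of the notification destination setting unit of the home appliance 50, assuming a simple room registry; the data layout is illustrative only.

      ROOMS = {"kitchen": ["refrigerator", "microwave oven", "lighting"]}
      notification_destinations = {}  # event -> list of notifying appliances

      def set_destination_by_voice(event: str, spoken_target: str) -> None:
          # if the user names a room, expand it to the appliance group there
          targets = ROOMS.get(spoken_target, [spoken_target])
          notification_destinations[event] = targets

      set_destination_by_voice("washing finished", "kitchen")
      print(notification_destinations)
      # {'washing finished': ['refrigerator', 'microwave oven', 'lighting']}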
  • the output destination is selected in units of home appliances, but the output destination may be selected in units of rooms.
  • In this case, the content may be notified using a home appliance that is operating in the selected room. This avoids a situation in which notification is impossible because the notifying appliance is not operating, and the content can be reliably conveyed to the user.
  • In Embodiment 1 and Modifications 1 and 2, when notifying, the content may be displayed as an image, or a meaningful sound such as an alarm sound may be produced instead of an utterance.
  • As described above, the utterance content of each utterance-capable home appliance is aggregated on the cloud server, and by selecting the delivery-destination home appliance, the utterance content can be reliably conveyed to the user.
  • As a result, the user no longer misses information from home appliances in distant locations, and can hear the information at an appropriate timing.
  • FIG. 5 is a diagram showing a configuration of the remote control system 2 according to the present embodiment.
  • The remote control system 2 includes a robot (remote control device, operation instruction unit) 60 having a voice utterance function, a voice recognition function, and an infrared transmission function; a first home appliance 70 having a voice utterance function and an infrared reception function; a second home appliance 8 having a voice utterance function and a voice recognition function; and a third home appliance 90 having a notification sound output function and an infrared reception function.
  • The robot 60 speaks to the user using its voice utterance function, gives operation instructions by utterance to the second home appliance 8, which has a voice recognition function, recognizes with its voice recognition function the user's utterances and the utterances of the first home appliance 70 and the second home appliance 8, which have voice utterance functions, and transmits infrared signals indicating operation instructions to the first home appliance 70 and the third home appliance 90, which have infrared reception functions.
  • the robot 60 gives an operation command to the second home appliance 8 by voice, and gives an operation command to the first home appliance 70 and the third home appliance 90 by infrared rays.
  • the robot 60 receives a state notification from the first home appliance 70 and the second home appliance 8 by voice and receives a state notification from the third home appliance 90 by a notification sound. That is, the robot 60 functions as an acquisition unit that acquires notification sound or sound output from the home appliance. Furthermore, the robot 60 performs a state check on the second home appliance 8 by voice.
  • the status notification is notification of status information indicating the status of the home appliance.
  • State confirmation means querying a home appliance for its state information. That is, the robot 60 queries the second home appliance 8 about its state by voice, and receives the state information of that device from the second home appliance 8 by voice.
  • The robot 60 in the remote control system 2 gives an operation instruction to a home appliance, then receives a state notification (voice or notification sound) from that appliance and analyzes it. If the robot determines that the appliance is not operating according to the instruction, it gives the instruction to the appliance again; if the appliance is operating according to the instruction, it stops issuing the instruction. At this time, the robot 60 may also report the result of the operation instruction to the user.
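  • The confirm-and-retry flow can be sketched as follows; send_ir() and listen() are stand-ins for the robot's actual infrared and microphone I/O, and the retry count is an assumption.

      import time

      def send_ir(command: str) -> None:
          print(f"IR -> {command}")

      def listen(timeout_s: float = 3.0):
          # would capture and analyze the appliance's voice or notification
          # sound from the microphone; stubbed to always succeed here
          return "operating as instructed"

      def command_with_retry(command: str, max_retries: int = 2) -> bool:
          for attempt in range(max_retries + 1):
              send_ir(command)
              reply = listen()
              if reply and "operating" in reply:
                  return True          # appliance confirmed the operation
              time.sleep(1.0)          # wait before re-issuing the instruction
          return False                 # give up and report the result to the user

      print(command_with_retry("air conditioner on"))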
  • FIG. 6 shows a schematic configuration of the robot 60.
  • The robot 60 includes a control unit 61, a data storage unit 62, an infrared transmission/reception unit 63, a microphone 64, a speaker 65, a display unit 66, an operation unit 67, a camera 68, and a sensor 69 (a human sensor, a temperature sensor, etc.).
  • the control unit 61 includes a computer device configured by an arithmetic processing unit such as a CPU or a dedicated processor, and is a block that controls the operation of each unit of the robot 60. Further, the control unit 61 functions as a voice recognition unit 611, a notification sound analysis unit (analysis unit) 612, a state specification unit 613, an output control unit 614, a voice synthesis unit 615, an operation command specification unit 616, and a determination unit 617.
  • the voice recognition unit 611 is a block that recognizes input voices from the user, the first home appliance 70, and the second home appliance 8. Specifically, the voice recognition unit 611 converts voice data input from the microphone 64 into text data, analyzes the text data, and extracts words and phrases. A known technique can be used for voice recognition processing.
  • The notification sound analysis unit 612 is a block that analyzes the notification sound from the third home appliance 90. Specifically, the notification sound analysis unit 612 identifies, from the notification sound input from the microphone 64, which state of the home appliance the sound indicates.
  • A plurality of types of notification sounds are prepared according to the state of the home appliance, such as a single beep that outputs a sound of a predetermined frequency for a certain period, or a repeated beeping pattern.
  • the state specifying unit 613 is a block that specifies the state of the home appliance from the voice recognized by the voice recognition unit 611 and the notification sound (analysis result) specified by the notification sound analysis unit 612. Specifically, the state specifying unit 613 specifies the state of the home appliance from the recognized voice, the specified notification sound, and the state information stored in the state information storage unit 621 of the data storage unit 62.
  • The output control unit 614 is a block that determines, from the home appliance state specified by the state specifying unit 613, whether to give the operation instruction to the appliance again. Specifically, the output control unit 614 determines whether the specified state indicates operation according to the instruction. If it determines that the appliance is not operating according to the instruction, it gives the instruction again, either by infrared using the infrared transmission/reception unit 63 or by voice using the speaker 65 and the voice synthesis unit 615. If it determines that the appliance is operating according to the instruction, it either does nothing or informs the user that the appliance is operating as instructed; since the user only needs to know this fact, it may be conveyed by voice using the speaker 65 or displayed on the display unit 66.
  • the voice synthesizer 615 is a block that generates (synthesizes) voice data. Specifically, the voice synthesizer 615 synthesizes voice data to be output from the speaker 65 from text data indicating an operation instruction.
  • the operation command specifying unit 616 is a block that specifies an operation command for the home appliance from the operation instruction content by voice from the user. Specifically, the operation command specifying unit 616 extracts an operation command corresponding to the content of the operation command by voice from the user from the operation command storage unit 622 of the data storage unit 62.
  • the determination unit 617 is a block that determines whether to accept a recognized voice command. Details of the determination unit 617 will be described later.
  • The user issues a voice command to the robot 60.
  • the robot 60 specifies the operation command from the voice command by the operation command specifying unit 616 as described above, and transmits the operation command to another home appliance (home appliance specified by the voice command) by the infrared signal.
  • Other home appliances accept operation instructions by infrared signals.
  • the other home appliance is a home appliance having a voice utterance function like the first home appliance 70.
  • the other home appliances make voice utterances according to the accepted operation command.
  • This voice utterance is received by the robot 60.
  • If the voice utterance includes content indicating that the home appliance operated according to the operation command, completion of the operation command is confirmed; if it includes content indicating that the appliance did not operate according to the command, an infrared signal is transmitted to the appliance again to re-execute the command. If the command has completed, the process is terminated. When the robot 60 has confirmed that the home appliance operated in accordance with the operation command, it may notify the user of that fact.
  • In this way, the robot 60 confirms from the voice utterance of the other home appliance whether it operated according to the operation command; if the home appliance has no voice utterance function, the robot confirms from the notification sound whether the appliance operated according to the instruction.
  • For example, when the robot 60 hears a certain notification sound, asks the user "What happened?", and the user answers "The air conditioner has stopped", the robot stores that notification sound in the notification sound storage unit 624 of the data storage unit 62 in association with "air conditioner stopped". Thereafter, when the robot 60 hears the same notification sound, it understands that the air conditioner has stopped.
  • By storing the notification sounds heard in various situations in association with those situations, the robot 60 can act according to a detected notification sound. The robot 60 can thus decide whether to retry merely by hearing the notification sound, as in the sketch below.
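  • A sketch of that learning step: after asking the user what an unknown sound meant, the robot stores the sound's signature with the meaning, so the next occurrence is understood without asking. A real implementation would key on an acoustic fingerprint rather than the string assumed here.

      notification_sounds = {}  # sound signature -> meaning

      def ask_user(question: str) -> str:
          print(question)
          return "the air conditioner has stopped"  # stubbed user reply

      def on_sound(signature: str):
          if signature in notification_sounds:
              return notification_sounds[signature]   # understood immediately
          # unknown sound: ask the user and remember the answer
          notification_sounds[signature] = ask_user("What happened?")
          return None

      on_sound("beep-beep-1kHz")          # first time: asks the user
      print(on_sound("beep-beep-1kHz"))   # -> "the air conditioner has stopped"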
  • Since the robot 60 issues operation commands by transmitting infrared rays, a command does not reach the target home appliance unless the transmission direction of the infrared rays is suitable for that appliance. In that case, the home appliance targeted by the operation command emits no voice utterance or notification sound to the robot 60.
  • Therefore, when no voice utterance or notification sound is emitted after the robot 60 transmits an operation command, the robot may retry, for example by changing its orientation. If, even after changing the orientation of the robot 60, no voice utterance or notification sound is produced within a predetermined time after the command is transmitted, the retry processing is terminated and the user is notified that the retry processing has ended.
  • the robot 60 includes a display unit 66 and an operation unit 67.
  • the display unit 66 is a block that displays an image of the facial expression of the robot 60.
  • the display unit 66 performs display by the rear projection method, but is not limited thereto.
  • the operation unit 67 is a block that executes the operation of the robot 60.
  • The operation unit 67 rotates the robot 60. As illustrated in FIG. 8, the operation unit 67 rotates the robot 60 to a position where an operation command can be issued to the first home appliance 70, and to a position where the voice utterance output from the speaker 70a of the first home appliance 70 can be heard.
  • The robot 60 estimates the position of the first home appliance 70 from the direction in which the voice utterance from the speaker 70a of the first home appliance 70 is heard. By estimating the position of each home appliance in this way and storing it in the arrangement direction storage unit 623 of the data storage unit 62, the operation unit 67 can rotate the robot toward the direction in which the target appliance is installed before the operation command is executed, as in the sketch below.
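  • A sketch of the arrangement direction storage unit 623 in use: the robot remembers the bearing at which each appliance's utterance was heard and rotates toward it before transmitting; the angle handling is illustrative.

      arrangement_directions = {}  # appliance name -> bearing in degrees

      def remember_direction(appliance: str, heard_bearing_deg: float) -> None:
          arrangement_directions[appliance] = heard_bearing_deg

      def face_and_command(appliance: str, command: str, current_deg: float) -> None:
          target = arrangement_directions.get(appliance, current_deg)
          rotate_by = (target - current_deg) % 360
          print(f"rotate {rotate_by:.0f} degrees, then IR -> {command}")

      remember_direction("first home appliance 70", 90.0)
      face_and_command("first home appliance 70", "power on", current_deg=0.0)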
  • The robot 60 having the above-described configuration issues an operation command to the target home appliance in response to a voice command from the user.
  • the robot 60 recognizes the words uttered by the user as voice commands.
  • There is, however, a risk of erroneous operation instructions; that is, a home appliance not intended by the user may operate. For example, if the robot 60 recognizes a voice command to turn on the air conditioner from words the user merely happened to utter, the air conditioner may be turned on by an operation command.
  • the robot 60 judges whether to accept the voice command and give an operation instruction to the home appliance based on a preset judgment criterion.
  • As an example of such a preset criterion, a method is described below in which the robot, using its utterance function, asks back the user who issued the voice command, and decides whether to accept the command based on the result.
  • FIG. 10 shows the process of FIG. 7 with a two-stage exchange between the robot 60 and the user 80 added before the robot 60 transmits the infrared signal to execute the command.
  • Since the robot 60 asks the user 80 back and further determines whether to accept the voice command based on the answer from the user 80, malfunction can be prevented.
  • However, the user 80 may find it stressful to hear the same response from the robot 60 every time. Therefore, when the user is asked whether to accept a voice command and the command is the same as the previous one, the content of the confirmation utterance is varied from the previous time. For example, if the robot 60 first asks, "Do you want to turn on the air conditioner?", from the second time onward it asks, for example, "Shall I turn on the air conditioner?". Even when the same content is confirmed again, the different wording reduces the stress felt by the user 80. In this case, the confirmation utterance should be phrased so that it does not sound unnatural.
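  • Varying the confirmation wording for a repeated command can be sketched as below; the English phrasings stand in for the Japanese originals.

      CONFIRMATIONS = [
          "Do you want to turn on the air conditioner?",
          "Shall I turn on the air conditioner?",
          "Would you like the air conditioner on?",
      ]
      repeat_counts = {}

      def confirmation_for(command: str) -> str:
          n = repeat_counts.get(command, 0)
          repeat_counts[command] = n + 1
          return CONFIRMATIONS[n % len(CONFIRMATIONS)]  # rotate the wording

      print(confirmation_for("aircon on"))  # first phrasing
      print(confirmation_for("aircon on"))  # a different phrasing the second time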
  • the above method is a method in which the robot confirms with the user.
  • Other confirmation methods include the following methods.
  • As another method for determining whether to accept a voice command, the camera 68 and the sensor 69 provided in the robot 60 can be used.
  • The camera 68 is used to photograph the user 80 and to confirm whether the user 80 is facing the front with respect to the camera 68.
  • The robot 60 according to the present embodiment is rotated by the operation unit 67 so as to face the direction from which the voice was heard, so the camera 68 can photograph the user from the front.
  • As the sensor 69, a human sensor that detects the presence of a person is used to detect whether the user 80 is nearby. In this case as well, the determination unit 617 described above determines whether to accept the voice command.
  • The determination unit 617 accepts the voice command if the user 80 is facing the front; if not, it determines that the voice command is not accepted. When the user 80 is not facing the front, the robot may, as described above, use its utterance function to confirm with the user 80 whether the voice command should be accepted.
  • The determination unit 617 accepts the voice command if the sensor 69 detects a person when the robot 60 recognizes the command; if no person is detected, it determines that the user 80 is not nearby and does not accept the command. That is, voice operation is not enabled unless the presence of a person is detected.
  • In this way, when the robot 60 recognizes a voice command, it can determine whether to accept the command without asking the user 80 for confirmation.
  • Alternatively, voice operation may be enabled while a hand is held over a touch sensor. Specifically, when the robot 60 recognizes a voice command while the user 80 holds a hand over its touch sensor, the command is accepted without confirming with the user 80, and the operation command based on the voice command is executed. As described above, a voice command alone carries a risk of home appliance operation through misrecognition, but according to the present embodiment, adding a confirmation exchange can prevent malfunction. These criteria are combined in the sketch below.
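  • A sketch of the determination unit 617 combining the criteria above (camera, human sensor, touch sensor); the rule that touch overrides the others is an assumption for illustration.

      def should_accept(facing_front: bool, person_detected: bool,
                        touch_held: bool) -> bool:
          if touch_held:
              return True           # explicit enable: accept without confirmation
          if not person_detected:
              return False          # no one nearby: ignore the command
          return facing_front       # otherwise require the user to face the robot

      print(should_accept(facing_front=True, person_detected=True, touch_held=False))
      # -> True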
  • FIG. 11 is a diagram showing a configuration of the character data distribution system 3 according to the present embodiment.
  • a robot 100 and a server (distribution server) 300 are connected to each other via a communication network.
  • the server 300 is connected to an external content server 400 that provides content (characters), and is connected to a smartphone (terminal device) 201 and a PC 202 via the communication network.
  • the Internet can be used as the communication network.
  • a telephone line network, a mobile communication network, a CATV (Cable TeleVision) communication network, a satellite communication network, or the like can be used.
  • The robot 100 downloads, from the server 300, character data associated with a preset account 301, and selects from the downloaded character data the facial expression data corresponding to an emotion such as joy, anger, sorrow, or pleasure. A face image is projected from inside the robot onto a face region 100a corresponding to a person's face. That is, the robot 100 in the character data distribution system 3 estimates its own emotion using a predetermined algorithm, and displays the facial expression data corresponding to the estimated emotion on the face region 100a using the projector 66a and the reflecting mirror 66b of the display unit 66.
  • The robot's emotion is parameterized by the internal state of the main unit (remaining battery level, etc.), the external environment (temperature, humidity, brightness, time, etc.), and the number, frequency, and content of conversations, and it is comprehensively calculated and determined using a probability table. For example, if the remaining battery level is abundant, the temperature is comfortable, and the relationship with the user is good (frequent conversation, being praised), the robot is in a good mood and smiling facial expression data is selected. Late at night or early in the morning, time is used as a parameter to select a sleepy expression.
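  • That calculation might be sketched as follows, with made-up weights feeding a probability table; only the parameter kinds (battery, temperature, time, conversation) come from the description.

      import random

      def choose_expression(battery: float, temperature_c: float,
                            hour: int, praised_recently: bool) -> str:
          weights = {"smile": 1.0, "neutral": 1.0, "sleepy": 0.2}
          if battery > 0.7 and 18 <= temperature_c <= 26:
              weights["smile"] += 1.5   # comfortable internal/external state
          if praised_recently:
              weights["smile"] += 1.0   # good relationship with the user
          if hour < 6 or hour >= 23:
              weights["sleepy"] += 3.0  # late night or early morning
          faces = list(weights)
          return random.choices(faces, weights=[weights[f] for f in faces], k=1)[0]

      print(choose_expression(battery=0.9, temperature_c=22, hour=14,
                              praised_recently=True))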
  • FIG. 12 shows a schematic configuration of the robot 100.
  • the robot 100 has substantially the same configuration as the robot 60 shown in FIG. 6 described in the second embodiment, and is different in that it includes a communication unit 101 and a facial expression database (download buffer) 102. Description of the same components is omitted, and only different components are described below.
  • the communication unit 101 is a means for communicating with the server 300 in the character data distribution system 3.
  • The facial expression database 102 stores downloaded character data. Note that, in the initial state, a facial expression data group corresponding to basic emotions such as joy, anger, sorrow, and pleasure is stored in the facial expression database 102 as basic character data.
  • FIG. 13 is a diagram illustrating the download of character data for changing the expression of the robot.
  • the server 300 stores a plurality of types of character data for each account 301, and downloads and distributes the character data to the robot 100 as necessary.
  • FIG. 13 shows an example in which two types of character data, (1) and (2), are stored in one account 301 of the server 300; the downloaded character data (1) or (2) replaces the basic character data already stored in the facial expression database 102 of the robot 100.
  • the download distribution may be performed not in character data units but in facial expression data units.
  • In that case, the facial expression data distributed by download replaces the corresponding facial expression data of the basic character and is stored.
  • An instruction to distribute character data or display data to the server 300 is issued from the smartphone 201 or the PC 202 operated by a user using the robot 100. Specifically, the user instructs distribution of character data or display data stored in a predetermined account 301 in the server 300 from the smartphone 201 or the PC 202.
  • Since one account 301 can correspond to a plurality of robots 100, the same character data is downloaded and distributed to every robot 100 that can access the same account.
  • If the smartphone 201 or PC 202 operated by the user can access an account 301 (account (B)) different from its own account 301 (account (A)), character data can be downloaded and distributed to a robot 100 that can access account (B). Using this, character data can be downloaded to another person's robot; for example, a company can distribute a trial version of character data, and the user can purchase it if they like it.
  • The robot 100 normally has no emotion, but when it utters content to which an emotion has been linked in advance in dialogue with the user, it extracts from the facial expression database 102 the facial expression data showing the emotion associated with the utterance content and displays it in the face region 100a, so that the robot 100 appears to have emotion.
  • A numerical value is assigned in advance to each facial expression data item of the character data stored in the facial expression database 102, and the same numerical value is assigned to the emotion linked to the utterance content spoken by the robot 100. Facial expression data having the same numerical value as that assigned to the emotion associated with the utterance content is then extracted from the facial expression database 102 and displayed on the face region 100a.
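  • The numeric linkage can be shown in a few lines; the file names and numbers are invented for the example.

      expression_db = {1: "joy.png", 2: "anger.png", 3: "sadness.png", 4: "comfort.png"}
      utterances = [("Good morning!", 1), ("Please close the door.", 3)]

      for text, emotion_id in utterances:
          face = expression_db[emotion_id]  # same numeric value on both sides
          print(f"say {text!r} while displaying {face}")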
  • The character data distributed from the server 300 includes facial expression data corresponding to emotions such as joy, anger, sorrow, and pleasure.
  • Character data is created so that one type of facial expression data corresponds to one type of emotion, such as joy, anger, sadness, or pleasure.
  • FIG. 14 is a diagram showing variations of facial expression data in the normal state (a comfortable/joyful state).
  • FIG. 15 is a diagram showing variations of facial expression data in a specific state (an angry/sad/troubled state; specific modes).
  • The normal-state facial expression data includes facial expression data indicating emotions of comfort and joy, such as data indicating a joyful state and data indicating a comfortable state.
  • Each degree, that is, the degree of anger, sadness, or trouble, is divided into four levels, and facial expression data corresponding to each level is created.
  • facial expression data in a specific mode can be given as facial expression data in a specific state.
  • the specific mode includes an answering machine mode, a sleep mode, a dozing mode, and a remote control operation mode, and facial expression data corresponding to each mode is created.
  • the server 300 downloads and distributes the character data to the robot 100 accessible for each user account 301.
  • the user instructs the server 300 to perform download distribution using the smartphone 201 and the PC 202.
  • The character data downloaded and distributed to the account 301 of the server 300 is distributed from the external content server 400. In this case as well, the smartphone 201 or PC 202 is used to instruct the server 300 to download it from the external content server 400.
  • When the robot 100 downloads character data that has already been downloaded to the server 300, the user operating the smartphone 201 or PC 202 that instructs the download distribution is not charged.
  • The control blocks (particularly the control unit 21 and the control unit 61) of the home appliance 20, the robot 60, and the robot 100 may be realized by a logic circuit (hardware) formed on an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit).
  • In the latter case, the home appliance 20, the robot 60, and the robot 100 include a CPU that executes the instructions of a program (software realizing each function), a ROM (Read Only Memory) or storage device (referred to as a "recording medium") in which the program and various data are recorded so as to be readable by a computer (or CPU), and a RAM (Random Access Memory) into which the program is expanded.
  • As the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used.
  • the program may be supplied to the computer via an arbitrary transmission medium (such as a communication network or a broadcast wave) that can transmit the program.
  • One aspect of the present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
  • The home appliance management system according to aspect 1 of the present invention is a home appliance management system 1 in which a plurality of home appliances 20 and a management server (utterance management server 10) that manages the home appliances 20 are connected to each other via a communication network. Each home appliance 20 includes a first state information output unit (state information extraction unit 24 and communication unit 23) that outputs state information indicating the state of its own device to the management server (utterance management server 10) via the communication network, and a state information notification unit (output control unit 27 and audio output unit 22) that notifies state information acquired from the management server (utterance management server 10) via the communication network. The management server includes a second state information output unit (output control unit 18) that outputs state information acquired from a home appliance 20 to at least one of the plurality of home appliances 20 connected to the communication network.
  • According to the above configuration, the management server outputs state information acquired from a home appliance via the communication network to at least one of the home appliances connected to the communication network, so a home appliance can acquire the state information of other home appliances. A home appliance may also acquire from the management server not only the state information of other home appliances but also the state information of its own device.
  • In the home appliance management system according to aspect 2 of the present invention, in aspect 1, the second state information output unit preferably outputs state information to a home appliance that is powered on among the home appliances connected to the communication network. A powered-on home appliance, that is, one in an active state, is likely to be in use by the user; if the state information of other home appliances is notified from it, the user can reliably learn that state information.
  • For example, if the active home appliance is a TV and the other home appliance is a refrigerator, the state information of the refrigerator (for example, that the door is open) is notified from the TV, so the user can learn the state of the refrigerator even while watching TV.
  • In the home appliance management system according to aspect 3 of the present invention, in aspect 1, the second state information output unit preferably outputs state information to a home appliance, among those connected to the communication network, set in advance according to the content of the acquired state information. Since the state information is output to a home appliance preset according to its content, the user can learn from that appliance the state information appropriate to it.
  • In the home appliance management system according to aspect 4 of the present invention, the state information notification unit preferably notifies the state information by voice. Since the state information is notified by voice, the user can reliably grasp its contents.
  • The home appliance according to aspect 5 of the present invention is a home appliance connected to a communication network together with a plurality of other home appliances, and includes an information acquisition unit that acquires, from another home appliance connected to the communication network, state information indicating the state of that home appliance, and a state information notification unit that notifies the state information acquired by the information acquisition unit.
  • The home appliance according to aspect 6 of the present invention preferably further includes, in aspect 5, a notification destination setting unit that sets the notification destination of the state information, the notification destination setting unit setting the notification destination of the state information according to the user's voice.
  • The home appliance according to aspect 7 of the present invention preferably further includes, in aspect 5, a notification destination setting unit that sets the notification destination of the state information, the notification destination setting unit setting, as the notification destination of the state information, a room in which at least one home appliance connected to the communication network is installed.
  • In the home appliance according to aspect 8 of the present invention, the state information notification unit preferably notifies the state information by voice.
  • The remote control device according to aspect 9 of the present invention includes an operation instruction unit that gives an operation instruction to a home appliance, an acquisition unit that acquires the notification sound or voice output by the home appliance in response to the operation instruction of the operation instruction unit, an analysis unit that analyzes the notification sound or voice acquired by the acquisition unit, and a state specifying unit that specifies the state of the home appliance from the analysis result of the analysis unit.
  • the operation instruction was performed by specifying the state of the said household appliance from the result of having analyzed the alert sound or audio
  • the remote control device itself can know the state of the home appliance.
  • The remote control device according to aspect 10 of the present invention further includes, in aspect 9, a determination unit that determines, from the state of the home appliance specified by the state specifying unit, whether or not to give the operation instruction to the home appliance again, and the operation instruction unit preferably gives the operation instruction again when the determination unit so determines.
  • Because the operation instruction unit repeats the operation instruction whenever the determination unit decides a retry is needed, the operation instruction can be given reliably; that is, the home appliance can be operated reliably.
  • The remote control device according to aspect 11 of the present invention further includes, in aspect 10, an installation direction specifying unit that specifies, from the notification sound or voice acquired by the acquisition unit, the installation direction of the home appliance as seen from the device itself, and a storage unit that stores the installation direction specified by the installation direction specifying unit; when the determination unit determines that the operation instruction should be given to the home appliance again, the operation instruction unit preferably gives it toward the installation direction of the home appliance stored in the storage unit.
  • Since the retried operation instruction is aimed at the stored installation direction of the home appliance, the operation instruction can reliably reach the home appliance (see the third sketch after this list).
  • In the remote control device according to aspect 12 of the present invention, the remote control device is a robot that gives an operation instruction to a home appliance based on a received voice command, and it includes a determination unit that determines, according to a preset criterion, whether or not to accept a recognized voice command and give the operation instruction.
  • By judging on the basis of the preset criterion whether to accept a voice command and instruct the home appliance, malfunctions of home appliances caused by voice commands the user did not intend can be prevented. The following aspects give examples of such criteria.
  • The remote control device according to aspect 13 of the present invention further includes, in aspect 12, an utterance function, and the determination unit preferably decides whether to accept the recognized voice command according to the answer obtained by using the utterance function to ask back the user from whom the voice command was obtained.
  • Since the user hears the spoken question from the remote control device, the user can reliably understand what is being confirmed, and malfunctions of home appliances can therefore be reliably prevented.
  • In the remote control device according to aspect 14 of the present invention, in aspect 13, when the device asks the user whether to accept a voice command and that voice command is the same as the previous one, the content of the utterance used to ask back is preferably made different from the previous time (see the fourth sketch after this list).
  • A robot according to aspect 15 of the present invention is a robot that displays a face image in a predetermined region, and includes a storage unit that stores facial expression data indicating emotions and a control unit that acquires from the storage unit the facial expression data corresponding to an emotion determined by a predetermined criterion and displays it as the face image in the predetermined region.
  • Since the control unit acquires from the storage unit the facial expression data matching the emotion determined by the predetermined criterion and displays it as the face image, the robot itself can select and change the facial expression of its face image (see the fifth sketch after this list).
  • A facial expression data distribution system according to the present invention is characterized in that the robot having the above configuration and a distribution server that distributes facial expression data indicating emotions to the robot are connected to a communication network.
  • Since the facial expression data that changes the display of the robot's face is distributed from the distribution server, the display of the robot's face can be changed according to the user's preference; in other words, the facial expressions can be customized.
  • Preferably, an expression data providing server or a terminal device that provides facial expression data to the distribution server for a fee is further connected to the communication network.
  • The present invention can be suitably used in a system in which a plurality of home appliances are connected to a communication network, in a remote control device for operating home appliances by voice commands, and the like.
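
The five sketches below are editorial illustrations added to this summary; they do not appear in the patent itself, and every class, function, threshold, and parameter name in them is hypothetical. This first sketch, referenced from the aspect 1-3 items above, shows one plausible shape for a management server that forwards an appliance's status information to the powered-on appliances on the network, or to a destination preset for that kind of status (Python is used throughout for concreteness).

    class Appliance:
        def __init__(self, name, powered_on=True):
            self.name = name
            self.powered_on = powered_on

        def notify(self, message):
            # Stand-in for the appliance's own notification output.
            print(f"[{self.name}] {message}")

    class ManagementServer:
        def __init__(self, appliances):
            self.appliances = appliances
            self.routing = {}  # status content -> preset destination appliance (aspect 3)

        def report_status(self, source, status):
            target = self.routing.get(status)
            if target is not None:        # aspect 3: a destination is preset for this status
                target.notify(f"{source.name}: {status}")
                return
            for appliance in self.appliances:
                if appliance.powered_on:  # aspects 1-2: every active appliance, source included
                    appliance.notify(f"{source.name}: {status}")

    tv = Appliance("TV")
    fridge = Appliance("refrigerator")
    washer = Appliance("washing machine", powered_on=False)
    server = ManagementServer([tv, fridge, washer])
    server.report_status(fridge, "door is open")  # reaches the TV and the refrigerator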
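
The second sketch, referenced from the aspect 5-8 items, pairs a voice-based notification unit with a notification destination setting unit that picks the destination room out of the user's utterance; the room map and the print-based "voice" are stand-ins under the same hypothetical naming.

    class DestinationSetter:
        def __init__(self, rooms):
            self.rooms = rooms   # room name -> appliances installed there (aspect 7)
            self.destination = None

        def set_from_voice(self, utterance):
            # Aspect 6: choose the destination room named in the user's voice,
            # or fall back to all rooms when none is named.
            self.destination = next((r for r in self.rooms if r in utterance), None)

    class VoiceNotifier:
        @staticmethod
        def speak(text):
            print(f'(spoken) "{text}"')  # stand-in for text-to-speech output (aspects 4 and 8)

        def notify(self, setter, status):
            rooms = [setter.destination] if setter.destination else list(setter.rooms)
            for room in rooms:
                for appliance in setter.rooms[room]:
                    self.speak(f"{appliance} in the {room}: {status}")

    setter = DestinationSetter({"living room": ["TV"], "kitchen": ["refrigerator"]})
    setter.set_from_voice("notify me in the living room")
    VoiceNotifier().notify(setter, "the laundry is finished")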
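
The third sketch, referenced from the aspect 9-11 items, reduces the remote control loop to its skeleton: send an instruction, treat a captured beep as evidence that it took effect, and otherwise retry toward the stored installation direction. The success probabilities are toy assumptions.

    import random

    def send_ir_command(command, direction=None):
        # Operation instruction unit: an unaimed IR burst may miss the appliance.
        aimed = f" toward {direction}" if direction else ""
        print(f"sending {command!r}{aimed}")
        return random.random() < (0.9 if direction else 0.5)  # toy success model

    def capture_notification_sound(succeeded):
        # Acquisition unit: the appliance beeps when the instruction takes effect.
        return "beep" if succeeded else None

    def specify_state(sound):
        # Analysis unit plus state specifying unit, reduced to a beep check.
        return "operated" if sound == "beep" else "unchanged"

    stored_direction = "30 degrees left"  # kept by the storage unit after a prior analysis (aspect 11)

    state = specify_state(capture_notification_sound(send_ir_command("power_on")))
    if state == "unchanged":
        # Determination unit (aspect 10): retry, now aimed at the stored direction.
        state = specify_state(
            capture_notification_sound(send_ir_command("power_on", stored_direction)))
    print("final state:", state)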
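
The fourth sketch, referenced from the aspect 12-14 items, gates a recognized voice command behind an assumed confidence threshold; below it, the robot asks the user back, rewording the question when the same command is heard again.

    CONFIDENCE_THRESHOLD = 0.8  # assumed preset acceptance criterion (aspect 12)

    CONFIRMATIONS = [
        "Did you say '{cmd}'?",
        "Just to be sure: shall I really run '{cmd}'?",  # varied wording (aspect 14)
    ]

    class CommandGate:
        def __init__(self):
            self.last_command = None
            self.ask_count = 0

        def handle(self, command, confidence, user_says_yes):
            if confidence >= CONFIDENCE_THRESHOLD:
                return f"executing {command!r}"
            # Aspect 13: use the utterance function to ask the user back.
            self.ask_count = self.ask_count + 1 if command == self.last_command else 0
            self.last_command = command
            question = CONFIRMATIONS[self.ask_count % len(CONFIRMATIONS)].format(cmd=command)
            print(f'(robot asks) "{question}"')
            return f"executing {command!r}" if user_says_yes else "command rejected"

    gate = CommandGate()
    print(gate.handle("turn on the air conditioner", 0.55, user_says_yes=True))
    print(gate.handle("turn on the air conditioner", 0.60, user_says_yes=True))  # reworded question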
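
The fifth sketch, referenced from the aspect 15 item, keeps the storage unit as a dictionary keyed by emotion (ASCII faces stand in for image data) and lets a control-unit function fetch and display the expression matching the emotion determined by a simple criterion.

    EXPRESSIONS = {        # storage unit: emotion -> facial expression data
        "joy":      "(^_^)",
        "sadness":  "(;_;)",
        "surprise": "(o_o)",
        "neutral":  "(-_-)",
    }

    def determine_emotion(event):
        # A predetermined criterion mapping recent events to an emotion.
        return {"task_succeeded": "joy", "command_rejected": "sadness"}.get(event, "neutral")

    def display_face(event):
        # Control unit: fetch the expression for the determined emotion and show it
        # in the predetermined display region.
        emotion = determine_emotion(event)
        print(f"display region shows {EXPRESSIONS[emotion]}  ({emotion})")

    display_face("task_succeeded")  # -> (^_^)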

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Selective Calling Equipment (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

 The invention makes it possible to increase convenience of use for a user. In a home appliance management system (1), an utterance management server (10) comprises an output control unit (18) for outputting state information acquired from a home appliance (20) to at least one home appliance (20) among a plurality of home appliances (20) connected to a communication network.
PCT/JP2015/074117 2014-10-03 2015-08-26 Système de gestion d'appareil ménager, appareil ménager, dispositif de commande à distance et robot WO2016052018A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014205268A JP2016076799A (ja) 2014-10-03 2014-10-03 家電管理システム、家電、リモコン装置、ロボット
JP2014-205268 2014-10-03

Publications (1)

Publication Number Publication Date
WO2016052018A1 true WO2016052018A1 (fr) 2016-04-07

Family

ID=55630064

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/074117 WO2016052018A1 (fr) 2014-10-03 2015-08-26 Système de gestion d'appareil ménager, appareil ménager, dispositif de commande à distance et robot

Country Status (2)

Country Link
JP (1) JP2016076799A (fr)
WO (1) WO2016052018A1 (fr)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109506349A (zh) * 2017-09-15 2019-03-22 夏普株式会社 网络系统、信息处理方法以及服务器
WO2019069596A1 (fr) * 2017-10-03 2019-04-11 東芝ライフスタイル株式会社 Système d'appareil électroménager
EP3454333A4 (fr) * 2016-05-03 2019-12-25 LG Electronics Inc. -1- Dispositif électronique et son procédé de commande
JP2020085313A (ja) * 2018-11-22 2020-06-04 ダイキン工業株式会社 空気調和システム
WO2020158615A1 (fr) * 2019-01-29 2020-08-06 ダイキン工業株式会社 Système de climatisation
CN112331195A (zh) * 2019-08-05 2021-02-05 佛山市顺德区美的电热电器制造有限公司 语音交互方法、装置以及系统
CN113037600A (zh) * 2019-12-09 2021-06-25 夏普株式会社 通知控制装置、通知控制系统以及通知控制方法
CN114040265A (zh) * 2017-07-14 2022-02-11 大金工业株式会社 设备操作系统
US20240111645A1 (en) * 2021-04-06 2024-04-04 Panasonic Intellectual Property Management Co., Ltd. Utterance test method for utterance device, utterance test server, utterance test system, and program

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8600120B2 (en) 2008-01-03 2013-12-03 Apple Inc. Personal computing device control using face detection and recognition
US9002322B2 (en) 2011-09-29 2015-04-07 Apple Inc. Authentication with secondary approver
US9898642B2 (en) 2013-09-09 2018-02-20 Apple Inc. Device, method, and graphical user interface for manipulating user interfaces based on fingerprint sensor inputs
US10043185B2 (en) 2014-05-29 2018-08-07 Apple Inc. User interface for payments
JP7351745B2 (ja) 2016-11-10 2023-09-27 ワーナー・ブラザース・エンターテイメント・インコーポレイテッド 環境制御機能を有する社会ロボット
KR101949497B1 (ko) * 2017-05-02 2019-02-18 네이버 주식회사 사용자 발화의 표현법을 파악하여 기기의 동작이나 컨텐츠 제공 범위를 조정하여 제공하는 사용자 명령 처리 방법 및 시스템
US11048995B2 (en) 2017-05-16 2021-06-29 Google Llc Delayed responses by computational assistant
EP4155988A1 (fr) 2017-09-09 2023-03-29 Apple Inc. Mise en oeuvre de l'authentification biometrique pour l'execution d'une fonction respective
JP2019068319A (ja) * 2017-10-03 2019-04-25 東芝ライフスタイル株式会社 家電システム
JP6960823B2 (ja) * 2017-10-30 2021-11-05 三菱電機株式会社 音声解析装置、音声解析システム、音声解析方法及びプログラム
JP2019101492A (ja) * 2017-11-28 2019-06-24 トヨタ自動車株式会社 コミュニケーション装置
JP2019103073A (ja) * 2017-12-06 2019-06-24 東芝ライフスタイル株式会社 電気機器および電気機器システム
JP6530528B1 (ja) * 2018-03-26 2019-06-12 株式会社エヌ・ティ・ティ・データ 情報処理装置及びプログラム
JP6938415B2 (ja) * 2018-03-29 2021-09-22 東京瓦斯株式会社 警報ロボット、プログラムおよびシステム
US11170085B2 (en) 2018-06-03 2021-11-09 Apple Inc. Implementation of biometric authentication
JP7170428B2 (ja) * 2018-06-08 2022-11-14 三菱電機株式会社 電気機器、通信アダプタ、電気機器の設定方法、及び、プログラム
JP6463545B1 (ja) * 2018-08-22 2019-02-06 株式会社ネイン 情報処理装置、コンピュータプログラムおよび情報処理方法
US11100349B2 (en) 2018-09-28 2021-08-24 Apple Inc. Audio assisted enrollment
US10860096B2 (en) * 2018-09-28 2020-12-08 Apple Inc. Device control using gaze information
US20210158682A1 (en) * 2019-03-26 2021-05-27 Panasonic Intellectual Property Management Co., Ltd. Information notification system and information notification method
CN112400189B (zh) * 2019-03-26 2023-07-28 松下知识产权经营株式会社 信息通知系统以及信息通知方法
WO2020195821A1 (fr) * 2019-03-26 2020-10-01 ソニー株式会社 Dispositif, procédé et programme de traitement d'informations
JP7253975B2 (ja) * 2019-05-20 2023-04-07 三菱電機株式会社 通知システム
JP2021068370A (ja) * 2019-10-28 2021-04-30 ソニー株式会社 情報処理装置、情報処理方法、及びプログラム
JP7422455B2 (ja) * 2019-10-29 2024-01-26 キヤノン株式会社 通信装置、通信装置の制御方法、プログラム
JP7341426B2 (ja) * 2019-11-27 2023-09-11 国立大学法人岩手大学 通知システム、通知システムにおける制御装置、及び通知システムにおける制御方法
JP7458765B2 (ja) * 2019-12-12 2024-04-01 東芝ライフスタイル株式会社 情報処理システム、家電機器、およびプログラム
JP7366734B2 (ja) * 2019-12-19 2023-10-23 東芝ライフスタイル株式会社 通知システム
WO2021131682A1 (fr) * 2019-12-23 2021-07-01 ソニーグループ株式会社 Dispositif de traitement d'informations, procédé de traitement d'informations et programme
JP7442330B2 (ja) 2020-02-05 2024-03-04 キヤノン株式会社 音声入力装置およびその制御方法ならびにプログラム
WO2022079953A1 (fr) * 2020-10-16 2022-04-21 パナソニックIpマネジメント株式会社 Dispositif de commande de notification, système de commande de notification, et procédé de commande de notification
CN114651095A (zh) * 2020-10-16 2022-06-21 松下知识产权经营株式会社 通知控制装置、通知控制系统和通知控制方法
US20230032760A1 (en) 2021-08-02 2023-02-02 Bear Robotics, Inc. Method, system, and non-transitory computer-readable recording medium for controlling a serving robot

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003162626A (ja) * 2001-11-22 2003-06-06 Sharp Corp 情報報知システム及び機器
JP2013162314A (ja) * 2012-02-03 2013-08-19 Sharp Corp 通知システム及び通知方法

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11030996B2 (en) 2016-05-03 2021-06-08 Lg Electronics Inc. Electronic device and control method thereof
EP3454333A4 (fr) * 2016-05-03 2019-12-25 LG Electronics Inc. -1- Dispositif électronique et son procédé de commande
CN114040265B (zh) * 2017-07-14 2024-05-24 大金工业株式会社 设备操作系统
CN114040265A (zh) * 2017-07-14 2022-02-11 大金工业株式会社 设备操作系统
JP2019052797A (ja) * 2017-09-15 2019-04-04 シャープ株式会社 ネットワークシステム、情報処理方法、およびサーバ
CN109506349A (zh) * 2017-09-15 2019-03-22 夏普株式会社 网络系统、信息处理方法以及服务器
WO2019069596A1 (fr) * 2017-10-03 2019-04-11 東芝ライフスタイル株式会社 Système d'appareil électroménager
JP2020085313A (ja) * 2018-11-22 2020-06-04 ダイキン工業株式会社 空気調和システム
CN113348329A (zh) * 2019-01-29 2021-09-03 大金工业株式会社 空气调节系统
JP2020122585A (ja) * 2019-01-29 2020-08-13 ダイキン工業株式会社 空気調和システム
EP3919831A4 (fr) * 2019-01-29 2022-03-23 Daikin Industries, Ltd. Système de climatisation
WO2020158615A1 (fr) * 2019-01-29 2020-08-06 ダイキン工業株式会社 Système de climatisation
CN112331195A (zh) * 2019-08-05 2021-02-05 佛山市顺德区美的电热电器制造有限公司 语音交互方法、装置以及系统
CN112331195B (zh) * 2019-08-05 2024-02-20 佛山市顺德区美的电热电器制造有限公司 语音交互方法、装置以及系统
CN113037600A (zh) * 2019-12-09 2021-06-25 夏普株式会社 通知控制装置、通知控制系统以及通知控制方法
US20240111645A1 (en) * 2021-04-06 2024-04-04 Panasonic Intellectual Property Management Co., Ltd. Utterance test method for utterance device, utterance test server, utterance test system, and program

Also Published As

Publication number Publication date
JP2016076799A (ja) 2016-05-12

Similar Documents

Publication Publication Date Title
WO2016052018A1 (fr) Système de gestion d'appareil ménager, appareil ménager, dispositif de commande à distance et robot
JP6475386B2 (ja) 機器の制御方法、機器、及びプログラム
CN108268235B (zh) 用于语音接口设备的对话感知主动通知
CN106297781B (zh) 控制方法和控制器
US10983753B2 (en) Cognitive and interactive sensor based smart home solution
JP6739907B2 (ja) 機器特定方法、機器特定装置及びプログラム
US10958457B1 (en) Device control based on parsed meeting information
CN105323648B (zh) 字幕隐藏方法和电子装置
WO2020216107A1 (fr) Procédé, appareil et système de traitement de données de conférence, et dispositif électronique
WO2016052164A1 (fr) Dispositif de conversation
CN105284107A (zh) 用于提供交互式广告的设备、系统、方法和计算机可读介质
WO2017141530A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
KR20200074680A (ko) 단말 장치 및 이의 제어 방법
US20210099787A1 (en) Headphones providing fully natural interfaces
Sfikas et al. Creating a Smart Room using an IoT approach
KR20220078866A (ko) 외부 장치의 음성 기반 제어를 위한 방법 및 그 전자 장치
KR20230133864A (ko) 스피치 오디오 스트림 중단들을 처리하는 시스템들및 방법들
US20220122600A1 (en) Information processing device and information processing method
JP2016206646A (ja) 音声再生方法、音声対話装置及び音声対話プログラム
US11818820B2 (en) Adapting a lighting control interface based on an analysis of conversational input
US20110216915A1 (en) Providing audible information to a speaker system via a mobile communication device
JP5990311B2 (ja) サーバ、報知方法、プログラム、制御対象機器、及び報知システム
JP2020061046A (ja) 音声操作装置、音声操作方法、コンピュータプログラムおよび音声操作システム
JP5973030B2 (ja) 音声認識システム、および音声処理装置
KR20190023610A (ko) 회의 중 휴식 시간 제안 방법, 전자장치 및 시스템

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15847769

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15847769

Country of ref document: EP

Kind code of ref document: A1