WO2016052018A1 - Home appliance management system, home appliance, remote control device, and robot - Google Patents

Home appliance management system, home appliance, remote control device, and robot Download PDF

Info

Publication number
WO2016052018A1
WO2016052018A1 (PCT/JP2015/074117)
Authority
WO
WIPO (PCT)
Prior art keywords
home appliance
unit
robot
user
voice
Prior art date
Application number
PCT/JP2015/074117
Other languages
French (fr)
Japanese (ja)
Inventor
Keiji Saka (圭司 坂)
Mitsuo Sakamoto (実雄 阪本)
Shunsuke Yamagata (俊介 山縣)
Takahiro Maeda (前田 隆宏)
Original Assignee
Sharp Corporation (シャープ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Corporation (シャープ株式会社)
Publication of WO2016052018A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G10L 13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q 9/00 Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station, in which substation desired apparatus is selected for applying a control signal thereto or for obtaining measured values therefrom

Definitions

  • the present invention relates to a home appliance management system, a home appliance, a remote control device, and a robot.
  • home appliances have been proposed that use voice recognition technology to perform operations corresponding to a user's voice commands, notify the user by voice of status information indicating the status of the device, and obtain the latest information via a communication network such as the Internet.
  • network-compatible home appliances that reproduce messages received from smartphones via a communication network have also been proposed (for example, Japanese Patent Application Laid-Open No. H10-228707).
  • Japanese Patent Publication: Japanese Patent Laid-Open No. 2008-046424 (published February 28, 2008)
  • the network-type home appliances described above are useful, but they have the following problems that reduce convenience for the user.
  • the utterance content is set on a one-to-one basis for each home appliance, so an utterance is missed when the user is not nearby, or the content cannot be heard at an appropriate time.
  • for example, an error notification indicating that a recording start process has failed can only be given the next time the recorder is operated.
  • a notification sound (beep) sounds when the home appliance itself reports its state, but since it is not speech, the user must learn what each sound means rather than understanding it intuitively.
  • the present invention has been made in view of the above problems, and an object of the present invention is to provide a home appliance management system, a home appliance, a remote control device, a robot, and a facial expression data distribution system that improve convenience for the user.
  • in order to solve the above problems, a home appliance management system according to one aspect of the present invention is a system in which a plurality of home appliances and a management server that manages the home appliances are connected to each other via a communication network. Each home appliance includes a first state information output unit that outputs state information indicating the state of its own device to the management server via the communication network, and a state information notification unit that notifies the user of state information acquired from the management server via the communication network.
  • the management server includes a second state information output unit that outputs state information acquired from a home appliance to at least one of the plurality of home appliances connected to the communication network.
  • FIG. 6 is a schematic block diagram of a robot constituting the remote control system shown in FIG. 5. A timing chart shows the flow of remote processing in the remote control system shown in FIG.
  • FIG. 6 is a timing chart showing the flow of remote processing for preventing malfunction of the robot in the remote control system shown in FIG.
  • a schematic block diagram of the remote control system according to Embodiment 3 of the present invention.
  • a schematic block diagram of the robot constituting the remote control system shown in FIG.
  • a diagram explaining the download of facial expression data for changing the facial expression of the robot shown in FIG.
  • a diagram showing an example of the normal state of the facial expression data downloaded in FIG.
  • FIG. 1 is a diagram illustrating a configuration of a home appliance management system 1 according to the present embodiment.
  • the home appliance management system 1 includes an utterance management server (management server) 10 in the cloud and a plurality of local home appliances 20 (A, B, C, D, ...), which are connected to each other via a communication network.
  • the utterance management server 10 is connected to an external content server 30 that provides content such as weather forecasts in the cloud, and is connected to a smartphone (terminal device) 40 locally via the communication network.
  • the Internet can be used as the communication network.
  • a telephone line network, a mobile communication network, a CATV (Cable TeleVision) communication network, a satellite communication network, or the like can be used.
  • the utterance management server 10 acquires state information indicating the state (such as driving status) of a certain home appliance 20 (for example, home appliance A) via the communication network, and another home appliance 20 (for example, home appliance B). ) Is transmitted via the communication network to notify the user of the status information of the home appliance A from the home appliance B. Details of this mechanism will be described later.
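As a rough illustration of this relay, the following Python sketch shows status information from home appliance A being repackaged for delivery to home appliance B. All field and function names are invented for illustration; the patent does not specify a message format.

```python
# Hypothetical sketch of the relay: the management server receives status
# information (carrying an appliance identifier) from appliance A and builds
# a notification addressed to appliance B.

def relay_status(status_message, output_appliance):
    """Forward status information from one appliance to another."""
    notification = {
        "source": status_message["appliance_id"],  # which appliance reported
        "state": status_message["state"],          # e.g. "door_open"
        "destination": output_appliance,           # appliance that will speak
    }
    return notification

msg = {"appliance_id": "appliance_A", "state": "door_open"}
result = relay_status(msg, "appliance_B")
```

In this sketch the destination is passed in explicitly; in the system described here it would be chosen by the output destination selection unit discussed later.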
  • FIG. 2 shows a schematic block diagram of the utterance management server 10 and the home appliance 20.
  • the speech management server 10 includes a control unit 11, a storage unit 12, and a communication unit 13.
  • the control unit 11 is a block that controls the operation of each unit of the utterance management server 10. That is, the control unit 11 is, for example, a computer device including an arithmetic processing unit such as a CPU (Central Processing Unit) or a dedicated processor, and comprehensively controls the operation of each unit of the utterance management server 10 by reading out and executing a program, stored in the storage unit 12, for performing various controls in the utterance management server 10.
  • control unit 11 has functions as a state information acquisition unit (information acquisition unit) 14, a state identification unit 15, an utterance content selection unit 16, an output destination selection unit 17, an output control unit 18, and a content acquisition unit 19.
  • the state information acquisition unit 14 is a block that acquires, from a home appliance 20, state information indicating the state of that appliance. Specifically, the state information acquisition unit 14 acquires, via the communication unit 13, state information transmitted from any one of the plurality of home appliances 20 connected to the communication network. This state information includes identification information for identifying the home appliance 20. The state information acquisition unit 14 then sends the state specifying unit 15 the state information, whose identification information indicates which of the plurality of home appliances 20 it came from.
  • the state specifying unit 15 is a block that specifies the state of the home appliance 20 from the state information sent from the state information acquisition unit 14.
  • the state of the home appliance 20 varies by appliance: for example, a refrigerator's door being open, an air conditioner being stopped due to an error, or a television's power being on. The state specifying unit 15 therefore determines from the state information which such state the home appliance is in, and sends specific state information indicating the specified state to the utterance content selection unit 16. The specific state information indicates states 1 to 3 described later.
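The state specifying step can be sketched as a simple mapping from raw status fields to a specific state label. The appliance types follow the examples above, but the field names and labels are assumptions for illustration.

```python
# Illustrative sketch of the state specifying unit 15: map raw status
# information to a specific state label. Field names are invented.

def specify_state(appliance_type, status_info):
    """Return a specific state label for the given appliance's status."""
    if appliance_type == "refrigerator" and status_info.get("door") == "open":
        return "door_open"
    if appliance_type == "air_conditioner" and status_info.get("error"):
        return "error_stop"
    if appliance_type == "tv" and status_info.get("power") == "on":
        return "power_on"
    return "unknown"
```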
  • the utterance content selection unit 16 is a block for selecting the utterance content corresponding to the specific state information sent from the state specification unit 15 from the utterance content storage unit 121 in the storage unit 12.
  • Utterance content can be classified into the following three types:
  • (1) Notifications from each home appliance 20: for example, operation completion, errors, timer start.
  • (2) Cloud information from the external content server 30: weather forecasts, news, etc.
  • (3) Information generated by operations from the smartphone 40 or a PC: for example, setting changes, messages, timer settings.
  • the utterance content is stored in the utterance content storage unit 121 in the storage unit 12.
  • in the utterance content storage unit 121, for example, as shown in FIG. 3, the utterance content is stored in association with the state of the home appliance 20 specified by the state specifying unit 15.
  • the utterance content shown in FIG. 3 corresponds to the notifications of type (1) above, and states 1 to 3 are the specific state information specified by the state specifying unit 15. Specifically, state 1 indicates that a recorder has failed to record, state 2 indicates that an air conditioner has stopped abnormally, and state 3 indicates that a refrigerator door is open; the utterance content corresponds to these states 1 to 3.
  • the utterance content is normally stored as text data, but may be stored as an audio file; in that case, the speech synthesis process in the home appliance 20 is not required.
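A minimal sketch of the table of FIG. 3, assuming the three states described above. The exact stored wording is not given verbatim in this translation, so the utterance strings here are placeholders.

```python
# Sketch of the utterance content storage unit 121: state number -> text.
# Utterance strings are illustrative placeholders.

STATE_UTTERANCES = {
    1: "Recording on the recorder has failed.",        # state 1
    2: "The air conditioner has stopped abnormally.",  # state 2
    3: "The refrigerator door is open.",               # state 3
}

def select_utterance(specific_state):
    """Return the utterance text (normally stored as text data) for a state."""
    return STATE_UTTERANCES.get(specific_state)
```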
  • the output destination selection unit 17 is a block for selecting the output destination of the utterance content selected by the utterance content selection unit 16. Specifically, the output destination selection unit 17 selects the home appliance 20 that is the output destination stored in the output destination database 122 in the storage unit 12 according to the utterance content. Details of the output destination selection criteria will be described later.
  • the output control unit (second state information output unit) 18 is a block that causes the communication unit 13 to transmit (output), as state information, the utterance content selected by the utterance content selection unit 16 to the home appliance 20 selected as the output destination by the output destination selection unit 17.
  • the speech management server 10 includes a content acquisition unit 19 in the control unit 11.
  • the content acquisition unit 19 is a block that acquires external content from the external content server 30 shown in FIG. Specifically, when receiving an instruction for acquiring a weather forecast from the smartphone 40 or the like, the content acquisition unit 19 acquires weather information as external content from the external content server 30 connected to the utterance management server 10.
  • the home appliance 20 is an air conditioner, a television, a recorder, a refrigerator, a lighting device, or the like.
  • the home appliance 20 shown in FIG. 2 includes only the components required in this embodiment; other components are omitted.
  • the home appliance 20 includes a control unit 21, an audio output unit 22, and a communication unit 23 as shown in FIG.
  • the control unit 21 is a block that controls the operation of each unit of the home appliance 20.
  • the control unit 21 is, for example, a computer device including an arithmetic processing unit such as a CPU (Central Processing Unit) or a dedicated processor, and comprehensively controls the operation of each unit of the home appliance 20 by reading out and executing a program, stored in a data storage unit (not shown), for performing various controls in the home appliance 20.
  • control unit 21 has functions as a state information extraction unit 24, an utterance content acquisition unit 25, a speech synthesis unit 26, and an output control unit 27.
  • the state information extraction unit 24 is a block that extracts state information indicating the state of the home appliance 20. Specifically, the state information extraction unit 24 extracts, from a sensor or the like, state information indicating that the home appliance has stopped abnormally, that a recording reservation has failed, or that a door is open. The state information extraction unit 24 transmits the extracted state information to the utterance management server 10 via the communication unit 23. The state information extraction unit 24 and the communication unit 23 therefore function as a first state information output unit that outputs state information indicating the state of the own device to the utterance management server 10 via the communication network.
  • the utterance content acquisition unit 25 is a block that acquires the utterance content transmitted from the utterance management server 10.
  • the utterance content acquired by the utterance content acquisition unit 25 is the utterance content derived from the state information indicating the state of the home appliance 20 other than the home appliance 20 that is the own device. As shown in FIG. 3, this utterance content is text data, and is sent to the speech synthesizer 26 for output as speech.
  • the speech synthesizer 26 is a block that generates speech data (speech synthesis). Specifically, the speech synthesizer 26 generates utterance content composed of text data as speech data.
  • the output control unit 27 is a block that performs voice output by causing the voice output unit 22 to output the voice data generated by the voice synthesis unit 26.
  • the audio output unit 22 is, for example, a speaker, and outputs the generated voice data so as to notify a user near the home appliance 20 of the utterance content.
  • the output control unit 27 and the voice output unit 22 function as a state information notification unit that notifies the state information acquired from the utterance management server 10 via the communication network.
  • when the output destination home appliance is a television, the content can also be displayed as an image.
  • the utterance management server 10 selects an active home appliance as a home appliance (output destination) operated by the user.
  • the utterance management server 10 selects, as a home appliance (output destination) operated by the user, a home appliance in which the presence of the user near the home appliance is detected by a human sensor included in the home appliance.
  • the utterance management server 10 detects the timing when the home appliance is operated, and selects the home appliance as the home appliance (output destination) operated by the user. For example, when the user is watching television, the television is selected as an output destination, and for example, the fact that washing by the washing machine has been completed is displayed on the display screen of the television.
  • a home appliance designated in advance by the user is set as the output destination.
  • a home appliance installed in a room where the user is frequently present is preset as an output destination.
  • the air conditioner installed in the living room is set to speak. That is, the air conditioner is selected as the output destination.
  • each home appliance is set to speak the content most suited to that product. For example, content related to the weather is assigned to home appliances such as an air conditioner and a washing machine, and content related to ingredients is assigned to home appliances such as a refrigerator and a microwave oven. That is, these home appliances are selected as the output destination of the utterance content.
  • the home appliance that speaks is set according to the user. For example, if the user is the mother, a kitchen appliance is selected as the output destination, and if the user is a child, the air conditioner in the child's room is selected as the output destination.
  • methods for identifying the user include using a camera provided in the home appliance, using voice recognition by a voice recognition function, and using a wearable device or mobile phone (including a smartphone) carried by each user.
  • for wearable devices and mobile phones, communication is performed using Bluetooth (registered trademark).
  • All home appliances are set as output destinations.
  • for example, when the notified content is information indicating an abnormal stop of the air conditioner, all home appliances constituting the home appliance management system 1 are selected as output destinations, and the abnormal stop of the air conditioner is notified to the user from all of them.
  • all home appliances are selected as output destinations, but it is not necessary to broadcast content from each home appliance all at once. For example, when the user is moving, first, a notification may be given from the home appliance closest to the user at the present time, and notification may be made from a nearby home appliance as the user moves.
  • the output destination may also be selected according to the content: for example, content that needs to be notified repeatedly (recording reservation information) is handled differently from content that does not need to be repeated (a weather forecast).
  • the home appliance used for re-notification may be a currently active home appliance as described in (1) above, or a preset home appliance as described in (2) above.
  • the content may also be notified according to the time. For example, when the content to be notified is a morning alarm, the home appliance for notifying the content is set to an air conditioner in a bedroom.
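Several of the selection criteria above can be combined into one selection routine. The sketch below is a hedged illustration: urgent content goes to every appliance, otherwise appliances near the user (human sensor) are preferred, then currently active ones. Field names are invented.

```python
# Illustrative sketch of the output destination selection unit 17.

def select_output_destinations(appliances, urgent=False):
    """Return the names of the appliances that should speak the content."""
    if urgent:
        # e.g. an abnormal stop: notify from all appliances
        return [a["name"] for a in appliances]
    # prefer appliances whose human sensor detects the user nearby
    near_user = [a["name"] for a in appliances if a.get("user_nearby")]
    if near_user:
        return near_user
    # otherwise fall back to appliances currently in operation
    return [a["name"] for a in appliances if a.get("active")]

fleet = [
    {"name": "tv", "active": True},
    {"name": "refrigerator", "user_nearby": True},
    {"name": "air_conditioner"},
]
```

Content-type and time-of-day rules, as described above, could be layered on top of this in the same way.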
  • the present invention is not limited to this; for example, the home appliance 20 may itself have the function of the utterance management server 10.
  • a home appliance having the function of the utterance management server 10 is described below.
  • FIG. 4 is a block diagram of the home appliance 50 having the function of the utterance management server 10.
  • parts having the same functions as the parts included in the speech management server 10 and the home appliance 20 shown in FIG. 2 are denoted by the same reference numerals, and detailed description thereof is omitted.
  • the home appliance 50 includes a control unit 51, an audio output unit 22, a communication unit 23, and a storage unit 12, as shown in FIG.
  • the control unit 51 functions as a state information acquisition unit 14, state specifying unit 15, utterance content selection unit 16, output destination selection unit 17, state information extraction unit 24, utterance content acquisition unit 25, speech synthesis unit 26, and output control unit 27. That is, the control unit 51 provides the same state information acquisition unit 14, state specifying unit 15, utterance content selection unit 16, and output destination selection unit 17 as the utterance management server 10 shown in FIG. 2, and the same state information extraction unit 24, utterance content acquisition unit 25, speech synthesis unit 26, and output control unit 27 as the home appliance 20 shown in FIG. 2.
  • the home appliance 50 can send and receive data to and from a plurality of other home appliances via a communication network (not shown). The home appliance 50 transmits the state information of its own device to other home appliances 50 via the communication network, and receives the state information of other home appliances 50 via the same network. In other words, the home appliance 50 exchanges data with other home appliances 50 directly, not via the utterance management server 10 as shown in FIG. 1.
  • the selection criterion for the transmission destination (output destination) is the same as the selection criterion described in Embodiment 1. The home appliance 50 therefore itself selects the destination home appliance 50 using that criterion, and transmits the state information to the selected home appliance 50.
  • the home appliance 50 itself may set the notification destination of the state information.
  • the setting of the notification destination is performed in the output destination selection unit 17. That is, in the home appliance 50, the output destination selection unit 17 functions as a notification destination setting unit that sets a notification destination of state information.
  • the notification destination is not limited to these.
  • the home appliance that actually performs the notification is set by voice. That is, the output destination selection unit (notification destination setting unit) 17 sets the notification destination of the state information according to the user's voice.
  • the home appliance 50 realizes this setting by recognizing the user's voice with a voice recognition function (not shown).
  • for example, the user sets by voice that the end of washing is to be notified from the refrigerator, which is another home appliance 50. The end of the washing cycle is then announced to the user from the refrigerator.
  • the notification destination is set according to the location (room) of the home appliance that performs the notification. That is, the output destination selection unit (notification destination setting unit) 17 sets, as the notification destination of the state information, a room in which at least one home appliance connected to the communication network is installed.
  • this setting may be made by voice as in (6), or manually.
  • for example, the user sets by voice that the kitchen (room) is to notify the end of washing. The end of the washing cycle is then announced to the user from the refrigerator in the kitchen.
  • since home appliances in the kitchen include not only a refrigerator but also a microwave oven and lighting, the microwave oven and lighting may be set together with the refrigerator as a group of home appliances that notify the user.
  • in this way, the state information of a home appliance that does not have a speech function can also be notified to the user.
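The room-based grouping above can be sketched as a simple room-to-appliance table; the room and appliance names follow the kitchen example, and the structure is an assumption for illustration.

```python
# Illustrative room-to-appliance grouping for the notification destination
# setting described above.

ROOM_APPLIANCES = {
    "kitchen": ["refrigerator", "microwave_oven", "lighting"],
    "bedroom": ["air_conditioner"],
}

def destinations_for_room(room):
    """All appliances installed in the chosen room form the notifying group."""
    return ROOM_APPLIANCES.get(room, [])
```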
  • the output destination is selected in units of home appliances, but the output destination may be selected in units of rooms.
  • in this case, the content may be notified using a home appliance that is operating in the selected room. This avoids a situation in which notification is impossible because the notifying appliance is not operating, so the content can be reliably conveyed to the user.
  • in Embodiment 1 and Modifications 1 and 2, when notifying the user, the content may be displayed as an image, or a meaningful sound such as an alarm sound may be generated instead of an utterance.
  • by aggregating the utterance content of each utterance-compatible home appliance on the cloud server and selecting the delivery-destination home appliance, the utterance content can be reliably conveyed to the user.
  • as a result, the user does not miss information from home appliances at distant locations and can hear the information at an appropriate time.
  • FIG. 5 is a diagram showing a configuration of the remote control system 2 according to the present embodiment.
  • the remote control system 2 includes a robot (remote control device, operation instruction unit) 60 having a voice utterance function, a voice recognition function, and an infrared transmission function,
  • a first home appliance 70 having a voice utterance function and an infrared reception function,
  • a second home appliance 80 having a voice utterance function and a voice recognition function,
  • and a third home appliance 90 having a notification sound output function and an infrared reception function.
  • the robot 60 speaks to the user with its voice utterance function, gives operation instructions by utterance to the second home appliance 80 having a voice recognition function, recognizes the user's utterances and the utterances of the first home appliance 70 and the second home appliance 80 with its voice recognition function, and transmits infrared signals indicating operation instructions to the first home appliance 70 and the third home appliance 90, which have infrared reception functions.
  • that is, the robot 60 gives operation commands to the second home appliance 80 by voice, and gives operation commands to the first home appliance 70 and the third home appliance 90 by infrared.
  • the robot 60 receives state notifications from the first home appliance 70 and the second home appliance 80 by voice, and receives state notifications from the third home appliance 90 as notification sounds. That is, the robot 60 functions as an acquisition unit that acquires the notification sound or voice output from a home appliance. Furthermore, the robot 60 performs a state check on the second home appliance 80 by voice.
  • the status notification is notification of status information indicating the status of the home appliance.
  • state confirmation means confirming the state information of a home appliance. That is, the robot 60 queries the state of the second home appliance 80 by voice, and receives that appliance's state information from the second home appliance 80 by voice.
  • the robot 60 in the remote control system 2 gives an operation instruction to a home appliance, then receives a state notification (voice or notification sound) from that appliance and analyzes it to determine whether the appliance is operating according to the instruction. If it is not, the robot gives the operation instruction again; if it is, the robot stops issuing the instruction. At this time, the robot 60 may also report the result of the operation instruction to the user.
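The instruct-confirm-retry loop just described can be sketched as follows. The callables and the "ok" status value are assumptions for illustration; in the system they would correspond to infrared/voice transmission and the analyzed state notification.

```python
# Minimal sketch of the robot's instruct-confirm-retry loop.

def ensure_operation(send_instruction, read_status, max_retries=3):
    """Send an operation instruction, listen for the appliance's state
    notification, and retry until the appliance reports compliance."""
    for _ in range(max_retries):
        send_instruction()            # infrared or voice instruction
        if read_status() == "ok":     # analyzed state notification
            return True               # operating as instructed: stop
    return False                      # give up; report failure to the user

sent = []
statuses = iter(["failed", "ok"])     # first attempt fails, second succeeds
succeeded = ensure_operation(lambda: sent.append(1), lambda: next(statuses))
```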
  • FIG. 6 shows a schematic configuration of the robot 60.
  • the robot 60 includes a control unit 61, a data storage unit 62, an infrared transmission/reception unit 63, a microphone 64, a speaker 65, a display unit 66, an operation unit 67, a camera 68, and a sensor 69 (a human sensor, a temperature sensor, etc.).
  • the control unit 61 includes a computer device configured by an arithmetic processing unit such as a CPU or a dedicated processor, and is a block that controls the operation of each unit of the robot 60. Further, the control unit 61 functions as a voice recognition unit 611, a notification sound analysis unit (analysis unit) 612, a state specification unit 613, an output control unit 614, a voice synthesis unit 615, an operation command specification unit 616, and a determination unit 617.
  • the voice recognition unit 611 is a block that recognizes input voices from the user, the first home appliance 70, and the second home appliance 80. Specifically, the voice recognition unit 611 converts voice data input from the microphone 64 into text data, analyzes the text data, and extracts words and phrases. A known technique can be used for the voice recognition processing.
  • the notification sound analysis unit 612 is a block that analyzes the notification sound from the third home appliance 90. Specifically, the notification sound analysis unit 612 identifies, from the notification sound input from the microphone 64, which notification sound indicating a home appliance state it is.
  • a plurality of types of notification sounds are prepared according to the state of the home appliance, such as a "peep" that outputs a sound of a predetermined frequency for a certain period of time, and a repeated "peep, peep, ..." pattern.
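One way to distinguish such patterns is by the number and length of tone bursts. The thresholds and state names below are invented for illustration; the patent does not specify the analysis method.

```python
# Hedged sketch of notification sound analysis: one sustained "peep"
# versus rapid repeated beeps, classified by burst durations.

def classify_beep(burst_durations_ms):
    """Classify a sequence of tone-burst durations (milliseconds)."""
    if len(burst_durations_ms) == 1 and burst_durations_ms[0] >= 1000:
        return "operation_complete"   # a single long tone
    if len(burst_durations_ms) >= 3 and max(burst_durations_ms) < 300:
        return "error"                # short, rapidly repeated beeps
    return "unknown"
```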
  • the state specifying unit 613 is a block that specifies the state of the home appliance from the voice recognized by the voice recognition unit 611 and the notification sound (analysis result) specified by the notification sound analysis unit 612. Specifically, the state specifying unit 613 specifies the state of the home appliance from the recognized voice, the specified notification sound, and the state information stored in the state information storage unit 621 of the data storage unit 62.
  • the output control unit 614 is a block that determines, from the state of the home appliance specified by the state specifying unit 613, whether to give the operation instruction to the home appliance again. Specifically, the output control unit 614 determines whether the specified home appliance state shows that the appliance is operating according to the operation instruction. If it is not, the output control unit 614 issues the operation instruction again, either by infrared using the infrared transmission/reception unit 63 or by voice using the speaker 65 and the voice synthesis unit 615. If the appliance is operating according to the instruction, the output control unit 614 either does nothing or notifies the user that the appliance is operating as instructed. In the latter case, since the user only needs to know that the appliance is operating correctly, the result may be conveyed by voice using the speaker 65 or shown on the display unit 66.
  • the voice synthesizer 615 is a block that generates (synthesizes) voice data. Specifically, the voice synthesizer 615 synthesizes voice data to be output from the speaker 65 from text data indicating an operation instruction.
  • the operation command specifying unit 616 is a block that specifies an operation command for the home appliance from the content of the user's voice instruction. Specifically, the operation command specifying unit 616 extracts the operation command corresponding to the content of the user's voice instruction from the operation command storage unit 622 of the data storage unit 62.
  • the determination unit 617 is a block that determines whether to accept a recognized voice command. Details of the determination unit 617 will be described later.
  • the user outputs a voice command to the robot 60.
  • the robot 60 specifies the operation command from the voice command by the operation command specifying unit 616 as described above, and transmits the operation command to another home appliance (home appliance specified by the voice command) by the infrared signal.
  • Other home appliances accept operation instructions by infrared signals.
  • the other home appliance is a home appliance having a voice utterance function like the first home appliance 70.
  • the other home appliances make voice utterances according to the accepted operation command.
  • This voice utterance is received by the robot 60.
  • if the voice utterance includes content indicating that the home appliance has operated according to the operation command, it is confirmed that the operation command has been completed, and the process is terminated. If the utterance includes content indicating that the home appliance has not operated according to the operation command, an infrared signal is transmitted to the other home appliance again in order to execute the operation command once more.
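The command-confirm-retry flow in the steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation; all names (`send_ir`, `listen_for_utterance`) and the retry count are invented for the sketch.

```python
def execute_command(appliance, command, send_ir, listen_for_utterance, max_retries=2):
    """Send an IR operation command, then confirm via the appliance's
    voice utterance; resend the command if no confirming utterance is heard."""
    for attempt in range(max_retries + 1):
        send_ir(appliance, command)                  # transmit the operation command
        utterance = listen_for_utterance(appliance)  # e.g. "air conditioner turned on"
        if utterance is not None and command in utterance:
            return True   # the appliance confirmed it operated per the command
    return False          # confirmation was never received
```

In practice the utterance check would be a speech-recognition match rather than a substring test; the substring stands in for that here.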
  • when the robot 60 has confirmed that the home appliance has operated in accordance with the operation command, this fact may be notified to the user.
  • the robot 60 confirms whether the home appliance has operated according to the operation command by means of the voice utterance from the other home appliance. If the home appliance does not have the voice utterance function, the robot 60 instead confirms whether the home appliance has operated according to the operation instruction by means of its notification sound.
  • for example, when the robot 60, having heard a certain notification sound, asks the user "What happened?" and the user answers "The air conditioner has stopped", the robot stores that notification sound in the notification sound storage unit 624 of the data storage unit 62 as the sound meaning "air conditioner stopped". Thereafter, the robot 60 understands that the air conditioner has stopped when it hears the same notification sound.
  • in this way, by storing notification sounds from various situations in association with those situations, the robot 60 can act according to a detected notification sound. As a result, the robot 60 can determine whether or not to retry merely by hearing the notification sound.
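The sound-to-state association described above can be sketched as a small lookup structure. This is an illustrative simplification, assuming a notification sound has already been reduced to a comparable signature (e.g. a string label); the class and method names are not from the patent.

```python
class NotificationSoundMemory:
    """Associate a notification-sound signature with the appliance state the
    user explains, so the same sound can be recognized the next time."""
    def __init__(self):
        self._sounds = {}   # sound signature -> state description

    def learn(self, signature, state):
        # e.g. learn("peep-peep", "air conditioner stopped") after asking the user
        self._sounds[signature] = state

    def identify(self, signature):
        # returns None while the sound is still unknown
        return self._sounds.get(signature)
```

A real implementation would match sounds by frequency and timing patterns rather than exact labels; the dictionary stands in for the notification sound storage unit 624.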
  • since the robot 60 issues operation commands by transmitting infrared rays, an operation command does not reach the home appliance if the transmission direction of the infrared rays is not suitable for the home appliance targeted by the command. In that case, no voice utterance or notification sound is emitted from the home appliance that is the target of the operation command.
  • therefore, when no voice utterance or notification sound is emitted after the robot 60 transmits the operation command, it is conceivable to retry after changing the orientation of the robot 60 or the like. Even after the orientation of the robot 60 has been changed, the retry process is terminated when no voice utterance or notification sound is produced within a predetermined time after the operation command is transmitted. The user is then notified that the retry process has ended.
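The rotate-and-retry behavior just described can be sketched as follows. The direction set, the callback names, and the notification text are all invented for illustration; the patent leaves these details open.

```python
def retry_with_rotation(send_ir, heard_response, rotate, notify_user,
                        directions=(0, 90, 180, 270)):
    """Turn the robot toward each candidate direction and resend the IR
    command, stopping as soon as an utterance or notification sound is heard.
    If every direction fails, tell the user that the retry process ended."""
    for angle in directions:
        rotate(angle)          # change the robot's orientation
        send_ir()              # retransmit the operation command
        if heard_response():   # voice utterance or notification sound detected?
            return True
    notify_user("Operation could not be confirmed; retry process ended.")
    return False
```

The per-direction timeout mentioned in the text would live inside `heard_response`, which should give up after the predetermined listening period.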
  • the robot 60 includes a display unit 66 and an operation unit 67.
  • the display unit 66 is a block that displays an image of the facial expression of the robot 60.
  • the display unit 66 performs display by the rear projection method, but is not limited thereto.
  • the operation unit 67 is a block that executes the operation of the robot 60.
  • in the present embodiment, the operation unit 67 rotates the robot 60. As illustrated in FIG. 8, the operation unit 67 rotates the robot 60 to a position where an operation command can be issued to the first home appliance 70, and to a position where the voice utterance output from the speaker 70a of the first home appliance 70 can be heard.
  • the robot 60 estimates the position of the first home appliance 70 from the direction in which a voice utterance is emitted from the speaker 70a of the first home appliance 70. In this way, the robot 60 estimates the position of each home appliance and stores it in the arrangement direction storage unit 623 of the data storage unit 62, so that the operation unit 67 moves in the direction in which the home appliance to be operated is installed. Rotate to execute the operation command.
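The arrangement-direction memory above (estimate the bearing from which an appliance's utterance arrived, store it, and rotate toward it before transmitting) can be sketched as follows. The class, its names, and the bearing convention are illustrative assumptions, not the patent's design.

```python
class ApplianceDirectionStore:
    """Store the bearing (degrees, clockwise from an arbitrary zero) at which
    each appliance's utterance was heard, standing in for the arrangement
    direction storage unit 623."""
    def __init__(self):
        self._bearings = {}

    def record(self, appliance, bearing_deg):
        self._bearings[appliance] = bearing_deg % 360

    def turn_needed(self, appliance, current_heading_deg):
        """Smallest signed rotation (degrees) for the robot to face the
        stored direction of the appliance."""
        target = self._bearings[appliance]
        return (target - current_heading_deg + 180) % 360 - 180
```

The operation unit 67 would then rotate the robot by `turn_needed(...)` degrees before issuing the IR command.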
  • the robot 60 having the above-described configuration is configured to issue an operation command to a home appliance to be operated by a voice command from a user.
  • the robot 60 recognizes the words uttered by the user as voice commands.
  • however, there is a risk of erroneous operation instructions; that is, there is a possibility that a home appliance not intended by the user is operated.
  • for example, if the robot 60 erroneously recognizes a word uttered by the user as a voice command to turn on the air conditioner, the air conditioner may be turned on by an unintended operation command.
  • the robot 60 judges whether to accept the voice command and give an operation instruction to the home appliance based on a preset judgment criterion.
  • as an example of such a preset judgment criterion, a method is described below in which whether or not to accept a voice command is determined by the result of speaking back, using the speech function, to the user from whom the voice command was obtained.
  • FIG. 10 shows that, in the process shown in FIG. 7, a two-stage exchange between the robot 60 and the user 80 has been added before the robot 60 transmits an infrared signal to execute the operation command.
  • in this way, the robot 60 asks the user 80 back and further determines whether or not to accept the voice command based on the answer from the user 80, which can prevent malfunction.
  • however, the user 80 may feel stressed if he or she always hears the same response from the robot 60. Therefore, when the user is asked whether or not to accept a voice command, if the voice command is the same as the previous one, the content of the confirmation utterance is made different from the previous time. For example, if the robot 60 first asks "Do you want to turn on the air conditioner?", from the second time onward it asks, for example, "Shall I turn on the air conditioner?" or "Would you like me to turn on the air conditioner?". Thereby, even if the same content as before is confirmed, the stress felt by the user 80 can be reduced because the utterance content differs. In this case, it is preferable that the utterance content be such that the confirmation does not feel unnatural.
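Rotating the wording of repeated confirmations, as described above, can be sketched with a cycling template list. The template phrasings are illustrative; the patent does not fix any specific wordings.

```python
import itertools

def confirmation_prompts(command_text):
    """Yield differently worded confirmation questions for the same voice
    command, so a repeated command is not always answered with one phrase."""
    templates = [
        "Do you want to {}?",
        "Shall I {}?",
        "Would you like me to {}?",
    ]
    # cycle endlessly through the variants for successive repetitions
    return itertools.cycle(t.format(command_text) for t in templates)
```

Each time the same command recurs, the robot takes the next prompt from the cycle, returning to the first wording only after all variants are used.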
  • the above method is a method in which the robot confirms with the user.
  • Other confirmation methods include the following methods.
  • as another method for determining whether or not to accept a voice command, a method using the camera 68 and the sensor 69 provided in the robot 60 can be mentioned.
  • the camera 68 is used for photographing the user 80 and confirming whether or not the user 80 is facing the front with respect to the camera 68.
  • the robot 60 according to the present embodiment is rotated by the operation unit 67 so as to face the direction in which the voice is caught, so that the camera 68 can take a picture from the front of the user.
  • as the sensor 69, a human sensor that detects the presence of a person is used to detect whether or not the user 80 is nearby. In this case as well, the above-described determination unit 617 determines whether to accept a voice command.
  • when the camera 68 is used, the determination unit 617 accepts the voice command if the user 80 is facing the front; if not, it determines that the voice command is not accepted. When the user 80 is not facing the front, it may instead be confirmed, using the speech function as described above, whether the voice command may be accepted.
  • when the sensor 69 is used, the determination unit 617 determines that the user 80 is nearby and accepts the voice command if a person is detected when the robot 60 recognizes the voice command; if no person is detected, it determines that the user 80 is not nearby and does not accept the voice command. That is, voice operation is not enabled unless the presence of a person is detected.
  • in this way, when the robot 60 recognizes a voice command, it can determine whether or not to accept the voice command without confirming with the user 80 whether the voice command may be accepted.
  • alternatively, voice operation may be enabled by holding a hand over a touch sensor. Specifically, when the robot 60 recognizes a voice command while the user 80 holds a hand over the touch sensor of the robot 60, the voice command is accepted without confirming with the user 80, and an operation command based on the voice command is executed. As described above, with a voice command alone there is a possibility of operating a home appliance through misrecognition, but according to the present embodiment, malfunction can be prevented by adding a confirmation exchange.
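The several acceptance criteria above (human sensor, camera facing check, touch sensor, spoken confirmation fallback) can be combined into one decision, sketched below. The three-way result is an illustrative simplification of the behavior described in the text, not a claimed algorithm.

```python
def decide_voice_command(person_detected, facing_front, touch_sensor_held):
    """Decide how to handle a recognized voice command:
    - no person nearby                    -> reject (voice operation disabled)
    - touch sensor held or facing camera  -> accept without spoken confirmation
    - otherwise                           -> ask the user back before accepting
    """
    if not person_detected:
        return "reject"
    if touch_sensor_held or facing_front:
        return "accept"
    return "confirm_verbally"
```

The `"confirm_verbally"` branch corresponds to the listen-back exchange of FIG. 10; the other branches let the robot act without it.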
  • FIG. 11 is a diagram showing a configuration of the character data distribution system 3 according to the present embodiment.
  • a robot 100 and a server (distribution server) 300 are connected to each other via a communication network.
  • the server 300 is connected to an external content server 400 that provides content (characters), and is connected to a smartphone (terminal device) 201 and a PC 202 via the communication network.
  • the Internet can be used as the communication network.
  • a telephone line network, a mobile communication network, a CATV (Cable TeleVision) communication network, a satellite communication network, or the like can be used.
  • the robot 100 downloads character data associated with a preset account 301 from the server 300 and, from the downloaded character data, selects facial expression data corresponding to emotions such as joy, anger, sorrow, and pleasure. A face image is projected from the inside onto a face region 100a corresponding to a person's face. That is, the robot 100 in the character data distribution system 3 estimates its own emotional state using a predetermined algorithm, and displays the facial expression data corresponding to the estimated emotion on the face region 100a using the projector 66a and the reflecting mirror 66b of the display unit 66.
  • the emotion of the robot is parameterized by the internal state of the main unit (remaining battery level, etc.), the external environment (temperature, humidity, brightness, time, etc.), and the number, frequency, and content of conversations, and is comprehensively calculated and determined using a probability table. For example, if the remaining battery level is ample, the temperature is comfortable, and the relationship with the user is good (spoken to often / praised), the robot is in a good mood and smiling facial expression data is selected. Late at night or early in the morning, time is used as a parameter to select a sleepy expression.
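The parameterized mood calculation above can be sketched as follows. This deterministic score stands in for the probability table mentioned in the text, and every threshold (battery level, temperature range, hours) is invented purely for illustration.

```python
def choose_expression(battery_level, temperature_c, praised, hour):
    """Pick a facial expression from internal state (battery), environment
    (temperature, time of day), and the relationship with the user."""
    if hour < 6 or hour >= 23:
        return "sleepy"            # late night / early morning overrides mood
    score = 0
    score += 1 if battery_level > 0.5 else -1        # ample battery -> good mood
    score += 1 if 18 <= temperature_c <= 26 else -1  # comfortable temperature
    score += 1 if praised else 0                     # good relationship with user
    return "smile" if score >= 2 else "neutral"
```

A closer match to the text would draw the expression from a probability distribution weighted by this score rather than thresholding it.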
  • FIG. 12 shows a schematic configuration of the robot 100.
  • the robot 100 has substantially the same configuration as the robot 60 shown in FIG. 6 described in the second embodiment, and is different in that it includes a communication unit 101 and a facial expression database (download buffer) 102. Description of the same components is omitted, and only different components are described below.
  • the communication unit 101 is a means for communicating with the server 300 in the character data distribution system 3.
  • the facial expression database 102 stores downloaded character data. It should be noted that a facial expression data group corresponding to basic emotions such as joy, anger, sorrow, and pleasure is stored in the facial expression database 102 as basic character data in the initial state.
  • FIG. 13 is a diagram illustrating the download of character data for changing the expression of the robot.
  • the server 300 stores a plurality of types of character data for each account 301, and downloads and distributes the character data to the robot 100 as necessary.
  • FIG. 13 shows an example in which two types of character data, character data (1) and character data (2), are stored in one account 301 of the server 300, and the downloaded character data (1) or (2) replaces the basic character data already stored in the facial expression database 102 in the robot 100.
  • the download distribution may be performed not in character data units but in facial expression data units.
  • the facial expression data distributed by download is stored by replacing the facial expression data corresponding to the basic character.
  • An instruction to distribute character data or display data to the server 300 is issued from the smartphone 201 or the PC 202 operated by a user using the robot 100. Specifically, the user instructs distribution of character data or display data stored in a predetermined account 301 in the server 300 from the smartphone 201 or the PC 202.
  • since one account 301 can correspond to a plurality of robots 100, the same character data is downloaded and distributed to every robot 100 that can access the same account.
  • if the smartphone 201 or the PC 202 operated by the user can access an account 301 (account (B)) different from its own account 301 (account (A)), character data can be downloaded and distributed to another robot 100 that can access account (B). Using this, character data can be downloaded to another person's robot, so that, for example, a company can distribute a trial version of character data. The user can then purchase the character data if the trial version is to their liking.
  • the robot 100 itself normally has no emotion, but when an emotion is associated in advance with utterance content used in dialog with the user, the robot 100 can appear to have emotion by extracting from the facial expression database 102 the facial expression data showing the emotion associated with the utterance content and displaying it in the face region 100a.
  • specifically, a numerical value is assigned in advance to each item of facial expression data in the character data stored in the facial expression database 102, and the same numerical value is assigned to the emotion linked to the utterance content spoken by the robot 100.
  • facial expression data having the same numerical value as the numerical value assigned to the emotion associated with the utterance content is extracted from the facial expression database 102 and displayed on the facial region 100a.
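The numeric matching just described can be sketched as a simple lookup. The dictionary keys (`emotion_id`, `image`) are illustrative assumptions about how the facial expression database 102 might be represented.

```python
def expression_for_utterance(emotion_id, expression_db):
    """Return the facial-expression datum whose pre-assigned number matches
    the number given to the emotion linked to the utterance content."""
    for expression in expression_db:
        if expression["emotion_id"] == emotion_id:
            return expression["image"]
    return None   # no matching expression; caller may fall back to a default face
```

Because downloaded character data keeps the same numbering scheme, swapping in a new character changes the displayed images without changing this lookup.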
  • the character data distributed from the server 300 includes facial expression data corresponding to emotions such as joy, anger, sorrow, and pleasure.
  • character data is created so that one type of facial expression data corresponds to one type of emotion, such as a feeling of joy, an emotion of anger, a feeling of sadness, and a pleasant emotion.
  • FIG. 14 is a diagram showing a variation of facial expression data in a normal state (easy / joyful state).
  • FIG. 15 is a diagram showing variations in facial expression data in a specific state (an angry / sad / trouble state, specific mode).
  • the facial expression data in the normal state includes facial expression data indicating emotions arising from comfort and pleasure, such as facial expression data indicating a joyful state and facial expression data indicating a comfortable state. In the specific state, each degree, that is, the anger level, the sadness level, and the troubled level, is divided into four levels, and facial expression data corresponding to each level is created.
  • as facial expression data in a specific state, facial expression data corresponding to specific modes can also be provided.
  • the specific mode includes an answering machine mode, a sleep mode, a dozing mode, and a remote control operation mode, and facial expression data corresponding to each mode is created.
  • the server 300 downloads and distributes the character data to the robot 100 accessible for each user account 301.
  • the user instructs the server 300 to perform download distribution using the smartphone 201 and the PC 202.
  • the character data downloaded and distributed to the account 301 of the server 300 is distributed from the external content server 400. Also in this case, using the smartphone 201 and the PC 202, the server 300 is instructed to download distribution from the external content server 400.
  • when the robot 100 downloads character data that has already been downloaded to the server 300, the user who operates the smartphone 201 or the PC 202 to instruct the download distribution is not charged.
  • the control blocks (particularly the control unit 21 and the control unit 61) of the home appliance 20, the robot 60, and the robot 100 may be realized by a logic circuit (hardware) formed on an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit).
  • in the latter case, the home appliance 20, the robot 60, and the robot 100 each include a CPU that executes the instructions of a program, which is software that implements each function, a ROM (Read Only Memory) or storage device (referred to as a "recording medium") in which the program and various data are recorded so as to be readable by a computer (or CPU), a RAM (Random Access Memory) for expanding the program, and the like. The object of the present invention is achieved by the computer (or CPU) reading the program from the recording medium and executing it. As the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used.
  • the program may be supplied to the computer via an arbitrary transmission medium (such as a communication network or a broadcast wave) capable of transmitting the program. The present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
  • a home appliance management system according to aspect 1 of the present invention is the home appliance management system 1 in which a plurality of home appliances 20 and a management server (utterance management server 10) that manages the home appliances 20 are connected to each other via a communication network. Each home appliance 20 includes a first status information output unit (status information extraction unit 24 and communication unit 23) that outputs status information indicating the status of the device itself to the management server (utterance management server 10) via the communication network, and a status information notification unit (output control unit 27 and voice output unit 22) that reports status information acquired from the management server (utterance management server 10) via the communication network. The management server includes a second status information output unit (output control unit 18) that outputs the status information acquired from a home appliance 20 to at least one of the plurality of home appliances 20 connected to the communication network.
  • according to the above configuration, the management server outputs the status information acquired from a home appliance via the communication network to at least one of the plurality of home appliances connected to the communication network, so that a home appliance can acquire the status information of other home appliances.
  • home appliances acquire not only the status information of other home appliances but also the status information of their own devices from the management server.
  • in the home appliance management system according to aspect 2 of the present invention, in aspect 1, it is preferable that the second status information output unit outputs the status information to home appliances that are powered on among the home appliances connected to the communication network.
  • since the status information is notified to a home appliance that is powered on, that is, a home appliance in an active state that the user is likely to be using, the user can reliably know the status information of the other home appliances.
  • for example, if the active home appliance is a TV and the other home appliance is a refrigerator, the status information of the refrigerator (for example, that the door is open) is output to the TV, so that the user can know the status information of the refrigerator, another home appliance, even while watching TV.
  • in the home appliance management system according to aspect 3 of the present invention, in aspect 1, it is preferable that the second status information output unit outputs the status information to a home appliance that is set in advance according to the content of the acquired status information, among the home appliances connected to the communication network.
  • according to the above configuration, since the status information is output to the home appliance set in advance according to the content of the acquired status information, the user can know the corresponding status information from that home appliance.
  • in the home appliance management system according to aspect 4 of the present invention, it is preferable that the status information notification unit notifies the status information by voice.
  • according to the above configuration, since the status information is notified by voice, the user can reliably know the content of the status information.
  • a home appliance according to aspect 5 of the present invention is a home appliance connected to a communication network together with a plurality of other home appliances, and includes an information acquisition unit that acquires status information indicating the state of another home appliance connected to the communication network, and a status information notification unit that notifies the status information acquired by the information acquisition unit.
  • the home appliance according to aspect 6 of the present invention, in aspect 5, further includes a notification destination setting unit, and the notification destination setting unit preferably sets the notification destination of the status information according to the user's voice.
  • the home appliance according to aspect 7 of the present invention, in aspect 5, further includes a notification destination setting unit, and the notification destination setting unit preferably sets the notification destination of the status information to a room in which at least one home appliance connected to the communication network is installed.
  • in the home appliance according to aspect 8 of the present invention, it is preferable that the status information notification unit notifies the status information by voice.
  • a remote control device according to aspect 9 of the present invention includes an operation instruction unit that gives an operation instruction to a home appliance, an acquisition unit that acquires the notification sound or voice output by the home appliance in response to the operation instruction of the operation instruction unit, an analysis unit that analyzes the notification sound or voice acquired by the acquisition unit, and a state specifying unit that specifies the state of the home appliance from the analysis result of the analysis unit.
  • according to the above configuration, since the state of the home appliance is specified from the result of analyzing the notification sound or voice output by the home appliance in response to the operation instruction, the remote control device itself can know the state of the home appliance.
  • a remote control device according to aspect 10 of the present invention, in aspect 9, further includes a determination unit that determines, from the state of the home appliance specified by the state specifying unit, whether or not to give the operation instruction to the home appliance again, and the operation instruction unit preferably gives the operation instruction to the home appliance again when the determination unit determines to do so.
  • according to the above configuration, since the operation instruction unit gives the operation instruction to the home appliance again when the determination unit determines to do so, the operation instruction can be repeated until it succeeds. That is, the home appliance can be reliably operated.
  • a remote control device according to aspect 11 of the present invention, in aspect 10, further includes an installation direction specifying unit that specifies, from the notification sound or voice acquired by the acquisition unit, the installation direction of the home appliance as viewed from the device itself, and a storage unit that stores the installation direction of the home appliance specified by the installation direction specifying unit. The operation instruction unit preferably gives the operation instruction to the home appliance toward the installation direction of the home appliance stored in the storage unit when the determination unit determines that the operation instruction is to be given again.
  • according to the above configuration, since the operation instruction unit issues the operation instruction toward the installation direction of the home appliance stored in the storage unit when the determination unit determines to give the operation instruction again, the operation instruction can be reliably delivered to the home appliance.
  • a remote control device according to aspect 12 of the present invention is a robot that gives an operation instruction to a home appliance from a received voice command, and includes a determination unit that determines, when the voice command is recognized, whether to accept the voice command and give the operation instruction according to a preset criterion.
  • according to the above configuration, by judging on the basis of a preset criterion whether to accept the voice command and give the operation instruction to the home appliance, it is possible to prevent malfunction of home appliances due to voice commands not intended by the user. The following can be cited as examples of the above criterion.
  • the remote control device according to aspect 13 of the present invention, in aspect 12, further includes a speech function, and the determination unit preferably determines whether or not to accept the recognized voice command according to the answer obtained by using the speech function to ask back the user from whom the voice command was obtained.
  • according to the above configuration, since the user hears the spoken confirmation from the remote control device, the user can reliably understand its content. Thereby, malfunction of the home appliance can be reliably prevented.
  • when the remote control device according to aspect 14 of the present invention, in aspect 13, asks the user whether to accept a voice command, it is preferable that, if the voice command is the same as the previous one, the content of the confirmation utterance be made different from the previous time.
  • a robot according to aspect 15 of the present invention is a robot that displays a face image in a predetermined region, and includes a storage unit that stores facial expression data indicating emotions, and a control unit that acquires from the storage unit the facial expression data corresponding to the emotion determined by a predetermined criterion and displays it as the face image in the predetermined region.
  • according to the above configuration, the facial expression data corresponding to the emotion determined by the predetermined criterion is acquired from the storage unit and displayed as the face image, so the robot itself can select and change the facial expression of the face image.
  • the facial expression data distribution system according to the present invention is characterized in that the robot having the above configuration and a distribution server that distributes to the robot facial expression data indicating emotions such as joy, anger, sorrow, and pleasure are connected to a communication network.
  • according to the above configuration, since the facial expression data that changes the display of the robot's face is distributed from the distribution server, the display of the robot's face can be changed and its expressions customized according to the user's preference.
  • in the above facial expression data distribution system, it is preferable that an expression data providing server that provides facial expression data to the distribution server for a charge, and a terminal device that pays the charge to have the facial expression data provided to the distribution server, are further connected to the communication network.
  • the present invention can be suitably used for a system in which a plurality of home appliances are connected to a communication network, a remote control device for operating home appliances with voice commands, and the like.


Abstract

 The present invention improves convenience for the user. In this home appliance management system (1), the utterance management server (10) is provided with an output control unit (18) for outputting status information acquired from a home appliance (20) to at least one home appliance (20) among a plurality of home appliances (20) connected to a communication network.

Description

Home appliance management system, home appliance, remote control device, and robot
 The present invention relates to a home appliance management system, a home appliance, a remote control device, and a robot.
 In recent years, network-compatible home appliances have been proposed that use voice recognition technology to perform operations corresponding to a user's voice commands, notify the user by voice of status information indicating the state of the device, transmit up-to-date information via a communication network such as the Internet, and play back messages from a smartphone via the communication network (for example, Patent Document 1).
Japanese Patent Laid-Open Publication No. 2008-046424 (published February 28, 2008)
 However, while network-type home appliances as described above are convenient, they have the following problems, and thus their convenience for the user is not sufficient.
 (1) In conventional home appliances with an utterance function, utterance content is set one-to-one for each appliance, so an appliance may speak when no user is present and be missed, or the utterance content may not be heard at an appropriate timing.
 As an example of a missed utterance, a washing machine may announce completion in a laundry room away from the living room.
 As an example of inappropriate timing, if a recorder fails to start a scheduled recording, the error notification indicating the failure may not be given until the next time the recorder is operated.
 As an example of stress caused by redundant utterances, when multiple home appliances each hold the same utterance content, the user may hear the same information over and over; for example, the air conditioner, the refrigerator, and the air purifier may each announce the weather forecast separately.
 Furthermore, a home appliance that plays back messages from a smartphone cannot recognize or designate the user who should hear the message, so the message may not reach the intended person.
 (2) A conventional remote control system that operates home appliances in response to voice commands via voice recognition cannot recognize where the appliance to be operated is located.
 In addition, in such a remote control system, when an operation instruction is given to a home appliance, there is no way to confirm whether the appliance actually operated as instructed, so no feedback can be given when the operation fails.
 Normally, an appliance emits a notification sound (beep) when reporting its state, but because a beep is not speech, a person needs familiarity with the appliance to understand it intuitively.
 (3) When operation is performed with voice commands alone, as in the remote control system above, a home appliance operation not intended by the user may be performed. For example, if the user is conversing near the remote control system and happens to utter speech corresponding to a voice command without intending it as one, the system recognizes the speech as a voice command and operates the appliance.
 The present invention has been made in view of the above problems, and its object is to provide a home appliance management system, a home appliance, a remote control device, a robot, and a facial expression data distribution system that can improve convenience for the user.
 To solve the above problems, a home appliance management system according to one aspect of the present invention is a system in which a plurality of home appliances and a management server that manages the home appliances are connected to one another via a communication network. Each home appliance includes a first status information output unit that outputs status information indicating the state of its own device to the management server via the communication network, and a status information notification unit that reports status information acquired from the management server via the communication network. The management server includes a second status information output unit that outputs status information acquired from a home appliance to at least one of the plurality of home appliances connected to the communication network.
 According to one aspect of the present invention, the status information of one home appliance can be reported to the user by another home appliance that has acquired that status information.
Brief Description of the Drawings

FIG. 1 is a schematic configuration diagram of a home appliance management system according to Embodiment 1 of the present invention.
FIG. 2 is a schematic block diagram of a home appliance and the utterance management server constituting the home appliance management system shown in FIG. 1.
FIG. 3 is a diagram showing an example of utterance content stored in the utterance management server in the home appliance management system shown in FIG. 1.
FIG. 4 is a schematic block diagram of a home appliance constituting a home appliance management system according to a modification of Embodiment 1 of the present invention.
FIG. 5 is a schematic configuration diagram of a remote control system according to Embodiment 2 of the present invention.
FIG. 6 is a schematic block diagram of the robot constituting the remote control system shown in FIG. 5.
FIG. 7 is a timing chart showing the flow of remote processing in the remote control system shown in FIG. 5.
FIG. 8 is a diagram explaining the operation of the robot shown in FIG. 6.
FIG. 9 is a diagram for explaining prevention of robot malfunction through dialogue between the robot shown in FIG. 6 and the user.
FIG. 10 is a timing chart showing the flow of remote processing aimed at preventing robot malfunction in the remote control system shown in FIG. 5.
FIG. 11 is a schematic configuration diagram of a remote control system according to Embodiment 3 of the present invention.
FIG. 12 is a schematic block diagram of the robot constituting the remote control system shown in FIG. 11.
FIG. 13 is a diagram explaining downloading of facial expression data for changing the facial expression of the robot shown in FIG. 12.
FIG. 14 is a diagram showing an example of the normal state of the facial expression data downloaded in FIG. 13.
FIG. 15 is a diagram showing an example of a specific state of the facial expression data downloaded in FIG. 13.
 [Embodiment 1]
 Hereinafter, an embodiment of the present invention will be described in detail.
 (Configuration of the home appliance management system)
 FIG. 1 is a diagram illustrating the configuration of a home appliance management system 1 according to the present embodiment. As shown in FIG. 1, the home appliance management system 1 comprises an utterance management server (management server) 10 in the cloud and a plurality of local home appliances 20 (A, B, C, D, ...), connected to one another via a communication network. The utterance management server 10 is also connected, in the cloud, to an external content server 30 that provides content such as weather forecasts, and is connected locally, via the communication network, to a smartphone (terminal device) 40. The Internet, for example, can be used as the communication network; a telephone network, a mobile communication network, a CATV (Cable TeleVision) communication network, a satellite communication network, or the like can also be used.
 In the home appliance management system 1, the utterance management server 10 acquires, via the communication network, status information indicating the state (operating status, etc.) of one home appliance 20 (for example, home appliance A), transmits the status information of home appliance A to another home appliance 20 (for example, home appliance B) via the communication network, and has home appliance B report the status information of home appliance A to the user. The details of this mechanism will be described later.
 Details of the utterance management server 10 and the home appliances 20 are described below. FIG. 2 shows a schematic block diagram of the utterance management server 10 and a home appliance 20.
 (Utterance management server)
 As shown in FIG. 2, the utterance management server 10 includes a control unit 11, a storage unit 12, and a communication unit 13.
 The control unit 11 is a block that controls the operation of each unit of the utterance management server 10. Specifically, the control unit 11 is a computer device composed of an arithmetic processing unit such as a CPU (Central Processing Unit) or a dedicated processor; by reading and executing programs, stored in the storage unit 12, for carrying out the various controls of the utterance management server 10, it comprehensively controls the operation of each unit of the server.
 The control unit 11 also functions as a status information acquisition unit (information acquisition unit) 14, a state specifying unit 15, an utterance content selection unit 16, an output destination selection unit 17, an output control unit 18, and a content acquisition unit 19.
 The status information acquisition unit 14 is a block that acquires status information indicating the state of a home appliance 20 from that appliance. Specifically, the status information acquisition unit 14 acquires, via the communication unit 13, status information transmitted from any of the plurality of home appliances 20 connected to the communication network. This status information includes identification information that identifies the home appliance 20. The status information acquisition unit 14 then sends the status information, including the identification information indicating which of the plurality of home appliances 20 transmitted it, to the state specifying unit 15.
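 The specification does not fix a wire format for the status information; as an illustration only, the payload carrying the identification information might be serialized as a small JSON message. All field names below are assumptions, not taken from the specification.

```python
import json

def build_status_message(device_id: str, device_type: str, state: str) -> str:
    """Serialize status information together with the identification
    information that tells the server which appliance 20 sent it.
    Field names are illustrative assumptions, not from the spec."""
    return json.dumps({
        "device_id": device_id,      # identification information
        "device_type": device_type,  # e.g. "refrigerator"
        "state": state,              # e.g. "door_open"
    })

msg = build_status_message("fridge-01", "refrigerator", "door_open")
```

 On the server side, the status information acquisition unit 14 would parse such a message and use the `device_id` field to tell which appliance sent it.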
 The state specifying unit 15 is a block that specifies the state of the home appliance 20 from the status information sent from the status information acquisition unit 14. Here, the state of the home appliance 20 is, for example: if the appliance is a refrigerator, a state in which the door is open; if it is an air conditioner, a state in which it has stopped due to an error; if it is a television, a state in which the power is on. Various states exist depending on the appliance. The state specifying unit 15 therefore specifies, from the status information, which of these states the appliance is in, and sends specific state information indicating the specified state to the utterance content selection unit 16. The specific state information indicates, for example, states 1 to 3 described later.
 The utterance content selection unit 16 is a block that selects, from the utterance content storage unit 121 in the storage unit 12, the utterance content corresponding to the specific state information sent from the state specifying unit 15.
 The utterance content is now explained. Utterance content can be classified mainly into the following three types.
 (1) Notifications from each home appliance 20: for example, operation completed, error, timer started.
 (2) Cloud information from the external content server 30: for example, weather forecasts and news.
 (3) Information entered by operating a smartphone 40 or a PC (not shown): for example, setting changes, messages, and timer settings.
 The utterance content is stored in the utterance content storage unit 121 in the storage unit 12. As shown in FIG. 3, for example, the utterance content storage unit 121 stores each piece of utterance content in association with the state of the home appliance 20 specified by the state specifying unit 15. The utterance content shown in FIG. 3 corresponds to the notifications from each home appliance described in (1) above, and states 1 to 3 are the specific state information specified by the state specifying unit 15. Specifically, state 1 indicates that the appliance is a recorder and that recording has failed, state 2 indicates that the air conditioner has stopped abnormally, and state 3 indicates that the refrigerator door is open; each piece of utterance content corresponds to one of these states. The utterance content is normally stored as text data, but may instead be stored as audio files, in which case the speech synthesis processing in the home appliance 20 becomes unnecessary.
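 The association between states 1 to 3 and their utterance content can be sketched as a simple lookup table. The states follow the description above, but the wording of each utterance is an assumed example, not the actual text of FIG. 3.

```python
# Utterance content keyed by the specific state information produced by
# the state specifying unit 15. States 1-3 follow the description; the
# utterance text itself is an illustrative assumption.
UTTERANCE_CONTENT = {
    "state_1": "The recorder failed to start the scheduled recording.",
    "state_2": "The air conditioner has stopped due to an error.",
    "state_3": "The refrigerator door has been left open.",
}

def select_utterance(specific_state: str) -> str:
    """Role of the utterance content selection unit 16: look up the
    text to be spoken for the specified appliance state."""
    return UTTERANCE_CONTENT[specific_state]
```

 Storing the content as text, as here, leaves speech synthesis to the appliance; storing audio files instead would remove that step, as noted above.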
 The output destination selection unit 17 is a block that selects the output destination of the utterance content selected by the utterance content selection unit 16. Specifically, according to the utterance content, the output destination selection unit 17 selects a destination home appliance 20 from those stored in the output destination database 122 in the storage unit 12. The selection criteria for the output destination are described in detail later.
 The output control unit (second status information output unit) 18 is a block that causes the communication unit 13 to transmit (output) the utterance content selected by the utterance content selection unit 16, as status information, to the home appliance 20 selected as the output destination by the output destination selection unit 17.
 The utterance management server 10 also includes a content acquisition unit 19 in the control unit 11. The content acquisition unit 19 is a block that acquires external content from the external content server 30 shown in FIG. 1. Specifically, upon receiving an instruction from the smartphone 40 or the like to acquire a weather forecast, the content acquisition unit 19 acquires weather information as external content from the external content server 30 connected to the utterance management server 10.
 (Home appliance)
 In this embodiment, the home appliances 20 are assumed to be, for example, air conditioners, televisions, recorders, refrigerators, and lighting equipment. Note that FIG. 2 shows only the components of the home appliance 20 required in this embodiment; the other functions specific to each appliance are omitted.
 As shown in FIG. 2, the home appliance 20 includes a control unit 21, an audio output unit 22, and a communication unit 23.
 The control unit 21 is a block that controls the operation of each unit of the home appliance 20. The control unit 21 is, for example, a computer device composed of an arithmetic processing unit such as a CPU (Central Processing Unit) or a dedicated processor; by reading and executing programs, stored in a data storage unit (not shown), for carrying out the various controls of the home appliance 20, it comprehensively controls the operation of each unit of the appliance.
 The control unit 21 also functions as a status information extraction unit 24, an utterance content acquisition unit 25, a speech synthesis unit 26, and an output control unit 27.
 The status information extraction unit 24 is a block that extracts status information indicating the state of the home appliance 20. Specifically, the status information extraction unit 24 extracts, from sensors and the like, status information such as that the appliance has stopped abnormally, that a scheduled recording has failed, or that the door is open. The status information extraction unit 24 transmits the extracted status information to the utterance management server 10 via the communication unit 23. The status information extraction unit 24 and the communication unit 23 therefore function as a first status information output unit that outputs status information indicating the state of the appliance's own device to the utterance management server 10 via the communication network.
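 A minimal sketch of this appliance-side flow follows. The sensor read and the transmit step are stand-ins for hardware and network code; the class and field names are assumptions for illustration, not from the specification.

```python
class StatusInformationExtractor:
    """First status information output unit, sketched: read the
    appliance state from a (stubbed) sensor and hand the resulting
    status information to a transmit callable standing in for the
    communication unit 23."""

    def __init__(self, device_id, read_sensor, transmit):
        self.device_id = device_id
        self.read_sensor = read_sensor  # callable returning e.g. "door_open"
        self.transmit = transmit        # callable sending to the server

    def report(self):
        state = self.read_sensor()
        self.transmit({"device_id": self.device_id, "state": state})
        return state

sent = []  # stand-in for the network: collect transmitted messages
extractor = StatusInformationExtractor("fridge-01", lambda: "door_open", sent.append)
extractor.report()
```

 In a real appliance, `read_sensor` would poll door switches, error flags, and so on, and `transmit` would send the message over the communication network.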
 The utterance content acquisition unit 25 is a block that acquires the utterance content transmitted from the utterance management server 10. The utterance content acquired here is derived from the status information of a home appliance 20 other than the acquiring appliance itself. As shown in FIG. 3, this utterance content is text data, and it is sent to the speech synthesis unit 26 so that it can be output as speech.
 The speech synthesis unit 26 is a block that generates audio data (speech synthesis). Specifically, the speech synthesis unit 26 generates audio data from utterance content consisting of text data.
 The output control unit 27 is a block that performs audio output by causing the audio output unit 22 to output the audio data generated by the speech synthesis unit 26.
 The audio output unit 22 is, for example, a speaker, and reports the utterance content by outputting the audio data to a user who is operating, or is near, the home appliance 20.
 The output control unit 27 and the audio output unit 22 therefore function as a status information notification unit that reports the status information acquired from the utterance management server 10 via the communication network. Note that the status information may be reported visually instead of by voice as described above. In that case, if the reporting home appliance 20 is a television, the status information to be reported can be displayed on the television screen as video, for example as text.
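 The choice between voice and on-screen reporting described above can be sketched as follows; the television check and the message formats are assumptions for illustration.

```python
def notify(appliance_type: str, utterance: str) -> str:
    """Status information notification unit, sketched: report by
    speech in general, but as on-screen text when the reporting
    appliance is a television."""
    if appliance_type == "television":
        return f"[on-screen text] {utterance}"
    return f"[speech] {utterance}"
```

 A real implementation would hand the text to the speech synthesis unit 26 or to the display pipeline rather than returning a tagged string.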
 (Selection of the home appliance serving as the output destination)
 Next, the selection of the home appliance that serves as the output destination of the utterance content is described. The selection criteria fall broadly into the following five categories, although the criteria are not limited to these.
 (1) Selecting the home appliance the user is operating as the output destination
 Specifically, the utterance management server 10 selects an active home appliance as the appliance the user is operating (the output destination). Alternatively, the utterance management server 10 selects as the output destination a home appliance near which a user has been detected by a motion sensor or the like provided in the appliance. The utterance management server 10 can also detect the timing at which an appliance is operated and select that appliance as the one the user is operating. For example, when the user is watching television, the television is selected as the output destination, and, for example, the fact that the washing machine has finished washing is displayed on the television screen.
 (2) Selecting a home appliance preset by the user as the output destination
 Specifically, a home appliance installed in a room where the user spends much time is preset as the output destination. For example, if the user is frequently in the living room, the air conditioner installed in the living room is set to speak; that is, the air conditioner is selected as the output destination.
 (3) Selecting a home appliance according to the content to be reported
 Specifically, each home appliance is set to utter the content best suited to its product characteristics. For example, if the content is weather related, appliances such as air conditioners and washing machines are set to speak; if the content is food related, appliances such as refrigerators and microwave ovens are set to speak. That is, these appliances are selected as the output destination of the utterance content.
 (4) Selecting a home appliance according to the target user
 Specifically, the appliance that speaks is set according to the user. For example, if the user is the mother, a kitchen appliance is selected as the output destination; if the user is a child, the air conditioner in the child's room is selected. The user can be identified using a camera provided in the appliance, using voice recognition via a voice recognition function, or using a wearable device or mobile phone (including a smartphone) carried by each user. Wearable devices and mobile phones communicate using Bluetooth (registered trademark). With Bluetooth (registered trademark), the approximate distance between the user and the appliance can be determined from the radio signal strength, so if that distance is within a predetermined distance, the user can be judged to be near the appliance.
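 The specification does not state how distance is derived from signal strength; one common approach is the log-distance path-loss model, sketched below. The reference transmit power, path-loss exponent, and distance threshold are typical assumed values, not values from the specification.

```python
def estimate_distance_m(rssi_dbm: float,
                        tx_power_dbm: float = -59.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Rough distance from received signal strength using the
    log-distance path-loss model: d = 10^((P_tx - RSSI) / (10 * n)).
    tx_power_dbm is the assumed RSSI at 1 m from the transmitter."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

def user_is_near(rssi_dbm: float, predetermined_distance_m: float = 3.0) -> bool:
    """Judge that the user is near the appliance when the estimated
    distance is within the predetermined distance."""
    return estimate_distance_m(rssi_dbm) <= predetermined_distance_m
```

 In practice RSSI is noisy, so implementations usually average several readings before making the proximity judgment.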
 (5) Selecting output destinations so that the source of the information to be reported and the appliances that actually report it are in a one-to-many relationship
 Specifically, if the content to be reported is error-related information about a home appliance, all home appliances are set as output destinations. For example, if the content is information indicating that the air conditioner has stopped abnormally, all the home appliances constituting the home appliance management system 1 are selected as output destinations, and all of them report the abnormal stop of the air conditioner to the user. In this case, although all appliances are selected as output destinations, they need not report the content simultaneously. For example, if the user is moving around, the appliance currently nearest to the user may report first, and, as the user moves, the appliance nearest to the user at each moment may report in turn.
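 Criteria (1), (2), and (5) above can be combined into a simple rule order. The precedence and the content categories below are one possible reading for illustration, not an order mandated by the specification.

```python
def select_output_destinations(content_kind, appliances,
                               active=None, preset=None):
    """Output destination selection, sketched: error-related content
    fans out to every appliance (criterion 5, one-to-many); otherwise
    prefer the appliance the user is operating (criterion 1), then a
    user-preset appliance (criterion 2)."""
    if content_kind == "error":
        return list(appliances)
    if active in appliances:
        return [active]
    if preset in appliances:
        return [preset]
    return []

all_appliances = ["television", "air_conditioner", "refrigerator"]
```

 A fuller implementation would also consult the output destination database 122 and the content-type and per-user rules of criteria (3) and (4).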
 Besides the output destination selection methods described in (1) to (5) above, the output destination may also be selected according to whether the content to be reported needs to be repeated (for example, recording reservation information) or not (for example, a weather forecast). Content that needs to be repeated is reported first, and then the same content is reported again after a predetermined time has elapsed. In this case, the appliance used for the repeated report may be the currently active appliance as in (1) above, or a preset appliance as in (2) above.
 The home appliance that reports the content may also be changed according to the time of day. For example, if the content to be reported is a morning alarm, the appliance that reports it is set to the bedroom air conditioner.
 In this embodiment, as shown in FIG. 1, an example of the home appliance management system 1 with the utterance management server 10 in the cloud has been described; however, the present invention is not limited to this, and, for example, a home appliance 20 may be given the functions of the utterance management server 10. A home appliance having the functions of the utterance management server 10 is described below.
 [Modification 1]
 FIG. 4 is a block diagram of a home appliance 50 having the functions of the utterance management server 10. In FIG. 4, parts having the same functions as those of the utterance management server 10 and the home appliance 20 shown in FIG. 2 are given the same reference numerals, and detailed descriptions are omitted.
 As shown in FIG. 4, the home appliance 50 includes a control unit 51, an audio output unit 22, a communication unit 23, and a storage unit 12.
 The control unit 51 functions as the status information acquisition unit 14, the state specifying unit 15, the utterance content selection unit 16, the output destination selection unit 17, the status information extraction unit 24, the utterance content acquisition unit 25, the speech synthesis unit 26, and the output control unit 27. That is, the control unit 51 functions as the same status information acquisition unit 14, state specifying unit 15, utterance content selection unit 16, and output destination selection unit 17 as the utterance management server 10 shown in FIG. 2, and additionally as the same status information extraction unit 24, utterance content acquisition unit 25, speech synthesis unit 26, and output control unit 27 as the home appliance 20 shown in FIG. 2.
Although not illustrated, the home appliance 50 can exchange data with a plurality of other home appliances via a communication network. Accordingly, the home appliance 50 transmits its own state information to other home appliances 50 via the communication network, and receives the state information of the other home appliances 50 via the communication network. In other words, the home appliance 50 does not exchange data with other home appliances 50 via an utterance management server 10 such as the one shown in FIG. 1.
When the home appliance 50 transmits its own state information, the selection criteria for the transmission destination (output destination) are the same as those described in Embodiment 1. The home appliance 50 therefore uses these selection criteria to select, within its own device, the destination home appliance 50, and transmits the state information to the selected home appliance 50.
In this way, the home appliance 50 itself may set the notification destination of the state information. This setting is performed by the output destination selection unit 17; that is, in the home appliance 50, the output destination selection unit 17 functions as a notification destination setting unit that sets the notification destination of the state information. The notification destination can be set, for example, in the following two ways, although the methods are not limited to these.
(6) For the information source of the information to be reported (notified) to the user, the home appliance that actually reports it is set by voice. That is, the output destination selection unit (notification destination setting unit) 17 sets the notification destination of the state information in accordance with the user's voice. The home appliance 50 realizes this setting by recognizing the user's voice with a voice recognition function (not shown).
Specifically, for the home appliance 50 that is the acquisition source (information source) of the state information, for example a washing machine, the user sets by speaking that the end of washing is to be reported from another home appliance 50, for example a refrigerator. The end of the washing cycle is then reported (notified) to the user from the refrigerator.
(7) For the information source of the information to be reported (notified) to the user, the home appliance that actually reports it is set by specifying the place where the report is to be made. That is, the output destination selection unit (notification destination setting unit) 17 sets the notification destination of the state information to a room in which at least one home appliance connected to the communication network is installed. This setting may be made by the user's voice as in (6), or may be made manually.
Specifically, for the home appliance 50 that is the acquisition source (information source) of the state information, for example a washing machine, the user sets by speaking that the end of washing is to be reported in the kitchen (a room). The end of the washing cycle is then reported (notified) to the user from the refrigerator in the kitchen. If the kitchen contains not only a refrigerator but also, for example, a microwave oven and lighting, the microwave oven and lighting may also be set, together with the refrigerator, as the group of home appliances that report to the user.
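The two destination settings in (6) and (7) can be illustrated with a minimal sketch. This is hypothetical Python for illustration only; the class and method names below are invented and not part of the embodiment:

```python
# Hypothetical sketch of the notification-destination setting in (6) and (7):
# an appliance routes its status either to one named appliance (6) or to
# every appliance installed in a named room (7).
class NotificationRouter:
    def __init__(self, appliances_by_room):
        # e.g. {"kitchen": ["refrigerator", "microwave"]}
        self.appliances_by_room = appliances_by_room
        self.destination = None  # ("appliance", name) or ("room", name)

    def set_destination_appliance(self, name):   # setting (6), e.g. by voice
        self.destination = ("appliance", name)

    def set_destination_room(self, room):        # setting (7)
        self.destination = ("room", room)

    def resolve(self):
        """Return the appliances that should announce the status."""
        kind, name = self.destination
        if kind == "appliance":
            return [name]
        return list(self.appliances_by_room.get(name, []))

router = NotificationRouter({"kitchen": ["refrigerator", "microwave"]})
router.set_destination_room("kitchen")
print(router.resolve())  # → ['refrigerator', 'microwave']
```

With the room setting, every appliance in the kitchen is returned, matching the example in which the microwave oven and lighting may report alongside the refrigerator.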
[Modification 2]
With the present invention, the state information of a home appliance that has no utterance function can be reported from a home appliance that has one. That is, a home appliance without an utterance function uses another home appliance near the user (one having an utterance function) to convey its own state information to the user. For example, if the air conditioner in the living room has no utterance function, the state information of the air conditioner is reported using the utterance function of a television installed in the same living room.
In this way, even a home appliance without an utterance function can have its state information reported to the user by having a home appliance with an utterance function speak on its behalf.
In Embodiment 1 and Modifications 1 and 2, the output destination is selected in units of home appliances, but it may instead be selected in units of rooms. In that case, the content is reported using a home appliance that is operating in the selected room. This avoids the situation in which the content cannot be reported because the appliance assigned to report it is not operating, and ensures that the content reliably reaches the user.
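The room-unit selection above can be sketched as follows. The function name and data shapes are hypothetical; the embodiment does not prescribe an implementation:

```python
# Hypothetical sketch of room-unit output selection: within the selected
# room, the content is announced through an appliance that is currently
# operating, avoiding the un-announceable state described above.
def pick_announcer(room_appliances, is_operating):
    """room_appliances: appliance names in the selected room;
    is_operating: mapping from name to whether it is running now."""
    for name in room_appliances:
        if is_operating.get(name):
            return name          # first operating appliance announces
    return None                  # no appliance in the room is operating
```

Returning `None` corresponds to the fallback case in which no appliance in the room can report the content.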
In Embodiment 1 and Modifications 1 and 2, content is reported from a home appliance at a timing decided on the appliance side, but the user may instead instruct one home appliance to report the state information of another. For example, a user who has moved from the room where the air conditioner is installed to another room instructs a home appliance installed in the destination room to report the state information of the air conditioner.
Furthermore, Embodiment 1 and Modifications 1 and 2 have mainly described output by voice, that is, having the home appliance speak, as the means of reporting content. However, the content need not be reported by speech: as also described in Embodiment 1, it may be displayed as video, or a meaningful sound such as an alarm tone may be emitted instead of an utterance.
As described above, the present embodiment aggregates the utterance content of each utterance-capable home appliance on the cloud server and selects the home appliance to which it is delivered, and thereby has the effect that the utterance content can be conveyed to the user more reliably.
Furthermore, according to the present embodiment, the user no longer misses information from home appliances in distant locations and can hear it at an appropriate time. Because the information is aggregated, the utterance content can be managed, so the user does not hear the same utterance content over and over. Moreover, when there are multiple users (for example, in a household), associating each home appliance with its most frequent user makes it possible for the appropriate user to hear the utterance content.
[Embodiment 2]
Another embodiment of the present invention is described in detail below.
(Configuration of the remote control system)
FIG. 5 shows the configuration of the remote control system 2 according to the present embodiment. As shown in FIG. 5, the remote control system 2 includes a robot (remote control device, operation instruction unit) 60 having a voice utterance function, a voice recognition function, and an infrared transmission function, and, as devices that receive operation instructions from the robot 60, a first home appliance 70 having a voice utterance function and an infrared reception function, a second home appliance 8 having a voice utterance function and a voice recognition function, and a third home appliance 90 having a notification sound output function and an infrared reception function.
Using its voice utterance function, the robot 60 speaks to the user and gives spoken operation instructions to the second home appliance 8, which has a voice recognition function. Using its own voice recognition function, the robot 60 recognizes the content of the user's speech and the utterances of the first home appliance 70 and the second home appliance 8, both of which have voice utterance functions. Using its infrared transmission function, the robot 60 transmits infrared signals indicating operation instructions to the first home appliance 70 and the third home appliance 90, which have infrared reception functions.
That is, the robot 60 issues operation commands to the second home appliance 8 by voice, and issues operation commands to the first home appliance 70 and the third home appliance 90 by infrared. The robot 60 also receives state notifications from the first home appliance 70 and the second home appliance 8 by voice, and receives state notifications from the third home appliance 90 by notification sound. In other words, the robot 60 functions as an acquisition unit that acquires the notification sound or voice output by a home appliance. Furthermore, the robot 60 performs state confirmation of the second home appliance 8 by voice.
Here, state notification means notifying state information that indicates the state of a home appliance, and state confirmation means checking the state information of a home appliance. That is, the robot 60 performs state confirmation of the second home appliance 8 by voice, and in response receives the state information of that device from the second home appliance 8 by voice.
Accordingly, after giving an operation instruction to a home appliance, the robot 60 in the remote control system 2 receives a state notification (voice or notification sound) from that appliance and analyzes it. If the analysis shows that the appliance is not operating as instructed, the robot 60 gives the operation instruction to the appliance again; if the appliance is operating as instructed, the robot 60 stops issuing the instruction. At this time, the robot 60 may also convey the result of the operation instruction to the user.
(Robot)
FIG. 6 shows the schematic configuration of the robot 60. As shown in FIG. 6, the robot 60 includes a control unit 61, a data storage unit 62, an infrared transmission/reception unit 63, a microphone 64, a speaker 65, a display unit 66, an operation unit 67, a camera 68, and sensors 69 (various sensors such as a human presence sensor and a temperature sensor).
The control unit 61 is a computer device including an arithmetic processing unit such as a CPU or a dedicated processor, and is a block that controls the operation of each unit of the robot 60. The control unit 61 functions as a voice recognition unit 611, a notification sound analysis unit (analysis unit) 612, a state specifying unit 613, an output control unit 614, a voice synthesis unit 615, an operation command specifying unit 616, and a determination unit 617.
The voice recognition unit 611 is a block that recognizes input voice from the user, the first home appliance 70, and the second home appliance 8. Specifically, the voice recognition unit 611 converts voice data input from the microphone 64 into text data, analyzes the text data, and extracts words and phrases. Known techniques can be used for the voice recognition processing.
The notification sound analysis unit 612 is a block that analyzes the notification sound from the third home appliance 90. Specifically, the notification sound analysis unit 612 identifies which notification sound was input from the microphone 64, and thus what state the home appliance is in. A plurality of types of notification sound are prepared to match the states of the home appliance, such as a tone of a predetermined frequency output for a fixed time ("beep") or output at fixed intervals ("beep, beep, ...").
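The role of the notification sound analysis unit 612 can be sketched as a lookup from a heard beep pattern to an appliance state. The table entries below are invented examples, not sounds defined by the embodiment:

```python
# Hypothetical sketch of the notification sound analysis unit 612: a heard
# beep is described by its duration class and repetition count, then matched
# against a prepared table of notification sounds ("beep" vs "beep, beep, ...").
BEEP_TABLE = {
    ("long", 1): "operation finished",   # one sustained tone
    ("short", 3): "error detected",      # three tones at fixed intervals
}

def identify_beep(duration_class, count):
    """Return the appliance state a beep pattern signals, if known."""
    return BEEP_TABLE.get((duration_class, count), "unknown notification sound")

print(identify_beep("short", 3))  # → error detected
```

A real analyzer would first extract the frequency and timing of the tones from the microphone signal; only the final pattern-to-state matching is shown here.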
The state specifying unit 613 is a block that specifies the state of a home appliance from the voice recognized by the voice recognition unit 611 and the notification sound (analysis result) identified by the notification sound analysis unit 612. Specifically, the state specifying unit 613 specifies the state of the home appliance from the recognized voice, the identified notification sound, and the state information stored in the state information storage unit 621 of the data storage unit 62.
The output control unit 614 is a block that determines, from the state of the home appliance specified by the state specifying unit 613, whether to give the operation instruction to the home appliance again. Specifically, the output control unit 614 determines whether the specified appliance state indicates operation in accordance with the operation instruction. If it determines that the appliance is not operating as instructed, it gives the operation instruction to the appliance again, either by infrared using the infrared transmission/reception unit 63 or by voice using the speaker 65 and the voice synthesis unit 615. If it determines that the appliance is operating as instructed, it either does nothing or informs the user that the appliance is operating as instructed. In the latter case, since it is sufficient for the user to know that the appliance is operating as instructed, this may be conveyed by voice using the speaker 65 or by display on the display unit 66.
The voice synthesis unit 615 is a block that generates (synthesizes) voice data. Specifically, the voice synthesis unit 615 synthesizes, from text data indicating an operation instruction, voice data to be output from the speaker 65.
The operation command specifying unit 616 is a block that specifies the operation command for a home appliance from the content of the user's spoken operation instruction. Specifically, the operation command specifying unit 616 extracts the operation command corresponding to the content of the user's spoken operation instruction from the operation command storage unit 622 of the data storage unit 62.
The determination unit 617 is a block that determines whether to accept a recognized voice command. The determination unit 617 is described in detail later.
(Retry processing)
Retry processing of operation instructions to home appliances using the robot 60 configured as above is described below with reference to FIG. 7.
First, the user issues a voice command to the robot 60. On receiving the voice command, the robot 60, as described above, specifies the operation command from the voice command using the operation command specifying unit 616, and transmits the operation command by infrared signal to the other home appliance (the home appliance designated by the voice command). The other home appliance accepts the operation command given by the infrared signal. Here, the other home appliance is assumed to be one having a voice utterance function, like the first home appliance 70.
Next, the other home appliance makes a voice utterance in accordance with the accepted operation command, and the robot 60 receives this utterance. If the utterance indicates that the appliance has started operating in accordance with the operation command, the robot 60 confirms that the operation command has been completed; if the utterance indicates that the appliance is not operating in accordance with the operation command, the robot 60 transmits the infrared signal to the other home appliance again to execute the operation command.
In this way, the retry processing is performed until it is confirmed that the home appliance has started operating in accordance with the operation command.
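The retry loop above can be sketched as follows. This is a hypothetical illustration: `send_ir` and `hear_reply` stand in for the infrared transmitter and the microphone plus voice recognition, and none of these names come from the embodiment:

```python
# Hypothetical sketch of the retry processing: the infrared command is
# re-sent until the appliance's spoken reply confirms that it has started
# operating as instructed, up to a bounded number of attempts.
def retry_until_confirmed(send_ir, hear_reply, max_tries=3):
    for attempt in range(1, max_tries + 1):
        send_ir()                                      # (re)issue the command
        if hear_reply() == "operating as instructed":
            return attempt                             # confirmed; stop retrying
    return None                                        # still unconfirmed
```

Returning the attempt count lets the caller report the result of the operation instruction to the user, as the text notes the robot 60 may do.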
In the above example, the processing ends once the robot 60 confirms that the home appliance has started operating in accordance with the operation command, but the robot 60 may also notify the user that it has confirmed this.
Also, in the above example, the robot 60 confirms from the voice utterance of the other home appliance whether the appliance is operating in accordance with the operation command; if the home appliance has no voice utterance function, the robot 60 instead confirms this from the notification sound.
For example, suppose the robot 60 hears a certain notification sound, asks the user "Did something happen?", and the user replies "The air conditioner stopped." The robot then stores that notification sound in the notification sound storage unit 624 of the data storage unit 62 as the "air conditioner stopped" notification sound. From then on, when the robot 60 hears the same notification sound, it understands that the air conditioner has stopped.
In this way, by storing the notification sounds of various situations in association with those situations, the robot 60 can perform an operation corresponding to a detected notification sound. This makes it possible for the robot 60 to decide whether to retry merely by hearing the notification sound.
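The learn-by-asking behaviour can be sketched as a small cache keyed by the beep. The class name and the `ask_user` callback are hypothetical stand-ins for the robot's spoken question:

```python
# Hypothetical sketch of learning an unknown notification sound: on first
# hearing a beep the robot asks the user what happened and stores the answer
# (cf. the notification sound storage unit 624); the same beep is then
# understood without asking again.
class BeepMemory:
    def __init__(self, ask_user):
        self.known = {}            # beep signature -> learned meaning
        self.ask_user = ask_user   # callback asking "Did something happen?"

    def interpret(self, beep):
        if beep not in self.known:
            self.known[beep] = self.ask_user()  # e.g. "the air conditioner stopped"
        return self.known[beep]
```

After one exchange with the user, `interpret` resolves the same beep from memory alone, which is what allows the retry decision to be made from the sound by itself.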
An operation command from the robot 60 is issued by transmitting infrared light, so if the infrared transmission direction is not pointed at the home appliance targeted by the operation command, the command does not reach the appliance. In such a case, no voice utterance or notification sound is emitted from the home appliance targeted by the operation command.
Accordingly, in a situation where no voice utterance or notification sound is emitted from the home appliance targeted by the operation command, the robot 60 can retry by, for example, changing its orientation when no voice utterance or notification sound is heard within a predetermined time after transmitting the operation command. If no voice utterance or notification sound is heard within the predetermined time even after the robot 60 has changed its orientation, the retry processing ends, and the user is informed that the retry processing has ended.
To realize such processing, the robot 60 includes a display unit 66 and an operation unit 67. The display unit 66 is a block that displays an image of the robot 60's facial expression; in the present embodiment it displays by rear projection, although this is not a limitation. The operation unit 67 is a block that executes the movements of the robot 60.
The operation unit 67 rotates the robot 60. As shown in FIG. 8, the operation unit 67 rotates the robot 60 to a position from which it can issue operation commands to the first home appliance 70, and to a position from which it can hear the voice utterances output from the speaker 70a of the first home appliance 70.
That is, the robot 60 estimates the position of the first home appliance 70 from the direction in which voice utterances are emitted from its speaker 70a. By estimating the position of each home appliance in this way and storing it in the arrangement direction storage unit 623 of the data storage unit 62, the robot 60 can be rotated by the operation unit 67 toward the direction in which the home appliance to be operated is installed, and then execute the operation command.
As described above, according to the present embodiment, not only can the robot give the impression of conversing with the other home appliances, but voice-based home appliance operation is achieved naturally, without the user having to indicate the position of each home appliance every time.
(Preventing malfunctions)
The robot 60 configured as above issues operation commands to the home appliance to be operated in response to a voice command from the user.
However, when the user is holding an ordinary conversation (one not intended as an operation command) in the room where the robot 60 is installed, the robot 60 may recognize words uttered by the user as a voice command and issue an operation command to a home appliance. That is, a home appliance the user did not intend to operate may be activated. For example, if the robot 60 recognizes a voice command to turn on the air conditioner from words the user happens to utter, the resulting operation command may turn on the air conditioner.
Therefore, when the robot 60 recognizes a voice command, the determination unit 617 in the control unit 61 determines, according to preset judgment criteria, whether to accept the voice command and give the operation instruction to the home appliance. As one such judgment criterion, the following example uses the result of asking the user, the source of the voice command, whether to accept it, by speaking back through the utterance function.
For example, as shown in FIG. 9, when the user 80 says "Turn on the air conditioner?" (80a), the robot 60 asks back "I'll turn it on, OK?" (60a), and the user 80 replies "Yes, please" (80b). The robot 60 then executes the operation command to turn on the air conditioner. The processing in this case is as shown in FIG. 10: compared with the processing shown in FIG. 7, a two-step exchange between the robot 60 and the user 80 is added before the robot 60 transmits the infrared signal to execute the operation command.
As described above, when the robot 60 recognizes a voice command, it asks the user 80 back and then determines from the user 80's answer whether to accept the voice command, so malfunctions due to misrecognition can be prevented.
Note that the user 80 may find it stressful to hear the same question from the robot 60 every time. Therefore, when asking the user whether to accept a voice command, if the voice command is the same as the previous one, the wording of the question is changed from the previous time. For example, if the robot 60 first asked "Shall I turn on the air conditioner?", from the second time onward it might ask "How about turning on the air conditioner?" or "Is it all right to turn on the air conditioner?". Even if the user is asked about the same content as before, the different wording reduces the stress felt by the user 80. In this case, the wording should be chosen so that the question does not sound unnatural.
The above is a method in which the robot confirms with the user. Other confirmation methods include the following.
(1) Increasing the variation of confirming phrases such as "I'm going to turn on the air conditioner, OK?".
(2) Increasing the variation of asking-back phrases such as "Do you want the air conditioner on?".
(3) Increasing the variation of phrases that reflect a reading from the temperature sensor. For example, depending on the temperature detected by the temperature sensor: "It's hot, isn't it? Shall I turn on the air conditioner?", "It's cold, isn't it? Turn on the air conditioner?", or "It feels just right to me, but shall I turn on the air conditioner?".
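Methods (1)-(3) can be sketched together as a phrase selector. The thresholds and phrase lists below are invented examples, not values from the embodiment:

```python
# Hypothetical sketch of phrase variation for methods (1)-(3): the wording
# cycles so consecutive confirmations differ, and with method (3) the phrase
# reflects the temperature sensor reading.
def confirmation_phrase(ask_count, temperature=None):
    if temperature is not None:                      # method (3)
        if temperature >= 28:
            return "It's hot, isn't it? Shall I turn on the air conditioner?"
        if temperature <= 15:
            return "It's cold, isn't it? Turn on the air conditioner?"
        return "It feels just right to me, but shall I turn on the air conditioner?"
    variations = [                                   # methods (1) and (2)
        "I'll turn on the air conditioner, OK?",
        "Shall I turn on the air conditioner?",
        "Is it all right to turn on the air conditioner?",
    ]
    return variations[ask_count % len(variations)]   # differs from last time
```

Passing the count of how often the same command has been asked about is one simple way to guarantee the wording changes from the previous time.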
Another way to determine whether to accept a voice command is to use the camera 68 and the sensor 69 provided in the robot 60. The camera 68 photographs the user 80 and is used to check whether the user 80 is facing the camera 68. Since the robot 60 according to the present embodiment is rotated by the operation unit 67 to face the direction from which it caught the voice, the camera 68 can photograph the user from the front. The sensor 69 is a human presence sensor that detects whether the user 80 is nearby. In this case too, the determination unit 617 described above determines whether to accept the voice command.
Specifically, when the robot 60 recognizes a voice command, the determination unit 617 accepts the voice command if the image captured by the camera 68 confirms that the user 80 is facing front, and does not accept it otherwise. When the user 80 is not facing front, the robot may, as described above, use the utterance function to ask the user 80 whether the voice command should be accepted.
 In addition, when the robot 60 recognizes a voice command, the determination unit 617 determines that the user 80 is nearby and accepts the voice command if the sensor 69 detects a person; if the sensor 69 does not detect a person, the determination unit 617 determines that the user 80 is not nearby and does not accept the voice command. In other words, voice operation is not enabled unless the presence of a person is detected.
 Note that the detection result of the sensor 69 alone does not make clear whether the detected person is the user 80 who issued the voice command, so the video captured by the camera 68 described above may be used to determine clearly whether the person is the user 80 who issued the voice command.
 In this way, by using the camera 68, the sensor 69, and the like, it is possible, when the robot 60 recognizes a voice command, to determine whether or not to accept it without asking the user 80 whether the voice command may be accepted.
 When the robot 60 includes a touch sensor, voice operation may be enabled while a hand is held over the touch sensor. Specifically, when a voice command is recognized while the user 80 holds a hand over the touch sensor of the robot 60, the voice command is accepted without confirming with the user 80, and the operation instruction based on the voice command is executed.
 As described above, with voice commands alone there is a possibility of a home appliance being operated due to misrecognition; according to the present embodiment, however, such malfunctions can be prevented by adding a confirmation exchange.
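The acceptance determination described above (camera orientation, presence detection, and the touch-sensor override) can be sketched roughly as follows. This is a minimal illustration, not the embodiment's implementation; the class, its attribute names, and the boolean inputs standing in for the camera 68, sensor 69, and touch sensor are all hypothetical.

```python
class CommandAcceptanceJudge:
    """Hypothetical sketch of determination unit 617: decides whether a
    recognized voice command is accepted without asking the user back."""

    def __init__(self, facing_front, person_nearby, touch_covered=False):
        self.facing_front = facing_front    # camera 68: user faces the robot
        self.person_nearby = person_nearby  # sensor 69: a person is detected
        self.touch_covered = touch_covered  # hand held over the touch sensor

    def accept(self):
        # A hand over the touch sensor enables voice operation unconditionally.
        if self.touch_covered:
            return True
        # Voice operation is not enabled unless a person is detected nearby.
        if not self.person_nearby:
            return False
        # The robot turns toward the voice, so a frontal face in the camera
        # image indicates the speaker is addressing the robot.
        return self.facing_front

# Accepted when the user is nearby and facing the camera:
print(CommandAcceptanceJudge(facing_front=True, person_nearby=True).accept())    # True
# Rejected when nobody is detected, unless the touch sensor is covered:
print(CommandAcceptanceJudge(facing_front=False, person_nearby=False).accept())  # False
print(CommandAcceptanceJudge(False, False, touch_covered=True).accept())         # True
```

Keeping each check as an early return mirrors the order in the text: the explicit touch gesture takes priority, then presence, then gaze.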
[Embodiment 3]
 Hereinafter, another embodiment of the present invention will be described in detail. For convenience of explanation, members having the same functions as those described in Embodiment 2 are denoted by the same reference numerals, and description thereof is omitted.
(Configuration of the character data distribution system)
 FIG. 11 is a diagram showing the configuration of the character data distribution system 3 according to the present embodiment. In the character data distribution system 3, as shown in FIG. 11, a robot 100 and a server (distribution server) 300 are connected to each other via a communication network. The server 300 is connected to an external content server 400 that provides content (characters), and is connected via the communication network to a smartphone (terminal device) 201 and a PC 202. As the communication network, for example, the Internet can be used. A telephone line network, a mobile communication network, a CATV (Cable TeleVision) communication network, a satellite communication network, or the like can also be used.
 In the character data distribution system 3, the robot 100 downloads from the server 300 character data associated with a preset account 301, and projects facial expression data corresponding to emotions such as joy, anger, sorrow, and pleasure, taken from the downloaded character data, as a face image from the inside onto a face region 100a corresponding to the face of the robot 100. That is, the robot 100 in the character data distribution system 3 estimates its own emotions such as joy, anger, sorrow, and pleasure by a predetermined algorithm, and displays the facial expression data included in the character data on the face region 100a, using the projector 66a and the reflecting mirror 66b of the display unit 66, so as to make an expression corresponding to the estimated emotion.
 Here, the emotion of the robot is determined by parameterizing the internal state of the main unit (remaining battery level, etc.), the external environment (temperature/humidity/brightness/time, etc.), and the relationship with the user based on the number, frequency, and content of conversations, and by comprehensively calculating these parameters with a probability table. For example, if the remaining battery level is high, the temperature is comfortable, and the relationship with the user is good (the robot is often spoken to, praised, and so on), the robot is in a good mood and selects smiling facial expression data. Late at night or early in the morning, time is used as a parameter and a sleepy expression is selected.
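As a rough illustration of the parameterization just described, the sketch below folds internal state, environment, and the user relationship into a single mood label. The weights, thresholds, and the simple score that stands in for the probability table are invented for this example and are not taken from the embodiment.

```python
def estimate_mood(battery, temperature_c, hour, praise_count):
    """Hypothetical sketch: derive a mood label from internal state,
    external environment, and the relationship with the user. The real
    system calculates comprehensively with a probability table; fixed
    thresholds are used here only to keep the sketch short."""
    # Late at night or early in the morning, time dominates: look sleepy.
    if hour >= 23 or hour < 6:
        return "sleepy"
    score = 0
    score += 1 if battery > 0.5 else -1              # internal state
    score += 1 if 18 <= temperature_c <= 26 else -1  # external environment
    score += 1 if praise_count > 3 else 0            # relationship with user
    return "smiling" if score >= 2 else "neutral"

print(estimate_mood(battery=0.9, temperature_c=22, hour=14, praise_count=5))  # smiling
print(estimate_mood(battery=0.9, temperature_c=22, hour=2, praise_count=5))   # sleepy
```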
(Robot)
 FIG. 12 shows a schematic configuration of the robot 100. The robot 100 has substantially the same configuration as the robot 60 shown in FIG. 6 described in Embodiment 2, differing in that it includes a communication unit 101 and a facial expression database (download buffer) 102. Description of the same components is omitted, and only the differing components are described below.
 The communication unit 101 is a means for communicating with the server 300 in the character data distribution system 3. Here, character data refers to a collection of facial expression data, one piece for each emotion such as joy, anger, sorrow, and pleasure. In other words, character data = a group of facial expression data.
 The facial expression database 102 stores downloaded character data. Note that in the initial state, the facial expression database 102 stores, as basic character data, a group of facial expression data corresponding to basic emotions such as joy, anger, sorrow, and pleasure.
(Character data download)
 FIG. 13 is a diagram illustrating the download of character data for changing the expression of the robot. The server 300 stores a plurality of types of character data for each account 301 and, as necessary, distributes character data to the robot 100 by download.
 FIG. 13 shows an example in which two types of character data, character data (1) and character data (2), are stored in one account 301 of the server 300. The downloaded character data (1) or (2) is stored in place of the basic character data already stored in the facial expression database 102 in the robot 100.
 Note that download distribution may be performed not in units of character data but in units of facial expression data. In that case, the downloaded facial expression data is stored in place of the corresponding facial expression data of the basic character.
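The two replacement granularities, a whole character or a single expression, can be sketched with a plain mapping standing in for the facial expression database 102. The dictionary layout, file names, and function names below are hypothetical illustrations, not the format used by the system.

```python
# Hypothetical sketch of the facial expression database 102: downloaded
# character data replaces the stored set as a whole, while a single
# downloaded expression replaces only the matching entry.
basic_character = {"joy": "joy_basic.png", "anger": "anger_basic.png",
                   "sorrow": "sorrow_basic.png", "pleasure": "pleasure_basic.png"}
expression_db = dict(basic_character)  # initial state: the basic character data

def install_character(character_data):
    """Download in units of character data: swap the whole set."""
    expression_db.clear()
    expression_db.update(character_data)

def install_expression(emotion, image):
    """Download in units of expression data: swap one entry only."""
    expression_db[emotion] = image

install_expression("joy", "joy_cat.png")
print(expression_db["joy"])    # joy_cat.png   (replaced)
print(expression_db["anger"])  # anger_basic.png (untouched)
```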
 An instruction to the server 300 to distribute character data or facial expression data is issued from the smartphone 201 or the PC 202 operated by the user of the robot 100. Specifically, the user instructs, from the smartphone 201 or the PC 202, the distribution of character data or facial expression data stored in a predetermined account 301 in the server 300.
 Since a plurality of robots 100 can be associated with one account 301, the same character data is downloaded and distributed to every robot 100 that can access the same account.
 If the smartphone 201 or the PC 202 operated by the user can access an account 301 (account (B)) different from its own account 301 (account (A)) in the server 300, character data can be downloaded and distributed to another robot 100 that can access account (B). Using this, character data can be downloaded to another person's robot; a company, for example, can have users download a trial version of character data. The user can then purchase the character data if the trial version is to their liking.
(Display of facial expressions)
 When new character data is stored in the facial expression database 102 by download distribution from the server 300, the robot 100 randomly extracts facial expression data of the new character data stored in the facial expression database 102 and displays it on the face region 100a.
 Note that the robot 100 does not normally have emotions, but by associating an emotion with each utterance content set in advance for dialog with the user, and, when the utterance is made, extracting from the facial expression database 102 the facial expression data indicating the emotion associated with that utterance content and displaying it on the face region 100a, the robot 100 can be made to appear to have emotions.
 Specifically, a numerical value is assigned in advance to each piece of facial expression data of the character data stored in the facial expression database 102, and the same numerical value as that assigned to the facial expression data is assigned to the emotion associated with each utterance content spoken by the robot 100. Thus, when the robot 100 makes a certain utterance, facial expression data having the same numerical value as that assigned to the emotion associated with the utterance content is extracted from the facial expression database 102 and displayed on the face region 100a.
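The numeric linkage between utterances and expressions can be sketched as two mappings sharing the same keys. The numbers, utterances, and file names below are invented for illustration; the embodiment specifies only that matching numerical values pair an utterance's emotion with an expression.

```python
# Hypothetical sketch: each expression carries a number, and each
# utterance's associated emotion carries the same number, so a simple
# lookup selects the matching expression.
EXPRESSIONS = {1: "smile.png", 2: "angry.png", 3: "sad.png"}   # numbered expression data
UTTERANCES = {"Good morning!": 1, "Don't do that!": 2, "I miss you...": 3}

def expression_for(utterance):
    # The emotion number linked to the utterance selects the expression
    # with the same number from the expression database.
    return EXPRESSIONS[UTTERANCES[utterance]]

print(expression_for("Good morning!"))  # smile.png
```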
(Creation of facial expression data)
 The character data distributed from the server 300 includes facial expression data corresponding to each emotion such as joy, anger, sorrow, and pleasure. In this case, the character data is created so that one type of facial expression data corresponds to one type of emotion, such as a feeling of joy, a feeling of anger, a feeling of sorrow, or a feeling of pleasure.
 However, human emotions are very complex, and various types of emotions may be combined. Therefore, by further increasing the variations of facial expression data, it becomes possible to express emotions closer to those of humans through facial expressions.
 FIG. 14 is a diagram showing variations of facial expression data in the normal state (the pleasure/joy state).
 FIG. 15 is a diagram showing variations of facial expression data in specific states (the anger/sorrow/trouble states and specific modes).
 That is, when creating facial expression data for the normal state, namely the pleasure and joy states, variations of facial expression data are increased in the direction of pleasure and joy as shown in FIG. 14, so that facial expression data indicating emotions derived from pleasure and joy can be created, such as facial expression data indicating a state of being in love and facial expression data indicating a state of comfort.
 On the other hand, for the facial expression data of the specific states, namely the angry state, the sad state, and the troubled state, each level, that is, the level of anger, the level of sadness, and the level indicating the degree of trouble, is divided into four stages as shown in (a) of FIG. 15, and facial expression data corresponding to each level is created.
 Facial expression data for specific modes can also be cited as facial expression data of specific states. Specifically, as shown in (b) of FIG. 15, the specific modes include an answering machine mode, a good-night mode, a dozing mode, and a remote control operation mode, and facial expression data corresponding to each mode is created.
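The leveled specific-state data and the per-mode data above can be sketched as two small tables. The file names below are hypothetical placeholders; only the four-stage structure per emotion and the one-expression-per-mode structure come from the text.

```python
# Hypothetical sketch of specific-state expression data: four stages per
# emotion as in (a) of FIG. 15, plus one expression per specific mode
# as in (b) of FIG. 15.
LEVELED = {
    "anger":   ["anger_1.png", "anger_2.png", "anger_3.png", "anger_4.png"],
    "sadness": ["sad_1.png", "sad_2.png", "sad_3.png", "sad_4.png"],
    "trouble": ["trouble_1.png", "trouble_2.png", "trouble_3.png", "trouble_4.png"],
}
MODES = {"answering_machine": "away.png", "good_night": "goodnight.png",
         "dozing": "doze.png", "remote_control": "remocon.png"}

def specific_expression(emotion, level):
    # Levels run from 1 (mild) to 4 (strong).
    return LEVELED[emotion][level - 1]

print(specific_expression("anger", 4))  # anger_4.png
```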
(Billing)
 Here, as shown in FIG. 11, the server 300 downloads and distributes character data to the robots 100 accessible for each user account 301. In this case, as described above, the user instructs the server 300 to perform the download distribution using the smartphone 201 or the PC 202.
 The character data downloaded and distributed to the account 301 of the server 300 is distributed from the external content server 400. In this case as well, the smartphone 201 or the PC 202 is used to instruct the server 300 to download it from the external content server 400.
 Here, when character data that has already been downloaded to the server 300 is downloaded and distributed to the robot 100, the user operating the smartphone 201 or the PC 202 that instructs the download distribution is not charged.
 On the other hand, when character data is downloaded and distributed from the external content server 400 to the account 301 of the server 300, the user operating the smartphone 201 or the PC 202 that instructs the download distribution is charged.
 In this case, what is actually charged is the smartphone 201 or PC 202 side that instructs the server 300 to perform the download distribution, not the robot 100 side that uses the downloaded character data.
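The billing rule described in the last three paragraphs reduces to a single condition, which can be stated as a one-line sketch; the function name and boolean interface are hypothetical.

```python
def charge_required(already_on_server: bool) -> bool:
    """Hypothetical sketch of the billing rule: the instructing terminal
    (smartphone 201 / PC 202) is charged only when the character data must
    first be delivered from the external content server 400. Redistribution
    of data already held in the account 301 is free."""
    return not already_on_server

print(charge_required(already_on_server=True))   # False: re-download to a robot
print(charge_required(already_on_server=False))  # True: fetched from the content server
```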
 As described above, according to the present embodiment, a face can be created, characters matching the user's preferences can be distributed, expansion into a face content distribution business is possible, character development matching the user's preferences is possible, and individuality can be given to the robot without changing its hardware configuration.
[Example of realization by software]
 The control blocks (particularly the control unit 21 and the control unit 61) of the home appliance 20, the robot 60, and the robot 100 may be realized by a logic circuit (hardware) formed on an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit).
 In the latter case, the home appliance 20, the robot 60, and the robot 100 each include a CPU that executes the instructions of a program, which is software realizing each function; a ROM (Read Only Memory) or storage device (these are referred to as a "recording medium") in which the program and various data are recorded so as to be readable by a computer (or CPU); a RAM (Random Access Memory) into which the program is loaded; and the like. The object of the present invention is achieved when the computer (or CPU) reads the program from the recording medium and executes it. As the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used. The program may be supplied to the computer via any transmission medium (such as a communication network or a broadcast wave) capable of transmitting the program. The present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
[Summary]
 A home appliance management system according to aspect 1 of the present invention is a home appliance management system 1 in which a plurality of home appliances 20 and a management server (utterance management server 10) that manages the home appliances 20 are connected to each other via a communication network. Each of the home appliances 20 includes a first state information output unit (state information extraction unit 24 and communication unit 23) that outputs state information indicating the state of the device itself to the management server (utterance management server 10) via the communication network, and a state information notification unit (output control unit 27 and voice output unit 22) that reports state information acquired from the management server (utterance management server 10) via the communication network. The management server (utterance management server 10) includes a second state information output unit (output control unit 18) that outputs the state information acquired from a home appliance 20 to at least one of the plurality of home appliances 20 connected to the communication network.
 According to the above configuration, the management server outputs the state information acquired from a home appliance via the communication network to at least one of the plurality of home appliances connected to the communication network, so that home appliance can acquire the state information of other home appliances.
 This makes it possible for a home appliance that has acquired the state information of another home appliance to notify the user of that state information. Note that a home appliance acquires from the management server not only the state information of other home appliances but also the state information of the device itself.
 Here, in order to reliably notify the user of the state information of home appliances, the following may be considered.
 In the home appliance management system according to aspect 2 of the present invention, in aspect 1, the second state information output unit preferably outputs state information to home appliances that are powered on among the home appliances connected to the communication network.
 According to the above configuration, a home appliance that is powered on, that is, a home appliance in an active state, is likely to be operated by the user or to have the user nearby; therefore, if the state information of other home appliances is reported from this active home appliance, the user can reliably learn the state information of the other home appliances.
 For example, when the active home appliance is a television and the other home appliance is a refrigerator, the television notifies the user watching it of the state information of the refrigerator (for example, that the door is open). This allows the user to learn the state information of the refrigerator, another home appliance, even while watching television.
 In the home appliance management system according to aspect 3 of the present invention, in aspect 1, the second state information output unit preferably outputs state information to a home appliance, among the home appliances connected to the communication network, set in advance according to the content of the acquired state information.
 According to the above configuration, the state information is output to a home appliance set in advance according to the content of the acquired state information, so the user can learn from that home appliance the state information corresponding to it.
 In the home appliance management system according to aspect 4 of the present invention, in any one of aspects 1 to 3, the first state information output unit preferably notifies (reports) the state information by voice.
 According to the above configuration, since the notification (report) of the state information is made by voice, the user can reliably learn the content of the state information.
 A home appliance according to aspect 5 of the present invention is a home appliance connected to a communication network together with a plurality of other home appliances, and includes an information acquisition unit that acquires, from another home appliance connected to the communication network, state information indicating the state of that home appliance, and a state information notification unit that reports the state information acquired by the information acquisition unit.
 The home appliance according to aspect 6 of the present invention, in aspect 5, preferably further includes a notification destination setting unit that sets a notification destination to which the state information acquired by the information acquisition unit is reported by the state information notification unit, and the notification destination setting unit preferably sets the notification destination of the state information according to the user's voice.
 The home appliance according to aspect 7 of the present invention, in aspect 5, preferably further includes a notification destination setting unit that sets a notification destination to which the state information acquired by the information acquisition unit is reported by the state information notification unit, and the notification destination setting unit preferably sets the notification destination of the state information to a room in which at least one home appliance connected to the communication network is installed.
 In the home appliance according to aspect 8 of the present invention, in any one of aspects 5 to 7, the state information notification unit preferably reports the state information by voice.
 A remote control device according to aspect 9 of the present invention includes an operation instruction unit that gives an operation instruction to a home appliance; an acquisition unit that acquires a notification sound or voice output by the home appliance in response to the operation instruction from the operation instruction unit; an analysis unit that analyzes the notification sound or voice acquired by the acquisition unit; and a state specifying unit that specifies the state of the home appliance from the analysis result of the analysis unit.
 According to the above configuration, the state specifying unit specifies the state of the home appliance from the result of analyzing the notification sound or voice emitted after an operation instruction has been given to the home appliance, so that the remote control device itself can learn the state of the home appliance to which the operation instruction was given.
 The remote control device according to aspect 10 of the present invention, in aspect 9, preferably further includes a determination unit that determines, from the state of the home appliance specified by the state specifying unit, whether or not to give the operation instruction to the home appliance again, and the operation instruction unit preferably gives the operation instruction to the home appliance again when the determination unit determines to do so.
 According to the above configuration, the operation instruction unit gives the operation instruction to the home appliance again when the determination unit determines to do so, so the operation instruction can be reliably given to the home appliance. In other words, the home appliance can be reliably operated.
 The remote control device according to aspect 11 of the present invention, in aspect 10, preferably further includes an installation direction specifying unit that specifies, from the notification sound or voice acquired by the acquisition unit, the installation direction of the home appliance as seen from the device itself, and a storage unit that stores the installation direction of the home appliance specified by the installation direction specifying unit; the operation instruction unit preferably gives the operation instruction toward the installation direction of the home appliance stored in the storage unit when the determination unit determines to give the operation instruction to the home appliance again.
 According to the above configuration, the operation instruction unit gives the operation instruction toward the installation direction of the home appliance stored in the storage unit when the determination unit determines to give the operation instruction again, so the operation instruction can be reliably given to the home appliance.
 A remote control device according to aspect 12 of the present invention is a robot that gives an operation instruction to a home appliance based on a received voice command, and includes a determination unit that, when a voice command is recognized, determines according to a preset criterion whether or not to accept the voice command and give the operation instruction to the home appliance.
 According to the above configuration, when a voice command is recognized, the determination unit determines according to a preset criterion whether or not to accept the voice command and give an operation instruction to the home appliance, thereby preventing a malfunction of the home appliance caused by a voice command not intended by the user. The following can be cited as the above criterion.
 The remote control device according to aspect 13 of the present invention, in aspect 12, preferably further has an utterance function, and the determination unit preferably determines whether or not to accept a recognized voice command according to the utterance content obtained by using the utterance function to ask back the user from whom the voice command was acquired.
 According to the above configuration, since the user can hear the question asked back by the remote control device as a voice, the user can reliably understand its content. This makes it possible to reliably prevent malfunction of the home appliance.
 本発明の態様14に係るリモコン装置は、態様13において、音声コマンドを受け付けるか否かを上記ユーザに聞き返すとき、当該音声コマンドが前回と同じ音声コマンドである場合、聞き返すための発話内容を前回と異ならせるのが好ましい。 When the remote control device according to aspect 14 of the present invention asks the user whether to accept a voice command in aspect 13, when the voice command is the same voice command as the previous time, the content of the utterance to be heard back is set as the previous time. It is preferable to make them different.
 上記構成によれば、同じ内容の確認であっても、異なる発話内容であるため、ユーザはストレスを感じにくくなる。 According to the above configuration, even if the same content is confirmed, since the content of the utterance is different, the user is less likely to feel stress.
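The ask-back behavior of aspects 13 and 14 can be sketched as follows. The confirmation phrasings and the yes/no vocabulary are hypothetical examples introduced for illustration, not wording taken from this publication:

```python
# Sketch of the ask-back confirmation in aspects 13-14: the device speaks
# a confirmation question, and rotates through different phrasings when
# the same command is confirmed repeatedly. All phrasings are assumptions.
CONFIRM_TEMPLATES = [
    "Shall I {cmd}?",
    "Do you want me to {cmd}?",
    "Just to check: should I {cmd}?",
]

class Confirmer:
    def __init__(self):
        self._counters = {}  # command -> how many times it has been asked

    def question_for(self, command):
        """Pick a phrasing, cycling so repeats of the same command differ."""
        n = self._counters.get(command, 0)
        self._counters[command] = n + 1
        return CONFIRM_TEMPLATES[n % len(CONFIRM_TEMPLATES)].format(cmd=command)

    @staticmethod
    def accepts(reply):
        """Decide from the user's spoken reply whether to proceed."""
        return reply.strip().lower() in {"yes", "ok", "go ahead", "sure"}
```

Cycling through templates is one simple way to satisfy the aspect-14 preference that consecutive confirmations of the same command use different utterances.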
 A robot according to aspect 15 of the present invention is a robot that displays a face image in a predetermined area, and is characterized by including a storage unit that stores facial expression data representing emotions, and a control unit that acquires from the storage unit the facial expression data corresponding to an emotion determined by a predetermined criterion and displays the facial expression data as the face image in the predetermined area.
 According to the above configuration, the display unit acquires from the storage unit the facial expression data matching an emotion determined by a predetermined criterion and displays it as the face image, so the robot itself can select and change the expression of its face image.
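A minimal sketch of the expression selection in aspect 15: a storage unit maps emotion labels to expression data, and a control layer fetches the entry for the currently determined emotion and hands it to the display. The emotion labels and the "neutral" fallback are assumptions made for the example:

```python
# Sketch of aspect 15: facial expression data keyed by emotion, selected
# by a control unit for display. The labels and the neutral fallback for
# unknown emotions are illustrative assumptions.
class ExpressionStore:
    def __init__(self, expressions):
        self._expressions = expressions  # emotion label -> expression asset id

    def get(self, emotion):
        # Fall back to a neutral face when the emotion is not in storage.
        return self._expressions.get(emotion, self._expressions["neutral"])

class FaceController:
    def __init__(self, store):
        self.store = store
        self.displayed = None

    def update(self, emotion):
        """Fetch the expression data for the emotion and 'display' it."""
        self.displayed = self.store.get(emotion)
        return self.displayed
```

With this separation, swapping the contents of the store (for example, with distributed expression data as in aspect 16) changes the robot's face without touching the control logic.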
 A facial expression data distribution system according to aspect 16 of the present invention is characterized in that the robot configured as described above and a distribution server that distributes, to the robot, facial expression data representing emotions such as joy, anger, sorrow, and pleasure are connected to each other via a communication network.
 According to the above configuration, facial expression data that changes the display of the robot's face is distributed from the distribution server, so the display of the robot's face can be changed to suit the user's preferences, enabling customization of facial expressions.
 In the facial expression data distribution system according to aspect 17 of the present invention, in aspect 16, it is preferable that a facial expression data providing server that provides facial expression data to the distribution server in exchange for a charge, and a terminal device that causes the facial expression data providing server to provide facial expression data to the distribution server by paying the charge, are further connected to the communication network.
 The present invention is not limited to the embodiments described above, and various modifications are possible within the scope of the claims. Embodiments obtained by appropriately combining the technical means disclosed in different embodiments are also included in the technical scope of the present invention. Furthermore, new technical features can be formed by combining the technical means disclosed in each embodiment.
 The present invention can be suitably used in, for example, a system in which a plurality of home appliances are connected to a communication network, and a remote control device that operates home appliances with voice commands.
1 Home appliance management system
2 Remote control system
3 Character data distribution system (display data distribution system)
8 Second home appliance
10 Utterance management server (management server)
11 Control unit
12 Storage unit
13 Communication unit
14 State information acquisition unit
15 State identification unit
16 Utterance content selection unit
17 Output destination selection unit (notification destination setting unit)
18 Output control unit
19 Content acquisition unit
20 Home appliance
21 Control unit
22 Audio output unit
23 Communication unit
24 State information extraction unit
25 Utterance content acquisition unit
26 Speech synthesis unit
27 Output control unit
30 External content server
40 Smartphone
50 Home appliance
51 Control unit
60 Robot
61 Control unit
62 Data storage unit
63 Infrared transmission/reception unit
64 Microphone
65 Speaker
66 Display unit
66a Projector
66b Reflecting mirror
67 Operation unit
68 Camera
69 Sensor
70 First home appliance
70a Speaker
80 User
90 Third home appliance
100 Robot
100a Face area
101 Communication unit
102 Facial expression database
121 Utterance content storage unit
122 Output destination database
201 Smartphone
300 Server
301 Account
400 External content server
611 Voice recognition unit
612 Notification sound analysis unit
613 State identification unit
614 Output control unit
615 Speech synthesis unit
616 Operation command identification unit
617 Determination unit
621 State information storage unit
622 Operation command storage unit
623 Arrangement direction storage unit
624 Notification sound storage unit

Claims (5)

  1.  A home appliance management system in which a plurality of home appliances and a management server that manages the home appliances are connected to each other via a communication network, wherein
     each of the home appliances comprises:
     a first state information output unit that outputs state information indicating a state of its own device to the management server via the communication network; and
     a state information notification unit that reports state information acquired from the management server via the communication network, and
     the management server comprises:
     a second state information output unit that outputs the state information acquired from a home appliance to at least one of the plurality of home appliances connected to the communication network.
  2.  A home appliance connected to a communication network together with a plurality of other home appliances, comprising:
     an information acquisition unit that acquires, from another home appliance connected to the communication network, state information indicating a state of that home appliance; and
     a state information notification unit that reports the state information acquired by the information acquisition unit.
  3.  A remote control device comprising:
     an operation instruction unit that issues an operation instruction to a home appliance;
     an acquisition unit that acquires a notification sound or voice output by the home appliance in response to the operation instruction of the operation instruction unit;
     an analysis unit that analyzes the notification sound or voice acquired by the acquisition unit; and
     a state identification unit that identifies a state of the home appliance from an analysis result of the analysis unit.
  4.  A remote control device that issues an operation instruction to a home appliance based on a received voice command, comprising:
     a determination unit that, when a voice command is recognized, determines according to a preset criterion whether to accept the voice command and issue an operation instruction to the home appliance.
  5.  A robot that displays a face image in a predetermined area, comprising:
     a storage unit that stores facial expression data representing emotions; and
     a control unit that acquires, from the storage unit, the facial expression data corresponding to an emotion determined by a predetermined criterion and displays the facial expression data as the face image in the predetermined area.
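The state-information relay of claim 1 can be illustrated with a small in-memory sketch: each appliance reports its state to a management server, which forwards it to at least one other appliance on the network. The class and method names, and the message format, are assumptions made for illustration only:

```python
# Sketch of claim 1: appliances report state to a management server,
# which relays it to the other appliances on the network. All names
# and message shapes are illustrative assumptions.
class Appliance:
    def __init__(self, name, server):
        self.name = name
        self.server = server
        self.notified = []          # state info received from the server
        server.register(self)

    def report_state(self, state):  # first state information output unit
        self.server.receive(self.name, state)

    def notify(self, message):      # state information notification unit
        self.notified.append(message)

class ManagementServer:
    def __init__(self):
        self.appliances = []

    def register(self, appliance):
        self.appliances.append(appliance)

    def receive(self, source, state):  # second state information output unit
        # Forward the state to every appliance other than the reporter.
        for a in self.appliances:
            if a.name != source:
                a.notify(f"{source}: {state}")
```

In this sketch, a washing machine finishing its cycle would be reported to the server, which notifies the other registered appliances so one of them can announce the event near the user.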
PCT/JP2015/074117 2014-10-03 2015-08-26 Home appliance management system, home appliance, remote control device, and robot WO2016052018A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014205268A JP2016076799A (en) 2014-10-03 2014-10-03 Consumer electronics administrative system, consumer electronics, remote-control device, and robot
JP2014-205268 2014-10-03

Publications (1)

Publication Number Publication Date
WO2016052018A1 true WO2016052018A1 (en) 2016-04-07

Family

ID=55630064

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/074117 WO2016052018A1 (en) 2014-10-03 2015-08-26 Home appliance management system, home appliance, remote control device, and robot

Country Status (2)

Country Link
JP (1) JP2016076799A (en)
WO (1) WO2016052018A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109506349A (en) * 2017-09-15 2019-03-22 夏普株式会社 Network system, information processing method and server
WO2019069596A1 (en) * 2017-10-03 2019-04-11 東芝ライフスタイル株式会社 Household appliance system
EP3454333A4 (en) * 2016-05-03 2019-12-25 LG Electronics Inc. -1- Electronic device and control method thereof
JP2020085313A (en) * 2018-11-22 2020-06-04 ダイキン工業株式会社 Air conditioning system
WO2020158615A1 (en) * 2019-01-29 2020-08-06 ダイキン工業株式会社 Air conditioning system
CN112331195A (en) * 2019-08-05 2021-02-05 佛山市顺德区美的电热电器制造有限公司 Voice interaction method, device and system
CN113037600A (en) * 2019-12-09 2021-06-25 夏普株式会社 Notification control device, notification control system, and notification control method
CN114040265A (en) * 2017-07-14 2022-02-11 大金工业株式会社 Device operating system

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8600120B2 (en) 2008-01-03 2013-12-03 Apple Inc. Personal computing device control using face detection and recognition
US9002322B2 (en) 2011-09-29 2015-04-07 Apple Inc. Authentication with secondary approver
US9898642B2 (en) 2013-09-09 2018-02-20 Apple Inc. Device, method, and graphical user interface for manipulating user interfaces based on fingerprint sensor inputs
US10043185B2 (en) 2014-05-29 2018-08-07 Apple Inc. User interface for payments
CN110139732B (en) * 2016-11-10 2023-04-04 华纳兄弟娱乐公司 Social robot with environmental control features
KR101949497B1 (en) * 2017-05-02 2019-02-18 네이버 주식회사 Method and system for processing user command to provide and adjust operation of device or range of providing contents accoding to analyzing presentation of user speech
US11048995B2 (en) * 2017-05-16 2021-06-29 Google Llc Delayed responses by computational assistant
KR102143148B1 (en) 2017-09-09 2020-08-10 애플 인크. Implementation of biometric authentication
JP2019068319A (en) * 2017-10-03 2019-04-25 東芝ライフスタイル株式会社 Consumer-electronics system
JP6960823B2 (en) * 2017-10-30 2021-11-05 三菱電機株式会社 Speech analysis device, speech analysis system, speech analysis method and program
JP2019101492A (en) * 2017-11-28 2019-06-24 トヨタ自動車株式会社 Communication apparatus
JP2019103073A (en) * 2017-12-06 2019-06-24 東芝ライフスタイル株式会社 Electrical apparatus and electrical apparatus system
JP6530528B1 (en) * 2018-03-26 2019-06-12 株式会社エヌ・ティ・ティ・データ Information processing apparatus and program
JP6938415B2 (en) * 2018-03-29 2021-09-22 東京瓦斯株式会社 Alarm robots, programs and systems
US11170085B2 (en) 2018-06-03 2021-11-09 Apple Inc. Implementation of biometric authentication
JP7170428B2 (en) * 2018-06-08 2022-11-14 三菱電機株式会社 ELECTRICAL DEVICE, COMMUNICATION ADAPTER, SETTING METHOD FOR ELECTRICAL DEVICE, AND PROGRAM
JP6463545B1 (en) * 2018-08-22 2019-02-06 株式会社ネイン Information processing apparatus, computer program, and information processing method
US10860096B2 (en) 2018-09-28 2020-12-08 Apple Inc. Device control using gaze information
US11100349B2 (en) 2018-09-28 2021-08-24 Apple Inc. Audio assisted enrollment
US20210158682A1 (en) * 2019-03-26 2021-05-27 Panasonic Intellectual Property Management Co., Ltd. Information notification system and information notification method
US20220351600A1 (en) * 2019-03-26 2022-11-03 Sony Group Corporation Information processing apparatus, information processing method, and information processing program
WO2020195387A1 (en) * 2019-03-26 2020-10-01 パナソニックIpマネジメント株式会社 Information notification system and information notification method
JP7253975B2 (en) * 2019-05-20 2023-04-07 三菱電機株式会社 Notification system
JP2021068370A (en) * 2019-10-28 2021-04-30 ソニー株式会社 Information processor, information processing method, and program
JP7422455B2 (en) 2019-10-29 2024-01-26 キヤノン株式会社 Communication device, communication device control method, program
JP7341426B2 (en) * 2019-11-27 2023-09-11 国立大学法人岩手大学 Notification system, control device in the notification system, and control method in the notification system
JP7458765B2 (en) 2019-12-12 2024-04-01 東芝ライフスタイル株式会社 Information processing systems, home appliances, and programs
JP7366734B2 (en) 2019-12-19 2023-10-23 東芝ライフスタイル株式会社 notification system
WO2021131682A1 (en) * 2019-12-23 2021-07-01 ソニーグループ株式会社 Information processing device, information processing method, and program
JP7442330B2 (en) 2020-02-05 2024-03-04 キヤノン株式会社 Voice input device and its control method and program
CN116194865A (en) * 2020-10-16 2023-05-30 松下知识产权经营株式会社 Notification control device, notification control system, and notification control method
EP4231658A4 (en) * 2020-10-16 2024-03-27 Panasonic Ip Man Co Ltd Notification control apparatus, notification control system, and notification control method
US20230032760A1 (en) 2021-08-02 2023-02-02 Bear Robotics, Inc. Method, system, and non-transitory computer-readable recording medium for controlling a serving robot

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003162626A (en) * 2001-11-22 2003-06-06 Sharp Corp Information notifying system and apparatus
JP2013162314A (en) * 2012-02-03 2013-08-19 Sharp Corp Notification system and notification method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003162626A (en) * 2001-11-22 2003-06-06 Sharp Corp Information notifying system and apparatus
JP2013162314A (en) * 2012-02-03 2013-08-19 Sharp Corp Notification system and notification method

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11030996B2 (en) 2016-05-03 2021-06-08 Lg Electronics Inc. Electronic device and control method thereof
EP3454333A4 (en) * 2016-05-03 2019-12-25 LG Electronics Inc. -1- Electronic device and control method thereof
CN114040265B (en) * 2017-07-14 2024-05-24 大金工业株式会社 Device operating system
CN114040265A (en) * 2017-07-14 2022-02-11 大金工业株式会社 Device operating system
JP2019052797A (en) * 2017-09-15 2019-04-04 シャープ株式会社 Network system, information processing method and server
CN109506349A (en) * 2017-09-15 2019-03-22 夏普株式会社 Network system, information processing method and server
WO2019069596A1 (en) * 2017-10-03 2019-04-11 東芝ライフスタイル株式会社 Household appliance system
JP2020085313A (en) * 2018-11-22 2020-06-04 ダイキン工業株式会社 Air conditioning system
JP2020122585A (en) * 2019-01-29 2020-08-13 ダイキン工業株式会社 Air-conditioning system
CN113348329A (en) * 2019-01-29 2021-09-03 大金工业株式会社 Air conditioning system
EP3919831A4 (en) * 2019-01-29 2022-03-23 Daikin Industries, Ltd. Air conditioning system
WO2020158615A1 (en) * 2019-01-29 2020-08-06 ダイキン工業株式会社 Air conditioning system
CN112331195A (en) * 2019-08-05 2021-02-05 佛山市顺德区美的电热电器制造有限公司 Voice interaction method, device and system
CN112331195B (en) * 2019-08-05 2024-02-20 佛山市顺德区美的电热电器制造有限公司 Voice interaction method, device and system
CN113037600A (en) * 2019-12-09 2021-06-25 夏普株式会社 Notification control device, notification control system, and notification control method

Also Published As

Publication number Publication date
JP2016076799A (en) 2016-05-12

Similar Documents

Publication Publication Date Title
WO2016052018A1 (en) Home appliance management system, home appliance, remote control device, and robot
JP6475386B2 (en) Device control method, device, and program
CN108268235B (en) Dialog-aware active notification for voice interface devices
CN106297781B (en) Control method and controller
US10983753B2 (en) Cognitive and interactive sensor based smart home solution
JP6739907B2 (en) Device specifying method, device specifying device and program
US10958457B1 (en) Device control based on parsed meeting information
CN105323648B (en) Caption concealment method and electronic device
WO2020216107A1 (en) Conference data processing method, apparatus and system, and electronic device
WO2016052164A1 (en) Conversation device
CN105284107A (en) Device, system, and method, and computer-readable medium for providing interactive advertising
WO2017141530A1 (en) Information processing device, information processing method and program
KR20200074680A (en) Terminal device and method for controlling thereof
US10002611B1 (en) Asynchronous audio messaging
KR20230133864A (en) Systems and methods for handling speech audio stream interruptions
US20220122600A1 (en) Information processing device and information processing method
KR20220078866A (en) Method for contolling external device based on voice and electronic device thereof
US11818820B2 (en) Adapting a lighting control interface based on an analysis of conversational input
US11252497B2 (en) Headphones providing fully natural interfaces
US20110216915A1 (en) Providing audible information to a speaker system via a mobile communication device
JP5990311B2 (en) Server, notification method, program, control target device, and notification system
JP2020061046A (en) Voice operation apparatus, voice operation method, computer program, and voice operation system
JP5973030B2 (en) Speech recognition system and speech processing apparatus
KR20190023610A (en) Method and Electronic Apparatus for Suggesting of Break Time during Conference
WO2022215280A1 (en) Speech test method for speaking device, speech test server, speech test system, and program used in terminal communicating with speech test server

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15847769

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15847769

Country of ref document: EP

Kind code of ref document: A1