CN113014461A - Control device - Google Patents

Control device

Info

Publication number
CN113014461A
Authority
CN
China
Prior art keywords
sound
instruction
information
target
air conditioner
Prior art date
Legal status
Pending
Application number
CN202011187107.3A
Other languages
Chinese (zh)
Inventor
金山将也
丸谷裕树
中川达也
泷川正史
Current Assignee
Toshiba Lifestyle Products and Services Corp
Original Assignee
Toshiba Lifestyle Products and Services Corp
Priority date
Filing date
Publication date
Application filed by Toshiba Lifestyle Products and Services Corp filed Critical Toshiba Lifestyle Products and Services Corp
Publication of CN113014461A publication Critical patent/CN113014461A/en
Pending legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 - Data switching networks
    • H04L12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803 - Home automation networks
    • H04L12/2816 - Controlling appliance services of a home automation network by calling their functionalities
    • H04L12/282 - Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/28 - Constructional details of speech recognition systems
    • G10L15/30 - Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 - Data switching networks
    • H04L12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803 - Home automation networks
    • H04L12/2816 - Controlling appliance services of a home automation network by calling their functionalities
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H04L67/125 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 - Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Automation & Control Theory (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Selective Calling Equipment (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Provided is a control device capable of executing an appropriate operation based on an instruction sound both in a situation where a plurality of devices exist as candidates for an instruction target and in a situation where only 1 device exists as such a candidate. The control device may include: a first estimating unit configured to, when a plurality of candidate devices are set at the time a sound receiving unit capable of receiving sound receives an instruction sound for causing a device to execute a specific operation, estimate the target device to be instructed from among the plurality of candidate devices based on a predetermined first condition; a determination unit configured to, when only 1 candidate device is set at the time the instruction sound is received by the sound receiving unit, determine the 1 candidate device as the target device regardless of the first condition; and a first operation relation processing execution unit configured to execute first operation relation processing for causing the target device to execute the specific operation based on the instruction sound.

Description

Control device
Technical Field
The technology disclosed in the present specification relates to a control device for causing a device to execute an operation based on an instruction sound.
Background
In patent document 1, when a user gives a voice instruction, a wearable terminal converts the voice instruction into an electric signal and transmits the converted signal to a network. The network generates a control signal for controlling a home appliance based on the signal received from the wearable terminal, and transmits the generated control signal to the home appliance. The home appliance performs a predetermined operation in accordance with the control signal.
Patent document 1: Japanese Patent Laid-Open Publication No. 2018-120627
In patent document 1, only 1 device exists as a candidate for the instruction target. Patent document 1 does not assume a situation where a plurality of devices exist as candidates for the instruction target.
Disclosure of Invention
The present specification provides a technique that enables an appropriate operation based on an instruction sound to be executed both in a situation where a plurality of devices exist as candidates for an instruction target and in a situation where only 1 device exists as such a candidate.
The control device disclosed in the present specification may be configured to include: a first estimating unit configured to, when a plurality of candidate devices are set at the time a sound receiving unit capable of receiving sound receives an instruction sound for causing a device to execute a specific operation, estimate the target device to be instructed from among the plurality of candidate devices based on a predetermined first condition; a determination unit configured to, when only 1 candidate device is set at the time the instruction sound is received by the sound receiving unit, determine the 1 candidate device as the target device regardless of the first condition; and a first operation relation processing execution unit configured to execute first operation relation processing for causing the target device to execute the specific operation based on the instruction sound.
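As an illustration only, the Python sketch below shows one way the above units could fit together, assuming a first condition that compares each candidate's location with the location of the sound source; the names CandidateDevice, decide_target, and first_operation_relation_processing are hypothetical and are not taken from the disclosure.

```python
# Illustrative sketch only: hypothetical names, assuming a first condition that
# compares each candidate's location with the location of the sound source.
from dataclasses import dataclass
from typing import Callable, Optional, Sequence


@dataclass
class CandidateDevice:
    name: str       # e.g. "ac 1"
    location: str   # e.g. room name "R1"


def decide_target(candidates: Sequence[CandidateDevice],
                  first_condition: Callable[[CandidateDevice], bool]
                  ) -> Optional[CandidateDevice]:
    """Sketch of the determination unit / first estimating unit."""
    if len(candidates) == 1:
        # Only 1 candidate device: determine it as the target device,
        # regardless of the first condition.
        return candidates[0]
    # Plural candidate devices: estimate the target device based on the
    # predetermined first condition.
    for device in candidates:
        if first_condition(device):
            return device
    return None


def first_operation_relation_processing(target: CandidateDevice, operation: str) -> str:
    """Stand-in for causing the target device to execute the specific operation."""
    return f"request {operation} to {target.name}"


if __name__ == "__main__":
    candidates = [CandidateDevice("ac 1", "R1"),
                  CandidateDevice("ac 2", "R2"),
                  CandidateDevice("ac 3", "R3")]
    sound_source_room = "R1"
    target = decide_target(candidates, lambda d: d.location == sound_source_room)
    if target is not None:
        print(first_operation_relation_processing(target, "cooling"))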
Effects of the invention
Even when a plurality of home appliances can be operated by voice and the user's instruction sound does not explicitly specify which appliance is intended, an appropriate home appliance can be selected and operated.
Drawings
Fig. 1 shows a structure of a communication system.
Fig. 2 shows case a1 of the first embodiment, in which an instruction sound for the television is emitted.
Fig. 3 shows case a2 of the first embodiment, in which an instruction sound for the air conditioner is emitted.
Fig. 4 shows a continuation of fig. 3.
Fig. 5 shows a case of the second embodiment in which an instruction sound for the air conditioner is emitted.
Fig. 6 shows cases B and C1 continuing from fig. 5.
Fig. 7 shows case C2 continuing from fig. 5.
Fig. 8 shows a case of the third embodiment in which an instruction sound for the air conditioner is emitted.
Description of the reference numerals
2: a communication system; 4: a home; 5: a LAN; 8: a network; 10: a management server; 12: a network I/F; 30: a control unit; 32: a CPU; 34: a memory; 40: a program; 50: a control server; 52: a network I/F; 60: a control unit; 62: a CPU; 64: a memory; 70: a program; 90: an SP server; 100 to 300: air conditioners; 310: a television; 400: a smart speaker; 400a: a microphone; 400b: a speaker; 500: a router; AI: account information.
Detailed Description
(first embodiment)
(Structure of communication System 2; FIG. 1)
As shown in fig. 1, the communication system 2 includes: the management server 10, the control server 50, the service providing server 90 (hereinafter referred to as "SP server 90"), and the devices 100 to 500 installed in the home 4. The management server 10, the control server 50, and the SP server 90 are connected to the network 8, and are installed outside the home 4.
In the home 4, 3 air conditioners 100, 200, 300, 1 television 310, a smart speaker 400, and a router 500 are installed. The router 500 relays communication between a device (e.g., the air conditioner 100) within the home 4 and a server (e.g., 10) on the network 8. The router 500 is connected to the network 8 and forms a LAN5 within the home 4. The LAN5 is a wireless LAN. In the modification, the LAN5 may be a wired LAN.
The air conditioner 100 is given the device name "ac 1". The device name is, for example, the model of the device. The air conditioners 200, 300 are also given device names "ac 2" and "ac 3". The air conditioner (e.g., 100) is connected to the LAN5, and can communicate with a server (e.g., 10) via the router 500 and the network 8. Air conditioners 100, 200, and 300 are installed in a first room, a second room, and a third room in home 4, respectively. The first room, the second room, and the third room are assigned room names R1, R2, and R3, respectively.
The television 310 is given a device name "tv 1". The television 310 is connected to the LAN5, and can communicate with a server (e.g., 10) via the router 500 and the network 8. The television 310 is installed in a third room R3 in the home 4.
The smart speaker 400 is given an ID "ss 1". Here, the ID is, for example, a manufacturing number given by the manufacturer of the smart speaker 400. The smart speaker 400 is a speaker capable of accepting an operation based on voice recognition. The smart speaker 400 is, for example, Google Home (registered trademark), Amazon Echo (registered trademark), or the like. The smart speaker 400 includes: a microphone 400a capable of receiving sound; and a speaker 400b capable of emitting sound. The smart speaker 400 is connected to the LAN5, and can communicate with a server (e.g., 10) via the router 500 and the network 8. The smart speaker 400 is provided in a first room R1 in the home 4. The router 500 is also installed in the first room R1 in the home 4.
(Structure of management Server 10)
The management server 10 is a server for managing communication between the smart speaker 400 and the control server 50. The management server 10 includes a network interface 12 and a control unit 30. Each of the units 12 and 30 is connected to a bus (reference numeral omitted). The management server 10 is set by, for example, a manufacturer of a device (e.g., the air conditioner 100) in the home 4. Hereinafter, the interface will be referred to as "I/F".
The network I/F12 is connected to the network 8 and is an I/F for performing communication via the network 8. The control unit 30 includes a CPU32 and a memory 34. The CPU32 executes various processes in accordance with the program 40 stored in the memory 34.
(Structure of control Server 50)
The control server 50 is a server for remotely operating a device (e.g., the air conditioner 100) in the home 4. The control server 50 includes a network I/F52 and a control unit 60. Each of the units 52 and 60 is connected to a bus (reference numeral omitted). The control server 50 is set by, for example, a manufacturer of equipment (e.g., the air conditioner 100) in the home 4.
The network I/F52 is the same as the network I/F12 of the management server 10. The control unit 60 includes a CPU62 and a memory 64. The CPU62 executes various processes in accordance with the program 70 stored in the memory 64. The memory 64 also stores account information AI and device information.
The account information AI is information for authenticating the user of the home 4. The device information is information related to devices (e.g., the air conditioner 100) in the home 4. The device information is stored in the memory 64 in correspondence with the account information AI. The device information is set in the control server 50 (i.e., stored in association with the account information AI) by, for example, a user performing an operation for setting the device information. The device information includes a device name, location information, presence information, and past action information. The location information is information indicating a location where the corresponding device is set (for example, a room name R1 indicating a first room).
The presence information includes operation information indicating the current operation state of the corresponding device. The operation information indicates "stop" when the device is in a stopped state (for example, a sleep state) and indicates "operation" when the device is in an operating state (for example, a state in which cooling is being performed). When the corresponding device is an air conditioner, the presence information further includes surrounding information indicating the current state of the surroundings of the device. The surrounding information contains, for example, the temperature around the device (i.e., the room temperature). The surrounding information is not limited to the room temperature, and may include, for example, the humidity around the device, the outdoor temperature, and the like. The presence information is periodically received from each device (for example, the air conditioner 100) in the home 4 and stored in the memory 64.
The past operation information indicates a past operation of the corresponding device. For example, when the device is an air conditioner, the past operation information indicates an operation (for example, cooling) performed immediately before the air conditioner stops and a set value of the operation (for example, a set temperature of cooling). Further, for example, in the case where the device is a television, the past operation information indicates a channel number (for example, "1 ch") designated by the user immediately before the television stops.
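For orientation, the device information described above could be modeled roughly as follows; the field names (device_name, location, presence, past_operation) are hypothetical, and the actual data format used by the control server 50 is not disclosed.

```python
# Illustrative sketch of one piece of device information, with assumed field names.
from dataclasses import dataclass
from typing import Optional


@dataclass
class PresenceInfo:
    operation: str                             # "stop" or "operation"
    room_temperature: Optional[float] = None   # surrounding information (air conditioners only)


@dataclass
class DeviceInfo:
    device_name: str               # e.g. "ac 1"
    location: str                  # e.g. room name "R1"
    presence: PresenceInfo         # current state of the device and its surroundings
    past_operation: Optional[str]  # e.g. "cooling at 26 degrees C" or "1 ch"


# Example: the 4 pieces of device information associated with the account information AI.
DEVICE_TABLE = [
    DeviceInfo("ac 1", "R1", PresenceInfo("stop", 27.0), "cooling at 26 degrees C"),
    DeviceInfo("ac 2", "R2", PresenceInfo("stop", 27.0), "cooling at 26 degrees C"),
    DeviceInfo("ac 3", "R3", PresenceInfo("stop", 30.0), "cooling at 26 degrees C"),
    DeviceInfo("tv 1", "R3", PresenceInfo("stop"), "1 ch"),
]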
(configuration of SP Server 90)
The SP server 90 is a server for providing a service using the smart speaker 400. The SP server 90 is set by, for example, the manufacturer of the smart speaker 400. The SP server 90 stores the account information AI in association with the ID "ss 1" of the smart speaker 400. The ID "ss 1" is set in the SP server 90 (i.e., stored in association with the account information AI), for example, by an operation for starting the use of the smart speaker 400 by the user.
(case of TV A1; FIG. 2)
Referring to fig. 2, case a1, in which an instruction sound for the television is emitted, will be described. Communication between the apparatuses is performed via the network 8, or via the network 8 and the LAN 5. In the following description, "via the network 8" and "via the LAN 5" may be omitted for ease of understanding. Likewise, for ease of understanding, processing executed by the CPU of a device (for example, the CPU32 of the management server 10) in accordance with a program (for example, the program 40) may be described with the device, rather than its CPU, as the subject.
First, the user located in the first room R1 utters an instruction sound "turn on the television". At T10, the smart speaker 400 acquires sound data representing the instruction sound from the microphone 400a that received the instruction sound. Thus, the smart speaker 400 detects the instruction sound "turn on the television".
At T12, the smart speaker 400 performs voice recognition for the detected indication voice "turn on tv". For example, a program for performing voice recognition is installed in the smart speaker 400, and the smart speaker 400 performs voice recognition on voice data indicated by the detected instruction voice using the program. In this way, the smart speaker 400 acquires result information indicating the result of voice recognition on the detected instruction voice. In this case, the result information contains "television" indicating the kind of the device of the instruction target, and "start" indicating the content of the instruction. In another example, the smart speaker 400 may transmit the voice data indicating the instruction voice to a server (for example, the SP server 90) on the network, and the server may perform voice recognition on the voice data and transmit the result information to the smart speaker 400.
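As an illustration of what the result information could look like, the naive keyword matcher below maps recognized text to a device type and an instruction content; the vocabulary and return format are assumptions, not the recognition method actually used by the smart speaker 400 or the SP server 90.

```python
# Naive, illustrative parser: maps recognized text to result information
# (device type + instruction content). Vocabulary and return format are assumptions.
from typing import Optional, Tuple

DEVICE_TYPES = {"television": "television", "tv": "television", "air conditioner": "air conditioner"}
INSTRUCTIONS = {"turn on": "start", "turn off": "stop"}


def parse_instruction(text: str) -> Optional[Tuple[str, str]]:
    """Return (device type, instruction content) or None if nothing matched."""
    lowered = text.lower()
    device = next((v for k, v in DEVICE_TYPES.items() if k in lowered), None)
    action = next((v for k, v in INSTRUCTIONS.items() if k in lowered), None)
    if device is None or action is None:
        return None
    return device, action


print(parse_instruction("turn on the television"))       # ('television', 'start')
print(parse_instruction("turn on the air conditioner"))  # ('air conditioner', 'start')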
At T14, the smart speaker 400 transmits identification information including the ID "ss 1" of the smart speaker 400, the position information indicating the room name "R1" of the room in which the smart speaker 400 is installed, and the result information acquired at T12 to the management server 10. In addition, position information indicating the room name "R1" is input to the smart speaker 400 by the user.
Upon receiving the identification information from the smart speaker 400 at T14, the management server 10 transmits the ID "ss 1" included in the received identification information to the SP server 90 at T20. Thus, the management server 10 receives the account information AI stored in association with the ID "ss 1" from the SP server 90 at T22.
At T24, the management server 10 transmits a device information request for requesting device information to the control server 50. The device information request contains account information AI received at T22.
Upon receiving the device information request from the management server 10 at T24, the control server 50 acquires device information associated with the account information AI in the device information request from the memory 64 at T30. In this case, 4 pieces of device information are stored in association with the account information AI. The 4 pieces of equipment information include 3 pieces of equipment information corresponding to the 3 air conditioners 100 to 300 and 1 piece of equipment information corresponding to the 1 television 310.
At T32, the control server 50 transmits the 4 pieces of device information acquired at T30 to the management server 10.
When the management server 10 receives the 4 pieces of device information from the control server 50 at T32, it performs determination at T40 using the result information in the identification information received at T14. Specifically, the management server 10 determines whether or not 2 or more device names corresponding to the same category as the category included in the result information exist in the 4 pieces of device information. In this case, the management server 10 determines that only 1 device name "tv 1" corresponding to "tv" of the same kind as the kind contained in the result information exists in the 4 pieces of device information. That is, the management server 10 determines that only 1 television 310 of the same type as the type "television" included in the result information is set. In this case, the management server 10 determines the television 310 having the device name "tv 1" as the instruction target without performing estimation using the position information in the identification information received at T14, and proceeds to T44. In case a2 of fig. 3, which will be described later, it is determined that 2 or more device names corresponding to the same type as the type included in the result information exist in the 4 pieces of device information. In this case, the management server 10 performs estimation using the position information in the identification information received at T14, as will be described in detail later.
At T44, the management server 10 determines the past action information "1 ch" in the device information including the device name "tv 1" of the television 310 to be instructed as the object action information indicating the action instructed to the instructed object. Thus, even when the instruction sound does not include a specific instruction content (for example, a channel number), the management server 10 can estimate the instruction content of this time by using the operation instructed by the user in the past.
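A minimal sketch of the determination of T40 and T44, assuming a hypothetical data layout: if exactly 1 device of the requested type exists, it is determined as the target without any estimation, and its past operation information becomes the target operation information.

```python
# Illustrative sketch of the determination at T40/T44, using plain dictionaries;
# the type_of() heuristic and data layout are assumptions, not the disclosed format.
from typing import List, Optional


def type_of(device_name: str) -> str:
    """Hypothetical mapping from a device name to its device type."""
    return "air conditioner" if device_name.startswith("ac") else "television"


def determine_single_target(device_infos: List[dict], wanted_type: str) -> Optional[dict]:
    """If exactly 1 device of the wanted type exists, determine it as the target
    (no estimation with position information is performed)."""
    same_type = [d for d in device_infos if type_of(d["device_name"]) == wanted_type]
    return same_type[0] if len(same_type) == 1 else None


device_infos = [
    {"device_name": "ac 1", "past_operation": "cooling at 26 C"},
    {"device_name": "ac 2", "past_operation": "cooling at 26 C"},
    {"device_name": "ac 3", "past_operation": "cooling at 26 C"},
    {"device_name": "tv 1", "past_operation": "1 ch"},
]

target = determine_single_target(device_infos, "television")
if target is not None:
    # T44: the past operation information becomes the target operation information.
    target_operation = target["past_operation"]
    print(target["device_name"], target_operation)   # tv 1 1 ch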
At T60, the management server 10 transmits the action command to the control server 50. The action command is a command instructing to transmit a television start request to the television 310 indicated by the device name "tv 1". The television start request is a command requesting the television 310 to perform the operation indicated by the target operation information "1 ch" (that is, the operation of outputting the program being broadcast with the channel number "1 ch"). The action command includes target action information "1 ch".
When receiving the operation command from the management server 10 at T60, the control server 50 transmits a television start request to the television 310 at T62. The television start request contains the channel number "1 ch".
When the television 310 receives the television start request from the control server 50 at T62, it starts outputting the program being broadcast with the channel number "1 ch" in the television start request at T64. Thereby, the user located in the third room R3 can view the program being played in the channel number "1 ch".
At T66, the television 310 transmits, to the control server 50, a start notification indicating that it has started the operation requested by the television start request. The start notification is thereby transmitted to the management server 10 via the control server 50.
Upon receiving the start notification from the control server 50 at T68, the management server 10 transmits, at T70, sound data representing a notification sound "1 ch is being output on the television tv 1", which notifies the user of the result of the instruction sound, to the smart speaker 400.
When the smart speaker 400 receives the sound data representing the notification sound "1 ch is being output on the television tv 1" from the management server 10 at T70, it supplies the sound data to the speaker 400b at T72. Accordingly, the speaker 400b emits the notification sound "1 ch is being output on the television tv 1" in accordance with the sound data. The user who has heard the notification sound can recognize that the program of channel number "1 ch" is being output from the television 310 in accordance with the instruction sound of T10.
(case of air conditioner A2; FIG. 3, FIG. 4)
Referring to fig. 3 and 4, case a2, in which an instruction sound for the air conditioner is emitted, will be described.
T110 is the same as T10 of fig. 2 except that the instruction sound "turn on the air conditioner" is detected. T112 and T114 are the same as T12 and T14 except that the result information includes "air conditioner" indicating the type of the target device. T120 to T132 are the same as T20 to T32.
At T140, the management server 10 determines that 3 device names "ac 1" to "ac 3" corresponding to the same type as the type "air conditioner" included in the result information exist in the 4 pieces of device information. That is, the management server 10 determines that 3 air conditioners 100 to 300 of the same type as the type "air conditioner" included in the result information are set.
At T142, the management server 10 selects, from the 3 device names "ac 1" to "ac 3" determined at T140, the device name "ac 1" associated with the same position information "R1" as the position information "R1" included in the identification information of T114. As described above, the position information "R1" included in the identification information indicates the room in which the smart speaker 400 is installed (i.e., the room in which the user who is the sound source of the instruction sound is located). That is, the management server 10 estimates, as the instruction target, the air conditioner 100 having the device name "ac 1", which is installed in the same room as the user who is the sound source of the instruction sound, from among the 3 air conditioners 100 to 300 in the home 4. T144 is the same as T44 in fig. 2, except that the past operation information "cooling at 26 ℃" in the device information including the device name "ac 1" is used.
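A minimal sketch of the estimation of T142, under the assumption that each candidate carries a location field: the candidate installed in the same room as the smart speaker 400 is estimated as the instruction target.

```python
# Illustrative sketch of the estimation at T142: among same-type candidates, pick the
# one whose location matches the position information sent with the identification
# information. The data layout is an assumption.
from typing import List, Optional


def estimate_by_position(candidates: List[dict], speaker_room: str) -> Optional[dict]:
    """Return the candidate installed in the same room as the smart speaker."""
    for device in candidates:
        if device["location"] == speaker_room:
            return device
    return None


air_conditioners = [
    {"device_name": "ac 1", "location": "R1"},
    {"device_name": "ac 2", "location": "R2"},
    {"device_name": "ac 3", "location": "R3"},
]
print(estimate_by_position(air_conditioners, "R1"))  # {'device_name': 'ac 1', 'location': 'R1'}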
Next, at T160, the management server 10 transmits an operation command to the control server 50. The action command is a command instructing to transmit a cooling start request to air conditioner 100 indicated by the device name "ac 1". The cooling start request is a command requesting the air conditioner 100 to perform an operation indicated by the target operation information "cooling 26 ℃" (i.e., an operation to start cooling at a set temperature of 26 ℃). The action command contains object action information "cool 26 ℃".
Upon receiving the operation command from the management server 10 at T160, the control server 50 transmits a cooling start request to the air conditioner 100 at T162. The cooling start request contains the set temperature "26 ℃".
When the air conditioner 100 receives the cooling start request from the control server 50 at T162, it starts cooling at the set temperature "26 ℃" in the cooling start request at T164. T166 and T168 are the same as T66 and T68 in fig. 2. T170 and T172 are the same as T70 and T72 except that the notification sound is "cooling of air conditioner ac1 has started". The user who hears this notification sound can recognize that cooling has been started by the air conditioner 100, based on the instruction sound of T110.
When the user intended to start cooling of the air conditioner 100 in the first room R1 and issued the instruction sound "turn on the air conditioner", the present case ends with the processing of T172. However, the instruction sound "turn on the air conditioner" does not include a word (for example, a device name) indicating which of the 3 air conditioners 100 to 300 in the home 4 is meant. The user may therefore have intended to start cooling of an air conditioner different from the air conditioner 100 in the first room R1 when issuing the instruction sound "turn on the air conditioner". In that case, the user utters a misinterpretation sound "not right", indicating that the notification sound "cooling of air conditioner ac1 has started" of T172 is different from the user's intention.
At T174, the smart speaker 400 detects the misinterpretation sound "not right". At T176, the smart speaker 400 performs voice recognition on the detected misinterpretation sound. In this case, the smart speaker 400 acquires result information including "misinterpretation", indicating that an air conditioner different from the one intended by the user was estimated.
At T178, the smart speaker 400 transmits identification information including the ID "ss 1" of the smart speaker 400 and the result information acquired at T176 to the management server 10.
At T180, the management server 10 transmits a stop command to the control server 50. The stop command is a command instructing to send a stop request to the air conditioner 100. The stop request is a command requesting the air conditioner 100 to stop cooling.
Upon receiving the stop command from management server 10 at T180, control server 50 transmits a stop request to air conditioner 100 at T182.
When the air conditioner 100 receives the stop request from the control server 50 at T182, the cooling is stopped at T184. Next, at T186, air conditioner 100 transmits a stop notification indicating that cooling has been stopped to control server 50. Thereby, the stop notification is transmitted to the management server 10 via the control server 50.
When the management server 10 receives the stop notification from the control server 50 at T188, it proceeds to T190.
At T190, the management server 10 identifies the 2 device names "ac 2" and "ac 3" other than the device name "ac 1" selected at T142 of fig. 3 from among the 3 device names "ac 1" to "ac 3". Further, the management server 10 estimates, as the next instruction target, the air conditioner whose device name corresponds to presence information including the highest room temperature and operation information indicating "stop", from among the identified 2 device names "ac 2" and "ac 3". In this case, the presence information corresponding to the device name "ac 2" includes the room temperature "27 ℃" and "stop", and the presence information corresponding to the device name "ac 3" includes the room temperature "30 ℃" and "stop". Therefore, the management server 10 estimates the air conditioner 300 having the device name "ac 3", which corresponds to the presence information including the highest room temperature "30 ℃" and "stop", as the next instruction target.
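A minimal sketch of the re-estimation of T190, assuming hypothetical field names: the previously estimated device is excluded, and the stopped candidate with the highest room temperature is estimated next (for heating, the lowest room temperature could be preferred instead, as noted in the effects section below).

```python
# Illustrative sketch of the re-estimation at T190: exclude the previously estimated
# device, then pick the stopped candidate with the highest room temperature (for
# cooling) or the lowest (for heating). Field names are assumptions.
from typing import List, Optional


def estimate_next_target(candidates: List[dict], excluded_name: str,
                         prefer_warmest: bool = True) -> Optional[dict]:
    remaining = [d for d in candidates
                 if d["device_name"] != excluded_name and d["operation"] == "stop"]
    if not remaining:
        return None
    if prefer_warmest:
        return max(remaining, key=lambda d: d["room_temperature"])
    return min(remaining, key=lambda d: d["room_temperature"])


air_conditioners = [
    {"device_name": "ac 1", "operation": "stop", "room_temperature": 27.0},
    {"device_name": "ac 2", "operation": "stop", "room_temperature": 27.0},
    {"device_name": "ac 3", "operation": "stop", "room_temperature": 30.0},
]
# Cooling was requested, so the warmest stopped room is estimated next: ac 3.
print(estimate_next_target(air_conditioners, excluded_name="ac 1")["device_name"])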
T192 is the same as T144 in fig. 3, except that past operation information "cooling at 26 ℃" in the device information including the device name "ac 3" is used. T200 and T202 are the same as T160 and T162 except that the indication target is the air conditioner 300. T204 and T206 are the same as T164 and T166 except that the operating body is the air conditioner 300. T208 is the same as T168. T210 and T212 are the same as T170 and T172 except that the notification sound is "cooling of air conditioner ac3 has started".
(Effect of the embodiment)
As described above, the control server 50 sets 3 air conditioners 100 to 300 as candidate devices that are candidates for remote operation, regarding the type of device "air conditioner". For example, assume a comparative example in which, when the user intends to perform a remote operation of the air conditioner 100, an instruction sound (for example, "turn on the air conditioner ac 1") including a sentence (for example, the device name "ac 1") for specifying the air conditioner 100 is emitted. In this comparative example, although it is explicitly indicated that the object is the air conditioner 100, the user needs to know the sentence for specifying the air conditioner 100. In addition, the instruction sound emitted by the user is relatively long. Therefore, there is a possibility that the user feels troublesome in the case of emitting the instruction sound. In contrast, in case a2 of fig. 3 and 4, when the smart speaker 400 receives the instruction sound "turn on the air conditioner" (T110), the management server 10 estimates, as the instruction target, the air conditioner 100 installed in the same room as the room in which the smart speaker 400 is installed (that is, the room in which the user who is the sound source of the instruction sound is located) from among the 3 air conditioners 100 to 300 of the same type. Therefore, the user does not need to include a sentence for specifying the air conditioner 100 in the instruction sound. It is possible to suppress the user from feeling troublesome when the user emits the instruction sound. The convenience of the user is improved.
As described above, the control server 50 sets only 1 television 310 as a candidate device for the device type "television". In this case, it is not necessary to estimate the device to be instructed. In case a1 of fig. 2, when the smart speaker 400 receives the instruction sound "turn on the television" (T10), the management server 10 determines that only 1 television 310 of the same type as the type "television" included in the instruction sound is set (T40). Further, the management server 10 does not execute the process of estimating the instruction target (for example, T142 in fig. 4), and transmits an operation command for the television 310 to be instructed to the control server 50. In this case, since the process of estimating the instruction target is not executed, it is possible to suppress the execution of unnecessary processes.
As shown in case a2 of fig. 3 and 4, when a plurality of candidate devices (for example, 3 air conditioners) are set, the control server 50 estimates the device to be instructed and causes the estimated device to execute the operation based on the instruction sound. On the other hand, as shown in case a1 of fig. 2, when only 1 candidate device (for example, 1 television) is set, the control server 50 determines the 1 candidate device as the device to be instructed, without estimating the device to be instructed, and causes the determined device to execute the operation based on the instruction sound. That is, according to the configuration of the present embodiment, an appropriate device can be caused to execute the operation based on the instruction sound both in a situation where a plurality of devices exist as candidates for the instruction target and in a situation where only 1 device exists as such a candidate.
Further, according to the above configuration, when the smart speaker 400 receives the misinterpretation sound "not right", which indicates that the estimate is different from the user's intention (T174 in fig. 4), the management server 10 transmits a stop command for the erroneously estimated air conditioner 100 to the control server 50 (T180). Even when the management server 10 erroneously estimates the instruction target, the user can stop the operation of the erroneously estimated instruction target merely by uttering the misinterpretation sound. The user need only utter the misinterpretation sound and, for example, does not need to utter a relatively long sound such as the instruction sound "stop the air conditioner ac 1". The convenience of the user is improved.
Further, according to the above configuration, when the smart speaker 400 receives the misinterpretation sound "not right", which indicates that the estimate is different from the user's intention (T174 in fig. 4), the management server 10 estimates the air conditioner 300 as the next instruction target from the 2 air conditioners 200 and 300 other than the previously estimated air conditioner 100 (T190). Further, the management server 10 transmits an operation command for the estimated air conditioner 300 to the control server 50 (T200). Even when the management server 10 erroneously estimates the instruction target, the user can cause the air conditioner 300, which is highly likely to be the one intended by the user, to execute the operation based on the instruction sound merely by uttering the misinterpretation sound. The user need only utter the misinterpretation sound and, for example, does not need to utter a relatively long sound such as the instruction sound "turn on the air conditioner ac 3". The convenience of the user is improved.
Further, according to the above configuration, when the estimation using the position information turns out to be wrong, the management server 10 estimates the next instruction target using the presence information, which is different from the position information (T190 in fig. 4). By executing the next estimation under a condition different from that of the previous estimation, an erroneous estimation similar to the previous one can be suppressed.
For example, assume a comparative example in which, when the estimation of the air conditioner 100 installed in the room where the user who is the sound source of the instruction sound is located turns out to be wrong, the next instruction target is estimated at random from the remaining candidates. In this comparative example, the estimation of the next instruction target may well be wrong again. In contrast, when the estimation of the air conditioner 100 installed in the room where the user who is the sound source of the instruction sound is located is wrong, the management server 10 estimates, as the next instruction target, the air conditioner 300 having the device name "ac 3" associated with the presence information including the highest room temperature "30 ℃" (T190). When the instruction content is estimated to be the start of cooling, the user is highly likely to intend to start cooling of the air conditioner in the room with the highest room temperature. By using the room temperature of the room in which each air conditioner is installed, the accuracy of the estimation of the next instruction target can be improved compared with the above comparative example. In addition, when the instruction content is estimated to be the start of heating, the user is highly likely to intend to start heating with the air conditioner in the room with the lowest room temperature. In this case, the management server 10 may estimate, as the next instruction target, the air conditioner whose device name is associated with the presence information including the lowest room temperature. In a modification, the above comparative example may also be adopted.
(corresponding relationship)
The management server 10 is an example of the "control device". The microphone 400a and the speaker 400b of the smart speaker 400 are examples of the "sound receiving unit" and the "sound output unit", respectively. The 3 air conditioners 100 to 300 and the 1 television 310 are examples of the "plurality of candidate devices" and the "1 candidate device", respectively. The start of output by the television in case a1 of fig. 2 and the start of cooling by the air conditioner in case a2 of fig. 3 are examples of the "specific operation". Here, the "specific operation" refers to the content of the operation that the user instructs the device to perform by sound. "Turn on the television" in case a1 and "turn on the air conditioner" in case a2 are examples of the "instruction sound". T60 of case a1 and T160 of case a2 are examples of the "first operation relation processing". The television start request of case a1 and the cooling start request of case a2 are examples of the "execution request". The sound of T72 in case a1 and the sound of T172 in case a2 are examples of the "completion sound". The sound "not right" is an example of the "misinterpretation sound". The stop request of T182 in fig. 4 is an example of the "stop request". The condition using the position information included in the identification information at T142 in fig. 3 is an example of the "first condition". The air conditioner 100 and the air conditioner 300 are examples of the "first device (specific device)" and the "second device", respectively. The 2 air conditioners 200 and 300 are examples of the "1 or more candidate devices". T200 in fig. 4 is an example of the "second operation relation processing". The position information "R1" in the identification information is an example of the "position information". The surrounding information and the operation information in the presence information are examples of the "information on the state of the surroundings of the device" and the "information on the state of the device itself", respectively.
(second embodiment)
In the first embodiment, the operation based on the instruction sound is executed immediately. In contrast, the second embodiment is configured so that confirmation with the user can be performed before the operation based on the instruction sound is executed. The configuration of the communication system 2 of the present embodiment is the same as that of the communication system 2 of the first embodiment.
(case of air conditioner; FIG. 5, FIG. 6, FIG. 7)
With reference to fig. 5 to 7, a case where an instruction sound for the air conditioner is emitted in the present embodiment will be described. In this case, the past operation information corresponding to the device name "ac 1" indicates either "air clean" or "cooling at 26 ℃". Here, the past operation information "air clean" indicates that the operation performed by the air conditioner immediately before it stopped was air purification.
T310 to T342 are the same as T110 to T142 of fig. 3. At T344, the management server 10 determines the past operation information "air clean" as the target operation information when the past operation information in the device information including the device name "ac 1" indicates "air clean", and determines the past operation information "cooling at 26 ℃" as the target operation information when that past operation information indicates "cooling at 26 ℃".
At T346, the management server 10 determines whether the target operation information determined at T344 indicates a predetermined operation. Here, the predetermined operation is, for example, the operation of starting air purification. The management server 10 proceeds to case B of fig. 6 when the target operation information indicates the predetermined operation (YES at T346), and proceeds to case C1 of fig. 6 or case C2 of fig. 7 when the target operation information indicates an operation other than the predetermined operation (for example, the operation of starting cooling) (NO at T346).
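A minimal sketch of the branch at T346, assuming for illustration that "air clean" is the only predetermined operation: predetermined operations skip the confirmation sound, while other operations trigger it.

```python
# Illustrative sketch of the branch at T346: operations with little economic impact or
# little effect on the human body (here only "air clean") skip the confirmation sound.
# The set of predetermined operations is an assumption for illustration.
PREDETERMINED_OPERATIONS = {"air clean"}


def needs_confirmation(target_operation: str) -> bool:
    """Return True when a confirmation sound should be emitted before acting."""
    return target_operation not in PREDETERMINED_OPERATIONS


for operation in ("air clean", "cooling at 26 C"):
    if needs_confirmation(operation):
        print(f'emit confirmation sound: "start {operation} of the air conditioner ac 1?"')
    else:
        print(f"transmit operation command for {operation} without confirmation")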
(case of air purification B: FIG. 6)
With respect to case B, T360 is the same as T160 of fig. 4 except that the action command is a command instructing to transmit an air-clean start request to the air conditioner 100. The air-clean start request is a command requesting the air conditioner 100 to execute the operation indicated by the target operation information "air clean" (i.e., the operation of starting air purification). The action command contains the target operation information "air clean".
T362 is the same as T162 of fig. 4 except that the air-clean start request is transmitted to the air conditioner 100. At T364, the air conditioner 100 starts air purification. T366 and T368 are the same as T166 and T168. T370 and T372 are the same as T170 and T172 except that the notification sound is "air purification of the air conditioner ac 1 has started".
(case of refrigeration C1; FIG. 6)
In case C1, at T450, the management server 10 does not transmit an operation command to the control server 50, but transmits sound data representing a confirmation sound "start cooling of the air conditioner ac 1?" to the smart speaker 400. The confirmation sound is a sound for confirming with the user whether the air conditioner estimated as the instruction target matches the user's intention. The confirmation sound contains the device name "ac 1" of the air conditioner 100 estimated as the instruction target at T342 of fig. 5.
When the smart speaker 400 receives the sound data representing the confirmation sound from the management server 10 at T450, it emits the confirmation sound "start cooling of the air conditioner ac 1?" at T452. The user can know that the air conditioner 100 has been estimated as the instruction target by hearing the device name "ac 1" included in the confirmation sound. In this case, the user intends to start cooling of the air conditioner 100 installed in the first room R1 where the user is located. The user therefore utters a positive interpretation sound "ok", indicating that the confirmation sound of T452 matches the user's intention.
At T454, the smart speaker 400 detects the positive interpretation sound "ok". At T456, the smart speaker 400 performs voice recognition on the detected positive interpretation sound. In this case, the smart speaker 400 acquires result information including "positive interpretation", indicating that the air conditioner intended by the user has been correctly estimated.
At T458, the smart speaker 400 transmits identification information including the ID "ss 1" of the smart speaker 400 and the result information acquired at T456 to the management server 10.
When the management server 10 receives the identification information from the smart speaker 400 at T458, the process proceeds to T460. T460 is the same as T160 of fig. 4. T462 and T464 are the same as T162 and T164. That is, the air conditioner 100 estimated as the instruction target starts cooling at the set temperature "26 ℃".
(case of refrigeration C2; FIG. 7)
With respect to case C2, T550 and T552 are the same as T450 and T452 of fig. 6. In this case, the user intends to start cooling of an air conditioner different from the air conditioner 100 installed in the first room R1 where the user is located. The user therefore utters a misinterpretation sound "not right", indicating that the confirmation sound of T552 is different from the user's intention.
T554 and T558 are the same as T176 and T178 of fig. 4. When the management server 10 receives the identification information including the result information "misinterpretation" from the smart speaker 400 at T558, it estimates the next instruction target at T560 and T562. T560 and T562 are the same as T190 and T192 of fig. 4. That is, the management server 10 estimates the air conditioner 300 having the device name "ac 3" as the next instruction target.
T570 and T572 are the same as T550 and T552 except that the confirmation sound is "start cooling of the air conditioner ac 3?". In this case, the user intends to start cooling of the air conditioner 300 installed in the third room R3, where the user is not located. The user therefore utters a positive interpretation sound "ok", indicating that the confirmation sound of T572 matches the user's intention.
T576 and T578 are the same as T456 and T458 of fig. 6. T590 and T592 are the same as T460 and T462 of fig. 6 except that the instruction target is the air conditioner 300. T594 is the same as T464 except that the operating subject is the air conditioner 300.
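A minimal sketch of how the responses of cases C1 and C2 could be handled, with hypothetical names: a positive interpretation sound leads to the operation command, while a misinterpretation sound leads to re-estimation and a further confirmation sound.

```python
# Illustrative sketch of handling the responses in cases C1 and C2: a positive
# interpretation sends the operation command, a misinterpretation triggers
# re-estimation and another confirmation sound. All names are hypothetical.
from typing import Iterator, List, Optional


def confirm_and_execute(candidates: List[str], answers: Iterator[str]) -> Optional[str]:
    """Walk through estimated candidates until the user answers "ok"."""
    for device_name in candidates:
        print(f'confirmation sound: "start cooling of the {device_name}?"')
        if next(answers, "not right") == "ok":
            return f"operation command for {device_name}"   # e.g. T460 / T590
    return None  # every estimate was rejected


# Case C2: the first estimate (ac 1) is rejected, the second (ac 3) is accepted.
answers = iter(["not right", "ok"])
print(confirm_and_execute(["air conditioner ac 1", "air conditioner ac 3"], answers))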
Although not shown, it is assumed that only 1 air conditioner 100 is set as a candidate device. In this case, the management server 10 determines the 1 air conditioner 100 as the instruction target without estimating the device to be the instruction target (that is, without executing the process of T342 in fig. 5). Further, the management server 10 transmits the voice data indicating the confirmation voice similar to T450 of fig. 6 to the smart speaker 400.
(Effect of the embodiment)
As shown in case C1 of fig. 6, when a plurality of candidate devices (for example, 3 air conditioners) are set, the control server 50 estimates a device to be instructed and causes the smart speaker 400 to emit a confirmation sound for the user to confirm whether or not the air conditioner estimated to be instructed is the same as the user's intention. On the other hand, when only 1 candidate device (for example, 1 air conditioner) is set, the control server 50 determines the 1 candidate device as the device to be instructed, without estimating the device to be instructed, and causes the smart speaker 400 to emit a confirmation sound. That is, in the configuration of the present embodiment, the smart speaker 400 can be caused to emit an appropriate confirmation sound in any one of the situation where a plurality of devices exist as candidates for instruction and the situation where only 1 device exists as a candidate for instruction. Further, in the present embodiment, the estimation of the device to be instructed is also performed, and therefore the user does not need to make an instruction sound including a sentence for specifying the device intended by the user. The convenience of the user is improved. Further, in this embodiment, when only 1 candidate device is set, the process of estimating the instruction target is not executed, and therefore, it is possible to suppress the execution of unnecessary processes.
Further, according to the above configuration, when the smart speaker 400 receives the positive interpretation sound "ok" for the confirmation sound (T454 in fig. 6), the management server 10 transmits an operation command for the air conditioner 100 estimated as the instruction target to the control server 50 (T460). On the other hand, when the smart speaker 400 receives the misinterpretation sound "not right" for the confirmation sound (T554 in fig. 7), the management server 10 does not transmit an operation command for the air conditioner 100 estimated as the instruction target to the control server 50. That is, in the second embodiment, the user's intention can be confirmed by the confirmation sound before the operation command is transmitted to the control server 50. Compared with the first embodiment, in which the operation command is transmitted to the control server 50 without emitting a confirmation sound, the execution of processing for stopping the operation of an erroneously estimated instruction target (for example, T180 in fig. 4) can be suppressed. On the other hand, no confirmation sound is emitted in the first embodiment. The first embodiment therefore has the advantage that the estimated device can be promptly caused to execute the operation based on the instruction sound.
In case B, when the target operation information indicates "air clean", which is the predetermined operation (YES at T346 in fig. 5), the management server 10 transmits the operation command to the control server 50 without causing the smart speaker 400 to emit a confirmation sound (T360 in fig. 6). The "air clean" of case B consumes less power than the "cooling" of case C1, for which a confirmation sound is emitted. That is, "air clean" has a smaller economic impact than "cooling". Further, "cooling" is more likely to cause discomfort to the user when executed by mistake than "air clean". That is, "air clean" has little effect on the user (i.e., the human body). For an operation with little economic impact or little effect on the human body, being asked to confirm the operation by a confirmation sound is bothersome for the user. By determining at T346 whether the target operation information indicates the predetermined operation, the emission of a confirmation sound can be omitted in a situation where an operation with little economic impact or little effect on the human body is to be executed. The user can thus be kept from feeling bothered. The convenience of the user is improved.
(corresponding relationship)
The predetermined operation at T346 in fig. 5 is an example of the "predetermined operation". T450 in fig. 6 or T550 in fig. 7 is an example of the "first operation relation processing". T570 in fig. 7 is an example of the "second operation relation processing". The confirmation sound of T452 and the positive interpretation sound of T454 in fig. 6 are examples of the "confirmation sound" and the "positive interpretation sound", respectively.
(third embodiment)
The third embodiment is the same as the first embodiment except that conditions for estimating the device as the instruction target are different.
(case of air conditioner; FIG. 8)
A case where an instruction sound for the air conditioner is emitted in the present embodiment will be described with reference to fig. 8.
T610 and T612 are the same as T110 and T112 of fig. 3. T614 is the same as T114 except that the identification information does not include the position information. T620 to T640 are the same as T120 to T140. At T642, the management server 10 estimates, as the instruction target, the air conditioner whose device name corresponds to presence information including the highest room temperature and operation information indicating "stop", from among the 3 device names "ac 1" to "ac 3" determined at T640. In this case, the presence information corresponding to the device name "ac 1" includes the highest room temperature "31 ℃" and "stop". Therefore, the management server 10 estimates, as the instruction target, the air conditioner 100 having the device name "ac 1" corresponding to the presence information including the highest room temperature "31 ℃" and "stop".
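As with the sketch given for T190 above, the estimation of T642 can be pictured as choosing, among all candidates of the requested type, the stopped one with the highest room temperature; a minimal sketch with assumed field names:

```python
# Minimal illustrative sketch of T642: pick the stopped air conditioner with the
# highest room temperature among all candidates (no position information is used).
air_conditioners = [
    {"device_name": "ac 1", "operation": "stop", "room_temperature": 31.0},
    {"device_name": "ac 2", "operation": "stop", "room_temperature": 27.0},
    {"device_name": "ac 3", "operation": "stop", "room_temperature": 30.0},
]
stopped = [d for d in air_conditioners if d["operation"] == "stop"]
target = max(stopped, key=lambda d: d["room_temperature"])
print(target["device_name"])  # ac 1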
Next, the same processing as T160 to T172 in fig. 4 is executed. Thereby, cooling is started in the air conditioner 100, and then, a notification sound "cooling of the air conditioner ac1 has been started" is emitted in the smart speaker 400.
Although not shown, the processing in a situation where only 1 candidate device (for example, 1 television) is set is the same as that of case a1 (see fig. 2) of the first embodiment.
In this embodiment too, as in the first embodiment, an appropriate device can be caused to execute the operation based on the instruction sound both in a situation where a plurality of candidate devices (for example, 3 air conditioners) exist and in a situation where only 1 device exists as a candidate for the instruction target. The condition using the presence information at T642 in fig. 8 is an example of the "first condition".
Modifications of the above embodiments are described below.
In each of the above embodiments, the communication system 2 includes the management server 10 and the control server 50. Instead, the communication system 2 may include only 1 server. In this case, the 1 server can execute both the processing executed by the management server 10 and the processing executed by the control server 50. In the present modification, the 1 server is an example of the "control device".
In the above embodiments, the communication system 2 includes the smart speaker 400. Instead, the communication system 2 may be provided with a terminal device (e.g., desktop PC, smartphone) having a microphone and a speaker. In the present modification, the microphone of the terminal device and the speaker of the terminal device are examples of the "sound receiving unit" and the "sound output unit", respectively.
In each of the above embodiments, the communication system 2 includes the management server 10 and the control server 50. Instead, the communication system 2 may include the smart speaker 400 and a plurality of devices in the home 4 without the management server 10 and the control server 50. In this case, the smart speaker 400 may store a plurality of pieces of device information corresponding to a plurality of devices, and receive and store the presence information in the device information from the devices via the LAN5 in the home 4. Further, when receiving the instruction sound, the smart speaker 400 may estimate the device to be instructed from a plurality of devices corresponding to a plurality of device names in the plurality of pieces of device information based on a predetermined condition. In the present modification, the smart speaker 400 is an example of a "control device". In another example, the estimation of the device to be instructed may be performed by a terminal device such as a smartphone. In the present modification, the terminal device is an example of the "control device".
The device information may not include past operation information. In this case, the instruction sound may contain a phrase indicating the specific instruction content (for example, "cooling at 26 ℃"). In general, the "instruction sound" may or may not include a phrase indicating the "specific operation".
In the first embodiment, the processing of T70 and T72 of fig. 2 and of T170 and T172 of fig. 4 may be omitted. In the present modification, the "completion sound" and the "output control unit" can be omitted.
In the first embodiment, the processing from T180 to T188 in fig. 4 may not be performed. In the present modification, the "stop processing execution unit" can be omitted.
In the first embodiment, the processing after T190 of fig. 4 may not be performed. In the second embodiment, the processing after T560 in fig. 7 may not be performed. In these modifications, the "second estimating unit" and the "second motion relation processing executing unit" may be omitted.
In the first embodiment, the management server 10 performs the initial estimation using the position information (T142 of fig. 3) and performs the re-estimation using the presence information (T190 of fig. 4). Instead, the management server 10 may perform both estimations using the same information. In general, the "second device" may be estimated based on the same condition as the "first condition".
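The relationship between the initial estimation and the re-estimation can be pictured with pluggable condition functions, as in the sketch below. The tuple-based device records and the function names `by_position`, `by_presence`, and `re_estimate` are assumptions for this sketch and are not taken from the embodiments.

```python
from typing import Callable, List, Optional, Tuple

# device record: (name, room, presence flag) -- a stand-in for the device information
Device = Tuple[str, str, bool]
Condition = Callable[[List[Device], str], Optional[Device]]

def by_position(devices: List[Device], user_room: str) -> Optional[Device]:
    """Condition using position information: same room as the sound source."""
    return next((d for d in devices if d[1] == user_room), None)

def by_presence(devices: List[Device], user_room: str) -> Optional[Device]:
    """Condition using presence information: a device that detects the user."""
    return next((d for d in devices if d[2]), None)

def re_estimate(devices: List[Device], wrong: Device, user_room: str,
                condition: Condition) -> Optional[Device]:
    """Estimate a second device after the first one was reported as wrong."""
    remaining = [d for d in devices if d is not wrong]
    return condition(remaining, user_room)

devices = [("bedroom AC", "bedroom", False),
           ("living AC", "living room", True),
           ("study AC", "study", False)]

first = by_position(devices, "bedroom")                        # initial estimation
second = re_estimate(devices, first, "bedroom", by_presence)   # re-estimation, different condition
same = re_estimate(devices, first, "bedroom", by_position)     # modification: same condition twice
print(first[0], second[0], same)                               # bedroom AC living AC None
```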
The "position information" is not limited to the room name, and may be user information indicating the position of a user who is a sound source for indicating sound, for example. For example, the user information may be information indicating the longitude and latitude measured by a GPS sensor included in a device carried by the user, an output signal from a human motion sensor (for example, a sensor using infrared rays or a sensor using radio waves of Wi-Fi) that detects the user, information indicating the distance from the user to a reference point (for example, an output signal of an infrared distance sensor), information obtained by analyzing an image taken by a camera of the user, or the like.
In the first embodiment, the management server 10 estimates, as the instruction target, the air conditioner 100 installed in the same room as the room in which the user who is the sound source of the instruction sound is located (T142 of fig. 3). Instead, the management server 10 may estimate, as the instruction target, the air conditioner installed in the room closest to the room in which the user is located.
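A sketch of this "closest room" variant is shown below, assuming a hypothetical table of walking distances between rooms; the distances, room names, and function names are illustrative only.

```python
from typing import Dict, List, Optional, Tuple

# assumed walking distances between rooms (metres)
ROOM_DISTANCE: Dict[Tuple[str, str], float] = {
    ("kitchen", "living room"): 2.0,
    ("kitchen", "bedroom"): 6.0,
    ("kitchen", "study"): 8.0,
}

def distance(a: str, b: str) -> float:
    if a == b:
        return 0.0
    return ROOM_DISTANCE.get((a, b), ROOM_DISTANCE.get((b, a), float("inf")))

def nearest_air_conditioner(user_room: str,
                            acs: List[Tuple[str, str]]) -> Optional[str]:
    """acs is a list of (device name, installed room); pick the one in the same
    room if available, otherwise the one in the closest room."""
    if not acs:
        return None
    return min(acs, key=lambda ac: distance(user_room, ac[1]))[0]

acs = [("bedroom AC", "bedroom"), ("living-room AC", "living room")]
print(nearest_air_conditioner("kitchen", acs))   # -> living-room AC
```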
The "equipment" is not limited to an air conditioner and a television, and may be, for example, a washing machine, an electric cooker, a microwave oven, a water heater, a heater (e.g., a fan heater), a refrigerator, a terminal device, or the like.
The "predetermined action" is not limited to air purification. For example, in the case where the instruction target is a water heater, "the predetermined action" may be a heating action, and the action other than the predetermined action may be a bathtub water adding action. Since the bathtub water adding operation is an operation for storing hot water in the bathtub, if the operation is started by mistake, the hot water in the bathtub is wasted, which has a great economic impact. In contrast, since the heating operation does not increase the amount of hot water in the bathtub, the economical effect is less than that of the bathtub watering operation. For example, when the instruction target is an air conditioner, the "predetermined operation" may be cooling or heating with a small temperature difference between the set temperature and the room temperature, and the operation other than the "predetermined operation" may be cooling or heating with a large temperature difference between the set temperature and the room temperature. The larger the temperature difference, the larger the influence of the human body. In general, the "predetermined action" is an action that is less economically affected or less affected by the human body than an action other than the predetermined action.
The "predetermined action" may be an action that has a slight influence due to the start of the action if the action is canceled immediately after the start of the action. For example, when the instruction target is an air conditioner, the operation of changing the set temperature corresponds to the "predetermined operation" in the present modification because the actual indoor temperature does not change if the original set temperature is restored immediately after the change of the set temperature. For example, when the instruction target is an air cleaner, the operation of starting air cleaning is merely to clean air, and does not have an irreparable adverse effect on the user, and therefore corresponds to the "predetermined operation" in the present modification. On the other hand, in the case where the instruction target is a washing machine, in a situation where laundry scheduled to be washed is loaded in the washing machine, even if the washing is stopped immediately after the start of the washing operation, the laundry is wetted by the input of water, and therefore the operation does not correspond to the "scheduled operation" in the present modification. That is, when the object motion information indicates "start of washing", it is desirable to generate a confirmation sound (e.g., "whether to start washing").
As a modification of the case of fig. 2, assume a situation in which only 1 piece of device information, corresponding to the television 310, is stored in association with the account information AI. In this case, the instruction sound of T10 may be "turn it on" instead of "turn on the TV". Further, the management server 10 may determine, as the instruction target, the television 310 corresponding to the 1 piece of device information received from the control server 50 at T40. The process of determining the instruction target in this situation is an example of "when only 1 candidate device is set, determining the 1 candidate device as the target device regardless of the first condition".
In the third embodiment, the management server 10 estimates the device to be instructed based on a condition that uses both the surrounding information (for example, the room temperature) and the operation information (T642 of fig. 8). Instead, the management server 10 may estimate the device to be instructed based on a condition that uses only the operation information, or based on a condition that uses only the surrounding information. In general, in the present modification, at least one of the "information relating to the state of the device itself" and the "information relating to the state around the device" may be used.
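A sketch of a condition built from the device's own state and its surroundings is given below. The scoring rule, the field names, and the 28 ℃ figure are illustrative assumptions and are not taken from the embodiments.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Candidate:
    name: str
    running: bool        # operation information: is the device currently operating?
    room_temp: float     # surrounding information reported by the device

def estimate_by_state(candidates: List[Candidate],
                      hot_threshold_c: float = 28.0) -> Optional[Candidate]:
    """Example first condition for 'turn on the air conditioner': prefer a
    stopped device installed in the hottest room above the threshold."""
    if not candidates:
        return None
    stopped = [c for c in candidates if not c.running]
    pool = stopped or candidates
    hottest = max(pool, key=lambda c: c.room_temp)
    return hottest if hottest.room_temp >= hot_threshold_c else None

acs = [Candidate("bedroom AC", running=False, room_temp=31.0),
       Candidate("living AC", running=True, room_temp=29.0),
       Candidate("study AC", running=False, room_temp=25.0)]
print(estimate_by_state(acs).name)   # -> bedroom AC
```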
While several embodiments of the present invention have been described above, these embodiments are presented merely as examples and are not intended to limit the scope of the invention. They can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the spirit of the invention. These embodiments and their modifications are included in the scope and gist of the invention, and are likewise included in the invention described in the claims and its equivalents.

Claims (10)

1. A control device is provided with:
a first estimating unit configured to estimate, in a case where a plurality of candidate devices are set when a sound receiving unit capable of receiving sound receives an instruction sound for causing a device to execute a specific operation, a target device to be instructed from among the plurality of candidate devices based on a predetermined first condition;
a determination unit configured to determine, in a case where only 1 candidate device is set when the instruction sound is received by the sound receiving unit, the 1 candidate device as the target device regardless of the first condition; and
a first operation relation processing execution unit that executes first operation relation processing for causing the target device to execute the specific operation based on the instruction sound.
2. The control device according to claim 1,
the first operation relation processing includes a process of transmitting, to the target device, an execution request requesting execution of the specific operation,
the control device further includes an output control unit that causes a sound output unit to output a completion sound indicating that the target device has executed the specific operation.
3. The control device according to claim 2,
the control device further includes a stop processing execution unit that, when the sound receiving unit receives a misinterpretation sound indicating that a specific device has been erroneously estimated as the target device after the sound output unit outputs the completion sound, executes a process of transmitting, to the specific device, a stop request requesting a stop of the specific operation.
4. The control device according to claim 1,
the first operation relation processing includes:
a process of causing a sound output unit to output a confirmation sound for confirming with a user that the target device is to be caused to execute the specific operation; and
a process of transmitting, to a specific device among the plurality of candidate devices, an execution request requesting execution of the specific operation when the sound receiving unit receives a correct-interpretation sound indicating that the specific device has been correctly estimated as the target device after the sound output unit outputs the confirmation sound,
wherein, when the sound receiving unit receives a misinterpretation sound indicating that the specific device has been erroneously estimated as the target device after the sound output unit outputs the confirmation sound, the execution request is not transmitted to the specific device.
5. The control device according to claim 4,
the first action relation process includes: and a processing unit configured to transmit the execution request to the target device without causing the sound output unit to output the confirmation sound when the specific operation is a predetermined operation.
6. The control device according to any one of claims 1 to 5,
the control device further includes:
a second estimating unit configured to, when the sound receiving unit receives a misinterpretation sound indicating that a first device has been erroneously estimated as the target device after the first operation relation processing is executed, estimate, as an instruction target, a second device from among 1 or more candidate devices obtained by excluding the first device from the plurality of candidate devices; and
a second operation relation processing execution unit that executes second operation relation processing for causing the second device to execute the specific operation based on the instruction sound.
7. The control device according to claim 6,
the second estimating unit estimates the second device from among the 1 or more candidate devices based on a predetermined second condition different from the first condition.
8. The control device according to any one of claims 1 to 6,
the first condition includes a condition using position information on a position of a sound source of the instruction sound.
9. The control device according to claim 8,
the first estimating unit estimates, as the target device, a device installed at the same location as the location indicated by the position information, from among the plurality of candidate devices.
10. The control device according to any one of claims 1 to 9,
the first condition includes a condition that uses at least one of information relating to a state of the device itself and information relating to a state around the device.
CN202011187107.3A 2019-12-19 2020-10-30 Control device Pending CN113014461A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019229645A JP7373386B2 (en) 2019-12-19 2019-12-19 Control device
JP2019-229645 2019-12-19

Publications (1)

Publication Number Publication Date
CN113014461A (en) 2021-06-22

Family

ID=76382973

Country Status (2)

Country Link
JP (1) JP7373386B2 (en)
CN (1) CN113014461A (en)

Also Published As

Publication number Publication date
JP7373386B2 (en) 2023-11-02
JP2021099378A (en) 2021-07-01

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication
Application publication date: 2021-06-22