WO2020175293A1 - Système de commande d'appareil, procédé de commande d'appareil, et programme - Google Patents

Système de commande d'appareil, procédé de commande d'appareil, et programme Download PDF

Info

Publication number
WO2020175293A1
WO2020175293A1 (PCT/JP2020/006636)
Authority
WO
WIPO (PCT)
Prior art keywords
information
control
command
additional information
control system
Prior art date
Application number
PCT/JP2020/006636
Other languages
English (en)
Japanese (ja)
Inventor
Hayato Ikebe (池部 早人)
Original Assignee
Panasonic Intellectual Property Management Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Intellectual Property Management Co., Ltd.
Publication of WO2020175293A1 publication Critical patent/WO2020175293A1/fr

Links

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/10 Speech classification or search using distance or distortion measures between unknown speech and reference templates
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q9/00 Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station, in which substation desired apparatus is selected for applying a control signal thereto or for obtaining measured values therefrom

Definitions

  • the present disclosure generally relates to a device control system, a device control method, and a program, and more particularly, to a device control system, a device control method, and a program for controlling a control target including at least one device.
  • Patent Document 1 describes a device control system that controls a device (electric device in a bathroom) based on a voice command issued by a user.
  • the device control system described in Patent Document 1 includes a voice recognition unit that receives a voice uttered by a user and recognizes the input voice.
  • In Patent Document 1, when a user issues the command "turn on the healing light" by voice, the device control system determines which command the input voice data corresponds to. If the input voice data can be recognized as the command "turn on the healing light", the device control system sends an operation signal to the healing light to turn it on.
  • Patent Document 1: Japanese Patent Laid-Open No. 2058-8500
  • the present disclosure has been made in view of the above circumstances, and an object of the present disclosure is to provide a device control system, a device control method, and a program that can flexibly control a device even when a user speaks in the same way.
  • a device control system includes a first acquisition unit, a second acquisition unit, and a control unit.
  • the first acquisition unit acquires command information.
  • the command information is information regarding the recognition result of the voice input to the voice input device.
  • the second acquisition unit acquires additional information.
  • the additional information includes information different from the command information.
  • the control unit controls a control target including at least one device by a control command based on both the command information and the additional information.
  • a device control method includes a first acquisition process, a second acquisition process, and a control process.
  • the first acquisition process is a process of acquiring command information.
  • the command information is information about the recognition result of the voice input to the voice input device.
  • the second acquisition process is a process of acquiring additional information.
  • the additional information is information different from the command information.
  • the control process is a process of controlling a control target including at least one device by a control command based on both the command information and the additional information.
  • a program according to an aspect of the present disclosure is a program for causing one or more processors to execute the device control method.
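The three processes above (first acquisition, second acquisition, and control) can be sketched as follows. This is an illustrative sketch only, not the disclosed implementation; all function names and the example mapping from a recognized word and speaker to a command are hypothetical.

```python
def first_acquisition(recognition_result: str) -> str:
    # Command information I1: information about the recognition result
    # of the voice input to the voice input device.
    return recognition_result.lower()

def second_acquisition(additional_info: dict) -> dict:
    # Additional information I2: information different from I1
    # (e.g. speaker information, period information).
    return additional_info

def control_process(i1: str, i2: dict) -> str:
    # The control command is based on BOTH I1 and I2: even for the
    # same utterance, the resulting command differs with I2.
    if i1 == "good night" and i2.get("speaker") == "child":
        return "lights_off_dim_nightlight"
    if i1 == "good night":
        return "lights_off"
    return "no_op"

def device_control_method(recognition_result: str, additional_info: dict) -> str:
    """Device control method: first acquisition, second acquisition, control."""
    i1 = first_acquisition(recognition_result)
    i2 = second_acquisition(additional_info)
    return control_process(i1, i2)
```

For example, the same utterance "Good night" yields a different control command depending on the speaker information carried in I2.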
  • FIG. 1 is a schematic system configuration diagram showing a usage example of the device control system according to the first embodiment.
  • FIG. 2 is a block diagram showing a schematic configuration of the device control system of the above.
  • Fig. 3 is a flow chart showing an operation example of the above device control system.
  • FIG. 4 is a sequence diagram schematically showing the operation of the above-mentioned device control system.
  • FIG. 5 is a schematic system configuration diagram showing a usage example of the device control system according to the second embodiment.
  • the device control system 1 is a system for controlling a control target T1 (see FIG. 2) including at least one of devices 51 to 55. Each of the plurality of devices 51 to 55 is also simply referred to as "device 5" unless the devices 51 to 55 need to be distinguished.
  • the device control system 1 controls the device 5 according to the control command by transmitting a control command from the outside of the device 5 to the device 5 as the controlled object.
  • the device control system 1 controls the control target T1 based on the voice input to the voice input device 2. That is, with the device control system 1, the user U1 can control the control target T1 by voice, that is, perform a so-called voice operation. Therefore, the user U1 can operate the control target T1 by speaking (that is, uttering a voice) even when both hands are full.
  • with the device control system 1, even if the device 5 has no communication function with the voice input device 2 and does not itself support voice operation, indirect voice operation via the device control system 1 is possible in response to the voice input of the user U1.
  • since the device control system 1 cooperates with the voice input device 2, even a device 5 that is not directly linked to the voice input device 2 can be indirectly linked to the voice input device 2.
  • the device control system 1 includes a first acquisition unit 11, a second acquisition unit 12, and a control unit 13.
  • the first acquisition unit 11 acquires the command information I1.
  • the command information I1 is information about the recognition result of the voice input to the voice input device 2.
  • the second acquisition unit 12 acquires the additional information I 2.
  • the additional information I2 includes information different from the command information I1.
  • the control unit 13 controls a control target T1 consisting of at least one device 5 by a control command I3 based on both the command information I1 and the additional information I2.
  • in controlling the control target T1, the device control system 1 uses not only the command information I1 based on the voice input to the voice input device 2 but also the additional information I2. Therefore, for example, even if the voices input to the voice input device 2 are the same, the control performed on the control target T1 consisting of at least one device 5 is not uniquely determined; different controls are performed depending on the additional information I2. The device control system 1 therefore has the advantage that the device 5 can be flexibly controlled even when the user U1 speaks in the same way.
  • the device control system 1 is installed in the facility F1, and at least one of the plurality of devices 5 installed in the facility F1 is set as the control target T1.
  • the term “facility” as used in the present disclosure includes a single-family house, an apartment house, a housing facility such as each dwelling unit of the apartment house, and a non-residential facility such as an office, a factory, a building, a store, a school, a welfare facility, or a hospital.
  • Non-residential facilities also include theaters, cinemas, public halls, amusement facilities, complex facilities, restaurants, department stores, hotels, inns, kindergartens, libraries, museums, art galleries, underground malls, stations, and airports.
  • "facility" referred to in this disclosure includes not only buildings but also outdoor facilities such as stadiums, gardens, parking lots, grounds, and parks.
  • the device control system 1 controls, as the control target T1, at least one of the plurality of devices 5 installed in the facility F1 (here, a detached house).
  • the facility F1 is a detached house in which a family of four, a father, a mother, an older sister, and a younger brother, lives. Therefore, in this embodiment, the father, mother, sister, and younger brother who are residents of the facility F1 can each be a user U1 of the device control system 1.
  • the "device" as referred to in the present disclosure is a device that is controlled by receiving a control command from the device control system 1, and includes, as examples, electric devices that use electric power, and stationary or portable facilities, apparatuses, and systems.
  • the facility F 1 is provided with a plurality of devices 51 to 55 that can be the control target T 1.
  • the devices 51 and 54 are air conditioners,
  • the devices 52 and 55 are lighting fixtures, and
  • the device 53 is an electric shutter.
  • "speech" as used here means, in addition to a speech sound emitted by a person through the vocal organs, all sounds intentionally emitted by humans, including, for example, a whistle, a finger-snapping sound, and a clapping sound.
  • the device control system 1 includes a controller 10 and a control server 100.
  • the controller 10 and the control server 100 are configured to be able to communicate with each other.
  • the term “communicable” in the present disclosure means that a signal can be directly or indirectly transmitted and received by an appropriate communication method such as wired communication or wireless communication. That is, the controller 10 and the control server 100 can exchange signals with each other.
  • Both the controller 10 and the control server 100 are connected to a network such as the Internet, and can send and receive signals to and from each other bidirectionally via the network.
  • the device control system 1 (the controller 10 and the control server 100) is configured to communicate with each of the voice input device 2, the voice recognition server 3, and the devices 5 (control target T1). That is, the device control system 1 can exchange signals with each of the voice input device 2, the voice recognition server 3, and the devices 5.
  • the device control system 1 does not include the voice input device 2, the voice recognition server 3, and the device 5 as constituent elements.
  • however, at least one of the voice input device 2, the voice recognition server 3, and the device 5 may be included in the components of the device control system 1.
  • the device control system 1 may include the voice recognition server 3 as a component.
  • each of the controller 10, the control server 100, the voice input device 2, and the voice recognition server 3 mainly comprises a computer system (including a server and cloud computing) having one or more processors and one or more memories.
  • the processor implements the functions of the controller 10, the control server 100, the voice input device 2, and the voice recognition server 3 by executing the program recorded in the memory.
  • the program may be recorded in a memory in advance, may be recorded in a non-transitory recording medium such as a memory card and provided, or may be provided through an electric communication line.
  • the above program is a program for causing one or more processors to function as each of the controller 10, the control server 100, the voice input device 2, and the voice recognition server 3.
  • the components of the device control system 1 are distributed and arranged in the controller 10 and the control server 100.
  • the controller 10 is installed in the facility F 1
  • the control server 100 is installed outside the facility F 1.
  • the control server 100 is operated by, for example, a service providing company that provides the user U 1 with a device control method that enables the device 5 to be operated by voice, as a service.
  • the details of the configuration of the device control system 1 are described in the section “(2.3) Configuration of device control system”.
  • the voice input device 2 is, for example, a home voice assistant device that responds to the voice of the user U 1 to play music, control a specific home electric appliance, and the like.
  • This type of voice input device 2 performs voice recognition and other operations using technologies such as machine learning and interactive artificial intelligence (AI).
  • the “speech recognition” in the present disclosure includes not only a process of converting the voice of the user U 1 into a character string, but also a natural language process such as semantic analysis and context analysis.
  • the voice input device 2 is installed in the facility F 1.
  • the voice input device 2 is connected to a router installed in the facility F1 by, for example, wireless communication using radio waves as a medium. This enables the voice input device 2 to perform bidirectional communication with the voice recognition server 3 via the router and the network.
  • the communication system of the voice input device 2 is, for example, wireless communication such as Wi-Fi (registered trademark).
  • the voice input device 2 recognizes the voice input to it (voice recognition) using the registered words stored in the memory.
  • the voice input device 2 outputs the recognition result of the inputted voice to the voice recognition server 3.
  • the registered words include an acoustic model and a recognition dictionary. That is, the voice input device 2 refers to the registered words, analyzes the voice input from the user U1 to extract acoustic features, and performs voice recognition with reference to the recognition dictionary.
  • the voice input device 2 stores, for example, registered words for each user U1.
  • the voice input device 2 performs speaker identification to identify the user U1 who uttered the voice, so that malfunction of the voice input device 2 due to voices other than those of pre-registered users U1, for example, the voice of a visitor or the audio of a television or radio program, can be suppressed.
  • the voice input device 2 is arranged in a place where the voice of the user U1 relatively easily reaches, such as the living room of the facility F1 (here, a detached house).
  • the voice input device 2 starts accepting the voice operation of the user U1 with a specific event as a trigger.
  • the specific event is, for example, that a voice representing a specific keyword (wake word) such as "voice operation start" is input from the user U1. Therefore, the user U1 can start the voice operation on the voice input device 2 even when both hands are full.
  • the specific event for starting acceptance of the voice operation of the user U1 is not limited to the input of a specific voice (word).
  • for example, the user U1 moving in front of the voice input device 2, pressing a button on the voice input device 2, or the like may serve as the trigger.
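The trigger-gated acceptance of voice operation can be sketched as follows; this is an illustrative sketch, with the class name, the wake-word string, and the gating logic assumed for illustration only.

```python
WAKE_WORD = "voice operation start"  # hypothetical wake word (a specific keyword)

class VoiceInput:
    """Sketch of wake-word gating: utterances are ignored until a
    specific event (here, hearing the wake word) starts voice-operation mode."""

    def __init__(self):
        self.accepting = False  # not yet accepting voice operation

    def hear(self, utterance: str) -> bool:
        """Return True if the utterance is accepted for voice operation."""
        if not self.accepting:
            # Only the trigger event can open voice-operation mode;
            # the wake word itself is not treated as an operation.
            self.accepting = (utterance == WAKE_WORD)
            return False
        return True
```

Other trigger events (the user moving in front of the device, a button press) would simply set `accepting` through a different path.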
  • the voice input device 2 has a speaker that converts an electric signal into sound, in addition to a microphone that converts the voice of the user U1 into an electric signal. Therefore, the voice input device 2 can not only accept operations (voice input) by the voice of the user U1 but also output sound (including voice, beep sounds, melodies, etc.) to the user U1. Therefore, the user U1 can operate the voice input device 2 in a hands-free and interactive manner.
  • the voice recognition server 3 is configured to be communicable with the voice input device 2, as described above. That is, the voice recognition server 3 is connected to a network such as the Internet and can exchange signals with the voice input device 2 bidirectionally via the network.
  • the voice recognition server 3 receives at least the voice recognition result from the voice input device 2.
  • the recognition result received by the voice recognition server 3 from the voice input device 2 includes at least the word input as voice to the voice input device 2, in a form that can be processed by one or more processors of the voice recognition server 3. For example, when the user U1 utters (speaks) the word "good morning" to the voice input device 2, the recognition result received by the voice recognition server 3 from the voice input device 2 includes at least "good morning".
  • the voice recognition server 3 is also communicable with the control server 100 of the device control system 1. Both the voice recognition server 3 and the control server 100 are connected to a network such as the Internet and can exchange signals bidirectionally with each other via the network.
  • upon receiving the recognition result from the voice input device 2, the voice recognition server 3 determines whether the recognition result is a control word.
  • the voice recognition server 3 stores in advance, as control words, specific keywords such as "Good morning", "Good night", "I'm leaving", "I'm home", "Brighten", "Darken", "Raise the temperature", and "Lower the temperature".
  • the control words can be arbitrarily set by the user U1 and registered through the setting unit 16 described later.
  • control words such as "Good morning", "Good night", "I'm leaving", and "I'm home" do not directly indicate the control of the device 5, but are merely "scene words" indicating scenes in the life of the user U1, such as getting up, going to bed, going out, and coming home.
  • the voice recognition server 3 outputs (transmits) the command information I1 to the control server 100 when the received recognition result is a control word, that is, when the recognition result corresponds to a control word.
  • the "command information" referred to in the present disclosure is information regarding the recognition result of the voice input to the voice input device 2. That is, the command information I1 may be information including the recognition result itself received by the voice recognition server 3 from the voice input device 2, or may be information generated based on the recognition result. As an example, when the voice recognition server 3 receives a recognition result including the word "good morning" from the voice input device 2, it generates command information I1 including the word "good morning" and outputs the command information I1 to the control server 100.
  • the voice recognition server 3 does not output the command information I1 to the control server 100 when the received recognition result is not a control word, that is, when the recognition result does not correspond to a control word.
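The control-word check on the voice recognition server 3 can be sketched as a simple set-membership test. The keyword set follows the examples above; the function name and the dictionary shape of the command information are hypothetical.

```python
# Control words registered in advance (examples from the description;
# scene words like "good morning" and direct commands like "brighten").
CONTROL_WORDS = {
    "good morning", "good night", "i'm leaving", "i'm home",
    "brighten", "darken", "raise the temperature", "lower the temperature",
}

def to_command_info(recognition_result: str):
    """Return command information I1 if the recognition result corresponds
    to a control word; otherwise return None, and nothing is sent to the
    control server."""
    word = recognition_result.strip().lower()
    if word in CONTROL_WORDS:
        return {"word": word}  # I1: information about the recognition result
    return None
```

A recognition result that is not a control word simply produces no output toward the control server 100.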
  • each of the controller 10 and the control server 100 mainly comprises a computer system (including a server and cloud computing) having one or more processors and one or more memories.
  • the device control system 1 includes the first acquisition unit 11, the second acquisition unit 12, and the control unit 13. Further, in the present embodiment, as shown in FIG. 2, the device control system 1 further includes an input unit 14, a display unit 15, a setting unit 16, a storage unit 17, a communication unit 18, and an information generation unit 19.
  • the first acquisition unit 11 through the communication unit 18 are provided in the controller 10, and the information generation unit 19 is provided in the control server 100. That is, in this embodiment, the components of the device control system 1, namely the first acquisition unit 11, the second acquisition unit 12, the control unit 13, the input unit 14, the display unit 15, the setting unit 16, the storage unit 17, the communication unit 18, and the information generation unit 19, are distributed between the controller 10 and the control server 100.
  • the first acquisition unit 11 acquires the command information I 1.
  • the command information I1 is information about the recognition result of the voice input to the voice input device 2, for example, information including the recognition result itself received by the voice recognition server 3 from the voice input device 2.
  • as an example, the first acquisition unit 11 acquires from the voice recognition server 3 the command information I1 including the word "good morning".
  • the timing at which the first acquisition unit 11 acquires the command information I1 depends on the timing at which the voice recognition server 3 outputs the command information I1.
  • the second acquisition unit 12 acquires the additional information I2 as described above. The additional information I2 includes information different from the command information I1.
  • the additional information I 2 includes external information that is independent of the command information I 1.
  • the "external information" referred to in the present disclosure is information that is independent of the command information I1, that is, does not depend on the command information I1.
  • in contrast, information that depends on the command information I1, such as the number of times or frequency with which the first acquisition unit 11 acquires the command information I1, that is, information that correlates with the command information I1, is "internal information".
  • the additional information I2 does not include internal information; of external information and internal information, it includes only external information.
  • the additional information I2 includes input information that changes according to an operation of a person (user U1). That is, for example, input information that changes when the user U1 manually operates a switch or the like is included in the additional information I2.
  • the additional information I2 includes period information regarding a period. "Period" in the present disclosure means a section defined with a certain width on the time axis, and includes, for example, a time zone, a day (including day of the week and weekday/holiday), a month, a year, and a season. That is, for example, when the unit is the season, the additional information I2 includes period information indicating whether the current season is spring, summer, autumn, or winter.
  • the additional information I2 includes speaker information about the speaker (user U1) who uttered the voice input to the voice input device 2.
  • for example, speaker information indicating whether the user U1 who uttered the voice input to the voice input device 2 is the father, mother, sister, or younger brother is included in the additional information I2.
  • the additional information I2 includes state information, extracted from the voice input to the voice input device 2, related to the state of the speaker who uttered the voice.
  • the "speaker state" referred to in the present disclosure is information that can be extracted from the voice input to the voice input device 2, and includes, for example, the emotion or physical condition of the speaker estimated from the loudness (volume) and tone of the voice.
  • the speaker's emotion can be estimated using, for example, a two-dimensional model of human emotion (Russell's circumplex model) whose two axes are arousal (arousal level), the degree of arousal, and valence (valence level), the degree of comfort.
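As a rough illustration of how such a two-dimensional model might be applied, the sketch below maps an (arousal, valence) point to an emotion quadrant. The thresholds, labels, and function name are assumptions for illustration, not part of the disclosure.

```python
def estimate_emotion(arousal: float, valence: float) -> str:
    """Map a point in Russell's circumplex model to a rough emotion
    quadrant: arousal on one axis, valence (comfort) on the other.
    Thresholding both axes at 0 is a simplification."""
    if arousal >= 0:
        # High arousal: pleasant -> excited, unpleasant -> distressed.
        return "excited" if valence >= 0 else "distressed"
    # Low arousal: pleasant -> relaxed, unpleasant -> depressed.
    return "relaxed" if valence >= 0 else "depressed"
```

Such a quadrant label could serve as the state information included in the additional information I2.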
  • the input information, the period information, the speaker information, and the state information as described above do not depend on the command information I1, that is, they are information independent of the command information I1, and are therefore included in the "external information".
  • the timing at which the second acquisition unit 12 acquires the additional information I2 is synchronized with the timing at which the first acquisition unit 11 acquires the command information I1. That is, the timing at which the second acquisition unit 12 acquires the additional information I2 is determined by the timing at which the voice recognition server 3 outputs the command information I1.
  • the control unit 13 controls the control target T1 consisting of at least one device 5 by the control command I3 based on both the command information I1 and the additional information I2.
  • the control unit 13 generates the control command I3 based on both the command information I1 acquired by the first acquisition unit 11 and the additional information I2 acquired by the second acquisition unit 12. With the control command I3 generated in this way, the control unit 13 executes control of the control target T1 including at least one device 5.
  • in other words, the control unit 13 reflects not only the command information I1 but also the additional information I2 in the control of the control target T1. For example, even if the command information I1 is the same, if the additional information I2 satisfies different conditions, the control unit 13 executes control of the control target T1 with different control commands I3. That is, even if the command information I1 is the same, the control unit 13 controls the control target T1 with a first control command I3 when the additional information I2 satisfies a first condition, and controls the control target T1 with a second control command I3 when the additional information I2 satisfies a second condition.
  • here, the second condition is a condition different from the first condition, and the second control command I3 is a control command I3 different from the first control command I3.
  • as described above, the additional information I2 may include the input information, the period information, the speaker information, and the state information. When these are included in the additional information I2, the control unit 13 changes the control command I3 according to the input information, the period information, the speaker information, and the state information. That is, when the additional information I2 includes input information, the control unit 13 changes the control command I3 at least according to the input information. When the additional information I2 includes period information, the control unit 13 changes the control command I3 at least according to the period information. When the additional information I2 includes speaker information, the control unit 13 changes the control command I3 at least according to the speaker information.
  • likewise, when the additional information I2 includes state information, the control unit 13 changes the control command I3 at least according to the state information. Further, when the additional information I2 includes two or more of the input information, the period information, the speaker information, and the state information, the control unit 13 changes the control command I3 according to the two or more pieces of information.
  • the "control command" in the present disclosure includes a target specifying item that specifies at least one device 5 to be controlled among the plurality of devices 5, and a content specifying item that specifies the operation of the control target. The control unit 13 determines the target specifying item and the content specifying item based on both the command information I1 and the additional information I2.
  • the control command I3 can specify, by the target specifying item, at least one device 5 to be controlled.
  • the target specifying item is represented by, for example, identification information (an address or the like) assigned to each of the plurality of devices 5 and stored in each device 5.
  • the control command I3 can specify, by the content specifying item, how the control target T1 is to be controlled.
  • for example, when the devices 51 and 54 consisting of air conditioners are the control target T1, operations such as the operating mode (cooling/heating, etc.), set temperature, wind direction, air volume, and operating time (timer) of the control target T1 are specified in the content specifying item.
  • likewise, when the devices 52 and 55 consisting of lighting fixtures are the control target T1, operations such as turning off, turning on, dimming level (brightness), light color (color temperature), and operating time (timer) of the control target T1 are specified in the content specifying item.
  • when the device 53 consisting of an electric shutter is the control target T1, operations such as opening/closing and the degree of opening of the control target T1 are specified in the content specifying item.
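The two-part structure of the control command I3 (a target specifying item plus a content specifying item) can be modeled as a small data structure. The field names, device identifier, and air-conditioning example values below are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ControlCommand:
    """Control command I3: a target specifying item and a content
    specifying item, determined from both I1 and I2."""
    target: str    # target specifying item: identification info (address, etc.)
    content: dict  # content specifying item: how the target is to operate

# Air-conditioning example: device 51 is named by the target item,
# and its operating mode, set temperature, etc. by the content item.
cmd = ControlCommand(
    target="device-51",
    content={"mode": "heating", "set_temp": 22,
             "air_volume": "low", "timer_min": 60},
)
```

A lighting or shutter command would reuse the same shape with different content keys (e.g. dimming level, or degree of opening).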
  • by outputting the control command I3, the control unit 13 can specify which of the plurality of devices 5 is the control target T1 and how the control target T1 is to be controlled.
  • moreover, the control unit 13 can select two or more of the plurality of devices 5 as the control target T1 based on both the command information I1 and the additional information I2. That is, the control unit 13 can select not only one device 5 but also a plurality of (two or more) devices 5 as the control target T1. As a result, the control unit 13 can collectively control the plurality of devices 5.
  • here, the two or more devices 5 selected as the control target T1 may be devices 5 of the same type, or may be devices 5 of different types such as, for example, an air conditioner and a lighting fixture.
  • in the present embodiment, a control list (control table) including a plurality of command candidates corresponding to the control command I3 (the target specifying item and the content specifying item) described above is stored in the storage unit 17 described later. The control unit 13 then selects one of the command candidates included in the control list stored in the storage unit 17, and executes the control of the control target T1 according to the control command I3 corresponding to the selected command candidate.
  • in the control list, one piece of command information I1 is associated with one or more command candidates. That is, the command candidates cannot always be narrowed down to one by the command information I1 alone; however, the control unit 13 according to the present embodiment can narrow the command candidates down to one by using the additional information I2 in addition to the command information I1. In short, the control unit 13 selects one command candidate from the plurality of command candidates associated with one piece of command information I1 based on the additional information I2, and controls the control target T1 using the selected command candidate as the control command I3.
• a default command candidate is prepared in the control list. That is, even when the command candidates cannot be narrowed down by the additional information I2, the control unit 13 can control the control target T1 by adopting the default command candidate as the control command I3. In short, when the additional information I2 satisfies a determination condition, the control unit 13 selects one command candidate according to the determination condition and controls the control target T1 using the selected command candidate as the control command I3. On the other hand, when the additional information I2 does not satisfy the determination condition, the control unit 13 selects the default command candidate from the plurality of command candidates and controls the control target T1 using the default command candidate as the control command I3.
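The narrowing described above can be sketched in code. The following is an illustrative, non-authoritative sketch: the data layout, the names such as `CONTROL_LIST` and `select_command`, and the season-based determination condition are assumptions for illustration; the patent does not prescribe an implementation.

```python
# Illustrative sketch of the control-list lookup (all names are hypothetical).
# Each command candidate pairs a target specifying item (which devices 5)
# with a content specifying item (how to control them).

CONTROL_LIST = {
    # command information -> list of (determination condition, command candidate)
    "good morning": [
        ({"season": "summer"}, {"targets": [51], "action": "cooling on"}),
        ({"season": "winter"}, {"targets": [51], "action": "heating on"}),
    ],
}
# Default command candidate per command information, used when the
# additional information does not satisfy any determination condition.
DEFAULTS = {"good morning": {"targets": [52], "action": "on"}}

def select_command(command_info, additional_info):
    """Narrow candidates by command information, then by additional
    information; fall back to the default candidate otherwise."""
    primary = CONTROL_LIST.get(command_info, [])  # primary candidates
    for condition, candidate in primary:
        if all(additional_info.get(k) == v for k, v in condition.items()):
            return candidate  # condition satisfied: one candidate selected
    return DEFAULTS.get(command_info)  # condition not satisfied: default

print(select_command("good morning", {"season": "winter"}))
# → {'targets': [51], 'action': 'heating on'}
```

The same command information thus yields different control commands depending on the additional information, and a usable candidate is still produced when the additional information cannot be discriminated.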
• the input unit 14 receives an input signal corresponding to an operation by the user U1.
• thereby, the device control system 1 can accept operations by the user U1.
• the input signal received by the input unit 14 is generated by an operation other than the voice operation of the user U1, basically an operation performed by the user U1 with his or her own hand.
• the input unit 14 receives, as an example, an input signal generated by operating a touch panel display or a push button provided on the controller 10.
• the display unit 15 has a function of presenting information to the user U1 by display.
• the display unit 15 and the input unit 14 are realized by, for example, a touch panel display. The controller 10 detects an operation (such as a tap, swipe, or drag) on an object such as a button on each screen displayed on the touch panel display, and judges that the object such as the button has been operated.
• the display unit 15 included in the touch panel display is realized by, for example, a liquid crystal display, an organic EL (electroluminescence) display, or the like.
• the setting unit 16 sets the correspondence among the command information I1, the additional information I2, and the control command I3.
• the setting unit 16 has two modes: a normal mode in which the setting function is disabled, and a manual setting mode.
• in the normal mode, the setting unit 16 fixes the correspondence among the command information I1, the additional information I2, and the control command I3, and does not set (change) this correspondence.
• the manual setting mode is a mode in which the user U1 can manually set the correspondence among the command information I1, the additional information I2, and the control command I3.
• in the manual setting mode, the setting unit 16 changes the correspondence among the command information I1, the additional information I2, and the control command I3 according to, for example, the input signal received by the input unit 14, that is, the input signal corresponding to the operation of the user U1.
• the user U1 can thereby associate one command candidate with one combination of the command information I1 and the additional information I2 on the control list.
  • the setting unit 16 has a learning mode in addition to the normal mode and the manual setting mode.
• the learning mode is a mode in which, when control equivalent to the control command I3 is performed on the control target T1, the correspondence among the control command I3, the command information I1, and the additional information I2 is generated. That is, in the learning mode, the setting unit 16 automatically learns and adjusts (changes) the correspondence among the command information I1, the additional information I2, and the control command I3.
• for example, when the user U1 performs an operation on a device 5 that is highly correlated with one combination of the command information I1 and the additional information I2, the setting unit 16 generates a command candidate corresponding to this operation and adds it to the control list.
• alternatively, the setting unit 16 changes (including deletes) the command candidates in the control list in response to such an operation.
• the learning in the setting unit 16 in the learning mode may be performed, for example, by training a neural network built in the control server 100.
• the storage unit 17 stores, for example, the control list including the plurality of command candidates. The storage unit 17 further stores information and the like necessary for computation in the control unit 13 and the like.
• the storage unit 17 includes a rewritable nonvolatile memory such as an EEPROM (Electrically Erasable Programmable Read-Only Memory).
• the communication unit 18 has a function of communicating with each of the control server 100 and the control target T1.
• the communication unit 18 is capable of bidirectional communication with the control server 100, for example, via a network. Further, the communication unit 18 is capable of bidirectional communication with the control target T1, for example, by wireless communication using radio waves as a medium.
• examples of the wireless communication include a specified low-power radio station in the 920 MHz band (a radio station that does not require a license), Wi-Fi (registered trademark), Bluetooth (registered trademark), and the like.
• the communication protocol used in the communication between the communication unit 18 (controller 10) and the control target T1 is, for example, Ethernet (registered trademark) or ECHONET Lite (registered trademark).
• the information generation unit 19 provided in the control server 100 generates various kinds of information that can be included in the additional information I2.
• in the case where the additional information I2 includes period information indicating the season, the information generation unit 19 generates period information indicating whether the current season is spring, summer, autumn, or winter according to the current time (including the month and day). In this case, the control server 100 generates the additional information I2 including this period information.
• further, the information generation unit 19 communicates with the voice recognition server 3 to generate speaker information indicating whether the user U1 who uttered the voice input to the voice input device 2 is, for example, the father, the mother, the sister, or the brother.
• moreover, the information generation unit 19 communicates with the voice recognition server 3 to generate state information indicating the state of the speaker (user U1) who uttered the voice input to the voice input device 2.
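As a sketch, the period-information generation described above might look as follows in code. The four-way month split, the function names, and the dictionary shape of the additional information are assumptions for illustration, not taken from the patent.

```python
from datetime import date

def period_info(today):
    """Derive the season from the current time (month and day); a simple
    month-based split is assumed here for illustration."""
    month = today.month
    if 3 <= month <= 5:
        return "spring"
    if 6 <= month <= 8:
        return "summer"
    if 9 <= month <= 11:
        return "autumn"
    return "winter"

def generate_additional_info(today, speaker=None, state=None):
    # Bundle period, speaker, and state information into additional information I2.
    return {"season": period_info(today), "speaker": speaker, "state": state}

print(generate_additional_info(date(2020, 2, 25), speaker="father"))
# → {'season': 'winter', 'speaker': 'father', 'state': None}
```

In a real system the speaker and state fields would be filled from the voice recognition server rather than passed in directly.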
  • FIG. 3 is a flow chart showing an example of the operation of the device control system 1.
• the voice input device 2 starts accepting the voice operation of the user U1, triggered by a voice input from the user U1 representing a specific keyword (wake word) such as "start voice operation". In other words, the voice input device 2 determines that there is no voice input (S1: No) until it receives the input of the wake word from the user U1, and repeats process S1.
• when the voice input device 2 receives the input of the wake word from the user U1, it determines that there is a voice input (S1: Yes) and accepts the voice from the user U1. At this time, the recognition result of the voice input to the voice input device 2 is transmitted from the voice input device 2 to the voice recognition server 3. Then, the voice recognition server 3 determines whether or not the recognition result received from the voice input device 2 is a control word (S2). If the recognition result received from the voice input device 2 is not a control word (S2: No), process S1 is repeated.
• on the other hand, if the recognition result is a control word, the voice recognition server 3 outputs the command information I1 to the control server 100.
• the device control system 1 acquires the command information I1 with the first acquisition unit 11 (S3).
• here, the first acquisition unit 11 acquires the command information I1 from the voice recognition server 3 via the control server 100.
• next, the second acquisition unit 12 acquires the additional information I2 (S4).
• the second acquisition unit 12 acquires, for example, the additional information I2 generated by the control server 100 (information generation unit 19) from the control server 100.
• the control unit 13 first narrows down the command candidates based on the command information I1 from among the plurality of command candidates included in the control list stored in the storage unit 17 (S5). At this time, the control unit 13 selects, as primary candidates, two or more command candidates corresponding to the acquired command information I1 from among the plurality of command candidates.
• next, the control unit 13 determines whether or not the additional information I2 satisfies the determination condition (S6).
• the determination condition is, for example, that, when the additional information I2 includes period information indicating a season, the period information indicates either "summer" or "winter". In this case, if the additional information I2 includes period information indicating "summer" or "winter", the determination condition is satisfied; if the additional information I2 includes period information indicating "spring" or "autumn", or if the period information cannot be discriminated, the determination condition is not satisfied.
• when the determination condition is satisfied, the control unit 13 narrows down the command candidates from the two or more command candidates selected as the primary candidates, using the additional information I2 (S7). In this case, the control unit 13 selects one command candidate from the primary candidates according to the determination condition. As described above, the control unit 13 can narrow the command candidates down to one by using the additional information I2 in addition to the command information I1.
• on the other hand, when the determination condition is not satisfied, the device control system 1 causes the control unit 13 to select the default command candidate from the two or more command candidates selected as the primary candidates (S8). In this way, even if the additional information I2 cannot be discriminated and the command candidates cannot be narrowed down by the additional information I2, the control unit 13 can narrow the command candidates down to one.
• the control unit 13 sets the one command candidate selected in process S7 or process S8 as the control command I3, and controls the control target T1 with the control command I3 (S9).
• at this time, the target specifying item included in the control command I3 specifies at least one device 5 to be the control target T1 among the plurality of devices 5. Furthermore, how the control target T1 is controlled is specified by the content specifying item included in the control command I3.
  • the flow chart of FIG. 3 is merely an example of the operation of the device control system 1, and the processes may be omitted or added as appropriate, or the order of the processes may be appropriately changed.
• for example, the process in which the first acquisition unit 11 acquires the command information I1 (S3) and the process in which the second acquisition unit 12 acquires the additional information I2 (S4) may be performed in reverse order.
• FIG. 4 is a sequence diagram schematically showing a series of processes in which, after the user U1 speaks, the device 5 is controlled and the control result is fed back to the user U1.
• suppose that the command information I1 including the word "good morning" is associated with two command candidates regarding the control of the devices 51 to 53 in the "living room" among the devices 51 to 55.
• the command candidate associated with the additional information I2 including the period information "summer" includes a target specifying item that specifies, as the control target T1, the device 51 which is an air conditioner, the device 52 which is a lighting fixture, and the device 53 which is an electric shutter. Further, this command candidate includes a content specifying item for turning on the device 51 with the operation mode "cooling", turning on the device 52, and opening the device 53.
• on the other hand, the command candidate associated with the additional information I2 including the period information "winter" includes a target specifying item that specifies, as the control target T1, the device 51 which is an air conditioner and the device 52 which is a lighting fixture. Further, this command candidate includes a content specifying item for turning on the device 51 with the operation mode "heating" and turning on the device 52.
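Written out as data, the two command candidates in this example could look like the following sketch. The dictionary layout and the helper function are illustrative assumptions; the device numbers and actions follow the text above.

```python
# Command candidates associated with the command information "good morning";
# keys 51-53 are the device numbers used in the example (51: air conditioner,
# 52: lighting fixture, 53: electric shutter).
GOOD_MORNING_CANDIDATES = {
    "summer": {  # target specifying item: devices 51-53
        51: "turn on, operation mode 'cooling'",
        52: "turn on",
        53: "open",
    },
    "winter": {  # target specifying item: devices 51 and 52 only
        51: "turn on, operation mode 'heating'",
        52: "turn on",
    },
}

def controlled_devices(season):
    """Devices controlled when 'good morning' is uttered in the given season."""
    return sorted(GOOD_MORNING_CANDIDATES[season])

print(controlled_devices("summer"))  # → [51, 52, 53]
print(controlled_devices("winter"))  # → [51, 52] (the shutter is not operated)
```

The data makes the asymmetry explicit: the winter candidate simply omits device 53 from its target specifying item.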
• after the voice input device 2 starts accepting the voice operation of the user U1, the user U1 utters the word "good morning" to the voice input device 2 (S11).
• the voice input device 2 receives the voice input from the user U1 and detects a voice command. That is, the voice input device 2 performs voice recognition on the voice uttered by the user U1, and transmits the recognition result including the word "good morning" to the voice recognition server 3 (S12).
• the voice recognition server 3 transmits the command information I1 including the word "good morning" to the control server 100 (S13).
• the control server 100, which has received the command information I1, generates the additional information I2 at the timing of receiving the command information I1 (S14).
• the additional information I2 generated at this time includes at least period information indicating "summer".
• the control server 100 combines the command information I1 and the additional information I2 into one set, and transmits this one set of data (the command information I1 and the additional information I2) to the controller 10 (S15).
• in this example, the command information I1 including the word "good morning" and the additional information I2 including the period information indicating "summer" are transmitted to the controller 10.
• the controller 10, having acquired the command information I1 and the additional information I2, generates the control command I3 based on the command information I1 and the additional information I2, and transmits the control command I3 to the devices 51 to 53 as the control target T1 (S16).
• at this time, the command candidate associated with the additional information I2 including the period information "summer" is selected as the control command I3.
• the devices 51 to 53 as the control target T1 are controlled in accordance with the control command I3 based on the command information I1 ("good morning") and the additional information I2 ("summer").
• specifically, the device 51, which is an air conditioner, turns on with the operation mode "cooling",
• the device 52, which is a lighting fixture, lights up, and
• the device 53, which is an electric shutter, opens.
• thereafter, a response signal is sent to the voice input device 2 through the controller 10, the control server 100, and the voice recognition server 3 in this order (S17 to S20).
• the response signal is information indicating the result of the control of the device 5 by the control command I3.
• the device 5 basically transmits the response signal and then operates according to the control command I3; however, without being limited to this example, the device 5 may operate according to the control command I3 and then transmit the response signal.
• upon receiving the response signal, the voice input device 2 outputs a notification sound to the user U1 (S21). At this time, the notification sound output by the voice input device 2 is, for example, a voice or a beep sound. Further, the voice input device 2 that receives the response signal may notify the user U1 of the control result of the device 5 by a display (including simple light emission) or the like, instead of or in addition to the notification sound.
• on the other hand, in the case of "winter", the control of the device 5 by the device control system 1 differs from the example in FIG. 4. That is, if the season is "winter", the period information included in the additional information I2 is "winter", so the control command I3 generated by the controller 10 differs from the example in FIG. 4. Specifically, the device 51, which is an air conditioner, turns on with the operation mode "heating", the device 52, which is a lighting fixture, lights up, and the device 53, which is an electric shutter, does not operate. In short, even if the command information I1 ("good morning") is the same, the additional information I2 differs, so the device 5 is controlled differently by the device control system 1.
• as described above, according to the device control system 1, the user U1 only has to utter the word "good morning" to realize control of the control target T1 composed of at least one (here, three) device 5. Moreover, since the device control system 1 uses not only the command information I1 based on the voice input to the voice input device 2 but also the additional information I2 to control the control target T1, different controls can be realized depending on the additional information I2 even if the user U1 utters the same word. In other words, even when the user U1 similarly utters the word "good morning" when waking up in "summer" and when waking up in "winter", different controls are executed on the device 5 as described above.
• the command information I1 and the like described above are merely examples and can be changed as appropriate. For example, if a control word such as "good night" is set, the user U1 can utter the words "good night" at bedtime to control the device 5 as set for bedtime. If a control word such as "I'm leaving" or "I'm home" is set, the user U1 can say "I'm leaving" or "I'm home" when going out or coming home, to control the device 5 as set for going out or coming home. Moreover, even in these cases, since the device control system 1 uses not only the command information I1 but also the additional information I2 to control the control target T1, different controls can be realized depending on the additional information I2 even if the user U1 utters the same word.
• for example, the user U1 who has woken up utters the word "good morning", which is one of the control words, toward the voice input device 2.
• here, it is assumed that the command information I1 including the word "good morning" is associated with two command candidates regarding the control of the devices 51 to 53 in the "living room" among the devices 51 to 55.
• in this case, the control unit 13 changes the control command I3 at least in accordance with the period information.
• for example, the control command I3 changes depending on whether or not the word is uttered in a specific time slot around 7 o'clock (from 7 o'clock to 8 o'clock).
• when the additional information I2 includes input information, the control unit 13 changes the control command I3 at least according to the input information.
• therefore, for example, the control command I3 changes depending on whether or not the push button provided on the controller 10 is operated (that is, whether or not the input unit 14 receives the input signal corresponding to the operation of the user U1).
• in this case, the controller 10 generates the additional information I2 including the input information corresponding to the input signal received by the input unit 14, and the second acquisition unit 12 may acquire the additional information I2 generated in the controller 10.
• when the additional information I2 includes speaker information, the control unit 13 changes the control command I3 according to at least the speaker information. Therefore, for example, the control command I3 changes depending on whether the user U1 who utters the word "good morning" is the father, the mother, the sister, or the brother.
• in this case, the voice recognition server 3 generates the additional information I2 including the speaker information, and the second acquisition unit 12 may acquire the additional information I2 from the voice recognition server 3 (via the control server 100).
• when the additional information I2 includes state information, the control unit 13 changes the control command I3 at least according to the state information. Therefore, for example, the control command I3 changes depending on the physical condition of the user U1 who uttered the word "good morning". In this case, for example, the voice recognition server 3 generates the additional information I2 including the state information, and the second acquisition unit 12 may acquire the additional information I2 from the voice recognition server 3 (via the control server 100).
  • the setting operation (manual setting mode or learning mode) by the setting unit 16 will be described.
• the information set in the setting operation is at least the correspondence among the command information I1, the additional information I2, and the control command I3. Further, the control word and the like are also set in the setting operation.
  • the device control system 1 starts a setting operation (manual setting mode or learning mode) with a specific event as a trigger.
• a specific event is, for example, that a voice representing a specific keyword such as "setting mode" is input from the user U1 to the voice input device 2.
  • the specific event for starting the setting operation is not limited to the input of a specific voice (word), and may be, for example, a specific operation for the controller 10.
• when the setting unit 16 starts the setting operation in the manual setting mode, it changes the correspondence among the command information I1, the additional information I2, and the control command I3 according to, for example, the input signal received by the input unit 14, that is, the input signal corresponding to the operation of the user U1.
• specifically, the user U1 associates one command candidate with one combination of the command information I1 and the additional information I2 on the control list stored in the storage unit 17.
• at this time, the setting unit 16 can individually change each of the target specifying item and the content specifying item included in the command candidate (control command I3). Also, the setting unit 16 may add or delete the correspondence among the command information I1, the additional information I2, and the control command I3.
• after changing the control list, the setting unit 16 updates the control list stored in the storage unit 17.
• when the setting unit 16 starts the setting operation in the learning mode, for example, it automatically learns the correspondence among the command information I1, the additional information I2, and the control command I3.
• specifically, when the user U1 operates a device 5 that is highly correlated with one combination of the command information I1 and the additional information I2, the setting unit 16 generates a command candidate corresponding to this operation and adds it to the control list.
• for example, it is assumed that the user U1 performs an operation to open the device 53, which is an electric shutter.
• in this case, the setting unit 16 regards the operation of opening the device 53, which is an electric shutter, as an operation highly correlated with the combination of the command information I1 ("good morning") and the additional information I2 ("summer"), generates a command candidate corresponding to this operation, and adds it to the control list.
• alternatively, the setting unit 16 changes (including deletes) the command candidates in the control list in response to such an operation.
• for example, suppose that a combination of certain command information I1 ("good morning") and additional information I2 ("summer") is associated with a command candidate for turning on the device 52, which is a lighting fixture.
• it is assumed that, shortly (for example, within 1 minute) after the device control system 1 acquires the command information I1 ("good morning") and the additional information I2 ("summer") and controls the device 52 to turn on, the user U1 performs an operation to turn off the device 52.
• in this case, the setting unit 16 regards this as an operation highly correlated with the command candidate for turning on the device 52, which is a lighting fixture, with respect to the combination of the command information I1 ("good morning") and the additional information I2 ("summer"), and changes the command candidate according to this operation. In the above case, the setting unit 16 deletes the command candidate for turning on the device 52, which is a lighting fixture, or changes it to a command candidate for turning off the device 52.
  • the setting unit 16 updates the control list stored in the storage unit 17 after changing the control list.
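The learning-mode behaviour described above can be sketched as follows. This is a deliberately simplified, hypothetical model: the correlation judgment is reduced to a `reverses` flag, and the control-list layout and function names are assumptions for illustration.

```python
# Hypothetical sketch of the learning-mode update of the control list.
control_list = {}  # (command information, season) -> {device number: action}

def learn(command, season, device, action, reverses=False):
    """Add a command candidate for a correlated user operation, or delete it
    when the user promptly reverses a control that was just executed."""
    entry = control_list.setdefault((command, season), {})
    if reverses:
        entry.pop(device, None)   # user undid the control: drop the candidate
    else:
        entry[device] = action    # correlated operation: add/replace candidate

learn("good morning", "summer", 53, "open")      # user opens the shutter
learn("good morning", "summer", 52, "turn on")
learn("good morning", "summer", 52, "turn on", reverses=True)  # user turns it off again
print(control_list[("good morning", "summer")])
# → {53: 'open'}
```

A real implementation would decide `reverses` from the timing of the user operation relative to the executed control (for example, within the 1-minute window mentioned above) rather than take it as an argument.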
  • Embodiment 1 is only one of the various embodiments of the present disclosure.
  • the first embodiment can be variously modified according to the design and the like as long as the object of the present disclosure can be achieved.
  • Each drawing described in the present disclosure is a schematic drawing, and the sizes and thicknesses of the respective constituent elements in the drawings do not necessarily reflect the actual dimensional ratios.
  • the same function as that of the device control system 1 according to the first embodiment may be embodied by a device control method, a computer program, or a non-transitory recording medium in which the computer program is recorded.
• the device control method according to one aspect includes a first acquisition process (corresponding to "S3" in FIG. 3), a second acquisition process (corresponding to "S4" in FIG. 3), and a control process.
• the first acquisition process is a process of acquiring the command information I1.
• the command information I1 is information about the recognition result of the voice input to the voice input device 2.
• the second acquisition process is a process of acquiring the additional information I2.
• the additional information I2 includes information different from the command information I1.
• the control process is a process of controlling the control target T1, which is composed of at least one device 5, by the control command I3 based on both the command information I1 and the additional information I2.
  • a (computer) program according to an aspect is a program for causing one or more processors to execute the device control method.
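As a minimal sketch, the three processes could be composed as below. Every callable here is a stand-in supplied by the caller; the patent does not fix any API, so all names are illustrative assumptions.

```python
def device_control_method(acquire_command, acquire_additional, select, control):
    """First acquisition process, second acquisition process, and control
    process, composed in order (the two acquisitions may also run in the
    reverse order)."""
    command_info = acquire_command()        # first acquisition process
    additional_info = acquire_additional()  # second acquisition process
    command = select(command_info, additional_info)  # narrow to one candidate
    return control(command)                 # control process

result = device_control_method(
    acquire_command=lambda: "good morning",
    acquire_additional=lambda: {"season": "summer"},
    select=lambda c, a: (c, a["season"]),
    control=lambda cmd: f"executed {cmd[0]} / {cmd[1]}",
)
print(result)
# → executed good morning / summer
```

Packaging the method this way also mirrors the program aspect: the one function is what the one or more processors would execute.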
  • the device control system 1 includes a computer system in the control server 100 and the controller 10, for example.
  • the computer system mainly has a processor and a memory as hardware.
  • the function as the device control system 1 according to the present disclosure is realized by the processor executing the program recorded in the memory of the computer system.
• the program may be pre-recorded in the memory of the computer system, may be provided through a telecommunication line, or may be provided recorded on a non-transitory recording medium readable by the computer system, such as a memory card, an optical disc, or a hard disk drive.
• the processor of the computer system is composed of one or more electronic circuits including a semiconductor integrated circuit (IC) or a large-scale integrated circuit (LSI).
• integrated circuits such as ICs and LSIs are called by different names depending on the degree of integration, and include integrated circuits called system LSI, VLSI (Very Large Scale Integration), or ULSI (Ultra Large Scale Integration).
• further, an FPGA (Field-Programmable Gate Array) programmed after the LSI is manufactured, or a logic device in which the junction relationships inside the LSI or the circuit sections inside the LSI can be reconfigured, can also be adopted as the processor.
• the plurality of electronic circuits may be integrated into one chip or distributed over a plurality of chips.
• the plurality of chips may be integrated into one device or distributed over a plurality of devices.
  • a computer system includes a microcontroller with one or more processors and one or more memories. Therefore, the microcontroller is also composed of one or more electronic circuits including semiconductor integrated circuits or large-scale integrated circuits.
• it is not essential for the device control system 1 that at least part of its functions be integrated in one housing; the components of the device control system 1 may be distributed over a plurality of housings.
• for example, the second acquisition unit 12 of the device control system 1 does not have to be housed in the same housing (the controller 10) as the first acquisition unit 11, and may be provided in a housing different from that of the first acquisition unit 11.
• conversely, at least part of the functions of the device control system 1, for example, the functions of the control unit 13, may be distributed over a plurality of housings, or may be realized by a cloud (cloud computing) or the like.
  • At least a part of the functions distributed to the plurality of devices may be integrated in one housing.
  • the functions distributed to the control server 100 and the controller 10 may be integrated in one housing.
• further, the information generation unit 19, which generates various kinds of information that can be included in the additional information I2, is not limited to the control server 100; it may be provided in the controller 10, or may be provided in a device other than the device control system 1, for example.
• when the additional information I2 includes the input information, the information generation unit 19 may be provided in the controller 10.
• in this case, the information generation unit 19 generates, in the controller 10, the additional information I2 including the input information corresponding to the input signal received by the input unit 14, and the second acquisition unit 12 acquires the additional information I2 generated in the controller 10.
  • the information generation unit 19 may be provided in the voice recognition server 3.
• in this case, the information generation unit 19 generates, in the voice recognition server 3, the additional information I2 including the speaker information, and the second acquisition unit 12 acquires the additional information I2 including the speaker information from the voice recognition server 3 (via the control server 100).
  • the information generation unit 19 may be provided in the voice recognition server 3.
• in this case, the information generation unit 19 generates, in the voice recognition server 3, the additional information I2 including the state information, and the second acquisition unit 12 acquires the additional information I2 including the state information from the voice recognition server 3 (via the control server 100).
• the device 5 that can be the control target T1 only needs to be a device that is controlled in response to the control command I3 from the device control system 1, and is not limited to an air conditioner, a lighting fixture, an electric shutter, or the like.
• the device 5 may be an intercom device, a television receiver, a washing machine, a refrigerator, a multifunction device, an electric lock, a solar power generation facility, a power storage facility, a wiring device, a charger for an electric vehicle, a mobile terminal, or the like.
• further, when the device 5 as the control target T1 is a wiring device such as a branch breaker or a wall switch, the device control system 1 can control the device 5 as the control target T1 so as to control the energization state for each circuit such as a branch circuit.
  • the registered words are stored in the voice input device 2, but this configuration is not essential for the device control system 1.
  • the registered words may be stored in the voice recognition server 3.
  • the audio input device Unit 2 does not operate stand-alone, but executes voice recognition in cooperation with voice recognition server 3.
  • the additional information [2] includes only the external information of the external information and the internal information, but the present invention is not limited to this, and the additional information [2] includes the external information and the internal information. And both may be included. Furthermore, the additional information [2] may not include external information, and may include only internal information of external information and internal information.
  • the additional information I2 may include external information such as weather information and request information in addition to, or in place of, the input information, the period information, the speaker information, and the state information.
  • the weather information mentioned here is information on the weather around the facility F1, and is, for example, information on the weather, the outside temperature, and the amount of pollen scattered.
  • the request information is, for example, information for demanding peak cuts, peak shifts, and the like from an electric utility (an electric power company, etc.).
  • the weather information and the request information are acquired by the control server 100 or the controller 10 from a weather server and a server of the electric utility, respectively.
  • each communication method used in the device control system 1 is not limited to the communication method exemplified in the first embodiment, and can be changed as appropriate.
  • the communication method between the controller 10 (communication unit 18) and the control target T1 is not limited to wireless communication, and may be wired communication conforming to a communication standard such as a wired LAN (Local Area Network).
  • the setting unit 16 need not have all three modes of the normal mode, the manual setting mode, and the learning mode; for example, the setting unit 16 may have only the learning mode.
  • the additional information I2 may include facility information regarding the state of the facility F1.
  • the state of the facility F1 here includes, for example, the opening/closing state and the locked/unlocked state of the openings (doors, windows, etc.) of the facility F1, and the presence/absence of a person in the facility F1.
  • the presence/absence of a person also includes, for example, the attributes (age, gender, etc.) of the person at the facility F1.
  • the control unit 13 changes the control command I3 according to at least the facility information. For example, when the user U1 utters the words "good morning", the control command I3 changes depending on whether an adult is present in the facility F1 (that is, whether only the "older sister" and "younger brother" are present).
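  A minimal sketch of this facility-information-dependent behavior follows. The function name and the scene contents (which devices a "good morning" utterance turns on) are purely illustrative assumptions; the patent does not specify a concrete correspondence.

```python
def command_for_good_morning(facility_info: dict) -> dict:
    """Vary control command I3 for the same utterance ("good morning")
    depending on whether an adult is present in facility F1."""
    if facility_info.get("adult_present"):
        # Illustrative full morning scene when an adult is present.
        return {"targets": ["lighting", "electric shutter", "air conditioner"],
                "operation": "on"}
    # Only children present: a reduced scene (illustrative).
    return {"targets": ["lighting"], "operation": "on"}
```

The same command information I1 ("good morning") thus maps to different control commands I3 depending solely on the additional information I2 (here, facility information).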
  • the device control system 1A differs from the device control system 1 according to the first embodiment in that the control server 100A transmits the control command I3 to the controller 10A.
  • the same components as those in the first embodiment will be denoted by the same reference numerals and the description thereof will be appropriately omitted.
  • the first acquisition unit 11 (see FIG. 2), the second acquisition unit 12 (see FIG. 2), and the control unit 13 (see FIG. 2) are provided not in the controller 10A but in the control server 100A.
  • the controller 10A realizes the control of the device 5 by transmitting (transferring) the control command I3 received from the control server 100A to the device 5 as the control target.
  • the command information I1 is generated by the voice recognition server 3.
  • the command information I1 generated by the voice recognition server 3 includes, for example, an item "air conditioning on" indicating that the device 51 including an air conditioning device is turned on.
  • the command information I1 is information generated by the voice recognition server 3 based on the recognition result received from the voice input device 2.
  • the recognition result including the word "Ohayo" is associated with the command information I1 including the item "air conditioning on", and the voice recognition server 3 generates the command information I1 based on this correspondence.
  • the voice recognition server 3 transmits the generated command information I1 to the control server 100A.
  • the control server 100A, which has received the command information I1 ("air conditioning on"), causes the information generation unit 19 to generate various information that can be included in the additional information I2. For example, when the additional information I2 includes period information indicating a season, the information generation unit 19 generates the period information indicating whether the current season is spring, summer, autumn, or winter according to the current time (including month and day) ("summer" in the example in FIG. 5).
  • the first acquisition unit 11 and the second acquisition unit 12 of the control server 100A acquire the command information I1 and the additional information I2, respectively. Then, the control server 100A causes the control unit 13 to generate the control command I3 based on the command information I1 ("air conditioning on") and the additional information I2 ("summer").
  • the control command I3 includes a target specifying item that specifies the device 51, which is an air conditioner, as the control target T1, and a content specifying item that sets the operation mode of the control target T1 (device 51) to "cooling".
  • the control server 100A transmits such a control command I3 to the controller 10A.
  • the controller 10A executes control of the device 5 in accordance with the control command I3.
  • the device 51, which is an air conditioner, is turned on with the operation mode set to "cooling".
  • in the device control system 1A, setting the device 51 consisting of an air conditioner as the control target is not an indispensable configuration; for example, two or more devices 5 can also be the control target T1.
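  The flow just described (recognition result → command information I1, plus generated additional information I2 → control command I3) can be sketched as follows. All names here (the recognition-to-command mapping, `season_of`, `make_control_command`, the item strings) are illustrative assumptions, not part of the disclosed implementation.

```python
from datetime import date

# Hypothetical correspondence from a recognized word to command information I1.
RECOGNITION_TO_COMMAND = {"Ohayo": "air conditioning on"}

def season_of(today: date) -> str:
    """Generate period information (a season) from the current time (month and day)."""
    if today.month in (3, 4, 5):
        return "spring"
    if today.month in (6, 7, 8):
        return "summer"
    if today.month in (9, 10, 11):
        return "autumn"
    return "winter"

def make_control_command(command_info: str, additional_info: str) -> dict:
    """Build control command I3 from command information I1 and additional information I2."""
    if command_info == "air conditioning on":
        mode = {"summer": "cooling", "winter": "heating"}.get(additional_info, "auto")
        # Target specifying item: device 51 (air conditioner);
        # content specifying item: operation and mode.
        return {"target": "device 51", "operation": "on", "mode": mode}
    raise ValueError(f"unknown command information: {command_info}")

command_info = RECOGNITION_TO_COMMAND["Ohayo"]             # I1
additional_info = season_of(date(2020, 8, 1))              # I2 ("summer")
control_command = make_control_command(command_info, additional_info)  # I3
print(control_command)  # {'target': 'device 51', 'operation': 'on', 'mode': 'cooling'}
```

The point of the sketch is that the same utterance ("Ohayo") yields "cooling" in summer and "heating" in winter, because I3 depends on both I1 and I2.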
  • the device control system (1, 1A) includes the first acquisition unit (11), the second acquisition unit (12), and the control unit (13).
  • the first acquisition unit (11) acquires command information (I1).
  • the command information (I1) is information about the recognition result of the voice input to the voice input device (2).
  • the second acquisition unit (12) acquires the additional information (I2).
  • the additional information (I2) consists of information other than the command information (I1).
  • the control unit (13) controls the control target (T1) including at least one device (5) by the control command (I3) based on both the command information (I1) and the additional information (I2).
  • according to this aspect, not only the command information (I1) based on the voice input to the voice input device (2) but also the additional information (I2) different from the command information (I1) is used to control the control target (T1). Therefore, for example, even if the voice input to the voice input device (2) is the same, the control is not uniquely determined for the control target (T1) consisting of at least one device (5), and different controls are performed depending on the additional information (I2). Therefore, the device control system (1, 1A) has the advantage that the device (5) can be flexibly controlled even when the user (U1) speaks in the same way.
  • the additional information (I2) includes input information that changes according to the operation of the user (U1).
  • the control unit (13) changes the control command (I3) according to at least the input information.
  • the additional information (I2) includes period information regarding a period.
  • the control unit (13) changes the control command (I3) according to at least the period information.
  • the additional information (I2) includes speaker information regarding the speaker of the voice input to the voice input device (2).
  • the control unit (13) changes the control command (I3) according to at least the speaker information.
  • the additional information (I2) includes external information. The external information included in the additional information (I2) is information independent of the command information (I1).
  • the additional information (I2) includes state information.
  • the state information is information that is extracted from the voice input to the voice input device (2) and relates to the state of the speaker who uttered the voice.
  • the control unit (13) changes the control command (I3) according to at least the state information.
  • the control command (I3) includes a target specifying item and a content specifying item.
  • the target specifying item specifies at least one device (5) that is the control target (T1) among a plurality of devices (5).
  • the content specifying item specifies the operation of the control target (T1).
  • the control unit (13) determines the target specifying item and the content specifying item based on both the command information (I1) and the additional information (I2).
  • the control unit (13) selects two or more devices (5) as the control target (T1) from the plurality of devices (5) based on both the command information (I1) and the additional information (I2).
  • the control unit (13) selects one command candidate, based on the additional information (I2), from among a plurality of command candidates associated with one piece of command information (I1), and controls the control target (T1) with the selected command candidate as the control command (I3). According to this aspect, the command candidates are narrowed down from the plurality of command candidates by the command information (I1), and one command candidate is selected based on the additional information (I2). Therefore, the processing load for selecting one command candidate can be reduced.
  • when the additional information (I2) satisfies a judgment condition, the control unit (13) selects one command candidate according to the judgment condition. When the additional information (I2) does not satisfy the judgment condition, the control unit (13) selects a default command candidate from the plurality of command candidates and controls the control target (T1) with the default command candidate as the control command (I3). According to this aspect, the control target (T1) can be controlled even when the additional information (I2) does not satisfy the judgment condition.
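  Read together, the two aspects above describe a two-stage selection: the command information I1 narrows the search to a small list of candidates, a judgment condition on the additional information I2 picks one, and a default candidate serves as the fallback. A minimal sketch follows; the candidate table, its field names, and the conditions are all assumed for illustration.

```python
# Hypothetical command candidates associated with one piece of command information I1.
# A candidate with condition None is the default candidate.
CANDIDATES = {
    "air conditioning on": [
        {"mode": "cooling", "condition": lambda info: info.get("season") == "summer"},
        {"mode": "heating", "condition": lambda info: info.get("season") == "winter"},
        {"mode": "auto", "condition": None},
    ],
}

def select_command(command_info: str, additional_info: dict) -> dict:
    """Select one command candidate as control command I3."""
    # Stage 1: narrow the candidates by command information I1.
    candidates = CANDIDATES[command_info]
    # Stage 2: pick the candidate whose judgment condition I2 satisfies.
    for candidate in candidates:
        cond = candidate["condition"]
        if cond is not None and cond(additional_info):
            return candidate
    # No judgment condition satisfied: fall back to the default candidate.
    return next(c for c in candidates if c["condition"] is None)
```

Because stage 2 only iterates over the already-narrowed list, the per-utterance work stays small even if many command candidates exist overall, which is the processing-load advantage the text claims.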
  • the device control system (1, 1A) according to the eleventh aspect further includes a setting unit (16) in any one of the first to tenth aspects.
  • the setting unit (16) sets the correspondence between the command information (I1), the additional information (I2), and the control command (I3).
  • the setting unit (16) has a learning mode.
  • the learning mode is a mode for generating, when the control target (T1) is controlled by the control command (I3), the correspondence between the control command (I3) and the command information (I1) and the additional information (I2).
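  The learning mode can be pictured as recording, each time the control target T1 is actually controlled, the triple (I1, I2, I3), so that the learned correspondence can be replayed later. The class and method names below are assumptions for the sketch, not the disclosed structure of the setting unit 16.

```python
class SettingUnit:
    """Minimal sketch of a setting unit with a learning mode."""

    def __init__(self):
        # Learned correspondence: (I1, I2) -> I3.
        self.correspondence = {}

    def learn(self, command_info, additional_info, control_command):
        """Learning mode: when T1 is controlled by I3, record the
        correspondence between I3 and the pair (I1, I2)."""
        self.correspondence[(command_info, additional_info)] = control_command

    def lookup(self, command_info, additional_info):
        """Return the learned control command I3, or None if not yet learned."""
        return self.correspondence.get((command_info, additional_info))

unit = SettingUnit()
unit.learn("air conditioning on", "summer",
           {"target": "device 51", "mode": "cooling"})
assert unit.lookup("air conditioning on", "summer")["mode"] == "cooling"
```

A real implementation would also need persistence and a policy for conflicting observations; the sketch only shows the correspondence-generation idea.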
  • the device control method includes a first acquisition process, a second acquisition process, and a control process.
  • the first acquisition process is a process of acquiring the command information (I1).
  • the command information (I1) is information on the recognition result of the voice input to the voice input device (2).
  • the second acquisition process is a process of acquiring the additional information (I2).
  • the additional information (I2) consists of information different from the command information (I1).
  • the control process is a process of controlling the control target (T1) consisting of at least one device (5) by the control command (I3) based on both the command information (I1) and the additional information (I2).
  • the device control method has the advantage that the device (5) can be flexibly controlled even when the user (U1) speaks in the same manner.
  • a program according to a fourteenth aspect is a program for causing a computer system to execute the device control method according to the thirteenth aspect. According to this aspect, not only the command information (I1) based on the voice input to the voice input device (2) but also the additional information (I2) other than the command information (I1) is used to control the control target (T1). Therefore, for example, even if the voice input to the voice input device (2) is the same, the control is not uniquely determined for the control target (T1) consisting of at least one device (5), and different controls are performed depending on the additional information (I2). Therefore, the above program has the advantage that the device (5) can be flexibly controlled even when the user (U1) speaks in the same way.
  • the configurations according to the second to twelfth aspects are not essential configurations for the device control system (1, 1A), and can be omitted as appropriate.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Selective Calling Equipment (AREA)

Abstract

The present invention provides a device control system, a device control method, and a program with which a device can be flexibly controlled even when a user speaks in the same way. A device control system (1) includes a first acquisition unit (11), a second acquisition unit (12), and a control unit (13). The first acquisition unit (11) acquires command information. The command information relates to the recognition result of a voice input to a voice input device (2). The second acquisition unit (12) acquires additional information. The additional information consists of information separate from the command information. The control unit (13) controls a control target (T1) consisting of at least one device by a control command based on both the command information and the additional information.
PCT/JP2020/006636 2019-02-27 2020-02-19 Système de commande d'appareil, procédé de commande d'appareil, et programme WO2020175293A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-034919 2019-02-27
JP2019034919A JP2020141235A (ja) 2019-02-27 2019-02-27 機器制御システム、機器制御方法及びプログラム

Publications (1)

Publication Number Publication Date
WO2020175293A1 true WO2020175293A1 (fr) 2020-09-03

Family

ID=72239029

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/006636 WO2020175293A1 (fr) 2019-02-27 2020-02-19 Système de commande d'appareil, procédé de commande d'appareil, et programme

Country Status (2)

Country Link
JP (1) JP2020141235A (fr)
WO (1) WO2020175293A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021110921A (ja) * 2020-01-08 2021-08-02 ペキン シャオミ パインコーン エレクトロニクス カンパニー, リミテッド 音声対話方法、装置、機器および記憶媒体
WO2022111282A1 (fr) * 2020-11-24 2022-06-02 International Business Machines Corporation Inclusion sonore sélective basée sur l'ar (réalité augmentée) à partir de l'environnement tout en exécutant toute commande vocale

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002182680A (ja) * 2000-12-19 2002-06-26 Alpine Electronics Inc 操作指示装置
JP2005086768A (ja) * 2003-09-11 2005-03-31 Toshiba Corp 制御装置、制御方法およびプログラム
JP2014183491A (ja) * 2013-03-19 2014-09-29 Sharp Corp 電気機器制御装置、電気機器制御システム、プログラム、および電気機器制御方法
WO2015029379A1 (fr) * 2013-08-29 2015-03-05 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Procédé de commande de dispositif, procédé de commande d'affichage et procédé de paiement d'achat
WO2016157537A1 (fr) * 2015-04-03 2016-10-06 三菱電機株式会社 Système de conditionnement d'air

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013138553A (ja) * 2011-12-28 2013-07-11 Toshiba Corp 電力管理サーバ装置、電力管理方法及び電力管理プログラム

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002182680A (ja) * 2000-12-19 2002-06-26 Alpine Electronics Inc 操作指示装置
JP2005086768A (ja) * 2003-09-11 2005-03-31 Toshiba Corp 制御装置、制御方法およびプログラム
JP2014183491A (ja) * 2013-03-19 2014-09-29 Sharp Corp 電気機器制御装置、電気機器制御システム、プログラム、および電気機器制御方法
WO2015029379A1 (fr) * 2013-08-29 2015-03-05 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Procédé de commande de dispositif, procédé de commande d'affichage et procédé de paiement d'achat
WO2016157537A1 (fr) * 2015-04-03 2016-10-06 三菱電機株式会社 Système de conditionnement d'air

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021110921A (ja) * 2020-01-08 2021-08-02 ペキン シャオミ パインコーン エレクトロニクス カンパニー, リミテッド 音声対話方法、装置、機器および記憶媒体
US11798545B2 (en) 2020-01-08 2023-10-24 Beijing Xiaomi Pinecone Electronics Co., Ltd. Speech interaction method and apparatus, device and storage medium
WO2022111282A1 (fr) * 2020-11-24 2022-06-02 International Business Machines Corporation Inclusion sonore sélective basée sur l'ar (réalité augmentée) à partir de l'environnement tout en exécutant toute commande vocale
GB2616765A (en) * 2020-11-24 2023-09-20 Ibm AR (augmented reality) based selective sound inclusion from the surrounding while executing any voice command
US11978444B2 (en) 2020-11-24 2024-05-07 International Business Machines Corporation AR (augmented reality) based selective sound inclusion from the surrounding while executing any voice command

Also Published As

Publication number Publication date
JP2020141235A (ja) 2020-09-03

Similar Documents

Publication Publication Date Title
US11355111B2 (en) Voice control of an integrated room automation system
JP7364948B2 (ja) 機器制御システム
US10985936B2 (en) Customized interface based on vocal input
CN102538143B (zh) 语音智能搜索引擎空调系统及其控制方法
US11929844B2 (en) Customized interface based on vocal input
US11665796B2 (en) Multi-purpose voice activated lighting apparatus
WO2020175293A1 (fr) Système de commande d'appareil, procédé de commande d'appareil, et programme
CN110164436A (zh) 便携式多点智能语音控制家居的系统及方法
JP7340764B2 (ja) 音声制御システム
KR20200058015A (ko) 기기 원격 제어 시스템
CN202075619U (zh) 智能家居控制系统
CN102981420A (zh) 智能声波传感服务系统
US20240187276A1 (en) Multi-Source Smart-Home Device Control
CN112003770A (zh) 一种智能语音开关面板及其控制方法
Alkan et al. Indoor Soundscapes of the Future: Listening to Smart Houses
CN113302564A (zh) 实现全屋智能的家居系统的组成及方法
CN115602150A (zh) 能够进行语音控制的电子设备、方法、系统、介质及程序
JP2020013593A (ja) 安否確認システム、及び冷蔵庫
CN116634633A (zh) 一种智能声控室内照明系统
Rashid et al. Smart and Energy Efficient Wireless Embedded Home Automation System

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20762813

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20762813

Country of ref document: EP

Kind code of ref document: A1