WO2015174272A1 - Network system, server, communication device, information processing method, and program - Google Patents

Network system, server, communication device, information processing method, and program

Info

Publication number
WO2015174272A1
Authority
WO
WIPO (PCT)
Prior art keywords
server
processor
instruction
utterance
voice
Prior art date
Application number
PCT/JP2015/062803
Other languages
English (en)
Japanese (ja)
Inventor
Taichiro Morishita (森下 太一郎)
Original Assignee
Sharp Corporation (シャープ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Corporation (シャープ株式会社)
Priority to JP2016519202A (patent JP6349386B2)
Priority to CN201580023383.7A (patent CN106255963B)
Publication of WO2015174272A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M11/00Telephonic communication systems specially adapted for combination with other electrical systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q9/00Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station, in which substation desired apparatus is selected for applying a control signal thereto or for obtaining measured values therefrom

Definitions

  • The present invention relates to technology in which a server directly or indirectly controls home appliances such as refrigerators, air conditioners, and televisions, and more particularly to technology in which a server controls the audio output of home appliances.
  • Patent Document 1 discloses a home appliance, an adapter device, and a home appliance system.
  • a wireless adapter and a speaker are attached as an optional device to a home appliance such as a refrigerator or a microwave oven.
  • the wireless adapter transmits the cooking end signal to the home server through the sub-network, and the home server transmits the cooking end signal to the information server through the Internet.
  • Voice code information such as "cooking is finished" is stored in advance in the wireless adapter; when a voice synthesis request signal is transmitted from the information server to the wireless adapter, the wireless adapter selects the voice code information corresponding to the request signal, performs voice synthesis, and outputs the voice from the speaker.
  • An object of the present invention is to allow a device to output audio more flexibly than before, to suppress the maximum network traffic volume, or to output audio quickly.
  • A network system includes at least one device capable of storing a plurality of types of audio data, and a server that transmits to the at least one device a first instruction for acquiring audio data and a second instruction for outputting audio based on the audio data, the second instruction being transmitted at a timing different from that of the first instruction.
  • the server stores a plurality of second instructions for at least one device.
  • When the at least one device completes outputting the voice in response to the second instruction, it transmits a first notification to the server.
  • the server receives the first notification, the server transmits a second instruction to the at least one device.
  • the server can refer to the correspondence relationship between at least one device and a group.
  • When the server transmits the second instruction to a plurality of devices in the same group and receives the first notification from any of those devices, the server transmits the next second instruction to the plurality of devices in the same group.
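The group-relay behavior described above can be sketched as follows. This is a hypothetical illustration only; the class and field names (`VoiceServer`, `group_of`, `pending`) are assumptions, not terms from the patent.

```python
from collections import deque


class VoiceServer:
    def __init__(self, group_of, pending):
        self.group_of = group_of  # device_id -> group name
        self.pending = pending    # group name -> deque of queued second instructions
        self.sent = []            # log of (device_id, instruction) transmissions

    def send_second_instruction(self, device_id, instruction):
        # stand-in for an actual network transmission to the adapter
        self.sent.append((device_id, instruction))

    def on_first_notification(self, device_id):
        # a first notification from ANY device in the group triggers the
        # next second instruction for EVERY device in that group
        group = self.group_of[device_id]
        if self.pending[group]:
            nxt = self.pending[group].popleft()
            for dev, grp in self.group_of.items():
                if grp == group:
                    self.send_second_instruction(dev, nxt)
```

For example, with a refrigerator and an air conditioner in the same "kitchen" group, a completion notification from either one would cause the queued instruction to be relayed to both.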
  • the server can refer to the correspondence relationship between at least one device and a group.
  • the server transmits a different second instruction to each of the plurality of devices in the same group so that the plurality of devices in the same group output different sounds.
  • the group is based on at least one of family, location, room, and attribute.
  • the server stores a plurality of second instructions for at least one device.
  • The at least one device transmits a second notification to the server when it does not output the sound.
  • the server receives the second notification, the server transmits the next second instruction to at least one device.
  • the second instruction includes a condition for outputting sound and an expiration date.
  • At least one device sends a second notification to the server if the condition is not met within the expiration date.
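The condition-and-expiration rule above can be sketched on the device side as follows. The function and callback names are illustrative assumptions; the patent does not specify an API.

```python
from datetime import datetime


def evaluate_instruction(condition_met, expires, now, notify_server, output_voice):
    """Evaluate one second instruction carrying a condition and an expiration date.

    condition_met: bool derived from sensor data
    expires: datetime deadline from the instruction
    """
    if now > expires:
        # the expiration date passed without the condition being satisfied:
        # report back with the second notification
        notify_server("second_notification")
        return "notified"
    if condition_met:
        # within the validity period and the condition holds: speak
        output_voice()
        return "uttered"
    # still within the period; keep waiting for the condition
    return "waiting"
```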
  • Preferably, the server sends a command to cancel the instruction. More preferably, the server may determine whether or not execution of the instruction has been completed, and transmit the command only when the instruction has not been completed.
  • an information processing method in a network system includes at least one device capable of storing a plurality of types of audio data and a server.
  • The method includes a step in which the server transmits a first instruction for causing the at least one device to acquire audio data, and a step in which the server transmits a second instruction for outputting audio based on the audio data at a timing different from that of the first instruction.
  • a server including a communication interface for communicating with at least one device and a processor.
  • The processor transmits, via the communication interface, a first instruction for causing the at least one device to acquire audio data, and transmits a second instruction for outputting audio based on the audio data at a timing different from that of the first instruction.
  • an information processing method in a server includes a communication interface for communicating with at least one device and a processor.
  • The method includes a step in which the processor transmits, via the communication interface, a first instruction for causing the at least one device to acquire audio data, and a step in which the processor transmits, via the communication interface, a second instruction for outputting audio based on the audio data at a timing different from that of the first instruction.
  • a server program includes a communication interface and a processor for communicating with at least one device.
  • The program causes the processor to execute a step of transmitting, via the communication interface, a first instruction for causing the at least one device to acquire audio data, and a step of transmitting, via the communication interface, a second instruction for outputting audio based on the audio data at a timing different from that of the first instruction.
  • a communication device includes a memory for storing a plurality of types of audio data, a communication interface for communicating with a server, and a processor.
  • The processor receives from the server, via the communication interface, a first instruction for acquiring the voice data, and receives a second instruction for outputting voice based on the voice data at a timing different from that of the first instruction.
  • an information processing method in a communication device includes a memory for storing a plurality of types of audio data, a communication interface for communicating with a server, and a processor.
  • The method includes a step in which the processor receives, from the server via the communication interface, a first instruction for acquiring audio data, and a step in which the processor receives, from the server via the communication interface, a second instruction for outputting audio based on the audio data at a timing different from that of the first instruction.
  • a program for a communication device includes a communication interface and a processor for communicating with other devices.
  • The program causes the processor to execute a step of receiving, from the server via the communication interface, a first instruction for acquiring audio data, and a step of receiving, from the server via the communication interface, a second instruction for outputting audio based on the audio data at a timing different from that of the first instruction.
  • As described above, a network system, a server, a terminal, an information processing method, and a program are provided.
  • FIG. 1 is an image diagram showing an overall configuration and an operation outline of a network system 1 according to first to third embodiments.
  • FIG. 2 is a block diagram showing the hardware configuration of the server 100 according to the present embodiment. FIG. 3 is an image diagram showing the data structure of the audio data acquisition instruction database 121 according to the present embodiment.
  • FIG. 1 is an image diagram showing the overall configuration and operation overview of the network system 1 according to the present embodiment.
  • The network system 1 mainly includes devices such as a refrigerator 200A, an air conditioner 200B, and a washing machine 200C; an audio server 100 for controlling the audio output of the devices; and adapters 300A, 300B, and 300C as communication terminals. Furthermore, the network system 1 may include a router 400 for connecting the adapters 300A, 300B, and 300C to the Internet, a control server 500 for processing message exchange between family members or between family members and devices, and terminals such as smartphones 600A, 600B, and 600C and a notebook personal computer 600D.
  • the network system 1 may further include databases 101 and 501.
  • the voice server 100 or the control server 500 may store at least one of the databases 101 and 501.
  • the voice server 100 is connected to the adapters 300A, 300B, and 300C and the control server 500 via the Internet or the router 400.
  • the voice server 100 receives an utterance command from an administrator or receives an utterance command from the smartphones 600A, 600B, and 600C via the control server 500.
  • the voice server 100 causes the refrigerator 200A, the air conditioner 200B, and the washing machine 200C to output voice via the adapters 300A, 300B, and 300C based on the speech command.
  • the devices such as the refrigerator 200A, the air conditioner 200B, and the washing machine 200C transmit the control command received from the remote controller, the data measured by the sensor, and the like to the voice server 100 and the control server 500 via the adapters 300A, 300B, 300C, the router 400, and the Internet.
  • Devices such as the refrigerator 200A, the air conditioner 200B, and the washing machine 200C perform various operations based on voice data acquisition instructions, voice data, utterance instructions, and control commands from the voice server 100 and the control server 500.
  • The devices are not limited to the refrigerator 200A, the air conditioner 200B, and the washing machine 200C, and may include home appliances such as air purifiers, humidifiers, dehumidifiers, self-propelled cleaners, and lighting; AV (audio/visual) equipment such as TVs, hard disk recorders, and music players; and home equipment such as solar power generators, intercoms, and water heaters.
  • Each of the devices 200 transmits and receives data to and from the communication adapters 300A, 300B, and 300C via a communication interface such as a UART (Universal Asynchronous Receiver/Transmitter).
  • the communication adapters 300A, 300B, and 300C communicate with the device 200 via a communication interface such as UART.
  • the communication adapters 300A, 300B, and 300C communicate with the router 400 via a wireless LAN communication interface such as WiFi (registered trademark).
  • the communication adapters 300A, 300B, and 300C transmit data from the device 200 to the voice server 100 or the control server 500 via the router 400 or the Internet.
  • The communication adapters 300A, 300B, and 300C transmit data from the voice server 100 or the control server 500 to the device 200.
  • the communication adapters 300A, 300B, and 300C are also collectively referred to as the adapter 300.
  • The router 400 relays between the adapter 300 and the Internet.
  • the control server 500 is connected to the adapter 300, the voice server 100, the smartphones 600A, 600B, and 600C through the Internet or the router 400.
  • the control server 500 receives control commands and speech commands for home appliances from smartphones 600A, 600B, and 600C in which the home appliance control application is installed.
  • the control server 500 transmits an operation command to the device 200 via the adapter 300 based on the control command, or transmits a speech command to the voice server 100.
  • Each of smartphones 600A, 600B, and 600C and notebook computer 600D is held by the user.
  • Via the smartphones 600A, 600B, and 600C or the notebook computer 600D in which the home appliance control application is installed, the user controls the device 200, acquires information on the device 200, causes the device 200 to output sound, and exchanges voice messages with other users' terminals.
  • The user's terminal is not limited to the smartphones 600A, 600B, and 600C and the notebook computer 600D; other types of terminals that can communicate with the audio server 100 and the control server 500, such as tablets, personal computers, game machines, and electronic book terminals, may also be used.
  • these devices are also collectively referred to as a terminal 600.
  • the audio database 101 stores data related to audio output from the device 200.
  • the group database 501 stores data indicating a relationship between a user and a group related to a family, a room, a current position, an address, a user attribute, and the like.
  • The voice database 101 and the group database 501 can be referred to from both the voice server 100 and the control server 500.

<Overview of network system operation>
  • the voice server 100 transmits a voice data acquisition instruction to the adapter 300 at the first timing (1).
  • The acquisition instruction includes the date and time for outputting the audio, the conditions for outputting it, and the like.
  • the first timing may be, for example, the turn of the season, a predetermined time on a predetermined date in each month, or a predetermined time on a predetermined day of the week. It may be a predetermined time of every day, a date designated by the administrator, or a date designated by the user via the terminal 600. This first timing can be set and changed on the service side.
  • The adapter 300 downloads the designated audio data from the audio server 100 or another server. When the download is completed, the adapter 300 notifies the audio server 100 that acquisition of the audio data is complete (2).
  • the voice server 100 transmits a voice data utterance instruction to the adapter 300 at the second timing (3).
  • the utterance instruction includes designation of voice data to be output.
  • the utterance instruction includes a designated combination and order of audio data to be output.
  • The utterance instruction includes the date and time for outputting the voice, the conditions for outputting it, and the like.
  • the second timing may be, for example, the turn of the season, a predetermined time on a predetermined date in each month, or a predetermined time on a predetermined day of the week. It may be a predetermined time of every day, a date designated by the administrator, or a date designated by the user via the terminal 600.
  • the adapter 300 causes the device 200 to output sound based on the utterance instruction.
  • the adapter 300 notifies the voice server 100 that the utterance has been completed (4).
  • When the voice server 100 receives the notification that the utterance has been completed from the adapter 300, or at the third timing, the voice server 100 transmits the next utterance instruction to the adapter 300 (5).
  • The third timing may be, for example, the turn of the season, a predetermined time on a predetermined date of every month, a predetermined time on a predetermined day of the week, a predetermined time of every day, a date and time specified by the administrator, or a date and time specified by the user via the terminal 600.
  • the adapter 300 causes the device 200 to output sound based on the next utterance instruction.
  • the adapter 300 notifies the voice server 100 that the utterance has been completed (6).
  • As described above, the voice server 100 transmits the instruction for causing the adapter 300 to acquire voice data and the instruction for causing the device 200 to utter voice at different timings. Therefore, the voices output from the device 200 can be freely combined according to the utterance instruction without transmitting the voice data every time. That is, the server 100 can control the audio output of the device 200 more flexibly than before.
  • Since the voice data can be downloaded when the traffic volume of data transmission and reception is relatively small, an increase in the maximum network traffic volume can be suppressed.
  • Since the utterance instruction does not include voice data, its data amount is small, and even if it is transmitted over an always-on connection using WebSocket or the like, it is unlikely to disturb other data transmission. That is, the adapter 300 can be made to receive an utterance instruction immediately at a timing desired by the service manager or user. As a result, the device 200 can be made to output sound immediately at the desired timing.
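The two-phase exchange in steps (1) through (6) can be illustrated from the adapter's point of view: bulky voice data is downloaded once at the first timing, while the later utterance instruction names only voice IDs. This is a sketch under assumed names (`Adapter`, `voice_db`); the patent does not define a concrete API.

```python
class Adapter:
    def __init__(self):
        self.voice_db = {}  # voice_id -> cached audio bytes (phase 1 result)

    def on_acquisition_instruction(self, voice_id, audio_bytes):
        # Steps (1)-(2): download the designated voice data once,
        # then acknowledge completion to the voice server.
        self.voice_db[voice_id] = audio_bytes
        return "acquisition_complete"

    def on_utterance_instruction(self, voice_ids):
        # Steps (3)-(4): the lightweight instruction designates only voice IDs;
        # cached clips are combined in the designated order and spoken.
        speech = b"".join(self.voice_db[v] for v in voice_ids)
        return speech, "utterance_complete"
```

Because `on_utterance_instruction` carries identifiers rather than audio payloads, the same cached clips can be freely recombined across many utterances.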
  • When an error occurs in the device 200, a notification to that effect may first be transmitted to the voice server 100 and the control server 500. The voice server 100 then determines whether or not to transmit an utterance instruction to the adapter 300 based on the received notification. When an error voice is to be output, the voice server 100 transmits an error voice utterance instruction to the adapter 300. Since the data amount of the utterance instruction is small, the adapter 300 can immediately cause the device 200 to output a voice indicating that an error has occurred.
  • the control server 500 may transmit information indicating an error or the error sound itself to the terminal 600.
  • The control server 500 may post information indicating the error on an SNS (social networking service) page of a group to which the terminal 600 belongs.
  • the terminal 600 displays information indicating an error or outputs a sound indicating an error as shown in FIG.
  • the adapter 300 may cause the device 200 to output a previously stored error voice even if there is no utterance instruction from the voice server 100.
  • the adapter 300 transmits information indicating an error or the error sound itself to the terminal 600 via the control server 500.
  • the terminal 600 displays information indicating an error or outputs a sound indicating an error.
  • FIG. 2 is a block diagram showing a hardware configuration of server 100 according to the present embodiment.
  • the voice server 100 includes a processor 110, a memory 120, various lights 130, various switches 140, and a communication interface 160 as main components.
  • the processor 110 controls each unit of the server 100 by executing a program stored in the memory 120 or an external storage medium. That is, the processor 110 executes various processes described later by executing a program stored in the memory 120.
  • the memory 120 is realized by various types of RAM (Random Access Memory), various types of ROM (Read-Only Memory), flash memory, and the like.
  • The memory 120 is also realized by a storage medium used via an interface, such as a USB (Universal Serial Bus) (registered trademark) memory, a CD (Compact Disc), a DVD (Digital Versatile Disc), a memory card, a hard disk, an IC (Integrated Circuit) card, an optical card, a mask ROM, an EPROM (Erasable Programmable Read-Only Memory), or an EEPROM (Electronically Erasable Programmable Read-Only Memory).
  • the memory 120 stores a program executed by the processor 110, data generated by execution of the program by the processor 110, data input from the switch 140, data received from the device 200, the adapter 300, the control server 500, and the terminal 600.
  • the memory 120 may store the database 101 shown in FIG. Alternatively, the database 101 may be stored in the control server 500. Alternatively, it may be stored in another device so that it can be referred to from the voice server 100 and the control server 500.
  • the database 101 includes a voice data acquisition instruction database 121 for sequentially storing voice data acquisition instructions and an utterance instruction database 122 for sequentially storing utterance instructions.
  • FIG. 3 is an image diagram showing a data structure of the audio data acquisition instruction database 121 according to the present embodiment.
  • The voice data acquisition instruction database 121 stores, for each voice data acquisition instruction, the correspondence among a voice data acquisition instruction ID, an adapter ID, a voice ID, a voice data storage address, a start date, an end date, a start time, an end time, a transmission flag, and a completion flag.
  • the device ID of the device 200 corresponding to the adapter 300 may be used instead of the adapter ID.
  • the voice data acquisition instruction database 121 may store the utterance condition and the priority as part of the correspondence relationship for each voice data acquisition instruction.
  • The processor 110 transmits data including the audio data acquisition instruction ID, adapter ID, audio ID, audio data storage address, start date, end date, start time, and end time to the adapter 300 via the communication interface 160 as an audio data acquisition instruction. Note that the processor 110 may also transmit the utterance condition and priority to the adapter 300 as part of the audio data acquisition instruction.
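One row of the acquisition instruction database, with its transmission and completion flags, could be represented as below. The field names are illustrative assumptions mirroring the columns just listed; the patent does not prescribe a schema.

```python
from dataclasses import dataclass


@dataclass
class AcquisitionInstruction:
    instruction_id: str
    adapter_id: str
    voice_id: str
    storage_address: str   # where the adapter should download the audio from
    start_date: str
    end_date: str
    start_time: str
    end_time: str
    sent: bool = False       # transmission flag: OFF until sent to the adapter
    completed: bool = False  # completion flag: OFF until download acknowledged


def mark_sent(rec: AcquisitionInstruction) -> None:
    # flipped ON when the instruction is transmitted to the adapter 300
    rec.sent = True


def mark_completed(rec: AcquisitionInstruction) -> None:
    # flipped ON when the adapter reports that the download finished
    rec.completed = True
```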
  • The information included in the voice data acquisition instruction, or in the downloaded voice data itself, may overlap with the information included in the utterance instruction.
  • it is preferable that information included in the utterance instruction is given priority.
  • the processor 310 of the adapter 300 and the processor 210 of the device 200 follow the start date, the end date, the start time, the end time, the utterance condition, and the priority included in the utterance instruction.
  • Otherwise, the processor 310 of the adapter 300 and the processor 210 of the device 200 follow the start date, end date, start time, end time, utterance condition, and priority included in the voice data acquisition instruction or in the voice data itself.
  • the voice data acquisition instruction ID is data for specifying a command for instructing the adapter 300 to acquire voice data.
  • the adapter ID is data for specifying the adapter 300 that is a destination for instructing acquisition of audio data.
  • the voice ID is data for specifying the voice data that the device 200 should acquire.
  • the audio data storage address is data for specifying the storage destination of the audio data.
  • the transmission flag is switched to “ON” when the processor 110 sends a corresponding voice data acquisition instruction to the adapter 300. That is, if the corresponding voice data acquisition instruction has not yet been sent to the adapter, the transmission flag is “OFF”.
  • the completion flag is switched to “ON” when the processor 110 receives a notification from the adapter 300 that the downloading of the audio data has been completed. That is, when the adapter 300 has not yet downloaded the audio data, the completion flag is “OFF”.
  • FIG. 4 is an image diagram showing a data structure of the utterance instruction database 122 according to the present embodiment.
  • The utterance instruction database 122 stores, for each utterance instruction, the correspondence among an utterance instruction ID, an adapter ID, a voice ID, an utterance condition, an utterance priority, a start date, an end date, a start time, an end time, a transmission flag, and a completion flag.
  • The processor 110 transmits data including the utterance instruction ID, adapter ID, voice ID, utterance condition, utterance priority, start date, end date, start time, and end time to the adapter 300 via the communication interface 160 as an utterance instruction.
  • the utterance instruction ID is data for specifying an instruction for instructing the adapter 300 to utter.
  • the adapter ID is data for specifying the adapter 300 that is the destination of the utterance.
  • The audio ID is data for specifying the audio data to be output by the device 200.
  • the utterance condition is a condition for outputting sound related to data acquired by the sensor of the device 200.
  • The priority is data indicating whether the voice of this utterance instruction should be output before other voice data.
  • the start date is a date on which a period for speaking voice data is started.
  • the end date is the date when the period for speaking the voice data ends.
  • The start time is the time at which the time zone for uttering the voice data starts.
  • The end time is the time at which the time zone for uttering the voice data ends.
  • the transmission flag is switched to “ON” when the processor 110 sends a corresponding utterance instruction to the adapter 300. That is, when the corresponding utterance instruction has not yet been sent to the adapter 300, the transmission flag is “OFF”.
  • the completion flag is switched to “ON” when the processor 110 receives a notification that the utterance has been completed from the device 200. That is, when the device 200 has not yet completed the utterance, the completion flag is “OFF”.
  • Note that the utterance instruction transmitted from the server 100 to the adapter 300 need not include the start date, end date, start time, and end time. That is, based on the date and time at which the processor 110 of the server 100 transmits the utterance instruction, the server may check whether the date is on or after the start date and on or before the end date, and whether the time is at or after the start time and at or before the end time, and transmit to the adapter 300 only the utterance instructions satisfying these conditions, that is, the utterance instruction ID, the adapter ID, and the voice ID. More specifically, the server 100 may transmit the utterance instruction when the period starts, and transmit a notification for canceling the utterance instruction when the period ends.
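The server-side alternative just described, in which the server tracks the validity period itself and sends or cancels the instruction at the period boundaries, can be sketched as follows. All names (`tick`, the `state` field) are assumptions for illustration.

```python
from datetime import datetime


def tick(now, instr, send):
    """Advance one utterance instruction's lifecycle at server time `now`.

    instr: dict with 'id', 'start'/'end' datetimes, and a 'state' field.
    send:  callable standing in for transmission to the adapter 300.
    """
    if instr["state"] == "idle" and instr["start"] <= now <= instr["end"]:
        # the period has started: transmit the (date-free) utterance instruction
        send({"type": "utterance", "id": instr["id"]})
        instr["state"] = "active"
    elif instr["state"] == "active" and now > instr["end"]:
        # the period has ended: transmit a cancellation notification
        send({"type": "cancel", "id": instr["id"]})
        instr["state"] = "done"
```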
  • the utterance condition and priority are included in the utterance instruction, but at least one of the utterance condition and priority may be included in the voice data acquisition instruction.
  • the voice data acquisition instruction database 121 and the utterance instruction database 122 may store device IDs instead of adapter IDs.
  • the voice server 100 needs to store a correspondence relationship between the device ID and the identification information or address of the adapter 300 for transmitting data to the device 200 corresponding to the device ID.
  • the light 130 transmits various states of the server 100 to the outside by being turned on, blinking, and turned off by a signal from the processor 110.
  • the switch 140 receives an instruction from the administrator and inputs the instruction to the processor 110.
  • the communication interface 160 transmits data from the processor 110 to the adapter 300, the control server 500, and the terminal 600 via the Internet, a carrier network, a router, or the like.
  • the communication interface 160 receives data from the adapter 300, the control server 500, and the terminal 600 via the Internet, a carrier network, a router, etc., and passes it to the processor 110.
  • FIG. 5 is a block diagram showing a hardware configuration of the device 200 according to the present embodiment.
  • The device 200 includes a processor 210, a memory 220, various lights 230, various switches 240, a communication interface 260, a speaker 280, and a device driving unit 290 as main components.
  • the processor 210 controls each unit of the device 200 by executing a program stored in the memory 220 or an external storage medium. That is, the processor 210 executes various processes to be described later by executing a program stored in the memory 220.
  • the memory 220 is realized by various RAMs, various ROMs, flash memories, and the like.
  • the memory 220 stores a program executed by the processor 210, data generated by the execution of the program by the processor 210, input data, data received from the server 100, and the like.
  • the light 230 communicates various states of the device 200 to the outside by turning on / flashing / turning off in response to a signal from the processor 210.
  • the switch 240 receives a command from the user and inputs the command to the processor 210.
  • the communication interface 260 transmits data from the processor 210 to the adapter 300, for example, various states detected by the device and instructions received by the device via a remote control from the user.
  • The data is transmitted to the voice server 100 and the control server 500 via the adapter 300, the router 400, the Internet, and the like.
  • The communication interface 260 receives data from the voice server 100, data from the control server 500, and control commands, voice data, utterance commands, and browsing commands from the terminal 600 via the Internet, the router 400, the adapter 300, and the like, and delivers them to the processor 210.
  • the speaker 280 outputs various sounds such as sound and music based on the sound signal from the processor 210.
  • the device driving unit 290 plays a main role of the device 200 by controlling a motor, an actuator, a sensor, and the like based on a control command from the processor 210.
  • FIG. 6 is a block diagram showing a hardware configuration of adapter 300 according to the present embodiment.
  • The adapter 300 includes, as main components, a processor 310, a memory 320, various lights 330, various switches 340, a first communication interface 361, and a second communication interface 362.
  • the processor 310 controls each unit of the adapter 300 by executing a program stored in the memory 320 or an external storage medium. That is, the processor 310 executes various processes to be described later by executing a program stored in the memory 320.
  • the memory 320 is realized by various RAMs, various ROMs, flash memories, and the like.
  • the memory 320 stores a program executed by the processor 310, data generated by the execution of the program by the processor 310, input data, data received from the voice server 100 and the control server 500, and the like.
  • the memory 320 stores a voice database 321 and utterance instruction data 322.
  • FIG. 7 is an image diagram showing a data structure of the voice database 321 according to the present embodiment.
  • the voice database 321 stores the correspondence between the voice ID and the voice data for each voice data.
• the voice database 321 may store, instead of the voice data itself, an address indicating the storage location of the voice data.
• the voice database 321 and the voice data itself may be stored in the device 200.
• the utterance instruction data 322 may also be stored in the device 200.
• in that case, the processor 210 of the device 200 may overwrite or delete these data based on an instruction from the voice server 100 via the adapter 300.
• the first three digits of the voice ID specify the type of target device, the next three digits specify the installation location and area of the device, and the second digit from the end of the voice ID specifies the type of voice data.
• for example, when the second digit from the end is “5”, the voice data is a message about the weather; when it is “7”, a message about food; and when it is “9”, a message indicating an error.
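The digit layout described above can be illustrated with a small decoder. This is a sketch only: the field names (`device_type`, `location`, `message_type`) and the eight-digit example IDs are assumptions for illustration, not taken from the embodiment.

```python
def decode_voice_id(voice_id: str) -> dict:
    """Decode a voice ID using the digit layout described above:
    first three digits = device type, next three digits = installation
    location/area, second digit from the end = type of voice data."""
    # "5" = weather, "7" = food, "9" = error, per the description.
    message_types = {"5": "weather", "7": "food", "9": "error"}
    return {
        "device_type": voice_id[0:3],
        "location": voice_id[3:6],
        "message_type": message_types.get(voice_id[-2], "other"),
    }

# Example: an ID whose second-to-last digit is "9" decodes to an error message.
info = decode_voice_id("00100195")
```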
  • FIG. 8 is an image diagram showing a data structure of the utterance instruction data 322 according to the present embodiment.
  • utterance instruction data 322 is a part of the utterance instruction data received from voice server 100. That is, the utterance instruction data 322 includes an utterance instruction ID, a voice ID, an utterance condition, a priority, a start date, an end date, a start time, and an end time.
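The fields listed above could be modeled as a plain record. The Python field names, types, and example values below are illustrative assumptions, not part of the embodiment:

```python
from dataclasses import dataclass

@dataclass
class UtteranceInstruction:
    """One row of the utterance instruction data 322 (field names assumed)."""
    utterance_instruction_id: str
    voice_id: str
    utterance_condition: str   # e.g. "door_open" (hypothetical condition name)
    priority: int
    start_date: str            # "YYYY-MM-DD"
    end_date: str
    start_time: str            # "HH:MM"
    end_time: str

inst = UtteranceInstruction("U001", "00100195", "door_open", 1,
                            "2015-05-01", "2015-05-31", "07:00", "22:00")
```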
  • the light 330 transmits various states of the adapter 300 to the outside by being turned on, blinking, and turned off by a signal from the processor 310.
  • the switch 340 receives a command from the user and inputs the command to the processor 310.
  • the first communication interface 361 is realized by UART or the like, and transmits data from the processor 310 to the device 200 and transfers data from the device 200 to the processor 310.
  • the processor 310 transmits voice data to the device 200 via the first communication interface 361 based on the utterance instruction, thereby causing the device 200 to output sound.
  • the processor 310 may only transmit an utterance instruction to the device 200.
  • the processor 210 of the device 200 outputs the audio data stored in the memory 220 to the speaker 280 based on the utterance instruction.
• the second communication interface 362 is realized by a WiFi (registered trademark) antenna or the like; it transmits data from the processor 310 to the voice server 100 or the control server 500 via the router 400 and the Internet, and transfers data from the voice server 100 or the control server 500 to the processor 310.
  • the processor 310 receives an audio data acquisition instruction, an utterance instruction, and audio data itself from the audio server 100 via the second communication interface 362.
  • the processor 310 transmits, via the second communication interface 362, a notification that the acquisition of the voice data has been completed and a notification that the utterance has been completed to the voice server 100.
  • FIG. 9 is a block diagram showing a hardware configuration of the control server 500 according to the present embodiment.
  • control server 500 includes a processor 510, a memory 520, various lights 530, various switches 540, and a communication interface 560 as main components.
  • the processor 510 controls each unit of the control server 500 by executing a program stored in the memory 520 or an external storage medium. That is, the processor 510 executes various processes described later by executing a program stored in the memory 520.
  • the memory 520 stores a program executed by the processor 510, data generated by execution of the program by the processor 510, input data, data received from the device 200, the adapter 300, the voice server 100, and the terminal 600.
  • the memory 520 may store the database 501 shown in FIG.
• the database 501 includes a group database 521 indicating the group to which each device belongs, where a group corresponds to, for example, a family, a room, a current position, an address, or a user attribute.
  • FIG. 10 is an image diagram showing a data structure of the group database 521 according to the present embodiment.
  • group database 521 according to the present embodiment stores correspondence relationships between adapter IDs, family IDs as group IDs, and room IDs as group IDs.
  • the adapter ID includes an ID for specifying the adapter 300 and an ID for specifying the terminal 600.
• based on an instruction including a device ID from the voice server 100, the processor 510 returns, via the communication interface 560, the ID of the group to which the device 200 having that device ID belongs, the IDs of the other devices belonging to the group, and the like. Furthermore, the processor 510 receives information indicating the state of the device 200 via the communication interface 560 and transmits the information to the terminal 600. Conversely, the processor 510 receives a control command for the device 200 from the terminal 600 and transmits the control command to the device 200 via the communication interface 560.
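The lookup the processor 510 performs against the group database 521 can be sketched as a mapping from adapter ID to group IDs, mirroring FIG. 10. Every ID below is made up for illustration:

```python
# Hypothetical contents of the group database 521:
# adapter ID -> {family ID, room ID} (both are group IDs).
GROUP_DB = {
    "adapter-A": {"family": "F01", "room": "R01"},
    "adapter-B": {"family": "F01", "room": "R02"},
    "adapter-C": {"family": "F02", "room": "R01"},
}

def peers_in_family(adapter_id: str) -> list[str]:
    """Return the other adapters belonging to the same family group,
    as the control server does when answering a device-ID query."""
    family = GROUP_DB[adapter_id]["family"]
    return [a for a, g in GROUP_DB.items()
            if g["family"] == family and a != adapter_id]
```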
  • the light 530 transmits various states of the control server 500 to the outside by being turned on, blinking, or turned off by a signal from the processor 510.
  • the switch 540 receives an instruction from the administrator and inputs the instruction to the processor 510.
  • the communication interface 560 transmits data from the processor 510 to the adapter 300, the voice server 100, and the terminal 600 via the Internet, a carrier network, the router 400, or the like.
  • the communication interface 560 receives data from the adapter 300, the voice server 100, and the terminal 600 via the Internet, a carrier network, the router 400, etc., and passes it to the processor 510.
  • FIG. 11 is a block diagram showing a hardware configuration of terminal 600 according to the present embodiment.
  • terminal 600 includes a processor 610, a memory 620, a touch panel 650 (display 630 and pointing device 640), a communication interface 660, and a speaker 680 as main components.
  • the processor 610 controls each unit of the terminal 600 by executing a program stored in the memory 620 or an external storage medium. That is, the processor 610 executes various processes described later by executing a program stored in the memory 620.
  • the memory 620 is realized by various RAMs, various ROMs, flash memories, and the like.
• the memory 620 is also realized by a storage medium used via various interfaces, such as a memory card (for example, an SD card or a micro SD card), a USB (registered trademark) memory, a CD, a DVD, a hard disk, an IC card, an optical card, a mask ROM, an EPROM, or an EEPROM.
• the memory 620 stores programs executed by the processor 610, data generated by the execution of the programs by the processor 610, data input via the pointing device 640, data received from the voice server 100 and the control server 500, and the like.
  • the memory 620 stores a device control application.
• the processor 610, according to the device control application in the memory 620, transmits a control command for the device 200 to the control server 500, transmits an utterance command for causing the device 200 to output a voice to the voice server 100, and receives information on the device 200.
  • Display 630 outputs characters and images based on signals from processor 610.
  • the pointing device 640 receives a command from the user and inputs the command to the processor 610.
  • terminal 600 includes touch panel 650 in which display 630 and pointing device 640 are combined.
  • the processor 610 causes the display 630 to display a screen for controlling the device 200, an SNS family page, and the like.
  • the communication interface 660 is realized by an antenna or a connector.
  • the communication interface 660 exchanges data with other devices by wired communication or wireless communication.
  • the processor 610 transmits text data, image data, and the like to other devices such as the voice server 100 and the control server 500 via the communication interface 660.
  • the processor 610 transmits a control command for the device 200 to the voice server 100 and the control server 500.
  • the processor 610 receives programs, control commands, image data, text data, and the like from other devices such as the voice server 100 and the control server 500 via the communication interface 660.
• the speaker 680 outputs various sounds, such as a ringtone, music, and the audio of a moving image, based on the sound signal from the processor 610.
<Information processing in network system 1>
  • FIG. 12 is a sequence diagram showing information processing of the network system 1 according to the present embodiment.
  • the processor 110 of the voice server 100 receives voice data to be output by the refrigerator 200A or the air conditioner 200B (step S102).
  • the processor 110 adds an audio data acquisition instruction to the audio data acquisition instruction database 121 based on the received audio data.
  • the processor 110 of the voice server 100 refers to the voice data acquisition instruction database 121 and transmits a voice data acquisition instruction to the adapter 300A of the refrigerator 200A via the communication interface 160 (step S112). At this time, the processor 110 turns on a transmission flag corresponding to the voice data acquisition instruction in the voice data acquisition instruction database 121.
  • the processor 310 of the adapter 300A requests the voice data from the storage destination of the voice data based on the voice data acquisition instruction via the second communication interface 362 (step S114).
  • the processor 310 of the adapter 300A downloads audio data from the audio server 100 via the second communication interface 362 (step S116).
  • the storage location of the audio data may be a memory of a communication device other than the audio server 100.
  • the processor 310 of the adapter 300A notifies the audio server 100 that the download of the audio data has been completed via the second communication interface 362 (step S118).
  • the processor 110 of the voice server 100 turns on the completion flag corresponding to the voice data acquisition instruction in the voice data acquisition instruction database 121.
  • the processor 110 of the voice server 100 transmits a voice data acquisition instruction to the adapter 300B of the air conditioner 200B via the communication interface 160 (step S122). At this time, the processor 110 turns on a transmission flag corresponding to the voice data acquisition instruction in the voice data acquisition instruction database 121.
  • the processor 310 of the adapter 300B requests voice data from the voice data storage destination via the second communication interface 362 based on the voice data acquisition instruction (step S124).
  • the processor 310 of the adapter 300B downloads the audio data via the second communication interface 362 (step S126).
  • the processor 310 of the adapter 300B notifies the audio server 100 that the download of the audio data has been completed via the second communication interface 362 (step S128).
  • the processor 110 of the voice server 100 turns on the completion flag corresponding to the voice data acquisition instruction in the voice data acquisition instruction database 121.
  • the processor 110 of the voice server 100 receives an utterance command for causing the refrigerator 200A to output voice (step S130).
  • the processor 110 adds an utterance instruction record to the end of the utterance instruction database 122 based on the received utterance instruction.
  • processor 110 of voice server 100 transmits an utterance instruction to adapter 300A of refrigerator 200A via communication interface 160 (step S132). At this time, the processor 110 turns on the transmission flag corresponding to the utterance instruction in the utterance instruction database 122.
  • the processor 310 of the adapter 300A receives an utterance instruction via the second communication interface 362.
  • the processor 310 determines whether the current time, temperature, and user operation satisfy the utterance instruction condition. When the condition is satisfied, the processor 310 transmits audio data to the refrigerator 200A via the first communication interface 361, thereby causing the refrigerator 200A to output audio (step S134).
  • the processor 110 of the voice server 100 can accept a new utterance command for causing the refrigerator 200A to output voice (step S140).
  • the processor 110 adds a new utterance instruction record to the utterance instruction database 122 based on the received utterance instruction.
  • the processor 310 transmits an utterance completion notification to the voice server 100 via the second communication interface 362 (step S136).
• Upon receiving the utterance completion notification, the processor 110 of the voice server 100 turns on the completion flag corresponding to the utterance instruction in the utterance instruction database 122.
• the processor 110 reads the next utterance instruction corresponding to the refrigerator 200A with reference to the utterance instruction database 122.
  • the processor 110 of the voice server 100 transmits the next utterance instruction to the adapter 300A of the refrigerator 200A via the communication interface 160 (step S142). At this time, the processor 110 turns on the transmission flag corresponding to the utterance instruction in the utterance instruction database 122.
  • the processor 110 of the voice server 100 receives a new utterance command for causing the refrigerator 200A to output voice (step S150). That is, the processor 110 adds a new utterance instruction record to the utterance instruction database 122 based on the received utterance instruction.
  • the processor 310 of the adapter 300A receives an utterance instruction via the second communication interface 362.
• the previously received utterance instruction is thereby overwritten, that is, deleted.
  • the processor 310 determines whether the current time, temperature, and user operation satisfy the utterance instruction condition.
• if the expiration date of the utterance instruction has expired without the condition being satisfied (step S144), the processor 310 transmits a notification that the expiration date has expired to the voice server 100 via the second communication interface 362 (step S146).
• the processor 110 of the voice server 100 transmits a notification for canceling the transmitted utterance instruction to the refrigerator 200A via the communication interface 160 (step S148). Then, the processor 110 of the voice server 100 turns on the completion flag corresponding to the utterance instruction in the utterance instruction database 122. The processor 110 reads the next utterance instruction corresponding to the refrigerator 200A with reference to the utterance instruction database 122. The processor 110 of the voice server 100 transmits the next utterance instruction to the adapter 300A of the refrigerator 200A via the communication interface 160 (step S152). At this time, the processor 110 turns on the transmission flag corresponding to the utterance instruction in the utterance instruction database 122.
  • the processor 310 of the adapter 300A receives an utterance instruction via the second communication interface 362.
  • the processor 310 determines whether the current time, temperature, and user operation satisfy the utterance instruction condition. When the condition is satisfied, the processor 310 transmits sound data to the refrigerator 200A via the first communication interface 361, thereby causing the refrigerator 200A to output sound (step S154).
  • the processor 310 transmits an utterance completion notification to the voice server 100 via the second communication interface 362 (step S156).
• Upon receiving the utterance completion notification, the processor 110 of the voice server 100 turns on the completion flag corresponding to the utterance instruction in the utterance instruction database 122.
• the processor 110 reads the next utterance instruction corresponding to the refrigerator 200A with reference to the utterance instruction database 122.
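The adapter-side behavior in the sequence above — a newly received utterance instruction overwriting the previous one, the condition check, and the completion or expiry notification — can be sketched as a small state machine. The class, method, and key names are illustrative assumptions, and a single integer clock plus a boolean stands in for the current time, temperature, and user-operation checks:

```python
class AdapterSketch:
    """Hedged sketch of the adapter's utterance handling (names assumed)."""

    def __init__(self):
        self.instruction = None

    def receive(self, instruction: dict) -> None:
        # A newly received utterance instruction overwrites (deletes)
        # the previously held one.
        self.instruction = instruction

    def tick(self, now: int, condition_met: bool) -> str:
        """One evaluation pass; returns what the adapter would report."""
        if self.instruction is None:
            return "idle"
        if condition_met:
            self.instruction = None
            return "utterance_completed"   # speaker output + completion notice
        if now > self.instruction["expires_at"]:
            self.instruction = None
            return "expired"               # expiry notice to the voice server
        return "waiting"

adapter = AdapterSketch()
```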
  • the processor 110 of the voice server 100 accepts an utterance command for causing the air conditioner 200B to output voice (step S160). That is, the processor 110 adds an utterance instruction to the utterance instruction database 122 based on the received utterance instruction.
  • the processor 110 of the voice server 100 also accepts the next utterance command for causing the air conditioner 200B to output voice (step S170). That is, the processor 110 also adds the next utterance instruction to the utterance instruction database 122 based on the received utterance instruction.
  • processor 110 of voice server 100 transmits the utterance instruction to adapter 300B of air conditioner 200B via communication interface 160 (step S162). At this time, the processor 110 turns on the transmission flag corresponding to the utterance instruction in the utterance instruction database 122.
  • the processor 310 of the adapter 300B receives the utterance instruction via the second communication interface 362.
  • the processor 310 determines whether the current time, temperature, and user operation satisfy the utterance instruction condition. When the condition is satisfied, the processor 310 transmits sound data to the air conditioner 200B via the first communication interface 361, thereby causing the air conditioner 200B to output sound (step S164).
  • the processor 310 transmits an utterance completion notification to the voice server 100 via the second communication interface 362 (step S166).
• Upon receiving the utterance completion notification, the processor 110 of the voice server 100 turns on the completion flag corresponding to the utterance instruction in the utterance instruction database 122.
  • the processor 110 refers to the utterance instruction database 122 and reads the next utterance instruction corresponding to the air conditioner 200B.
  • the processor 110 of the voice server 100 transmits the next utterance instruction to the adapter 300B of the air conditioner 200B via the communication interface 160 (step S172). At this time, the processor 110 turns on the transmission flag corresponding to the utterance instruction in the utterance instruction database 122.
  • the processor 310 of the adapter 300B receives the utterance instruction via the second communication interface 362.
• the processor 310 determines whether the current time, temperature, and user operation satisfy the condition of the utterance instruction. If the expiration date of the utterance instruction has expired without the condition being satisfied (step S174), the processor 310 transmits a notification that the expiration date has expired to the voice server 100 via the second communication interface 362 (step S176).
  • the processor 110 of the voice server 100 transmits a notification for canceling the transmitted speech instruction to the air conditioner 200B via the communication interface 160 (step S178). Then, the processor 110 of the voice server 100 turns on the completion flag corresponding to the utterance instruction in the utterance instruction database 122. The processor 110 refers to the utterance instruction database 122 and reads the next utterance instruction corresponding to the air conditioner 200B. Thereafter, the same processing as described above is repeated.
• the processor 110 of the voice server 100 may also transmit, to the terminal 600 via the control server 500, the audio data to be output by the device 200.
  • the control server 500 transmits voice text data or voice data to the terminal 600.
  • the terminal 600 displays a message or outputs a voice.
  • the control server 500 may post the message on the SNS page of the group to which the terminal 600 belongs.
• the processor 110 of the voice server 100 may output only information related to an error of the device 200 to the device 200 and the terminal 600.
<Information processing in server 100>
  • FIG. 13 is a flowchart showing processing of the server 100 for causing the adapter to download new audio data according to the present embodiment.
  • the processor 110 registers the voice data acquisition instruction newly accepted this time in the voice data acquisition instruction database 121 (step S1102).
  • the processor 110 extracts all voice data acquisition instructions including the adapter ID of the voice data acquisition instruction newly received this time from the voice data acquisition instructions in the voice data acquisition instruction database 121 (step S1104).
• the processor 110 determines whether or not, among the extracted voice data acquisition instructions, there is an instruction whose completion flag is not “ON” (step S1106). If there is such an instruction (YES in step S1106), the processor 110 ends this process.
• if there is no instruction whose completion flag is not “ON” among the extracted voice data acquisition instructions (NO in step S1106), the processor 110 transmits the newly accepted voice data acquisition instruction to the adapter 300 via the communication interface 160 (step S1108).
  • the processor 110 sets the transmission flag of the voice data acquisition instruction transmitted this time in the voice data acquisition instruction database 121 to “ON”. The processor 110 ends this process.
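The flow of FIG. 13 amounts to a per-adapter queue: register the new voice data acquisition instruction, and send it immediately only when no earlier instruction for the same adapter is still incomplete. A minimal sketch, where records are plain dictionaries with illustrative keys, and the check in step S1106 is assumed to apply to instructions other than the newly registered one:

```python
def handle_new_download_instruction(db: list, new: dict) -> bool:
    """Register `new` (step S1102); send it now only if nothing earlier
    for the same adapter is incomplete (steps S1104-S1108).
    Returns True if the instruction was transmitted immediately."""
    db.append(new)                                            # step S1102
    same_adapter = [r for r in db
                    if r["adapter_id"] == new["adapter_id"]]  # step S1104
    pending = [r for r in same_adapter
               if not r["completed"] and r is not new]        # step S1106
    if pending:
        return False        # an earlier download is still in flight; hold
    new["sent"] = True      # step S1108: transmit, set the transmission flag
    return True
```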
• FIG. 14 is a flowchart showing the processing of the server 100 upon receiving a notification that the download of audio data has been completed, according to the present embodiment.
  • the processor 110 turns on the completion flag of the voice data acquisition instruction corresponding to the notification received this time in the voice data acquisition instruction database 121 (step S1202).
  • the processor 110 extracts an audio data acquisition instruction including the device ID of the audio data acquisition instruction corresponding to the notification received this time from the audio data acquisition instruction of the audio data acquisition instruction database 121 (step S1204).
• the processor 110 determines whether or not, among the extracted audio data acquisition instructions, there is an instruction whose transmission flag is not “ON” (step S1206). If there is such an instruction (YES in step S1206), the processor 110 transmits that voice data acquisition instruction to the adapter 300 via the communication interface 160 (step S1208).
  • the processor 110 sets the transmission flag of the voice data acquisition instruction transmitted this time in the voice data acquisition instruction database 121 to “ON”. The processor 110 ends this process.
• if there is no instruction whose transmission flag is not “ON” among the extracted voice data acquisition instructions (NO in step S1206), the processor 110 ends this process.
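The FIG. 14 flow completes the queue: on a download-completed notice, mark the matching acquisition instruction done, then dispatch the next unsent instruction queued for the same adapter, if any. Again a sketch with illustrative dictionary keys:

```python
def handle_download_completed(db: list, completed_id: str):
    """Mark the notified instruction complete (step S1202) and send the
    next unsent instruction for the same adapter (steps S1204-S1208).
    Returns the record that was dispatched, or None."""
    done = next(r for r in db if r["id"] == completed_id)
    done["completed"] = True                                  # step S1202
    for r in db:                                              # steps S1204-S1206
        if r["adapter_id"] == done["adapter_id"] and not r["sent"]:
            r["sent"] = True                                  # step S1208
            return r
    return None
```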
  • FIG. 15 is a flowchart showing processing of the server 100 when a new utterance command according to the present embodiment is received.
  • the processor 110 registers the newly received utterance instruction this time in the utterance instruction database 122 (step S1302).
  • the processor 110 extracts an utterance instruction including the device ID of the utterance instruction newly received this time from the utterance instructions in the utterance instruction database 122 (step S1304).
• the processor 110 determines whether or not, among the extracted utterance instructions, there is an instruction whose transmission flag is “ON” and whose completion flag is not “ON” (step S1306). If there is such an instruction (YES in step S1306), the processor 110 ends this process.
• if, among the extracted utterance instructions, there is no instruction whose transmission flag is “ON” and whose completion flag is not “ON” (NO in step S1306), the processor 110 transmits the newly received utterance instruction to the adapter 300 via the communication interface 160 (step S1308). At this time, the processor 110 sets the transmission flag of the transmitted utterance instruction in the utterance instruction database 122 to “ON”. The processor 110 ends this process.
• the case where the processor 110 of the voice server 100 receives, from one of the adapters 300, a notification that the utterance processing has been completed or a notification that the condition specified within the valid period was not satisfied will be described below.
  • the processor 110 sets the utterance instruction completion flag corresponding to the notification received this time in the utterance instruction database 122 to “ON” (step S1402).
  • the processor 110 extracts an utterance instruction including the device ID of the utterance instruction corresponding to the notification received this time from the utterance instructions in the utterance instruction database 122 (step S1404).
• the processor 110 determines whether or not the extracted utterance instructions include an utterance instruction whose transmission flag is not “ON” (step S1406). If there is an utterance instruction whose transmission flag is not “ON” among the extracted utterance instructions (YES in step S1406), the processor 110 transmits that utterance instruction to the adapter 300 via the communication interface 160 (step S1408). At this time, the processor 110 sets the transmission flag of the transmitted utterance instruction in the utterance instruction database 122 to “ON”. The processor 110 ends this process.
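The FIG. 15 flow serializes utterances per device: a newly received utterance instruction is transmitted immediately only if no earlier instruction for that device is currently transmitted but not yet completed. A sketch with illustrative dictionary keys:

```python
def handle_new_utterance_command(db: list, new: dict) -> bool:
    """Register `new` (step S1302); send it only if no instruction for
    the same device is in flight, i.e. transmission flag ON and
    completion flag not ON (steps S1304-S1308)."""
    db.append(new)                                     # step S1302
    in_flight = [r for r in db                         # steps S1304-S1306
                 if r["device_id"] == new["device_id"]
                 and r["sent"] and not r["completed"]]
    if in_flight:
        return False     # the adapter still holds an utterance instruction
    new["sent"] = True   # step S1308: transmit, set the transmission flag
    return True
```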
  • the voice server 100 transmits an utterance instruction separately for each adapter 300.
  • the voice server 100 mediates the utterances of the plurality of devices 200 while transmitting the same utterance instruction to the plurality of adapters 300 belonging to the same group.
  • the utterance instruction database according to the present embodiment, the information processing of the network system 1, and the information processing in the server 100 will be described.
  • FIG. 17 is an image diagram showing a data structure of the utterance instruction database 123 according to the present embodiment.
• the utterance instruction database 123 stores the correspondence among the utterance instruction ID, the group ID, the voice ID, the utterance condition, the priority, the start date, the end date, the start time, the end time, the transmission flag, and the completion flag. That is, the utterance instruction database 123 according to the present embodiment differs from that of the first embodiment in that the group ID is included in the correspondence relationship. Note that the definition of each data item is the same as in the first embodiment, and thus the description thereof will not be repeated.
<Information processing in network system 1>
  • FIG. 18 is a sequence diagram showing information processing of the network system 1 according to the present embodiment. Further, the processing of step S212 to step S228 is the same as the processing of step S112 to step S128 in FIG. 12 of the first embodiment, and therefore description thereof will not be repeated here.
  • the processor 110 of the voice server 100 receives an utterance command for causing the adapter 300 belonging to the group, for example, the adapter 300A of the refrigerator 200A and the adapter 300B of the air conditioner 200B to output voice (step S230).
• the case where the processor 110 receives a plurality of types of utterance commands will be described here.
  • the processor 110 adds a plurality of utterance instructions to the utterance instruction database 123 based on the received utterance instruction (step S231).
  • processor 110 of voice server 100 transmits the utterance instruction to adapter 300A of refrigerator 200A via communication interface 160 (step S232). Similarly, referring to utterance instruction database 123, processor 110 of voice server 100 transmits an utterance instruction to adapter 300B of air conditioner 200B via communication interface 160 (step S233). At this time, the processor 110 turns on the transmission flag corresponding to the utterance instruction in the utterance instruction database 123.
  • the processor 310 of the adapter 300A receives an utterance instruction via the second communication interface 362. The processor 310 determines whether the current time, temperature, and user operation satisfy the utterance instruction condition. Similarly, the processor 310 of the adapter 300B also receives the utterance instruction via the second communication interface 362. The processor 310 determines whether the current time, temperature, and user operation satisfy the utterance instruction condition.
  • the processor 310 of the adapter 300A transmits audio data to the refrigerator 200A via the first communication interface 361, thereby causing the refrigerator 200A to output audio (step S234).
  • the processor 310 transmits an utterance completion notification to the voice server 100 via the second communication interface 362 (step S236).
• Upon receiving the utterance completion notification, the processor 110 of the voice server 100 turns on the completion flag corresponding to the utterance instruction in the utterance instruction database 123. The processor 110 reads the next utterance instruction corresponding to the group with reference to the utterance instruction database 123.
  • the processor 110 of the voice server 100 transmits the next utterance instruction to the adapter 300A of the refrigerator 200A via the communication interface 160 (step S242). Similarly, the processor 110 of the voice server 100 transmits the next utterance instruction to the adapter 300B of the air conditioner 200B via the communication interface 160 (step S243). At this time, the processor 110 turns on the transmission flag corresponding to the utterance instruction in the utterance instruction database 123.
  • the processor 310 of the adapter 300A receives an utterance instruction via the second communication interface 362. Also in the present embodiment, when adapter 300 receives a speech instruction from voice server 100, the previous speech instruction is overwritten, that is, the previous speech instruction is deleted. The processor 310 determines whether the current time, temperature, and user operation satisfy the utterance instruction condition. Similarly, the processor 310 of the adapter 300B receives the utterance instruction via the second communication interface 362. The processor 310 determines whether the current time, temperature, and user operation satisfy the utterance instruction condition.
  • the processor 310 of the adapter 300B transmits sound data to the air conditioner 200B via the first communication interface 361, thereby causing the air conditioner 200B to output sound (step S244).
  • the processor 310 transmits an utterance completion notification to the voice server 100 via the second communication interface 362 (step S246).
• Upon receiving the utterance completion notification, the processor 110 of the voice server 100 turns on the completion flag corresponding to the utterance instruction in the utterance instruction database 123. The processor 110 reads the next utterance instruction corresponding to the group with reference to the utterance instruction database 123.
  • the processor 110 of the voice server 100 transmits the next utterance instruction to the adapter 300A of the refrigerator 200A via the communication interface 160 (step S252). Similarly, the processor 110 of the voice server 100 transmits the next utterance instruction to the adapter 300B of the air conditioner 200B via the communication interface 160 (step S253). At this time, the processor 110 turns on the transmission flag corresponding to the utterance instruction in the utterance instruction database 123.
  • the processor 310 of the adapter 300A receives an utterance instruction via the second communication interface 362.
  • the processor 310 determines whether the current time, temperature, and user operation satisfy the utterance instruction condition.
  • the processor 310 of the adapter 300B receives the utterance instruction via the second communication interface 362.
  • the processor 310 determines whether the current time, temperature, and user operation satisfy the utterance instruction condition.
  • When the utterance instruction expires without its condition being satisfied (step S254), the processor 310 of the adapter 300A transmits a notification of the expiration to the voice server 100 via the second communication interface 362 (step S256). Similarly, when the utterance instruction expires without its condition being satisfied (step S255), the processor 310 of the adapter 300B transmits a notification of the expiration to the voice server 100 via the second communication interface 362 (step S257).
  • When the processor 110 of the voice server 100 receives the expiration notification from all the adapters 300 belonging to the group, it transmits a notification for canceling the utterance instruction to all the adapters 300 belonging to the group via the communication interface 160 (steps S258 and S259). Then, the processor 110 of the voice server 100 turns on the completion flag corresponding to the utterance instruction in the utterance instruction database 123.
  • the processor 110 of the voice server 100 reads the next utterance instruction corresponding to the group with reference to the utterance instruction database 123. Further, the processor 110 of the voice server 100 transmits the next utterance instruction to the adapter 300A of the refrigerator 200A via the communication interface 160 (step S262). Similarly, the processor 110 of the voice server 100 transmits the next utterance instruction to the adapter 300B of the air conditioner 200B via the communication interface 160 (step S263). At this time, the processor 110 turns on the transmission flag corresponding to the utterance instruction in the utterance instruction database 123. Thereafter, the same processing as described above is repeated.
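The repeated sequence above can be sketched as a small state machine on the server side. This is a minimal illustration only: the class, method, and message names below are invented for the sketch and do not appear in the specification.

```python
# Illustrative sketch of the group coordination above: the same
# utterance instruction is sent to every adapter in a group; the first
# completion notification advances the group to the next instruction,
# while cancellation happens only when every adapter reports expiration.

class GroupCoordinator:
    def __init__(self, instructions, adapters):
        self.instructions = list(instructions)  # queued utterance instructions
        self.adapters = list(adapters)          # adapter IDs in the group
        self.index = 0
        self.sent = set()       # adapters holding the current instruction
        self.expired = set()    # adapters that reported expiration
        self.log = []           # messages "sent" over the network

    def dispatch(self):
        """Send the current instruction to all adapters (transmission flag ON)."""
        if self.index >= len(self.instructions):
            return
        self.sent = set(self.adapters)
        self.expired = set()
        for adapter in self.adapters:
            self.log.append(("send", adapter, self.instructions[self.index]))

    def on_completion(self, adapter):
        """The first completion turns the completion flag ON and advances the group."""
        self.log.append(("complete", adapter, self.instructions[self.index]))
        self.index += 1
        self.dispatch()

    def on_expiration(self, adapter):
        """Only when every adapter reports expiration is the instruction cancelled."""
        self.expired.add(adapter)
        if self.expired == self.sent:
            for a in self.adapters:
                self.log.append(("cancel", a, self.instructions[self.index]))
            self.index += 1
            self.dispatch()
```

For example, after `dispatch()` sends the first instruction to adapters "300A" and "300B", a single `on_completion("300A")` causes the next instruction to be sent to both, mirroring steps S242 and S243.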
  • the processor 110 of the voice server 100 may also transmit the voice data to be output to the device 200 to the terminal 600 via the control server 500.
  • the control server 500 transmits voice text data or voice data to the terminal 600.
  • the terminal 600 displays a message or outputs a voice.
  • the control server 500 may post the message on the SNS page of the group to which the terminal 600 belongs.
  • the processor 110 of the voice server 100 may output only the information related to the error of the device 200 to the device 200 and the terminal 600.
<Information processing in server 100>
  • FIG. 19 is a flowchart showing processing of the server 100 when a new utterance command according to the present embodiment is received.
  • the processor 110 registers the newly received utterance instruction this time in the utterance instruction database 122 (step S2302).
  • the processor 110 extracts an utterance instruction including the group ID of the utterance instruction newly accepted this time from the utterance instructions in the utterance instruction database 122 (step S2304).
  • the processor 110 determines whether the extracted utterance instructions include an instruction whose transmission flag is “ON” and whose completion flag is not “ON” (step S2306). If the extracted utterance instructions include such an instruction (YES in step S2306), the processor 110 ends this process.
  • If the extracted utterance instructions do not include an instruction whose transmission flag is “ON” and whose completion flag is not “ON” (NO in step S2306), the processor 110 transmits the newly accepted utterance instruction to all the adapters 300 belonging to the group via the communication interface 160 (step S2308). At this time, the processor 110 sets the transmission flag of the transmitted utterance instruction in the utterance instruction database 122 to “ON”. The processor 110 then ends this process.
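The flow of FIG. 19 can be sketched as follows. The record structure and function names are illustrative stand-ins for the utterance instruction database 122, not the actual implementation.

```python
# Illustrative sketch of FIG. 19: a newly accepted utterance
# instruction for a group is transmitted immediately only when no other
# instruction of that group is in flight (transmission flag ON,
# completion flag OFF).

from dataclasses import dataclass

@dataclass
class UtteranceInstruction:
    instruction_id: int
    group_id: str
    voice_id: str
    sent: bool = False       # transmission flag
    completed: bool = False  # completion flag

def on_new_instruction(database, new, transmit):
    """Register the new instruction; transmit it to all adapters of the
    group only when the group has no in-flight instruction."""
    database.append(new)                                  # step S2302
    same_group = [i for i in database
                  if i.group_id == new.group_id]          # step S2304
    pending = any(i.sent and not i.completed
                  for i in same_group)                    # step S2306
    if not pending:
        transmit(new.group_id, new)                       # step S2308
        new.sent = True                                   # transmission flag ON
```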
  • FIG. 20 is a flowchart showing the processing of the server 100 when receiving a notification that the utterance according to the present embodiment has been completed.
  • the processor 110 sets the utterance instruction completion flag corresponding to the notification received this time in the utterance instruction database 123 to “ON” (step S2402).
  • the processor 110 extracts an utterance instruction including the group ID of the utterance instruction corresponding to the notification received this time from the utterance instructions in the utterance instruction database 123 (step S2404).
  • the processor 110 determines whether the extracted utterance instructions include an utterance instruction whose transmission flag is not “ON” (step S2406). If such an utterance instruction exists (YES in step S2406), the processor 110 transmits the utterance instruction to all the adapters 300 belonging to the group via the communication interface 160 (step S2408). At this time, the processor 110 sets the transmission flag of the transmitted utterance instruction in the utterance instruction database 123 to “ON”. The processor 110 then ends this process.
  • Otherwise (NO in step S2406), the processor 110 transmits a notification for deleting the utterance instruction to all the adapters 300 belonging to the group via the communication interface 160. The processor 110 then ends this process.
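The completion-notification flow of FIG. 20 can be sketched in the same style; the dictionary keys and callback names are invented for illustration.

```python
# Illustrative sketch of FIG. 20: on a completion notification, the
# completion flag is set; the next not-yet-sent instruction of the same
# group (if any) is transmitted to the group's adapters; otherwise a
# deletion notification is sent so the adapters discard the instruction.

def on_completion_notice(database, done_id, transmit, delete):
    done = next(i for i in database if i["id"] == done_id)
    done["completed"] = True                              # step S2402
    same_group = [i for i in database
                  if i["group"] == done["group"]]         # step S2404
    unsent = [i for i in same_group if not i["sent"]]     # step S2406
    if unsent:
        nxt = unsent[0]
        transmit(done["group"], nxt)                      # step S2408
        nxt["sent"] = True                                # transmission flag ON
    else:
        delete(done["group"], done)  # tell adapters to discard it
```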
  • the devices 200 belonging to the same group can be prevented from outputting the same sound.
  • If a group ID is set for each family or each address, a plurality of home appliances in one house can be prevented from emitting the same voice.
  • If a group ID is set for each room, a plurality of home appliances in one room can be prevented from emitting the same voice.
  • the voice server 100 mediates the utterances of the plurality of devices 200 while transmitting the same utterance instruction to the plurality of adapters 300 belonging to the same group.
  • the voice server 100 mediates the utterances of the plurality of devices 200 by transmitting different utterance instructions to the plurality of adapters 300 belonging to the same group.
  • the utterance instruction database according to the present embodiment, the information processing of the network system 1, and the information processing in the server 100 will be described.
  • FIG. 21 is an image diagram showing a data structure of the utterance instruction database 124 according to the present embodiment.
  • the utterance instruction database 124 stores a correspondence relationship among the utterance instruction ID, the group ID, the voice ID, the start date, the end date, the start time, the end time, the transmission flag, and the completion flag. That is, the utterance instruction database 124 according to the present embodiment differs from that of the first embodiment in that the group ID is included in the correspondence relationship. Note that the definition of each data item is the same as in the first embodiment, and thus the description thereof will not be repeated individually.
<Information processing in network system 1>
  • FIG. 22 is a sequence diagram showing information processing of the network system 1 according to the present embodiment. Further, the processing of step S312 to step S328 is the same as the processing of step S212 to step S228 in FIG. 12 of the first embodiment, and therefore description thereof will not be repeated here.
  • the processor 110 of the voice server 100 receives an utterance command for causing the adapter 300 belonging to the group, for example, the adapter 300A of the refrigerator 200A and the adapter 300B of the air conditioner 200B to output voice (step S330).
  • Here, a case where the processor 110 receives a plurality of types of utterance commands will be described.
  • the processor 110 adds a plurality of utterance instructions to the utterance instruction database 123 (step S331).
  • processor 110 of voice server 100 transmits a first utterance instruction corresponding to the group to adapter 300A of refrigerator 200A via communication interface 160 (step S332). At this time, the processor 110 turns on the transmission flag corresponding to the first utterance instruction in the utterance instruction database 123.
  • the processor 110 of the voice server 100 transmits a second utterance instruction, different from the first utterance instruction corresponding to the group, to the adapter 300B of the air conditioner 200B via the communication interface 160 (step S333).
  • the processor 110 turns on the transmission flag corresponding to the second utterance instruction in the utterance instruction database 123.
  • the processor 310 of the adapter 300A receives the first utterance instruction from the voice server 100 via the second communication interface 362. The processor 310 determines whether the current time, temperature, and user operation satisfy the utterance instruction condition. Similarly, the processor 310 of the adapter 300B also receives the second utterance instruction from the voice server 100 via the second communication interface 362. The processor 310 determines whether the current time, temperature, and user operation satisfy the second utterance instruction condition.
  • the processor 310 of the adapter 300A transmits sound data to the refrigerator 200A via the first communication interface 361, thereby causing the refrigerator 200A to output sound (step S334).
  • the processor 310 transmits an utterance completion notification to the voice server 100 via the second communication interface 362 (step S336).
  • Upon receiving the utterance completion notification, the processor 110 of the voice server 100 turns on the completion flag corresponding to the first utterance instruction in the utterance instruction database 123.
  • the processor 110 refers to the utterance instruction database 123 and reads the next utterance instruction corresponding to the group, that is, the third utterance instruction.
  • the processor 110 of the voice server 100 transmits a third utterance instruction to the adapter 300A of the refrigerator 200A via the communication interface 160 (step S342). At this time, the processor 110 turns on the transmission flag corresponding to the third utterance instruction in the utterance instruction database 123.
  • the processor 310 of the adapter 300A receives the third utterance instruction via the second communication interface 362. Also in this embodiment, when adapter 300 receives an utterance instruction from voice server 100, the previous first utterance instruction is overwritten, that is, the previous first utterance instruction is deleted. The processor 310 determines whether the current time, temperature, and user operation satisfy the condition of the third utterance instruction.
  • the processor 310 of the adapter 300B transmits sound data to the air conditioner 200B via the first communication interface 361, thereby causing the air conditioner 200B to output sound (step S344).
  • the processor 310 transmits an utterance completion notification to the voice server 100 via the second communication interface 362 (step S346).
  • Upon receiving the utterance completion notification, the processor 110 of the voice server 100 turns on the completion flag corresponding to the second utterance instruction in the utterance instruction database 123.
  • the processor 110 refers to the utterance instruction database 123 and reads the next utterance instruction corresponding to the group, that is, the fourth utterance instruction.
  • the processor 110 of the voice server 100 transmits a fourth utterance instruction to the adapter 300B of the air conditioner 200B via the communication interface 160 (step S353). At this time, the processor 110 turns on the transmission flag corresponding to the utterance instruction in the utterance instruction database 123.
  • When the third utterance instruction expires without its condition being satisfied, the processor 310 of the adapter 300A transmits a notification of the expiration to the voice server 100 via the second communication interface 362.
  • When receiving the expiration notification from the adapter 300A, the processor 110 of the voice server 100 refers to the utterance instruction database 123 and reads the next utterance instruction corresponding to the group, that is, the fifth utterance instruction. Further, the processor 110 of the voice server 100 transmits the fifth utterance instruction to the adapter 300A of the refrigerator 200A via the communication interface 160. Thereafter, the same processing as described above is repeated.
  • the processor 110 of the voice server 100 may also transmit the voice data to be output to the device 200 to the terminal 600 via the control server 500.
  • the control server 500 transmits voice text data or voice data to the terminal 600.
  • the terminal 600 displays a message or outputs a voice.
  • the control server 500 may post the message on the SNS page of the group to which the terminal 600 belongs.
  • the processor 110 of the voice server 100 may output only the information related to the error of the device 200 to the device 200 and the terminal 600.
<Information processing in server 100>
  • FIG. 23 is a flowchart showing processing of the server 100 when a new utterance command according to the present embodiment is received.
  • the processor 110 registers the newly received utterance instruction this time in the utterance instruction database 122 (step S3302).
  • the processor 110 extracts an utterance instruction including the group ID of the utterance instruction newly accepted this time from the utterance instructions in the utterance instruction database 122 (step S3304).
  • the processor 110 determines whether, among the device IDs corresponding to the group ID, there is a device ID that is not associated with an utterance instruction whose transmission flag is “ON” and whose completion flag is “OFF” (step S3306). If there is no such device ID (NO in step S3306), the processor 110 ends this process.
  • If there is such a device ID (YES in step S3306), the processor 110 transmits the newly accepted utterance instruction to the adapter 300 of that device ID via the communication interface 160 (step S3308).
  • the processor 110 sets the transmission flag of the utterance instruction transmitted this time in the utterance instruction database 122 to “ON”. The processor 110 ends this process.
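The per-device dispatch of FIG. 23 can be sketched as follows; the field names and the `group_devices` mapping are assumptions made for the sketch, not part of the specification.

```python
# Illustrative sketch of FIG. 23: in the embodiment where adapters of a
# group receive different utterance instructions, a newly accepted
# instruction is sent only to a device of the group that has no
# in-flight instruction (transmission flag ON, completion flag OFF).

def on_new_instruction_v3(database, group_devices, new, transmit):
    database.append(new)                                   # step S3302
    busy = {i["device"] for i in database
            if i["group"] == new["group"]
            and i["sent"] and not i["completed"]}          # steps S3304-S3306
    idle = [d for d in group_devices[new["group"]] if d not in busy]
    if idle:
        new["device"] = idle[0]
        transmit(idle[0], new)                             # step S3308
        new["sent"] = True                                 # transmission flag ON
```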
  • FIG. 24 is a flowchart showing the processing of the server 100 when receiving a notification that the utterance according to the present embodiment has been completed.
  • the processor 110 sets the utterance instruction completion flag corresponding to the notification received this time in the utterance instruction database 123 to “ON” (step S3402).
  • the processor 110 extracts an utterance instruction including the group ID of the utterance instruction corresponding to the notification received this time from the utterance instructions in the utterance instruction database 123 (step S3404).
  • the processor 110 determines whether the extracted utterance instructions include an utterance instruction whose transmission flag is not “ON” (step S3406). If such an utterance instruction exists (YES in step S3406), the processor 110 transmits the utterance instruction, via the communication interface 160, to the adapter 300 from which the notification was received this time (step S3408). At this time, the processor 110 sets the transmission flag of the transmitted utterance instruction in the utterance instruction database 123 to “ON”. The processor 110 then ends this process.
  • If the extracted utterance instructions include no instruction whose transmission flag is not “ON” (NO in step S3406), the processor 110 ends this process.
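The completion-notification flow of FIG. 24 can be sketched as follows; as above, the record fields and the callback are illustrative stand-ins rather than the actual implementation.

```python
# Illustrative sketch of FIG. 24: on a completion notification, the next
# not-yet-sent utterance instruction of the same group is sent back to
# the adapter that reported the completion.

def on_completion_notice_v3(database, done_id, transmit):
    done = next(i for i in database if i["id"] == done_id)
    done["completed"] = True                              # step S3402
    unsent = [i for i in database
              if i["group"] == done["group"]
              and not i["sent"]]                          # steps S3404-S3406
    if unsent:
        nxt = unsent[0]
        transmit(done["device"], nxt)                     # step S3408
        nxt["sent"] = True                                # transmission flag ON
```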
  • the devices 200 belonging to the same group can be prevented from outputting the same sound.
  • If a group ID is set for each family or each address, a plurality of home appliances in one house can be prevented from emitting the same voice.
  • If a group ID is set for each room, a plurality of home appliances in one room can be prevented from emitting the same voice.
  • the device 200 transmits / receives data to / from the voice server 100 via the adapter 300.
  • the device 200 transmits / receives data to / from the voice server 100 without using the adapter 300.
  • the configuration in which the device 200 transmits / receives data to / from the voice server 100 without using the adapter 300 can be applied to the network system 1 of any other embodiment.
  • the network system according to the present embodiment is different from the network system 1 according to the first to third embodiments in that the device 200 has the role of the adapter 300.
<Overall configuration of network system>
  • FIG. 25 is an image diagram showing an overall configuration and an operation outline of the network system 1 according to the present embodiment.
  • the network system 1 mainly includes devices such as a refrigerator 200A, an air conditioner 200B, and a washing machine 200C, and a voice server 100 for controlling the voice output of the devices.
  • the network system according to the present embodiment is different from the network system 1 of the first embodiment in that the adapters 300A, 300B, and 300C are not included.
  • the network system 1 may also include a router 400 for connecting devices such as the refrigerator 200A, the air conditioner 200B, and the washing machine 200C to the Internet, a control server 500 for processing message exchange between the family and the devices, terminals such as smartphones 600A, 600B, and 600C and a notebook personal computer 600D, and databases 101 and 501.
  • the voice server 100 or the control server 500 may store at least one of the databases 101 and 501.
  • the voice server 100 transmits a voice data acquisition instruction to the device 200 at the first timing (1).
  • Since the first timing is the same as that in the first to third embodiments, the description will not be repeated here.
  • the processor 210 of the device 200 downloads the designated audio data.
  • the processor 210 of the device 200 notifies the audio server 100 that the acquisition of the audio data has been completed (2).
  • the voice server 100 transmits a voice data utterance instruction to the device 200 at the second timing (3).
  • Since the second timing is the same as that in the first to third embodiments, the description will not be repeated here.
  • the processor 210 of the device 200 causes the speaker 280 to output sound based on the utterance instruction.
  • the processor 210 notifies the voice server 100 of the notification that the utterance is completed (4).
  • Upon receiving the notification that the utterance is completed from the device 200, the voice server 100 transmits the next utterance instruction to the device 200 at the third timing (5).
  • Since the third timing is the same as that in the first to third embodiments, the description will not be repeated here.
  • the processor 210 of the device 200 causes the speaker 280 to output sound based on the next utterance instruction.
  • the processor 210 notifies the voice server 100 of the notification that the utterance is completed via the communication interface 260 (6).
  • the network system 1 according to the present embodiment also has the same effects as those of the first to third embodiments.
  • When the device 200 detects an error of the device 200 itself, a notification to that effect may be transmitted to the voice server 100 and the control server 500. Then, the voice server 100 transmits an error voice utterance instruction to the device 200. The device 200 emits a voice indicating that an error has occurred. Similarly, the control server 500 transmits information indicating the error or the error voice itself to the terminal 600. As a result, the terminal 600 displays information indicating the error or outputs a sound indicating the error. The control server 500 may post information indicating the error on the SNS page of the group to which the terminal 600 belongs.
  • the processor 210 of the device 200 may cause the speaker 280 to output the error sound acquired in advance even if there is no utterance instruction from the voice server 100.
  • the device 200 transmits information indicating an error or the error sound itself to the terminal 600 via the control server 500.
  • the terminal 600 displays information indicating an error or outputs a sound indicating an error.
  • the device 200 transmits / receives data to / from the voice server 100 via the adapter 300.
  • the device 200 transmits / receives data to / from the voice server 100 without using the adapter 300.
  • an intermediate configuration between them can also be adopted.
  • adapters 300A, 300B, and 300C are arranged between devices such as refrigerator 200A, air conditioner 200B, and washing machine 200C and router 400.
  • the role of the adapter 300 is reduced, and the role of the device 200 is increased accordingly.
  • the device 200 plays a part of the role of the adapter 300 according to the first to third embodiments.
  • the adapter 300 simply receives data from the device 200 by using UART or the like, and transfers the data to the router 400 by using WiFi (registered trademark) or the like. Conversely, the adapter 300 simply receives data from the router 400 by using WiFi (registered trademark) or the like, and transmits the data to the device 200 by using UART or the like.
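This thin pass-through role of the adapter can be sketched with toy transport objects; `FifoPort` and `bridge_once` are invented names, and real UART/WiFi drivers would replace them.

```python
# Illustrative sketch of the thin-adapter variant: the adapter merely
# forwards frames between the device-side serial link (UART) and the
# network side (WiFi) without interpreting utterance instructions.

def bridge_once(uart, wifi):
    """Forward one pending frame in each direction, if any."""
    frame = uart.receive()
    if frame is not None:
        wifi.send(frame)      # device -> router, unchanged
    frame = wifi.receive()
    if frame is not None:
        uart.send(frame)      # router -> device, unchanged

class FifoPort:
    """Toy transport: an inbox that is read from and an outbox written to."""
    def __init__(self, inbox):
        self.inbox = list(inbox)
        self.outbox = []
    def receive(self):
        return self.inbox.pop(0) if self.inbox else None
    def send(self, frame):
        self.outbox.append(frame)
```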
  • the device 200 and the adapter 300 transmit data that is the basis for canceling the utterance.
  • the voice server 100 may be capable of transmitting a command for canceling an utterance instruction that has not yet been completed.
  • FIG. 27 is a sequence diagram showing information processing of the network system 1 according to the present embodiment.
  • processor 110 of voice server 100 transmits an utterance instruction to adapter 300A of refrigerator 200A via communication interface 160 (step S132). At this time, the processor 110 turns on the transmission flag corresponding to the utterance instruction in the utterance instruction database 122.
  • the processor 310 of the adapter 300A receives an utterance instruction via the second communication interface 362.
  • the processor 310 determines whether the current time, temperature, and user operation satisfy the utterance instruction condition.
  • When the condition for blinking the LED light 230, that is, the condition for speaking, is satisfied, the processor 310 transmits to the device 200, via the first communication interface 361, a command for setting the LED state (blinking) indicating that there is a voice instruction.
  • the processor 210 of the device 200 causes the LED light 230 to blink (step S133).
  • When the switch (button) 240 is pressed while the LED light 230 is blinking (step S134), that is, when the utterance condition is satisfied, the processor 310 of the adapter 300 transmits the voice data to the refrigerator 200A via the communication interface 360, thereby causing the refrigerator 200A to output the voice (step S135).
  • When the expiration date of the voice data has expired, the processor 210 of the device 200 cancels the target utterance command and turns off the LED light 230 (steps S145 and S175). Even if the processor 110 of the voice server 100 receives the expiration notification (steps S146 and S176), it does not need to transmit a notification for cancellation.
  • the processor 110 of the voice server 100 turns ON the completion flag corresponding to the utterance instruction in the utterance instruction database 122.
  • the processor 110 refers to the utterance instruction database 122 and reads the next utterance instruction corresponding to the air conditioner 200B.
  • the processor 110 transmits the next utterance instruction to the adapter 300 via the communication interface 160 (step S152).
  • the processor 110 transmits a command for canceling the utterance instruction to the adapter 300 via the communication interface 160 in response to a command from the administrator or a request from another computer (step S182).
  • the processor of the device 200 or the adapter 300 cancels the target utterance command.
  • the LED light 230 is turned off (step S155).
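The LED/button interaction above can be sketched as a small adapter-side state machine; the class and event names are illustrative, not part of the specification.

```python
# Illustrative sketch of the LED/button variant: when the utterance
# condition is met the adapter has the device blink its LED; the voice
# is output only when the user presses the button while the LED is
# blinking; on expiration the instruction is cancelled and the LED
# is turned off.

class LedUtteranceAdapter:
    def __init__(self):
        self.led_blinking = False
        self.instruction = None
        self.events = []  # what was "sent" to the device / server

    def on_instruction(self, instruction):
        self.instruction = instruction  # a new instruction overwrites the old

    def on_condition_met(self):
        if self.instruction is not None:
            self.led_blinking = True
            self.events.append("led_blink")                  # e.g. step S133

    def on_button_pressed(self):
        if self.led_blinking and self.instruction is not None:
            self.events.append(("speak", self.instruction))  # e.g. step S135
            self.events.append("utterance_completed")        # completion notice
            self.instruction = None
            self.led_blinking = False

    def on_expired(self):
        self.instruction = None   # cancel the target utterance command
        self.led_blinking = False
        self.events.append("led_off")                        # e.g. step S145
```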
  • the network system 1 also transmits an instruction for the device 200 or the adapter 300 to cancel the utterance.
  • the voice server 100 may be capable of canceling, for the device 200, an utterance instruction that has already been completed by another device.
  • FIG. 28 is a sequence diagram showing information processing of the network system 1 according to the present embodiment.
  • processor 110 of voice server 100 transmits an utterance instruction to adapter 300A of refrigerator 200A via communication interface 160 (step S232).
  • processor 110 of voice server 100 transmits an utterance instruction to adapter 300B of air conditioner 200B via communication interface 160 (step S233).
  • the processor 110 turns on the transmission flag corresponding to the utterance instruction in the utterance instruction database 123.
  • the processor 310 of the adapter 300A receives an utterance instruction via the second communication interface 362. The processor 310 determines whether the current time, temperature, and user operation satisfy the utterance instruction condition. Similarly, the processor 310 of the adapter 300B also receives the utterance instruction via the second communication interface 362. The processor 310 determines whether the current time, temperature, and user operation satisfy the utterance instruction condition.
  • When the condition for blinking the LED light 230, that is, the condition for speaking, is satisfied, the processor 310 of the adapter 300A transmits to the device 200, via the first communication interface 361, a command for setting the LED state (blinking) indicating that there is a voice instruction.
  • the processor 210 of the device 200 blinks the LED light 230 (step S2321).
  • When the utterance condition is satisfied, the processor 310 of the adapter 300 transmits the voice data to the refrigerator 200A via the communication interface 360, thereby causing the refrigerator 200A to output the voice (step S234).
  • the processor 310 transmits an utterance completion notification to the voice server 100 via the second communication interface 362 (step S236).
  • When the condition for blinking the LED light 230, that is, the condition for speaking, is satisfied, the processor 310 of the adapter 300B transmits to the device 200, via the first communication interface 361, a command for setting the LED state (blinking) indicating that there is a voice instruction.
  • the processor 210 of the device 200 blinks the LED light 230 (step S2331).
  • Upon receiving the utterance completion notification from the adapter 300A, the processor 110 of the voice server 100 turns on the completion flag corresponding to the utterance instruction in the utterance instruction database 123. Then, the processor 110 transmits a command for canceling the utterance instruction to the adapter 300B via the communication interface 160 (step S2332). When receiving the command for canceling the utterance instruction, the processor of the device 200 or the adapter 300 cancels the target utterance command and turns off the LED light 230 (step S2333). The processor 110 of the voice server 100 then refers to the utterance instruction database 122 and reads the next utterance instruction corresponding to the group.
  • When the expiration date of the voice data has expired (steps S254 and S255), the processor 210 of the device 200 cancels the target utterance command and turns off the LED light 230 (steps S2523 and S2533). Even if the processor 110 of the voice server 100 receives the expiration notification (steps S256 and S257), it does not need to transmit a notification for cancellation.
  • If the extracted utterance instructions include no instruction whose transmission flag is not “ON” (NO in step S2406), there is no need to transmit a cancellation notification to the adapter 300.
  • the configuration of the present embodiment can also be applied to the network system 1 of the third to fifth embodiments.
<Other application examples>
  • The present invention can also be applied to a case where it is achieved by supplying a program to a system or apparatus. In that case, a storage medium (or memory) storing program code of software for achieving the present invention is supplied to the system or apparatus, and the computer (or CPU or MPU) of the system or apparatus reads and executes the program code stored in the storage medium, whereby the effects of the present invention can also be enjoyed.
  • the program code itself read from the storage medium realizes the functions of the above-described embodiment, and the storage medium storing the program code constitutes the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephonic Communication Services (AREA)
  • Selective Calling Equipment (AREA)

Abstract

The invention relates to a network system that can cause an appliance to perform voice output more flexibly, or that can better suppress the peak amount of traffic on a network, or that can produce voice output more quickly than conventional systems. Disclosed is a network system (1) comprising: at least one appliance (200, 300) capable of storing a plurality of types of voice data; and a server (100) for sending to said appliance first instructions ordering the acquisition of the voice data, and for sending, at a timing different from the first instructions, second instructions ordering voice output based on the voice data.
PCT/JP2015/062803 2014-05-15 2015-04-28 Network system, server, communication appliance, information processing method, and program WO2015174272A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2016519202A JP6349386B2 (ja) 2014-05-15 2015-04-28 Network system, server, communication device, and information processing method
CN201580023383.7A CN106255963B (zh) 2014-05-15 2015-04-28 Network system, server, communication device, and information processing method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014-101082 2014-05-15
JP2014101082 2014-05-15

Publications (1)

Publication Number Publication Date
WO2015174272A1 true WO2015174272A1 (fr) 2015-11-19

Family

ID=54479814

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/062803 WO2015174272A1 (fr) 2014-05-15 2015-04-28 Network system, server, communication apparatus, information processing method, and program

Country Status (3)

Country Link
JP (3) JP6349386B2 (fr)
CN (1) CN106255963B (fr)
WO (1) WO2015174272A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108092925A (zh) * 2017-12-05 2018-05-29 佛山市顺德区美的洗涤电器制造有限公司 Voice updating method and device

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7036561B2 (ja) * 2017-10-03 2022-03-15 東芝ライフスタイル株式会社 Home appliance system
EP3751823B1 (fr) * 2018-03-14 2023-06-28 Google LLC Generating IoT-based notifications and providing one or more instructions causing one or more automated assistant clients of one or more client devices to automatically render the IoT-based notifications
JP6681429B2 (ja) * 2018-05-25 2020-04-15 シャープ株式会社 Network system, server, and information processing method
JP2020041738A (ja) * 2018-09-10 2020-03-19 シャープ株式会社 Network system, server, and information processing method
CN109558357B (zh) * 2018-10-31 2020-10-30 许继集团有限公司 Method for acquiring and controlling signals, and main CPU plug-in and sub plug-in
JP7373386B2 (ja) * 2019-12-19 2023-11-02 東芝ライフスタイル株式会社 Control device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005031540A (ja) * 2003-07-10 2005-02-03 Toshiba Corp Home electric appliance having a voice function
JP2005311864A (ja) * 2004-04-23 2005-11-04 Toshiba Corp Home electric appliance, adapter device, and home electric appliance system
JP2008046424A (ja) * 2006-08-17 2008-02-28 Toshiba Corp Home electric appliance and home electric appliance network system
JP2013257295A (ja) * 2012-06-14 2013-12-26 Sharp Corp Weight measurement system, server, weight scale, weight measurement result notification method, and program
JP2013258656A (ja) * 2012-06-14 2013-12-26 Sharp Corp Information notification system, information notification server, information notification method, and program

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100514191B1 (ko) * 2003-01-23 2005-09-13 삼성전자주식회사 Integrated remote controller and set-top box for the integrated remote controller
JP2004274228A (ja) * 2003-03-06 2004-09-30 Matsushita Electric Ind Co Ltd Information processing system, electronic device, information processing apparatus, and sound output apparatus
KR100754521B1 (ko) * 2005-02-22 2007-09-03 삼성전자주식회사 Home network system and information providing method in a home network system
JP4703688B2 (ja) * 2008-06-03 2011-06-15 三菱電機株式会社 Speech right adjustment system and speech-capable device
JP2010068390A (ja) * 2008-09-12 2010-03-25 Hitachi Kokusai Electric Inc Wireless communication system
KR101482138B1 (ko) * 2009-07-31 2015-01-13 엘지전자 주식회사 Home appliance diagnosis system and diagnosis method thereof
WO2012005512A2 (fr) * 2010-07-06 2012-01-12 엘지전자 주식회사 Apparatus for diagnosing household appliances
JP5785218B2 (ja) * 2013-05-22 2015-09-24 シャープ株式会社 Network system, server, home appliance, program, and home appliance cooperation method
JP2015148648A (ja) * 2014-02-04 2015-08-20 シャープ株式会社 Dialogue system, utterance control device, dialogue device, utterance control method, control program for the utterance control device, and control program for the dialogue device


Also Published As

Publication number Publication date
CN106255963A (zh) 2016-12-21
CN106255963B (zh) 2019-02-15
JP6349386B2 (ja) 2018-06-27
JPWO2015174272A1 (ja) 2017-04-20
JP2017192158A (ja) 2017-10-19
JP2017220248A (ja) 2017-12-14
JP6678626B2 (ja) 2020-04-08
JP6371889B2 (ja) 2018-08-08

Similar Documents

Publication Publication Date Title
JP6371889B2 (ja) Network system, server, and information processing method
CN108702389B (zh) Architecture for remotely controlling IoT (Internet of Things) devices
JP6715283B2 (ja) Network system and information processing method
CN105580313B (zh) Method and device for controlling devices for a smart home service
WO2015129372A1 (fr) Audio system
KR101695398B1 (ko) Apparatus and method for controlling home automation component devices at a sub-terminal
JP2016063415A (ja) Network system, voice output method, server, device, and voice output program
JP2018019313A (ja) Control system, communication device, control method, and program
CN113168334A (zh) Data processing method and apparatus, electronic device, and readable storage medium
US20150312622A1 (en) Proximity detection of candidate companion display device in same room as primary display using upnp
KR102403117B1 (ko) Dongle and control method thereof
CN105573128B (zh) User device and driving method thereof, and service providing apparatus and driving method thereof
JP6069239B2 (ja) Network system, communication method, server, terminal, and communication program
JP6418863B2 (ja) Network system, voice output method, server, device, and voice output program
WO2018079063A1 (fr) Network system, server, information processing method, air conditioner, and program
US20200041151A1 (en) Air conditioning control device, air conditioning control method, and program
JP6607668B2 (ja) Network system, voice output method, server, device, and voice output program
KR20180077490A (ko) System for providing a home network service using a portable terminal
WO2016052107A1 (fr) Network system, server, device, and communication terminal
JP7147158B2 (ja) Information processing device, program, and control method
JP2020122585A (ja) Air conditioning system
US20240015073A1 (en) Connection configuration method and apparatus
JP2013255108A (ja) Controller, control terminal, remote control system, and program for causing a processor to execute a communication method
US20240028315A1 (en) Automatically Creating Efficient Meshbots
JP2017151742A (ja) Network system, server, information processing method, and electric appliance

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15792838

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016519202

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15792838

Country of ref document: EP

Kind code of ref document: A1