WO2015174113A1 - Information-processing device, system, information-processing method, and program - Google Patents

Information-processing device, system, information-processing method, and program

Info

Publication number
WO2015174113A1
Authority
WO
WIPO (PCT)
Prior art keywords
condition
sensor data
target state
detection target
information processing
Prior art date
Application number
PCT/JP2015/054820
Other languages
French (fr)
Japanese (ja)
Inventor
大夢 瀧沢
丈博 萩原
弘之 増田
玄大 近藤
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2015174113A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]

Definitions

  • the present disclosure relates to an information processing apparatus, a system, an information processing method, and a program.
  • conditions such as a threshold value for the detection value of the sensor are set in advance, including changes due to changes in the connection state of the terminals.
  • various conditions according to the user's needs can be detected by allowing the user to arbitrarily set the conditions.
  • the detection value of a sensor is often not simple enough for the user to understand intuitively, and the relationship between the detection value and the state to be detected is not always obvious to the user. Therefore, it is not always easy for the user to set the conditions arbitrarily.
  • the present disclosure proposes a new and improved information processing apparatus, system, information processing method, and program that allow a user to set conditions regarding sensor data intuitively and easily.
  • according to the present disclosure, there is provided an information processing apparatus including a processing circuit that realizes a sensor data acquisition function for acquiring sensor data provided by one or a plurality of sensors, and a condition setting function for setting, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state.
  • according to the present disclosure, there is also provided a system including: one or a plurality of sensors; an information processing device that acquires sensor data provided by the one or plurality of sensors and sets, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state; and a terminal device that outputs or presents instructions or information relating to the setting of the condition provided by the information processing device.
  • according to the present disclosure, there is also provided an information processing method including: acquiring, by a processing circuit of an information processing apparatus, sensor data provided by one or a plurality of sensors; and setting, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state.
  • according to the present disclosure, there is also provided a program for causing a processing circuit to realize a sensor data acquisition function for acquiring sensor data provided by one or a plurality of sensors, and a condition setting function for setting, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state.
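  • As an illustration of the condition setting function described above, the following is a minimal Python sketch, assuming scalar detection values and a simple range-based condition; the function names, the widening by one standard deviation, and the sample values are assumptions for illustration, not taken from the publication.

```python
from statistics import mean, stdev

def set_condition_from_practice(samples):
    """Derive a threshold condition from sensor data recorded while the
    user practiced (demonstrated) the detection target state.

    `samples` are detection values captured during the practice; the
    returned predicate is satisfied when a new value falls inside the
    observed range widened by one standard deviation.
    """
    m, s = mean(samples), stdev(samples)
    lower, upper = m - s, m + s

    def condition(value):
        return lower <= value <= upper

    return condition

# Acceleration magnitudes captured while the user opened a door:
door_open = set_condition_from_practice([2.1, 2.4, 1.9, 2.3, 2.2])
print(door_open(2.0))  # True  -> detection target state recognized
print(door_open(0.1))  # False -> sensor at rest
```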
  • FIG. 1 is a diagram illustrating a schematic configuration of a system to which the technology according to an embodiment of the present disclosure can be applied.
  • FIG. 2 is a diagram illustrating a first specific configuration example of the system shown in FIG. 1.
  • FIG. 3 is a diagram illustrating a second specific configuration example of the system shown in FIG. 1.
  • FIG. 4 is a diagram illustrating a third specific configuration example of the system shown in FIG. 1.
  • FIG. 5 is a diagram illustrating a fourth specific configuration example of the system shown in FIG. 1.
  • FIG. 6 is a diagram illustrating a fifth specific configuration example of the system shown in FIG. 1.
  • FIG. 7 is a diagram illustrating a sixth specific configuration example of the system shown in FIG. 1.
  • FIG. 8 is a diagram illustrating a seventh specific configuration example of the system shown in FIG. 1.
  • FIG. 9 is a diagram illustrating an eighth specific configuration example of the system shown in FIG. 1.
  • FIG. 10 is a diagram illustrating a first example of a UI provided in the system shown in FIG. 1.
  • FIG. 11 is a diagram illustrating a second example of a UI provided in the system shown in FIG. 1.
  • FIGS. 12 to 17 are diagrams for describing first to sixth specific examples of element cooperation operation in the system shown in FIG. 1.
  • FIG. 18 is a diagram illustrating a first example of a system configuration according to an embodiment of the present disclosure.
  • FIG. 19 is a diagram illustrating a second example of a system configuration according to an embodiment of the present disclosure.
  • FIG. 20 is a diagram illustrating a third example of a system configuration according to an embodiment of the present disclosure.
  • FIG. 21 is a flowchart illustrating an example of processing according to an embodiment of the present disclosure.
  • FIG. 22 is a flowchart showing the condition generation process of FIG. 21 in more detail.
  • FIG. 23 is a flowchart showing the condition verification process of FIG. 21 in more detail.
  • Further drawings illustrate examples of user interfaces for sensor selection, condition selection, condition proposal, composite condition generation, training data input for classification, training data extraction, and real-time feedback according to an embodiment of the present disclosure, as well as first and second application examples, an example of a user interface for creating a program in the first application example, and a block diagram illustrating a hardware configuration example of an information processing apparatus according to an embodiment of the present disclosure.
  • FIG. 1 is a diagram illustrating a schematic configuration of a system to which a technique according to an embodiment of the present disclosure can be applied.
  • the system 10 includes an element 100, a manager 200, a server 300, and a UI (User Interface) device 400. These devices are connected to each other via a network NW.
  • the network NW can include, for example, Bluetooth (registered trademark), Wi-Fi, the Internet, and the like.
  • Element 100 is a device including a communication unit 110, a control unit 120, a function unit 130, and a power supply unit 140.
  • the communication unit 110 includes a communication device that communicates with the manager 200 and / or other elements 100 via the network NW.
  • the control unit 120 is realized by, for example, a microcontroller or a CPU (Central Processing Unit), and controls the functional unit 130.
  • the function unit 130 includes, for example, a sensor and an actuator, and realizes a function unique to each element 100.
  • the power supply unit 140 includes a battery, a power plug, and the like, and supplies power for operating the communication unit 110, the control unit 120, and the function unit 130.
  • devices other than the element 100 also have a power supply unit, but illustration thereof is omitted.
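  • As a rough illustration of the composition just described, the following Python sketch models an element reporting to its manager; all names and fields are assumptions for illustration only.

```python
from dataclasses import dataclass

class Manager:
    """Minimal stand-in for the manager 200."""
    def receive(self, element_id, payload):
        print(f"{element_id}: {payload}")

@dataclass
class Element:
    """Minimal model of an element 100 with an ID and a function type;
    the control, function, and power supply units are abstracted away."""
    element_id: str
    kind: str  # e.g. "acceleration sensor", "camera", "LED lamp"

    def send(self, manager, payload):
        # Role of the communication unit 110: report data to the manager.
        manager.receive(self.element_id, payload)

accel = Element("100a", "acceleration sensor")
accel.send(Manager(), {"acceleration": 2.3})
```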
  • the manager 200 is a device including a communication unit 210, a control unit 220, and a storage unit 230.
  • the communication unit 210 may include a communication device that communicates with the element 100, the server 300, and the UI device 400 via the network NW.
  • the control unit 220 is realized by, for example, a microcontroller or a CPU, and controls communication between the elements 100 and between the element 100 and the server 300 via the communication unit 210.
  • the control unit 220 can include a calculation unit 221.
  • the computation unit 221 performs computations on information received from the element 100, information transmitted to the element 100, information transmitted to the server 300, information received from the server 300, and the like.
  • the storage unit 230 includes a memory or a storage, and stores information exchanged by communication controlled by the control unit 220, information calculated by the calculation unit 221, and the like.
  • the server 300 includes a communication unit 310, a control unit 320, and a storage unit 330.
  • Communication unit 310 includes a communication device that communicates with manager 200 via network NW.
  • the control unit 320 is realized by, for example, a microcontroller or a CPU, and can include a calculation unit 321 and a UI providing unit 323.
  • the calculation unit 321 performs calculations on information received from the element 100 or the manager 200, information transmitted to the element 100 or the manager 200, information received from other servers, information transmitted to other servers, and the like.
  • the UI providing unit 323 provides a UI for the user to specify a combination of the element 100 and / or the manager 200 and to confirm various information.
  • the UI is provided via, for example, a display and a touch panel included in the input / output unit 430 of the UI device 400.
  • the storage unit 330 includes a memory or storage, and stores various information related to the elements 100, programs for operating a plurality of elements 100 in combination, software to be combined with the elements 100, information calculated by the calculation unit 321, and the like.
  • identification information (ID) of the element 100 in the system 10 is stored in the storage unit 330 of the server 300.
  • the element 100 can be added to the system 10 at any time.
  • the storage unit 330 also stores a program for operating the plurality of elements 100 in combination.
  • the program can be added at any time by, for example, a user, a developer, or a system administrator.
  • the storage unit 330 stores software.
  • the software implements a function by being executed in combination with the element 100.
  • a function may also be realized by executing a plurality of pieces of software in combination.
  • Functions realized by the software may include, for example, provision of materials such as voice and images, timers, calendars, analysis functions such as image recognition and voice recognition, and information acquisition functions from external services such as weather forecasts and news.
  • Software can be added at any time by, for example, a user, a developer, or a system administrator.
  • the UI providing unit 323 of the server 300 provides a UI via, for example, a display and a touch panel included in the input / output unit 430 of the UI device 400.
  • the user can select a program for operating a plurality of elements 100 in combination from, for example, a program preinstalled in the element 100 or the manager 200, a program provided by the server 300, or the like.
  • the user can specify a combination of a plurality of elements 100, a combination of the elements 100 and software, and a combination of a plurality of software via the UI.
  • the specified combination is stored as a program in the storage unit 330 of the server 300, and the user can obtain a desired application by executing the program by the element 100 and the software.
  • the UI device 400 is a device that includes a communication unit 410, a control unit 420, and an input / output unit 430.
  • Communication unit 410 may include a communication device that communicates with manager 200 and server 300 via network NW.
  • the control unit 420 is realized by, for example, a microcontroller or a CPU, and controls the input / output unit 430 and the exchange of information through the communication unit 410.
  • the input / output unit 430 includes, for example, a display, a speaker, a touch panel, and the like, presents various types of information to the user via the UI, and accepts an operation input from the user.
  • FIG. 2 is a diagram showing a first specific configuration example of the system shown in FIG. 1. Referring to FIG. 2, the system 10a includes a first tablet that functions as the element 100, the manager 200, and the UI device 400, a second tablet that functions as the element 100, and a server 300.
  • the first and second tablets similarly include a display, a touch panel, a CPU, a sensor, and the like.
  • one of the two tablets functions as the element 100, the manager 200, and the UI device 400, and the other functions as the element 100.
  • the roles of these tablets may be interchangeable, and the tablets that function as the manager 200 and the UI device 400 may be changed depending on the situation.
  • the functions desired by the user can be realized by using various sensors of the tablet and operating the two tablets in combination.
  • in the example shown in FIG. 2, the first and second tablets are terminal devices each having a functional unit 130 (such as a sensor) that functions as the element 100 and a control unit 220 (such as a CPU) that functions as the manager 200.
  • Such a terminal device is not limited to a tablet, and may be another device such as a smartphone.
  • the number of terminal devices included in the system 10a is not limited to the illustrated example, and may be three or more, for example.
  • FIG. 3 is a diagram illustrating a second specific configuration example of the system illustrated in FIG. 1.
  • the system 10b includes elements 100a to 100g, a tablet that functions as the manager 200 and the UI device 400, and a server 300.
  • similar to the first example above, the tablet may be replaced by another device such as a smartphone.
  • the element 100 includes an acceleration sensor 100a, a camera 100b, a human sensor 100c, a button 100d, a speaker 100e, an LED (Light Emitting Diode) lamp 100f, and a microphone 100g.
  • Each element 100 communicates with the tablet by wireless communication such as Bluetooth (registered trademark), and operates in association with the control of the manager 200 realized by the tablet.
  • in the present specification, the elements 100a to 100g are illustrated as examples of the element 100, but this is not intended to limit the types of elements 100 used in each example.
  • the system 10 may include any of the elements 100a to 100g, or any other type of element 100.
  • FIG. 4 is a diagram showing a third specific configuration example of the system shown in FIG. 1. Referring to FIG. 4, the system 10c includes an element 100, a manager 200, a tablet that functions as the UI device 400, and a server 300.
  • the manager 200 exists separately from the tablet that functions as the UI device 400.
  • the manager 200 may be realized by a dedicated device, for example, or may be realized as one of base station functions such as Wi-Fi. Similar to the first and second examples described above, the tablet functioning as the UI device 400 may be replaced by another device such as a smartphone.
  • the manager 200 and the tablet may be able to communicate with the server 300 independently of each other.
  • the tablet may directly transmit the setting information to the manager 200 via Wi-Fi or the like.
  • a mesh network repeater NW_m using Wi-Fi or the like is used for connection with the element 100 at a remote location.
  • various types of wireless communication such as Bluetooth (registered trademark) and Wi-Fi can be used for communication between the element 100 and the manager 200 and/or between the elements 100.
  • FIG. 5 is a diagram showing a fourth specific configuration example of the system shown in FIG. 1. Referring to FIG. 5, the system 10d includes an element 100 that partially functions as the manager 200, a tablet that functions as the UI device 400, and a server 300.
  • At least one of the elements 100 also functions as the manager 200.
  • the elements 100 form a mesh network using Bluetooth (registered trademark) or the like. With such a configuration, in the system 10d, even when the communication with the server 300 and the UI device 400 (tablet) is temporarily disconnected, the element 100 can operate in an autonomous manner.
  • FIG. 6 is a diagram illustrating a fifth specific configuration example of the system illustrated in FIG. 1.
  • the system 10e includes an element 100 that partially functions as the manager 200, a tablet that functions as the UI device 400, and a server 300.
  • the system 10e is an example in which the manager 200 is incorporated in one of the elements 100 in the system 10c according to the third example.
  • FIG. 7 is a diagram illustrating a sixth specific configuration example of the system illustrated in FIG. 1.
  • the system 10f includes an element 100, managers 200a and 200b, a tablet that functions as the UI device 400, and a server 300.
  • the system 10f is an example in which a plurality of managers 200 are arranged in the system 10c according to the third example.
  • Each element 100 can be connected to, for example, the closer one of the managers 200a and 200b.
  • between the managers 200a and 200b, the connection states of the elements 100 and the program for operating the elements 100 in a coordinated manner are synchronized as necessary or periodically.
  • FIG. 8 is a diagram illustrating a seventh specific configuration example of the system illustrated in FIG. 1.
  • the system 10g includes an element 100, a manager 200a, a tablet that functions as the manager 200b and the UI device 400, and a server 300.
  • the system 10g is an example in which the function of the manager 200b is integrated into the tablet in the system 10f according to the sixth example.
  • Each element 100 can be connected to, for example, a manager 200a or a tablet closer to the position. Further, between the manager 200a and the tablet, the connection state of the element 100 and the program for operating the element 100 in a coordinated manner are synchronized as necessary or periodically.
  • FIG. 9 is a diagram illustrating an eighth specific configuration example of the system illustrated in FIG. 1.
  • the system 10h includes an element 100, a tablet that functions as the UI device 400, and a server 300 that also functions as the manager 200.
  • the system 10h is an example in which the function of the manager 200 is taken into the server 300 in the system 10b according to the second example.
  • Each element 100 communicates directly with the server 300 via, for example, a mobile communication network.
  • FIG. 10 is a diagram showing a first example of a UI provided in the system shown in FIG.
  • a screen 4100 displayed on the display of the terminal device functioning as the UI device 400 includes a user profile 4101, a program list tab 4103, and an element list tab 4105.
  • when the program list tab 4103 is selected, the program list 4107 is displayed.
  • the program list 4107 includes a program icon 4109, a used element icon 4111, and a description 4113.
  • the screen 4100 can be displayed, for example, as a portal screen when the user uses a service provided by the system 10.
  • FIG. 11 is a diagram showing a second example of a UI provided in the system shown in FIG.
  • a screen 4200 displayed on the display of the terminal device that functions as the UI device 400 includes a toolbar 4201, a canvas 4203, and a tray 4205.
  • the screen 4200 is used, for example, for editing a program for operating the element 100 in association with the system 10.
  • Function buttons such as “Save” and “Redo” are arranged on the toolbar 4201.
  • An element icon 4207, a detailed information icon 4209, and a link 4211 can be arranged or drawn on the canvas 4203. With these icons and links, it is possible to set and confirm the elements 100 used for the linkage operation, the processing executed by each element 100, and the relationship between the elements 100.
  • an element property box 4213 is arranged on the canvas 4203, and for example, the attribute and state of the element 100 arranged as the element icon 4207 are displayed.
  • in the tray 4205, the elements 100, software, and the like that can be incorporated into a program by being arranged on the canvas 4203 are displayed as icons.
  • the UI described with reference to FIGS. 10 and 11 is merely an example, and various forms of UI can be provided in the present embodiment.
  • for example, various UIs used in screens for visual programming can be applied.
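  • A program edited on the canvas can be thought of as a directed graph whose nodes are elements or software and whose edges are the links 601. The following Python sketch shows one possible in-memory form, anticipating the refrigerator example described next; the representation is an assumption for illustration, not taken from the publication.

```python
# Hypothetical in-memory form of a program edited on the canvas.
program = {
    "nodes": {
        "accel":  {"kind": "element",  "type": "acceleration sensor"},
        "opened": {"kind": "software", "type": "door-open detector"},
        "camera": {"kind": "element",  "type": "camera"},
    },
    "links": [
        ("accel", "opened"),   # feed detection values to the detector
        ("opened", "camera"),  # trigger shooting when the door opens
    ],
}

def downstream(prog, node):
    """Return the nodes that a given node links to."""
    return [dst for src, dst in prog["links"] if src == node]

print(downstream(program, "accel"))  # ['opened']
```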
  • FIG. 12 is a diagram for explaining a first specific example of element linking operation in the system shown in FIG. 1.
  • an acceleration sensor 100a and a camera 100b are used among the elements 100.
  • the acceleration sensor 100a is attached to the door of the refrigerator, and the camera 100b is attached at a position from which the interior of the refrigerator can be captured.
  • the link 601 from the acceleration sensor 100a to the camera 100b means that “the camera 100b performs shooting based on the detection value of the acceleration sensor 100a”.
  • “upload a captured image to the server” is designated as the operation of the camera 100b.
  • software 603a for detecting that the door is opened based on the acceleration is used.
  • the software 603a is executed by the arithmetic unit 221 of the manager 200, for example, and determines that the refrigerator door is opened based on the analysis result of the detection value of the acceleration sensor 100a.
  • through the link 601, a process of “when it is determined, based on the detection value of the acceleration sensor 100a, that the refrigerator door has been opened, the camera 100b performs shooting” is executed.
  • an image captured by the camera 100b when the refrigerator door is opened is uploaded to the server.
  • the user can grasp the change in the contents of the refrigerator and the latest inventory status by browsing the uploaded image.
  • a sensor dedicated to open / close detection using magnetism or the like can also be used.
  • by using the software 603a, it is possible to detect that the door has been opened using the acceleration sensor 100a. Therefore, in order to detect that the door of the refrigerator is opened, for example, the acceleration sensor 100a that has been used for another purpose can be used instead of a dedicated sensor.
  • conversely, the acceleration sensor 100a can also be diverted to other applications.
  • the same element 100 can be utilized for various uses by changing the combination of software and other elements 100.
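  • The door-open determination performed by the software 603a could, for example, look like the following Python sketch; the threshold value and the use of successive differences of acceleration magnitude are assumptions for illustration.

```python
def door_opened(accel_magnitudes, threshold=1.5):
    """Decide that the door was opened when the change between
    successive acceleration magnitudes exceeds a threshold."""
    deltas = [abs(b - a) for a, b in zip(accel_magnitudes, accel_magnitudes[1:])]
    return max(deltas, default=0.0) > threshold

# Detection values from the acceleration sensor 100a on the refrigerator door:
if door_opened([0.0, 0.1, 2.3, 1.8, 0.2]):
    pass  # link 601: instruct the camera 100b to shoot and upload the image
```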
  • FIG. 13 is a diagram for explaining a second specific example of element linking operation in the system shown in FIG. 1.
  • an acceleration sensor 100a and a button 100d are used among the elements 100.
  • the acceleration sensor 100a is attached to a toilet door
  • the button 100d is placed in a washroom.
  • software 603a for detecting that the door is opened based on acceleration and software 603b for recording the provided data are further used.
  • a link 601 from the acceleration sensor 100a to the software 603b through the software 603a means that the software 603b records that it is determined that the door is opened based on the detection value of the acceleration sensor 100a.
  • the link 601 from the button 100d to the software 603b means that the software 603b records that a signal is output from the button 100d.
  • the time when the user presses the button 100d in the washroom and the time when the toilet door is opened are recorded as a log. For example, if the user makes it a rule to press the button 100d when washing their face after getting up, the time when the button 100d was pressed can be recorded as the wake-up time. From such a log, it is possible to refer to changes in the daily wake-up time and the times of going to the toilet in time series, which can be used to improve the user's life rhythm.
  • FIG. 14 is a diagram for explaining a third specific example of the element linking operation in the system shown in FIG. Referring to FIG. 14, in this example, among the elements 100, an acceleration sensor 100a and a human sensor 100c are used.
  • the acceleration sensor 100a is attached to a chair
  • the human sensor 100c is attached to a desk in front of the chair.
  • software 603c for recording the time that the user has been sitting on the chair based on the detection results of the acceleration sensor 100a and the human sensor 100c is further used.
  • a link 601 from the acceleration sensor 100a to the software 603c means “providing the detection value of the acceleration sensor 100a to the software 603c”.
  • the link 601 from the human sensor 100c to the software 603c means “providing the detection value of the human sensor 100c to the software 603c”.
  • the time that the user has been sitting on the chair is recorded. Based on this record, the user can grasp the sitting time at the workplace and take a break if the sitting time is too long.
  • further, software that outputs an alert from a smartphone or the like based on the detection result of the software 603c may be combined, so that when the user has been sitting continuously for more than a predetermined time, an alert is output from the smartphone or the like to encourage the user to take a break.
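  • The accumulation performed by the software 603c, combined with such an alert, might be sketched as follows in Python; the one-hour limit and the reset-to-zero behavior are assumptions for illustration.

```python
def update_sitting_time(seated_seconds, accel_active, person_present,
                        tick=1, alert_after=60 * 60):
    """Accumulate seated time while both the chair's acceleration sensor
    and the desk's human sensor indicate the user is present; reset it
    otherwise, and raise an alert once the limit is exceeded."""
    if accel_active and person_present:
        seated_seconds += tick
    else:
        seated_seconds = 0
    if seated_seconds >= alert_after:
        print("You have been sitting for a while; consider taking a break.")
    return seated_seconds

seated = 0
for accel, person in [(True, True), (True, True), (False, True)]:
    seated = update_sitting_time(seated, accel, person)
print(seated)  # 0 -> the user left the chair, so the count was reset
```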
  • FIG. 15 is a diagram for explaining a fourth specific example of the element linking operation in the system shown in FIG. 1.
  • in this example, among the elements 100, an acceleration sensor 100a, a speaker 100e, and an LED lamp 100f are used.
  • the acceleration sensor 100a, the speaker 100e, and the LED lamp 100f are each attached to an appropriate part of the body of the user (for example, a child).
  • a plurality of acceleration sensors 100a may be used.
  • software 603d that reproduces sound effects according to the acceleration pattern and software 603e that causes the lamp to blink in a predetermined pattern according to the acceleration pattern are also shown.
  • the acceleration pattern handled by these software 603 may be a single acceleration waveform pattern or a combination of a plurality of acceleration waveforms.
  • a link 601 is set from the acceleration sensor 100a to each of the software 603d and 603e. These links 601 mean “provide the detection values of the acceleration sensor 100a to the software 603d and 603e, respectively”. Further, a link 601 is set from the software 603d to the speaker 100e, and from the software 603e to the LED lamp 100f. These links 601 mean “the speaker 100e outputs sound according to the audio signal provided by the software 603d” and “the LED lamp 100f emits light according to the signal provided by the software 603e”.
  • FIG. 16 is a diagram for explaining a fifth specific example of element linking operation in the system shown in FIG. 1.
  • in this example, among the elements 100, an acceleration sensor 100a, a human sensor 100c, a camera 100b, and an LED lamp 100f are used.
  • the acceleration sensor 100a, the human sensor 100c, and the camera 100b are attached to a bird feeder, and the LED lamp 100f is installed in the house.
  • software 603f for determining that a bird has perched on the feeder based on the detection results of the acceleration sensor 100a and the human sensor 100c is further used.
  • a link 601 is set from the acceleration sensor 100a and the human sensor 100c toward the software 603f.
  • the link 601 from the acceleration sensor 100a means “provide the detection value of the acceleration sensor 100a to the software 603f”. Further, the link 601 from the human sensor 100c means “providing the detection value of the human sensor 100c to the software 603f”. In the software 603f, for example, a condition that is established by a combination of sensor data provided by each of the acceleration sensor 100a and the human sensor 100c is provided. Furthermore, a link 601 is set from the software 603f to the camera 100b and the LED lamp 100f. The link 601 to the camera 100b means that “the camera 100b performs shooting based on the output of the software 603f”. The link to the LED lamp 100f means that the LED lamp 100f is caused to emit light based on the output of the software 603f.
  • FIG. 17 is a diagram for explaining a sixth specific example of element linking operation in the system shown in FIG. 1.
  • a button 100d and a speaker 100e are used among the elements 100.
  • the speaker 100e is disposed in a bedroom and the button 100d is disposed in a washroom.
  • alarm clock software 603g is further used.
  • a link 601 is set from the software 603g to the speaker 100e. This link 601 means that “the speaker 100e outputs audio according to the audio signal output when the software 603g reaches the set time”.
  • a link 601 is also set from the button 100d to the software 603g. This link 601 means “when the button 100d is pressed, the output of the audio signal by the software 603g is stopped”.
  • in this example, the alarm sound output from the speaker 100e disposed in the bedroom does not stop until the button 100d disposed in the washroom is pressed, which encourages the user to actually get out of bed.
  • the embodiment of the present disclosure relates to a process for a user to set a condition when the system performs a predetermined operation when a condition regarding sensor data provided by a sensor in the system is satisfied.
  • although the system 10 described with reference to FIG. 1 and the like is used as an example below, the system according to the embodiment of the present disclosure is not limited to this example.
  • Sensors in the system can include, for example, push buttons, acceleration sensors, image sensors, microphones, color sensors, illuminance sensors, temperature sensors, humidity sensors, human sensors, wireless signal strength detectors, distance sensors, pressure sensors, strain gauges, gas sensors, ultraviolet sensors, gyro sensors, magnetic sensors, speed sensors, tilt sensors, touch sensors, and the like.
  • the user selects or creates a condition established by sensor data provided by any of these sensors or a combination of sensor data provided by a plurality of types of sensors. Also, the user may be able to verify or adjust the selected or created conditions.
  • FIG. 18 is a diagram illustrating a first example of a system configuration according to an embodiment of the present disclosure.
  • the system 10-1 includes an element 100-1, a manager 200-1, and a UI device 400-1.
  • the element 100-1 includes a communication unit 110, a sensor control unit 120a, an actuator control unit 120b, a sensor 130a, and an actuator 130b.
  • the communication unit 110 includes a communication device that communicates with the manager 200-1 and other elements 100-1 via a network.
  • the sensor control unit 120a and the actuator control unit 120b are included in the control unit 120 described with reference to FIG.
  • the control unit 120 is realized by a microcontroller, a CPU, or the like.
  • the sensor control unit 120a controls the sensor 130a, and the actuator control unit 120b controls the actuator 130b.
  • the sensor 130a and the actuator 130b are included in the functional unit 130 described with reference to FIG.
  • the sensor 130a can include, for example, the various sensors described above.
  • the sensor 130a detects a change in the external environment and a user interaction with the element 100-1.
  • the actuator 130b includes, for example, an LED lamp, a motor, a speaker, and the like, and gives some action to the outside world.
  • the manager 200-1 includes a communication unit 210, an integrated control unit 222, a condition adjustment / generation unit 223, a condition management unit 224, a condition determination unit 225, and a condition handling unit 226.
  • the communication unit 210 includes a communication device that communicates with the element 100-1, the UI device 400-1, and the external system 500 via a network.
  • the integrated control unit 222, the condition adjustment / generation unit 223, the condition management unit 224, the condition determination unit 225, and the condition handling unit 226 are included in the control unit 220 described with reference to FIG.
  • the control unit 220 is realized by a microcontroller, a CPU, or the like.
  • the integrated control unit 222 controls the functional units related to conditions in the manager 200-1, the element 100, the UI device 400, and the like, and performs integrated control of the entire system 10 by executing information input/output and information processing. For example, the integrated control unit 222 acquires sensor data provided by the sensors 130a included in one or more elements 100-1.
  • the condition adjustment / generation unit 223 provides a function for the user to adjust a condition or generate a new condition. For example, the condition adjustment / generation unit 223 sets a condition relating to sensor data for detecting a detection target state, based on the sensor data acquired during the user's practice of the detection target state. At this time, the condition adjustment / generation unit 223 instructs the user, via the UI device 400, of the timing for practicing the detection target state, or carries out machine learning using the sensor data acquired during the practice of the detection target state as training data.
  • the condition adjustment / generation unit 223 also performs verification of a newly generated condition, an existing single condition, a condition combining a plurality of existing conditions, and the like, based on the sensor data acquired during the practice of the detection target state.
  • the condition adjustment / generation unit 223 can change the setting of a condition based on a user operation during the verification of the condition. Therefore, for example, if a condition that should be satisfied is not satisfied during verification, or if the reliability is determined to be low even though it is satisfied, the user can change the setting of the condition in real time.
  • the condition management unit 224 manages the conditions that are already generated and stored in the system 10 and the newly generated condition candidates. Conditions and condition candidates are stored in, for example, the storage unit 230 (not shown in FIG. 18) of the manager 200-1. Conditions and candidate conditions are stored by associating identification information (ID) with a mathematical model for determining whether or not the condition is satisfied by sensor data provided by the sensor 130a, for example.
  • the condition management unit 224 stores newly generated conditions (including conditions obtained by combining a plurality of existing conditions) in the storage unit 230 and the like, and reads one or more existing conditions from the storage unit 230.
  • the condition determination unit 225 determines whether the condition is satisfied based on the sensor data provided by the sensor 130a. The determination is executed by referring to the condition saved in the storage unit 230 of the manager 200-1 by the condition management unit 224 or by copying it into the condition determination unit 225, for example.
  • the condition determining unit 225 determines whether or not the condition is satisfied based on the sensor data, for example, according to a mathematical model defined in the condition.
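  • One way to picture a condition as it is managed and evaluated here is the following Python sketch: identification information (ID) associated with a mathematical model, applied to sensor data by a determination step. The dataclass form and the example model are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Condition:
    """Identification information (ID) associated with a mathematical
    model that decides whether given sensor data satisfies the condition."""
    condition_id: str
    model: Callable[[Sequence[float]], bool]

def determine(condition: Condition, sensor_data: Sequence[float]) -> bool:
    """Role of the condition determination unit 225 (or 121): apply the
    stored model to the acquired sensor data."""
    return condition.model(sensor_data)

refrigerator_open = Condition(
    condition_id="cond-0001",
    model=lambda xs: max(xs) - min(xs) > 1.5,  # illustrative model
)
print(determine(refrigerator_open, [0.0, 0.2, 2.1]))  # True
```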
  • the condition handling unit 226 realizes a predetermined operation in the system 10 when the condition is satisfied. More specifically, when the condition determination unit 225 determines that the condition is satisfied, the condition handling unit 226 realizes the operation of the system 10 associated with the condition.
  • the operation of the system 10 is realized, for example, by causing the actuator 130b to perform a predetermined operation via the actuator control unit 120b of the element 100-1, by requesting a predetermined function from the external system 500, or by outputting predetermined information from the output unit 430b via the control unit 420 of the UI device 400.
  • the UI device 400-1 includes a communication unit 410, a control unit 420, an input unit 430a, and an output unit 430b.
  • the communication unit 410 includes a communication device that communicates with the manager 200-1 and the like via a network.
  • the control unit 420 is realized by a microcontroller, a CPU, or the like, and controls the input unit 430a and the output unit 430b.
  • the input unit 430a and the output unit 430b are included in the input / output unit 430 described with reference to FIG.
  • the input unit 430a is realized by a keyboard, a mouse, a touch panel, and the like, and receives an operation input from a user.
  • the output unit 430b is realized by a display, a speaker, and the like, and transmits the output from the system 10 to the user.
  • the UI device 400-1 functions as a terminal device that outputs or presents instructions or information related to setting conditions provided by the condition adjustment / generation unit 223 of the manager 200.
  • the instructions or information regarding the setting of the condition may be output or presented to the user via a device different from the element 100-1 (the device including the sensor 130a), or, as in an example described later, may be output or presented via the actuator 130b included in the element 100-1.
  • the external system 500 is a system that provides a predetermined function without depending on components such as the element 100-1, the manager 200-1, and the UI device 400-1.
  • the external system 500 can be, for example, a system that exists outside the system 10 (including the server 300 not shown).
  • the external system 500 is realized by, for example, one or a plurality of information processing devices that constitute a server.
  • the external system 500 includes a communication unit 510 including a communication device that communicates with the manager 200 (or the server 300) via a network, and a function unit 520 that provides a predetermined function via the communication unit 510.
  • the functional unit 520 can include, for example, a microcontroller or CPU that provides an arithmetic function, a memory or storage that provides a data storage function, and the like.
  • in the first example, the processing related to conditions is concentrated in the manager 200-1. Accordingly, the sensor data provided by the sensors 130a included in one or more elements 100-1 is transmitted to the manager 200-1; in the manager 200-1, the integrated control unit 222 identifies the element 100-1 that transmitted the sensor data, and the condition determination unit 225 determines whether the condition is satisfied based on the sensor data. When the condition is satisfied, the condition handling unit 226 transmits, via the communication unit 210, a control command for causing the system 10 to perform a predetermined operation to the element 100-1, the UI device 400-1, or the external system 500.
  • in the UI device 400-1, when the user's operation input for changing the setting of a condition is accepted by the input unit 430a, information indicating the operation input is transmitted to the manager 200-1 via the communication unit 410 of the UI device 400-1. In the manager 200-1, the condition adjustment / generation unit 223 changes the setting of the condition according to the operation input, and the condition management unit 224 updates the condition and the information related to the condition with the changed setting.
  • when a condition is adjusted by real-time feedback, for a condition candidate presented by the system 10 or a condition generated or selected by the user, the sensor data provided by the sensors 130a included in one or more elements 100-1 is transmitted to the manager 200-1, and the condition determination unit 225 determines whether the condition is satisfied based on the sensor data. The determination result is transmitted to the UI device 400-1 via the communication unit 210 and conveyed to the user from the output unit 430b under the control of the control unit 420. The user can thus readjust or reselect the condition while referring to feedback on whether the condition is satisfied by the sensor data acquired when the detection target state is put into practice.
  • FIG. 19 is a diagram illustrating a second example of a system configuration according to an embodiment of the present disclosure.
  • the system 10-2 includes an element 100-2, a manager 200-2, and a UI device 400-2.
  • Element 100-2 includes a communication unit 110, a sensor control unit 120a, an actuator control unit 120b, a condition determination unit 121, a sensor 130a, and an actuator 130b.
  • the condition determination unit 121 is also included in the control unit 120 described above with reference to FIG. 1 in the same manner as the sensor control unit 120a and the actuator control unit 120b.
  • the control unit 120 is realized by a microcontroller, a CPU, or the like.
  • the condition determination unit 121 determines whether or not a set condition is satisfied by, for example, the detection value from the sensor 130a, similarly to the condition determination unit 225 of the manager 200-1 in the first example.
  • the conditions used for the determination are stored, for example, in the storage unit 230 of the manager 200-2.
  • condition determination unit 121 executes the determination by referring to the condition via the communication unit 210 or by copying the condition into the condition determination unit 121.
  • the conditions used for the determination may be stored in a storage unit (not shown) of the element 100-2.
  • the condition determining unit 121 determines whether or not the condition is satisfied based on the sensor data provided by the sensor 130a, for example, according to a mathematical model defined in the condition. Note that the communication unit 110, the sensor control unit 120a, the actuator control unit 120b, the sensor 130a, and the actuator 130b are the same as in the first example described above, and thus redundant description is omitted.
  • the manager 200-2 includes a communication unit 210, an integrated control unit 222, a condition adjustment / generation unit 223, a condition management unit 224, and a condition handling unit 226.
  • in the second example, the condition determination unit 121 is included in the element 100-2, while the condition determination unit 225 is not included in the manager 200-2.
  • the integrated control unit 222 transmits the conditions stored in the storage unit 230 of the manager 200-2 by the condition management unit 224 to the element 100-2 via the communication unit 210. The transmission may be executed after the sensor data is acquired, or may be executed in advance prior to acquisition of the sensor data.
  • the integrated control unit 222 may distribute condition information in advance to the condition determination unit 121 of the element 100-2 having the sensor 130a related to the condition.
  • the condition handling unit 226 is notified, via the communication unit 210, of the determination result as to whether or not the condition is satisfied from the condition determination unit 121 of the element 100-2.
  • the communication unit 210, the integrated control unit 222, the condition adjustment / generation unit 223, the condition management unit 224, and the condition handling unit 226 are otherwise the same as those in the first example, and thus redundant description is omitted.
  • the configurations of the UI device 400-2 and the external system 500 are the same as those of the UI device 400-1 and the external system 500 in the first example, and therefore, a duplicate description thereof will be omitted.
  • the condition determination unit 121 is included in the element 100-2 having the sensor 130a that acquires the detection value. Therefore, the detection value of the sensor 130a included in one or more elements 100-2 is used to determine whether or not a condition is satisfied inside the element 100-2.
  • when the condition is satisfied, the condition determination unit 121 transmits information indicating the satisfied condition to the manager 200-2 via the communication unit 110.
  • in the manager 200-2, the integrated control unit 222 identifies the element 100-2 from which the information was transmitted, and the condition handling unit 226 transmits, via the communication unit 210, a control command for causing the system 10 to realize a predetermined operation to the element 100-2, the UI device 400-2, or the external system 500.
  • in the UI device 400-2, when the user's operation input for changing the setting of a condition is accepted by the input unit 430a, information indicating the operation input is transmitted to the manager 200-2 via the communication unit 410 of the UI device 400-2.
  • the condition adjustment / generation unit 223 changes the setting of the condition according to the operation input, and the condition management unit 224 updates the condition with the changed setting.
  • the integrated control unit 222 may notify the element 100-2 of the update of the condition, and in the element 100-2, the condition determination unit 121 may change the setting of the condition held internally.
  • when a condition is adjusted by real-time feedback, for a condition candidate presented by the system 10 or a condition generated or selected by the user, the condition determination unit 121 determines whether the condition is satisfied based on the sensor data provided by the sensors 130a included in one or more elements 100-2. The determination result is transmitted to the manager 200-2 via, for example, the communication unit 110, further transferred from the communication unit 210 to the UI device 400-2, and conveyed to the user from the output unit 430b under the control of the control unit 420. The user can thus readjust or reselect the condition while referring to feedback on whether the condition is satisfied by the sensor data provided by the sensors.
  • FIG. 20 is a diagram illustrating a third example of a system configuration according to an embodiment of the present disclosure.
  • the system 10-3 includes an element 100-3, a manager 200-3, and a UI device 400-3.
  • the element 100-3 can have the same configuration as the element 100-2 in the second example.
  • the manager 200-3 can have the same configuration as the manager 200-1 in the first example. That is, in the third example, in addition to at least a part of the element 100-3 including the condition determining unit 121, the manager 200-3 also includes the condition determining unit 225. Note that the configurations of the element 100-3 and the manager 200-3 are the same as those in the first example or the second example described above, and a duplicate description is omitted.
  • for example, whether to provide the condition determination unit 121 in the element 100-3 or to use the condition determination unit 225 of the manager 200-3 is determined according to the type of detection value of the sensor 130a, the size of the memory or storage of each element 100-3, the bandwidth available for communication between each element 100-3 and the manager 200-3, and the like.
  • when the condition determination unit 225 of the manager 200-3 is used for the detection value of a certain sensor 130a, the element 100-3 including that sensor 130a need not include the condition determination unit 121.
  • the configurations of the UI device 400-3 and the external system 500 are the same as those of the UI devices 400-1 and 400-2 and the external system 500 in the first and second examples described above, and thus redundant description is omitted.
  • FIG. 21 is a flowchart illustrating an example of processing according to an embodiment of the present disclosure.
  • in the following, the processing will be described taking the first example of the system configuration as the processing subject, but the same processing can be performed for the second and third examples of the system configuration.
  • accordingly, the system 10-1, the element 100-1, the manager 200-1, and the UI device 400-1 are simply referred to as the system 10, the element 100, the manager 200, and the UI device 400, respectively.
  • the sensor used for the condition is selected (S101).
  • the sensor is selected by a user operation, for example.
  • the input unit 430a of the UI device 400 acquires a user operation for selecting a sensor.
  • Information indicating the user operation is transmitted to the manager 200 via the control unit 420 and the communication unit 410.
  • the output unit 430b of the UI device 400 may present a UI screen showing the selectable sensors to the user, based on information transmitted from the integrated control unit 222 of the manager 200 via the communication unit 210.
  • the integrated control unit 222 may narrow down the selectable sensors from all available sensors, for example, according to the type of condition to be set. Alternatively, the integrated control unit 222 may automatically perform sensor selection itself according to the type of condition to be set.
  • the integrated control unit 222 identifies the selected sensor from the sensors 130a available in the system 10, based on the information indicating the user operation received from the UI device 400 via the communication unit 210. Further, when setting a condition using the sensor data provided by the selected sensor, the integrated control unit 222 presents to the user, via the output unit 430b of the UI device 400, a UI screen inquiring whether or not to automatically search for condition candidates (S103). Note that the condition candidates here may include conditions that have already been generated and saved in the storage unit 230 of the manager 200 by the condition management unit 224.
  • next, the system 10 enters a standby state for the detection target state. More specifically, the integrated control unit 222 of the manager 200 notifies the UI device 400, via the communication unit 210, that recording of the detection target state will start, and further requests the user to practice the detection target state. Upon receiving these, the output unit 430b of the UI device 400 notifies the user that recording of the detection target state will start, and then presents a UI screen prompting the user to practice the detection target state. In response, the user practices the detection target state (S105).
  • the practice of the detection target state in S105 means, for example, that the user actually brings about the state that the user desires to be detected under the condition to be set, after putting the sensor 130a into operation. More specifically, for example, when the user wants to detect that the door of the refrigerator is opened using the detection value of the acceleration sensor 100a, in S105 the user attaches the acceleration sensor 100a to the refrigerator door and then opens the door. While the detection target state is put into practice, sensing by the one or plurality of sensors 130a is performed.
  • the practice of the detection target state may be started after notifying the user of the start of recording or requesting the user to practice, as described above, or may be started automatically without outputting such a notification or request. Moreover, the practice of the detection target state may be performed only once, may be repeated and then ended automatically, or may be repeated until the user instructs it to end. When the practice of the detection target state is ended automatically, the end condition can be, for example, that the number of repetitions reaches a preset number, or that a certain time has elapsed since the practice was started. Furthermore, when the detection target state is practiced, a time stamp recorded at an arbitrary timing by the user, or an image or sound recording the user practicing the detection target state, may be additionally acquired in order to make it easier to identify the section of the detection values that indicates the detection target state.
  • the condition determination unit 225 determines whether there is a condition candidate that is satisfied by the detection value of the sensor 130a acquired in the practiced detection target state (S107).
  • when there is no condition candidate that is satisfied, the integrated control unit 222 may inquire of the UI device 400, through the communication unit 210, whether or not to retry the practice of the detection target state. Receiving this, the output unit 430b of the UI device 400 asks the user whether or not to retry the practice of the detection target state (S109). When the user selects to retry, the practice of the detection target state in S105 is executed again.
  • when there is a condition candidate that is satisfied, the integrated control unit 222 determines to apply the condition candidate (S111). At this time, when there are a plurality of condition candidates, a UI screen showing the selectable condition candidates may be presented to the user via the output unit 430b of the UI device 400, and a user operation selecting a condition candidate may be acquired via the input unit 430a. For the applied condition candidate, a condition verification process (S115) is performed as necessary. Details of the condition verification process will be described later.
  • on the other hand, when there is no applicable condition candidate, a condition generation process (S113) is performed to newly generate a condition, followed by the condition verification process (S115) as necessary.
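  • The candidate search of S107 and the branches of S109, S111, and S113 might be sketched as follows in Python, reusing the ID-plus-model form of a condition from the earlier sketch; all names and values are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Candidate:
    condition_id: str
    model: Callable[[Sequence[float]], bool]

def find_candidates(stored, practice_values):
    """S107: return the stored condition candidates whose models are
    satisfied by the detection values recorded during practice (S105)."""
    return [c for c in stored if c.model(practice_values)]

stored = [
    Candidate("door-open", lambda xs: max(xs) - min(xs) > 1.5),
    Candidate("vibration", lambda xs: sum(abs(x) for x in xs) > 10.0),
]
hits = find_candidates(stored, [0.0, 0.2, 2.1])
if not hits:
    print("no candidate: offer a retry (S109) or generate a condition (S113)")
elif len(hits) == 1:
    print("apply", hits[0].condition_id)  # S111
else:
    print("ask the user to choose among", [c.condition_id for c in hits])  # S111
```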
  • FIG. 22 is a flowchart showing the condition generation process in FIG. 21 in more detail.
  • in the condition generation process, the user is first asked whether or not to generate a composite condition. A composite condition is a condition candidate generated, for example, by combining a plurality of conditions already generated and stored in the storage unit 230 of the manager 200 by the condition management unit 224.
  • for example, a composite condition of “double press including long press” can be generated by combining the condition of “long press” and the condition of “double press”.
  • the integrated control unit 222 passes through the output unit 430b of the UI device 400. Then, a UI screen for selecting a condition that becomes a component of the composite condition is presented, and the input unit 430a acquires a user operation for selecting the condition of the component (S123).
  • Then, the condition management unit 224 reads the selected existing conditions, and the condition adjustment / generation unit 223 generates a composite condition using these existing conditions (S125).
  • On the other hand, when a user operation indicating that a composite condition is not to be used is acquired via the input unit 430a of the UI device 400 (NO), the integrated control unit 222 generates a condition using machine learning. First, it determines whether or not reusable data exists (S127).
  • Reusable data is data already available in the system 10 or in the external system 500. For example, when it is desired to generate a condition that includes identifying the user's face using the camera 100b, the reusable data may be an image including the user's face, such as a photograph taken by the camera and stored in storage, or a photograph posted to a service such as social media.
  • When reusable data exists, it is reused as training data (S129). More specifically, the integrated control unit 222 acquires the data from, for example, the external system 500 and provides it to the condition adjustment / generation unit 223.
  • When reusable data does not exist, the user practices the detection target state (S131) in the same manner as in S105 described above with reference to FIG. 21.
  • As in S105, the practice of the detection target state may start after the user is notified that recording will start or is requested to practice the detection target state, or it may start automatically without such a notification or request being output.
  • The practice of the detection target state may be performed only once, may be repeated and then ended automatically, or may be repeated until the user instructs it to end.
  • An image or sound recording the state of practice may be additionally acquired.
  • Data including the detection values of the sensor 130a acquired during the practice of the detection target state in S131 is recorded as training data by the integrated control unit 222 (S133).
  • Next, the condition adjustment / generation unit 223 performs learning using the training data acquired in S129 or S133 above (S135).
  • The learning here is so-called supervised machine learning, and various known machine learning techniques can be applied.
  • Based on the learning result, the condition adjustment / generation unit 223 generates a new condition (S137).
  • The condition generated in S125 or S137 is stored in the storage unit 230 of the manager 200 by the condition management unit 224 after passing through the condition verification process (S115) shown in FIG. 21.
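  • The disclosure leaves the learning algorithm open; purely as one possibility, the following sketches S135 and S137 using a generic scikit-learn classifier over fixed-length windows of detection values (all names and the choice of classifier are assumptions):

```python
from sklearn.ensemble import RandomForestClassifier

def learn_condition(positive_windows, negative_windows):
    """Perform supervised learning on the training data (S135) and wrap
    the trained model as a new condition predicate (S137).
    Each window must be a fixed-length sequence of detection values."""
    X = positive_windows + negative_windows
    y = [1] * len(positive_windows) + [0] * len(negative_windows)
    model = RandomForestClassifier().fit(X, y)
    # The new condition is satisfied when the model predicts the
    # detection target state for a window of detection values.
    return lambda window: int(model.predict([window])[0]) == 1
```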
  • FIG. 23 is a flowchart showing the condition verification process in FIG. 21 in more detail.
  • In the condition verification process, the user is first asked, via the output unit 430b of the UI device 400, whether to verify the condition (S141).
  • When the user chooses not to verify, the condition verification process is not performed. That is, the condition candidate applied in S111 of FIG. 21 or the new condition generated in S113 is determined as the condition as it is.
  • When the user chooses to verify, the condition is verified (S143).
  • The verification of the condition can be performed, for example, by real-time feedback from the system 10 to the user.
  • More specifically, the detection values of the sensor 130a are transmitted to the manager 200, and the condition determination unit 225 determines, based on the detection values, whether or not the condition is satisfied. The determination result is then transmitted to the UI device 400-1 via the communication unit 210 and presented to the user from the output unit 430b under the control of the control unit 420.
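  • A minimal sketch of such a verification loop, with hypothetical `read_sensor`, `condition`, `show_result`, and `stop` callbacks standing in for the sensor 130a, the condition determination unit 225, the output unit 430b, and the end of verification:

```python
def verify_condition_realtime(read_sensor, condition, show_result, stop):
    """Judge the condition on live detection values and feed the result
    back to the user until the verification is stopped (S143)."""
    while not stop():
        value = read_sensor()          # detection value from the sensor
        show_result(condition(value))  # e.g. circle / X mark on the UI
```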
  • In response to the verification result output in S143, the UI device 400 presents to the user, via the output unit 430b, a UI screen asking whether or not the condition itself is to be changed (S145).
  • When the user chooses to change the condition itself, the process returns to S103 illustrated in FIG. 21. That is, the process is re-executed from the inquiry as to whether or not to automatically search for condition candidates.
  • Otherwise, the UI device 400 further presents to the user, via the output unit 430b, a UI screen asking whether to adjust the condition (S147).
  • When a user operation indicating that the condition is to be adjusted is acquired by the input unit 430a (YES), the condition is adjusted (S149). More specifically, for example, the UI device 400 presents to the user a UI screen for adjusting the condition by changing a setting value such as a threshold or a range of detection values, and the input unit 430a acquires the user operation changing the setting value.
  • When the condition adjustment is completed, the verification in S143 is performed again.
  • When the user chooses not to adjust the condition, the condition verification process ends, and the condition candidate applied in S111 of FIG. 21 or the new condition generated in S113, or a condition obtained by adjusting it in S149, is determined as the condition.
  • FIG. 24 is a sequence diagram illustrating an example of processing for obtaining a time stamp together with sensor data in the embodiment of the present disclosure.
  • As described above, a time stamp recorded at an arbitrary timing by the user during the practice of the detection target state may be additionally acquired by the condition adjustment / generation unit 223.
  • FIG. 24 shows the processing related to the practice of the detection target state when such a time stamp is acquired.
  • First, the integrated control unit 222 determines the start of recording (S201). For example, the recording may be started in response to an operation input by the user via the input unit 430a of the UI device 400, or may be started automatically as in the illustrated example.
  • Next, the sensor 130a in the element 100 performs sensing (S203), and the sensor control unit 120a transmits the detection value to the manager 200 via the communication unit 110.
  • In the manager 200, the integrated control unit 222 stores the sensor detection value received via the communication unit 210 (S205).
  • Meanwhile, the UI device 400 accepts a time stamp generation operation by the user via the input unit 430a.
  • The time stamp generation operation is executed at an arbitrary timing (S207), and a command instructing the generation of a time stamp is transmitted from the UI device 400 to the manager 200.
  • In the manager 200, the integrated control unit 222 records a time stamp based on the command received via the communication unit 210 (S209). Note that the time series of the recorded time stamps and the time series of the sensor detection values are synchronized.
  • After that, the sensor 130a may perform further sensing (S211), and the sensor detection value may be stored in the manager 200 (S213).
  • The time stamp generation in S207 and S209 may also be executed before the sensing in S203 and S205. The generation of a time stamp is not limited to once and may be repeated; in that case, sensing may be performed one or more times between successive time stamps, or time stamps may be generated consecutively without sensing in between. Likewise, after the last time stamp, sensing may be performed one or more times or not at all.
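  • One way to keep the two time series synchronized is to record both against a single clock; the following is a sketch under that assumption (names hypothetical):

```python
import time

class PracticeRecorder:
    """Store sensor detection values (S205, S213) and user time stamps
    (S209) against one clock so the two time series stay synchronized."""
    def __init__(self):
        self.samples = []      # (time, detection value) pairs
        self.timestamps = []   # times of user-generated marks

    def store_sample(self, value):
        self.samples.append((time.monotonic(), value))

    def record_timestamp(self):
        self.timestamps.append(time.monotonic())
```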
  • Thereafter, the integrated control unit 222 determines the end of recording (S215).
  • The recording may be ended in response to a user operation input via the input unit 430a of the UI device 400, or may be ended automatically as in the illustrated example.
  • For example, the recording may be automatically ended on the condition that a time stamp has been generated.
  • When the recording ends, the integrated control unit 222 outputs to the UI device 400 data in which the sensor detection values stored in S205, S213, and the like and the time stamps recorded in S209 are associated in time series (S217).
  • In the UI device 400, the output unit 430b displays a section setting UI for specifying, using the above data, the section of detection values that indicates the detection target state (S219).
  • In the section setting UI, the time stamps are displayed together with the time-series waveform of the sensor data, assisting the user in the operation of setting the section of the sensor data.
  • Alternatively, the condition adjustment / generation unit 223 may automatically extract the sensor data based on the time stamps.
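  • A sketch of one possible automatic extraction, cutting a fixed window around each time stamp; the window length is an assumption, not something the disclosure specifies:

```python
def extract_sections(samples, timestamps, window_sec=1.0):
    """Cut out the detection values around each user time stamp as
    candidate sections indicating the detection target state."""
    half = window_sec / 2
    return [[(t, v) for t, v in samples if ts - half <= t <= ts + half]
            for ts in timestamps]
```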
  • FIG. 25 is a sequence diagram illustrating an example of processing for acquiring an image / sound together with sensor data in the embodiment of the present disclosure.
  • As described above, an image or sound recording the state of the user practicing the detection target state may be additionally acquired.
  • FIG. 25 shows an example of the processing in such a case.
  • In this example, images and sounds are recorded by the recording / recording apparatus 600. Although not shown in the preceding drawings, the recording / recording apparatus 600 is a camera or microphone that operates in conjunction with the manager 200, or an apparatus incorporating these.
  • the recording / recording apparatus 600 may be incorporated in the system 10 or may be outside the system 10, for example.
  • For example, the element 100 having an image or sound recording function, the manager 200, or the UI device 400 may function as the recording / recording apparatus 600.
  • First, the integrated control unit 222 determines the start of recording (S231). As in the example described above with reference to FIG. 24, the recording can be started by various triggers.
  • Next, the integrated control unit 222 notifies the recording / recording apparatus 600 of the start of recording. In response to this notification, the recording / recording apparatus 600 starts recording (S233).
  • Meanwhile, the sensor 130a in the element 100 performs sensing (S235), and the sensor control unit 120a transmits the detection value to the manager 200 via the communication unit 110.
  • In the manager 200, the integrated control unit 222 temporarily stores the sensor detection value received via the communication unit 210 (S237). The sensing in S235 and the storage in S237 may be executed only once or may be repeated many times.
  • Thereafter, the integrated control unit 222 determines the end of recording (S239). As in the example described above with reference to FIG. 24, the recording can be ended by various triggers.
  • Next, the integrated control unit 222 notifies the recording / recording apparatus 600 of the end of recording. Upon receiving this notification, the recording / recording apparatus 600 ends the recording and outputs the image / sound data to the manager 200 (S241).
  • The integrated control unit 222 of the manager 200, having received the image / sound data, outputs to the UI device 400 data in which the sensor detection values stored in S237 and the like and the image / sound data received from the recording / recording apparatus 600 are associated in time series (S243).
  • In the UI device 400, the output unit 430b displays a section setting UI for specifying, using the above data, the section of detection values that indicates the detection target state (S245). A specific example of the section setting UI in this case, like that in the example of FIG. 24, will be described later.
  • FIG. 26 is a diagram illustrating an example of a user interface for sensor selection according to an embodiment of the present disclosure.
  • In the present embodiment, a condition may relate to one or more sensors 130a. The condition adjustment / generation unit 223 of the manager 200 can provide, via the UI device 400, an interface for selecting the one or more sensors related to the condition from the group of available sensors.
  • the condition adjustment / generation unit 223 may automatically select one or more sensors related to the condition from the available sensor group.
  • In the illustrated example, “Button 1” 4303 is selected in the list.
  • When “Button 1” is selected, the condition list 4305 assigned to “Button 1” is displayed.
  • The condition list 4305 in the illustrated example indicates that the “LongPress” and “DoublePress” conditions have already been assigned to “Button 1”.
  • Furthermore, the user can assign a new condition using “Button 1” by selecting “Assign Condition” displayed below the list.
  • FIG. 27 is a diagram illustrating an example of a user interface for condition selection according to an embodiment of the present disclosure.
  • FIG. 27 shows an assignable condition list 4311 that is displayed when, for example, “Assign Condition” is selected in FIG. 26.
  • The condition list 4311 shows, for example, the conditions that can be assigned to “Button 1” in the example of FIG. 26.
  • The condition list 4311 shows default conditions 4313 including “Press”, “DoublePress”, and “LongPress”, a composite condition 4315, and a learning generation condition 4317.
  • The default conditions 4313 are conditions that have already been generated and stored in the storage unit 230 of the manager 200 or the like. These conditions can be assigned to “Button 1” by, for example, specifying them on the condition list 4311.
  • When the composite condition 4315 or the learning generation condition 4317 is selected, the condition generation process described above with reference to FIG. 22, for example, is started.
  • FIG. 28 is a diagram illustrating an example of a user interface for condition proposal according to an embodiment of the present disclosure.
  • FIG. 28 shows an example of a UI displayed when “automatically search for condition candidates” is designated in S103 of the example described with reference to FIG. 21.
  • The standby screen 4321 is displayed while the system waits for the user to practice the detection target state in order to search for condition candidates.
  • In the illustrated example, the system is waiting for the detection target state to be practiced for the sensor “Acceleration 1”.
  • When a condition candidate is found, a search result presentation screen 4323 is displayed.
  • On the screen 4323, the name of the condition candidate is displayed, and a retry button 4325 is displayed for the case where the user thinks the condition candidate is not appropriate (in allowing a retry even when a condition candidate exists, this differs from the example shown in FIG. 21; either example can be adopted in the present embodiment).
  • When no condition candidate is found, a search result presentation screen 4327 is displayed. On the screen 4327, it is indicated that no condition candidate was found, and the retry button 4325 is displayed.
  • FIG. 29 is a diagram illustrating an example of a user interface for generating a composite condition according to an embodiment of the present disclosure.
  • FIG. 29 shows a composite condition list 4331.
  • The composite condition list 4331 includes an order 4333, components 4335, and component candidates 4337.
  • Each condition included in the composite condition is indicated by a component 4335, and the order of the conditions is indicated by the order 4333.
  • A component 4335 can be added or changed by selecting from the component candidates 4337, and can also be deleted.
  • In the illustrated example, “DoublePress” is set at order “1”, “LongPress” at order “2”, and “Press” at order “3”. In this case, the composite condition is satisfied when “DoublePress”, “LongPress”, and finally “Press” are executed in this order.
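  • A sketch of one way such an ordered composite condition could be judged, assuming each component condition reports the times at which it was satisfied (a hypothetical representation); for the example above it would receive the satisfaction times of “DoublePress”, “LongPress”, and “Press” in that order:

```python
def composite_satisfied(component_event_times):
    """component_event_times: one list of satisfaction times per
    component, in the order given by the composite condition list.
    The composite condition holds if each component can be satisfied
    strictly after the previous one."""
    last = float("-inf")
    for times in component_event_times:
        nxt = min((t for t in times if t > last), default=None)
        if nxt is None:
            return False
        last = nxt
    return True
```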
  • The composite condition generated using the composite condition list 4331 can be referred to and selected, as an arbitrarily set condition 4319, together with the default conditions 4313 in the condition list 4311 described above with reference to FIG. 27.
  • FIG. 30 is a diagram illustrating another example of a user interface for generating a composite condition according to an embodiment of the present disclosure.
  • FIG. 30 shows a composite condition list 4331 as in the example of FIG. 29 described above. In the illustrated example, however, the same order 4333 of “1” is set for the two components 4335 “DetectFace” and “DetectHand”.
  • FIG. 30 also shows an example of a condition list 4341 different from that shown in FIG. 27.
  • The composite condition generated using the composite condition list 4331 can be referred to and selected, as an arbitrarily set condition 4349, together with the default conditions 4343, the (new) composite condition 4345, and the learning generation condition 4347 in the condition list 4341.
  • FIG. 31 is a diagram illustrating an example of a user interface for training data input according to an embodiment of the present disclosure.
  • FIG. 31 shows an example of a UI displayed when the detection target state is practiced and training data is recorded in S131 and S133 of the example described with reference to FIG. 22.
  • The screen 4401 and the screen 4407 in the illustrated example display, for the case where the user repeatedly inputs training data, the “time accepted as a single piece of training data corresponding to the detection target state” and the “timing to start inputting training data”, so as to communicate them to the user.
  • When the start button 4403 is pressed on the screen 4401, the circle 4405 at the center starts to be cut away, as shown by the circle 4411.
  • The time from the state of the circle 4411 until the circle is completely cut away indicates the “time accepted as a single piece of training data”.
  • The timing at which the cut-away region goes all the way around and the display returns from the circle 4411 to the circle 4405 in its initial state is the “timing to start (restart) inputting training data”.
  • FIG. 32 is a diagram illustrating another example of a user interface for inputting training data according to an embodiment of the present disclosure.
  • FIG. 32 shows a screen 4413 and a screen 4417 that differ in part from the screen 4401 and the screen 4407 described above with reference to FIG. 31.
  • On the screen 4413, a start button 4403 and a circle 4405 are displayed in the same manner as on the screen 4401, and in addition a number setting unit 4415 is displayed.
  • The number setting unit 4415 allows the user to set the number of repetitions of training data input.
  • On the screen 4417, a circle 4411 is displayed as on the screen 4407, but no stop button is displayed. Instead, a remaining number display 4419 shows the number of repetitions remaining until the input of training data ends automatically.
  • Note that the “time accepted as a single piece of training data” can also be specified automatically based on a time stamp or an image / sound acquired together with the sensor data, as described above.
  • In that case, the user does not have to gauge the input timing of training data from the screen, and the user interface may therefore be simplified by omitting the circles 4405 and 4411.
  • FIG. 33 is a diagram illustrating a first example of a user interface for inputting training data including an upper limit value and a lower limit value according to an embodiment of the present disclosure.
  • In some cases, the definition of a condition includes a range of detection values included in the sensor data.
  • FIG. 33 shows a screen 4421 for recording or referencing the upper limit value and the lower limit value of the detection value range when generating a condition “Fast & Slow”.
  • Screen 4421 includes a condition name 4423 and a record / reference button 4425.
  • the record / reference button 4425 includes a record button 4425a and a reference button 4425b.
  • When the record button 4425a is pressed, a training data input user interface as described below is displayed, and training data can be input.
  • When the reference button 4425b is pressed, a user interface for referring to existing data is displayed, and the existing data can be used as training data.
  • FIG. 34 is a diagram illustrating a second example of a user interface for inputting training data including an upper limit value and a lower limit value according to an embodiment of the present disclosure.
  • FIG. 34 shows a screen 4431 and a screen 4435 that are displayed when the record button 4425a is pressed in the example of FIG. 33. On the screen 4431, text 4433 instructs the user to input training data for the upper limit value. On the screen 4435, text 4437 instructs the user to input training data for the lower limit value.
  • In another example, the screen 4431 and the screen 4435 may be displayed directly, without going through the screen 4421.
  • In that case, the upper limit input screen 4431 and then the lower limit input screen 4435 are displayed automatically and in succession, so that the user can input training data for the upper and lower limit values of the range, while checking the texts 4433 and 4437, without any additional operation.
  • Furthermore, the condition adjustment / generation unit 223 of the manager 200 may output, via the UI device 400, an instruction to practice the detection target state so as to correspond to one or more detection values within the range related to the condition definition (not limited to the upper and lower limit values; intermediate values may be included).
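  • A sketch of deriving such a range condition from upper-limit and lower-limit training data; the use of min / max and the optional margin are assumptions:

```python
def make_range_condition(upper_samples, lower_samples, margin=0.0):
    """Build a condition satisfied when a detection value falls between
    the recorded lower-limit and upper-limit training data."""
    upper = max(upper_samples) + margin
    lower = min(lower_samples) - margin
    return lambda value: lower <= value <= upper
```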
  • FIG. 35 is a diagram illustrating an example of a user interface for inputting training data for classification according to an embodiment of the present disclosure.
  • FIG. 35 shows a screen 4441 for setting a type of condition that classifies sensor data, for example identifying a user by the face shown in an image.
  • The screen 4441 includes an index 4443, labels 4445, record buttons 4447, reference buttons 4449, and an add button 4451.
  • Each label 4445 displays the label given to a class into which the condition classifies the data.
  • When a record button 4447 is pressed, a training data input user interface for the corresponding class is displayed, and training data can be input.
  • In the illustrated example, training data can be input by photographing the face of the target user.
  • When a reference button 4449 is pressed, a user interface for referring to existing data is displayed, and the existing data can be used as training data.
  • For example, an image including the target user's face acquired from the user's local storage or a shared server can be used as training data.
  • When the add button 4451 is pressed, a new class is generated, and a new label 4445, record button 4447, and reference button 4449 are displayed.
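  • A sketch of the data structure such a screen might maintain, mapping each class label to its recorded or referenced training examples (hypothetical):

```python
from collections import defaultdict

class ClassificationTrainingSet:
    """Training data for a classifying condition: one list of examples
    (e.g. face images) per labeled class."""
    def __init__(self):
        self.examples = defaultdict(list)

    def record(self, label, example):
        """Record button 4447: add a newly captured example."""
        self.examples[label].append(example)

    def add_class(self, label):
        """Add button 4451: create a new, initially empty class."""
        self.examples.setdefault(label, [])
```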
  • FIG. 36 is a diagram illustrating a first example of a user interface for extracting training data according to an embodiment of the present disclosure.
  • In the present embodiment, a user interface is provided for extracting, based on a user operation, the sensor data acquired during the practice of the detection target state from the sensor data obtained continuously during and before and after the practice.
  • FIG. 36 shows a training data extraction screen 4501.
  • the training data extraction screen 4501 includes an image / sound reproduction unit 4503, a time series waveform 4505 of sensor detection values, and a section designation button 4507.
  • The sensor detection values obtained while the user practices the detection target state are temporarily stored by the integrated control unit 222 of the manager 200.
  • The user then specifies the sections corresponding to the detection target state from among the sensor detection values while referring to, for example, the training data extraction screen 4501. More specifically, the user adjusts the section 4505r displayed together with the time-series waveform 4505 of the sensor detection values and then presses the section designation button 4507, whereby the sensor detection values corresponding to each detection target state can be specified.
  • The image / sound reproduction unit 4503 presents the image or sound recorded during the practice of the detection target state. More specifically, for example, the image / sound reproduction unit 4503 may reproduce the image or sound over the range of the time-series waveform 4505 of the sensor detection values, the image or sound within the section 4505r selected by the user, or the image or sound immediately before and after the start point or end point of the section 4505r. In any case, the reproduced image or sound is associated with the sensor data displayed by the time-series waveform 4505.
  • FIG. 37 is a diagram illustrating a second example of a user interface for extracting training data according to an embodiment of the present disclosure.
  • FIG. 37 shows a training data extraction screen 4511.
  • the training data extraction screen 4511 includes an image / sound reproduction unit 4513, a time series waveform 4505 of sensor detection values, and a section designation button 4507.
  • Unlike the image / sound reproduction unit 4503, the image / sound reproduction unit 4513 displays the image as a series of still images instead of a moving image.
  • This keeps down the size of the image acquired together with the sensor detection values, and can reduce the processing load of synchronizing it with the sensor detection values in the manager 200 and of transmitting the data to the UI device 400.
  • In addition, since the detection target state is presented as discrete still images instead of a continuous moving image, it may be easier for the user to set and adjust the section 4505r.
  • FIG. 38 is a diagram illustrating a third example of a user interface for extracting training data according to an embodiment of the present disclosure.
  • FIG. 38 shows a training data extraction screen 4521.
  • the training data extraction screen 4521 includes a time series waveform 4505 of sensor detection values, a section designation button 4507, and a time stamp 4523.
  • The time stamp 4523 is displayed at the position on the time series corresponding to the time stamp recorded as described above. Displaying the time stamp 4523 makes it easier for the user to set the section 4505r accurately.
  • FIG. 39 is a diagram illustrating an example of a user interface for real-time feedback according to an embodiment of the present disclosure.
  • FIG. 39 shows real-time feedback screens 4531 and 4541.
  • In the illustrated example, the condition definition includes that the volume detected by the microphone 100g exceeds a threshold.
  • While the condition is not satisfied, the screen 4531 is displayed.
  • On the screen 4531, a gauge 4533 that does not reach the threshold level 4535 and an X mark 4537 indicating that the condition is not satisfied are displayed.
  • On the screen 4531, the threshold value can be changed in real time with the adjustment button 4539 and the verification performed again.
  • When the condition is satisfied, the screen 4541 is displayed.
  • On the screen 4541, a gauge 4533 exceeding the threshold level 4535 and a circle mark 4543 indicating that the condition is satisfied are displayed.
  • The threshold value can also be changed in real time using the adjustment button 4539 on the screen 4541.
  • FIG. 40 is a diagram illustrating another example of a user interface for real-time feedback according to an embodiment of the present disclosure.
  • FIG. 40 shows real-time feedback screens 4551 and 4561.
  • In the illustrated example, a condition is set for discriminating, based on a predetermined range of detection values (corresponding to the case where the acceleration sensor 100a is moved quickly), between the case where the acceleration sensor 100a is moved slowly and the case where it is moved quickly.
  • While the condition is not satisfied, the screen 4551 is displayed.
  • On the screen 4551, a gauge 4553 displaying the range 4555, a detection value display 4557 indicating a value on the gauge 4553 outside the range 4555, and an X mark 4559 indicating that the condition is not satisfied are shown.
  • On the screen 4551, the range 4555 may be moved by dragging and the verification performed again.
  • When the condition is satisfied, the screen 4561 is displayed.
  • On the screen 4561, the gauge 4553 displaying the range 4555, the detection value display 4557 indicating a value on the gauge 4553 within the range 4555, and a circle mark 4559 indicating that the condition is satisfied are shown.
  • On the screen 4561 as well, the range may be changed in real time by moving the range 4555.
  • FIG. 41 is a diagram illustrating still another example of a user interface for real-time feedback according to an embodiment of the present disclosure.
  • FIG. 41 shows real-time feedback screens 4601a to 4601c.
  • In the illustrated example, a condition is set for determining whether or not the user's face appears in the image acquired by the camera 100b.
  • The screen 4601a is displayed when the user's face appears in the image and it is determined that the condition is satisfied. In this case, the circle mark and the text 4603a indicate that the condition is satisfied.
  • On the screen 4601a, the user can input feedback on whether or not this determination result is correct, using the buttons 4605. In the illustrated example, since the determination is correct, the “Correct” button 4605 can be pressed.
  • The screen 4601b is displayed when the user's face does not appear in the image and it is determined that the condition is not satisfied. In this case, the X mark and the text 4603b indicate that the condition is not satisfied.
  • On the screen 4601b, the user can input feedback using the buttons 4605 as on the screen 4601a. In this example too, since the determination is correct, the “Correct” button 4605 can be pressed.
  • The screen 4601c is displayed when it is determined that the condition is satisfied even though the user's face does not appear in the image. In this case, the circle mark and the text 4603c indicate that the condition is satisfied.
  • In this case, since the determination is not correct, the user can input feedback for correcting the condition setting by pressing the “Wrong” button 4605.
  • In the present embodiment, instructions or information for the user may also be output or presented via the actuator 130b of the element 100.
  • In this case, the user does not necessarily have to carry the UI device 400 when setting or verifying conditions, and stepwise output or presentation via various types of actuators 130b is possible.
  • For example, when information is presented via an LED lamp included in the actuator 130b, the LED lamp may be set to light up progressively as the detection value of the sensor approaches a value that satisfies the condition (for example, comes within a predetermined range above or below a threshold value), and to blink when the condition is satisfied.
  • Similarly, when information is presented via a speaker, the speaker may be set to output a progressively louder sound as the detection value of the sensor approaches a value that satisfies the condition, and to play a predetermined melody when the condition is satisfied.
  • Alternatively, the information may be presented in natural language, such as “does not look likely to react”, “looks like it will react soon”, or “reacted”.
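  • A sketch of such graded feedback for an LED lamp, mapping how close the detection value is to the threshold onto a brightness level; the proximity window, the assumption of a positive threshold, and the `set_led` driver are all hypothetical:

```python
def graded_led_feedback(value, threshold, set_led, window=0.2):
    """Light the LED progressively as the detection value approaches the
    (positive) threshold, and blink it once the condition is satisfied."""
    lower = threshold * (1 - window)
    if value >= threshold:
        set_led("blink")                                # condition satisfied
    elif value >= lower:
        set_led((value - lower) / (threshold - lower))  # brightness in [0, 1]
    else:
        set_led(0.0)                                    # far from satisfying
```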
  • FIG. 42 is a diagram for describing a first application example according to an embodiment of the present disclosure.
  • In the first application example, the acceleration sensor 100a is attached to the door of a refrigerator REF.
  • It is assumed that the user wants to detect, using the acceleration sensor 100a, that the door of the refrigerator REF has been opened, and in that case to have the camera 100b (not shown) acquire an image of the inside of the refrigerator REF, but does not want this processing when the door is closed.
  • To set a condition for this, the user repeatedly opens and closes the door of the refrigerator REF while sensing is performed by the acceleration sensor 100a.
  • At this time, the camera 100b also photographs the refrigerator REF.
  • That is, the camera 100b functions as the recording / recording apparatus 600 described above with reference to FIG. 25.
  • The integrated control unit 222 of the manager 200 associates, in time series, the detection values of the acceleration sensor 100a obtained while the user practices the detection target state as described above with the images captured by the camera 100b.
  • Then, the training data extraction screen 4501 described above with reference to FIG. 36 is presented via the UI device 400.
  • On the screen, the image / sound reproduction unit 4503 is displayed, and the image of the refrigerator REF captured by the camera 100b can be referred to together with the time-series waveform 4505 of the sensor detection values.
  • While referring to the image, the user can adjust the section 4505r indicating the detection target state so that it includes the sections where the door of the refrigerator REF is opened and excludes the sections where the door is closed.
  • By pressing the section designation button 4507 with the section 4505r matching a section where the door is opened, the detection values of the acceleration sensor 100a when the door is opened can be specified as training data for the case where the condition is satisfied.
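  • A sketch of turning the designated sections into labeled training data, with sections represented as hypothetical (start, end) time pairs:

```python
def label_samples(samples, open_sections):
    """Label each (time, value) sample positive if it falls inside a
    user-designated door-open section, negative otherwise."""
    def in_open_section(t):
        return any(start <= t <= end for start, end in open_sections)
    return [(value, 1 if in_open_section(t) else 0) for t, value in samples]
```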
  • FIG. 43 is a diagram showing an example of a user interface for creating a program in the example of FIG. 42. FIG. 43 shows the list 4301, element icons 4701, and condition / processing icons 4703 described above.
  • The condition set as described above with reference to FIG. 42 is displayed as “Door Open” in the condition list 4305 within the list 4301.
  • The condition “Door Open” can be assigned to the acceleration sensor 100a, as indicated by the element icon 4701a and the condition / processing icon 4703.
  • In addition, an element icon 4701b for the camera 100b, a condition / processing icon 4703b indicating that a photograph is to be taken, an icon 4701c indicating a cloud service, and a condition / processing icon 4703c indicating that the photographed picture is to be uploaded are shown.
  • That is, the user interface shown in FIG. 43 may implement the first specific example described above with reference to FIG. 12. The software 603a used in FIG. 12 to detect, based on acceleration, that the door has been opened is incorporated in the condition set for the acceleration sensor 100a in the example shown in FIG. 43.
  • FIG. 44 is a diagram for describing a second application example according to an embodiment of the present disclosure.
  • In FIG. 44, an element icon 4711a and a condition / processing icon 4713a for the acceleration sensor 100a, and an element icon 4711b and a condition / processing icon 4713b for the speaker 100e, are shown.
  • The user interface shown in FIG. 44 may realize the fourth specific example described above with reference to FIG. 15.
  • The condition / processing icon 4713a includes three conditions (Motion1, Motion2, and Motion3) determined based on the detection values of the acceleration sensor 100a.
  • The condition / processing icon 4713b indicates three sound effects respectively corresponding to the above three conditions.
  • That is, the software 603d used to reproduce a sound effect according to the acceleration pattern is incorporated, in the example shown in FIG. 44, in the conditions set for the acceleration sensor 100a and in the functions of the speaker 100e corresponding to those conditions.
  • FIG. 45 is a block diagram illustrating a hardware configuration example of the information processing apparatus according to the embodiment of the present disclosure.
  • the illustrated information processing apparatus 900 can realize, for example, the element 100, the manager 200, the server 300, and / or the UI apparatus 400 in the above-described embodiment.
  • The information processing apparatus 900 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 903, and a RAM (Random Access Memory) 905.
  • the information processing apparatus 900 may include a host bus 907, a bridge 909, an external bus 911, an interface 913, an input device 915, an output device 917, a storage device 919, a drive 921, a connection port 923, and a communication device 925.
  • the information processing apparatus 900 may include an imaging device 933 and a sensor 935 as necessary.
  • the information processing apparatus 900 may include a processing circuit called DSP (Digital Signal Processor) or ASIC (Application Specific Integrated Circuit) instead of or in addition to the CPU 901.
  • the CPU 901 functions as an arithmetic processing device and a control device, and controls all or a part of the operation in the information processing device 900 according to various programs recorded in the ROM 903, the RAM 905, the storage device 919, or the removable recording medium 927.
  • the ROM 903 stores programs and calculation parameters used by the CPU 901.
  • The RAM 905 temporarily stores programs used in the execution of the CPU 901 and parameters that change as appropriate during that execution.
  • the CPU 901, the ROM 903, and the RAM 905 are connected to each other by a host bus 907 configured by an internal bus such as a CPU bus. Further, the host bus 907 is connected to an external bus 911 such as a PCI (Peripheral Component Interconnect / Interface) bus via a bridge 909.
  • the input device 915 is a device operated by the user, such as a mouse, a keyboard, a touch panel, a button, a switch, and a lever.
  • the input device 915 may be, for example, a remote control device that uses infrared rays or other radio waves, or may be an external connection device 929 such as a mobile phone that supports the operation of the information processing device 900.
  • the input device 915 includes an input control circuit that generates an input signal based on information input by the user and outputs the input signal to the CPU 901. The user operates the input device 915 to input various data and instruct processing operations to the information processing device 900.
  • the output device 917 is a device that can notify the user of the acquired information visually or audibly.
  • the output device 917 can be, for example, a display device such as an LCD (Liquid Crystal Display), a PDP (Plasma Display Panel), an organic EL (Electro-Luminescence) display, an audio output device such as a speaker and headphones, and a printer device.
  • the output device 917 outputs the result obtained by the processing of the information processing device 900 as video such as text or an image, or outputs it as audio such as voice or sound.
  • the storage device 919 is a data storage device configured as an example of a storage unit of the information processing device 900.
  • the storage device 919 includes, for example, a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or a magneto-optical storage device.
  • the storage device 919 stores programs executed by the CPU 901, various data, various data acquired from the outside, and the like.
  • the drive 921 is a reader / writer for a removable recording medium 927 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and is built in or externally attached to the information processing apparatus 900.
  • the drive 921 reads information recorded on the attached removable recording medium 927 and outputs the information to the RAM 905.
  • The drive 921 also writes records to the attached removable recording medium 927.
  • the connection port 923 is a port for directly connecting a device to the information processing apparatus 900.
  • the connection port 923 can be, for example, a USB (Universal Serial Bus) port, an IEEE 1394 port, a SCSI (Small Computer System Interface) port, or the like.
  • the connection port 923 may be an RS-232C port, an optical audio terminal, an HDMI (registered trademark) (High-Definition Multimedia Interface) port, or the like.
  • the communication device 925 is a communication interface configured with, for example, a communication device for connecting to the communication network 931.
  • the communication device 925 may be, for example, a communication card for wired or wireless LAN (Local Area Network), Bluetooth (registered trademark), or WUSB (Wireless USB).
  • the communication device 925 may be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), or a modem for various communication.
  • The communication device 925 transmits and receives signals and the like to and from the Internet and other communication devices using a predetermined protocol such as TCP/IP, for example.
  • the communication network 931 connected to the communication device 925 is a wired or wireless network, such as the Internet, a home LAN, infrared communication, radio wave communication, or satellite communication.
  • The imaging device 933 is a device that images real space and generates a captured image using various members, such as an imaging element (for example, a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor)) and a lens for controlling the formation of a subject image on the imaging element.
  • the imaging device 933 may capture a still image or may capture a moving image.
  • the sensor 935 is various sensors such as an acceleration sensor, a gyro sensor, a geomagnetic sensor, an optical sensor, and a sound sensor.
  • The sensor 935 acquires, for example, information about the state of the information processing apparatus 900 itself, such as the attitude of the information processing apparatus 900, and information about the surrounding environment of the information processing apparatus 900, such as the brightness and noise around the information processing apparatus 900.
  • the sensor 935 may include a GPS sensor that receives a GPS (Global Positioning System) signal and measures the latitude, longitude, and altitude of the apparatus.
  • Each of the components described above may be configured using general-purpose members, or may be configured with hardware specialized for the function of that component. Such a configuration can be changed as appropriate according to the technical level at the time of implementation.
  • Embodiments of the present disclosure include, for example, the information processing apparatus described above (a manager, or an element, server, or UI apparatus that also functions as a manager), a system, an information processing method executed by the information processing apparatus or the system, a program for causing the information processing apparatus to function, and a non-transitory tangible medium on which the program is recorded.
  • (1) An information processing apparatus including a processing circuit that realizes: a sensor data acquisition function for acquiring sensor data provided by one or more sensors; and a condition setting function for setting, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state.
  • (2) The information processing apparatus according to (1), wherein the condition setting function includes a function of setting the condition by performing machine learning using, as training data, the sensor data acquired during practice of the detection target state.
  • (3) The information processing apparatus according to (2), wherein the processing circuit further realizes a function of outputting an instruction of the timing of practice of the detection target state.
  • (4) The information processing apparatus according to (2) or (3), wherein the definition of the condition includes a range of detection values included in the sensor data, and the processing circuit further realizes a function of outputting an instruction to practice the detection target state so as to correspond to one or more detection values within the range.
  • (5) The information processing apparatus according to (1), wherein the sensor data acquisition function acquires the sensor data during and before and after the practice of the detection target state, and the processing circuit further realizes a sensor data extraction function for extracting, from the acquired sensor data, the sensor data acquired during the practice of the detection target state.
  • (6) The information processing apparatus according to (5), wherein the processing circuit further realizes a function of acquiring a time stamp recorded during practice of the detection target state, and the sensor data extraction function extracts the sensor data based on the time stamp.
  • (7) The information processing apparatus according to (5) or (6), wherein the sensor data extraction function extracts the sensor data based on a user operation, and the processing circuit further realizes a function of acquiring an image or sound recorded during practice of the detection target state and a function of presenting the image or the sound to a user in association with the sensor data.
  • (8) The information processing apparatus according to any one of (2) to (7), wherein the detection target state is repeatedly practiced.
  • (10) The information processing apparatus according to (1), wherein the condition setting function includes a function of verifying the condition based on the sensor data acquired during practice of the detection target state.
  • (11) The information processing apparatus according to (10), wherein the processing circuit further realizes an existing condition reading function for reading existing conditions, and the condition includes a single existing condition or a combination of a plurality of existing conditions.
  • (12) The information processing apparatus according to (10) or (11), wherein the processing circuit further realizes a function of changing the setting of the condition based on a user operation during the verification of the condition.
  • (13) The information processing apparatus according to any one of (1) to (12), wherein the processing circuit further realizes a sensor selection function for selecting the one or more sensors from a group of available sensors based on a user operation.
  • (14) The information processing apparatus according to any one of (1) to (13), wherein the definition of the condition includes a threshold value set for a detection value included in the sensor data.
  • (15) The information processing apparatus according to any one of (1) to (14), wherein the condition includes a condition established by a combination of sensor data provided by a plurality of sensors.
  • (16) The information processing apparatus according to any one of (1) to (15), wherein the processing circuit further realizes a function of outputting or presenting an instruction or information to a user via a device different from the device that includes the one or more sensors.
  • (17) The information processing apparatus according to any one of (1) to (16), wherein the processing circuit further realizes a function of operating an actuator when the condition is satisfied, and a function of outputting or presenting an instruction or information to a user via the actuator.
  • An information processing method including: acquiring, by a processing circuit of an information processing apparatus, sensor data provided by one or more sensors; and setting, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state.
  • A program for causing a processing circuit to realize: a sensor data acquisition function for acquiring sensor data provided by one or more sensors; and a condition setting function for setting, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

[Problem] To enable a condition pertaining to sensor data to be set intuitively and easily by a user. [Solution] The present invention provides an information-processing device equipped with a processing circuit for realizing a sensor data acquisition function for acquiring sensor data supplied by one or a plurality of sensors and a condition-setting function for setting, on the basis of the sensor data acquired while a state to be detected is in practice, a condition pertaining to the sensor data needed for detecting the state to be detected.

Description

Information processing apparatus, system, information processing method, and program
 The present disclosure relates to an information processing apparatus, a system, an information processing method, and a program.
 Realizing some operation of a device according to the detection value of a sensor is a well-known technique. For example, Patent Document 1 describes a technique in which, in an information processing apparatus that acquires the detection value of a three-axis acceleration sensor capable of detecting a change in the spatial position of an electronic device and generates operation data based on the detection value and a preset threshold value, the threshold value is changed according to a change in the connection state of a terminal to which an external device can be connected.
JP 2013-54411 A
 In a technique such as that described in Patent Document 1, conditions such as a threshold value for the detection value of the sensor are set in advance, including their changes due to changes in the connection state of the terminal. In contrast, it is conceivable to make various states matching the user's needs detectable by allowing the user to set the conditions arbitrarily. However, the detection values of a sensor are often not simple enough for the user to understand intuitively, and the relationship between the detection values and the state to be detected is not always intuitively clear to the user either. Therefore, it is not necessarily easy for the user to set the conditions arbitrarily.
 Accordingly, the present disclosure proposes a new and improved information processing apparatus, system, information processing method, and program that enable a user to set conditions regarding sensor data intuitively and easily.
 According to the present disclosure, there is provided an information processing apparatus including a processing circuit that realizes: a sensor data acquisition function for acquiring sensor data provided by one or more sensors; and a condition setting function for setting, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state.
 Further, according to the present disclosure, there is provided a system including: one or more sensors; an information processing apparatus that acquires sensor data provided by the one or more sensors and sets, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state; and a terminal device that outputs or presents an instruction or information, provided by the information processing apparatus, relating to the setting of the condition.
 Further, according to the present disclosure, there is provided an information processing method including: acquiring, by a processing circuit of an information processing apparatus, sensor data provided by one or more sensors; and setting, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state.
 Further, according to the present disclosure, there is provided a program for causing a processing circuit to realize: a sensor data acquisition function for acquiring sensor data provided by one or more sensors; and a condition setting function for setting, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state.
 As described above, according to the present disclosure, it becomes possible for a user to set conditions regarding sensor data intuitively and easily.
 Note that the above effects are not necessarily limiting; together with or instead of the above effects, any of the effects shown in this specification, or other effects that can be grasped from this specification, may be achieved.
FIG. 1 is a diagram illustrating a schematic configuration of a system to which technology according to an embodiment of the present disclosure can be applied.
FIG. 2 is a diagram illustrating a first specific configuration example of the system shown in FIG. 1.
FIG. 3 is a diagram illustrating a second specific configuration example of the system shown in FIG. 1.
FIG. 4 is a diagram illustrating a third specific configuration example of the system shown in FIG. 1.
FIG. 5 is a diagram illustrating a fourth specific configuration example of the system shown in FIG. 1.
FIG. 6 is a diagram illustrating a fifth specific configuration example of the system shown in FIG. 1.
FIG. 7 is a diagram illustrating a sixth specific configuration example of the system shown in FIG. 1.
FIG. 8 is a diagram illustrating a seventh specific configuration example of the system shown in FIG. 1.
FIG. 9 is a diagram illustrating an eighth specific configuration example of the system shown in FIG. 1.
FIG. 10 is a diagram illustrating a first example of a UI provided in the system shown in FIG. 1.
FIG. 11 is a diagram illustrating a second example of a UI provided in the system shown in FIG. 1.
FIG. 12 is a diagram for describing a first specific example of the linked operation of elements in the system shown in FIG. 1.
FIG. 13 is a diagram for describing a second specific example of the linked operation of elements in the system shown in FIG. 1.
FIG. 14 is a diagram for describing a third specific example of the linked operation of elements in the system shown in FIG. 1.
FIG. 15 is a diagram for describing a fourth specific example of the linked operation of elements in the system shown in FIG. 1.
FIG. 16 is a diagram for describing a fifth specific example of the linked operation of elements in the system shown in FIG. 1.
FIG. 17 is a diagram for describing a sixth specific example of the linked operation of elements in the system shown in FIG. 1.
FIG. 18 is a diagram illustrating a first example of a system configuration according to an embodiment of the present disclosure.
FIG. 19 is a diagram illustrating a second example of a system configuration according to an embodiment of the present disclosure.
FIG. 20 is a diagram illustrating a third example of a system configuration according to an embodiment of the present disclosure.
FIG. 21 is a flowchart illustrating an example of processing according to an embodiment of the present disclosure.
FIG. 22 is a flowchart illustrating the condition generation processing in FIG. 21 in greater detail.
FIG. 23 is a flowchart illustrating the condition verification processing in FIG. 21 in greater detail.
FIG. 24 is a sequence diagram illustrating an example of processing for acquiring a time stamp together with sensor data in an embodiment of the present disclosure.
FIG. 25 is a sequence diagram illustrating an example of processing for acquiring images and audio together with sensor data in an embodiment of the present disclosure.
FIG. 26 is a diagram illustrating an example of a user interface for sensor selection according to an embodiment of the present disclosure.
FIG. 27 is a diagram illustrating an example of a user interface for condition selection according to an embodiment of the present disclosure.
FIG. 28 is a diagram illustrating an example of a user interface for condition proposal according to an embodiment of the present disclosure.
FIG. 29 is a diagram illustrating an example of a user interface for generating a composite condition according to an embodiment of the present disclosure.
FIG. 30 is a diagram illustrating another example of a user interface for generating a composite condition according to an embodiment of the present disclosure.
FIG. 31 is a diagram illustrating an example of a user interface for training data input according to an embodiment of the present disclosure.
FIG. 32 is a diagram illustrating another example of a user interface for training data input according to an embodiment of the present disclosure.
FIG. 33 is a diagram illustrating a first example of a user interface for training data input including an upper limit value and a lower limit value according to an embodiment of the present disclosure.
FIG. 34 is a diagram illustrating a second example of a user interface for training data input including an upper limit value and a lower limit value according to an embodiment of the present disclosure.
FIG. 35 is a diagram illustrating an example of a user interface for training data input for class classification according to an embodiment of the present disclosure.
FIG. 36 is a diagram illustrating a first example of a user interface for extracting training data according to an embodiment of the present disclosure.
FIG. 37 is a diagram illustrating a second example of a user interface for extracting training data according to an embodiment of the present disclosure.
FIG. 38 is a diagram illustrating a third example of a user interface for extracting training data according to an embodiment of the present disclosure.
FIG. 39 is a diagram illustrating an example of a user interface for real-time feedback according to an embodiment of the present disclosure.
FIG. 40 is a diagram illustrating another example of a user interface for real-time feedback according to an embodiment of the present disclosure.
FIG. 41 is a diagram illustrating still another example of a user interface for real-time feedback according to an embodiment of the present disclosure.
FIG. 42 is a diagram for describing a first application example according to an embodiment of the present disclosure.
FIG. 43 is a diagram illustrating an example of a user interface for creating a program in the example of FIG. 42.
FIG. 44 is a diagram for describing a second application example according to an embodiment of the present disclosure.
FIG. 45 is a block diagram illustrating a hardware configuration example of an information processing apparatus according to an embodiment of the present disclosure.
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In this specification and the drawings, structural elements that have substantially the same functional configuration are denoted with the same reference numerals, and redundant description of these structural elements is omitted.
The description will proceed in the following order.
1. Examples of applicable systems
 1-1. Schematic configuration
 1-2. Specific configuration examples
 1-3. Examples of the user interface
 1-4. Examples of linked operation
2. Embodiments of the present disclosure
 2-1. System configuration examples
 2-2. Example of the processing flow
 2-3. Examples of the user interface
 2-4. Specific application examples
3. Hardware configuration
4. Supplement
(1. Examples of applicable systems)
(1-1. Schematic configuration)
FIG. 1 is a diagram illustrating a schematic configuration of a system to which technology according to an embodiment of the present disclosure can be applied. Referring to FIG. 1, the system 10 includes elements 100, a manager 200, a server 300, and a UI (User Interface) device 400. These devices are connected to one another via a network NW. The network NW may include, for example, Bluetooth (registered trademark), Wi-Fi, and the Internet. Although the system 10 is described below as an example, the technology according to the embodiments of the present disclosure is applicable to various other systems as well.
An element 100 is a device that includes a communication unit 110, a control unit 120, a function unit 130, and a power supply unit 140. The communication unit 110 includes a communication device that communicates with the manager 200 and/or other elements 100 via the network NW. The control unit 120 is realized by, for example, a microcontroller or a CPU (Central Processing Unit), and controls the function unit 130. The function unit 130 includes, for example, a sensor or an actuator, and realizes the function specific to each element 100. The power supply unit 140 includes a battery, a power plug, or the like, and supplies the power for operating the communication unit 110, the control unit 120, and the function unit 130. Devices other than the elements 100 also have power supply units, but their illustration is omitted.
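Although the disclosure does not define any programming interface for these units, their relationship can be sketched in Python as follows. The class, the method names, and the message format are illustrative assumptions introduced here for explanation only, not part of the described embodiment.

from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ElementDevice:
    """Hypothetical model of an element 100 and its units."""
    element_id: str
    send: Callable[[dict], None]    # communication unit 110: report over the network
    read_sensor: Callable[[], Any]  # function unit 130: sensor reading

    def poll(self) -> None:
        # Control unit 120: read the function unit and hand the value
        # to the communication unit.
        self.send({"id": self.element_id, "value": self.read_sensor()})


device = ElementDevice("accel-1", send=print, read_sensor=lambda: 9.8)
device.poll()  # -> {'id': 'accel-1', 'value': 9.8}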
The manager 200 is a device that includes a communication unit 210, a control unit 220, and a storage unit 230. The communication unit 210 may include a communication device that communicates with the elements 100, the server 300, and the UI device 400 via the network NW. The control unit 220 is realized by, for example, a microcontroller or a CPU, and controls communication among the elements 100 and between the elements 100 and the server 300 via the communication unit 210. The control unit 220 may include a computation unit 221. The computation unit 221 performs computations on information received from the elements 100, information to be transmitted to the elements 100, information to be transmitted to the server 300, information received from the server 300, and the like. The storage unit 230 includes memory or storage, and stores information exchanged through communication controlled by the control unit 220, information computed by the computation unit 221, and the like.
The server 300 includes a communication unit 310, a control unit 320, and a storage unit 330. The communication unit 310 includes a communication device that communicates with the manager 200 via the network NW. The control unit 320 is realized by, for example, a microcontroller or a CPU, and may include a computation unit 321 and a UI providing unit 323. The computation unit 321 performs computations on information received from the elements 100 or the manager 200, information to be transmitted to the elements 100 or the manager 200, information received from other servers, information to be transmitted to other servers, and the like. The UI providing unit 323 provides a UI with which the user specifies combinations of elements 100 and/or managers 200 and confirms various kinds of information. The UI is provided via, for example, the display and the touch panel included in the input/output unit 430 of the UI device 400. The storage unit 330 includes memory or storage, and stores various kinds of information related to the elements 100, programs for operating a plurality of elements 100 in combination, software to be combined with the elements 100, information computed by the computation unit 321, and the like.
Here, the storage unit 330 of the server 300 stores, for example, identification information (IDs) of the elements 100 in the system 10. Elements 100 can be added to the system 10 at any time. The storage unit 330 also stores programs for operating a plurality of elements 100 in combination. Programs can be added at any time by, for example, users, developers, or system administrators. Furthermore, the storage unit 330 stores software. The software realizes functions by, for example, being executed in combination with the elements 100. Alternatively, functions may be realized by executing pieces of software in combination with one another. The functions realized by the software may include, for example, the provision of materials such as audio and images, timers, calendars, analysis functions such as image recognition and speech recognition, and functions for acquiring information from external services such as weather forecasts and news. Software can be added at any time by, for example, users, developers, or system administrators.
Meanwhile, the UI providing unit 323 of the server 300 provides the UI via, for example, the display and the touch panel included in the input/output unit 430 of the UI device 400. Through this UI, the user can, for example, select a program for operating a plurality of elements 100 in combination from among programs preinstalled on the elements 100 or the manager 200, programs provided by the server 300, and the like. Through the UI, the user can also specify combinations of a plurality of elements 100, combinations of elements 100 and software, and combinations of pieces of software. The specified combination is saved as a program in the storage unit 330 of the server 300, and the user can obtain a desired application by having this program executed by the elements 100 and the software.
The UI device 400 is a device that includes a communication unit 410, a control unit 420, and an input/output unit 430. The communication unit 410 may include a communication device that communicates with the manager 200 and the server 300 via the network NW. The control unit 420 is realized by, for example, a microcontroller or a CPU, controls the input/output unit 430, and controls the exchange of information via the communication unit 410. The input/output unit 430 includes, for example, a display, a speaker, and a touch panel, presents various kinds of information to the user via the UI, and accepts operation inputs from the user.
(1-2. Specific configuration examples)
 (First example)
FIG. 2 is a diagram illustrating a first specific configuration example of the system shown in FIG. 1. Referring to FIG. 2, the system 10a includes a first tablet that functions as an element 100, a second tablet that functions as an element 100, the manager 200, and the UI device 400, and the server 300.
The first and second tablets both similarly include a display, a touch panel, a CPU, sensors, and the like. In the system 10a, one of the two tablets functions as an element 100, the manager 200, and the UI device 400, and the other functions as an element 100. The roles of these tablets may be interchangeable, and the tablet that functions as the manager 200 and the UI device 400 may change depending on the situation. In the system 10a, for example, the various sensors of the tablets can be utilized to operate the two tablets in combination and realize a function desired by the user.
The first and second tablets in the example shown in FIG. 2 are examples of terminal devices that have a function unit 130 (such as a sensor) for functioning as an element 100 and a control unit 220 (such as a CPU) for functioning as the manager 200. Such terminal devices are not limited to tablets, and may be other devices such as smartphones. Furthermore, the number of terminal devices included in the system 10a is not limited to the illustrated example, and may be three or more, for example.
(Second example)
FIG. 3 is a diagram illustrating a second specific configuration example of the system shown in FIG. 1. Referring to FIG. 3, the system 10b includes elements 100a to 100g, a tablet that functions as the manager 200 and the UI device 400, and the server 300.
In the system 10b, the tablet functions as the manager 200 and the UI device 400. As in the first example above, the tablet may be replaced by another device such as a smartphone. In the illustrated example, the elements 100 include an acceleration sensor 100a, a camera 100b, a human presence sensor 100c, a button 100d, a speaker 100e, an LED (Light Emitting Diode) lamp 100f, and a microphone 100g. Each element 100 communicates with the tablet by wireless communication such as Bluetooth (registered trademark), and operates in coordination under the control of the manager 200 realized by the tablet.
In the following examples, some or all of the acceleration sensor 100a to the microphone 100g are illustrated as examples of the elements 100, but this is not intended to limit the types of elements 100 used in each example. In each example, the system 10 may include any of the acceleration sensor 100a to the microphone 100g, or any other type of element 100.
(Third example)
FIG. 4 is a diagram illustrating a third specific configuration example of the system shown in FIG. 1. Referring to FIG. 4, the system 10c includes elements 100, the manager 200, a tablet that functions as the UI device 400, and the server 300.
In the system 10c, the manager 200 exists separately from the tablet that functions as the UI device 400. The manager 200 may be realized by, for example, a dedicated device, or may be realized as one of the functions of a base station such as a Wi-Fi access point. As in the first and second examples above, the tablet functioning as the UI device 400 may be replaced by another device such as a smartphone. The manager 200 and the tablet may each be able to communicate with the server 300 independently. In addition, when the settings of the system 10c are changed via the UI provided by the tablet, the tablet may transmit the setting information directly to the manager 200 via Wi-Fi or the like.
The example in FIG. 4 also shows that not only Bluetooth (registered trademark) but also Wi-Fi is used for communication between the manager 200 and the elements 100. Furthermore, a mesh network repeater NW_m using Wi-Fi or the like is used for connecting to elements 100 at remote locations. Not only in the illustrated third example but also in the other examples, various types of wireless communication such as Bluetooth (registered trademark) and Wi-Fi may be used for communication between the elements 100 and the manager 200, and/or among the elements 100.
(Fourth example)
FIG. 5 is a diagram illustrating a fourth specific configuration example of the system shown in FIG. 1. Referring to FIG. 5, the system 10d includes elements 100, some of which also function as the manager 200, a tablet that functions as the UI device 400, and the server 300.
In the system 10d, unlike the first to third examples above, at least one of the elements 100 also functions as the manager 200. Furthermore, in the system 10d, the elements 100 form a mesh network with one another using Bluetooth (registered trademark) or the like. With this configuration, the elements 100 of the system 10d can operate in coordination autonomously even when communication with the server 300 and the UI device 400 (the tablet) is temporarily disconnected.
(Fifth example)
FIG. 6 is a diagram illustrating a fifth specific configuration example of the system shown in FIG. 1. Referring to FIG. 6, the system 10e includes elements 100, one of which also functions as the manager 200, a tablet that functions as the UI device 400, and the server 300. The system 10e is an example in which, in the system 10c according to the third example above, the manager 200 is incorporated into one of the elements 100.
(Sixth example)
FIG. 7 is a diagram illustrating a sixth specific configuration example of the system shown in FIG. 1. Referring to FIG. 7, the system 10f includes elements 100, managers 200a and 200b, a tablet that functions as the UI device 400, and the server 300. The system 10f is an example in which a plurality of managers 200 are arranged in the system 10c according to the third example above. Each element 100 may be connected to, for example, whichever of the managers 200a and 200b is positionally closer. In addition, the connection states of the elements 100 and the programs for operating the elements 100 in coordination are synchronized between the managers 200a and 200b as necessary or periodically.
(Seventh example)
FIG. 8 is a diagram illustrating a seventh specific configuration example of the system shown in FIG. 1. Referring to FIG. 8, the system 10g includes elements 100, a manager 200a, a tablet that functions as the manager 200b and the UI device 400, and the server 300. The system 10g is an example in which the function of the manager 200b is integrated into the tablet in the system 10f according to the sixth example above. Each element 100 may be connected to, for example, whichever of the manager 200a and the tablet is positionally closer. In addition, the connection states of the elements 100 and the programs for operating the elements 100 in coordination are synchronized between the manager 200a and the tablet as necessary or periodically.
(Eighth example)
FIG. 9 is a diagram illustrating an eighth specific configuration example of the system shown in FIG. 1. Referring to FIG. 9, the system 10h includes elements 100, a tablet that functions as the UI device 400, and the server 300, which also functions as the manager 200. The system 10h is an example in which the function of the manager 200 is incorporated into the server 300 in the system 10b according to the second example above. Each element 100 communicates directly with the server 300 via, for example, a mobile communication network.
Several specific configuration examples of the system 10 shown in FIG. 1 have been described above. The specific configuration of the system 10 is not limited to the examples described above, and may include various modifications that would be clearly understood by those skilled in the art on the basis of these examples.
(1-3. Examples of the user interface)
FIG. 10 is a diagram illustrating a first example of a UI provided in the system shown in FIG. 1. Referring to FIG. 10, a screen 4100 displayed on the display of the terminal device functioning as the UI device 400 includes a user profile 4101, a program list tab 4103, and an element list tab 4105. In the illustrated example, the program list tab 4103 is selected, and therefore a program list 4107 is displayed. The program list 4107 includes program icons 4109, used-element icons 4111, and descriptions 4113. The screen 4100 may be displayed, for example, as a portal screen when the user uses a service provided by the system 10.
FIG. 11 is a diagram illustrating a second example of a UI provided in the system shown in FIG. 1. Referring to FIG. 11, a screen 4200 displayed on the display of the terminal device functioning as the UI device 400 includes a toolbar 4201, a canvas 4203, and a tray 4205. The screen 4200 is used, for example, to edit a program for operating elements 100 in coordination in the system 10. Function buttons such as "Save" and "Redo" are arranged on the toolbar 4201. Element icons 4207, detailed information icons 4209, and links 4211 can be arranged or drawn on the canvas 4203. With these icons and links, it is possible to set and confirm the elements 100 used in the linked operation, the processing executed by each element 100, and the relationships among the elements 100. Furthermore, an element property box 4213 is arranged on the canvas 4203 and displays, for example, the attributes and state of the element 100 arranged as an element icon 4207. In the tray 4205, the elements 100, software, and the like that can be incorporated into the program by being arranged on the canvas 4203 are displayed as icons.
Note that the UIs described with reference to FIGS. 10 and 11 are merely examples, and UIs in various forms may be provided in the present embodiment. For example, various UIs used in screens for visual programming can be applied to the program editing screen described with reference to FIG. 11.
(1-4. Examples of linked operation)
Hereinafter, specific examples of the linked operation of elements in the system shown in FIG. 1 will be further described. In the following description, to facilitate understanding, reference is made to diagrams that visually represent the programs for operating the elements 100 in coordination. Note that these diagrams are not necessarily related to the UI provided by the UI device 400 (for example, the UI illustrated as the screen 4200 in FIG. 11). The software described below may be provided as elements (software elements) similar to the elements 100 (hardware elements), or may be provided as part of the functions or operations of the elements 100.
(First specific example)
FIG. 12 is a diagram for describing a first specific example of the linked operation of elements in the system shown in FIG. 1. Referring to FIG. 12, in this example, among the elements 100, the acceleration sensor 100a and the camera 100b are used. For example, the acceleration sensor 100a is attached to the door of a refrigerator, and the camera 100b is attached at a position from which the interior of the refrigerator can be captured. In the illustrated example, a link 601 from the acceleration sensor 100a to the camera 100b means that "the camera 100b performs shooting based on the detection value of the acceleration sensor 100a". Furthermore, in the illustrated example, "upload the captured image to the server" is designated as the operation of the camera 100b.
Furthermore, in the illustrated example, software 603a for detecting, based on acceleration, that the door has opened is used. The software 603a is executed by, for example, the computation unit 221 of the manager 200, and determines that the refrigerator door has opened based on the analysis result of the detection values of the acceleration sensor 100a. By introducing the software 603a, the link 601 comes to represent the processing "when it is determined, based on the detection values of the acceleration sensor 100a, that the refrigerator door has opened, the camera 100b performs shooting".
With the program configured as described above, in this example, the image shot by the camera 100b when the refrigerator door opens is uploaded to the server. By browsing the uploaded images, the user can grasp changes in the contents of the refrigerator and the latest stock status.
Here, if the goal is simply to detect that the refrigerator door has opened, a dedicated open/close detection sensor using magnetism or the like could also be used. In this example, however, introducing the software 603a makes it possible to detect that the door has opened using the acceleration sensor 100a. Accordingly, to detect that the refrigerator door has opened, the acceleration sensor 100a that was previously used for another purpose, for example, can be repurposed instead of a dedicated sensor. Conversely, when detecting that the refrigerator door has opened is no longer necessary, the acceleration sensor 100a can be diverted to other uses. In this way, in the present embodiment, the same element 100 can be utilized for diverse purposes by changing its combination with software and other elements 100.
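As a purely illustrative sketch of the kind of judgment the software 603a could perform (the description above does not fix an algorithm), a simple threshold test on the acceleration magnitude might look as follows; the constants, the windowed-sample interface, and all function names are assumptions.

import math

GRAVITY = 9.8    # m/s^2; resting reading of an accelerometer fixed to the door
THRESHOLD = 1.5  # m/s^2; assumed deviation that counts as "door moved"


def door_opened(samples) -> bool:
    """Report a door-open event when the magnitude of any acceleration
    sample deviates from gravity by more than the threshold."""
    for x, y, z in samples:
        if abs(math.sqrt(x * x + y * y + z * z) - GRAVITY) > THRESHOLD:
            return True
    return False


def on_sensor_window(samples, capture_image, upload_to_server) -> None:
    # Link 601: when the door-open judgment holds, the camera shoots
    # and the captured image is uploaded to the server.
    if door_opened(samples):
        upload_to_server(capture_image())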
(Second specific example)
FIG. 13 is a diagram for describing a second specific example of the linked operation of elements in the system shown in FIG. 1. Referring to FIG. 13, in this example, among the elements 100, the acceleration sensor 100a and the button 100d are used. For example, the acceleration sensor 100a is attached to the toilet door, and the button 100d is attached in the washroom. In the illustrated example, software 603a for detecting, based on acceleration, that the door has opened and software 603b for recording provided data are also used. A link 601 from the acceleration sensor 100a through the software 603a to the software 603b means that "the software 603b records that it has been determined, based on the detection values of the acceleration sensor 100a, that the door has opened". A link 601 from the button 100d to the software 603b means that "the software 603b records that a signal has been output from the button 100d".
With the program configured as described above, in this example, the time when the user presses the button 100d in the washroom and the time when the toilet door opens are recorded as a log. For example, if the user decides to press the button 100d when washing their face in the washroom after getting up, the time when the button 100d was pressed can be recorded as the wake-up time. From such a log, for example, changes in the daily wake-up time and the times of going to the toilet can be viewed in time series, which can be useful for improving the user's daily rhythm.
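The recording performed by the software 603b could be sketched, under the assumption of a simple in-memory log, as follows; the class name and the event labels are illustrative only.

import datetime


class EventLog:
    """Append-only log of timestamped events."""

    def __init__(self) -> None:
        self.records: list[tuple[datetime.datetime, str]] = []

    def record(self, event: str) -> None:
        self.records.append((datetime.datetime.now(), event))


log = EventLog()
log.record("toilet_door_opened")  # arrives via software 603a from sensor 100a
log.record("wake_up_button")      # arrives directly from button 100d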
(Third specific example)
FIG. 14 is a diagram for describing a third specific example of the linked operation of elements in the system shown in FIG. 1. Referring to FIG. 14, in this example, among the elements 100, the acceleration sensor 100a and the human presence sensor 100c are used. For example, the acceleration sensor 100a is attached to a chair, and the human presence sensor 100c is attached to the desk in front of the chair. In the illustrated example, software 603c for recording, based on the detection results of the acceleration sensor 100a and the human presence sensor 100c, the time during which the user has been sitting on the chair is also used. A link 601 from the acceleration sensor 100a to the software 603c means "provide the detection values of the acceleration sensor 100a to the software 603c". A link 601 from the human presence sensor 100c to the software 603c means "provide the detection values of the human presence sensor 100c to the software 603c".
With the program configured as described above, in this example, the time during which the user has been sitting on the chair is recorded. Based on this record, the user can grasp how long they have been sitting at the workplace or the like, and take a break if they have been sitting continuously for too long. Alternatively, software that outputs an alert from a smartphone or the like based on the detection result of the software 603c may further be combined so that, when the user has been sitting continuously beyond a predetermined time, an alert is output from the smartphone or the like to prompt the user to take a break.
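One possible, simplified form of the time accumulation performed by the software 603c is sketched below; the sample format and the way the two boolean judgments are derived from the raw sensor values are assumptions.

def sitting_seconds(samples) -> float:
    """Accumulate the time during which both judgments hold. Each sample
    is (timestamp_seconds, chair_in_use, person_at_desk)."""
    total = 0.0
    for (t0, chair, person), (t1, _, _) in zip(samples, samples[1:]):
        if chair and person:
            total += t1 - t0
    return total


# Two 60-second intervals with both judgments true -> 120.0 seconds
print(sitting_seconds([(0, True, True), (60, True, True), (120, False, False)]))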
(Fourth specific example)
FIG. 15 is a diagram for describing a fourth specific example of the linked operation of elements in the system shown in FIG. 1. Referring to FIG. 15, in this example, among the elements 100, the acceleration sensor 100a, the speaker 100e, and the LED lamp 100f are used. For example, the acceleration sensor 100a, the speaker 100e, and the LED lamp 100f are each attached to an appropriate part of the body of a user (for example, a child). A plurality of acceleration sensors 100a may be used. The illustrated example also shows software 603d that plays sound effects according to acceleration patterns, and software 603e that likewise blinks the lamp in a predetermined pattern according to acceleration patterns. The acceleration patterns handled by these pieces of software 603 may be patterns of a single acceleration waveform or patterns combining a plurality of acceleration waveforms. Links 601 are set from the acceleration sensor 100a toward each of the software 603d and 603e. These links 601 mean "provide the detection values of the acceleration sensor 100a to each of the software 603d and 603e". Furthermore, links 601 are set from the software 603d to the speaker 100e and from the software 603e to the LED lamp 100f. These links 601 mean "the speaker 100e outputs audio according to the audio signal provided by the software 603d" and "the LED lamp 100f emits light according to the signal provided by the software 603e".
With the program configured as described above, in this example, when the user (for example, a child) moves their limbs in a specific pattern, a sound effect is output from the speaker 100e or the LED lamp 100f emits light. As a result, for example, when a child plays at being a superhero, sound effects and light can be added for a more exciting experience.
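The pattern comparison performed by the software 603d and 603e is not specified above; one simple stand-in is a mean-squared-error test between an observed acceleration window and a stored waveform template, as in the following sketch, where the tolerance value and function names are assumptions.

def matches_pattern(window, template, tolerance: float = 2.0) -> bool:
    """Judge a match when the mean squared error between the observed
    acceleration window and the stored waveform template is small."""
    if len(window) != len(template):
        return False
    mse = sum((w - t) ** 2 for w, t in zip(window, template)) / len(template)
    return mse < tolerance


# On a match, one link plays a sound effect on speaker 100e and the
# other blinks LED lamp 100f in its predetermined pattern.
print(matches_pattern([0.1, 2.0, 0.1], [0.0, 1.8, 0.0]))  # True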
(Fifth specific example)
FIG. 16 is a diagram for describing a fifth specific example of the linked operation of elements in the system shown in FIG. 1. Referring to FIG. 16, in this example, among the elements 100, the acceleration sensor 100a, the human presence sensor 100c, the camera 100b, and the LED lamp 100f are used. For example, the acceleration sensor 100a, the human presence sensor 100c, and the camera 100b are attached to a bird table in the garden, and the LED lamp 100f is installed inside the house. The illustrated example also shows software 603f for determining, based on the detection results of the acceleration sensor 100a and the human presence sensor 100c, that a bird has alighted on the bird table. Links 601 are set from the acceleration sensor 100a and the human presence sensor 100c toward the software 603f. The link 601 from the acceleration sensor 100a means "provide the detection values of the acceleration sensor 100a to the software 603f". The link 601 from the human presence sensor 100c means "provide the detection values of the human presence sensor 100c to the software 603f". In the software 603f, for example, a condition that is satisfied by a combination of the sensor data provided by the acceleration sensor 100a and the human presence sensor 100c is provided. Furthermore, links 601 are set from the software 603f to the camera 100b and the LED lamp 100f. The link 601 to the camera 100b means "the camera 100b performs shooting based on the output of the software 603f". The link to the LED lamp 100f means "the LED lamp 100f is made to emit light based on the output of the software 603f".
With the program configured as described above, in this example, when a bird alights on the bird table, a still image or video is automatically shot by the camera 100b, and the user can learn what kinds of birds have come. In addition, since the LED lamp 100f inside the house lights up when a bird alights on the bird table, the user can go out into the garden and observe the birds that are actually visiting.
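The combined condition provided in the software 603f could, for example, take the following simple AND form; the threshold value and all names are illustrative assumptions.

def bird_on_table(accel_deviation: float, person_detected: bool,
                  accel_threshold: float = 0.3) -> bool:
    """Combined condition: a small vibration of the table AND a positive
    reading from the presence (motion) sensor at the same time."""
    return person_detected and accel_deviation > accel_threshold


def on_judgment(accel_deviation, person_detected, capture, light_led) -> None:
    if bird_on_table(accel_deviation, person_detected):
        capture()    # camera 100b records a still image or video
        light_led()  # LED lamp 100f lights up inside the house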
(Sixth specific example)
FIG. 17 is a diagram for describing a sixth specific example of the linked operation of elements in the system shown in FIG. 1. Referring to FIG. 17, in this example, among the elements 100, the button 100d and the speaker 100e are used. For example, the speaker 100e is placed in the bedroom and the button 100d is placed in the washroom. The illustrated example also shows wake-up alarm software 603g. A link 601 is set from the software 603g to the speaker 100e. This link 601 means "the speaker 100e outputs audio according to the audio signal that the software 603g outputs when the set time arrives". A link 601 is also set from the button 100d to the software 603g. This link 601 means "when the button 100d is pressed, the output of the audio signal by the software 603g is stopped".
With the program configured as described above, in this example, the sound of the wake-up alarm output from the speaker 100e placed in the bedroom does not stop until the button 100d in the washroom is pressed. This makes it less likely that the user stops the alarm while still lying in bed, so the user can reliably wake up at the scheduled time.
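The behavior of the software 603g can be summarized as a loop that keeps ringing until a button signal arrives; the queue-based wiring below is an assumption made for illustration.

import queue
import time

button_presses: "queue.Queue[str]" = queue.Queue()  # fed by signals from button 100d


def alarm_loop(play_chime) -> None:
    """Keep sounding the alarm on speaker 100e until a press of the
    washroom button 100d arrives; only then stop the audio signal."""
    while True:
        try:
            button_presses.get_nowait()
            return           # the button was pressed: stop ringing
        except queue.Empty:
            play_chime()     # keep ringing
            time.sleep(1.0)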
(2. Embodiments of the present disclosure)
The embodiments of the present disclosure relate to processing with which a user sets a condition regarding sensor data provided by sensors in a system, the system executing a predetermined operation when the condition is satisfied. Although the system 10 described with reference to FIG. 1 and other drawings is used as an example below, the system according to the embodiments of the present disclosure is not limited to this.
The sensors in the system may include, for example, push buttons, acceleration sensors, image sensors, microphones, color sensors, illuminance sensors, temperature sensors, humidity sensors, human presence sensors, wireless signal strength detectors, distance sensors, pressure sensors, strain gauges, gas sensors, ultraviolet sensors, gyro sensors, magnetic sensors, speed sensors, tilt sensors, touch sensors, and the like. The user selects or creates a condition that is satisfied by sensor data provided by any one of these sensors, or by a combination of sensor data provided by a plurality of types of sensors. The user may also be able to verify and adjust the selected or created condition.
(2-1. System configuration examples)
 (First example)
FIG. 18 is a diagram illustrating a first example of a system configuration according to an embodiment of the present disclosure. Referring to FIG. 18, the system 10-1 includes an element 100-1, a manager 200-1, and a UI device 400-1.
The element 100-1 includes a communication unit 110, a sensor control unit 120a, an actuator control unit 120b, a sensor 130a, and an actuator 130b. As described above with reference to FIG. 1, the communication unit 110 includes a communication device that communicates with the manager 200-1 and other elements 100-1 via a network. The sensor control unit 120a and the actuator control unit 120b are included in the control unit 120 described with reference to FIG. 1. The control unit 120 is realized by a microcontroller, a CPU, or the like. The sensor control unit 120a controls the sensor 130a, and the actuator control unit 120b controls the actuator 130b. The sensor 130a and the actuator 130b are included in the function unit 130 described with reference to FIG. 1. The sensor 130a may include, for example, the various sensors described above. The sensor 130a detects changes in the external environment and the user's interactions with the element 100-1. The actuator 130b includes, for example, an LED lamp, a motor, or a speaker, and exerts some kind of action on the external environment.
The manager 200-1 includes a communication unit 210, an integrated control unit 222, a condition adjustment/generation unit 223, a condition management unit 224, a condition determination unit 225, and a condition handling unit 226. As described above with reference to FIG. 1, the communication unit 210 includes a communication device that communicates with the element 100-1, the UI device 400-1, and an external system 500 via a network. The integrated control unit 222, the condition adjustment/generation unit 223, the condition management unit 224, the condition determination unit 225, and the condition handling unit 226 are included in the control unit 220 described with reference to FIG. 1. The control unit 220 is realized by a microcontroller, a CPU, or the like. These functional configurations will be further described below.
The integrated control unit 222 controls the functional units related to conditions inside the manager 200-1 as well as the elements 100, the UI device 400, and the like, and controls the system 10 as a whole in an integrated manner by executing input/output of information and processing of information. For example, the integrated control unit 222 acquires sensor data provided by the sensors 130a included in one or a plurality of elements 100-1.
The condition adjustment/generation unit 223 provides functions with which the user adjusts conditions and generates new conditions. For example, the condition adjustment/generation unit 223 sets a condition regarding sensor data for detecting a detection target state, based on sensor data acquired while the user practices the detection target state. At this time, the condition adjustment/generation unit 223 may instruct the user, via the UI device 400, on the timing for practicing the detection target state, or may perform machine learning using the sensor data acquired during the practice of the detection target state as training data.
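As a deliberately minimal example of such condition generation (an actual implementation could use any machine-learning model), a threshold could be learned from sensor values recorded with and without the detection target state being practiced; everything in the following sketch, including the sample values, is illustrative.

import statistics


def learn_threshold(practiced, not_practiced) -> float:
    """Place the decision boundary at the midpoint between the mean sensor
    value recorded while the target state was practiced and the mean
    recorded while it was not."""
    return (statistics.mean(practiced) + statistics.mean(not_practiced)) / 2.0


threshold = learn_threshold([5.1, 4.8, 5.4], [0.9, 1.2, 0.7])
condition = lambda value: value > threshold  # candidate condition to be verified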
In addition, for example, the condition adjustment/generation unit 223 verifies a newly generated condition, an existing single condition, a condition combining a plurality of existing conditions, or the like, based on sensor data acquired during the practice of the detection target state. The verification is performed, for example, by determining whether or not the condition is satisfied by the sensor data acquired during the practice of the detection target state. Here, the condition adjustment/generation unit 223 can change the settings of the condition based on a user operation while the condition is being verified. Accordingly, for example, when a condition that should be satisfied is not satisfied during the verification, or when it is determined that the reliability is low even though the condition is satisfied, the user can change the settings of the condition in real time.
The condition management unit 224 manages the conditions that have already been generated and are held in the system 10, as well as candidates for newly generated conditions. The conditions and condition candidates are saved in, for example, the storage unit 230 of the manager 200-1 (not shown in FIG. 18). The conditions and condition candidates are saved by associating identification information (an ID) with a mathematical model for determining whether or not the condition is satisfied by, for example, the sensor data provided by the sensor 130a. The condition management unit 224 stores newly generated conditions (including conditions combining a plurality of existing conditions) in the storage unit 230 or the like, and reads one or a plurality of existing conditions from the storage unit 230.
The condition determination unit 225 determines whether or not a condition is satisfied by the sensor data provided by the sensor 130a. The determination is executed by referring to the conditions saved by the condition management unit 224 in, for example, the storage unit 230 of the manager 200-1, or by copying the conditions into the condition determination unit 225. The condition determination unit 225 determines whether or not the condition is satisfied by the sensor data, for example, according to the mathematical model defined in the condition.
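The association of an ID with a mathematical model, and the determination itself, can be sketched as follows; the data shape, the example condition "cond-001", and the threshold are assumptions made for illustration.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Condition:
    condition_id: str              # identification information (ID)
    model: Callable[[dict], bool]  # the associated mathematical model


def judge(condition: Condition, sensor_data: dict) -> bool:
    # Apply the model defined in the condition to the sensor data,
    # as the condition determination unit 225 does.
    return condition.model(sensor_data)


door_open = Condition("cond-001", lambda d: d.get("accel_magnitude", 0.0) > 11.0)
print(judge(door_open, {"accel_magnitude": 12.3}))  # True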
The condition handling unit 226 causes the system 10 to realize a predetermined operation when a condition is satisfied. More specifically, when the condition determination unit 225 determines that a condition is satisfied, the condition handling unit 226 causes the operation of the system 10 associated with that condition to be realized. The operation of the system 10 is realized by, for example, causing the actuator 130b to execute a predetermined operation via the actuator control unit 120b of the element 100-1, requesting a predetermined function from the external system 500, or causing predetermined information to be output from the output unit 430b via the control unit 420 of the UI device 400.
The UI device 400-1 includes a communication unit 410, a control unit 420, an input unit 430a, and an output unit 430b. As described above with reference to FIG. 1, the communication unit 410 includes a communication device that communicates with the manager 200-1 and the like via a network. The control unit 420 is realized by a microcontroller, a CPU, or the like, and controls the input unit 430a and the output unit 430b. The input unit 430a and the output unit 430b are included in the input/output unit 430 described with reference to FIG. 1. The input unit 430a is realized by a keyboard, a mouse, a touch panel, or the like, and accepts operation inputs from the user. The output unit 430b is realized by a display, a speaker, or the like, and conveys output from the system 10 to the user.
In the present embodiment, the UI device 400-1 functions as a terminal device that outputs or presents instructions or information regarding the setting of conditions, provided by the condition adjustment/generation unit 223 of the manager 200. In this way, the instructions or information regarding the setting of conditions may be output or presented to the user via a device different from the element 100-1 (the device including the sensor 130a), or may be output or presented via the actuator 130b included in the element 100-1, as in an example described later.
The external system 500 is a system that provides predetermined functions without depending on components such as the element 100-1, the manager 200-1, and the UI device 400-1. The external system 500 may be, for example, a system that exists outside the system 10 (including the server 300, which is not shown). The external system 500 is realized by, for example, one or a plurality of information processing devices constituting a server. The external system 500 includes a communication unit 510, which includes a communication device that communicates with the manager 200 (or the server 300) via a network, and a function unit 520 that provides predetermined functions via the communication unit 510. The function unit 520 may include, for example, a microcontroller or a CPU that provides computation functions, and memory or storage that provides data storage functions.
 In the first example of the system configuration described above, processing related to conditions is concentrated in the manager 200-1. Accordingly, sensor data provided by the sensors 130a included in one or more elements 100-1 is transmitted to the manager 200-1; in the manager 200-1, the integrated control unit 222 identifies the element 100-1 that transmitted the sensor data, and the condition determination unit 225 determines whether a condition is satisfied by the sensor data. When a condition is satisfied, the condition handling unit 226 transmits, via the communication unit 210, a control command for causing the system 10 to realize the predetermined operation to the element 100-1, the UI device 400-1, or the external system 500.
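 To make this centralized data flow concrete, the following is a minimal sketch in Python of how a manager in this first configuration could evaluate conditions over incoming detection values and dispatch commands. The names (Condition, Manager, dispatch_command), the per-element sample window, and the callable model are illustrative assumptions, not identifiers of the embodiment.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Condition:
    name: str
    action: str                            # operation associated with the condition
    model: Callable[[List[float]], bool]   # mathematical model over recent samples

    def evaluate(self, window: List[float]) -> bool:
        return self.model(window)

class Manager:
    """Centralized variant: all condition processing happens in the manager."""
    def __init__(self, conditions: List[Condition]):
        self.conditions = conditions
        self.windows: Dict[str, List[float]] = {}  # element id -> recent values

    def on_sensor_data(self, element_id: str, value: float) -> None:
        window = self.windows.setdefault(element_id, [])
        window.append(value)
        del window[:-32]                   # keep a bounded window of recent samples
        for cond in self.conditions:
            if cond.evaluate(window):
                self.dispatch_command(cond.action)

    def dispatch_command(self, action: str) -> None:
        # Stand-in for sending a control command to an element, the UI
        # device, or the external system over the network.
        print("dispatch:", action)

Here every raw detection value crosses the network to the manager; the second example below instead moves the evaluate step into the element itself.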
 On the other hand, when the input unit 430a of the UI device 400-1 receives a user operation input for changing the setting of a condition, information indicating the operation input is transmitted to the manager 200-1 via the communication unit 410 of the UI device 400-1. In the manager 200-1, the condition adjustment/generation unit 223 changes the setting of the condition according to the operation input, and the condition management unit 224 updates the condition and the information related to the condition with the changed setting.
 When a condition candidate presented by the system 10, or generated or selected by the user, is adjusted through real-time feedback, the sensor data provided by the sensors 130a included in one or more elements 100-1 is transmitted to the manager 200-1, where the condition determination unit 225 determines whether the condition is satisfied by the sensor data. The determination result is then transmitted to the UI device 400-1 via the communication unit 210 and conveyed to the user from the output unit 430b under the control of the control unit 420. The user can readjust or reselect the condition while referring to the conveyed information, that is, feedback on whether the condition is satisfied by the sensor data obtained when the detection target state is practiced.
  (Second example)
 FIG. 19 is a diagram illustrating a second example of a system configuration according to an embodiment of the present disclosure. Referring to FIG. 19, the system 10-2 includes an element 100-2, a manager 200-2, and a UI device 400-2.
 The element 100-2 includes a communication unit 110, a sensor control unit 120a, an actuator control unit 120b, a condition determination unit 121, a sensor 130a, and an actuator 130b. Like the sensor control unit 120a and the actuator control unit 120b, the condition determination unit 121 is included in the control unit 120 described above with reference to FIG. 1. The control unit 120 is realized by a microcontroller, a CPU, or the like. Like the condition determination unit 225 of the manager 200-1 in the first example, the condition determination unit 121 determines whether a set condition is satisfied by, for example, the detection values from the sensor 130a. The conditions used for the determination are stored, for example, in the storage unit 230 of the manager 200-1. In this case, the condition determination unit 121 executes the determination by referring to the condition via the communication unit 210 or by copying the condition into the condition determination unit 121. Alternatively, the conditions used for the determination may be stored in a storage unit (not shown) of the element 100-2. The condition determination unit 121 determines whether a condition is satisfied by the sensor data provided by the sensor 130a, for example according to a mathematical model defined in the condition. Since the communication unit 110, the sensor control unit 120a, the actuator control unit 120b, the sensor 130a, and the actuator 130b are the same as in the first example described above, redundant description is omitted.
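 By contrast with the centralized sketch above, a hedged sketch of this edge variant: the element holds copies of the conditions distributed by the manager, evaluates them locally against its own detection values, and reports only the fact that a condition held. Element and notify_manager are illustrative names; Condition is reused from the earlier sketch.

class Element:
    """Edge variant: the element evaluates conditions locally and
    transmits only 'condition satisfied' events, not raw values."""
    def __init__(self, element_id, conditions, notify_manager):
        self.element_id = element_id
        self.conditions = conditions       # copies distributed by the manager
        self.notify_manager = notify_manager
        self.window = []

    def on_detection(self, value):
        self.window.append(value)
        del self.window[:-32]
        for cond in self.conditions:
            if cond.evaluate(self.window):
                # Only the event crosses the network, saving bandwidth.
                self.notify_manager(self.element_id, cond.name)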
 The manager 200-2 includes a communication unit 210, an integrated control unit 222, a condition adjustment/generation unit 223, a condition management unit 224, and a condition handling unit 226. In this second example, the element 100-2 includes the condition determination unit 121, while the manager 200-2 does not include a condition determination unit. The integrated control unit 222 transmits the conditions stored in the storage unit 230 of the manager 200-2 by the condition management unit 224 to the element 100-2 via the communication unit 210. This transmission may be executed after the sensor data is acquired, or may be executed in advance, prior to acquisition of the sensor data. For example, the integrated control unit 222 may distribute condition information in advance to the condition determination unit 121 of the element 100-2 having the sensor 130a related to the condition. The condition handling unit 226 is notified of the determination result as to whether a condition has been satisfied by the condition determination unit 121 of the element 100-2 via the communication unit 210. Since the communication unit 210, the integrated control unit 222, the condition adjustment/generation unit 223, the condition management unit 224, and the condition handling unit 226 are otherwise the same as in the first example, redundant description is omitted.
 Since the configurations of the UI device 400-2 and the external system 500 are the same as those of the UI device 400-1 and the external system 500 in the first example, redundant description of these is omitted.
 In the second example of the system configuration described above, the condition determination unit 121 is included in the element 100-2 having the sensor 130a that acquires the detection values. Accordingly, the detection values of the sensor 130a included in one or more elements 100-2 are used inside the element 100-2 to determine whether a condition is satisfied. The condition determination unit 121 transmits information indicating the satisfied condition to the manager 200-2 via the communication unit 110. In the manager 200-2, the integrated control unit 222 identifies the element 100-2 that transmitted the information, and the condition handling unit 226 transmits, via the communication unit 210, a control command for causing the system 10 to realize the predetermined operation to the element 100-2, the UI device 400-2, or the external system 500.
 On the other hand, when the input unit 430a of the UI device 400-2 receives a user operation input for changing the setting of a condition, information indicating the operation input is transmitted to the manager 200-2 via the communication unit 410 of the UI device 400-2. In the manager 200-2, the condition adjustment/generation unit 223 changes the setting of the condition according to the operation input, and the condition management unit 224 updates the condition with the changed setting. Furthermore, the integrated control unit 222 may notify the element 100-2 of the condition update, and in the element 100-2, the condition determination unit 121 may change the setting of the condition it holds internally.
 When a condition candidate presented by the system 10, or generated or selected by the user, is adjusted through real-time feedback, the condition determination unit 121 determines whether the condition is satisfied based on the sensor data provided by the sensors 130a included in one or more elements 100-2. The determination result is transmitted, for example, to the manager 200-2 via the communication unit 110, forwarded from the communication unit 210 to the UI device 400-2, and conveyed to the user from the output unit 430b under the control of the control unit 420. The user can readjust or reselect the condition while referring to the conveyed information, that is, feedback on whether the condition is satisfied by the sensor data provided by the sensors.
  (Third example)
 FIG. 20 is a diagram illustrating a third example of a system configuration according to an embodiment of the present disclosure. Referring to FIG. 20, the system 10-3 includes an element 100-3, a manager 200-3, and a UI device 400-3.
 In this third example, the element 100-3 can have the same configuration as the element 100-2 in the second example, and the manager 200-3 can have the same configuration as the manager 200-1 in the first example. That is, in the third example, at least some of the elements 100-3 include the condition determination unit 121, and the manager 200-3 also includes the condition determination unit 225. Since the respective configurations of the element 100-3 and the manager 200-3 are the same as in the first or second example described above, redundant description is omitted.
 In the third example, whether to provide the condition determination unit 121 in the element 100-3 or to use the condition determination unit 225 of the manager 200-3 is decided according to, for example, the type of detection values of the sensor 130a, the size of the memory or storage of each element 100-3, and the bandwidth available for communication between each element 100-3 and the manager 200. When the condition determination unit 225 of the manager 200-3 is used for the detection values of a certain sensor 130a, the element 100-3 including that sensor 130a need not include the condition determination unit 121.
 Since the configurations of the UI device 400-3 and the external system 500 are the same as those of the UI devices 400-1 and 400-2 and the external system 500 in the first or second example, redundant description of these is omitted.
  (2-2. Example of processing flow)
 FIG. 21 is a flowchart illustrating an example of processing according to an embodiment of the present disclosure. In the following description, the processing entities in the first example of the system configuration are used as an example, but the same processing is possible in the second and third examples of the system configuration. In the following description, unless otherwise necessary, the system 10-1, the element 100-1, the manager 200-1, and the UI device 400-1 are simply referred to as the system 10, the element 100, the manager 200, and the UI device 400, respectively.
 First, the sensor to be used for the condition is selected (S101). The sensor is selected by a user operation, for example. In this case, for example, the input unit 430a of the UI device 400 acquires a user operation for selecting a sensor, and information indicating the user operation is transmitted to the manager 200 via the control unit 420 and the communication unit 410. Prior to this, the output unit 430b of the UI device 400 may present the user with a UI screen showing selectable sensors, based on information transmitted from the integrated control unit 222 of the manager 200 via the communication unit 210. At this point, the integrated control unit 222 may narrow down the selectable sensors from among all available sensors, for example according to the type of condition to be set. Alternatively, the integrated control unit 222 may automatically execute the sensor selection itself, according to the type of condition to be set or the like.
 In the manager 200, the integrated control unit 222 identifies the selected sensor from among the sensors 130a available in the system 10, based on the information indicating the user operation received from the UI device 400 via the communication unit 210. Furthermore, in setting a condition using the sensor data provided by the selected sensor, the integrated control unit 222 presents, via the output unit 430b of the UI device 400, a UI screen inquiring whether to automatically search for condition candidates (S103). The condition candidates here can include conditions that have already been generated and stored in the storage unit 230 of the manager 200 by the condition management unit 224.
 When a user operation indicating that condition candidates are to be automatically searched is acquired in S103 via the input unit 430a of the UI device 400 (YES), the system 10 enters a standby state for the detection target state. More specifically, the integrated control unit 222 of the manager 200 notifies the UI device 400 via the communication unit 210 that recording of the detection target state will start, and further requests that the detection target state be practiced. Upon receiving these, the output unit 430b of the UI device 400 notifies the user that recording of the detection target state will start, and then presents a UI screen prompting the user to practice the detection target state. In response, the user practices the detection target state (S105).
 Practicing the detection target state in S105 means, for example, that the user actually carries out the state that the user wants to be detected under the condition to be set, with the sensor 130a operating. More specifically, for example, when the user wants to detect that a refrigerator door has been opened using the detection values of the acceleration sensor 100a, in S105 the user attaches the acceleration sensor 100a to the refrigerator door and then actually opens the refrigerator door. While the detection target state is being practiced, sensing by one or more sensors 130a is performed.
 Here, the practice of the detection target state may be carried out after notifying the user that recording will start and requesting the user to practice the detection target state as described above, or it may be started automatically without outputting such a notification or request. The practice of the detection target state may be carried out only once, may be repeated and then ended automatically, or may be repeated until the user gives an instruction to end. When the practice of the detection target state is ended automatically, the end condition can be set, for example, as the number of repetitions reaching a preset count, or as the elapsed time since the practice started. Furthermore, in practicing the detection target state, time stamps recorded by the user at arbitrary timings, or images or audio recording the user practicing the detection target state, may additionally be acquired in order to make it easier to identify the section of detection values that indicates the detection target state.
 When the practice of the detection target state in S105 ends, the condition determination unit 225 determines whether there is a condition candidate that is satisfied by the detection values of the sensor 130a acquired during the practiced detection target state (S107). As described above, the condition candidates can include conditions that have already been generated and stored in the storage unit 230 of the manager 200 by the condition management unit 224. If there is no condition candidate (NO), the integrated control unit 222 may inquire of the UI device 400, via the communication unit 210, whether to retry the practice of the detection target state. Upon receiving this, the output unit 430b of the UI device 400 asks the user whether to retry the practice of the detection target state (S109). When the user chooses to retry, the practice of the detection target state in S105 is executed again.
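 As a rough illustration of the candidate search in S107, the following sketch slides each stored condition over the series recorded during the practice and collects the candidates that fire at least once. find_candidates and the fixed window length are assumptions for illustration, reusing the Condition class from the earlier sketch.

def find_candidates(recorded, conditions, window=32):
    """Return stored conditions satisfied somewhere in the detection
    values recorded while the detection target state was practiced."""
    hits = []
    for cond in conditions:
        for end in range(1, len(recorded) + 1):
            if cond.evaluate(recorded[max(0, end - window):end]):
                hits.append(cond)
                break                      # one firing suffices to propose it
    return hits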
 On the other hand, when there is a condition candidate in the determination of S107 (YES), the integrated control unit 222 decides to apply that condition candidate (S111). At this time, if there are multiple condition candidates, a UI screen showing the selectable condition candidates may be presented to the user via the output unit 430b of the UI device 400, and a user operation selecting a condition candidate may be acquired via the input unit 430a. For the applied condition candidate, the condition verification process (S115) is performed as necessary. Details of the condition verification process will be described later.
 On the other hand, when a user operation indicating that condition candidates are not to be automatically searched is acquired in S103 via the input unit 430a of the UI device 400 (NO), and when the user chooses not to retry in S109, the condition generation process (S113) is performed. For the condition generated by the condition generation process, the condition verification process (S115) is also performed as necessary.
  (Condition generation process)
 FIG. 22 is a flowchart showing the condition generation process in FIG. 21 in more detail. Referring to FIG. 22, in the condition generation process, the user is first asked, via the output unit 430b of the UI device 400, whether to use a composite condition (S121). A composite condition is a condition generated by combining multiple condition candidates, for example conditions that have already been generated and stored in the storage unit 230 of the manager 200 by the condition management unit 224. For example, in the case of a touch sensor, the condition "long press" and the condition "double press" can be combined to generate the composite condition "double press including a long press".
 When a user operation indicating that a composite condition is to be used is acquired in S121 via the input unit 430a of the UI device 400 (YES), the integrated control unit 222 presents, via the output unit 430b of the UI device 400, a UI screen for selecting the conditions that will become components of the composite condition, and the input unit 430a acquires the user operations selecting the component conditions (S123). When the selection of the component conditions is completed, the condition management unit 224 reads out the selected existing conditions, and the condition adjustment/generation unit 223 generates a composite condition using these existing conditions (S125).
 On the other hand, when a user operation indicating that a composite condition is not to be used is acquired in S121 via the input unit 430a of the UI device 400 (NO), the integrated control unit 222 determines whether there is data that can be reused for generating a condition using machine learning (S127). For example, when it is desired to generate a condition that includes identifying the user's face using the camera 100b, the reusable data can be images including the user's face that are already available in the system 10, such as photographs taken with the camera and accumulated in storage, or photographs posted to services such as social media provided by the external system 500. If such data exists in S127 (YES), it is diverted as training data (S129). More specifically, the integrated control unit 222 acquires the data from the external system 500 or the like and provides it to the condition adjustment/generation unit 223.
 On the other hand, if there is no data that can be reused for learning in S127 (NO), the user practices the detection target state (S131), in the same manner as the process of S105 described above with reference to FIG. 21. Here too, the practice of the detection target state may be carried out after notifying the user that recording will start or requesting the user to practice the detection target state, or it may be started automatically without outputting such a notification or request. The practice of the detection target state may be carried out only once, may be repeated and then ended automatically, or may be repeated until the user gives an instruction to end. In practicing the detection target state, time stamps recorded by the user at arbitrary timings, or images or audio recording the user practicing the detection target state, may additionally be acquired in order to make it easier to identify the section of detection values that indicates the detection target state. The data including the detection values of the sensor 130a acquired in the practice of the detection target state in S131 is recorded as training data by the integrated control unit 222 (S133).
 Furthermore, the condition adjustment/generation unit 223 performs learning using the training data acquired in S129 or S133 (S135). The learning here is so-called supervised machine learning, and various known machine learning techniques can be applied. Based on the learning result, the condition adjustment/generation unit 223 generates a new condition (S137). The condition generated in S125 or S137 passes through the condition verification process (S115) shown in FIG. 21 as necessary, and is stored by the condition management unit 224 in the storage unit 230 of the manager 200 or the like.
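 One way the learning in S135 and the condition generation in S137 could be realized is sketched below using scikit-learn, which is merely an assumed stand-in for the "various known machine learning techniques" mentioned above; the window features and the wrapper returned as the new condition model are likewise illustrative.

import numpy as np
from sklearn.svm import SVC

def to_features(window):
    # Illustrative summary features over a window of detection values.
    w = np.asarray(window, dtype=float)
    return [w.mean(), w.std(), w.max() - w.min()]

def learn_condition(positive_windows, negative_windows):
    """S135/S137: train a classifier on windows recorded during the
    practiced state (positive) and background windows (negative), and
    wrap the result as a new condition model."""
    X = [to_features(w) for w in positive_windows + negative_windows]
    y = [1] * len(positive_windows) + [0] * len(negative_windows)
    clf = SVC(kernel="rbf").fit(X, y)
    return lambda window: clf.predict([to_features(window)])[0] == 1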
  (Condition verification process)
 FIG. 23 is a flowchart showing the condition verification process in FIG. 21 in more detail. Referring to FIG. 23, in the condition verification process, the user is first asked, via the output unit 430b of the UI device 400, whether to verify the condition (S141). Here, when a user operation indicating that the condition is not to be verified is acquired via the input unit 430a (NO), the condition verification process is not performed. That is, the condition candidate applied in S111 of FIG. 21, or the new condition generated in S113, is confirmed as the condition as it is.
 On the other hand, when a user operation indicating that the condition is to be verified is acquired in S141 (YES), verification of the condition is performed (S143). The verification of the condition can be performed, for example, through real-time feedback from the system 10 to the user. In this case, for example, the user practices the detection target state while the sensor 130a related to the condition performs sensing. The detection values of the sensor 130a are transmitted to the manager 200, and in the manager 200 the condition determination unit 225 determines whether the condition is satisfied by the detection values. The determination result is then transmitted to the UI device 400 via the communication unit 210 and conveyed to the user from the output unit 430b under the control of the control unit 420.
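 A hedged sketch of this verification loop: each incoming detection value is evaluated immediately and the result is pushed to the UI, so the user sees while practicing whether the candidate condition fires. verify_live and ui_show are assumed names; condition follows the Condition interface of the earlier sketches.

def verify_live(sensor_stream, condition, ui_show, window=32):
    """S143: real-time feedback while the user re-practices the
    detection target state. sensor_stream yields detection values;
    ui_show renders the feedback on the UI device."""
    recent = []
    for value in sensor_stream:
        recent.append(value)
        del recent[:-window]
        satisfied = condition.evaluate(recent)
        ui_show("condition satisfied" if satisfied else "not satisfied")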
 In response to the verification result of the condition output in S143, the UI device 400 presents the user, via the output unit 430b, with a UI screen asking whether to change the condition itself (S145). Here, when a user operation indicating that the condition itself is to be changed is acquired via the input unit 430a (YES), the process returns to S103 shown in FIG. 21. That is, when changing the condition itself is selected, the process is re-executed from the inquiry as to whether to automatically search for condition candidates.
 On the other hand, if the condition itself is not changed in S145 (NO), the UI device 400 further presents the user, via the output unit 430b, with a UI screen asking whether to adjust the condition (S147). Here, when a user operation indicating that the condition is to be adjusted is acquired via the input unit 430a (YES), the condition is adjusted (S149). More specifically, for example, the UI device 400 presents the user with a UI screen for adjusting the condition, such as by changing setting values like the threshold or range of detection values, and the input unit 430a acquires the user operations changing the setting values. When the adjustment of the condition is completed, the verification of the condition in S143 is performed again.
 On the other hand, when a user operation indicating that no adjustment of the condition is necessary is acquired in S147 (NO), the condition verification process ends, and the condition candidate applied in S111 of FIG. 21, the new condition generated in S113, or such a condition candidate or condition with the adjustment of S149 applied, is confirmed.
  (Example of acquiring time stamps together with sensor data)
 FIG. 24 is a sequence diagram illustrating an example of processing for acquiring time stamps together with sensor data in an embodiment of the present disclosure. As described above, in the practice of the detection target state in S105 of FIG. 21 or S131 of FIG. 22, time stamps recorded by the user at arbitrary timings during the practice may additionally be acquired by the condition adjustment/generation unit 223. The figure shows the processing related to the practice of the detection target state when time stamps are acquired.
 In the illustrated example, first, in the manager 200, the integrated control unit 222 decides to start recording (S201). The recording may be started, for example, in response to an operation input from the user via the input unit 430a of the UI device 400, or may be started automatically as in the illustrated example. After the start of recording, the sensor 130a of the element 100 performs sensing (S203), and the sensor control unit 120a transmits the detection values to the manager 200 via the communication unit 110. In the manager 200, the integrated control unit 222 stores the sensor detection values received via the communication unit 210 (S205).
 Meanwhile, after the start of recording, the UI device 400 allows the user to perform a time stamp generation operation via the input unit 430a. The time stamp generation operation is executed at an arbitrary timing (S207), and a command instructing the generation of a time stamp is transmitted from the UI device 400 to the manager 200. In the manager 200, the integrated control unit 222 records a time stamp based on the command received via the communication unit 210 (S209). The time series of recorded time stamps and the time series of sensor detection values are synchronized. After this, the sensor 130a may further perform sensing (S211), and sensor detection values may be stored by the manager 200 (S213).
 The order of sensing and time stamp recording in the illustrated example is only an example; as described above, time stamps can be generated at arbitrary timings. For example, generation of a time stamp in S207 and S209 may be executed before sensing is performed in S203 and S205. Generation of time stamps is not limited to once and may be repeated. In this case, one or more rounds of sensing may be executed between successive time stamps, or time stamps may be generated consecutively without sensing in between. After the last time stamp, one or more rounds of sensing may be performed, or sensing may not be performed.
 After sensing and time stamp generation have each been performed at least once, in the manager 200, the integrated control unit 222 decides to end the recording (S215). The recording may be ended, for example, in response to a user operation input via the input unit 430a of the UI device 400, or may be ended automatically as in the illustrated example. When the recording ends automatically, for example, the number of rounds of sensing performed or the elapsed time from the start of recording can serve as the end condition. When time stamps are generated as in the illustrated example, the recording may also end automatically on the condition of the number of time stamps generated.
 After the recording ends in S215, the integrated control unit 222 outputs to the UI device 400 data in which the sensor detection values stored in S205, S213, and the like and the time stamps recorded in S209 and the like are associated in time series (S217). In the UI device 400, the output unit 430b uses this data to display a section setting UI for identifying the section of detection values that indicates the detection target state (S219). In the section setting UI, for example, the time stamps are displayed together with the time-series waveform of the sensor data, assisting the user's operation of setting a section of the sensor data. A specific example of the section setting UI will be described later. Alternatively, the condition adjustment/generation unit 223 may automatically extract the sensor data based on the time stamps.
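 The automatic extraction mentioned last could, for instance, cut a fixed interval around each user time stamp out of the synchronized series. The following sketch assumes the samples and the stamps share one clock; extract_sections and the interval widths are illustrative.

def extract_sections(samples, stamps, before=1.0, after=1.0):
    """Cut candidate sections of the recorded series around each user
    time stamp. samples: list of (t, value) pairs; stamps: list of t.
    All times lie on the same, synchronized clock (seconds)."""
    sections = []
    for ts in stamps:
        sections.append([(t, v) for (t, v) in samples
                         if ts - before <= t <= ts + after])
    return sections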
  (Example of acquiring images/audio together with sensor data)
 FIG. 25 is a sequence diagram illustrating an example of processing for acquiring images/audio together with sensor data in an embodiment of the present disclosure. As described above, in the practice of the detection target state in S105 of FIG. 21 or S131 of FIG. 22 and its sensing, images or audio recording the user practicing the detection target state may additionally be acquired in order to make it easier to identify the section of detection values that indicates the detection target state. FIG. 25 shows an example of the processing in such a case.
 In the illustrated example, the recording of images and audio is performed by a video/audio recording device 600. The recording device 600, although not shown in the preceding figures, is a camera or microphone that operates in conjunction with the manager 200, or a device incorporating these. The recording device 600 may be incorporated in the system 10, for example, or may be outside the system 10. When the recording device 600 is incorporated in the system 10, for example an element 100, the manager 200, or the UI device 400 that has a video or audio recording function may function as the recording device 600.
 First, in the manager 200, the integrated control unit 222 decides to start recording (S231). As in the example described above with reference to FIG. 24, the recording can be started by various triggers. The integrated control unit 222 notifies the recording device 600 of the start of recording. Upon receiving this notification, the recording device 600 starts recording video and audio (S233).
 Meanwhile, after the start of recording, the sensor 130a of the element 100 performs sensing (S235), and the sensor control unit 120a transmits the detection values to the manager 200 via the communication unit 110. In the manager 200, the integrated control unit 222 temporarily stores the sensor detection values received via the communication unit 210 (S237). The sensing in S235 and S237 may be executed only once or may be repeated many times.
 Thereafter, in the manager 200, the integrated control unit 222 decides to end the recording (S239). As in the example described above with reference to FIG. 24, the recording can be ended by various triggers. The integrated control unit 222 notifies the recording device 600 of the end of recording. Upon receiving this notification, the recording device 600 ends the video/audio recording and outputs the image/audio data to the manager 200 (S241). The integrated control unit 222 of the manager 200, having received the image/audio data, outputs to the UI device 400 data in which the sensor detection values stored in S237 and the like and the image/audio data received from the recording device 600 are associated in time series (S243). In the UI device 400, the output unit 430b uses this data to display a section setting UI for identifying the section of detection values that indicates the detection target state (S245). A specific example of the section setting UI in this case will also be described later, together with the section setting UI of the example of FIG. 24.
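 For a section setting UI that plays the recorded clip next to the sensor waveform, it suffices to map a playhead offset in the clip back to a sample index on the shared timeline. The sketch below assumes the clip started at clip_start on the same clock as the samples; media_time_to_sample is an illustrative helper.

import bisect

def media_time_to_sample(samples, clip_start, playhead):
    """Map a playhead offset (seconds into the recorded clip) to the
    index of the nearest sensor sample. samples: (t, value) pairs
    sorted by t; clip_start: recording start time on the same clock."""
    target = clip_start + playhead
    times = [t for (t, _) in samples]
    i = bisect.bisect_left(times, target)
    if i == 0:
        return 0
    if i == len(times):
        return len(times) - 1
    # choose the closer of the two neighboring samples
    return i if times[i] - target < target - times[i - 1] else i - 1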
  (2-3. User interface example)
 FIG. 26 is a diagram illustrating an example of a user interface for sensor selection according to an embodiment of the present disclosure. In FIG. 26, the sensors (sensors 130a) realized as functions of the elements 100 in the system 10 are displayed as a list 4301. The user can select a sensor for setting a desired condition from the list 4301. Thus, in the present embodiment, the condition adjustment/generation unit 223 of the manager 200 can provide, via the UI device 400, a user interface for selecting one or more sensors related to a condition from among the available sensors. Alternatively, the condition adjustment/generation unit 223 may automatically select one or more sensors related to a condition from among the available sensors.
 For example, in the illustrated example, "Button1" 4303 in the list is selected. At this point, for example, by expanding the list as shown on the right side of the figure, a condition list 4305 assigned to "Button1" is displayed. The condition list 4305 in the illustrated example shows that the conditions "LongPress" and "DoublePress" have already been assigned to "Button1". The user can assign a new condition using "Button1" by selecting the "Assign Condition" item displayed below them.
 FIG. 27 is a diagram illustrating an example of a user interface for condition selection according to an embodiment of the present disclosure. FIG. 27 shows an assignable condition list 4311 that is displayed, for example, when "Assign Condition" is selected in FIG. 26.
 The condition list 4311 shows, for example, the conditions that can be assigned to "Button1" in the example of FIG. 26. In the illustrated example, the condition list 4311 shows default conditions 4313 including "Press", "DoublePress", and "LongPress", a composite condition 4315, and a learning generation condition 4317. The default conditions 4313 are conditions that have already been generated and stored in the storage unit 230 of the manager 200 or the like. These conditions can be assigned to "Button1", for example, by specifying them on the condition list 4311. On the other hand, when the composite condition 4315 or the learning generation condition 4317 is selected, a condition generation process, for example as described above with reference to FIG. 22, is started.
 FIG. 28 is a diagram illustrating an example of a user interface for condition proposal according to an embodiment of the present disclosure. FIG. 28 shows an example of the UI displayed when "automatically search for condition candidates" is specified in S103, for example in the example described above with reference to FIG. 21. In this case, for example, a standby screen 4321 is displayed while waiting for the user to practice the detection target state for searching for condition candidates. The illustrated example shows that, for the sensor "Acceleration1", the system is waiting for the user to practice the detection target state.
 When a condition candidate is found after the user practices the detection target state (YES in S107 in the example of the flowchart of FIG. 21), a search result presentation screen 4323 is displayed. On the screen 4323, the name of the condition candidate is displayed, and a retry button 4325 is displayed for the case where the user considers the condition candidate inappropriate (this differs from the example of FIG. 21, in which no retry is performed when there is a condition candidate; either example can be adopted in the present embodiment).
 On the other hand, when no condition candidate is found (NO in S107 in the example of FIG. 21), a search result presentation screen 4327 is displayed. On the screen 4327, it is displayed that no condition candidate was found, and the retry button 4325 is displayed.
 FIG. 29 is a diagram illustrating an example of a user interface for composite condition generation according to an embodiment of the present disclosure. FIG. 29 shows a composite condition list 4331. The composite condition list 4331 includes ranks 4333, components 4335, and component candidates 4337. In the composite condition list 4331, the components 4335 indicate the individual conditions included in the composite condition, and the ranks 4333 indicate the rank of each condition. A component 4335 can be added or changed by selecting from the component candidates 4337, and a component 4335 can also be deleted.
 In the illustrated example, "DoublePress" is set at rank "1", "LongPress" at rank "2", and "Press" at rank "3". In this case, the condition is satisfied when "DoublePress" first, then "LongPress", and finally "Press" are executed in this order. The composite condition generated using the composite condition list 4331 can be referenced and selected, for example, as an arbitrarily set condition 4319 alongside the default conditions 4313 and the like in the condition list 4311 described above with reference to FIG. 27.
 FIG. 30 is a diagram illustrating another example of a user interface for composite condition generation according to an embodiment of the present disclosure. FIG. 30 shows the composite condition list 4331 as in the example of FIG. 29 above. In the illustrated example, however, the same rank 4333 "1" is set for both of the two components 4335, "DetectFace" and "DetectHand".
 In this case, the composite condition is satisfied when "DetectFace" and "DetectHand" occur simultaneously. FIG. 30 shows an example of a condition list 4341 different from the one shown in FIG. 27. The composite condition generated using the composite condition list 4331 can be referenced and selected in the condition list 4341 as an arbitrarily set condition 4349, together with default conditions 4343, a (new) combination condition 4345, and a learning generation condition 4347.
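 A hedged sketch of how such ranked composites could be evaluated: components with distinct ranks must hold in ascending rank order, while components sharing a rank must hold at the same time. CompositeCondition and the event-set representation are assumptions for illustration.

from itertools import groupby

class CompositeCondition:
    """Components are (rank, name) pairs. Distinct ranks must be
    satisfied in ascending order (as in FIG. 29); components sharing
    a rank must be satisfied simultaneously (as in FIG. 30)."""
    def __init__(self, components):
        ordered = sorted(components)                    # sort by rank
        self.stages = [set(name for _, name in grp)
                       for _, grp in groupby(ordered, key=lambda c: c[0])]
        self.stage = 0

    def on_events(self, active):
        """active: set of component-condition names holding right now."""
        if self.stages[self.stage] <= active:           # whole stage holds
            self.stage += 1
            if self.stage == len(self.stages):
                self.stage = 0
                return True                             # composite satisfied
        return False

For the example of FIG. 29 the stages would be [{'DoublePress'}, {'LongPress'}, {'Press'}]; for the example of FIG. 30 they collapse to the single stage [{'DetectFace', 'DetectHand'}].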
 FIG. 31 is a diagram illustrating an example of a user interface for training data input according to an embodiment of the present disclosure. FIG. 31 shows an example of the UI displayed when the detection target state is practiced and training data is recorded in S131 and S133, for example in the example described above with reference to FIG. 22.
 The screen 4401 and the screen 4407 in the illustrated example are displayed in order to convey to the user, when the user repeatedly inputs training data, "the timing to start inputting training data" and "the time accepted as a single piece of training data corresponding to the detection target state". When a start button 4403 is pressed on the screen 4401, a central circle 4405 begins to be cut away, as shown by a circle 4411. The time until the circle 4405 passes through the state of the circle 4411 and is completely cut away indicates "the time accepted as a single piece of training data". The timing at which the cut-away region completes a full turn and the display of the circle 4411 returns to the initial-state circle 4405 is "the timing to start (resume) inputting training data". To end the input of training data, the user presses a stop button 4409 displayed on the screen 4407.
 FIG. 32 is a diagram illustrating another example of a user interface for training data input according to an embodiment of the present disclosure. FIG. 32 shows a screen 4413 and a screen 4417 that differ in part from the screen 4401 and the screen 4407 described above with reference to FIG. 31. On the screen 4413, in addition to the start button 4403 and the circle 4405 displayed as on the screen 4401, a count setting part 4415 is displayed. The count setting part 4415 allows the user to set the number of repetitions of training data input.
 On the other hand, on the screen 4417, the circle 4411 is displayed as on the screen 4407, but no stop button is displayed. Instead, the screen 4417 displays a remaining count display 4419, which shows the number of repetitions remaining until the input of training data ends automatically.
 In the examples of the screens 4401, 4407, 4413, and 4417 above, when "the time accepted as a single piece of training data" can be automatically identified based on time stamps or images/audio acquired together with the sensor data, the user does not need to time the input of training data from the screen. In this case, therefore, the user interface may be simplified by omitting the circles 4405 and 4411.
 FIG. 33 is a diagram illustrating a first example of a user interface for inputting training data including an upper limit value and a lower limit value, according to an embodiment of the present disclosure. In the illustrated example, the definition of the condition includes a range of detection values included in the sensor data. FIG. 33 shows a screen 4421 for recording and referencing the upper limit value and the lower limit value of the range of detection values when generating a condition called "Fast&Slow". The screen 4421 includes a condition name 4423 and record/reference buttons 4425. The record/reference buttons 4425 include a record button 4425a and a reference button 4425b. When the record button 4425a is pressed, a user interface for training data input as described next is displayed, and training data can be input. On the other hand, when the reference button 4425b is pressed, a user interface for referencing existing data is displayed, and the existing data can be diverted as training data.
 FIG. 34 is a diagram illustrating a second example of a user interface for inputting training data that includes an upper limit value and a lower limit value, according to an embodiment of the present disclosure. FIG. 34 shows screens 4431 and 4435, which are displayed when the record button 4425a is pressed in the example of FIG. 33. On screen 4431, text 4433 instructs the user to input training data for the upper limit value. On screen 4435, text 4437 indicates that training data for the lower limit value is to be input.
 Alternatively, screens 4431 and 4435 may be displayed directly, without going through screen 4421. By automatically displaying the upper-limit input screen 4431 first and then the lower-limit input screen 4435 in succession, the user can input training data for the upper and lower limit values of the range while checking texts 4433 and 4437, without any additional operation. In this way, in the present embodiment, the condition adjustment/generation unit 223 of the manager 200 may output, via the UI device 400, an instruction to practice the detection target state so as to correspond to one or more detection values within the range relating to the definition of the condition (not limited to the upper and lower limit values; intermediate values may also be included).
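 The range condition derived from such upper- and lower-limit recordings might then be constructed roughly as in the sketch below, assuming each recording yields a list of scalar detection values; the helper names and the tolerance margin are illustrative assumptions, not part of the disclosure.

```python
def build_range_condition(upper_samples, lower_samples, margin=0.05):
    """Derive a detection-value range from the upper-limit and lower-limit
    training recordings (cf. the "Fast&Slow" condition of FIG. 33)."""
    upper = max(upper_samples)
    lower = min(lower_samples)
    pad = (upper - lower) * margin  # small tolerance around observed values
    def condition(value):
        return (lower - pad) <= value <= (upper + pad)
    return condition

# Usage (illustrative): fast = build_range_condition(fast_values, slow_values)
```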
 FIG. 35 is a diagram illustrating an example of a user interface for inputting training data for classification according to an embodiment of the present disclosure. FIG. 35 shows a screen 4441 for setting a condition of the type that classifies sensor data, such as identifying a user from a face shown in an image. Screen 4441 includes an index 4443, labels 4445, record buttons 4447, reference buttons 4449, and an add button 4451.
 Each label 4445 displays the label given to a class into which classification is performed according to the condition. Pressing a record button 4447 displays a user interface for inputting training data for the corresponding class, through which training data can be input. In the case of identifying a user by face, for example, training data can be input by photographing the target user's face. Pressing a reference button 4449, on the other hand, displays a user interface for referencing existing data, so that existing data can be reused as training data. In the case of identifying a user by face, for example, images containing the target user's face obtained from the user's local storage, a shared server, or the like can be reused as training data. Pressing the add button 4451 generates a new class, and a new label 4445, record button 4447, and reference button 4449 are displayed.
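 A class-labeled training set of this kind could be organized and trained as in the following sketch; a nearest-centroid classifier is used purely for illustration (the disclosure does not specify the learning method), and `extract_features()` is a hypothetical feature extractor.

```python
import math

def extract_features(sample):
    """Hypothetical: map a raw sample (e.g. a face image) to a feature vector."""
    raise NotImplementedError

def train_classes(labeled_samples):
    """labeled_samples: dict mapping class label -> list of raw samples
    (one entry per label 4445 on screen 4441). Returns per-class centroids."""
    centroids = {}
    for label, samples in labeled_samples.items():
        vecs = [extract_features(s) for s in samples]
        dim = len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / len(vecs)
                            for i in range(dim)]
    return centroids

def classify(sample, centroids):
    """Assign the sample to the class with the nearest centroid."""
    vec = extract_features(sample)
    return min(centroids, key=lambda lb: math.dist(vec, centroids[lb]))
```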
 FIG. 36 is a diagram illustrating a first example of a user interface for extracting training data according to an embodiment of the present disclosure. In the illustrated example, a user interface is provided for extracting, based on a user operation, the sensor data acquired during practice of the detection target state from among sensor data acquired continuously during, before, and after the practice. FIG. 36 shows a training data extraction screen 4501. The training data extraction screen 4501 includes an image/sound playback section 4503, a time-series waveform 4505 of sensor detection values, and an interval designation button 4507.
 In the illustrated example, the sensor detection values recorded while the user practiced the detection target state are temporarily stored by the integrated control unit 222 of the manager 200. Afterwards, the user identifies the interval corresponding to the detection target state within the sensor detection values, for example while referring to the training data extraction screen 4501. More specifically, the user adjusts the interval 4505r displayed together with the time-series waveform 4505 of the sensor detection values and then presses the interval designation button 4507, thereby specifying the interval of sensor detection values corresponding to each detection target state.
 Here, the image/sound playback section 4503 presents images or sounds recorded during practice of the detection target state. More specifically, the image/sound playback section 4503 may, for example, play back the images or sounds spanning the range of the time-series waveform 4505 of sensor detection values, play back the images or sounds within the interval 4505r selected by the user, or play back the images or sounds immediately before and after the start point or end point of the interval 4505r. In any case, the images or sounds played back are associated with the sensor data displayed by the time-series waveform 4505.
 FIG. 37 is a diagram illustrating a second example of a user interface for extracting training data according to an embodiment of the present disclosure. FIG. 37 shows a training data extraction screen 4511. The training data extraction screen 4511 includes an image/sound playback section 4513, a time-series waveform 4505 of sensor detection values, and an interval designation button 4507.
 The illustrated example is a modification of the example shown in FIG. 36. In this example, the image/sound playback section 4513 displays the images as a series of still images rather than as a moving image. This keeps down the size of the images acquired together with the sensor detection values, for example, reducing the processing load for synchronizing them with the sensor detection values in the manager 200 and for transmitting data to the UI device 400. In addition, presenting the detection target state as discrete still images rather than as a continuous moving image may, in some cases, make it easier for the user to set and adjust the interval 4505r.
 FIG. 38 is a diagram illustrating a third example of a user interface for extracting training data according to an embodiment of the present disclosure. FIG. 38 shows a training data extraction screen 4521. The training data extraction screen 4521 includes a time-series waveform 4505 of sensor detection values, an interval designation button 4507, and timestamps 4523.
 As described above with reference to FIG. 24, in the present embodiment a timestamp can be recorded in synchronization with the acquisition of training data. On the training data extraction screen 4521, a timestamp 4523 is displayed at the position on the time series corresponding to each timestamp so recorded. The displayed timestamps 4523 make it easier for the user to set the interval 4505r more accurately.
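 Extracting the user-designated interval from the continuously recorded data then reduces to a timestamp filter, as in this sketch; the tuple layout of the log is an assumption made for illustration.

```python
def extract_interval(sensor_log, start_ts, end_ts):
    """sensor_log: list of (timestamp, value) tuples recorded continuously
    during, before, and after practice. Returns the samples inside the
    interval 4505r designated on the extraction screen."""
    return [(ts, v) for ts, v in sensor_log if start_ts <= ts <= end_ts]

# Usage (illustrative): segment = extract_interval(log, interval_start, interval_end)
```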
 FIG. 39 is a diagram illustrating an example of a user interface for real-time feedback according to an embodiment of the present disclosure. FIG. 39 shows real-time feedback screens 4531 and 4541. In the illustrated example, the definition of the condition includes that the volume detected by the microphone 100g exceeds a threshold.
 When, during verification of the condition, the sensor detection value does not exceed the threshold (that is, the condition is not satisfied in the example above), screen 4531 is displayed. Screen 4531 shows a gauge 4533 that does not reach the threshold level 4535 and an × mark 4537 indicating that the condition is not satisfied. If a user viewing this result wishes to adjust the threshold, for example, the threshold can be changed in real time with the adjustment buttons 4539 and the verification carried out again.
 When, on the other hand, the sensor detection value exceeds the threshold during verification of the condition (that is, the condition is satisfied in the example above), screen 4541 is displayed. Screen 4541 shows a gauge 4533 exceeding the threshold level 4535 and a ○ mark 4543 indicating that the condition is satisfied. On screen 4541, as on screen 4531, the threshold can be changed in real time with the adjustment buttons 4539.
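 A hedged sketch of the verification loop behind these two screens follows; `read_volume`, `get_threshold` (reflecting the adjustment buttons 4539), `render`, and `stop` are hypothetical callables supplied by the surrounding UI.

```python
import time

def verify_threshold(read_volume, get_threshold, render, stop, period=0.1):
    """Continuously evaluate the microphone-volume condition and render the
    result in real time (cf. screens 4531 and 4541). `read_volume` samples
    the microphone 100g, `get_threshold` returns the (possibly adjusted)
    threshold, `render` draws the gauge 4533 and the mark, `stop` ends it."""
    while not stop():
        value = read_volume()
        threshold = get_threshold()        # may change during verification
        satisfied = value > threshold      # the condition under test
        render(value, threshold, satisfied)
        time.sleep(period)
```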
 FIG. 40 is a diagram illustrating another example of a user interface for real-time feedback according to an embodiment of the present disclosure. FIG. 40 shows real-time feedback screens 4551 and 4561. In the illustrated example, a condition is set that distinguishes between moving the acceleration sensor 100a slowly and moving it quickly by means of a predetermined range of detection values (corresponding to moving it quickly).
 When, during verification of the condition, the sensor detection value lies outside the predetermined range (that is, it is determined in the example above that the acceleration sensor 100a was moved slowly), screen 4551 is displayed. Screen 4551 shows a gauge 4553 on which the range 4555 is displayed, a detection value indicator 4557 pointing at the gauge 4553 outside the range 4555, and an × mark 4559 indicating that the condition is not satisfied. If a user viewing this result wishes to adjust the range, for example, the range 4555 may be dragged and moved and the verification carried out again.
 When, on the other hand, the sensor detection value lies within the predetermined range during verification of the condition (that is, it is determined in the example above that the acceleration sensor 100a was moved quickly), screen 4561 is displayed. Screen 4561 shows a gauge 4553 on which the range 4555 is displayed, a detection value indicator 4557 pointing at the gauge 4553 within the range 4555, and a ○ mark 4559 indicating that the condition is satisfied. On screen 4561, as on screen 4551, it may be possible to change the range in real time by moving the range 4555.
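 In the same spirit, the fast/slow discrimination reduces to a range membership test with a user-movable range, as in the sketch below; the function names and numeric values are illustrative only.

```python
def make_range_condition(low, high):
    """Condition that holds when the detection value falls inside the
    predetermined range (fast movement, cf. screen 4561)."""
    def condition(value):
        return low <= value <= high
    return condition

def move_range(low, high, delta):
    """Shift the range bounds, mirroring a drag of range 4555 on the gauge."""
    return low + delta, high + delta

# Usage (illustrative values): is_fast = make_range_condition(8.0, 15.0)
```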
 FIG. 41 is a diagram illustrating yet another example of a user interface for real-time feedback according to an embodiment of the present disclosure. FIG. 41 shows real-time feedback screens 4601a to 4601c. In the illustrated example, a condition is set that determines whether or not the user's face appears in an image acquired by the camera 100b.
 Screen 4601a is displayed when the user's face appears in the image and it is determined that the condition is satisfied. In this case, a ○ mark and text 4603a indicate that the condition is satisfied. Using the buttons 4605, the user can input feedback on whether such a determination result is correct. In the illustrated example the determination has been made correctly, so the "Correct" button 4605 would be pressed.
 Screen 4601b is displayed when the user's face does not appear in the image and it is determined that the condition is not satisfied. In this case, an × mark and text 4603b indicate that the condition is not satisfied. As with screen 4601a, the user can input feedback using the buttons 4605. In this example too, the determination has been made correctly, so the "Correct" button 4605 would be pressed.
 Screen 4601c, on the other hand, is displayed when it is determined that the condition is satisfied even though the user's face does not appear in the image. In this case, a ○ mark and text 4603c indicate that the condition is satisfied. In the illustrated example the determination has not been made correctly, so the user can input feedback for correcting the setting of the condition by pressing the "Wrong" button 4605.
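 Such Correct/Wrong feedback could be folded back into the training set along the lines of the following sketch; the retraining flow shown is an assumption, as the disclosure does not specify how feedback is consumed.

```python
def apply_feedback(training_set, sample, predicted, user_says_correct):
    """Append the verified sample to the training set with the label implied
    by the feedback (cf. the Correct/Wrong buttons 4605), so the condition
    can be retrained later. `predicted` is True if the condition fired."""
    actual = predicted if user_says_correct else (not predicted)
    training_set.append((sample, actual))
    return training_set
```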
 Note that, beyond the examples described above, various modifications are possible for the output or presentation of instructions or information in the embodiments of the present disclosure. For example, instead of or together with the output unit 430b of the UI device 400, instructions or information may be output or presented to the user via an actuator 130b of an element 100. This not only frees the user from having to carry the UI device 400 when setting or verifying conditions, but also makes it possible to implement graded output or presentation via various types of actuators 130b.
 For example, when information is presented via an LED lamp included in the actuator 130b, the LED lamp may be set to light progressively more strongly as the sensor detection value approaches a value at which the condition is satisfied (for example, a value close to a predetermined range, or a value approaching a threshold from above or below), and to blink once the condition is satisfied.
 Also, for example, when information is presented via a speaker included in the actuator 130b, the speaker may be set to output progressively louder sound as the sensor detection value approaches a value at which the condition is satisfied, and to play a predetermined melody once the condition is satisfied. Information may also be presented in natural language, for example with phrases such as "unlikely to react", "about to react", or "reacted".
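 One plausible realization of this graded actuator feedback maps the distance between the detection value and the satisfying value onto an output intensity, as sketched below; `set_led` and the scaling window are hypothetical.

```python
def graded_led_feedback(value, threshold, set_led, span=10.0):
    """Drive an LED more strongly as the detection value nears the threshold,
    and blink once the condition is satisfied (cf. the LED example for
    actuator 130b). `span` is an illustrative scaling window."""
    if value >= threshold:
        set_led(mode="blink")  # condition satisfied
    else:
        closeness = max(0.0, 1.0 - (threshold - value) / span)
        set_led(mode="on", intensity=closeness)  # brighter as value approaches
```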
 (2-4. Specific application examples)
 (First example)
 FIG. 42 is a diagram for describing a first application example according to an embodiment of the present disclosure. Referring to FIG. 42, in this first application example an acceleration sensor 100a is attached to the door of a refrigerator REF. Here, the user wants to use the acceleration sensor 100a to detect that the door of the refrigerator REF has opened and, in that case, to acquire an image of the interior of the refrigerator REF with a camera 100b (not shown), while not executing that processing when the door closes. In such a case, as practice of the detection target state, the user repeatedly opens and closes the door of the refrigerator REF while sensing with the acceleration sensor 100a is in progress. Meanwhile, the camera 100b is also filming the refrigerator REF. At this point, the camera 100b can be said to be functioning as the video/audio recording device 600 described above with reference to FIG. 25.
 In the illustrated example, the integrated control unit 222 of the manager 200 associates, in time series, the detection values of the acceleration sensor 100a obtained when the user practiced the detection target state as above with the images captured by the camera 100b, and presents, via the UI device 400, the training data extraction screen 4501 described above with reference to FIG. 36. On the training data extraction screen 4501, the image/sound playback section 4503 is displayed, and the images of the refrigerator REF captured by the camera 100b can be referred to alongside the time-series waveform 4505 of the sensor detection values. This allows the user to adjust the interval 4505r indicating the detection target state so that it includes the interval in which the door of the refrigerator REF is open and excludes the intervals in which the door is closed. By pressing the interval designation button 4507 with the interval 4505r aligned with the interval in which the door is open, the detection values of the acceleration sensor 100a while the door is open can be specified as training data for the case in which the condition is satisfied.
 FIG. 43 is a diagram showing an example of a user interface for creating a program in the example of FIG. 42. FIG. 43 shows the list 4301 described above with reference to FIG. 26, element icons 4701, and condition/processing icons 4703. In the illustrated example, the condition described above with reference to FIG. 42 is displayed as "Door Open" in the condition list 4305 of the list 4301. For example, by selecting "Door Open" in the condition list 4305, the user can assign the condition "Door Open" to the acceleration sensor 100a, as indicated by the element icon 4701a and the condition/processing icon 4703a.
 Further, FIG. 43 shows an element icon 4701b for the camera 100b together with a condition/processing icon 4703b indicating that a photograph is to be taken, and an icon 4701c indicating a cloud service together with a condition/processing icon 4703c indicating that the captured photograph is to be uploaded. The user interface shown in FIG. 43 can realize the first specific example described above with reference to FIG. 12. Note that the software 603a used in FIG. 12 to detect, based on acceleration, that the door had opened is, in the example shown in FIG. 43, incorporated into the condition set for the acceleration sensor 100a.
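 Expressed as code, the wiring that this user interface produces amounts to an event rule of roughly the following shape; every name here (`accel_condition`, `take_photo`, `upload`) is an illustrative placeholder rather than an API from the disclosure.

```python
def make_door_open_rule(accel_condition, take_photo, upload):
    """Rule corresponding to FIG. 43: when the "Door Open" condition set on
    the acceleration sensor 100a holds, photograph the refrigerator interior
    with the camera 100b and upload the photo to the cloud service."""
    def on_sample(accel_value):
        if accel_condition(accel_value):  # "Door Open" fires
            photo = take_photo()          # camera 100b
            upload(photo)                 # cloud service
    return on_sample
```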
 (Second example)
 FIG. 44 is a diagram for describing a second application example according to an embodiment of the present disclosure. Referring to FIG. 44, an element icon 4711a and a condition/processing icon 4713a for the acceleration sensor 100a are shown, together with an element icon 4711b and a condition/processing icon 4713b for the speaker 100e. The user interface shown in FIG. 44 can realize the fourth specific example described above with reference to FIG. 15. The condition/processing icon 4713a includes three conditions (Motion1, Motion2, Motion3) that are determined based on the detection values of the acceleration sensor 100a. These conditions are generated, for example, by learning from the result of a user (for example, a child) wearing the acceleration sensor 100a and repeatedly moving their limbs in specific patterns as practice of the detection target states. The condition/processing icon 4713b, on the other hand, indicates three sound effects corresponding respectively to the three conditions. Note that the software 603d used in FIG. 15 to play sound effects according to acceleration patterns is, in the example shown in FIG. 44, incorporated into the conditions set for the acceleration sensor 100a and the corresponding functions of the speaker 100e.
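 The mapping from classified motion to sound effect can likewise be captured in a few lines; `classify_motion` and `play` are hypothetical helpers, and the file names are invented for illustration.

```python
SOUND_EFFECTS = {"Motion1": "chime.wav",
                 "Motion2": "drum.wav",
                 "Motion3": "whistle.wav"}  # illustrative file names

def motion_to_sound(accel_window, classify_motion, play):
    """When a window of acceleration samples matches one of the learned
    motion conditions (cf. icon 4713a), play the paired sound effect on
    the speaker 100e (cf. icon 4713b)."""
    label = classify_motion(accel_window)  # "Motion1".."Motion3" or None
    if label in SOUND_EFFECTS:
        play(SOUND_EFFECTS[label])
```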
 (3. Hardware configuration)
 Next, the hardware configuration of an information processing apparatus according to an embodiment of the present disclosure will be described with reference to FIG. 45. FIG. 45 is a block diagram showing a hardware configuration example of an information processing apparatus according to an embodiment of the present disclosure. The illustrated information processing apparatus 900 can realize, for example, the element 100, the manager 200, the server 300, and/or the UI device 400 in the embodiments described above.
 The information processing apparatus 900 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 903, and a RAM (Random Access Memory) 905. The information processing apparatus 900 may also include a host bus 907, a bridge 909, an external bus 911, an interface 913, an input device 915, an output device 917, a storage device 919, a drive 921, a connection port 923, and a communication device 925. Furthermore, the information processing apparatus 900 may include an imaging device 933 and a sensor 935 as necessary. The information processing apparatus 900 may include, instead of or in addition to the CPU 901, a processing circuit such as a DSP (Digital Signal Processor) or an ASIC (Application Specific Integrated Circuit).
 The CPU 901 functions as an arithmetic processing device and a control device, and controls all or part of the operations within the information processing apparatus 900 according to various programs recorded in the ROM 903, the RAM 905, the storage device 919, or a removable recording medium 927. The ROM 903 stores programs, operation parameters, and the like used by the CPU 901. The RAM 905 temporarily stores programs used in the execution of the CPU 901, parameters that change as appropriate during that execution, and the like. The CPU 901, the ROM 903, and the RAM 905 are connected to one another by a host bus 907 formed of an internal bus such as a CPU bus. The host bus 907 is further connected, via a bridge 909, to an external bus 911 such as a PCI (Peripheral Component Interconnect/Interface) bus.
 The input device 915 is a device operated by the user, such as a mouse, keyboard, touch panel, buttons, switches, or levers. The input device 915 may be, for example, a remote control device using infrared or other radio waves, or an externally connected device 929 such as a mobile phone compatible with operation of the information processing apparatus 900. The input device 915 includes an input control circuit that generates an input signal based on information input by the user and outputs it to the CPU 901. By operating the input device 915, the user inputs various data to the information processing apparatus 900 and instructs it to perform processing operations.
 The output device 917 is a device capable of notifying the user of acquired information visually or audibly. The output device 917 may be, for example, a display device such as an LCD (Liquid Crystal Display), PDP (Plasma Display Panel), or organic EL (Electro-Luminescence) display, an audio output device such as a speaker or headphones, or a printer device. The output device 917 outputs results obtained by the processing of the information processing apparatus 900 as video such as text or images, or as sound such as voice or audio.
 The storage device 919 is a data storage device configured as an example of the storage unit of the information processing apparatus 900. The storage device 919 is composed of, for example, a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or a magneto-optical storage device. The storage device 919 stores programs executed by the CPU 901, various data, various data acquired from outside, and the like.
 The drive 921 is a reader/writer for a removable recording medium 927 such as a magnetic disk, optical disc, magneto-optical disc, or semiconductor memory, and is built into or externally attached to the information processing apparatus 900. The drive 921 reads out information recorded on the attached removable recording medium 927 and outputs it to the RAM 905. The drive 921 also writes records to the attached removable recording medium 927.
 The connection port 923 is a port for connecting devices directly to the information processing apparatus 900. The connection port 923 may be, for example, a USB (Universal Serial Bus) port, an IEEE 1394 port, or a SCSI (Small Computer System Interface) port. The connection port 923 may also be an RS-232C port, an optical audio terminal, an HDMI (registered trademark) (High-Definition Multimedia Interface) port, or the like. By connecting the externally connected device 929 to the connection port 923, various data can be exchanged between the information processing apparatus 900 and the externally connected device 929.
 The communication device 925 is a communication interface composed of, for example, a communication device for connecting to a communication network 931. The communication device 925 may be, for example, a communication card for wired or wireless LAN (Local Area Network), Bluetooth (registered trademark), or WUSB (Wireless USB). The communication device 925 may also be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), or a modem for various kinds of communication. The communication device 925 transmits and receives signals and the like to and from the Internet and other communication devices, for example, using a predetermined protocol such as TCP/IP. The communication network 931 connected to the communication device 925 is a network connected by wire or wirelessly, and is, for example, the Internet, a home LAN, infrared communication, radio wave communication, or satellite communication.
 The imaging device 933 is a device that images real space and generates a captured image, using various members such as an imaging element, for example a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor), and a lens for controlling the formation of a subject image on the imaging element. The imaging device 933 may capture still images or moving images.
 The sensor 935 is any of various sensors such as an acceleration sensor, gyro sensor, geomagnetic sensor, optical sensor, or sound sensor. The sensor 935 acquires information about the state of the information processing apparatus 900 itself, such as the attitude of its housing, and information about the surrounding environment of the information processing apparatus 900, such as the brightness and noise around it. The sensor 935 may also include a GPS sensor that receives GPS (Global Positioning System) signals and measures the latitude, longitude, and altitude of the apparatus.
 An example of the hardware configuration of the information processing apparatus 900 has been shown above. Each of the components described above may be configured using general-purpose members, or may be configured by hardware specialized for the function of the component. Such a configuration can be changed as appropriate according to the technical level at the time of implementation.
 (4. Supplement)
 Embodiments of the present disclosure may include, for example, an information processing apparatus as described above (a manager, or an element, server, or UI device that also functions as a manager), a system, an information processing method executed by the information processing apparatus or the system, a program for causing the information processing apparatus to function, and a non-transitory tangible medium on which the program is recorded.
 The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the technical scope of the present disclosure is not limited to these examples. It is evident that a person having ordinary knowledge in the technical field of the present disclosure could conceive of various alterations or modifications within the scope of the technical ideas described in the claims, and it is understood that these naturally belong to the technical scope of the present disclosure.
 The effects described in this specification are merely explanatory or illustrative, and are not limiting. That is, the technology according to the present disclosure may achieve, together with or instead of the above effects, other effects that are apparent to those skilled in the art from the description of this specification.
 The following configurations also belong to the technical scope of the present disclosure.
(1) An information processing apparatus including a processing circuit that realizes: a sensor data acquisition function of acquiring sensor data provided by one or more sensors; and a condition setting function of setting, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state.
(2) The information processing apparatus according to (1), wherein the condition setting function includes a function of setting the condition by performing machine learning using, as training data, the sensor data acquired during practice of the detection target state.
(3) The information processing apparatus according to (2), wherein the processing circuit further realizes a function of causing an instruction on the timing of practice of the detection target state to be output.
(4) The information processing apparatus according to (2) or (3), wherein the definition of the condition includes a range of detection values included in the sensor data, and the processing circuit further realizes a function of causing an instruction to be output to practice the detection target state so as to correspond to one or more detection values within the range.
(5) The information processing apparatus according to any one of (2) to (4), wherein the sensor data acquisition function acquires the sensor data during practice of the detection target state and before and after it, and the processing circuit further realizes a sensor data extraction function of extracting, from the acquired sensor data, the sensor data acquired during practice of the detection target state.
(6) The information processing apparatus according to (5), wherein the processing circuit further realizes a function of acquiring a timestamp recorded during practice of the detection target state, and the sensor data extraction function extracts the sensor data based on the timestamp.
(7) The information processing apparatus according to (5) or (6), wherein the sensor data extraction function extracts the sensor data based on a user operation, and the processing circuit further realizes a function of acquiring images or sounds recorded during practice of the detection target state and a function of presenting the images or sounds to a user in association with the sensor data.
(8) The information processing apparatus according to any one of (2) to (7), wherein the detection target state is practiced only once.
(9) The information processing apparatus according to any one of (2) to (7), wherein the detection target state is practiced repeatedly.
(10) The information processing apparatus according to (1), wherein the condition setting function includes a function of verifying the condition based on the sensor data acquired during practice of the detection target state.
(11) The information processing apparatus according to (10), wherein the processing circuit further realizes an existing condition reading function of reading existing conditions, and the condition includes a single existing condition or a combination of a plurality of existing conditions.
(12) The information processing apparatus according to (10) or (11), wherein the processing circuit further realizes a function of changing the setting of the condition based on a user operation during verification of the condition.
(13) The information processing apparatus according to any one of (1) to (12), wherein the processing circuit further realizes a sensor selection function of selecting the one or more sensors from a group of available sensors based on a user operation.
(14) The information processing apparatus according to any one of (1) to (13), wherein the definition of the condition includes a threshold set for a detection value included in the sensor data.
(15) The information processing apparatus according to any one of (1) to (14), wherein the condition includes a condition established by a combination of sensor data provided by a plurality of sensors.
(16) The information processing apparatus according to any one of (1) to (15), wherein the processing circuit further realizes a function of causing instructions or information to be output or presented to a user via a device different from the device including the one or more sensors.
(17) The information processing apparatus according to any one of (1) to (16), wherein the processing circuit further realizes a function of operating an actuator when the condition is satisfied and a function of causing instructions or information to be output or presented to a user via the actuator.
(18) A system including: one or more sensors; an information processing apparatus that acquires sensor data provided by the one or more sensors and sets, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state; and a terminal device that outputs or presents instructions or information, provided by the information processing apparatus, relating to the setting of the condition.
(19) An information processing method including, by a processing circuit of an information processing apparatus: acquiring sensor data provided by one or more sensors; and setting, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state.
(20) A program for causing a processing circuit to realize: a sensor data acquisition function of acquiring sensor data provided by one or more sensors; and a condition setting function of setting, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state.
10 system
100 element
110 communication unit
120 control unit
121 condition determination unit
130 function unit
140 power supply unit
200 manager
210 communication unit
220 control unit
222 integrated control unit
223 condition adjustment/generation unit
224 condition management unit
225 condition determination unit
226 condition handling unit
230 storage unit
300 server
310 communication unit
320 control unit
330 storage unit
400 UI device
410 communication unit
420 control unit
430 input/output unit

Claims (20)

  1.  An information processing apparatus comprising a processing circuit that realizes: a sensor data acquisition function of acquiring sensor data provided by one or more sensors; and a condition setting function of setting, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state.
  2.  The information processing apparatus according to claim 1, wherein the condition setting function includes a function of setting the condition by performing machine learning using, as training data, the sensor data acquired during practice of the detection target state.
  3.  The information processing apparatus according to claim 2, wherein the processing circuit further realizes a function of causing an instruction on the timing of practice of the detection target state to be output.
  4.  The information processing apparatus according to claim 2, wherein the definition of the condition includes a range of detection values included in the sensor data, and the processing circuit further realizes a function of causing an instruction to be output to practice the detection target state so as to correspond to one or more detection values within the range.
  5.  The information processing apparatus according to claim 2, wherein the sensor data acquisition function acquires the sensor data during practice of the detection target state and before and after it, and the processing circuit further realizes a sensor data extraction function of extracting, from the acquired sensor data, the sensor data acquired during practice of the detection target state.
  6.  The information processing apparatus according to claim 5, wherein the processing circuit further realizes a function of acquiring a timestamp recorded during practice of the detection target state, and the sensor data extraction function extracts the sensor data based on the timestamp.
  7.  The information processing apparatus according to claim 5, wherein the sensor data extraction function extracts the sensor data based on a user operation, and the processing circuit further realizes a function of acquiring images or sounds recorded during practice of the detection target state and a function of presenting the images or sounds to a user in association with the sensor data.
  8.  The information processing apparatus according to claim 2, wherein the detection target state is practiced only once.
  9.  The information processing apparatus according to claim 2, wherein the detection target state is practiced repeatedly.
  10.  The information processing apparatus according to claim 1, wherein the condition setting function includes a function of verifying the condition based on the sensor data acquired during practice of the detection target state.
  11.  The information processing apparatus according to claim 10, wherein the processing circuit further realizes an existing condition reading function of reading existing conditions, and the condition includes a single existing condition or a combination of a plurality of existing conditions.
  12.  The information processing apparatus according to claim 10, wherein the processing circuit further realizes a function of changing the setting of the condition based on a user operation during verification of the condition.
  13.  The information processing apparatus according to claim 1, wherein the processing circuit further realizes a sensor selection function of selecting the one or more sensors from a group of available sensors based on a user operation.
  14.  The information processing apparatus according to claim 1, wherein the definition of the condition includes a threshold set for a detection value included in the sensor data.
  15.  The information processing apparatus according to claim 1, wherein the condition includes a condition established by a combination of sensor data provided by a plurality of sensors.
  16.  The information processing apparatus according to claim 1, wherein the processing circuit further realizes a function of causing instructions or information to be output or presented to a user via a device different from the device including the one or more sensors.
  17.  The information processing apparatus according to claim 1, wherein the processing circuit further realizes a function of operating an actuator when the condition is satisfied and a function of causing instructions or information to be output or presented to a user via the actuator.
  18.  A system comprising: one or more sensors; an information processing apparatus that acquires sensor data provided by the one or more sensors and sets, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state; and a terminal device that outputs or presents instructions or information, provided by the information processing apparatus, relating to the setting of the condition.
  19.  An information processing method comprising, by a processing circuit of an information processing apparatus: acquiring sensor data provided by one or more sensors; and setting, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state.
  20.  A program for causing a processing circuit to realize: a sensor data acquisition function of acquiring sensor data provided by one or more sensors; and a condition setting function of setting, based on the sensor data acquired during practice of a detection target state, a condition relating to the sensor data for detecting the detection target state.
PCT/JP2015/054820 2014-05-15 2015-02-20 Information-processing device, system, information-processing method, and program WO2015174113A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014-101509 2014-05-15
JP2014101509 2014-05-15

Publications (1)

Publication Number Publication Date
WO2015174113A1 true WO2015174113A1 (en) 2015-11-19

Family

ID=54479660

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/054820 WO2015174113A1 (en) 2014-05-15 2015-02-20 Information-processing device, system, information-processing method, and program

Country Status (1)

Country Link
WO (1) WO2015174113A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017142798A (en) * 2016-02-10 2017-08-17 セールスフォース ドット コム インコーポレイティッド State machine builder equipped with improved interface and processing of state independent event
JP2022536417A (en) * 2019-06-17 2022-08-16 三菱重工業株式会社 Surveillance system with event detection for infrastructure and/or vehicles

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060095893A1 (en) * 2004-09-30 2006-05-04 The Regents Of The University Of California A California Corporation Embedded systems building blocks
JP2007280057A (en) * 2006-04-06 2007-10-25 Sony Corp Data processor, data processing method, and program
JP2014057129A (en) * 2012-09-11 2014-03-27 Casio Comput Co Ltd Mobile device, control method of mobile device, and control program of mobile device

Similar Documents

Publication Publication Date Title
US11693530B2 (en) Information processing device, display control method, and program
EP3040806B1 (en) Media synchronized control of peripherals
US11991580B2 (en) Method and system for realizing function by causing elements of hardware to perform linkage operation
KR20160013107A (en) Signaling device for teaching learning devices
KR20160034243A (en) Apparatus and methods for providing a persistent companion device
US10321091B2 (en) Information processing device and method of information processing
KR102498714B1 (en) Electronic device and method for providing content
CN105637448A (en) Contextualizing sensor, service and device data with mobile devices
CN109248414A (en) Training based reminding method, device, equipment and readable storage medium storing program for executing
KR20160110634A (en) Smart vegetation cultivating apparatus and smart vegetation cultivating method
JP7031578B2 (en) Information processing equipment, information processing methods and programs
WO2015174113A1 (en) Information-processing device, system, information-processing method, and program
CN106564059B (en) A kind of domestic robot system
CN110178159A (en) Audio/video wearable computer system with integrated form projector
JP6897696B2 (en) Servers, methods, and programs
JPWO2020022371A1 (en) Robots and their control methods and programs
CN106534517B (en) Operating status method of adjustment, device and electronic equipment
CN109257490A (en) Audio-frequency processing method, device, wearable device and storage medium
CN105049924A (en) Proximity detection of candidate companion display device in same room as primary display using camera
JPWO2020153038A1 (en) Information processing device and information processing method
JP2009214235A (en) Autonomous action equipment control system for autonomous action equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15792060

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: JP

122 Ep: pct application non-entry in european phase

Ref document number: 15792060

Country of ref document: EP

Kind code of ref document: A1