WO2023286105A1 - Information processing device, information processing system, information processing program, and information processing method

Info

Publication number
WO2023286105A1
Authority
WO
WIPO (PCT)
Prior art keywords: information, sensor, target, unit, state
Prior art date
Application number
PCT/JP2021/026060
Other languages
French (fr)
Japanese (ja)
Inventor
井上哲也
小栗佳祐
Original Assignee
株式会社ウフル
Priority date
Filing date
Publication date
Application filed by 株式会社ウフル
Priority to JP2023534430A (JPWO2023286105A1/ja)
Priority to PCT/JP2021/026060 (WO2023286105A1/en)
Publication of WO2023286105A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Definitions

  • the present invention relates to an information processing device, an information processing system, an information processing program, and an information processing method.
  • AI (artificial intelligence)
  • Teacher data: a set of known inputs and outputs
  • Patent Document 1 below (Japanese Patent No. 6492880) proposes machine learning that uses feature information extracted from an image together with user information input by a user who views the image. The inference results produced by AI depend on the teacher data used for machine learning. Therefore, in the field of machine learning, a technique for generating teacher data suited to the intended use of the AI is desired.
  • An information processing apparatus includes: an acquisition unit that acquires the detection result of a sensor that detects a target and acquires, from a predetermined terminal associated with the target, state information that is obtained from information different from the detection result of the sensor and that represents the state of the target; and a generation unit that generates data used for machine learning using the detection result and the state information acquired by the acquisition unit.
  • An information processing system includes the above information processing apparatus and an estimation unit that processes the detection result acquired by the acquisition unit, using a model generated by machine learning of a learning unit, to estimate the state of the target.
  • An information processing program causes a computer to execute processing including: acquiring the detection result of a sensor that detects a target; acquiring, from a predetermined terminal associated with the target, state information that is obtained from information different from the detection result of the sensor and that represents the state of the target; and generating data used for machine learning using the acquired detection result and state information.
  • An information processing method using one or more computers includes: receiving, by a communication unit of the computer, the detection result of a sensor that detects a target; receiving, from a predetermined terminal associated with the target, state information obtained from information different from the detection result and representing the state of the target; and generating data used for machine learning using the received detection result and state information.
  • FIG. 1 is a diagram showing an information processing system according to the first embodiment.
  • FIG. 2 is a diagram showing processing related to a predetermined terminal.
  • FIG. 3 is a diagram showing association between sensors and terminals.
  • FIG. 4 is a diagram showing the process of deriving the detection result of the sensor corresponding to state information.
  • FIG. 5 is a diagram showing the process of executing machine learning.
  • FIG. 6 is a diagram showing the information processing method according to the first embodiment.
  • FIG. 7 is a diagram showing the information processing system according to the second embodiment.
  • FIG. 8 is a diagram showing the information processing system according to the third embodiment.
  • FIG. 9 is a diagram showing the information processing system according to the fourth embodiment.
  • FIG. 10 is a diagram showing the computer according to the embodiment.
  • FIG. 1 is a diagram showing an information processing system according to the first embodiment.
  • the information processing system 1 associates the first information and the second information regarding the target 2 to generate data used for machine learning.
  • This data includes teacher data for machine learning or data that is the source of the teacher data.
  • data used for machine learning is collectively referred to as teacher data as appropriate.
  • the first information corresponds to the result of detection of the target 2 by the sensor 10, and the second information corresponds to the result of the person 3 judging the state of the target 2.
  • the subject 2 is a person.
  • the target 2 will be referred to as the target person 2 as appropriate.
  • the subject 2 is, for example, an employee working in an office.
  • the person 3 is, for example, an evaluator who evaluates the working condition of the subject person 2 . Person 3 will be referred to as evaluator 3 as appropriate in the following description.
  • the information processing system 1 uses machine learning to generate an AI model that estimates the second information from the first information.
  • the above model will be referred to as an inference model as appropriate.
  • The information processing system 1 performs machine learning by associating, for example, the detection result (first information) obtained by the sensor 10 detecting the subject 2 with the evaluation result (second information) obtained by the evaluator 3 evaluating the state of the subject 2.
  • the information processing system 1 generates an AI inference model for estimating the state of the subject 2 from the detection result of the sensor 10 by machine learning.
  • the information processing system 1 estimates the state of the subject 2 from the detection results of the sensor 10 using AI in which this inference model is implemented. For example, the information processing system 1 estimates the working condition as the condition of the subject 2 .
  • the information processing system 1 provides the application 60 with the estimated work status of the subject person 2 .
  • the application 60 includes, for example, an application that manages attendance of the subject 2 .
  • the application 60 for example, accumulates the results of estimating the working condition of the subject 2 and creates a log.
  • The superior of the subject 2 (for example, the evaluator 3) can grasp the attendance of the subject 2 from the log generated by the application 60.
  • The AI in which the inference model is implemented estimates the state of the subject so that the state can be managed. For example, after the inference model is generated, the information processing system 1 reduces the time and effort the evaluator 3 spends on evaluation and helps to evaluate the state of the subject 2 objectively when managing the state of the subject 2.
  • the information processing system 1 includes a sensor 10 , a terminal 20 , an information processing device 30 , an information processing device 40 , and an information processing device 50 .
  • the sensor 10 detects the target 2.
  • Sensors 10 include, for example, sensors associated with subject 2 .
  • the sensor 10 may be a wearable device such as a smartwatch worn by the subject 2 .
  • the sensor 10 detects, for example, information representing the life activity of the subject 2 .
  • the information representing vital activity includes, for example, at least one of the subject's 2 body temperature, the subject's 2 surface humidity, pulse, blood pressure, blood oxygen concentration, and respiratory rate.
  • the sensor 10 may detect the movement (eg, acceleration) of the part (eg, hand) of the subject 2 to which the sensor 10 is attached.
  • the sensor 10 may include smart glasses and may detect the line of sight of the subject 2 .
  • the terminal 20 includes the terminal of the evaluator 3 associated with the subject 2.
  • the status information includes, for example, information input to the terminal 20 by the evaluator 3 (working status of the subject 2, etc.).
  • the terminal 20 may be a portable device (eg, smart phone, tablet) possessed by the evaluator 3, or may be a stationary device (eg, personal computer).
  • FIG. 2 is a diagram showing an example of processing related to a terminal.
  • the terminal 20 is a smartphone, for example, and includes a display unit 21 .
  • A transmissive touch pad, for example, is overlaid on the display unit 21, and the display unit 21 and the touch pad constitute a touch panel.
  • the terminal 20 displays information D1 to D4 on the display unit 21, for example.
  • the information D1 includes information indicating that the input of the condition of the subject 2 is accepted.
  • the information D1 includes the text "What is your status?" as information notifying that the status input is being accepted.
  • the information D1 may include information specifying the target 2 .
  • the information D1 includes text of the name of the target person 2 (eg, “Mr. A”) as information specifying the target 2 .
  • the information specifying the target 2 may include information different from the target person's name, and may include, for example, identification information (eg, employee number) of the target person 2 .
  • the information specifying the target 2 may be displayed separately from the information notifying that the status of the target is being received.
  • the information D1 may not include information specifying the target 2 . For example, when the number of subjects 2 corresponding to the terminal 20 is 1, the evaluator 3 can specify the subject whose state is to be input even if the information specifying the subject 2 is not displayed.
  • Information D2-D4 includes information corresponding to a plurality of candidates representing the state of target 2.
  • information D2 to information D4 are items (eg, icons) in the displayed image.
  • information D2, information D3, and information D4 will be referred to as icon D2, icon D3, and icon D4, respectively.
  • Icon D2 includes the text "at work”.
  • Icon D3 includes the text "resting”.
  • Icon D4 contains the text "Other”.
  • the texts "at work”, "resting", and “others” are information representing candidate states of the subject 2, respectively.
  • Information representing candidates for the state of the object 2 may be represented in a form different from text (for example, a pictogram).
  • the terminal 20 detects that information associated with the icon D2 (eg, "at work") has been input as information representing the state of the target 2. For example, the evaluator 3 visually recognizes the subject 2 and determines that the subject 2 is at work. In this case, when the evaluator 3 touches the icon D2, information corresponding to "at work" is input to the terminal 20 as the status information.
  • The "Other" icon D4 is selected when it is determined that the state of the target 2 differs from the state candidates represented by the other icons (icon D2, icon D3), for example, when the state of the target 2 is "absent".
  • the “other” icon D4 may be selected when the evaluator 3 cannot determine the condition of the subject 2, such as when the evaluator 3 is in a position where the subject 2 cannot be recognized.
  • the evaluator 3 may directly recognize the real object of the subject 2 and input the state of the subject 2 into the terminal 20 .
  • Direct recognition includes, for example, evaluator 3 obtaining information about subject 2 without going through a device.
  • the evaluator 3 may judge the state of the subject 2 by directly seeing the subject 2 or directly hearing the sound emitted by the subject 2 , and input the state of the subject 2 to the terminal 20 .
  • the evaluator 3 may ask the subject 2 what state he or she is in using voice, text, or gestures, and input the state of the subject 2 into the terminal 20 based on the response from the subject 2. .
  • the evaluator 3 may indirectly recognize the subject 2 and input the state of the subject 2 to the terminal 20 .
  • Indirectly perceiving includes, for example, evaluator 3 obtaining information about subject 2 via a device.
  • the evaluator 3 inputs the state of the target 2 to the terminal 20 using the result of detecting the target 2 with a sensor (suitably referred to as a second sensor) different from the sensor 10 (suitably referred to as a first sensor).
  • the second sensor may differ from the first sensor in the type of information it detects.
  • the first sensor may be a sensor that detects the subject's 2 vital activity
  • the second sensor may be a camera that detects the subject's 2 appearance.
  • the evaluator 3 may input the state of the subject 2 to the terminal 20 by viewing an image (eg, moving image) of the subject 2 captured by a camera.
  • the positional relationship of the second sensor with respect to the target 2 may be different from that of the first sensor.
  • the second sensor may have a different distance from the target 2 than the first sensor.
  • the first sensor may be a sensor attached to the target 2, such as a wearable device, and the second sensor may be a sensor remote from the target 2, such as a surveillance camera.
  • the second sensor may be a sensor that detects the target 2 in a direction different from that of the first sensor.
  • For example, both the first sensor and the second sensor may include a camera, the first sensor may detect (eg, photograph) the target 2 from a first direction (eg, from the side), and the second sensor may detect (eg, photograph) the target 2 from a second direction (eg, from the front) different from the first direction.
  • the information processing device 30 includes a communication unit 31, a processing unit 32, and a storage unit 33.
  • the communication unit 31 controls communication performed by the information processing device 30 with other devices.
  • the processing unit 32 processes information.
  • the processing executed by the processing unit 32 includes, for example, at least one of arithmetic processing, image processing, analysis processing, determination processing, inference processing, and control processing.
  • The storage unit 33 stores at least one of information acquired by the information processing device 30 from another device, information used for processing executed by the processing unit 32, and information generated by processing executed by the processing unit 32.
  • the storage unit 33 stores, for example, candidate information D6 as information used for processing executed by the processing unit 32 .
  • the candidate information D6 will be described later.
  • the processing unit 32 includes an acquisition unit 34, an identification unit 35, and a generation unit 36.
  • the acquisition unit 34 acquires the detection result of the sensor 10 that detects the subject 2 .
  • the acquisition unit 34 acquires the status information of the subject 2 from the terminal 20 associated with the subject 2 .
  • the state information is information obtained from information different from the detection result of the sensor 10 and includes information representing the state of the target 2 .
  • status information includes information (eg, at work, on break, or other) entered by subject 2's superior (evaluator 3).
  • the storage unit 33 stores candidate information D6.
  • The candidate information D6 includes information that associates the subject 2 with candidates for the terminal 20 used to input the status information of the subject 2.
  • the specifying unit 35 specifies the terminal 20 from candidates associated with the subject 2 in the candidate information stored in the storage unit 33 .
  • FIG. 3 is a diagram showing an example of association between sensors and terminals.
  • the subject 2 includes multiple subjects (2A, 2B).
  • the sensor 10 includes multiple sensors (10A, 10B).
  • the sensor 10A corresponds to the subject 2A and detects the subject 2A.
  • the sensor 10B corresponds to the subject 2B and detects the subject 2B.
  • evaluators 3 include multiple evaluators (3A, 3B).
  • evaluator 3A corresponds to each of the subject 2A and the subject 2B.
  • the evaluator 3A is, for example, the superior of the subject 2A and the subject 2B.
  • Evaluator 3B corresponds to subject 2B.
  • the evaluator 3B is, for example, the superior of the subject 2B.
  • the subject 2B may be an employee belonging to two departments, and the evaluators 3A and 3B may be managers of the respective departments to which the subject 2B belongs.
  • the terminal 20 includes multiple terminals (20A, 20B).
  • Terminal 20A is associated with evaluator 3A.
  • the terminal 20A receives the input of the condition information of the subject 2 from the evaluator 3A.
  • the evaluator 3A judges the condition of the subject 2A corresponding to the evaluator 3A, and inputs the condition information representing the judgment result to the terminal 20A.
  • the evaluator 3A judges the condition of the subject 2B corresponding to the evaluator 3A, and inputs the condition information indicating the judgment result to the terminal 20A.
  • Terminal 20B is associated with evaluator 3B.
  • the terminal 20B receives the input of the condition information of the subject 2 from the evaluator 3B.
  • the evaluator 3B determines the state of the subject 2B corresponding to the evaluator 3B, and inputs state information indicating the determination result to the terminal 20B.
  • In FIG. 3, reference D6 represents candidate information that associates candidates for the terminal 20 with the target 2.
  • the candidate information D6 is, for example, table data.
  • Candidate information D6 includes, for example, items of sensor ID, target ID, terminal ID, and evaluator ID.
  • the sensor ID includes information (eg, identification information) that identifies each sensor. Sensor IDs are assigned so as not to duplicate among a plurality of sensors.
  • the sensor ID of the sensor 10A is "S10A" and the sensor ID of the sensor 10B is "S10B”.
  • the target ID includes information (eg, identification information) specifying each target 2 . Target IDs are assigned so as not to duplicate among multiple targets 2 .
  • the terminal ID includes information identifying each terminal 20 . Terminal IDs are assigned so as not to duplicate among a plurality of terminals 20 .
  • the terminal ID of the terminal 20A is "T20A” and the terminal ID of the terminal 20B is "T20B”.
  • the evaluator ID includes information (eg, identification information) that identifies each evaluator 3 . Evaluator IDs are assigned so as not to overlap among multiple evaluators.
  • the evaluator ID of the evaluator 3A is "E03A" and the evaluator ID of the evaluator 3B is "E03B".
  • the values of each item arranged on the same line are associated with each other.
  • the sensor ID "S10A” and the target ID "U02A” are arranged on the same line and have a correspondence relationship with each other.
  • the sensor 10A whose sensor ID is "S10A” has a corresponding relationship with the target person 2A whose target ID is "U02A”.
  • The sensor 10A detects the target person 2A, with which it has a correspondence relationship.
  • the sensor 10A is installed in a place where the subject 2A is assumed to be in an office.
  • the detection result output by the sensor 10A is treated as the result of detecting the subject 2A.
  • the sensor ID "S10A” and the terminal ID “T20A” are arranged on the same line and have a corresponding relationship.
  • the sensor 10A whose sensor ID is “S10A” has a corresponding relationship with the terminal 20A whose terminal ID is "T20A”.
  • the detection result of the sensor 10A is associated with information output from the corresponding terminal 20A and used as teacher data or its original data.
  • the sensor ID "S10A” and the evaluator ID “E03A” are arranged on the same line and have a correspondence relationship with each other.
  • the sensor 10A whose sensor ID is "S10A” has a corresponding relationship with the evaluator 3A whose evaluator ID is "E03A”.
  • the detection result of the sensor 10A is associated with state information input by the evaluator 3A, which has a corresponding relationship, and is used as teacher data or its original data.
  • the target ID "U02A” and the terminal ID “T20A” are arranged on the same line and have a corresponding relationship.
  • the target person 2A whose target ID is “U02A” has a corresponding relationship with the terminal 20A whose terminal ID is "T20A”.
  • the status information of the subject 2A is output from the corresponding terminal 20A. At least part of the information output from the terminal 20A is treated as the subject's 2A status information.
  • the subject ID "U02A” and the evaluator ID "E03A” are arranged on the same line and have a correspondence relationship with each other.
  • the subject 2A whose subject ID is "U02A” has a corresponding relationship with the evaluator 3A whose evaluator ID is "E03A”.
  • the status information of the subject 2A is input by the evaluator 3A who has a corresponding relationship.
  • terminal ID "T20A” and the evaluator ID "E03A” are arranged on the same line and have a corresponding relationship.
  • Terminal 20A whose terminal ID is "T20A” is in correspondence with evaluator 3A whose evaluator ID is "E03A”.
  • the status information input to the terminal 20A is treated as input by the evaluator 3A.
  • The sensor 10A provides the information processing device 30 with a detection result including, for example, information including a value obtained by detection (arbitrarily referred to as a sensor value) or a result of processing the sensor value, time information indicating the timing at which the detection was performed (for example, a time stamp), and the sensor ID of the sensor itself.
  • The evaluator 3A judges the state of the subject 2A, who has a correspondence relationship with the evaluator 3A.
  • The evaluator 3A inputs information representing the state of the subject 2A to the terminal 20A, which has a correspondence relationship with the evaluator 3A.
  • The terminal 20A provides the information processing device 30 with state information including, for example, information including an input value or a result of processing the input value, time information indicating the timing of the input (for example, a time stamp), and the terminal ID of the terminal itself.
  • state information output by the terminal 20A may include one or both of state information input to the terminal 20A and state information obtained by processing information (eg, input values) input to the terminal 20A.
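As a concrete picture of the payloads described above, the following minimal sketch models a detection result (sensor value, time stamp, sensor ID) and a state information record (input value, time stamp, terminal ID) as Python data classes. The field names and example values are assumptions for illustration only; the embodiment does not prescribe a particular data format.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DetectionResult:
    """Payload the sensor 10A provides to the information processing device 30."""
    sensor_id: str        # e.g. "S10A"
    timestamp: datetime   # timing at which the detection was performed
    value: float          # sensor value (or a result of processing the sensor value)

@dataclass
class StateInfo:
    """Payload the terminal 20A provides to the information processing device 30."""
    terminal_id: str      # e.g. "T20A"
    timestamp: datetime   # timing of the evaluator's input
    state: str            # e.g. "at work", "resting", "other"

# Example records corresponding to FIG. 2 and FIG. 3
detection = DetectionResult("S10A", datetime(2021, 7, 1, 10, 0, 3), 72.0)
state = StateInfo("T20A", datetime(2021, 7, 1, 10, 0, 5), "at work")
```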
  • the sensor 10A may detect at least one of a target other than the target person 2A, a person other than the target, and an object other than the target.
  • the sensor 10A may include a camera installed in the space where the subject 2A works.
  • the sensor 10A or a device external to the sensor 10 may analyze the image captured by the camera and detect the subject 2A appearing in the image.
  • the sensor 10A or the above-mentioned external device may perform image analysis on an image showing a plurality of subjects 2 and identify the subject 2A appearing in the image.
  • the target ID does not have to be associated with the sensor ID.
  • the target ID may be associated with feature information representing the external features of the target person 2A.
  • the sensor 10A or the external device may identify the target by comparing the detection result of the sensor 10A and the feature information, and associate the detection result of the specified target with the target ID associated with the feature information.
  • the sensor 10A may detect a plurality of targets 2 in chronological order according to a predetermined schedule.
  • the target 2 detected by the sensor 10A is specified using the timing at which the detection was performed and the above schedule.
  • the sensor 10A may detect a plurality of targets 2 by detecting the target 2 in a predetermined direction and switching the predetermined direction.
  • the target 2 detected by the sensor 10A is specified using information that associates the direction of detection with the target 2 in advance, and the direction of detection that has been performed.
  • the terminal 20A may output the information designating the subject 2 in the form of an image or voice when accepting the input of the status information.
  • The terminal 20A may display information specifying the target person 2 (for example, "Mr. A"), as shown in FIG. 2.
  • the terminal 20A may receive input of information specifying the subject 2 corresponding to the input state information.
  • the evaluator 3A may input information specifying the subject 2B (eg, name of the subject 2B, subject ID) and state information about the subject 2B into the terminal 20A.
  • the terminal 20A may provide the information processing device 30 with information including information specifying the subject 2B and state information.
  • the evaluator 3A may be a different person from the subject 2A, or may be the same person as the subject 2A.
  • the subject 2A may be the same person as the evaluator 3A
  • the terminal 20A may be a terminal possessed by the subject 2A.
  • the subject 2A may input his/her status information to the terminal 20A.
  • the evaluator ID may not be associated with the subject ID.
  • the state information output from the terminal 20A may be associated with the detection result of the sensor 10A associated with the terminal 20A and used as teacher data.
  • the specifying unit 35 specifies the correspondence relationship between the information output from the sensor 10 and the information output from the terminal 20.
  • the specifying unit 35 specifies the state information acquired from the terminal 20 corresponding to the detection result acquired from the sensor 10 by referring to the candidate information D6.
  • the sensor ID included in the detection result obtained from the sensor 10 is "S10A.”
  • the identifying unit 35 identifies the sensor ID (here, “S10A”) from the information acquired from the sensor 10 .
  • the identifying unit 35 identifies the terminal ID (here, "T20A”) corresponding to "S10A" in the candidate information D6.
  • the identifying unit 35 identifies the state information acquired from the terminal 20A whose terminal ID is "T20A" as state information corresponding to the detection result of the sensor 10A whose sensor ID is "S10A".
  • the identifying unit 35 may identify the detection result obtained from the sensor 10 corresponding to the state information obtained from the terminal 20 .
  • the evaluator 3 inputs information specifying the subject 2 (eg, name, subject ID) when inputting the status information of the subject 2 .
  • For example, suppose the terminal 20 is the terminal 20A in FIG. 3; when outputting the state information, the terminal 20A outputs the terminal ID of its own device (here, "T20A") and the information specifying the target 2 input by the evaluator 3 (for example, the target ID "U02B").
  • Based on the information acquired from the terminal 20A, the identifying unit 35 identifies the sensor ID (here, "S10B") associated with "T20A" and "U02B" in the candidate information D6.
  • The identifying unit 35 identifies the detection result of the sensor 10B, whose sensor ID is "S10B", as the detection result corresponding to the state information about "U02B" acquired from the terminal 20A whose terminal ID is "T20A".
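The lookups performed by the identifying unit 35 can be sketched as simple searches over the candidate information D6 held as table-like records, covering both directions described above: from a sensor ID to the corresponding terminal ID, and from a terminal ID plus target ID back to the corresponding sensor ID. The record layout and function names are illustrative assumptions.

```python
from typing import Optional

# Candidate information D6 as table data (one row per correspondence), following FIG. 3.
CANDIDATE_INFO_D6 = [
    {"sensor_id": "S10A", "target_id": "U02A", "terminal_id": "T20A", "evaluator_id": "E03A"},
    {"sensor_id": "S10B", "target_id": "U02B", "terminal_id": "T20A", "evaluator_id": "E03A"},
    {"sensor_id": "S10B", "target_id": "U02B", "terminal_id": "T20B", "evaluator_id": "E03B"},
]

def terminal_for_sensor(sensor_id: str) -> Optional[str]:
    """Detection result -> state information: terminal ID associated with a sensor ID."""
    for row in CANDIDATE_INFO_D6:
        if row["sensor_id"] == sensor_id:
            return row["terminal_id"]
    return None

def sensor_for_terminal_and_target(terminal_id: str, target_id: str) -> Optional[str]:
    """State information -> detection result: sensor ID associated with a terminal ID and target ID."""
    for row in CANDIDATE_INFO_D6:
        if row["terminal_id"] == terminal_id and row["target_id"] == target_id:
            return row["sensor_id"]
    return None

print(terminal_for_sensor("S10A"))                     # "T20A"
print(sensor_for_terminal_and_target("T20A", "U02B"))  # "S10B"
```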
  • the information processing device 30 may be the above-described external device and identify the target 2 from the detection result output from the sensor 10 .
  • the sensor 10A may include a camera that captures the subject 2A, and the information processing device 30 (eg, the identification unit 35) may identify the subject 2A by analyzing the image captured by the camera of the sensor 10A.
  • the identifying unit 35 identifies the terminal ID (here, “T20A”) associated in the candidate information D6 with the target ID (here, “U02A”) of the subject 2A identified from the detection result of the sensor 10A.
  • the identifying unit 35 may associate the detection result of the sensor 10A with the terminal ID (here, "T20A") identified using the detection result of the sensor 10A.
  • the sensor ID may not be associated with the target ID in the candidate information D6, and may not be included in the candidate information D6.
  • the generation unit 36 in FIG. 1 generates data by associating the detection result of the sensor 10 with the state information.
  • the detection result may be a sensor value of the sensor 10 or a value obtained by processing the sensor value of the sensor 10 (for example, a value derived as a detection result).
  • the generation unit 36 uses the detection result acquired by the acquisition unit 34 to derive the sensor detection result at the time when the state information was obtained, and associates the derived sensor detection result with the state information to generate data.
  • the state information may be information output by the terminal 20, or may be information obtained by processing this information (for example, information derived as state information).
  • the generating unit 36 generates data by associating, for example, the detection result derived from the sensor value of the sensor 10 with the state information specified by the specifying unit 35 as the state information corresponding to this detection result.
  • FIG. 4 is a diagram showing an example of the process of deriving the sensor detection results corresponding to the state information.
  • the generation unit 36 derives the timing at which the state information is acquired, and derives (eg, estimates) the sensor value at this timing.
  • the generating unit 36 uses the time indicated by the time stamp output by the terminal 20 together with the state information as the timing at which the state information was acquired.
  • the generation unit 36 derives the timing at which the state information was acquired by the process of specifying the value of the time stamp corresponding to the state information.
  • the sensor 10 performs detection operations at, for example, a predetermined sampling frequency.
  • the sensor 10 outputs a sensor value, for example, at predetermined time intervals.
  • symbol t is the time derived by the generator 36 as the time when the state information was obtained.
  • the time t is information obtained from a time stamp output by the terminal 20, for example.
  • the generation unit 36 derives the sensor value as the detection result corresponding to the state information by the process of specifying the sensor value closest to the time t from the time-series data of the sensor value.
  • the generation unit 36 may derive (eg, calculate) the detection result by statistically processing a plurality of sensor values in a predetermined period T including time t.
  • the generation unit 36 may calculate an average value of multiple sensor values in a predetermined period T as the detection result.
  • This average value may be an arithmetic mean value or a geometric mean value of the sensor values in the predetermined period T.
  • The average value may be a weighted average using weights according to the time between the time when each sensor value was obtained and the time t when the state information was obtained.
  • the generation unit 36 may calculate the detection result by approximation or interpolation using a plurality of sensor values in the predetermined period T.
  • the generation unit 36 may generate teacher data that associates one detection result (eg, sensor value) obtained from the sensor 10 with one piece of state information.
  • The generation unit 36 may generate teacher data that associates one piece of information that is a set of a plurality of detection results obtained from the sensor 10 (eg, a waveform of sensor values with respect to time) with one piece of state information.
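A small sketch of the derivation described above, under the assumption that the sensor values are held as a time series of (time stamp, value) pairs: one helper picks the sensor value closest to the time t at which the state information was obtained, and another computes the arithmetic mean over a predetermined period T that includes t.

```python
from datetime import datetime, timedelta
from typing import List, Tuple

# Time-series sensor data: (time of detection, sensor value)
Sample = Tuple[datetime, float]

def nearest_sensor_value(samples: List[Sample], t: datetime) -> float:
    """Detection result corresponding to state information: the sensor value closest to time t."""
    return min(samples, key=lambda s: abs((s[0] - t).total_seconds()))[1]

def averaged_sensor_value(samples: List[Sample], t: datetime, period: timedelta) -> float:
    """Alternative: arithmetic mean of the sensor values in a predetermined period T that includes t."""
    window = [v for (ts, v) in samples if abs((ts - t).total_seconds()) <= period.total_seconds() / 2]
    return sum(window) / len(window)

samples = [(datetime(2021, 7, 1, 10, 0, s), 70.0 + s) for s in range(0, 60, 5)]
t = datetime(2021, 7, 1, 10, 0, 23)  # time the state information was obtained (from the time stamp)
print(nearest_sensor_value(samples, t))                       # sample nearest to t
print(averaged_sensor_value(samples, t, timedelta(seconds=20)))
```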
  • the terminal 20 does not have to output the time stamp corresponding to the timing at which the state information was acquired.
  • the generator 36 may derive the time at which the information processing device 30 acquires the state information as the timing at which the state information was acquired.
  • the generation unit 36 may derive the time when the communication unit 31 received the state information as the timing at which the state information was acquired.
  • The information processing device 30 (eg, the processing unit 32) may transmit to the terminal 20 a request for the state information (referred to as a state information request).
  • the generation unit 36 may derive the timing at which the state information was acquired using the time when the communication unit 31 transmitted the state information request to the terminal 20 .
  • The generation unit 36 may derive the time at which the state information request was transmitted as the timing at which the state information was acquired. As the timing at which the state information was acquired, the generation unit 36 may derive a time between a first time when the communication unit 31 transmitted the state information request and a second time when the communication unit 31 received the state information (for example, the average of the first time and the second time). The generation unit 36 may derive the timing at which the state information was acquired using one or both of the first time and the second time and an estimated value of the time required for communication.
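Where no time stamp is supplied by the terminal 20, the timing can be estimated from the request and response times as described above. A minimal sketch, assuming the communication unit records the first time (request transmitted) and the second time (state information received):

```python
from datetime import datetime, timedelta

def acquisition_timing(t_request: datetime, t_receive: datetime,
                       comm_delay_estimate: timedelta = timedelta(0)) -> datetime:
    """Estimate the timing at which the state information was acquired as the midpoint of the
    first time (request transmitted) and the second time (response received), optionally shifted
    by an estimated communication delay."""
    midpoint = t_request + (t_receive - t_request) / 2
    return midpoint - comm_delay_estimate

t = acquisition_timing(datetime(2021, 7, 1, 10, 0, 0), datetime(2021, 7, 1, 10, 0, 4))
print(t)  # 2021-07-01 10:00:02
```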
  • the sensor 10 may detect the target 2 periodically, or may detect the target 2 irregularly.
  • the sensor 10 may detect the target 2 upon receiving information requesting detection of the target 2 (hereinafter referred to as a detection request).
  • the sensor 10 may output a sensor value or information obtained by processing the sensor value as a response to the detection request.
  • The device that outputs the detection request may be the information processing device 30, a device in the information processing system 1 different from the information processing device 30 (eg, the information processing device 50 described later), or a device external to the information processing system 1.
  • the information processing device 30 provides the data generated by the generation unit 36 to the information processing device that executes machine learning.
  • the information processing device 40 includes a learning unit 41 that executes machine learning, and the information processing device 30 provides the information processing device 40 with teacher data for machine learning or data that is the source of the teacher data. .
  • FIG. 5 is a diagram showing an example of processing for executing machine learning.
  • model M is an inference model generated by deep learning.
  • the model M receives the detection result of the sensor 10 and outputs the estimation result of the state of the object 2 .
  • the model M includes an input layer M1, an intermediate layer M2 and an output layer M3.
  • the input layer M1 is a layer to which original data for inference is input.
  • the output layer M3 is a layer that outputs data indicating an inference result.
  • the intermediate layer M2 is a layer arranged between the input layer M1 and the output layer M3.
  • the number of intermediate layers M2 is arbitrary, and may be one layer or multiple layers.
  • the learning unit 41 inputs the detection result of the sensor 10 in the teacher data to the input layer M1.
  • a value input to the input layer M1 propagates to the output layer M3 via the intermediate layer M2.
  • the model M includes parameters (eg, coupling coefficients, biases) for propagating values from the input layer M1 to the output layer M3.
  • the learning unit 41 optimizes the parameters so that the output data output from the output layer M3 approaches the data representing the state information when the input data in the teacher data is input to the input layer M1.
  • the machine learning performed by the learning unit 41 is arbitrary and does not have to be deep learning.
  • the machine learning method may be a neural network that does not include the intermediate layer M2, or may be a method other than the neural network.
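A minimal training sketch consistent with the description of the model M: an input layer, one intermediate layer, and an output layer whose parameters (coupling coefficients and biases) are optimized so that the output approaches the state information when the detection result is input. scikit-learn's MLPClassifier is used here purely for illustration; the embodiment does not prescribe a library, and the feature layout and values are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Teacher data generated by the generation unit 36:
# input = detection results of the sensor 10 (here, 3 illustrative features per sample),
# output = state information ("at work" / "resting").
X = np.array([[72.0, 36.5, 0.8],
              [75.0, 36.6, 0.9],
              [60.0, 36.2, 0.1],
              [58.0, 36.1, 0.2]])
y = np.array(["at work", "at work", "resting", "resting"])

# Model M: input layer -> one intermediate layer (8 units) -> output layer.
model_m = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model_m.fit(X, y)  # optimizes coupling coefficients and biases on the teacher data

# The learning unit 41 would then provide the trained model to the estimation unit 51.
print(model_m.predict([[74.0, 36.5, 0.85]]))  # e.g. ['at work'] (estimated state)
```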
  • the information processing device 30 may provide the learning unit 41 with teacher data obtained from the detection results and state information of a plurality of targets 2 as teacher data for one inference model.
  • the obtained inference model can be used, for example, as an inference model commonly used for a plurality of targets 2 (arbitrarily referred to as a shared model).
  • the information processing device 30 may provide the learning unit 41 with teacher data obtained from the detection results and state information of a specific target 2 (for example, the target 2A) as teacher data for one inference model.
  • the obtained inference model can be used, for example, as an inference model (arbitrarily referred to as an individual model) for estimating the state of a specific subject 2 (eg, subject 2A).
  • the number of targets 2 included in the specific target 2 may be one, or two or more.
  • a plurality of targets 2 may be a predetermined set, and a particular target 2 may be a part (eg, a subset) of the predetermined set.
  • the predetermined set may be a set of employees belonging to a specific company
  • the subset may be a set of employees belonging to a predetermined department within the specific company.
  • The learning unit 41 may generate a shared model when the amount of teacher data accumulated for the specific target 2 is less than a predetermined amount, and may generate an individual model when the amount of teacher data accumulated for the specific target 2 is equal to or greater than the predetermined amount.
  • The learning unit 41 may generate a shared model using the detection results obtained by the sensor 10 in a first period, and may generate an individual model using the detection results obtained by the sensor 10 in a second period longer than the first period.
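The choice between a shared model and an individual model can be pictured as a simple threshold on the amount of teacher data accumulated per target. A sketch under the assumption that teacher data is counted per target ID; the threshold value and function name are illustrative.

```python
PREDETERMINED_AMOUNT = 500  # assumed threshold on accumulated teacher-data records per target

def select_model_kind(teacher_data_counts: dict, target_id: str) -> str:
    """Return which inference model the learning unit 41 would generate/use for a target."""
    if teacher_data_counts.get(target_id, 0) >= PREDETERMINED_AMOUNT:
        return "individual model"   # enough data accumulated for this specific target 2
    return "shared model"           # common model trained on teacher data of multiple targets 2

counts = {"U02A": 820, "U02B": 40}
print(select_model_kind(counts, "U02A"))  # individual model
print(select_model_kind(counts, "U02B"))  # shared model
```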
  • the learning unit 41 provides the generated inference model to the information processing device that estimates the state of the target 2 using the detection result of the sensor 10 .
  • the information processing device 40 provides the inference model generated by the learning unit 41 to the information processing device 50 .
  • the information processing device 50 includes a processing unit realized by a general-purpose processor or the like.
  • the processing unit of the information processing device 50 configures the estimating unit 51 by executing the computation defined by the inference model.
  • the estimating unit 51 is, for example, an AI in which an inference model generated by machine learning by the learning unit 41 is implemented.
  • the information processing device 30 provides the information processing device 50 with the detection result obtained by the obtaining unit 34 from the sensor 10 .
  • the estimation unit 51 of the information processing device 50 estimates the state of the target 2 by performing an operation defined by the inference model on the detection result acquired by the acquisition unit 34 .
  • the information processing device 50 provides, for example, the application 60 with the state of the target 2 estimated by the estimation unit 51 (arbitrarily referred to as an estimation result of the estimation unit 51).
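Putting the two preceding steps together, the estimation path can be sketched as the estimation unit 51 applying the trained inference model to a detection result forwarded by the acquisition unit 34 and handing the estimated state to the application 60. The classifier interface and the callback are assumptions carried over from the earlier training sketch.

```python
import numpy as np

def estimate_and_forward(inference_model, detection_result, application_callback):
    """Estimation unit 51: run the operation defined by the inference model on the detection
    result acquired by the acquisition unit 34 and provide the estimate to the application 60."""
    features = np.array([detection_result])            # one detection result as model input
    estimated_state = inference_model.predict(features)[0]
    application_callback(estimated_state)               # e.g. append to the attendance log
    return estimated_state

# Usage (assuming model_m was trained as in the earlier sketch):
# estimate_and_forward(model_m, [74.0, 36.5, 0.85], lambda s: print("log:", s))
```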
  • the application 60 is an application that processes the estimation result of the estimation unit 51 .
  • the application 60 uses the estimation result of the estimation unit 51, for example, to execute a process of creating a log indicating the work status of the subject 2.
  • the processing executed by the application 60 is arbitrary, and may include processing for analyzing the estimation result of the estimation unit 51 .
  • The application 60 may estimate the time during which the subject 2 was working intensively, or may calculate an index indicating the work status of the subject 2 (eg, efficiency, degree of concentration, amount of work).
  • the application 60 may perform processing for outputting (eg, displaying) one or both of the estimation result of the estimation unit 51 and the processing result obtained by processing the estimation result of the estimation unit 51 .
  • the application 60 may be installed in at least one of the information processing devices included in the information processing system 1 .
  • the application 60 may be installed in the information processing device 30 .
  • the application 60 may be installed in at least one information processing device external to the information processing system 1 .
  • the number of applications to which the estimation result of the estimation unit 51 is provided may be one or plural.
  • the application 60 may include a web application, an application on the cloud, or an on-premises application.
  • the uses of the estimation results may be the same or different for the multiple applications.
  • a first application among the plurality of applications may manage the work status of the subject 2 and a second application among the plurality of applications may manage the health status of the subject 2 .
  • FIG. 6 is a diagram showing an information processing method according to the first embodiment.
  • FIG. 1 to FIG. 5 are referred to appropriately for each part of the information processing system 1 .
  • the sensor 10 shown in FIG. 1 detects the target 2.
  • the sensor 10 transmits the detection result of detecting the target 2 .
  • the acquisition unit 34 acquires the detection result from the sensor 10 .
  • the acquisition unit 34 controls the communication unit 31 to receive the detection result transmitted by the sensor 10 .
  • the evaluator 3 judges the condition of the subject 2 and inputs information representing the judgment result into the terminal 20 as condition information.
  • the acquisition unit 34 acquires state information from the terminal 20 .
  • the acquisition unit 34 controls the communication unit 31 to receive the detection result transmitted by the terminal 20 .
  • In step S3, the generation unit 36 generates data used for machine learning.
  • Step S3 includes, for example, steps S4 to S6.
  • In step S4, the generation unit 36 derives the timing at which the state information was acquired.
  • In step S5, the generation unit 36 derives the detection result at the timing derived in step S4.
  • In step S6, the generation unit 36 associates the state information acquired in step S2 with the detection result derived in step S5. For example, the generation unit 36 sets the detection result as the input teacher data for the model M (see FIG. 5) and the state information as the output teacher data for the model M.
  • In step S7, the information processing device 30 provides the data to the learning unit 41.
  • the communication unit 31 of the information processing device 30 transmits data including teacher data generated by the generation unit 36 to the information processing device 40 including the learning unit 41 .
  • In step S8, the learning unit 41 uses the data provided in step S7 to generate a model through machine learning.
  • In step S9, the learning unit 41 provides the model generated in step S8 to the estimation unit 51.
  • The information processing device 40 including the learning unit 41 transmits data representing the inference model generated by the learning unit 41 to the information processing device 50 including the estimation unit 51.
  • In step S10, the estimation unit 51 processes the detection result of the sensor 10 using the model provided in step S9 to estimate the state of the target.
  • The communication unit 31 of the information processing device 30 transmits the sensor detection result to the information processing device 50, and the estimation unit 51 estimates the state of the target using the sensor detection result received by the information processing device 50.
  • The estimation unit 51 inputs, for example, the detection result to the input layer M1 of the model M (see FIG. 5) in which the result of machine learning is reflected, and derives the data output from the output layer M3 as the estimation result.
  • In step S11, the estimation unit 51 provides the application 60 with information indicating the state of the target estimated in step S10.
  • the information processing device 50 transmits data indicating the estimation result of the estimation unit 51 to the information processing device on which the application 60 is installed.
  • The process of step S2 may be executed before at least part of the process of step S1, or may be executed in parallel with at least part of the process of step S1.
  • In step S3, the generation unit 36 may derive the timing at which the detection result of the sensor 10 was acquired, and derive the state information at the derived timing.
  • the information processing system 1 may repeatedly execute the processing from step S1 to step S3 (referred to as teacher generation processing as appropriate) and accumulate the teacher data generated in step S3.
  • the information processing system 1 may execute the processes after step S7 when the data amount of the accumulated teacher data reaches or exceeds a predetermined amount.
  • step S8 may be repeatedly executed.
  • a first teacher generation process may generate a first model
  • a second teacher generation process executed after the first teacher generation process may generate a second model.
  • the second model may be generated using teacher data having a larger amount of data than the first model.
  • the first model may be generated using first teacher data
  • the second model may be generated using second teacher data accumulated after the generation of the first teacher data.
  • a second model may be generated using the first teacher data and the second teacher data.
  • The second teacher data may be generated by adding (for example, feeding back) the error of the estimation result obtained using the first model.
  • The second teacher data may differ from the first teacher data in the set of targets corresponding to the original data from which it was generated.
  • For example, the first teacher data may be teacher data based on the detection results and state information about a plurality of targets 2, and the second teacher data may be teacher data based only on the detection results and state information about a part of the plurality of targets 2 (eg, the target person 2A).
  • the second model may differ from the first model in terms of the state estimation target.
  • the first model is used for estimating the status of employees belonging to a given company
  • the second model is used for estimating the status of employees belonging to a given department within a given company.
  • the second model may not be generated.
  • the process of step S10 and the process of step S11 may be repeatedly performed using the first model.
  • the information processing system 1 in the above-described embodiment associates the detection result of the sensor 10 with information different from the detection result of the sensor 10 to generate an inference model.
  • the state of the target 2 can be estimated by the estimation unit 51 .
  • Such an information processing system 1 contributes to grasping the state of the subject 2 even if, for example, the evaluation by the evaluator 3 is simplified or omitted after the inference model is generated.
  • the information processing system 1 can facilitate, for example, associating different information about the target 2, and contributes to facilitating multifaceted analysis of the target 2.
  • the information processing system 1 can, for example, express the state of the target 2 as data according to a certain rule based on an inference model, and contributes to objective evaluation of the state of the target 2 .
  • the first application example has been described in which the subject 2 is an employee working in an office and the evaluator 3 is the employee's boss.
  • Application examples of the information processing system 1 will be described below, but the scope of application of the information processing system 1 is not limited to the examples described above and the examples described later.
  • subject 2 is a worker at a construction site
  • evaluator 3 is a site supervisor.
  • The sensor 10 detects the temperature of the target 2 (eg, body temperature), the humidity in the vicinity of the target 2 (eg, sweating state), the movement of the target 2 (eg, hand movement, posture), and the vital activity of the target 2 (eg, pulse, blood pressure).
  • the evaluator 3 evaluates the health condition of the subject 2 based on, for example, the complexion and the like, and inputs the evaluation result into the terminal 20 as condition information.
  • the information processing system 1 uses the detection result of the sensor 10 and state information to generate an inference model for estimating the health condition of the worker, for example.
  • the estimation unit 51 acquires the detection result of the sensor 10 in real time, and estimates the worker's health condition (eg, physical condition, degree of fatigue) in real time using an inference model.
  • the application 60 may use the estimation result of the estimation unit 51 to determine whether the worker's health condition is suitable for continuing work. For example, the application 60 estimates the health condition of the worker after a predetermined period of time has elapsed using the history of estimation results regarding the health condition of the worker. For example, the application 60 estimates the health condition of the worker by calculating an index indicating the health condition of the worker after a predetermined period of time has passed. The application 60 compares the health condition indicator with a threshold value to determine whether the worker's health condition is suitable for continuing work. For example, the worse the health condition, the lower the index, and the application 60 determines that the health condition of the worker is not suitable for continuing work when the calculated index is less than the threshold.
  • the application 60 may output the determination result.
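A sketch of the determination described above, assuming the application 60 keeps a short history of estimated health-condition indices, projects the index after a predetermined period by extrapolating the recent trend, and compares the projection with a threshold. The index scale, projection method, and threshold are assumptions.

```python
from typing import List

THRESHOLD = 0.5  # assumed: projected indices below this mean "not suitable for continuing work"

def projected_health_index(history: List[float], steps_ahead: int = 1) -> float:
    """Roughly project the health index after a predetermined period by extrapolating the recent trend."""
    if len(history) < 2:
        return history[-1]
    trend = history[-1] - history[-2]          # change per estimation interval
    return history[-1] + trend * steps_ahead

def suitable_for_continuing_work(history: List[float]) -> bool:
    return projected_health_index(history) >= THRESHOLD

print(suitable_for_continuing_work([0.9, 0.8, 0.7]))  # True  (projected 0.6)
print(suitable_for_continuing_work([0.8, 0.6, 0.5]))  # False (projected 0.4)
```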
  • For example, the application 60 is provided in the information processing device 30, and the information processing device 30 may provide a warning about the worker's health condition to the terminal 20 associated with the target 2 who is the worker (eg, the site supervisor's terminal) or to another device.
  • the site supervisor may make the worker stop the work or take a rest, referring to the above warning.
  • the information processing system 1 can be used for health management of workers.
  • the position of the subject 2 and the relationship between the subject 2 and the evaluator 3 are not limited to the above examples.
  • the subject 2 may be a care recipient and the evaluator 3 may be the subject 2's caregiver.
  • the subject 2 may be a protected person and the evaluator 3 may be the subject 2's guardian.
  • the subject 2 may be a patient and the evaluator 3 may be a medical practitioner in charge of the subject 2 .
  • the target 2 may be an animal other than humans.
  • the subject 2 is a domesticated animal and the evaluator 3 is the keeper of this animal.
  • the animal may be an animal raised in a zoo or the like, or may be a pet raised by an individual.
  • the sensor 10 detects, for example, information about vital activity of the animal, which is the subject 2, and the evaluator 3 inputs the health condition of the animal, which is the subject 2, into the terminal 20 as condition information.
  • the information processing system 1 generates an inference model by the same processing as in the second application example, and uses this inference model to estimate the state of health of the animal. Such an information processing system 1 can be used for health management of the subject 2, as in the second application example.
  • the information processing system 1 may estimate the growth state as the state of the animal that is the target 2 .
  • the estimation result of the growth state may be used, for example, as a reference for adjusting breeding conditions, or as a reference for deciding shipping of livestock products (eg, shipping amount, shipping time).
  • Object 2 may be an organism other than an animal.
  • In this case as well, the information processing system 1 contributes to managing (eg, grasping) the state of the target 2 (eg, health condition, growth), as in the case where the target 2 is an animal described above.
  • the target 2 may be an inanimate object (eg, machine, article).
  • the target 2 is a machine
  • the evaluator 3 is a manager (eg, operator, maintenance person, inspection person) of the target 2 which is the machine.
  • the sensor 10 detects, for example, information (eg, temperature, vibration) indicating the operating state of the machine that is the object 2 .
  • the evaluator 3 inspects the object 2, which is, for example, a machine, and inputs information indicating the operating state to the terminal 20 as state information.
  • the operating state may include information as to whether the operation is normal, information indicating the degree of certainty that a failure has occurred, and information indicating whether inspection or maintenance is necessary.
  • the information processing system 1 uses the detection result of the sensor 10 and state information to generate an inference model for estimating the operating state of the machine, for example.
  • the estimating unit 51 for example, acquires the detection result of the sensor 10 and estimates the operating state of the machine using an inference model.
  • the application 60 uses the estimation result of the estimation unit 51 to output (eg, notify) information about the machine when the estimated operating state satisfies a predetermined condition.
  • the sensor 10 detects vibrations of the target 2, which is a machine, and the estimation unit 51 estimates the probability that the target 2 is out of order (arbitrarily referred to as failure probability).
  • the application 60 is provided in the information processing device 30 and outputs a warning prompting inspection of the target 2 when the failure probability estimated by the estimation unit 51 is equal to or greater than a predetermined value.
  • the information processing device 30 provides the warning output by the application 60 to the terminal 20 associated with the target 2 (for example, the terminal of the person in charge of inspection).
  • FIG. 7 is a diagram showing an information processing system according to the second embodiment.
  • the same reference numerals are given to the same configurations as those of the above-described embodiment, and the description thereof is omitted or simplified.
  • the information processing device 30 includes a control section 37 .
  • the control unit 37 requests the predetermined terminal specified by the specifying unit 35 to provide the state information.
  • the acquisition unit 34 acquires state information from the terminal 20 as a response to the request.
  • the control unit 37 requests the terminal 20 to provide state information when the detection result of the sensor 10 satisfies a predetermined condition.
  • the control unit 37 determines whether the detection result obtained from the sensor 10 by the obtaining unit 34 satisfies a predetermined condition.
  • the sensor 10 is a wearable device (eg, smart watch) worn by the subject 2 and detects information indicating the vital activity of the subject 2 .
  • the sensor 10 is worn on the hand of the subject 2 and detects acceleration.
  • the detection results of the sensor 10 show different values or waveforms, for example, when the subject 2 is working (for example, operating a computer) and when the subject 2 is resting.
  • the sensor 10 detects at least one of the subject's 2 heart rate, blood pressure, body temperature, and perspiration.
  • the detection result of the sensor 10 shows different values or waveforms depending on, for example, the subject's 2 stress, tension, and concentration level.
  • the control unit 37 performs frequency analysis on the waveform of the sensor value at predetermined time intervals to calculate the characteristic frequency of the waveform. For example, when the difference between the characteristic frequency in the first period and the characteristic frequency in the second period following the first period exceeds a threshold, the control unit 37 determines that the predetermined condition is satisfied.
  • control unit 37 may perform the above determination by comparing one value (eg, sensor value, moving average value of sensor values with respect to time) indicating the detection result of the sensor 10 with a threshold value.
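  • A minimal sketch of this determination is given below (an assumed implementation using NumPy; the sampling rate and threshold are illustrative values, not taken from this description).

```python
# Compare the characteristic (dominant) frequency of the sensor waveform in two
# consecutive periods; the predetermined condition is treated as satisfied when
# the difference exceeds a threshold.
import numpy as np

def characteristic_frequency(samples: np.ndarray, sampling_rate: float) -> float:
    """Return the frequency with the largest spectral magnitude (DC component excluded)."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sampling_rate)
    return float(freqs[1:][np.argmax(spectrum[1:])])

def condition_satisfied(first_period: np.ndarray, second_period: np.ndarray,
                        sampling_rate: float, threshold_hz: float) -> bool:
    """True when the characteristic frequency changes by more than threshold_hz."""
    f1 = characteristic_frequency(first_period, sampling_rate)
    f2 = characteristic_frequency(second_period, sampling_rate)
    return abs(f2 - f1) > threshold_hz

# Example: a 1 Hz waveform followed by a 5 Hz waveform, sampled at 100 Hz.
t = np.arange(0, 1, 0.01)
print(condition_satisfied(np.sin(2 * np.pi * 1 * t), np.sin(2 * np.pi * 5 * t),
                          sampling_rate=100.0, threshold_hz=2.0))  # -> True
```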
  • the predetermined condition may be set to the same condition for a plurality of targets 2 or may be set for each target 2 .
  • the predetermined conditions for at least one target 2 among the plurality of targets 2 may differ from the predetermined conditions for other targets.
  • a plurality of targets 2 are divided into a plurality of groups, and the predetermined condition may be set for each group. For example, a plurality of target persons 2 may be divided into a plurality of groups (eg, departments), and predetermined conditions may be set according to the work content of each group.
  • the control unit 37 monitors the detection result of the sensor 10, for example.
  • when the control unit 37 determines that the detection result of the sensor 10 satisfies a predetermined condition, it generates a state information request.
  • the state information request is information (eg, command) that causes the terminal 20 to provide (eg, transmit) state information.
  • the information processing device 30 designates the target 2 and requests provision of state information about the target 2 .
  • the status information request includes information identifying the target 2 for which status information is requested (eg, the target ID shown in FIG. 3).
  • the specifying unit 35 refers to the candidate information D6 stored in the storage unit 33, and specifies the target ID corresponding to the sensor ID of the sensor 10 that provided the detection result used in the determination.
  • the control unit 37 generates a state information request including, for example, the target ID specified by the specifying unit 35 and a message requesting state information.
  • the information processing device 30 transmits a state information request to the terminal 20 to which the state information is requested.
  • the identifying unit 35 identifies the terminal 20 corresponding to the sensor 10 that provided the detection result used in the above determination.
  • the specifying unit 35 refers to the candidate information D6 stored in the storage unit 33, and specifies the terminal ID corresponding to the sensor ID (see FIG. 3) of the sensor 10 that provided the detection result used in the determination.
  • the control unit 37 causes the communication unit 31 to transmit the state information request to the terminal 20 specified by the specifying unit 35 .
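  • The lookup and request construction might look like the following sketch (the record layout and the message text are assumptions; only the IDs follow the example of FIG. 3).

```python
# Resolve the sensor that triggered the determination into the associated target
# ID and terminal ID using the candidate information D6, then build the state
# information request to be transmitted by the communication unit 31.
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class CandidateRecord:           # one row of the candidate information D6
    sensor_id: str
    target_id: str
    terminal_id: str
    evaluator_id: str

CANDIDATES = [
    CandidateRecord("S10A", "U02A", "T20A", "E01"),
    CandidateRecord("S10B", "U02B", "T20B", "E02"),
]

def build_state_info_request(sensor_id: str) -> Tuple[str, Dict[str, str]]:
    """Return (terminal_id, request) for the sensor that provided the detection result."""
    record = next(r for r in CANDIDATES if r.sensor_id == sensor_id)
    request = {
        "type": "state_information_request",
        "target_id": record.target_id,                       # identifies the target 2
        "message": "Please input the current state of the target.",
    }
    return record.terminal_id, request

terminal_id, request = build_state_info_request("S10A")
# the communication unit 31 would transmit `request` to the terminal `terminal_id` here
```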
  • the communication unit 31 causes the storage unit 33 to store information (eg, time stamp) indicating the timing of transmitting the state information request.
  • when the terminal 20 receives the state information request, it outputs a notification (eg, image, voice) prompting the user to enter the state information. For example, the terminal 20 displays an image for accepting input of state information on the display unit 21, as shown in FIG. 2. This image includes, for example, information identifying the subject 2 (eg, the name of the subject 2). State information is input to the terminal 20 by the evaluator 3, and the terminal 20 provides the input state information to the information processing device 30.
  • the information processing device 30 receives the state information transmitted from the terminal 20 .
  • the generation unit 36 of the information processing device 30 derives (eg, estimates) the timing at which the state information was acquired by the terminal 20, using one or both of the first timing at which the state information request was transmitted and the second timing at which the state information was transmitted.
  • the generation unit 36, for example, calculates a time between the time indicating the first timing and the time indicating the second timing as the timing at which the terminal 20 acquired the state information.
  • the generation unit 36 derives the detection result of the sensor 10 at the derived timing and generates teacher data.
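  • A sketch of this derivation is shown below; treating the midpoint between the two timings as the labelling time and pairing it with the detection at or just after that time are assumptions made for illustration.

```python
# Derive the timing at which the terminal acquired the state information from the
# request timestamp (first timing) and the response timestamp (second timing),
# then pair the sensor detection at or just after that timing with the state label.
from bisect import bisect_left
from typing import List, Tuple

def derive_label_time(request_ts: float, response_ts: float) -> float:
    """A time between the first and second timings (here: the midpoint)."""
    return (request_ts + response_ts) / 2.0

def make_teacher_sample(detections: List[Tuple[float, float]],  # (timestamp, sensor value)
                        request_ts: float, response_ts: float,
                        state_label: str) -> Tuple[float, str]:
    """Return (sensor value at the derived timing, state label) as one teacher sample."""
    label_time = derive_label_time(request_ts, response_ts)
    timestamps = [t for t, _ in detections]
    index = min(bisect_left(timestamps, label_time), len(detections) - 1)
    return detections[index][1], state_label

print(make_teacher_sample([(0.0, 71.0), (30.0, 95.0), (60.0, 90.0)],
                          request_ts=25.0, response_ts=55.0, state_label="at work"))
```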
  • the information processing device 30 provides teacher data to the information processing device 40 as described in the first embodiment.
  • the learning unit 41 of the information processing device 40 executes machine learning using teacher data to generate a model.
  • the information processing device 40 provides the model to the information processing device 50 .
  • the estimation unit 51 of the information processing device 50 processes the detection result of the sensor 10 using a model to estimate the state of the target 2 .
  • the information processing device 50 provides the estimation result of the state of the target 2 to the application 60 .
  • the information processing system 1 can selectively cause the evaluator 3 to evaluate the subject 2 in a situation where it is estimated that the state of the subject 2 has changed. Such an information processing system 1 contributes to, for example, reducing the load required for the process of acquiring state information when the state of the target 2 is substantially the same.
  • FIG. 8 is a diagram showing an information processing system according to the third embodiment.
  • the same reference numerals are given to the same configurations as those of the above-described embodiment, and the description thereof will be omitted or simplified.
  • the identifying unit 35 of the information processing system 1 identifies a predetermined terminal from among the terminals 20 existing within a predetermined range with respect to the detection target position of the sensor 10 .
  • the generation unit 36 generates teacher data using the state information provided from the predetermined terminal identified by the identification unit 35 and the detection result of the sensor 10 .
  • the storage unit 33 of the information processing device 30 stores, for example, the location information D7, and the specifying unit 35 uses the location information D7 to specify the location of the predetermined terminal.
  • the position information D7 includes, for example, information on the position of the detection target of each sensor 10 (arbitrarily referred to as sensor position information).
  • the sensor position information includes, for example, information indicating the position where each sensor 10 is installed and information indicating the detectable range of each sensor 10 .
  • Sensor location information may include information indicating the location of the target 2 associated with the sensor 10 .
  • the sensor position information may be the position information of the seat of the subject 2 in the office.
  • the sensor position information may be the central position of the field of view (angle of view) when the sensor 10 takes an image.
  • the sensor position information may be given as a fixed value when the movement of the sensor 10 is not assumed (for example, when the sensor 10 is a fixed point camera).
  • the sensor position information may be given as a variable value when the sensor 10 is assumed to move (for example, when the sensor 10 is a wearable device).
  • the position of the sensor 10 may be detected, for example, by a device (eg, wearable device, mobile device) that includes the sensor 10.
  • Any method may be used to detect the position of the sensor 10; for example, a method using GPS, a method using signals (eg, beacon signals) received from a plurality of other devices, or other methods may be used.
  • a device that detects the position of the sensor 10 may be a device including the sensor 10 or a device different from this device (for example, a device external to the sensor 10).
  • the information processing device 30 may acquire the position information of the sensor 10 from a device that includes the sensor 10 or from a device different from this device, and may update the sensor position information stored in the storage unit 33 using the acquired information.
  • the position information D7 includes, for example, the position information of the terminal 20 (arbitrarily referred to as terminal position information).
  • the terminal location information may be given as a fixed value when the terminal 20 is not expected to move (for example, when the terminal 20 is a stationary computer).
  • the terminal location information may include information indicating the location where the terminal 20 is installed.
  • the terminal location information may include information indicating the location of the evaluator 3 associated with the terminal 20 .
  • the terminal 20 may be installed at the seat of the evaluator 3 in the office, and the terminal position information may include the position information of the seat of the evaluator 3 .
  • the terminal location information may be given as a variable value when the terminal 20 is expected to move (for example, when the terminal 20 is a smartphone).
  • the position of the terminal 20 may be detected, for example, by the terminal 20 itself (eg, a smartphone).
  • Any method may be used to detect the position of the terminal 20; for example, a method using GPS, a method using signals (eg, beacon signals) received from a plurality of other devices, or other methods may be used.
  • a device that detects the position of the terminal 20 may be the terminal 20 or a device different from the terminal 20 (for example, a device external to the terminal 20).
  • the information processing device 30 may acquire the position information of the terminal 20 from the terminal 20 or from a device different from the terminal 20, and may update the terminal location information stored in the storage unit 33 using the acquired information.
  • the specifying unit 35 refers to the position information D7 stored in the storage unit 33 and specifies (eg, determines) the terminal 20 to which the state information of the target 2 is input. For example, the specifying unit 35 sets the predetermined range R using the sensor position information included in the position information D7.
  • the predetermined range R is, for example, a range in which the distance from the detection target position of the sensor 10 is equal to or less than a predetermined value.
  • the position of the detection target of the sensor 10 is the position of the object 2 .
  • the predetermined range R is, for example, an area inside a circle (eg, geofence) centered on the position of the target 2 . The radius of this circle corresponds to the predetermined value.
  • the predetermined range R is set, for example, to a range in which the evaluator 3 can directly recognize the target 2 .
  • the radius of the predetermined range R is set in advance to a distance (eg, several meters) at which a person with normal eyesight can recognize another person.
  • the specifying unit 35 determines whether or not the terminal 20 exists inside the predetermined range R using the terminal location information included in the location information D7.
  • the specifying unit 35 sets, among the terminals whose terminal location information is included in the location information D7, terminals existing inside a predetermined range R (terminals 20A and 20B in FIG. 8) as predetermined terminal candidates.
  • the specifying unit 35 excludes a terminal (the terminal 25 in FIG. 8) existing outside the predetermined range R from among the terminals whose terminal position information is included in the position information D7 from the predetermined terminal candidates.
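  • A minimal sketch of this in-range determination follows; planar coordinates in metres are assumed for simplicity, whereas a real deployment might use geographic coordinates and a geodesic distance.

```python
# Terminals whose registered position lies within the radius of the predetermined
# range R around the detection target position become predetermined terminal candidates.
import math
from typing import Dict, List, Tuple

Position = Tuple[float, float]  # (x, y) in metres

def candidates_in_range(target_position: Position,
                        terminal_positions: Dict[str, Position],
                        radius: float) -> List[str]:
    """Return the IDs of the terminals inside the predetermined range R."""
    return [terminal_id for terminal_id, position in terminal_positions.items()
            if math.dist(position, target_position) <= radius]

# Example: terminals 20A and 20B are near the target, terminal 25 is not.
print(candidates_in_range((0.0, 0.0),
                          {"T20A": (1.5, 0.5), "T20B": (2.0, -1.0), "T25": (40.0, 3.0)},
                          radius=5.0))  # -> ['T20A', 'T20B']
```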
  • the generating unit 36 generates teacher data by associating the state information provided by the terminal set as a predetermined terminal candidate by the specifying unit 35 with the detection result of the sensor 10 that detects the target 2 .
  • the information processing device 30 transmits a state information request to at least one of the predetermined terminal candidates.
  • the control unit 37 shown in FIG. 7 selects at least one terminal 20 from the predetermined terminal candidates as the predetermined terminal.
  • the control unit 37 may randomly select a predetermined terminal from the predetermined terminal candidates.
  • the control unit 37 may select a predetermined terminal from the predetermined terminal candidates according to a predetermined rule. For example, multiple terminals are registered in advance in the information processing device 30 .
  • a priority order may be set in advance for a plurality of registered terminals, and the control unit 37 may select a predetermined terminal according to the priority order.
  • for example, when the predetermined terminal candidates include the terminal 20A of the boss of the subject 2 and the terminal 20B of a colleague of the subject 2, the control unit 37 may select the terminal 20A of the boss with priority over the terminal 20B of the colleague.
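  • A selection by priority could be sketched as follows (the priority values themselves are assumed; this description only states that a priority order may be set in advance).

```python
# Pick the predetermined terminal from the candidates according to a pre-registered
# priority order; unknown terminals fall to the lowest priority.
PRIORITY = {"T20A": 1, "T20B": 2}  # eg, the boss's terminal before a colleague's

def select_predetermined_terminal(candidate_ids):
    """Return the candidate with the highest priority (lowest rank number)."""
    return min(candidate_ids, key=lambda tid: PRIORITY.get(tid, float("inf")))

print(select_predetermined_terminal(["T20B", "T20A"]))  # -> 'T20A'
```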
  • when the information processing device 30 acquires the detection result of the sensor 10, it provides a state information request including information specifying the target 2 corresponding to this sensor 10 to a predetermined terminal.
  • the terminal 20 does not have to be pre-associated with the target 2, and the generator 36 associates the detection result of the sensor 10 with the state information acquired as a response to the state information request to generate teacher data.
  • the information specifying the target 2 may be associated in advance with the information specifying the sensor 10 or may be derived using the detection result of the sensor 10 .
  • for example, the sensor 10 includes a camera, and the information processing device 30 may extract features of the target 2 appearing in the image captured by the sensor 10, compare the extracted feature information with pre-registered information, and thereby specify the target 2. In this case, the information specifying the target 2 may not be associated in advance with the information specifying the sensor 10.
  • FIG. 9 is a diagram showing an information processing system according to the fourth embodiment.
  • the same reference numerals are given to the same configurations as those of the above-described embodiment, and the description thereof will be omitted or simplified.
  • the information processing system 1 generates teacher data using the detection result of the sensor 11 and the detection result of the sensor 12 different from the sensor 11 .
  • the sensor 11 is, for example, a sensor similar to the sensor 10 shown in FIG. 1.
  • the sensor 12 differs from the sensor 11 in the type of information it detects, for example.
  • the sensor 11 may be a sensor that detects vital activity of the subject 2 and the sensor 12 may be a camera that detects the appearance of the subject 2 .
  • Sensor 11 may be a sensor that detects sounds produced by object 2 and sensor 12 may be a camera that detects the appearance of object 2 .
  • the sensor 12 may differ from the sensor 11 in positional relationship with respect to the target 2 .
  • sensor 12 may have a different distance from object 2 than sensor 11 .
  • the sensor 11 may be a sensor attached to the target 2 like a wearable device, and the sensor 12 may be a sensor remote from the target 2 like a surveillance camera.
  • the sensor 12 may be a sensor that detects the target 2 in a direction different from that of the sensor 11 .
  • for example, the sensors 11 and 12 both include cameras; the sensor 11 detects (eg, photographs) the target 2 from a first direction (eg, lateral), and the sensor 12 may detect (eg, photograph) the target 2 from a second direction (eg, front) different from the first direction.
  • the sensor 12 provides the detection result to the terminal 26.
  • the terminal 26 may be the same terminal as the terminal 20 described in FIG. 1, or may be a terminal different from the terminal 20.
  • the terminal 26 has an estimation unit 27.
  • the estimation unit 27 estimates the state of the subject 2 using the detection result of the sensor 12 .
  • the estimation unit 27 processes the detection result of the sensor 12 using a model generated by machine learning to estimate the state of the object.
  • the terminal 20 uses the estimation result of the estimation unit 27 to generate state information.
  • the terminal 20 provides the generated state information to the information processing device 30 .
  • the information processing device 30 generates training data using the detection result of the sensor 11 and the state information provided from the terminal 20.
  • the information processing device 30 provides the generated training data to the information processing device 40.
  • the learning unit 41 of the information processing device 40 executes machine learning using the teacher data provided from the information processing device 30 to generate an inference model.
  • the information processing device 40 provides the generated inference model to the information processing device 50 .
  • the estimation unit 52 of the information processing device 50 may be the same as or different from the estimation unit 51 described with reference to FIG.
  • the estimation unit 52 estimates the state of the target 2 using the detection result of the sensor 11 and the inference model generated by the learning unit 41 .
  • the information processing device 50 provides the estimation result of the estimation unit 52 to the application 60 .
  • the information processing system 1 is used, for example, as follows. As described with reference to FIG. 1, the information processing system 1 generates teacher data using the detection result of the sensor 10 and the state information input to the terminal 20 by the evaluator 3, and generates a first inference model using the generated teacher data.
  • the sensor 12 in FIG. 9 includes a sensor that detects the same type of information as the sensor 10, and the estimation unit 27 of the terminal 26 performs estimation processing using the first inference model.
  • the sensor 10 in FIG. 1 and the sensor 12 in FIG. 9 both include sensors that detect information indicating vital activity of the subject 2 (eg, heart rate).
  • a first inference model is a model for estimating the state of the subject 2 from information indicating the vital activity of the subject 2 .
  • the estimation unit 27 of the terminal 26 processes the information indicating the vital activity of the subject 2 detected by the sensor 12 and estimates state information (eg, health condition of the subject 2).
  • Sensor 11 includes, for example, a camera.
  • the information processing apparatus 30 generates second teacher data using the data of the image of the subject 2 photographed by the sensor 11 as the detection result and the data indicating the health condition of the subject 2 estimated by the estimation unit 27 as the state information.
  • the learning unit 41 generates an inference model for estimating the health condition of the target 2 from the image of the target 2, using the second teacher data. By repeating such processing, the information processing system 1 can generate an inference model that associates a plurality of types of information.
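  • The two-stage flow of this application example could be sketched as follows, with the models represented as plain callables (an assumption made for illustration, not an API defined in this description).

```python
# A first inference model turns vital data detected by the sensor 12 into a health
# state label; that label is paired with the image detected by the sensor 11 to
# build the second teacher data used to train an image-based model.
from typing import Callable, List, Sequence, Tuple

def build_second_teacher_data(
        image_samples: Sequence[object],            # detection results of the sensor 11 (images)
        vital_samples: Sequence[Sequence[float]],   # detection results of the sensor 12 (vital data)
        first_model: Callable[[Sequence[float]], str],
) -> List[Tuple[object, str]]:
    """Pair each image with the health state estimated from the matching vital sample."""
    return [(image, first_model(vitals))
            for image, vitals in zip(image_samples, vital_samples)]

# Dummy first model: an average heart rate below 100 bpm is labelled "good".
first_model = lambda vitals: "good" if sum(vitals) / len(vitals) < 100 else "check"
pairs = build_second_teacher_data(["img_001.png", "img_002.png"],
                                  [[72.0, 75.0], [110.0, 115.0]], first_model)
# the learning unit 41 could then train a second, image-based model on `pairs`
```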
  • FIG. 10 is a diagram showing an example of a computer according to the embodiment.
  • the computer 100 is used, for example, for at least one of the information processing devices according to the embodiment (eg, information processing device 30, information processing device 40, information processing device 50) and the terminals (eg, terminal 20, terminal 26).
  • Computer 100 includes CPU 101 , ROM 102 , RAM 103 , bus 104 , input section 105 , output section 106 , storage section 107 , communication section 108 and drive 109 .
  • the CPU 101, ROM 102 and RAM 103 are interconnected via a bus 104, respectively.
  • An input unit 105 , an output unit 106 , a storage unit 107 , a communication unit 108 and a drive 109 are connected to the bus 104 .
  • the CPU 101 includes a Central Processing Unit.
  • ROM 102 includes Read Only Memory.
  • RAM 103 includes Random Access Memory.
  • the CPU 101 executes various processes according to programs recorded in the ROM 102 or programs loaded from the storage unit 107 to the RAM 103 .
  • the RAM 103 also stores data necessary for the CPU 101 to execute various kinds of processing.
  • the input unit 105 includes, for example, at least one of a keyboard, mouse, trackball, and touchpad.
  • the input unit 105 receives input of various information by being operated by the user.
  • the input unit 105 may include a microphone or the like that receives voice input.
  • the output unit 106 includes, for example, a display device (eg, display) that outputs images.
  • the output unit 106 may include a speaker that outputs audio.
  • the input unit 105 and the output unit 106 may include a touch panel in which a transmissive touch pad is superimposed on the display unit.
  • the storage unit 107 includes, for example, at least one of a hard disk, solid state drive, and nonvolatile memory.
  • the storage unit 107 stores at least part of various data externally input to the computer 100 , various data used in processing by the computer 100 , and various data generated by the computer 100 .
  • a communication unit 108 controls communication with another device via a network.
  • a removable medium 110 is appropriately attached to the drive 109 .
  • Removable media 110 include, for example, storage media such as magnetic disks, optical disks, magneto-optical disks, and semiconductor memories.
  • Drive 109 reads information from removable media 110 .
  • Information read from the removable medium 110 is stored in the storage unit 107 .
  • drive 109 reads a program stored on removable media 110 and this program is installed on computer 100 .
  • the computer 100 does not have to include one or both of the input unit 105 and the output unit 106 .
  • the output unit 106 may be a device externally attached to the computer 100, and the computer 100 may include an interface to which the output unit 106 can be connected.
  • the information processing device and the terminal read the program stored in the storage unit and the storage device, and execute various processes according to this program.
  • this program causes the computer to, for example, acquire the detection result of the sensor that detects the target and the state information representing the state of the target, which is obtained from information different from the detection result of the sensor, from a predetermined terminal associated with the target, and to generate data used for machine learning using the acquired detection result and state information.
  • the above program may be recorded on a computer-readable storage medium and provided.
  • the above program may be a differential program (also referred to as a differential file as appropriate) that executes various processes in combination with a program (eg, operating system) recorded in a computer system.
  • the information processing system 1 associates the first information and the second information about the target to generate data used for machine learning.
  • the second information includes information different from the first information.
  • for example, the first information includes information obtained by detecting the target 2 with a sensor, and the second information includes information representing the state of the target judged by a person.
  • the second information may include information obtained by processing the first information by the device.
  • the first information may correspond to the detection result of a sensor that has detected an object, and the second information may include the result of arithmetic processing, statistical processing, or inference processing using the first information.
  • the second information includes information representing the state of the object obtained by other systems.
  • for example, the first information may include information obtained by detecting the target 2 with a sensor, and the second information may include information on the target 2 provided by a system that monitors or provides information.
  • the first information may be information of a different type from the second information.
  • the first information may be image-based information and the second information may be material composition, temperature, or force-based information.
  • the first information may be visually based information and the second information may be auditory, olfactory, gustatory, or tactile based.
  • the first information may include information acquired by a sensor attached to the target, and the second information may include information acquired by a sensor installed away from the target.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

[Problem] To generate data to be used for machine learning. [Solution] This information processing device comprises: an acquisition unit that acquires a result of detection by a sensor that detects a subject, and acquires, from a predetermined terminal associated with the subject, state information representing the state of the subject and obtained from information different from the result of detection by the sensor; and a generation unit that generates data to be used for machine learning by using the result of detection and state information acquired by the acquisition unit.

Description

Information processing device, information processing system, information processing program, and information processing method
 The present invention relates to an information processing device, an information processing system, an information processing program, and an information processing method.
 Artificial intelligence (hereinafter referred to as AI) that uses machine learning such as deep learning is expected to be used in various fields. For machine learning, for example, teacher data, which is a set of known inputs and outputs, is used. Patent Document 1 (Japanese Patent No. 6492880) below proposes machine learning using feature information extracted from an image and user information input by a user visually recognizing the image. Inference results by AI depend on the teacher data used for machine learning. Therefore, in the field of machine learning, a technique for generating teacher data for machine learning according to the use of AI is desired.
Patent No. 6492880
 According to a first aspect of the present invention, there is provided an information processing apparatus including: an acquisition unit that acquires a detection result of a sensor that detects a target, and acquires, from a predetermined terminal associated with the target, state information that is obtained from information different from the detection result of the sensor and that represents the state of the target; and a generation unit that generates data used for machine learning using the detection result and the state information acquired by the acquisition unit.
 According to a second aspect of the present invention, there is provided an information processing system including: the information processing apparatus of the first aspect; and an estimation unit that processes the detection result acquired by the acquisition unit using a model generated by machine learning by a learning unit, to estimate the state of the target.
 According to a third aspect of the present invention, there is provided an information processing program that causes a computer to: acquire a detection result of a sensor that detects a target, and acquire, from a predetermined terminal associated with the target, state information that is obtained from information different from the detection result of the sensor and that represents the state of the target; and generate data used for machine learning using the acquired detection result and state information.
 According to a fourth aspect of the present invention, there is provided an information processing method using one or more computers, the method including: receiving, by a communication unit of a computer, a detection result of a sensor that detects a target, and receiving, from a predetermined terminal associated with the target, state information that is obtained from information different from the detection result of the sensor and that represents the state of the target; and generating, by a processing unit of the computer, data used for machine learning using the detection result and the state information received by the communication unit.
 FIG. 1 is a diagram showing an information processing system according to the first embodiment.
 FIG. 2 is a diagram showing processing related to a predetermined terminal.
 FIG. 3 is a diagram showing association between sensors and terminals.
 FIG. 4 is a diagram showing processing for deriving a sensor detection result corresponding to state information.
 FIG. 5 is a diagram showing processing for executing machine learning.
 FIG. 6 is a diagram showing an information processing method according to the first embodiment.
 FIG. 7 is a diagram showing an information processing system according to the second embodiment.
 FIG. 8 is a diagram showing an information processing system according to the third embodiment.
 FIG. 9 is a diagram showing an information processing system according to the fourth embodiment.
 FIG. 10 is a diagram showing a computer according to the embodiment.
[First embodiment]
 A first embodiment will be described. FIG. 1 is a diagram showing an information processing system according to the first embodiment. The information processing system 1 associates first information and second information regarding a target 2 to generate data used for machine learning. This data includes teacher data for machine learning or data that is the source of the teacher data. In the following description, data used for machine learning is collectively referred to as teacher data as appropriate.
 In the present embodiment, the first information corresponds to the result of detection of the target 2 by the sensor 10, and the second information corresponds to the result of the person 3 judging the state of the target 2. In this embodiment, the subject 2 is a person. In the following description, the target 2 will be referred to as the target person 2 as appropriate. The subject 2 is, for example, an employee working in an office. The person 3 is, for example, an evaluator who evaluates the working condition of the subject person 2. The person 3 will be referred to as the evaluator 3 as appropriate in the following description.
 The information processing system 1 uses machine learning to generate an AI model that estimates the second information from the first information. In the following description, this model is referred to as an inference model as appropriate. The information processing system 1 associates, for example, the detection result (first information) obtained by the sensor 10 detecting the subject 2 with the evaluation result (second information) obtained by the evaluator 3 evaluating the state of the subject 2, and executes machine learning. Through machine learning, the information processing system 1 generates an AI inference model for estimating the state of the subject 2 from the detection result of the sensor 10.
 After generating the inference model, the information processing system 1 estimates the state of the subject 2 from the detection results of the sensor 10 using AI in which this inference model is implemented. For example, the information processing system 1 estimates the working condition as the condition of the subject 2. The information processing system 1 provides the application 60 with the estimated work status of the subject 2. The application 60 includes, for example, an application that manages the attendance of the subject 2. The application 60, for example, accumulates the results of estimating the working condition of the subject 2 and creates a log. For example, the superior of the subject 2 (for example, the evaluator 3) can grasp the attendance of the subject 2 from the log generated by the application 60.
 According to the information processing system 1 of the present embodiment, the state of the subject can be estimated and managed by AI in which the inference model is implemented, based on the detection result of the sensor 10 that detects data (physical quantities, etc.) of the subject 2. For example, after the inference model has been generated, the information processing system 1 helps to reduce the time and effort required of the evaluator 3 and to evaluate the state of the subject 2 objectively when managing the state of the subject 2.
 Next, each part of the information processing system 1 will be described. The information processing system 1 includes a sensor 10, a terminal 20, an information processing device 30, an information processing device 40, and an information processing device 50.
 The sensor 10 detects the target 2. The sensor 10 includes, for example, a sensor that accompanies the subject 2. The sensor 10 may be a wearable device such as a smartwatch worn by the subject 2. The sensor 10 detects, for example, information representing the vital activity of the subject 2. The information representing vital activity includes, for example, at least one of the subject 2's body temperature, the subject 2's surface humidity, pulse, blood pressure, blood oxygen concentration, and respiratory rate. The sensor 10 may detect the movement (eg, acceleration) of the part (eg, hand) of the subject 2 to which the sensor 10 is attached. The sensor 10 may include smart glasses and may detect the line of sight of the subject 2.
 In this embodiment, the terminal 20 includes the terminal of the evaluator 3 associated with the subject 2. The status information includes, for example, information input to the terminal 20 by the evaluator 3 (working status of the subject 2, etc.). The terminal 20 may be a portable device (eg, smart phone, tablet) possessed by the evaluator 3, or may be a stationary device (eg, personal computer).
 FIG. 2 is a diagram showing an example of processing related to a terminal. Here, the name of the target person 2 is assumed to be "A". The terminal 20 is a smartphone, for example, and includes a display unit 21. The display unit 21 is overlapped with, for example, a transmissive touch pad, and the display unit 21 and the touch pad constitute a touch panel. The terminal 20 displays information D1 to D4 on the display unit 21, for example.
 The information D1 includes information indicating that the input of the condition of the subject 2 is accepted. For example, the information D1 includes the text "What is your status?" as information notifying that the status input is being accepted. The information D1 may include information specifying the target 2. For example, the information D1 includes the text of the name of the target person 2 (eg, "Mr. A") as information specifying the target 2. The information specifying the target 2 may include information different from the target person's name, and may include, for example, identification information (eg, employee number) of the target person 2. The information specifying the target 2 may be displayed separately from the information notifying that the status of the target is being received. The information D1 may not include information specifying the target 2. For example, when the number of subjects 2 corresponding to the terminal 20 is 1, the evaluator 3 can specify the subject whose state is to be input even if the information specifying the subject 2 is not displayed.
 Information D2 to D4 includes information corresponding to a plurality of candidates representing the state of the target 2. In FIG. 2, information D2 to information D4 are items (eg, icons) in the displayed image. In the following description, information D2, information D3, and information D4 will be referred to as icon D2, icon D3, and icon D4, respectively. Icon D2 includes the text "at work". Icon D3 includes the text "resting". Icon D4 contains the text "Other". The texts "at work", "resting", and "Other" are information representing candidate states of the subject 2, respectively. Information representing candidates for the state of the target 2 may be represented in a form different from text (for example, a pictogram).
 When the icon D2 is selected, the terminal 20 detects that information associated with the icon D2 (eg, "at work") has been input as information representing the state of the target 2. For example, the evaluator 3 visually recognizes the subject 2 and determines that the subject 2 is at work. In this case, when the evaluator 3 touches the icon D2, information corresponding to "at work" is input to the terminal 20 as the status information.
 The "Other" icon D4 is selected when the state of the target 2 is determined to differ from the state candidates represented by the other icons (icon D2, icon D3), for example, when the state of the target 2 is "absent". The "Other" icon D4 may also be selected when the evaluator 3 cannot determine the condition of the subject 2, such as when the evaluator 3 is in a position from which the subject 2 cannot be recognized.
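 As a minimal illustration (an assumption, not part of this description), the mapping from a selected icon to the state information provided by the terminal 20 could look like the following sketch.

```python
# Map each icon to a state label; "other" is used when none of the listed
# candidates applies or when the evaluator cannot judge the state.
ICON_STATES = {"D2": "at work", "D3": "resting", "D4": "other"}

def on_icon_selected(icon_id: str, target_id: str) -> dict:
    """Build the state information that the terminal 20 provides to the information processing device 30."""
    return {"target_id": target_id, "state": ICON_STATES[icon_id]}

print(on_icon_selected("D2", "U02A"))  # the evaluator judged the subject to be at work
```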
 The evaluator 3 may directly recognize the real object of the subject 2 and input the state of the subject 2 into the terminal 20. Direct recognition includes, for example, the evaluator 3 obtaining information about the subject 2 without going through a device. For example, the evaluator 3 may judge the state of the subject 2 by directly seeing the subject 2 or directly hearing the sound emitted by the subject 2, and input the state of the subject 2 to the terminal 20. The evaluator 3 may ask the subject 2 what state he or she is in using voice, text, or gestures, and input the state of the subject 2 into the terminal 20 based on the response from the subject 2.
 The evaluator 3 may indirectly recognize the subject 2 and input the state of the subject 2 to the terminal 20. Indirect recognition includes, for example, the evaluator 3 obtaining information about the subject 2 via a device. The evaluator 3 may input the state of the target 2 to the terminal 20 using the result of detecting the target 2 with a sensor (referred to as a second sensor as appropriate) different from the sensor 10 (referred to as a first sensor as appropriate). The second sensor may differ from the first sensor in the type of information it detects. For example, the first sensor may be a sensor that detects the vital activity of the subject 2, and the second sensor may be a camera that detects the appearance of the subject 2. In this case, the evaluator 3 may input the state of the subject 2 to the terminal 20 by viewing an image (eg, moving image) of the subject 2 captured by the camera.
 The second sensor may differ from the first sensor in its positional relationship with respect to the target 2. For example, the second sensor may have a different distance from the target 2 than the first sensor. For example, the first sensor may be a sensor attached to the target 2, such as a wearable device, and the second sensor may be a sensor remote from the target 2, such as a surveillance camera. The second sensor may also be a sensor that detects the target 2 from a direction different from that of the first sensor. For example, the first sensor and the second sensor may both include cameras, with the first sensor detecting (eg, photographing) the target 2 from a first direction (eg, lateral) and the second sensor detecting (eg, photographing) the target 2 from a second direction (eg, front) different from the first direction.
 Returning to the description of FIG. 1, the information processing device 30 includes a communication unit 31, a processing unit 32, and a storage unit 33. The communication unit 31 controls communication performed by the information processing device 30 with other devices. The processing unit 32 processes information. The processing executed by the processing unit 32 includes, for example, at least one of arithmetic processing, image processing, analysis processing, determination processing, inference processing, and control processing. The storage unit 33 stores at least one of information acquired by the information processing device 30 from another device, information used for processing executed by the processing unit 32, and information generated by processing executed by the processing unit 32. The storage unit 33 stores, for example, candidate information D6 as information used for processing executed by the processing unit 32. The candidate information D6 will be described later.
 The processing unit 32 includes an acquisition unit 34, a specifying unit 35, and a generation unit 36. The acquisition unit 34 acquires the detection result of the sensor 10 that detects the subject 2. The acquisition unit 34 also acquires the status information of the subject 2 from the terminal 20 associated with the subject 2. The state information is information obtained from information different from the detection result of the sensor 10 and includes information representing the state of the target 2. For example, in FIG. 2, the status information includes information (eg, at work, on break, or other) entered by the superior of the subject 2 (the evaluator 3).
 In this embodiment, the storage unit 33 stores the candidate information D6. The candidate information D6 includes information that associates the candidates of the terminal 20 used for inputting the status information of the subject 2 with the subject 2. The specifying unit 35 specifies the terminal 20 from the candidates associated with the subject 2 in the candidate information stored in the storage unit 33.
 図3は、センサと端末の関連付けの一例を示す図である。本実施形態において、対象者2は、複数の対象者(2A、2B)を含む。以下の説明において適宜、複数の対象者を区別しない場合に対象者2と表し、複数の対象者を区別する場合に各対象者を対象者2A、対象者2Bと表す。本実施形態において、センサ10は、複数のセンサ(10A、10B)を含む。以下の説明において適宜、複数のセンサを区別しない場合にセンサ10と表し、複数のセンサを区別する場合に各センサをセンサ10A、センサ10Bと表す。センサ10Aは、対象者2Aに対応し、対象者2Aを検出する。センサ10Bは、対象者2Bに対応し、対象者2Bを検出する。 FIG. 3 is a diagram showing an example of association between sensors and terminals. In this embodiment, the subject 2 includes multiple subjects (2A, 2B). In the following description, when a plurality of subjects are not distinguished, they are referred to as subject 2, and when a plurality of subjects are distinguished, they are referred to as subject 2A and subject 2B. In this embodiment, the sensor 10 includes multiple sensors (10A, 10B). In the following description, when a plurality of sensors are not distinguished, they are referred to as sensor 10, and when they are distinguished, they are referred to as sensors 10A and 10B. The sensor 10A corresponds to the subject 2A and detects the subject 2A. The sensor 10B corresponds to the subject 2B and detects the subject 2B.
 本実施形態において、評価者3は、複数の評価者(3A、3B)を含む。以下の説明において適宜、複数の評価者を区別しない場合に評価者3と表し、複数の評価者を区別する場合に各評価者を評価者3A、評価者3Bと表す。評価者3Aは、対象者2A及び対象者2Bのそれぞれに対応する。評価者3Aは、例えば、対象者2A及び対象者2Bの上司である。評価者3Bは、対象者2Bに対応する。評価者3Bは、例えば、対象者2Bの上司である。例えば、対象者2Bは2つの部署に所属する従業者であり、評価者3A及び評価者3Bは、それぞれ、対象者2Bが所属する各部署のマネージャーでもよい。 In this embodiment, evaluators 3 include multiple evaluators (3A, 3B). In the following description, when a plurality of evaluators are not distinguished, they are referred to as evaluator 3, and when a plurality of evaluators are distinguished, they are referred to as evaluators 3A and 3B. The evaluator 3A corresponds to each of the subject 2A and the subject 2B. The evaluator 3A is, for example, the superior of the subject 2A and the subject 2B. Evaluator 3B corresponds to subject 2B. The evaluator 3B is, for example, the superior of the subject 2B. For example, the subject 2B may be an employee belonging to two departments, and the evaluators 3A and 3B may be managers of the respective departments to which the subject 2B belongs.
 本実施形態において、端末20は、複数の端末(20A、20B)を含む。以下の説明において適宜、複数の端末を区別しない場合に端末20と表し、複数の端末を区別する場合に各端末を端末20A、端末20Bと表す。端末20Aは、評価者3Aと関連付けられている。端末20Aは、評価者3Aから対象者2の状態情報の入力を受け付ける。例えば、評価者3Aは、評価者3Aに対応する対象者2Aの状態を判断し、その判断結果を表す状態情報を端末20Aに入力する。例えば、評価者3Aは、評価者3Aに対応する対象者2Bの状態を判断し、その判断結果を表す状態情報を端末20Aに入力する。端末20Bは、評価者3Bと関連付けられている。端末20Bは、評価者3Bから対象者2の状態情報の入力を受け付ける。例えば、評価者3Bは、評価者3Bに対応する対象者2Bの状態を判断し、その判断結果を表す状態情報を端末20Bに入力する。 In this embodiment, the terminal 20 includes multiple terminals (20A, 20B). In the following description, when a plurality of terminals are not distinguished, they are referred to as a terminal 20, and when a plurality of terminals are distinguished, they are referred to as a terminal 20A and a terminal 20B. Terminal 20A is associated with evaluator 3A. The terminal 20A receives the input of the condition information of the subject 2 from the evaluator 3A. For example, the evaluator 3A judges the condition of the subject 2A corresponding to the evaluator 3A, and inputs the condition information representing the judgment result to the terminal 20A. For example, the evaluator 3A judges the condition of the subject 2B corresponding to the evaluator 3A, and inputs the condition information indicating the judgment result to the terminal 20A. Terminal 20B is associated with evaluator 3B. The terminal 20B receives the input of the condition information of the subject 2 from the evaluator 3B. For example, the evaluator 3B determines the state of the subject 2B corresponding to the evaluator 3B, and inputs state information indicating the determination result to the terminal 20B.
 図3において符号D6は、端末20の候補と対象2とを関連付けた候補情報を表す。候補情報D6は、例えばテーブルデータである。候補情報D6は、例えば、センサID、対象ID、端末ID、及び評価者IDの項目を含む。センサIDは、各センサを特定する情報(例、識別情報)を含む。センサIDは、複数のセンサで重複しないように割り付けられる。ここでは、センサ10AのセンサIDが「S10A」であり、センサ10BのセンサIDが「S10B」であるとする。対象IDは、各対象2を特定する情報(例、識別情報)を含む。対象IDは、複数の対象2で重複しないように割り付けられる。ここでは、対象者2Aの対象IDが「U02A」であり、対象2Bの対象IDが「U02B」であるとする。端末IDは、各端末20を特定する情報を含む。端末IDは、複数の端末20で重複しないように割り付けられる。ここでは、端末20Aの端末IDが「T20A」であり、端末20Bの端末IDが「T20B」であるとする。評価者IDは、各評価者3を特定する情報(例、識別情報)を含む。評価者IDは、複数の評価者で重複しないように割り付けられる。ここでは、評価者3Aの評価者IDが「E01」であり、評価者3Bの評価者IDが「E02」であるとする。 In FIG. 3, reference D6 represents candidate information that associates the terminal 20 candidate with the target 2. In FIG. The candidate information D6 is, for example, table data. Candidate information D6 includes, for example, items of sensor ID, target ID, terminal ID, and evaluator ID. The sensor ID includes information (eg, identification information) that identifies each sensor. Sensor IDs are assigned so as not to duplicate among a plurality of sensors. Here, it is assumed that the sensor ID of the sensor 10A is "S10A" and the sensor ID of the sensor 10B is "S10B". The target ID includes information (eg, identification information) specifying each target 2 . Target IDs are assigned so as not to duplicate among multiple targets 2 . Here, it is assumed that the target ID of the target person 2A is "U02A" and the target ID of the target 2B is "U02B". The terminal ID includes information identifying each terminal 20 . Terminal IDs are assigned so as not to duplicate among a plurality of terminals 20 . Here, it is assumed that the terminal ID of the terminal 20A is "T20A" and the terminal ID of the terminal 20B is "T20B". The evaluator ID includes information (eg, identification information) that identifies each evaluator 3 . Evaluator IDs are assigned so as not to overlap among multiple evaluators. Here, it is assumed that the evaluator ID of the evaluator 3A is "E01" and the evaluator ID of the evaluator 3B is "E02".
 候補情報D6において、同じ行に配置される各項目の値は互いに関連付けられている。例えば、センサIDの「S10A」と対象IDの「U02A」とは、同じ行に配置されており、互いに対応関係にある。センサIDが「S10A」であるセンサ10Aは、対象IDが「U02A」である対象者2Aと対応関係にある。センサ10Aは、対応関係にある対象者2Aを検出する。例えば、センサ10Aは、オフィスにおいて対象者2Aが居ることが想定される場所に設置される。センサ10Aが出力した検出結果は、対象者2Aを検出した結果として扱われる。 In the candidate information D6, the values of each item arranged on the same line are associated with each other. For example, the sensor ID "S10A" and the target ID "U02A" are arranged on the same line and have a correspondence relationship with each other. The sensor 10A whose sensor ID is "S10A" has a corresponding relationship with the target person 2A whose target ID is "U02A". The sensor 10A detects the target person 2A in correspondence. For example, the sensor 10A is installed in a place where the subject 2A is assumed to be in an office. The detection result output by the sensor 10A is treated as the result of detecting the subject 2A.
 センサIDの「S10A」と端末IDの「T20A」とは、同じ行に配置されており、互いに対応関係にある。センサIDが「S10A」であるセンサ10Aは、端末IDが「T20A」である端末20Aと対応関係にある。センサ10Aの検出結果は、対応関係にある端末20Aから出力された情報と関連付けられて、教師データ又はその元データとして利用される。センサIDの「S10A」と評価者IDの「E03A」とは、同じ行に配置されており、互いに対応関係にある。センサIDが「S10A」であるセンサ10Aは、評価者IDが「E03A」である評価者3Aと対応関係にある。センサ10Aの検出結果は、対応関係にある評価者3Aが入力する状態情報と関連付けられて、教師データ又はその元データとして利用される。 The sensor ID "S10A" and the terminal ID "T20A" are arranged on the same line and have a corresponding relationship. The sensor 10A whose sensor ID is "S10A" has a corresponding relationship with the terminal 20A whose terminal ID is "T20A". The detection result of the sensor 10A is associated with information output from the corresponding terminal 20A and used as teacher data or its original data. The sensor ID "S10A" and the evaluator ID "E03A" are arranged on the same line and have a correspondence relationship with each other. The sensor 10A whose sensor ID is "S10A" has a corresponding relationship with the evaluator 3A whose evaluator ID is "E03A". The detection result of the sensor 10A is associated with state information input by the evaluator 3A, which has a corresponding relationship, and is used as teacher data or its original data.
The target ID "U02A" and the terminal ID "T20A" are arranged in the same row and correspond to each other. The subject 2A, whose target ID is "U02A", corresponds to the terminal 20A, whose terminal ID is "T20A". The state information of the subject 2A is output from the corresponding terminal 20A, and at least part of the information output from the terminal 20A is treated as the state information of the subject 2A. The target ID "U02A" and the evaluator ID "E03A" are arranged in the same row and correspond to each other. The subject 2A, whose target ID is "U02A", corresponds to the evaluator 3A, whose evaluator ID is "E03A". The state information of the subject 2A is input by the corresponding evaluator 3A.
The terminal ID "T20A" and the evaluator ID "E03A" are arranged in the same row and correspond to each other. The terminal 20A, whose terminal ID is "T20A", corresponds to the evaluator 3A, whose evaluator ID is "E03A". State information input to the terminal 20A is treated as having been input by the evaluator 3A.
The sensor 10A provides the information processing device 30 with a detection result that includes, for example, information containing a value obtained by the detection (hereinafter referred to as a sensor value, as appropriate) or a result of processing the sensor value, time information indicating the timing at which the detection was performed (e.g., a time stamp), and the sensor ID of the sensor 10A itself. The evaluator 3A judges the state of the corresponding subject 2A and inputs information representing the state of the subject 2A into the corresponding terminal 20A. The terminal 20A provides the information processing device 30 with state information that includes, for example, information containing the input value or a result of processing the input value, time information indicating the timing of the input (e.g., a time stamp), and the terminal ID of the terminal 20A itself.
The evaluator 3A judges the state of the subject 2A with whom the evaluator 3A is associated, and inputs information representing the state of the subject 2A into the terminal 20A with which the evaluator 3A is associated. The terminal 20A provides the information processing device 30 with state information that includes, for example, the state information, time information indicating the timing of the input (e.g., a time stamp), and the terminal ID of the terminal 20A itself. The state information output by the terminal 20A may include one or both of the state information input to the terminal 20A and state information obtained by processing the information (e.g., input values) input to the terminal 20A.
Note that the sensor 10A may detect at least one of a target 2 other than the subject 2A, a person who is not a target, and an object that is not a target. For example, the sensor 10A may include a camera installed in the space where the subject 2A works. The sensor 10A or a device external to the sensor 10 may analyze the image captured by the camera and detect the subject 2A appearing in the image. The sensor 10A or the external device may analyze an image showing a plurality of targets 2 and identify the subject 2A appearing in the image. When the sensor 10A or a device external to the sensor 10 identifies the subject 2A from the detection result, the target ID need not be associated with the sensor ID. For example, the target ID may be associated with feature information representing features of the appearance of the subject 2A. The sensor 10A or the external device may identify the target by comparing the detection result of the sensor 10A with the feature information, and may associate the detection result of the identified target with the target ID associated with the feature information.
Note that the sensor 10A may detect a plurality of targets 2 sequentially in time according to a predetermined schedule. In this case, the target 2 detected by the sensor 10A is identified using the timing at which the detection was performed and the schedule. The sensor 10A may also detect a target 2 located in a predetermined direction and detect a plurality of targets 2 by switching the predetermined direction. In this case, the target 2 detected by the sensor 10A is identified using information that associates detection directions with targets 2 in advance and the direction in which the detection was performed.
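As an illustrative sketch only (the schedule format, time slots, and function name below are assumptions and not part of the embodiment), the schedule-based identification described above could be implemented as a lookup from the detection timestamp to the target assigned to that time slot:

```python
from datetime import datetime, time

# Hypothetical schedule: each entry assigns a time slot of the sensor to one target ID.
SCHEDULE = [
    (time(9, 0), time(9, 30), "U02A"),   # 09:00-09:30 -> subject 2A
    (time(9, 30), time(10, 0), "U02B"),  # 09:30-10:00 -> target 2B
]

def identify_target(detected_at: datetime) -> str | None:
    """Return the target ID assigned to the time slot containing the detection timestamp."""
    t = detected_at.time()
    for start, end, target_id in SCHEDULE:
        if start <= t < end:
            return target_id
    return None  # no target scheduled at this timing

print(identify_target(datetime(2021, 7, 1, 9, 40)))  # -> "U02B"
```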
Note that, in the example of FIG. 3, the evaluator 3A corresponds to a plurality of subjects 2. In such a case, when accepting input of state information, the terminal 20A may output information designating a subject 2 as an image or as sound. For example, as shown in FIG. 2, the terminal 20A may display information identifying the subject 2 (e.g., "Mr. A") as the information designating the subject 2. When accepting input of state information, the terminal 20A may also accept input of information identifying the subject 2 to whom the input state information relates. For example, the evaluator 3A may input, into the terminal 20A, information identifying the subject 2B (e.g., the name of the subject 2B or the target ID) together with state information about the subject 2B. In this case, the terminal 20A may provide the information processing device 30 with information including the information identifying the subject 2B and the state information.
Note that the evaluator 3A may be a person different from the subject 2A, or may be the same person as the subject 2A. For example, the subject 2A may be the same person as the evaluator 3A, and the terminal 20A may be a terminal carried by the subject 2A. In this case, the subject 2A may input his or her own state information into the terminal 20A. When the subject 2A is the same person as the evaluator 3A, the evaluator ID need not be associated with the target ID. For example, the state information output from the terminal 20A may be associated with the detection result of the sensor 10A associated with the terminal 20A and used for teacher data.
Returning to the description of FIG. 1, the identifying unit 35 identifies the correspondence between the information output from the sensors 10 and the information output from the terminals 20. For example, the identifying unit 35 refers to the candidate information D6 and identifies the state information acquired from a terminal 20 that corresponds to the detection result acquired from a sensor 10. Here, referring to FIG. 1 and FIG. 3, it is assumed that the sensor ID included in the detection result acquired from the sensor 10 is "S10A". The identifying unit 35 identifies the sensor ID (here, "S10A") from the information acquired from the sensor 10. The identifying unit 35 identifies the terminal ID (here, "T20A") that corresponds to "S10A" in the candidate information D6. The identifying unit 35 identifies the state information acquired from the terminal 20A, whose terminal ID is "T20A", as the state information corresponding to the detection result of the sensor 10A, whose sensor ID is "S10A".
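A minimal sketch of these lookups, covering both the sensor-to-terminal direction described here and the terminal-and-target-to-sensor direction described in the next paragraph; the in-memory representation of the candidate information D6 and the field names are assumptions for illustration:

```python
# Hypothetical in-memory representation of the candidate information D6 (FIG. 3).
CANDIDATE_INFO = [
    {"sensor_id": "S10A", "target_id": "U02A", "terminal_id": "T20A", "evaluator_id": "E01"},
    {"sensor_id": "S10B", "target_id": "U02B", "terminal_id": "T20A", "evaluator_id": "E01"},
]

def terminal_for_sensor(sensor_id: str) -> str | None:
    """Return the terminal ID associated with the given sensor ID in D6."""
    for row in CANDIDATE_INFO:
        if row["sensor_id"] == sensor_id:
            return row["terminal_id"]
    return None

def sensor_for_terminal_and_target(terminal_id: str, target_id: str) -> str | None:
    """Return the sensor ID associated with the given terminal ID and target ID in D6."""
    for row in CANDIDATE_INFO:
        if row["terminal_id"] == terminal_id and row["target_id"] == target_id:
            return row["sensor_id"]
    return None

print(terminal_for_sensor("S10A"))                     # -> "T20A"
print(sensor_for_terminal_and_target("T20A", "U02B"))  # -> "S10B"
```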
Note that the identifying unit 35 may instead identify the detection result acquired from a sensor 10 that corresponds to the state information acquired from a terminal 20. Here, it is assumed that, when inputting the state information of a subject 2, the evaluator 3 also inputs information identifying that subject 2 (e.g., a name or a target ID). It is also assumed that the terminal 20 is the terminal 20A of FIG. 3 and that, when outputting the state information, it outputs its own terminal ID (here, "T20A") and the information identifying the target 2 input by the evaluator 3 (for example, the target ID "U02B"). The identifying unit 35 identifies the terminal ID (here, "T20A") and the target ID (here, "U02B") from the information acquired from the terminal 20A. The identifying unit 35 identifies the sensor ID (here, "S10B") associated with "T20A" and "U02B" in the candidate information D6. The identifying unit 35 identifies the detection result of the sensor 10B, whose sensor ID is "S10B", as the detection result corresponding to the state information about "U02B" acquired from the terminal 20A, whose terminal ID is "T20A".
Note that the information processing device 30 may serve as the above-described external device and identify the target 2 from the detection result output from the sensor 10. For example, the sensor 10A may include a camera that photographs the subject 2A, and the information processing device 30 (e.g., the identifying unit 35) may identify the subject 2A by analyzing the image captured by the camera of the sensor 10A. The identifying unit 35 may identify the terminal ID (here, "T20A") associated in the candidate information D6 with the target ID (here, "U02A") of the subject 2A identified from the detection result of the sensor 10A. The identifying unit 35 may associate the detection result of the sensor 10A with the terminal ID (here, "T20A") identified using the detection result of the sensor 10A. When the external device identifies the target 2 from the detection result of the sensor 10, the sensor ID need not be associated with the target ID in the candidate information D6 and need not be included in the candidate information D6.
The generation unit 36 in FIG. 1 generates data by associating the detection result of the sensor 10 with the state information. The detection result may be a sensor value of the sensor 10 or a value obtained by processing the sensor value of the sensor 10 (e.g., a value derived as the detection result). For example, the generation unit 36 uses the detection results acquired by the acquisition unit 34 to derive the detection result of the sensor at the time when the state information was obtained, and generates data by associating the derived detection result with the state information. The state information may be the information output by the terminal 20 or information obtained by processing that information (e.g., information derived as the state information). The generation unit 36 generates data by associating, for example, the detection result derived from the sensor values of the sensor 10 with the state information identified by the identifying unit 35 as the state information corresponding to that detection result.
FIG. 4 is a diagram showing an example of processing for deriving the detection result of the sensor corresponding to the state information. The generation unit 36, for example, derives the timing at which the state information was acquired and derives (e.g., estimates) the sensor value at that timing. For example, the generation unit 36 uses the time indicated by the time stamp that the terminal 20 outputs together with the state information as the timing at which the state information was acquired. For example, the generation unit 36 derives the timing at which the state information was acquired through processing that identifies the value of the time stamp corresponding to the state information.
The sensor 10 performs its detection operation at, for example, a predetermined sampling frequency. The sensor 10 outputs sensor values at, for example, predetermined time intervals. In FIG. 4, the symbol t is the time derived by the generation unit 36 as the time at which the state information was obtained. The time t is, for example, information obtained from the time stamp output by the terminal 20. The generation unit 36 identifies, from the time-series data of sensor values, the sensor value at the time closest to the time t, and thereby derives that sensor value as the detection result corresponding to the state information.
Note that the processing for deriving the detection result corresponding to the state information may differ from the above example. For example, the generation unit 36 may derive (e.g., calculate) the detection result by statistically processing a plurality of sensor values in a predetermined period T that includes the time t. For example, the generation unit 36 may calculate the average of a plurality of sensor values in the predetermined period T as the detection result. This average may be the arithmetic mean or the geometric mean of the sensor values in the predetermined period T. For example, the average may be a geometric mean using weights according to the time between the time at which each sensor value was obtained and the time t at which the state information was obtained. The generation unit 36 may also calculate the detection result by approximation or interpolation using a plurality of sensor values in the predetermined period T. The generation unit 36 may generate teacher data that associates one detection result obtained from the sensor 10 (e.g., a sensor value) with one piece of state information, or may generate teacher data that associates one piece of information combining a plurality of detection results obtained from the sensor 10 (e.g., a waveform of sensor values over time) with one piece of state information.
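The following is a minimal sketch of two of the derivation strategies described above, the sensor value nearest to the time t and a simple arithmetic mean over a period T centered on t; the data layout and function names are assumptions for illustration, not the embodiment's implementation:

```python
from bisect import bisect_left

def nearest_sensor_value(times: list[float], values: list[float], t: float) -> float:
    """Return the sensor value whose timestamp is closest to time t.

    `times` must be sorted in ascending order and aligned with `values`.
    """
    i = bisect_left(times, t)
    if i == 0:
        return values[0]
    if i == len(times):
        return values[-1]
    # Pick whichever neighbor is closer to t.
    return values[i] if (times[i] - t) < (t - times[i - 1]) else values[i - 1]

def window_average(times: list[float], values: list[float], t: float, half_width: float) -> float:
    """Return the arithmetic mean of sensor values in the period [t - half_width, t + half_width]."""
    window = [v for ts, v in zip(times, values) if t - half_width <= ts <= t + half_width]
    if not window:
        raise ValueError("no sensor values in the period T")
    return sum(window) / len(window)

times = [0.0, 1.0, 2.0, 3.0, 4.0]
values = [10.0, 12.0, 11.0, 15.0, 14.0]
print(nearest_sensor_value(times, values, t=2.4))             # -> 11.0
print(window_average(times, values, t=2.4, half_width=1.0))   # mean over [1.4, 3.4] -> 13.0
```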
Note that the terminal 20 does not have to output a time stamp corresponding to the timing at which the state information was acquired. In this case, the generation unit 36 may derive the time at which the information processing device 30 acquired the state information as the timing at which the state information was acquired. For example, the generation unit 36 may derive the time at which the communication unit 31 received the state information as the timing at which the state information was acquired. The information processing device 30 (e.g., the processing unit 32) may transmit, to the terminal 20, information requesting transmission of state information (hereinafter referred to as a state information request, as appropriate). The generation unit 36 may derive the timing at which the state information was acquired using the time at which the communication unit 31 transmitted the state information request to the terminal 20. For example, the generation unit 36 may derive the time at which the state information request was transmitted as the timing at which the state information was acquired. The generation unit 36 may derive, as the timing at which the state information was acquired, a time between a first time at which the communication unit 31 transmitted the state information request and a second time at which the communication unit 31 received the state information (e.g., the average of the first time and the second time). The generation unit 36 may derive the timing at which the state information was acquired using one or both of the first time and the second time together with an estimate of the time required for communication.
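As a sketch under the assumption that timestamps are handled as Unix epoch seconds (the function names are illustrative), two of the derivations above could look like this:

```python
def timing_from_midpoint(request_sent_at: float, response_received_at: float) -> float:
    """Midpoint between the first time (request sent) and the second time (state info received)."""
    return (request_sent_at + response_received_at) / 2.0

def timing_from_receipt(response_received_at: float, estimated_one_way_delay: float) -> float:
    """Receipt time corrected by an estimated one-way communication delay."""
    return response_received_at - estimated_one_way_delay

print(timing_from_midpoint(1_625_000_000.0, 1_625_000_060.0))  # -> 1625000030.0
print(timing_from_receipt(1_625_000_060.0, 2.0))               # -> 1625000058.0
```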
Note that the sensor 10 may detect the target 2 periodically or irregularly. For example, the sensor 10 may execute detection of the target 2 upon receiving information requesting detection of the target 2 (hereinafter referred to as a detection request, as appropriate). The sensor 10 may output a sensor value, or information obtained by processing the sensor value, as a response to the detection request. The device that outputs the detection request may be the information processing device 30, a device in the information processing system 1 other than the information processing device 30 (e.g., the information processing device 50 described later), or a device outside the information processing system 1.
Returning to the description of FIG. 1, the information processing device 30 provides the data generated by the generation unit 36 to an information processing device that executes machine learning. In the present embodiment, the information processing device 40 includes a learning unit 41 that executes machine learning, and the information processing device 30 provides the information processing device 40 with teacher data for machine learning or with data from which the teacher data is derived.
FIG. 5 is a diagram showing an example of processing for executing machine learning. In this example, the model M is an inference model generated by deep learning. The model M, for example, receives the detection result of the sensor 10 as input and outputs an estimation result of the state of the target 2. The model M includes an input layer M1, an intermediate layer M2, and an output layer M3. The input layer M1 is the layer into which the source data for inference is input. The output layer M3 is the layer that outputs data indicating the inference result. The intermediate layer M2 is a layer arranged between the input layer M1 and the output layer M3. The number of intermediate layers M2 is arbitrary and may be one or more.
The learning unit 41 inputs the detection result of the sensor 10 in the teacher data to the input layer M1. The values input to the input layer M1 propagate to the output layer M3 via the intermediate layer M2. The model M includes parameters (e.g., coupling coefficients and biases) used when propagating values from the input layer M1 to the output layer M3. The learning unit 41 optimizes these parameters so that, when the input data of the teacher data is input to the input layer M1, the output data output from the output layer M3 approaches the data indicated by the state information.
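A minimal training sketch under stated assumptions: a small fully connected network in PyTorch, with each detection result treated as a 16-dimensional feature vector and the state information as a class label. The shapes, labels, library choice, and hyperparameters are assumptions for illustration, not the embodiment's specification:

```python
import torch
from torch import nn

# Teacher data: each sample pairs a sensor detection result (placeholder 16-dimensional
# feature vector) with a state label (e.g., 0 = "resting", 1 = "working", 2 = "concentrating").
inputs = torch.randn(128, 16)                # placeholder detection results
labels = torch.randint(0, 3, (128,))         # placeholder state information

model_m = nn.Sequential(                     # input layer M1 -> intermediate layer M2 -> output layer M3
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 3),
)
optimizer = torch.optim.Adam(model_m.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):
    optimizer.zero_grad()
    outputs = model_m(inputs)                # propagate values from M1 to M3
    loss = loss_fn(outputs, labels)          # gap between the output and the state information
    loss.backward()
    optimizer.step()                         # optimize the parameters (coupling coefficients, biases)
```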
Note that the machine learning executed by the learning unit 41 is arbitrary and does not have to be deep learning. The machine learning technique may be a neural network that does not include the intermediate layer M2, or a technique other than a neural network. The information processing device 30 may provide the learning unit 41 with teacher data obtained from the detection results and state information of a plurality of targets 2 as teacher data for a single inference model. In this case, the resulting inference model can be used, for example, as an inference model shared by the plurality of targets 2 (hereinafter referred to as a shared model, as appropriate).
Note that the information processing device 30 may provide the learning unit 41 with teacher data obtained from the detection results and state information of a specific target 2 (e.g., the subject 2A) as teacher data for a single inference model. In this case, the resulting inference model can be used, for example, as an inference model that estimates the state of the specific target 2 (hereinafter referred to as an individual model, as appropriate). The number of targets 2 included in the specific target 2 may be one, or two or more. The plurality of targets 2 may form a predetermined set, and the specific target 2 may be a part of that set (e.g., a subset). For example, the predetermined set may be the set of employees belonging to a specific company, and the subset may be the set of employees belonging to a predetermined department of that company.
Note that the learning unit 41 may generate the shared model when the amount of teacher data accumulated for the specific target 2 is less than a predetermined amount, and may generate the individual model when the amount of teacher data accumulated for the specific target 2 is equal to or greater than the predetermined amount. For example, the learning unit 41 may generate the shared model using detection results acquired by the sensor 10 during a first period, and may generate the individual model using detection results acquired by the sensor 10 during a second period that is longer than the first period.
Returning to the description of FIG. 1, the learning unit 41 provides the generated inference model to an information processing device that estimates the state of the target 2 using the detection result of the sensor 10. For example, the information processing device 40 provides the inference model generated by the learning unit 41 to the information processing device 50. The information processing device 50 includes a processing unit realized by a general-purpose processor or the like. The processing unit of the information processing device 50 constitutes the estimation unit 51 by executing the operations defined by the inference model. The estimation unit 51 is, for example, an AI in which the inference model generated by the learning unit 41 through machine learning is implemented.
In the present embodiment, the information processing device 30 provides the information processing device 50 with the detection result that the acquisition unit 34 acquired from the sensor 10. The estimation unit 51 of the information processing device 50 estimates the state of the target 2 by executing, on the detection result acquired by the acquisition unit 34, the operations defined by the inference model. The information processing device 50 provides the state of the target 2 estimated by the estimation unit 51 (hereinafter referred to as the estimation result of the estimation unit 51, as appropriate) to, for example, the application 60.
The application 60 is an application that processes the estimation result of the estimation unit 51. The application 60 uses the estimation result of the estimation unit 51 to execute, for example, processing that creates a log indicating the work status of the subject 2. The processing executed by the application 60 is arbitrary and may include processing that analyzes the estimation result of the estimation unit 51. For example, the application 60 may estimate the time during which the subject 2 was working with concentration, or may calculate indices indicating the work status of the subject 2 (e.g., efficiency, degree of concentration, amount of work). The application 60 may execute processing that outputs (e.g., displays) one or both of the estimation result of the estimation unit 51 and a processing result obtained by processing the estimation result of the estimation unit 51.
The application 60 may be installed in at least one of the information processing devices included in the information processing system 1. For example, the application 60 may be installed in the information processing device 30. The application 60 may also be installed in at least one information processing device outside the information processing system 1. The number of applications to which the estimation result of the estimation unit 51 is provided may be one or more. The application 60 may include a web application, an application on a cloud, or an on-premises application.
When the estimation result of the estimation unit 51 is provided to a plurality of applications, the purposes for which the estimation result is used may be the same or different among the applications. For example, a first application among the plurality of applications may manage the work status of the subject 2, and a second application may manage the health status of the subject 2.
Next, an information processing method according to the embodiment will be described based on the information processing system 1 described above. FIG. 6 is a diagram showing the information processing method according to the first embodiment. For the units of the information processing system 1, FIG. 1 to FIG. 5 are referred to as appropriate.
The sensor 10 shown in FIG. 1 detects the target 2 and transmits the detection result. In step S1 of FIG. 6, the acquisition unit 34 acquires the detection result from the sensor 10. For example, the acquisition unit 34 controls the communication unit 31 to receive the detection result transmitted by the sensor 10. The evaluator 3 judges the state of the target 2 and inputs information representing the judgment result into the terminal 20 as state information. In step S2, the acquisition unit 34 acquires the state information from the terminal 20. For example, the acquisition unit 34 controls the communication unit 31 to receive the state information transmitted by the terminal 20.
In step S3, the generation unit 36 generates data used for machine learning. Step S3 includes, for example, the processing of steps S4 to S6. In step S4, the generation unit 36 derives the timing at which the state information was acquired. In step S5, the generation unit 36 derives the detection result at the timing derived in step S4. In step S6, the generation unit 36 associates the state information acquired in step S2 with the detection result derived in step S5. The generation unit 36 generates, as teacher data, for example, a pair of the detection result and the state information, using the detection result as input teacher data for the model M (see FIG. 5) and the state information as output teacher data for the model M (see FIG. 5).
In step S7, the information processing device 30 provides the data to the learning unit 41. For example, the communication unit 31 of the information processing device 30 transmits data including the teacher data generated by the generation unit 36 to the information processing device 40 including the learning unit 41. In step S8, the learning unit 41 uses the data provided in step S7 to generate a model by machine learning. In step S9, the learning unit 41 provides the model generated in step S8 to the estimation unit 51. For example, the information processing device 40 including the learning unit 41 transmits data representing the inference model generated by the learning unit 41 to the information processing device 50 including the estimation unit 51.
In step S10, the estimation unit 51 processes the detection result of the sensor 10 with the model provided in step S9 to estimate the state of the target. For example, the communication unit 31 of the information processing device 30 transmits the detection result of the sensor to the information processing device 50, and the estimation unit 51 estimates the state of the target using the detection result of the sensor received by the information processing device 50. The estimation unit 51, for example, inputs the detection result to the input layer M1 of the model M (see FIG. 5) in which the result of the machine learning is reflected, and derives the data output from the output layer M3 as the estimation result. In step S11, the estimation unit 51 provides the application 60 with information indicating the state of the target estimated in step S10. For example, the information processing device 50 transmits data indicating the estimation result of the estimation unit 51 to the information processing device in which the application 60 is implemented.
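Continuing the training sketch above with the same assumed shapes (the placeholder model below stands in for the trained model M; its weights are random here), the estimation in step S10 amounts to a forward pass:

```python
import torch
from torch import nn

# Placeholder for the trained model M from the training sketch above (same assumed layout).
model_m = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))

new_detection = torch.randn(1, 16)           # a new detection result from the sensor
with torch.no_grad():                        # inference only; no parameter updates
    logits = model_m(new_detection)          # propagate through M1 -> M2 -> M3
    estimated_state = int(logits.argmax(dim=1))

print(estimated_state)                       # index of the estimated state label
```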
Note that, in the information processing method shown in FIG. 6, the processing of step S2 may be executed before at least part of the processing of step S1, or in parallel with at least part of the processing of step S1. In step S3, the generation unit 36 may instead derive the timing at which the detection result of the sensor 10 was acquired and derive the state information at the derived timing. The information processing system 1 may repeatedly execute the processing from step S1 to step S3 (hereinafter referred to as teacher generation processing, as appropriate) and accumulate the teacher data generated in step S3. The information processing system 1 may execute the processing from step S7 onward when the amount of accumulated teacher data reaches or exceeds a predetermined amount.
Note that the processing of step S8 (hereinafter referred to as model generation processing, as appropriate) may be executed repeatedly. For example, a first model may be generated based on a first round of teacher generation processing, and a second model may be generated based on a second round of teacher generation processing executed after the first. The second model may be generated using teacher data whose amount is larger than that used for the first model. For example, the first model may be generated using first teacher data, and the second model may be generated using second teacher data accumulated after the generation of the first teacher data. The second model may also be generated using both the first teacher data and the second teacher data. The second teacher data may be generated taking into account (e.g., feeding back) the error of estimation results obtained using the first model.
Note that the second teacher data may differ from the first teacher data in the set of targets corresponding to the source data from which it is generated. For example, the first teacher data may be teacher data based on the detection results and state information of a plurality of targets 2, and the second teacher data may be teacher data based on the detection results and state information of only some of the plurality of targets 2 (e.g., the subject 2A). The second model may also differ from the first model in the targets whose states it estimates. For example, the first model may be used to estimate the states of employees belonging to a predetermined company, and the second model may be used to estimate the states of employees belonging to a predetermined department of that company. The second model does not have to be generated; the processing of step S10 and the processing of step S11 may be executed repeatedly using the first model.
The information processing system 1 according to the above embodiment generates an inference model by associating the detection result of the sensor 10 with information different from the detection result of the sensor 10. The state of the target 2 can then be estimated by the estimation unit 51. After the inference model has been generated, such an information processing system 1 contributes to grasping the state of the target 2 even when, for example, evaluation by the evaluator 3 is simplified or omitted. The information processing system 1 can, for example, make it easier to associate different kinds of information about the target 2, and thereby contributes to facilitating multifaceted analysis of the target 2. The information processing system 1 can also, for example, express the state of the target 2 as data according to fixed rules given by the inference model, and thus contributes to evaluating the state of the target 2 objectively.
In the above embodiment, a first application example has been described in which the subject 2 is an employee working in an office and the evaluator 3 is the employee's supervisor. Further application examples of the information processing system 1 are described below, but the scope of application of the information processing system 1 is not limited to the examples above or below.
In a second application example, the target 2 is a worker at a construction site, and the evaluator 3 is the site supervisor. The sensor 10 detects at least one of the temperature of the target 2 (e.g., body temperature), the humidity in the vicinity of the target 2 (e.g., the state of perspiration), the movement of parts of the target 2 (e.g., hand movement, posture), and the vital activity of the target 2 (e.g., pulse, blood pressure). The evaluator 3 evaluates the health condition of the target 2 from, for example, the target's complexion, and inputs the evaluation result into the terminal 20 as state information. The information processing system 1 uses the detection results of the sensor 10 and the state information to generate an inference model that estimates, for example, the health condition of the worker. The estimation unit 51, for example, acquires the detection results of the sensor 10 in real time and estimates the health condition of the worker (e.g., physical condition, degree of fatigue) in real time using the inference model.
The application 60 may use the estimation result of the estimation unit 51 to determine whether the health condition of the worker is suitable for continuing the work. For example, the application 60 estimates the health condition of the worker after a predetermined time has elapsed, using the history of estimation results regarding the worker's health condition. For example, the application 60 estimates the worker's health condition by calculating an index indicating the health condition of the worker after the predetermined time has elapsed. The application 60 compares the index indicating the health condition with a threshold to determine whether the worker's health condition is suitable for continuing the work. For example, the index takes a lower value the worse the health condition is, and the application 60 determines that the worker's health condition is not suitable for continuing the work when the calculated index is below the threshold.
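A minimal sketch of this threshold check; the index scale, the threshold value, and the simple trend-based extrapolation over the history are assumptions for illustration only, since the embodiment does not specify how the future index is calculated:

```python
def predict_health_index(history: list[float]) -> float:
    """Roughly extrapolate the next health index from the recent history (higher is healthier)."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])  # continue the latest trend

def fit_to_continue(history: list[float], threshold: float = 0.5) -> bool:
    """Return False when the predicted index falls below the threshold."""
    return predict_health_index(history) >= threshold

history = [0.9, 0.8, 0.6]          # health index drifting downward
print(fit_to_continue(history))    # -> False: 0.6 + (0.6 - 0.8) = 0.4 < 0.5
```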
When the application 60 determines that the health condition of the worker is not suitable for continuing the work, it may output the determination result. For example, the application 60 may be provided in the information processing device 30, and the information processing device 30 may provide a warning about the worker's health condition to the terminal 20 associated with the target 2 who is the worker (e.g., the site supervisor's terminal) or to another device. Referring to the warning, the site supervisor may have the worker stop the work or have the worker rest. In this way, the information processing system 1 can be used for health management of workers. When the target 2 is a human, the position of the subject 2 and the relationship between the subject 2 and the evaluator 3 are not limited to the examples described above. For example, the subject 2 may be a care recipient and the evaluator 3 may be the caregiver of the subject 2; the subject 2 may be a ward and the evaluator 3 may be the guardian of the subject 2; or the subject 2 may be a patient and the evaluator 3 may be a medical professional in charge of the subject 2.
Note that the target 2 may be an animal other than a human. In a third application example, the target 2 is an animal kept in captivity, and the evaluator 3 is the keeper of the animal. The animal may be an animal kept at a zoo or the like, or a pet kept by an individual. The sensor 10 detects, for example, information about the vital activity of the animal that is the target 2, and the evaluator 3 inputs the animal's health condition into the terminal 20 as state information. The information processing system 1 generates an inference model by processing similar to that of the second application example and uses the inference model to estimate the health condition of the animal. Like the second application example, such an information processing system 1 can be used for health management of the target 2.
Note that the target 2 may be an animal raised in the livestock industry. The information processing system 1 may estimate the growth state as the state of the animal that is the target 2. The estimation result of the growth state may be used, for example, as a basis for adjusting rearing conditions or as a basis for decisions on the shipment of livestock products (e.g., shipment volume, shipment timing). The target 2 may also be an organism other than an animal. When the target 2 is an organism other than an animal (e.g., a plant), the information processing system 1 contributes to managing (e.g., grasping) the state of the target 2 (e.g., health condition, growth), just as when the target 2 is an animal.
Note that the target 2 may be an inanimate object (e.g., a machine, an article). In a fourth application example, the target 2 is a machine, and the evaluator 3 is a manager of the machine (e.g., an operator, maintenance staff, or inspection staff). The sensor 10 detects, for example, information indicating the operating state of the machine that is the target 2 (e.g., temperature, vibration). The evaluator 3 inspects the target 2, which is for example a machine, and inputs information indicating the operating state into the terminal 20 as state information. The operating state may include information on whether the machine is operating normally, information indicating the likelihood that a failure has occurred, and information indicating whether inspection or maintenance is required. The information processing system 1 uses the detection results of the sensor 10 and the state information to generate an inference model that estimates, for example, the operating state of the machine. The estimation unit 51, for example, acquires the detection result of the sensor 10 and estimates the operating state of the machine using the inference model.
The application 60 uses the estimation result of the estimation unit 51 and outputs (e.g., reports) information about the machine when the estimated operating state satisfies a predetermined condition. For example, the sensor 10 detects vibration of the target 2, which is a machine, and the estimation unit 51 estimates the probability that the target 2 has failed (hereinafter referred to as the failure probability, as appropriate). The application 60 is provided in the information processing device 30 and outputs a warning prompting inspection of the target 2 when the failure probability estimated by the estimation unit 51 is equal to or greater than a predetermined value. The information processing device 30 provides the warning output by the application 60 to the terminal 20 associated with the target 2 (e.g., the terminal of the inspection staff).
[Second embodiment]
Next, a second embodiment will be described. FIG. 7 is a diagram showing an information processing system according to the second embodiment. In this embodiment, configurations similar to those of the above embodiment are given the same reference signs, and their description is omitted or simplified. In this embodiment, the information processing device 30 includes a control unit 37. The control unit 37 requests the predetermined terminal identified by the identifying unit 35 to provide state information. The acquisition unit 34 acquires the state information from the terminal 20 as a response to the request.
In this embodiment, the control unit 37 requests the terminal 20 to provide state information when the detection result of the sensor 10 satisfies a predetermined condition. The control unit 37 determines whether the detection result acquired from the sensor 10 by the acquisition unit 34 satisfies the predetermined condition. For example, the sensor 10 is a wearable device worn by the subject 2 (e.g., a smartwatch) and detects information indicating the vital activity of the subject 2. For example, assume that the sensor 10 is worn on the subject 2's hand and detects acceleration. The detection result of the sensor 10 then shows different values or waveforms, for example, between a state in which the subject 2 is working (e.g., operating a computer) and a state in which the subject 2 is resting. Also assume that the sensor 10 detects at least one of the subject 2's heart rate, blood pressure, body temperature, and perspiration. In this case, the detection result of the sensor 10 shows different values or waveforms depending on, for example, the subject 2's level of stress, tension, or concentration. The control unit 37, for example, performs frequency analysis on the waveform of the sensor values for each predetermined time interval and calculates a characteristic frequency of the waveform. The control unit 37 determines that the predetermined condition is satisfied when, for example, the difference between the characteristic frequency in a first period and the characteristic frequency in a second period following the first period exceeds a threshold.
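A minimal sketch of this condition check, assuming evenly sampled sensor values and taking the dominant FFT frequency as the "characteristic frequency"; the sampling rate, threshold, and NumPy-based implementation are assumptions for illustration:

```python
import numpy as np

def characteristic_frequency(values: np.ndarray, sampling_rate: float) -> float:
    """Return the dominant (highest-magnitude, non-DC) frequency of the waveform in Hz."""
    spectrum = np.abs(np.fft.rfft(values - values.mean()))
    freqs = np.fft.rfftfreq(len(values), d=1.0 / sampling_rate)
    return float(freqs[int(np.argmax(spectrum))])

def condition_satisfied(period1: np.ndarray, period2: np.ndarray,
                        sampling_rate: float = 50.0, threshold_hz: float = 1.0) -> bool:
    """True when the characteristic frequency changes by more than the threshold between periods."""
    f1 = characteristic_frequency(period1, sampling_rate)
    f2 = characteristic_frequency(period2, sampling_rate)
    return abs(f2 - f1) > threshold_hz

t = np.arange(0, 10, 1 / 50.0)                  # 10 s of samples at 50 Hz
first_period = np.sin(2 * np.pi * 1.0 * t)      # ~1 Hz motion (e.g., resting)
second_period = np.sin(2 * np.pi * 3.0 * t)     # ~3 Hz motion (e.g., working)
print(condition_satisfied(first_period, second_period))  # -> True
```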
Note that the control unit 37 may perform the above determination by comparing a single value indicating the detection result of the sensor 10 (e.g., a sensor value, or a moving average of sensor values over time) with a threshold. The predetermined condition may be set to the same condition for the plurality of targets 2 or may be set for each target 2. The predetermined condition for at least one of the plurality of targets 2 may differ from the predetermined condition for the other targets. The plurality of targets 2 may be divided into a plurality of groups, and the predetermined condition may be set for each group. For example, the plurality of subjects 2 may be divided into a plurality of groups (e.g., departments), and the predetermined condition may be set according to the work content of each group.
As described above, the control unit 37 monitors, for example, the detection results of the sensor 10. When the control unit 37 determines that the detection result of the sensor 10 satisfies the predetermined condition, it generates a state information request.
The state information request is information (e.g., a command) that causes the terminal 20 to provide (e.g., transmit) state information. The information processing device 30, for example, designates a target 2 and requests provision of state information about that target 2. For example, the state information request includes information identifying the target 2 for which state information is requested (e.g., the target ID shown in FIG. 3). For example, the identifying unit 35 refers to the candidate information D6 stored in the storage unit 33 and identifies the target ID corresponding to the sensor ID of the sensor 10 that provided the detection result used in the above determination. The control unit 37 generates, for example, a state information request including the target ID identified by the identifying unit 35 and a message requesting state information.
The information processing device 30 transmits the state information request to the terminal 20 from which provision of state information is requested. For example, the identifying unit 35 identifies the terminal 20 corresponding to the sensor 10 that provided the detection result used in the above determination. For example, the identifying unit 35 refers to the candidate information D6 stored in the storage unit 33 and identifies the terminal ID corresponding to the sensor ID (see FIG. 3) of the sensor 10 that provided the detection result used in the determination. The control unit 37 causes the communication unit 31 to transmit the state information request to the terminal 20 identified by the identifying unit 35. The communication unit 31 stores information indicating the timing at which the state information request was transmitted (e.g., a time stamp) in the storage unit 33.
When the terminal 20 receives the state information request, it outputs a notification (e.g., an image or sound) prompting input of state information. For example, the terminal 20 displays an image for accepting input of state information on the display unit 21, as shown in FIG. 2. This image includes, for example, information identifying the subject 2 (e.g., the name of the subject 2). The evaluator 3 inputs state information into the terminal 20, and the terminal 20 provides the input state information to the information processing device 30. The terminal 20, for example, transmits information indicating the timing at which the state information was transmitted (e.g., a time stamp) either as part of the state information or together with it.
The information processing device 30 receives the state information transmitted from the terminal 20. The generation unit 36 of the information processing device 30 derives (e.g., estimates) the timing at which the state information was acquired by the terminal 20, using one or both of a first timing at which the state information request is transmitted and a second timing at which the state information is transmitted. For example, the generation unit 36 calculates a time between the time indicating the first timing and the time indicating the second timing as the timing at which the state information was acquired by the terminal 20. The generation unit 36 derives the detection result of the sensor 10 at the derived timing and generates teacher data.
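A sketch of this pairing step is given below; it assumes the midpoint of the two timestamps as one possible reading of "a time between" them, and a nearest-sample lookup over buffered detection results. The data shapes are illustrative.

```python
def nearest_detection(detections, t):
    """Return the (timestamp, value) sample whose timestamp is closest to time t."""
    return min(detections, key=lambda d: abs(d[0] - t))

def make_training_example(detections, request_sent_at, state_sent_at, state):
    """Estimate when the state information was obtained and pair the sensor
    reading at that time with the evaluator's label (one teacher-data record)."""
    t_acquired = (request_sent_at + state_sent_at) / 2.0
    timestamp, value = nearest_detection(detections, t_acquired)
    return {"input": value, "label": state, "timestamp": timestamp}

if __name__ == "__main__":
    detections = [(100.0, 72), (101.0, 75), (102.0, 90), (103.0, 88)]  # (time, heart rate)
    example = make_training_example(detections, request_sent_at=100.5,
                                    state_sent_at=103.0, state="tired")
    print(example)  # pairs the sample nearest t = 101.75 with the label "tired"
```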
The information processing device 30 provides the teacher data to the information processing device 40, as described in the first embodiment. The learning unit 41 of the information processing device 40 executes machine learning using the teacher data to generate a model. The information processing device 40 provides the model to the information processing device 50. The estimation unit 51 of the information processing device 50 processes the detection result of the sensor 10 using the model to estimate the state of the target 2. The information processing device 50 provides the estimation result of the state of the target 2 to the application 60. The information processing system 1 according to the present embodiment can, for example, selectively cause the evaluator 3 to evaluate the target 2 in a situation where the state of the target 2 is estimated to have changed. Such an information processing system 1 contributes to, for example, reducing the load required for the process of acquiring state information while the state of the target 2 remains substantially the same.
[Third embodiment]
Next, a third embodiment will be described. FIG. 8 is a diagram showing an information processing system according to the third embodiment. In this embodiment, the same reference numerals are given to the same configurations as those of the above-described embodiments, and their description is omitted or simplified. In the present embodiment, the specifying unit 35 of the information processing system 1 specifies a predetermined terminal from among terminals 20 existing within a predetermined range with respect to the position of the detection target of the sensor 10. The generation unit 36 generates teacher data using the state information provided from the predetermined terminal specified by the specifying unit 35 and the detection result of the sensor 10.
The storage unit 33 of the information processing device 30 stores, for example, position information D7, and the specifying unit 35 specifies the predetermined terminal using the position information D7. The position information D7 includes, for example, information on the position of the detection target of each sensor 10 (referred to as sensor position information as appropriate). The sensor position information includes, for example, information indicating the position where each sensor 10 is installed and information indicating the range detectable by each sensor 10. The sensor position information may include information indicating the position of the target 2 associated with the sensor 10. For example, the sensor position information may be position information of the seat of the subject 2 in an office. When the sensor 10 is a camera, the sensor position information may be the center position of the field of view (angle of view) when the sensor 10 takes an image. The sensor position information may be given as a fixed value when the sensor 10 is not expected to move (e.g., when the sensor 10 is a fixed-point camera).
The sensor position information may be given as a variable value when the sensor 10 is expected to move (e.g., when the sensor 10 is a wearable device). For example, a device including the sensor 10 (e.g., a wearable device or a mobile device) detects its own position and provides the detection result to the information processing device 30. Any method may be used to detect the position of the sensor 10; for example, a method using GPS, a method using signals (e.g., beacon signals) received from a plurality of other devices, or some other method may be used. The device that detects the position of the sensor 10 may be the device including the sensor 10 or a different device (e.g., a device external to the sensor 10). When the sensor 10 is expected to move, the information processing device 30 may acquire the position information of the sensor 10 from the device including the sensor 10 or from a different device, and may update the sensor position information stored in the storage unit 33 using the acquired information.
The position information D7 includes, for example, position information of the terminal 20 (referred to as terminal position information as appropriate). The terminal position information may be given as a fixed value when the terminal 20 is not expected to move (e.g., when the terminal 20 is a stationary computer). The terminal position information may include information indicating the position where the terminal 20 is installed. The terminal position information may include information indicating the position of the evaluator 3 associated with the terminal 20. For example, the terminal 20 may be installed at the seat of the evaluator 3 in an office, and the terminal position information may include position information of the seat of the evaluator 3.
The terminal position information may be given as a variable value when the terminal 20 is expected to move (e.g., when the terminal 20 is a smartphone). For example, the terminal 20 (e.g., a smartphone) detects its own position and provides the detection result to the information processing device 30. Any method may be used to detect the position of the terminal 20; for example, a method using GPS, a method using signals (e.g., beacon signals) received from a plurality of other devices, or some other method may be used. The device that detects the position of the terminal 20 may be the terminal 20 itself or a different device (e.g., a device external to the terminal 20). When the terminal 20 is expected to move, the information processing device 30 may acquire the position information of the terminal 20 from the terminal 20 or from a different device, and may update the terminal position information stored in the storage unit 33 using the acquired information.
The specifying unit 35 refers to the position information D7 stored in the storage unit 33 and specifies (e.g., determines) the terminal 20 to which the state information of the target 2 is to be input. For example, the specifying unit 35 sets a predetermined range R using the sensor position information included in the position information D7. The predetermined range R is, for example, a range in which the distance from the position of the detection target of the sensor 10 is equal to or less than a predetermined value. In FIG. 8, the position of the detection target of the sensor 10 is the position of the target 2. The predetermined range R is, for example, the area inside a circle (e.g., a geofence) centered on the position of the target 2, and the radius of this circle corresponds to the predetermined value. The predetermined range R is set, for example, to a range in which the evaluator 3 can directly recognize the target 2. For example, when the evaluator 3 is assumed to recognize the subject 2 visually, the radius of the predetermined range R is set in advance to a distance (e.g., several meters) at which a person with ordinary eyesight can recognize another person.
The specifying unit 35 determines whether or not each terminal 20 exists inside the predetermined range R, using the terminal position information included in the position information D7. Among the terminals whose terminal position information is included in the position information D7, the specifying unit 35 sets the terminals existing inside the predetermined range R (the terminals 20A and 20B in FIG. 8) as candidates for the predetermined terminal, and excludes the terminals existing outside the predetermined range R (the terminal 25 in FIG. 8) from the candidates for the predetermined terminal.
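The candidate selection can be sketched, for example, as a simple distance test on two-dimensional coordinates; the coordinates, the 5-meter radius, and the function names below are illustrative assumptions.

```python
import math

def within_range(sensor_xy, terminal_xy, radius_m=5.0):
    """True if the terminal lies inside the predetermined range R, a circle of
    radius_m meters centered on the sensor's detection target."""
    return math.hypot(sensor_xy[0] - terminal_xy[0],
                      sensor_xy[1] - terminal_xy[1]) <= radius_m

def candidate_terminals(sensor_xy, terminal_positions, radius_m=5.0):
    """Return the terminal IDs that become candidates for the predetermined terminal."""
    return [tid for tid, xy in terminal_positions.items()
            if within_range(sensor_xy, xy, radius_m)]

if __name__ == "__main__":
    target_position = (0.0, 0.0)  # position of target 2 (detection target of the sensor)
    terminals = {"20A": (1.0, 2.0), "20B": (3.0, 1.0), "25": (40.0, 0.0)}
    print(candidate_terminals(target_position, terminals))  # ['20A', '20B']; terminal 25 is excluded
```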
The generation unit 36 generates teacher data by associating the state information provided from a terminal that the specifying unit 35 has set as a candidate for the predetermined terminal with the detection result of the sensor 10 that detects the target 2. For example, the information processing device 30 transmits a state information request to at least one of the candidates for the predetermined terminal. For example, the control unit 37 shown in FIG. 7 selects at least one terminal 20 from the candidates as the predetermined terminal. The control unit 37 may select the predetermined terminal from the candidates at random, or may select the predetermined terminal from the candidates according to a predetermined rule. For example, a plurality of terminals may be registered in advance in the information processing device 30, priorities may be set in advance for the registered terminals, and the control unit 37 may select the predetermined terminal according to the priorities. For example, the candidates for the predetermined terminal may include the terminal 20A of the supervisor of the subject 2 and the terminal 20B of a colleague of the subject 2, and the control unit 37 may select the supervisor's terminal 20A preferentially over the colleague's terminal 20B.
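A sketch of the rule-based selection, assuming a pre-registered priority table (the terminal IDs and priority values are illustrative), is shown below; a random choice could be substituted with random.choice.

```python
# Hypothetical priority table registered in advance: lower number = higher priority.
TERMINAL_PRIORITY = {"20A": 1,   # supervisor's terminal
                     "20B": 2}   # colleague's terminal

def select_predetermined_terminal(candidates, priority=TERMINAL_PRIORITY):
    """Pick the predetermined terminal from the candidates according to the
    registered priorities; unknown terminals are ranked last."""
    return min(candidates, key=lambda tid: priority.get(tid, float("inf")))

if __name__ == "__main__":
    print(select_predetermined_terminal(["20B", "20A"]))  # '20A' (supervisor preferred)
```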
For example, when the information processing device 30 acquires a detection result of the sensor 10, it provides a state information request including information identifying the target 2 corresponding to this sensor 10 to the predetermined terminal. In this case, the terminal 20 does not have to be associated with the target 2 in advance, and the generation unit 36 can generate teacher data by associating the detection result of the sensor 10 with the state information acquired as a response to the state information request. The information identifying the target 2 may be associated in advance with information identifying the sensor 10, or may be derived using the detection result of the sensor 10. For example, the sensor 10 may include a camera, and the information processing device 30 may extract features of the target 2 appearing in an image captured by the sensor 10 and match the extracted feature information against pre-registered information to identify the target 2. In this case, the information identifying the target 2 does not have to be associated in advance with the information identifying the sensor 10.
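One possible form of this matching step is a nearest-neighbor comparison of feature vectors against a registry, as sketched below; the vectors, the distance threshold, and the helper names are hypothetical, since the disclosure does not fix a particular feature extractor.

```python
import math

# Pre-registered feature vectors for known targets (purely illustrative values;
# in practice these would come from whatever feature extractor the system uses).
REGISTERED_FEATURES = {
    "T-001": [0.10, 0.80, 0.30],
    "T-002": [0.70, 0.20, 0.90],
}

def identify_target(extracted_features, registered=REGISTERED_FEATURES, threshold=0.5):
    """Match features extracted from the camera image against the registered
    features and return the closest target ID, or None if nothing is close enough."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best_id, best_d = min(((tid, dist(extracted_features, vec))
                           for tid, vec in registered.items()), key=lambda t: t[1])
    return best_id if best_d <= threshold else None

if __name__ == "__main__":
    print(identify_target([0.12, 0.78, 0.33]))  # 'T-001'
```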
[Fourth embodiment]
A fourth embodiment will be described. FIG. 9 is a diagram showing an information processing system according to the fourth embodiment. In this embodiment, the same reference numerals are given to the same configurations as those of the above-described embodiments, and their description is omitted or simplified. In this embodiment, the information processing system 1 generates teacher data using the detection result of a sensor 11 and the detection result of a sensor 12 different from the sensor 11. The sensor 11 is, for example, a sensor similar to the sensor 10 shown in FIG. 1.
The sensor 12 differs from the sensor 11, for example, in the type of information it detects. For example, the sensor 11 may be a sensor that detects vital activity of the subject 2, and the sensor 12 may be a camera that detects the appearance of the subject 2. The sensor 11 may be a sensor that detects sounds produced by the target 2, and the sensor 12 may be a camera that detects the appearance of the target 2. The sensor 12 may differ from the sensor 11 in its positional relationship with the target 2. For example, the sensor 12 may be at a distance from the target 2 different from that of the sensor 11. For example, the sensor 11 may be a sensor attached to the target 2, such as a wearable device, and the sensor 12 may be a sensor remote from the target 2, such as a surveillance camera. The sensor 12 may be a sensor that detects the target 2 from a direction different from that of the sensor 11. For example, the sensor 11 and the sensor 12 may both include cameras, and the sensor 11 may detect (e.g., photograph) the target 2 from a first direction (e.g., from the side) while the sensor 12 detects (e.g., photographs) the target 2 from a second direction (e.g., from the front) different from the first direction.
The sensor 12 provides its detection result to a terminal 26. The terminal 26 may be the same terminal as the terminal 20 described with reference to FIG. 1, or may be a terminal different from the terminal 20. The terminal 26 includes an estimation unit 27. The estimation unit 27 estimates the state of the subject 2 using the detection result of the sensor 12. For example, the estimation unit 27 processes the detection result of the sensor 12 with a model generated by machine learning to estimate the state of the target. The terminal 26 generates state information using the estimation result of the estimation unit 27 and provides the generated state information to the information processing device 30.
The information processing device 30 generates teacher data using the detection result of the sensor 11 and the state information provided from the terminal 26, and provides the generated teacher data to the information processing device 40. The learning unit 41 of the information processing device 40 executes machine learning using the teacher data provided from the information processing device 30 to generate an inference model. The information processing device 40 provides the generated inference model to the information processing device 50. The estimation unit 52 of the information processing device 50 may be the same as or different from the estimation unit 51 described with reference to FIG. 1. The estimation unit 52 estimates the state of the target 2 using the detection result of the sensor 11 and the inference model generated by the learning unit 41. The information processing device 50 provides the estimation result of the estimation unit 52 to the application 60.
The information processing system 1 according to this embodiment is used, for example, as follows. As described with reference to FIG. 1, the information processing system 1 generates teacher data using the detection result of the sensor 10 and the state information input to the terminal 20 by the evaluator 3, and generates a first inference model using the generated teacher data. The sensor 12 in FIG. 9 includes a sensor that detects the same type of information as the sensor 10, and the estimation unit 27 of the terminal 26 executes estimation processing using the first inference model. For example, the sensor 10 in FIG. 1 and the sensor 12 in FIG. 9 both include sensors that detect information indicating vital activity of the subject 2 (e.g., heart rate). The first inference model is a model that estimates the state of the subject 2 from information indicating the vital activity of the subject 2. The estimation unit 27 of the terminal 26 processes the information indicating the vital activity of the subject 2 detected by the sensor 12 to estimate state information (e.g., the health condition of the subject 2). The sensor 11 includes, for example, a camera. The information processing device 30 generates second teacher data, for example, using data of an image of the subject 2 captured by the sensor 11 as the detection result and using data indicating the health condition of the subject 2 estimated by the estimation unit 27 as the state information. The learning unit 41 generates an inference model that estimates the health condition of the target 2 from an image of the target 2, using the second teacher data. By repeating such processing, the information processing system 1 can generate inference models that associate a plurality of types of information.
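This bootstrapping step, in which the first model's estimates serve as labels for a second sensor's data, can be sketched as follows; the threshold "model", the file names, and the heart-rate values are placeholders for the learned first inference model and the actual detection results.

```python
def first_model(heart_rate: float) -> str:
    """Stand-in for the first inference model: maps vital-activity data from
    sensor 12 to a coarse state label (a real model would be learned)."""
    return "tired" if heart_rate > 100 else "normal"

def build_second_teacher_data(sensor11_images, sensor12_heart_rates):
    """Pair each image from sensor 11 with the state estimated by the first model
    from the simultaneous sensor-12 reading, producing the second teacher data."""
    return [{"input": img, "label": first_model(hr)}
            for img, hr in zip(sensor11_images, sensor12_heart_rates)]

if __name__ == "__main__":
    images = ["img_0001.png", "img_0002.png"]  # detection results of sensor 11 (camera)
    heart_rates = [72.0, 118.0]                # detection results of sensor 12
    second_teacher_data = build_second_teacher_data(images, heart_rates)
    print(second_teacher_data)
    # The learning unit 41 would then train the second inference model
    # (image -> health condition) on second_teacher_data.
```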
FIG. 10 is a diagram showing an example of a computer according to the embodiments. At least part of this computer 100 is used, for example, in at least one of the information processing devices according to the embodiments (e.g., the information processing device 30, the information processing device 40, and the information processing device 50) and the terminals (e.g., the terminal 20 and the terminal 26). The computer 100 includes a CPU 101, a ROM 102, a RAM 103, a bus 104, an input unit 105, an output unit 106, a storage unit 107, a communication unit 108, and a drive 109. The CPU 101, the ROM 102, and the RAM 103 are connected to one another via the bus 104. The input unit 105, the output unit 106, the storage unit 107, the communication unit 108, and the drive 109 are also connected to the bus 104.
The CPU 101 includes a central processing unit (Central Processing Unit). The ROM 102 includes a read-only memory (Read Only Memory). The RAM 103 includes a random access memory (Random Access Memory). The CPU 101 executes various kinds of processing according to programs recorded in the ROM 102 or programs loaded from the storage unit 107 into the RAM 103. The RAM 103 also stores, as appropriate, data and the like necessary for the CPU 101 to execute the various kinds of processing.
The input unit 105 includes, for example, at least one of a keyboard, a mouse, a trackball, and a touchpad. The input unit 105 receives input of various kinds of information when operated by the user. The input unit 105 may include a microphone or the like that receives voice input. The output unit 106 includes, for example, a display device (e.g., a display) that outputs images. The output unit 106 may include a speaker that outputs sound. The input unit 105 and the output unit 106 may include a touch panel in which a transmissive touchpad is superimposed on a display unit.
The storage unit 107 includes, for example, at least one of a hard disk, a solid state drive, and a nonvolatile memory. The storage unit 107 stores at least part of various data input to the computer 100 from the outside, various data used in processing by the computer 100, and various data generated by the computer 100. The communication unit 108 controls communication performed with other devices via a network.
A removable medium 110 is mounted on the drive 109 as appropriate. The removable medium 110 includes, for example, a storage medium such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory. The drive 109 reads information from the removable medium 110, and the information read from the removable medium 110 is stored in the storage unit 107. For example, the drive 109 reads a program stored in the removable medium 110, and this program is installed in the computer 100.
Note that the computer 100 does not have to include one or both of the input unit 105 and the output unit 106. For example, the output unit 106 may be a device externally attached to the computer 100, and the computer 100 may include an interface to which the output unit 106 can be connected.
In the above-described embodiments, the information processing devices and the terminals read programs stored in their storage units or storage devices and execute various kinds of processing according to the programs. This program causes a computer to execute, for example: acquiring a detection result of a sensor that detects a target, and acquiring state information, which is obtained from information different from the detection result of the sensor and represents the state of the target, from a predetermined terminal associated with the target; and generating data used for machine learning using the acquired detection result and state information.
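The two steps carried out by such a program can be sketched minimally as follows; the record layout and function names are illustrative and do not reflect any particular implementation of the disclosed program.

```python
def acquire(sensor_reading, state_info_from_terminal):
    """Acquisition step: receive the sensor detection result and the state
    information reported by the predetermined terminal."""
    return sensor_reading, state_info_from_terminal

def generate_training_record(sensor_reading, state_info):
    """Generation step: associate the two into one record of teacher data."""
    return {"input": sensor_reading, "label": state_info["state"]}

if __name__ == "__main__":
    reading, info = acquire({"heart_rate": 95}, {"target_id": "T-001", "state": "normal"})
    print(generate_training_record(reading, info))
```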
The above program may be provided while being recorded on a computer-readable storage medium. The above program may be a differential program (referred to as a differential file as appropriate) that executes various kinds of processing in combination with a program (e.g., an operating system) recorded in a computer system.
In the above-described embodiments, the information processing system 1 associates first information and second information about a target to generate data used for machine learning. The second information includes information different from the first information. For example, the first information includes information obtained by a sensor detecting the target 2, and the second information includes information representing the state of the target as judged by a person. The second information may include information obtained by a device processing the first information. For example, the first information may correspond to the detection result of a sensor that detected the target, and the second information may include the result of arithmetic processing, statistical processing, or inference processing using the first information. For example, the second information may include information representing the state of the target acquired by another system. For example, the first information may include information obtained by a sensor detecting the target 2, and the second information may include information on the target 2 provided by a system that performs monitoring, information provision, or the like.
The first information may be information of a type different from that of the second information. For example, the first information may be information based on an image, and the second information may be information based on the composition of a substance, temperature, or force. The first information may be information based on sight, and the second information may be information based on hearing, smell, taste, or touch. The first information may include information acquired by a sensor attached to the target, and the second information may include information acquired by a sensor installed away from the target.
The technical scope of the present invention is not limited to the aspects described in the above embodiments and the like. One or more of the requirements described in the above embodiments and the like may be omitted, and the requirements described in the above embodiments and the like may be combined as appropriate. In addition, to the extent permitted by law, the disclosures of all documents cited in the above embodiments and the like are incorporated herein by reference and made a part of this description.
1: information processing system, 2: target, 3: evaluator, 10: sensor, 20: terminal, 27: estimation unit, 31: communication unit, 32: processing unit, 33: storage unit, 34: acquisition unit, 35: specifying unit, 36: generation unit, 37: control unit, 41: learning unit, 51: estimation unit, 52: estimation unit, 60: application, D6: candidate information, M: model

Claims (12)

1.  An information processing device comprising:
     an acquisition unit that acquires a detection result of a sensor that detects a target, and acquires, from a predetermined terminal associated with the target, state information that is obtained from information different from the detection result of the sensor and that represents a state of the target; and
     a generation unit that generates data used for machine learning, using the detection result and the state information acquired by the acquisition unit.
2.  The information processing device according to claim 1, wherein
     the predetermined terminal includes a terminal of an evaluator associated with the target, and
     the state information includes information input to the predetermined terminal by the evaluator.
3.  The information processing device according to claim 2, wherein the predetermined terminal displays a plurality of candidates representing the state of the target on a display unit, and receives, as the state information, an input designating the state of the target from among the plurality of candidates.
4.  The information processing device according to claim 1, wherein the predetermined terminal estimates the state of the target using a detection result of a sensor different from the sensor to generate the state information, and provides the generated state information to the acquisition unit.
5.  The information processing device according to any one of claims 1 to 4, further comprising:
     a storage unit that stores candidate information in which candidates for the predetermined terminal are associated with the target; and
     a specifying unit that specifies the predetermined terminal from the candidates associated with the target in the candidate information stored in the storage unit.
6.  The information processing device according to any one of claims 1 to 4, further comprising a specifying unit that specifies the predetermined terminal from terminals existing within a predetermined range with respect to a position of a detection target of the sensor.
7.  The information processing device according to claim 5 or claim 6, further comprising a control unit that requests the predetermined terminal specified by the specifying unit to provide the state information, wherein the acquisition unit acquires the state information from the predetermined terminal as a response to the request.
8.  The information processing device according to claim 7, wherein the control unit requests the predetermined terminal to provide the state information when the detection result satisfies a predetermined condition.
9.  The information processing device according to any one of claims 1 to 8, wherein the generation unit uses the detection result acquired by the acquisition unit to derive a detection result of the sensor at a time when the state information was obtained, and generates the data by associating the derived detection result of the sensor with the state information.
10.  An information processing system comprising:
     the information processing device according to any one of claims 1 to 9; and
     an estimation unit that processes the detection result acquired by the acquisition unit with a model generated through the machine learning by the learning unit to estimate the state of the target.
11.  An information processing program that causes a computer to execute:
     acquiring a detection result of a sensor that detects a target, and acquiring, from a predetermined terminal associated with the target, state information that is obtained from information different from the detection result of the sensor and that represents a state of the target; and
     generating data used for machine learning using the acquired detection result and state information.
12.  An information processing method using one or more computers, the method comprising:
     receiving, by a communication unit of the computer, a detection result of a sensor that detects a target, and receiving, from a predetermined terminal associated with the target, state information that is obtained from information different from the detection result of the sensor and that represents a state of the target; and
     generating, by a processing unit of the computer, data used for machine learning using the detection result and the state information received by the communication unit.
PCT/JP2021/026060 2021-07-12 2021-07-12 Information processing device, information processing system, information processing program, and information processing method WO2023286105A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2023534430A JPWO2023286105A1 (en) 2021-07-12 2021-07-12
PCT/JP2021/026060 WO2023286105A1 (en) 2021-07-12 2021-07-12 Information processing device, information processing system, information processing program, and information processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/026060 WO2023286105A1 (en) 2021-07-12 2021-07-12 Information processing device, information processing system, information processing program, and information processing method

Publications (1)

Publication Number Publication Date
WO2023286105A1 true WO2023286105A1 (en) 2023-01-19

Family

ID=84919130

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/026060 WO2023286105A1 (en) 2021-07-12 2021-07-12 Information processing device, information processing system, information processing program, and information processing method

Country Status (2)

Country Link
JP (1) JPWO2023286105A1 (en)
WO (1) WO2023286105A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003284139A (en) * 2002-03-20 2003-10-03 Sharp Corp Information providing service and information providing system
JP6522173B1 (en) * 2018-01-16 2019-05-29 株式会社エンライブ INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING PROGRAM
WO2020170986A1 (en) * 2019-02-21 2020-08-27 ソニー株式会社 Information processing device, method, and program

Also Published As

Publication number Publication date
JPWO2023286105A1 (en) 2023-01-19

Similar Documents

Publication Publication Date Title
Golan et al. A framework for operator–workstation interaction in Industry 4.0
US20100153390A1 (en) Scoring Deportment and Comportment Cohorts
CN112119396A (en) Personal protective equipment system with augmented reality for security event detection and visualization
KR20210015942A (en) Personal protective equipment and safety management system with active worker detection and evaluation
US10104509B2 (en) Method and system for identifying exceptions of people behavior
US20180330815A1 (en) Dynamically-adaptive occupant monitoring and interaction systems for health care facilities
JP2018142259A (en) Manufacturing management device, method, and program
JP2021523466A (en) Personal protective equipment and safety management system for comparative safety event evaluation
WO2018233858A1 (en) Method for robot social interaction
CN115797868A (en) Behavior early warning method, system, device and medium for monitoring object
JP2016058078A (en) Obtaining metrics for position using frames classified by associative memory
Sanchez et al. Hidden markov models for activity recognition in ambient intelligence environments
TW202025173A (en) Intelligent method for processing physiological data and system thereof
Bangaru et al. Gesture recognition–based smart training assistant system for construction worker earplug-wearing training
US11704615B2 (en) Risk assessment apparatus and related methods
Cook et al. Assessing leadership behavior with observational and sensor-based methods: A brief overview
US20210271217A1 (en) Using Real Time Data For Facilities Control Systems
WO2023286105A1 (en) Information processing device, information processing system, information processing program, and information processing method
CN113678148A (en) Dynamic message management for a PPE
JP2018147452A (en) Door system and monitoring method using door system
Aksüt et al. Using wearable technological devices to improve workplace health and safety: An assessment on a sector base with multi-criteria decision-making methods
US20200381131A1 (en) System and method for healthcare compliance
US11636359B2 (en) Enhanced collection of training data for machine learning to improve worksite safety and operations
US20210142047A1 (en) Salient feature extraction using neural networks with temporal modeling for real time incorporation (sentri) autism aide
Nair et al. Evaluation of an IoT framework for a workplace wellbeing application

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21950049

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023534430

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE