WO2016084499A1 - 行動分類システム、行動分類装置及び行動分類方法 (Behavior Classification System, Behavior Classification Device, and Behavior Classification Method) - Google Patents

行動分類システム、行動分類装置及び行動分類方法 (Behavior Classification System, Behavior Classification Device, and Behavior Classification Method)

Info

Publication number
WO2016084499A1
WO2016084499A1 (PCT/JP2015/078664)
Authority
WO
WIPO (PCT)
Prior art keywords
action
person
state
acceleration
behavior
Prior art date
Application number
PCT/JP2015/078664
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
田中 毅
Original Assignee
株式会社日立システムズ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立システムズ
Publication of WO2016084499A1 publication Critical patent/WO2016084499A1/ja


Classifications

    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63BAPPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B69/00Training appliances or apparatus for special sports
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services

Definitions

  • the present invention relates to a technique for classifying a person's action based on data measured by a sensor attached to the person.
  • Patent Document 1 discloses a technique for displaying operation contents such as a golf swing form on a display using a wristband type sensor.
  • the display of operation contents by the wristband type sensor of Patent Document 1 estimates the position and trajectory of the hand to which the sensor is attached from acceleration and gyro data, and displays the estimated hand trajectory together with predefined body and leg forms of a golf swing; it therefore cannot estimate anything other than hand movements.
  • the behavior classification system of the present invention holds model information including rules for discriminating the behavior of the person based on the physical feature amount calculated from the acceleration of a predetermined part of the person.
  • various actions of a person can be classified based on measurement data of a sensor attached to a predetermined part of the person.
  • FIG. 1 is a block diagram showing the main configuration of the behavior classification system according to the first embodiment of the present invention.
  • the behavior classification system includes a sensor device 1 worn by a user, a PC 2 or a smartphone 3 that communicates with the sensor device 1, and a server 5 that can communicate with the PC 2 or the smartphone 3 via a network 4.
  • the sensor device 1 transmits the measured sensor data to the server 5 via the PC 2 or the smartphone 3.
  • the server 5 analyzes the sensor data, estimates the movement of each part such as the user's limb, and classifies the action content from the content of the estimated movement of the part.
  • the PC 2 or the smartphone 3 can download the analyzed result from the server 5 and allow the user to browse.
  • the sensor device 1 mainly includes a microcomputer (MCU: Micro Control Unit) 10, an acceleration sensor 11, a memory 12, an input unit 13, a display unit 14, and a communication unit 15.
  • the acceleration sensor 11 always measures acceleration due to user movement or the like at a predetermined frequency (typically, about 20 to 1000 times per second).
  • the microcomputer 10 reads the acceleration data measured by the acceleration sensor 11 by controlling communication and records it in the memory 12.
  • the sensor device 1 is attached to a predetermined part of the user's body.
  • in the present embodiment, an example in which the sensor device 1 is attached to the wrist portion of the user's forearm, a predetermined part of the user's body, will be described.
  • the sensor data recorded in the memory 12 can be transmitted to the PC 2 or the smartphone 3, either automatically at a timing when the microcomputer 10 controls the communication unit 15 and wireless or wired communication with the PC 2 or the smartphone 3 becomes possible, or at any timing chosen by the user. The data to be transmitted can be encrypted by a known function of the communication unit 15 or the microcomputer 10 so that it cannot be read by other users or the like.
  • the data measured by the acceleration sensor 11 is not only transmitted to the outside; it can also be analyzed by a simple analysis function recorded in advance in the microcomputer 10 as a program, and the result, for example a step count or calorie consumption, can be displayed on the display unit 14.
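  • as an illustration of the kind of simple on-device analysis mentioned above, the sketch below counts steps from 3-axis acceleration. It is only a hedged example: the sampling rate, peak height, and minimum peak spacing are assumptions, not values taken from this publication.

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_steps(acc_xyz, fs=50.0):
    """acc_xyz: array of shape (N, 3) in units of g; returns an approximate step count."""
    magnitude = np.linalg.norm(np.asarray(acc_xyz, dtype=float), axis=1)  # combine the three axes
    magnitude -= magnitude.mean()                        # remove the constant gravity offset
    # count peaks that are strong enough and at least 0.3 s apart (at most ~3 steps per second)
    peaks, _ = find_peaks(magnitude, height=0.15, distance=int(0.3 * fs))
    return len(peaks)
```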
  • the PC 2 and the smartphone 3 can communicate with the sensor device 1 and a server 5 connected to a network 4 such as the Internet.
  • the PC 2 and the smartphone 3 can transfer the sensor data received from the sensor device 1 to the server 5, and can display and operate published information recorded in the server 5.
  • the server 5 includes a CPU 51, a communication unit 52, a memory 54, and a database 70.
  • the memory 54 stores a WEB display program 53 and an analysis program 60.
  • the server 5 analyzes the sensor data of the sensor device 1 transmitted from the PC 2 or the smartphone 3 by the analysis program 60, converts it into information on the state of the human part or the action content, and records it in the database 70.
  • the server 5 can also generate an algorithm to be analyzed and the rule itself.
  • the database 70 is configured by a storage device such as a hard disk drive.
  • the analysis program 60 and the WEB display program 53 may be stored in this storage device, and a part or all of them may be copied to the memory 54 as necessary.
  • the CPU 51 can execute processing of the analysis program 60 and the WEB display program 53 and perform calculations. That is, in the following description, the processing executed by these programs is actually executed by the CPU 51 in accordance with instructions described in these programs. In other words, when the CPU executes these programs, a functional module that executes processing to be described later is realized.
  • the functional module of the present embodiment is realized by the CPU 51 executing the program as described above, but this is an example, and may be realized by another means, for example, a dedicated logic circuit.
  • the communication unit 52 communicates with other devices such as the PC 2 or the smartphone 3 via the network 4 and can transmit / receive data to / from them.
  • the acceleration data received from the sensor device 1 via the PC 2 or the smartphone 3 is recorded in the database 70.
  • the WEB display program 53 can open the data recorded in the database 70 to the user of the sensor device 1 via the network 4.
  • the analysis program 60 includes a physical feature amount calculation program 61, a part state estimation program 62, an action content classification program 63, a part state determination model generation program 64, and a part state determination model selection program 65.
  • the database 70 includes acceleration data 80, physical feature data 71, part state data 72, action classification data 73, part state determination model data 74, user profile data 75, and part / physical quantity teacher data 76.
  • the physical feature amount calculation program 61 is a program that processes the acceleration data 80 measured and transmitted by the sensor device 1, calculates a feature amount related to human movement or behavior, and records it in the physical feature amount data 71. Only necessary features can be extracted from a large amount of time-series data.
  • the part state estimation program 62 is a program that estimates the movement of each part of the user at each time based on the physical feature data 71 and records the result in the part state data 72. The movement of parts to which the sensor device 1 is not directly attached can be estimated from the movement characteristics of the sensor device 1 by an algorithm that uses the statistical characteristics and distribution bias of combinations of part states, recorded in advance in the part state determination model data 74. Further, the part state estimation program 62 refers to attribute information of the user who transmitted the acceleration data 80 to be analyzed, such as gender, physical characteristics such as weight and height, and lifestyle characteristics such as hobbies and occupation.
  • by referring to a part or all of this attribute information, the part state determination model selection program 65 selects the corresponding part state determination model data 74 to be used. This is because the statistical relationship between the movement of the sensor device 1 and the movement and state of each part may vary depending on physical characteristics and lifestyle.
  • the action content classification program 63 is a program that classifies action contents into an arbitrary number of classes based on the part state data 72 and records the result in the action classification data 73; without defining the name or content of each action, daily action contents can be classified solely by the combination of the states of each part.
  • the part state discrimination model generation program 64 is a program for generating part state determination model data 74 based on the part / physical quantity teacher data 76 acquired in advance.
  • the part / physical quantity teacher data 76 includes motion data indicating the movement of each part of a person in daily life, together with acceleration data of the sensor device 1 recorded at the same time, or physical feature values calculated from that acceleration data.
  • to create this teacher data, a plurality of measurement target users wearing the sensor device 1 behave as they would in daily life, and the sensor device 1 acquires acceleration data during that time.
  • at the same time, motion data indicating the movements of a plurality of parts of each measurement target user is acquired by a known technique, for example motion capture, which is a known product.
  • the acceleration data acquired in this way or the physical feature amount calculated therefrom and the motion data are associated with each other based on the acquired time, and are recorded as the part / physical quantity teacher data 76.
  • the part state discriminating model generation program 64 finds statistical features and distribution bias of the physical feature amounts corresponding to each part state based on the relational data between part states and physical feature quantities, generates rules for discriminating the part state based on those statistical features, and records them in the part state determination model data 74.
  • FIG. 2 is an explanatory diagram showing an appearance of the sensor device 1 according to the first embodiment of the present invention and a typical arrangement of the acceleration sensor 11.
  • the arrangement of the acceleration sensor 11 defines the direction (X, Y, Z) of the triaxial acceleration to be measured.
  • the server 5 actually processes acceleration data from a plurality of sensor devices 1 attached to different users.
  • the analysis program 60 can perform the same calculation in each sensor device 1 when the arrangement of the acceleration sensor 11 of each sensor device 1 is the same.
  • when the arrangement differs, the sensor device 1, or the server 5 that has received the acceleration data from the sensor device 1, needs to execute a coordinate system conversion process to unify the orientation.
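  • a minimal sketch of such a coordinate system conversion is shown below, assuming the conversion can be expressed as a fixed rotation of the acceleration axes; the particular rotation matrix is illustrative and would in practice depend on how the acceleration sensor 11 is mounted in each sensor device 1.

```python
import numpy as np

# Illustrative fixed rotation (90 degrees about the Z axis), purely an assumption.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

def unify_orientation(acc_xyz):
    """acc_xyz: (N, 3) samples in the device's own axes -> samples in the reference axes."""
    return np.asarray(acc_xyz, dtype=float) @ R.T
```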
  • FIG. 3 is an explanatory diagram illustrating a typical processing procedure of part state estimation and action content classification executed by the server 5 according to the first embodiment of the present invention.
  • FIG. 3 shows a procedure of processing the acceleration data 80 in the order of the physical feature amount calculation program 61, the part state estimation program 62, and the action content classification program 63.
  • the physical feature quantity calculation program 61 cuts out acceleration data in a predetermined time unit and performs an operation. The section to be cut out depends on the time for estimating the part state. For example, the physical feature quantity calculation program 61 calculates acceleration data every minute if the time unit for estimating the part state is a minute unit, and calculates acceleration data every second if the unit is seconds.
  • the main feature amounts calculated by the physical feature amount calculation program 61 are statistical values, such as the average, maximum, minimum, and variance over the predetermined time interval, of the amplitude 111 of each axis, the frequency 112, and the arm rotation angles θ113 and φ114.
  • the feature amounts are not limited to these; general statistical values calculated from the triaxial acceleration, or signal-processed values, can also be used as feature quantities.
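  • to make the windowed calculation concrete, the sketch below computes per-window statistics and a dominant frequency from 3-axis acceleration. It is a hedged example, not the patented algorithm: the sampling rate, window length, and the use of an FFT peak as the "frequency" feature are assumptions.

```python
import numpy as np

def window_features(acc_xyz, fs=50.0, window_s=60.0):
    """acc_xyz: (N, 3) acceleration samples; yields one feature dict per non-overlapping window."""
    acc_xyz = np.asarray(acc_xyz, dtype=float)
    n = int(fs * window_s)
    for start in range(0, len(acc_xyz) - n + 1, n):
        w = acc_xyz[start:start + n]
        spec = np.abs(np.fft.rfft(w - w.mean(axis=0), axis=0))
        dom_bin = spec[1:].argmax(axis=0) + 1          # dominant non-DC frequency bin per axis
        yield {
            "mean": w.mean(axis=0),                    # average amplitude per axis
            "max": w.max(axis=0),
            "min": w.min(axis=0),
            "var": w.var(axis=0),
            "dom_freq_hz": dom_bin * fs / n,           # a stand-in for the frequency 112
        }
```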
  • the arm rotation angle θ113 is a pitch angle indicating the amount of movement of raising and lowering the arm about the shoulder or elbow, and the arm rotation angle φ114 is a roll angle indicating the amount of movement of rotating the forearm about its longitudinal direction, that is, of twisting the wrist.
  • the physical feature amount is recorded in the physical feature amount data 71 of the database 70.
  • the part state estimation program 62 estimates part states from the physical feature quantity data 71 of the same or a nearby time interval using the part state determination model data 74, estimating, for example, the arm state 121, the elbow angle 122, the thigh state 123, and the posture 124. Since the data necessary for estimating each part state differs, the part state estimation program 62 extracts only the necessary data from the physical feature amount data 71. Further, the part state determination model selection program 65 selects from the part state determination model data 74 a model that matches the user profile data 75 of the user being estimated, and the part state estimation program 62 uses it. Part states such as the arm state 121, the elbow angle 122, the thigh state 123, and the posture 124 are recorded in the part state data 72 of the database 70.
  • the action content classification program 63 classifies the action contents by referring to the part state data 72 of a predetermined section and clustering it into a set of data having similar characteristics of the combination of the state of each part. In the example of FIG. 3, the actions A131 to D134 are classified.
  • clustering may be performed on all users' data without limiting the conditions, or it may be performed on data limited to a group of users with similar profiles, or limited to a single user.
  • regarding the time interval, clustering may target, for example, a time interval within a short period such as one week, or a time interval within a long period such as one year or several years.
  • the part state determination model data 74 is generated by the part state determination model generation program 64 from the part / physical quantity teacher data 76 acquired in advance.
  • the part / physical quantity teacher data 76 is a database that associates part states acquired from actual users with physical feature quantities; the part state discriminating model generation program 64 derives, by a known process such as machine learning, rules or threshold values that discriminate the part state based on statistical features or distribution bias, and records them as the part state discrimination model data 74.
  • FIG. 4 is a flowchart showing a typical processing procedure executed by the server 5 according to the first embodiment of the present invention.
  • the processing shown in FIG. 4 is performed in order by the physical feature quantity calculation program 61, the part state determination model selection program 65, the part state estimation program 62, and the action content classification program 63: based on the acceleration sensor data, the movement and state of each part at each time are estimated, and the action contents are classified using the movement and state of each part.
  • the server 5 starts the analysis process from the process S100 after the server 5 receives the data measured by the sensor device 1.
  • the physical feature quantity calculation program 61 reads acceleration sensor data for a period to be analyzed from the received acceleration sensor data.
  • the target period is, for example, an unanalyzed period in the newly received acceleration sensor data of one user.
  • the physical feature quantity calculation program 61 calculates the read acceleration sensor data, calculates the physical feature quantity, and records the result in the physical feature quantity data 71 of the database 70.
  • the part state determination model selection program 65 reads out user profile data 75 including body information and lifestyle information of the user wearing the sensor device 1 that has measured the data read out in process S102.
  • the part state discriminating model most suitable for the read user profile (for example, a part state discriminating model generated based on the part / physical quantity teacher data acquired from a group of users having profiles similar to the read user profile) is selected from the part state determination model data 74.
  • An ID for identifying a user corresponding to the data is attached to the measured acceleration sensor data, and is similarly added to the physical feature data 71 and the part state data 72, which are analysis results.
  • the part state estimation program 62 reads out the physical feature quantities of the target section to be analyzed from the physical feature quantity data 71, determines the part state using the part state discrimination model selected in process S103, and records the result for each part and each time in the part state data 72.
  • the action content classification program 63 reads each part state from the part state data 72 in order to detect breaks in the action contents, and detects the points where the change per unit time of the states of the parts is largest as action change points. At this time, for example, when observing behavior on a daily basis, if the action breaks are made too short (for example, in units of seconds), it becomes difficult for a person to later understand the action in each segment; it is therefore desirable to set the breaks to an appropriate length that is not too short (for example, in units of one minute).
  • the action content classification program 63 sets an action section (that is, a time zone in which each action is estimated to be performed) by dividing the action contents based on the change point detected in the process S105.
  • the action content classification program 63 classifies the action contents by clustering the divided action sections according to the vectors of the respective part states, and records them in the action classification data 73.
  • the typical process of the present invention is terminated.
  • FIG. 5 is a flowchart showing a typical processing procedure executed by the server 5 according to the first embodiment of the present invention to generate a part state determination model.
  • FIG. 5 shows a part state determination model generation process that is necessary in advance for executing the process for classifying the behavior shown in FIG. 4, and starts from process S200.
  • as a prerequisite for this process, both motion data of the part for which the determination model is to be generated and acceleration data measured by the sensor device 1 at the same time as the motion data are required in advance, but only for a length of time sufficient for statistical determination.
  • the part state determination model generation program 64 reads motion data for a period of analysis measured in advance.
  • the part state determination model generation program 64 reads acceleration data 80 having the same time length starting from the same time as the motion data acquired in process S201.
  • the part state determination model generation program 64 reads out the profile data 75 of the user who is the target for measuring the motion data to be analyzed and the acceleration data.
  • the user profile data 75 is attribute information indicating the physical information, lifestyle, and the like of the user who has measured motion data and acceleration data.
  • the part state determination model generation program 64 uses the user profile data 75; however, a discrimination model common to all users may be generated without acquiring it. In either case, classification with high accuracy is possible because the discrimination model is generated from actually measured data.
  • the part state determination model generation program 64 sets a time unit to be analyzed.
  • the time unit is, for example, a minute unit or a second unit, and a time unit suitable for the movement of the part to be determined may be set, or may be set according to the granularity of the part state to be obtained. For example, when the action to be finally classified is determined by the minute unit, it is desirable to record the result of determining the part state by the minute unit in the database 70.
  • the part state determination model generation program 64 calculates a physical feature amount in units of time set in process S204.
  • the part state determination model generation program 64 reads the part state defined by the time unit set in the process S205 from the motion data. The part state and physical feature obtained in steps S205 and S206 are used as teacher data 76.
  • the part state discrimination model generation program 64 generates a discrimination model by a statistical discriminant analysis method such as known machine learning. The part state determination model generation program 64 records the determination model generated in step S207 in the part state determination model data 74 in association with the user profile information acquired in step S203.
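  • as a hedged illustration of the "statistical discriminant analysis method such as known machine learning" mentioned above, the sketch below trains a linear discriminant model from teacher data, assuming scikit-learn; the feature layout and label encoding are assumptions for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_part_state_model(features, part_states):
    """features: (N, D) physical feature quantities per time unit;
    part_states: (N,) part-state labels (e.g. 0 = sitting, 1 = standing)."""
    model = LinearDiscriminantAnalysis()
    model.fit(np.asarray(features, dtype=float), np.asarray(part_states))
    return model   # conceptually what would be stored as part state determination model data 74
```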
  • FIG. 6 is an explanatory diagram showing two typical physical feature quantities that can be calculated from the three-axis acceleration data of the arm that can be measured by the sensor device 1 according to the first embodiment of the present invention.
  • the zero crossing number 202 is the total number of times per unit time that the acceleration waveform 201 crosses a predetermined threshold.
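  • a minimal sketch of this count, interpreting a "crossing" as the waveform moving from one side of the threshold to the other (the threshold value itself being an illustrative assumption):

```python
import numpy as np

def crossing_count(acc_axis, threshold=0.0):
    """acc_axis: 1-D acceleration samples for one axis; returns the number of threshold crossings."""
    above = np.asarray(acc_axis, dtype=float) > threshold
    return int(np.count_nonzero(above[1:] != above[:-1]))
```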
  • the pitch angle (θ) 211 is the angle of the sensor device 1 with respect to the ground produced by raising and lowering the forearm (in other words, the angle formed by the horizontal plane and the longitudinal direction of the forearm), and the roll angle (φ) 212 is the rotation angle of the forearm about its longitudinal direction.
  • both angles are calculated from the orientation of the sensor device 1 with respect to the direction of gravitational acceleration, which is obtained from the triaxial acceleration data measured by the sensor device 1.
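  • a minimal sketch of that calculation, assuming the gravity component dominates the measured acceleration when the arm is roughly still, and assuming an axis convention (X along the forearm, Y across it, Z out of the device face) that may differ from the actual sensor device 1:

```python
import numpy as np

def pitch_roll_deg(ax, ay, az):
    """ax, ay, az: gravity-dominated acceleration components in the assumed device axes."""
    pitch = np.degrees(np.arctan2(ax, np.hypot(ay, az)))  # raising/lowering of the forearm
    roll = np.degrees(np.arctan2(ay, az))                 # twisting about the forearm axis
    return pitch, roll
```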
  • FIG. 7 is an explanatory diagram showing a typical part state that can be discriminated from the three-axis acceleration data of the arm that can be measured by the sensor device 1 according to the first embodiment of the present invention.
  • the standing state 221 and the sitting state 222 are defined as the part states corresponding to the thigh angle, and the part state estimation program 62 determines the part state of the thigh at each time.
  • the usage of the arm in the standing position and the sitting position is greatly different, so that statistical determination can be made from the three-axis acceleration data of the sensor device 1 attached to the arm.
  • as the part states corresponding to the elbow, the elbow extension state 223, the elbow bending state 224, and the like are defined. In this example as well, the part state of the elbow can be statistically determined based on differences in how the arm is used depending on the state of the elbow.
  • FIG. 8 is an explanatory diagram illustrating a typical method in which the server 5 according to the first embodiment of the present invention detects a behavior change point from the part state data 72 and divides the behavior content.
  • FIG. 8 shows an example in which the thigh, elbow, and arm part states are used.
  • the thigh state 231, the elbow state 232, and the arm state 233 each have a continuous or discrete value.
  • for example, g(t) indicating the thigh state 231 may be a discrete value indicating either standing or sitting, and h(t) and k(t) may be discrete values indicating the elbow and arm angles, respectively.
  • the action content classification program 63 calculates a total change of the thigh state 231, the elbow state 232, and the arm state 233 as the change amount 234.
  • a value f (t) indicating the change amount 234 is calculated by, for example, Expression (3). This is a value obtained by multiplying a change amount of a value indicating each part state by a predetermined weighting coefficient a, b, c, and totaling them.
  • the amount of change in the value indicating a part state is the difference between the part state estimated at time t and the part state estimated at the previous time t-i, where i may be, for example, the time interval at which the part state is estimated.
  • f(t) = a(g(t) - g(t-i)) + b(h(t) - h(t-i)) + c(k(t) - k(t-i))   ... (3)
  • the action content classification program 63 detects a point where the change amount 234 exceeds a predetermined threshold value 235 as a change point 236 (S105).
  • a predetermined threshold value 235 is desirably set so that the interval between the change points 236 is always a certain length or more.
  • the certain length is preferably one that allows a person to later recall what he or she was doing, for example 10 minutes or more.
  • Actions 238, 239 and the like divided by the change points 236 are sections for classifying the action contents (that is, time zones where each action is estimated to have been performed).
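  • a sketch of this change point detection under stated assumptions is shown below: the weights a, b, c and the threshold are illustrative, and absolute differences are used so that changes in opposite directions do not cancel, which is an assumption on top of Expression (3).

```python
import numpy as np

def detect_change_points(g, h, k, a=1.0, b=1.0, c=1.0, threshold=1.5):
    """g, h, k: equal-length arrays of thigh, elbow, and arm state values per time step."""
    g, h, k = (np.asarray(x, dtype=float) for x in (g, h, k))
    # change amount f(t) per step (i.e. i = one estimation interval)
    f = a * np.abs(np.diff(g)) + b * np.abs(np.diff(h)) + c * np.abs(np.diff(k))
    return np.where(f > threshold)[0] + 1   # indices t where the change amount exceeds the threshold 235
```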
  • FIG. 9 is an explanatory diagram illustrating a configuration example of the physical feature data 71 held by the server 5 according to the first embodiment of this invention.
  • one row of the physical feature data 71 indicates data for one time unit of one user.
  • a user ID 301 is an ID for identifying a user.
  • the date and time 302 indicates the actual date and time when the data is measured, and is a time unit suitable for determining the part state. For example, when determining the part state in minutes, it is desirable to calculate the physical feature value in minutes as shown by the date 302.
  • Zero crosses 303, 304, and 305 respectively indicate the number of zero crosses of the X, Y, and Z axes calculated by the method shown in FIG.
  • Other statistical values such as maximum, minimum, average, and variance in time units of amplitude and differential value can be applied.
  • the maximum angles 306 and 307 respectively indicate the maximum values of the pitch angle (θ) 211 and the roll angle (φ) 212 during the unit time.
  • statistical values such as minimum values, average values, and variances of measured values can be applied as physical feature amounts.
  • FIG. 10 is an explanatory diagram of a first example of the part state determination model according to the first embodiment of the present invention.
  • FIG. 10 shows an example of distributions 241 and 242 in the physical feature amount space of the physical feature amounts in the standing state 221 and the sitting state 222.
  • These are the discrimination models for discriminating between the standing state 221 and the sitting state 222 shown in FIG.
  • by dividing the physical feature space into the distributions 241 and 242 using a known machine learning discriminant analysis method, it is possible to determine from the physical features whether the thigh is in the standing state 221 or the sitting state 222. Similar discrimination is possible for other parts.
  • it is preferable to generate the discriminant model separately for each group of users having similar profiles, divided on the basis of user profile information that can be known in advance, because the optimal model may differ between such groups.
  • FIG. 11 is an explanatory diagram of a second example of the part state determination model according to the first embodiment of the present invention.
  • FIG. 11 shows a transition model between the states 251, 252, and 253, each of which is an upper arm state. These are discrimination models for discriminating the direction of the upper arm, and are necessary for estimating the elbow bending state 224 and the elbow extension state 223 shown in FIG.
  • the transition model shown in FIG. 11 defines transitions from each upper arm state to another upper arm state according to the direction and strength of the forearm swing identified from the measured acceleration, and the direction of the upper arm is estimated from these transitions. For daily movements, this transition model starts from the downward state 252, based on the fact that the upper arm is very likely to point downward when the arm hangs down; depending on the strength of an upward swing, a transition to the state 251 (that is, a state where the upper arm faces upward) or to the state 253 (that is, a state where the upper arm faces the horizontal direction) can be distinguished.
  • the part state discriminating model data 74 may include a rule for discriminating the part state at a certain point of time based on the part state at a point before that as described above. In some cases, it is possible to determine the state of other parts by the same method. In this way, by generating a discrimination model that takes into account the probability of state transition, it becomes possible to classify actions with high accuracy.
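  • a rough sketch of such a transition rule is given below; the three states mirror FIG. 11, but the swing-strength thresholds are purely illustrative assumptions.

```python
DOWN, HORIZONTAL, UP = 0, 1, 2

def next_upper_arm_state(current_state, upward_swing_strength):
    """Returns the next upper arm state given the strength of an upward forearm swing
    derived from the measured acceleration (thresholds are illustrative)."""
    if upward_swing_strength > 2.0:     # strong swing: assume the upper arm is raised
        return UP
    if upward_swing_strength > 0.8:     # moderate swing: assume it is roughly horizontal
        return HORIZONTAL
    if upward_swing_strength < 0.2:     # the arm has dropped back down
        return DOWN
    return current_state                # otherwise keep the previous state
```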
  • FIG. 12 is an explanatory diagram illustrating a first structure example of the part state data 72 held by the server 5 according to the first embodiment of this invention.
  • FIG. 12 shows an example of a data structure for a part particularly suitable for minute unit discrimination.
  • One row of the part state data 72 shows data for each user's time unit.
  • the user ID 311 is information for identifying the user wearing the sensor device 1; it is added to the measurement value data when measured by the sensor device 1 and is carried over after analysis to values such as the feature amounts and part states calculated from the measurement values.
  • the date and time 312 is information added based on the date and time when the acceleration is measured by the sensor device 1.
  • the thigh state 313 indicates the discrimination result between the standing state 221 and the sitting state 222 shown in FIG. 7 in binary (for example, one is 0 and the other is 1).
  • the posture state 314 indicates whether the body is in a sleeping state or standing state (in other words, the orientation of the whole body) in binary.
  • FIG. 13 is an explanatory diagram illustrating a second structure example of the part state data 72 held by the server 5 according to the first embodiment of this invention.
  • FIG. 13 shows an example of a data structure for a part particularly suitable for discrimination in seconds.
  • One row of the part state data 72 shows data for each user's time unit.
  • the user ID 321 is information for identifying the user wearing the sensor device 1, as with the user ID 311 in FIG. 12, is added when measured by the sensor device 1, and is inherited after analysis.
  • Date and time 322 is information added based on the date and time when the sensor device 1 measured acceleration.
  • the elbow state 323 indicates the determination result of the elbow bending state 224 and the elbow extension state 223 shown in FIG.
  • the upper arm state 324 indicates the state of the upper arm with discrete values (for example, by assigning values such as 0, 1, and 2 to upward, lateral, and downward, respectively), and a discrimination result based on the discrimination model example shown in FIG. It is.
  • FIG. 14 is an explanatory diagram illustrating a configuration example of the physical feature data 71 held by the server 5 according to the first embodiment of this invention.
  • One row of the physical feature data 71 shows data for each user for each time unit.
  • FIG. 14 shows an example of physical feature values in units of seconds, which are particularly necessary when performing part state determination in units of seconds.
  • User ID 331 is an ID for identifying a user.
  • the date and time 332 indicates the actual date and time when the data is measured, and is a time unit suitable for determining the part state.
  • Amplitudes 333, 334, and 335 respectively represent average values per unit of acceleration intensity of the X, Y, and Z axes calculated by the method illustrated in FIG.
  • Other statistical values such as the number of zero crossings and the maximum, minimum, average, and variance in the time unit of the differential value can be applied.
  • the average angles 336 and 337 indicate the average values of the pitch angle 211 and the roll angle 212 during the unit time, respectively.
  • statistical values such as the minimum value, maximum value, and variance of the pitch angle 211 and the roll angle 212 can be applied as physical feature amounts.
  • FIG. 15 is an explanatory diagram illustrating an example of action content classification by the server 5 according to the first embodiment of this invention.
  • the action content classification program 63 performs change point detection using the plurality of part states recorded in the part state data 72, and for each action section delimited by the change points, generates a part state vector whose elements are the user's part state values corresponding to that action section (for example, the thigh state, posture state, elbow state, upper arm state, and so on).
  • FIG. 15 shows an example of the distribution in the part state vector space of the part state vectors of the plurality of action sections of the plurality of users generated as described above, for example.
  • the action content classification program 63 can classify the action of each action section by clustering these part state vectors (S107).
  • the action content is classified into action classifications 261 to 264 by clustering.
  • when the biases of the part states are similar to each other, it is estimated that the same kind of action (an action having the same or similar contents) was performed.
  • as the clustering method, any known method can be used, and detailed description thereof is therefore omitted.
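  • as a hedged illustration of one such known method, the sketch below clusters part state vectors with k-means, assuming scikit-learn; the number of clusters and the feature layout are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_actions(part_state_vectors, n_classes=4):
    """part_state_vectors: (M, P) array, one row per action section, columns = part state
    values (thigh, posture, elbow, upper arm, ...); returns one cluster label per section."""
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0)
    return km.fit_predict(np.asarray(part_state_vectors, dtype=float))
```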
  • FIG. 16 is an explanatory diagram showing an example of the data structure of the action classification data 73 held by the server 5 according to the first embodiment of the present invention.
  • the behavior of one action section of one user is recorded in one line of the action classification data 73, and the time width of the action section delimited by the change point of the part state and the ID of the classification are recorded.
  • the user ID 351 is an ID for identifying the user.
  • the start time 352 and the end time 353 indicate the start and end dates and times of one section of the action, respectively.
  • the action 354 records the ID or name of the classified action.
  • FIG. 17 is an explanatory diagram illustrating a configuration example of a screen that the server 5 according to the first embodiment of the present invention outputs in order to display an action record.
  • a screen 401 shown in FIG. 17 displays the daily action contents of the user based on the part state data 72 and the action classification data 73; the contents of the movement in each action section are shown concretely from the part state data, candidates for the actual action assumed from them are displayed, and the user can input the actual action content and record the result.
  • the action content display 402 color-codes the action sections divided by changes in the part state according to the result of classifying the actions (for example, by displaying action sections estimated to be the same kind of action in the same color), and can display them in time series.
  • Such color coding is an example of a display method, and various methods can be employed such as, for example, distinguishing the content of an action by a pattern, a symbol, or a character as long as the result of classifying the action can be read. Thereby, the user can easily recognize the result of classifying the behavior.
  • the action content display 403 specifically shows the content of one action section selected from the plurality of action sections displayed on the action content display 402.
  • the action content display 403 can show a figure 403A illustrating the movement of the user by the shape of a person identified from the part states, a figure 403B showing the position in the part state vector space that characterizes the action classification of the selected action section, an action name candidate 403C, and a form 403D in which the user inputs the actual action name.
  • the name of the action input by the user in the form 403D is recorded in a database or the like, and can be referred to as a candidate name by the user himself / herself in the same action classification of another user.
  • by using this screen 401, various actions of the user can be comprehensively visualized, and the input of correct names can be encouraged and collected.
  • a cluster including the part state vector of the selected action section is displayed among the part state vector clusters generated in step S107.
  • the state of each part of the user specified from the part state vector belonging to the cluster is displayed.
  • the shape of the user's body such as the sitting position, the upper arm facing sideways (horizontal), and the forearm horizontal, estimated from the state of each part of the identified user may be illustrated.
  • the action name candidates 403C display action name candidates estimated from the shape of the user's body. For example, for a sitting position with the upper arm horizontal and the forearm horizontal, "PC work", "car driving", and the like are listed as candidates.
  • the user refers to these displays, recalls the action he or she actually performed during the time zone of this action section, and, if the name corresponding to the actual action is displayed as a candidate, selects it into the form 403D.
  • if the name corresponding to the actual action is not displayed as a candidate, the user can input the name into the form 403D.
  • a name input in this way can be displayed as a candidate name 403C for other action sections displayed thereafter. Specifically, when the part state vector of another action section belongs to the same cluster as the part state vector of the action section whose name was previously input (that is, it is classified as the same kind of action), the input name can be displayed as a name candidate 403C.
  • the WEB display program 53 creates and outputs screen data for displaying the screen 401; the server 5 transmits the screen data to the PC 2 or the smartphone 3 via the network 4, and the PC 2 or the smartphone 3 may display the screen 401 based on the screen data. In this case, the PC 2 or the smartphone 3 is used as a display device.
  • likewise, the input to the form 403D may be performed on the PC 2 or the smartphone 3, in which case the PC 2 or the smartphone 3 is used as an input device.
  • FIG. 18 is an explanatory diagram illustrating a first configuration example of a screen that the server 5 according to the first embodiment of the present invention outputs in order to display the result of behavior analysis.
  • the analysis screen 411 shown in FIG. 18 is an example of a screen viewed by a user of the sensor device 1 or by an instructor who gives health advice to the user of the sensor device 1; it is characterized by analyzing the action contents highly related to daily weight gain, which is recorded separately from the acceleration data measured by the sensor device 1, and displaying the results.
  • the analysis screen 411 includes a related action ratio display 412 and action content displays 413 and 414.
  • the action content display 413 indicates a time zone in which action contents highly related to weight gain appear at a high frequency (for example, a predetermined value or more) in daily life. By combining the display of this time zone and the ratio display 412, it is possible to encourage the instructor or the user to recall the actual action content of the user at that time.
  • the action content display 414 can indicate the movement of the estimated action content in the form of a person, and can provide a candidate for a name corresponding to the shape and advice such as prompting increase or decrease in the time during which the action is performed.
  • the action content classification program 63 and the like can specify the human shape and the corresponding name candidates in the same manner as described with reference to FIG.
  • the advice corresponding to the estimated action content can be generated from the correlation between the change in weight and the length of action time related to weight gain.
  • for example, the candidates for the name of the user's action B shown in FIG. 18 are "PC work" and "car driving". Suppose that the correlation between the user's daily weight gain recorded so far and the length of time spent on action B is higher than the correlation with the time of other actions (or higher than a predetermined threshold), and that the correlation is positive. Then, if the aim is that the user's weight should not increase, information indicating action B as the highly correlated action, the shape of a person indicating the movement of action B, and a message prompting the user to reduce the time spent on action B are displayed as advice.
  • conversely, if the aim is to increase the user's weight, a message prompting the user to increase the time spent on action B is displayed.
  • the action content having a high correlation with weight gain is a combination of one of the classified actions and the length of time for which the action is performed (for example, the length of time for which the action B is performed).
  • another parameter indicating the content of the classified action may also be used; for example, if a parameter indicating the intensity of the person's movement when performing the classified action can be acquired, that parameter may be used, and if the number of times the classified action was executed can be counted, that number may be used.
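  • a hedged sketch of the correlation analysis described above is given below: it ranks classified actions by the correlation between daily time spent on each action and daily weight change. The data shapes and the rule that strongly positively correlated actions become "reduce this" candidates are illustrative assumptions.

```python
import numpy as np

def actions_ranked_by_weight_correlation(daily_action_minutes, daily_weight_change):
    """daily_action_minutes: dict {action_id: (days,) array of minutes spent per day};
    daily_weight_change: (days,) array of weight change per day. Returns (action_id, r) pairs."""
    weight = np.asarray(daily_weight_change, dtype=float)
    correlations = {}
    for action_id, minutes in daily_action_minutes.items():
        r = np.corrcoef(np.asarray(minutes, dtype=float), weight)[0, 1]
        correlations[action_id] = r
    # actions with a strong positive correlation are candidates for "reduce this" advice
    return sorted(correlations.items(), key=lambda kv: kv[1], reverse=True)
```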
  • FIG. 19 is an explanatory diagram illustrating a second configuration example of a screen that the server 5 according to the first embodiment of the present invention outputs in order to display the result of the behavior analysis.
  • FIG. 19 is another example of a screen for analyzing the relationship between information measured by a device other than the measuring device such as the sensor device 1 and the action content and displaying the result, as in FIG.
  • Subjective information that is difficult to measure with a sensor, such as fatigue, is collected from the user in a questionnaire format, and the relationship between the collected result and the action content measured by the sensor device 1 can be shown.
  • the analysis screen 421 includes an action ratio display 422, an action content display 424, an action time and subjective information change graph 423, and a subjective information input form 425.
  • the action ratio display 422 and the action content display 424 are the same as the action ratio display 412 and the action content display 414 shown in FIG.
  • the user can input information indicating the current degree of fatigue by operating the input form 425.
  • the input form 425 can collect information indicating the degree of fatigue in a question format in addition to an input in an analog format as shown in the figure.
  • the change graph 423 displays the time change of the action having the highest correlation with the input subjective information and the change of the subjective information. By viewing this, the user can intuitively grasp the relationship between subjective information (for example, the degree of fatigue) and behavior.
  • the degree of weight gain and fatigue shown in the examples of FIGS. 18 and 19 are examples of parameters indicating changes in the user's health condition, and the server 5 uses the same method as described above for other parameters (for example, blood pressure). May analyze the correlation with the action content and output the result.
  • the analysis of the correlation between a parameter indicating the health condition and the action contents, and the generation of advice based on the result, are performed by the action content classification program 63 in S107; however, these processes may instead be performed after S107 by another functional module (for example, a dedicated program not shown).
  • FIG. 20 is an explanatory diagram of a typical example of a conventional method for estimating the action content from the sensor data.
  • this conventional method adds only the movement of the arm after setting constraints such as a golf swing in advance.
  • in step 81, three-axis acceleration, gyro data, and the like are measured.
  • in step 82, physical quantities such as speed are estimated.
  • in step 84, the wrist trajectory is estimated by matching the speed and angle. Since golf swing conditions are set in advance in step 85, positions other than the movable wrist and arm are fixed in a golf swing form.
  • in step 86, the trajectory of the position of the movable wrist is added to the form of the golf swing, and the movement of the arm and the golf club accompanying the movement of the wrist is added. The movement of the arm and golf club can easily be estimated from the position of the wrist.
  • in contrast, in the present invention, the movement of limbs to which the sensor is not directly attached is estimated using statistical patterns acquired in advance from daily life.
  • the action contents are then classified using a statistical set of combinations of the estimated limb movements and of limb movement combinations in daily life.
  • with a sensor that can be easily worn in daily life, such as a wristband type sensor, not only specific actions such as the golf swing shown in FIG. 20 but also various daily action contents can be classified in detail, and the action content at each time can be estimated. For this reason, in exercise guidance for lifestyle-related disease prevention, it is possible to visualize the action contents to be improved and the results of improving those actions.
  • the sensor device 1 of the first embodiment has a function of transmitting data acquired by measuring acceleration, and does not perform calculation for behavior classification, display of the calculation result, and the like. Therefore, the sensor device 1 can be easily reduced in size, weight, and power consumption, thereby reducing the burden on the person wearing the sensor device 1 for a long time.
  • the PC 2 and the smartphone 3 of the first embodiment have functions as an acceleration data relay device, a behavior classification result display device, and a behavior content input device, and do not perform the computation for behavior classification; they can therefore be realized by a readily available small computer. Since the server 5 of the first embodiment communicates with the sensor device 1, the PC 2, the smartphone 3, and the like via the network 4, it does not need to be carried around, so a computer having sufficient calculation capability can be used and the process of classifying a person's actions can be executed at high speed.
  • Example 2 of the present invention will now be described. Except for the differences described below, each part of the behavior classification system of the second embodiment has the same functions as the parts denoted by the same reference numerals in the first embodiment shown in FIGS., so their description is omitted.
  • FIG. 21 is a block diagram showing the main configuration of the behavior classification system according to the second embodiment of the present invention.
  • the smartphone 3 has a display unit 55, and the display unit 55 displays screens as shown in FIGS. 17 to 19 based on the screen data created by the WEB display program 53.
  • when the display unit 14 of the sensor device 1 has sufficient display capability, the screen data created by the WEB display program 53 may be transmitted from the smartphone 3 to the sensor device 1, and screens such as those in FIGS. 17 to 19 may be displayed on the display unit 14 based on the screen data.
  • the PC 2 or any other type of terminal device may be used instead of the smartphone 3.
  • the configuration of the PC 2 and the like is the same as that of the smartphone 3 shown in FIG.
  • the smartphone 3 and the sensor device 1 may be integrated and attached to a person.
  • the data measured by the sensor device 1 is analyzed without being sent to the server 5 via the Internet. Therefore, an effect of excellent response speed can be obtained.
  • likewise, except for the differences described below, each part of the behavior classification system of the third embodiment described next has the same functions as the parts denoted by the same reference numerals of the first embodiment, so their description is omitted.
  • FIG. 22 is a block diagram showing the main configuration of the behavior classification system according to the third embodiment of the present invention.
  • the analysis program 60 of the third embodiment is obtained by adding an action content determination model generation program 66, an action content determination model selection program 67, and an action content estimation program 68 to the analysis program 60 of the first embodiment shown in FIG. .
  • the database 70 of the third embodiment is obtained by adding behavior content determination model data 77 and behavior / part teacher data 78 to the database 70 of the first embodiment shown in FIG.
  • the action content classification program 63 of the first embodiment classifies the action content into actions that do not specify the meaning according to the characteristics of the part state. That is, in the first embodiment, it is possible to specify a plurality of action sections performing the same kind of action, but it is not possible to specify what the action is.
  • the action content estimation program 68 does not simply classify the action contents; by determining each action as a meaningful action with a name, such as desk work or housework, it can classify actions into meaningful types.
  • the action content estimation program 68 uses action content discrimination model data 77 generated in advance in order to discriminate the action content from the part state. Further, the action content discrimination model selection program 67 can select a discrimination model suitable for the user measured by the sensor device 1 based on the user profile data 75. This is because the association between the part state and the action content may differ depending on the user's age, physique, or lifestyle. However, as in the case of the determination of the part state in the first embodiment, the action content may be determined without considering the user profile.
  • the action content determination model generation program 66 can generate the action content determination model data 77 using the action / part teacher data 78 in which the pre-measured action and the part state are simultaneously recorded.
  • for example, the acceleration data and motion data of a plurality of measurement target users may be acquired by the same method as in the first embodiment, and in addition the name indicating the content of the action being performed while the acceleration data and the motion data were acquired may be recorded; based on these, information associating the content of the executed action with the part state at the time the action was executed may be recorded as the action / part teacher data 78.
  • the behavior content determination model generation program 66 calculates statistical features and the like of the part states corresponding to the content of each action based on the behavior / part teacher data 78, and generates the action content determination model data 77, which includes rules for determining the action content based on those statistical features. At this time, as with the part state determination model data 74, the action content determination model generation program 66 may create the action content determination model data 77 in association with the user profile data 75 indicating the attributes of the measurement target users.
  • alternatively, the behavior content determination model generation program 66 may calculate statistical features and the like of the physical feature amounts corresponding to the content of each behavior based on the teacher data, and generate action content determination model data 77 including rules for determining the behavior content based on those statistical features. In this case, the part state estimation program 62 need not estimate the state of each part, and the action content estimation program 68 estimates the action content based on the physical feature amounts calculated from the acceleration data and on the action content determination model data 77.
  • FIG. 23 is an explanatory diagram illustrating a typical processing procedure of part state estimation and action content estimation executed by the server 5 according to the third embodiment of the present invention.
  • the processing according to the third embodiment is characterized in that the action content estimation program 68 uses the result of part state estimation to estimate the action content as a meaningful, named action, rather than classifying action contents without meaning.
  • the action content determination model selection program 67 selects the action content determination model data 77 based on the user profile data 75.
  • based on the part state data 72 and the selected action content determination model data 77, the action content estimation program 68 determines which of the action contents previously defined in the action content determination model data 77, such as desk work 141, housework 142, reading 143, or sport 144, is the most likely, and outputs it as the estimated action content.
  • FIG. 24 is an explanatory diagram showing a data structure example of the action content data 79 held by the server 5 according to the third embodiment of the present invention.
  • the action content data 79 of Example 3 is characterized in that the action content estimated in a specific time interval is recorded.
  • the start date and time 362 indicates the date and time when the estimated action starts, and the end date and time 363 indicates the date and time when the estimated action ends.
  • the action 364 indicates the estimated action content, and is determined and selected from names predefined in the action / part teacher data 78 or the action content determination model data 77.
  • the present invention is not limited to the examples described above, and includes various modifications.
  • the above-described embodiments have been described in detail for easy understanding of the present invention, and are not necessarily limited to those having all the configurations described.
  • a part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
  • each of the above-described configurations, functions, and the like may be realized by software, by a processor interpreting and executing a program that realizes each function.
  • information such as programs, tables, and files that realize each function can be stored in a memory, in a storage device such as a hard disk drive or an SSD (Solid State Drive), or in a computer-readable non-transitory storage medium such as an IC card, SD card, or DVD.

Landscapes

  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Economics (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
PCT/JP2015/078664 2014-11-26 2015-10-08 行動分類システム、行動分類装置及び行動分類方法 WO2016084499A1 (ja)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014-238581 2014-11-26
JP2014238581A JP6362521B2 (ja) 2014-11-26 2014-11-26 行動分類システム、行動分類装置及び行動分類方法

Publications (1)

Publication Number Publication Date
WO2016084499A1 true WO2016084499A1 (ja) 2016-06-02

Family

ID=56074077

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/078664 WO2016084499A1 (ja) 2014-11-26 2015-10-08 行動分類システム、行動分類装置及び行動分類方法

Country Status (2)

Country Link
JP (1) JP6362521B2 (es)
WO (1) WO2016084499A1 (es)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021060290A1 (ja) * 2019-09-24 2021-04-01 石井 亮 行動影響分析システム、行動影響分析プログラム、及び行動影響分析方法
WO2023228620A1 (ja) * 2022-05-26 2023-11-30 株式会社日立製作所 共同作業認識システム、共同作業認識方法

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7110568B2 (ja) * 2017-09-19 2022-08-02 富士フイルムビジネスイノベーション株式会社 行動推定装置及び行動推定プログラム
KR102089002B1 (ko) * 2017-10-27 2020-03-13 김현우 행동에 대한 피드백을 제공하는 웨어러블 디바이스 및 방법
JP7128736B2 (ja) * 2018-12-27 2022-08-31 川崎重工業株式会社 ロボット制御装置、ロボットシステム及びロボット制御方法
JP7152357B2 (ja) * 2019-05-24 2022-10-12 株式会社日立製作所 正解データ作成支援システムおよび正解データ作成支援方法
JP7456129B2 (ja) * 2019-11-18 2024-03-27 中国電力株式会社 管理システム
JP7309152B2 (ja) * 2020-03-04 2023-07-18 日本電信電話株式会社 行動推定装置、行動推定方法および行動推定プログラム
JP7322985B2 (ja) * 2020-03-19 2023-08-08 カシオ計算機株式会社 運動支援装置、運動支援方法及びプログラム
WO2021210162A1 (ja) * 2020-04-17 2021-10-21 住友電工オプティフロンティア株式会社 光ファイバのための融着接続システム、融着接続機、及び光ファイバを融着接続する方法
WO2021210161A1 (ja) * 2020-04-17 2021-10-21 住友電工オプティフロンティア株式会社 融着接続機、融着接続システム、及び光ファイバを融着接続する方法
CN116322508A (zh) * 2020-07-30 2023-06-23 株式会社村田制作所 行动状态推测装置、行动状态推测方法、行动状态学习装置、以及行动状态学习方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005021450A (ja) * 2003-07-03 2005-01-27 Toshiba Corp 生体状態分析装置及び生体状態分析方法
JP2010207488A (ja) * 2009-03-12 2010-09-24 Gifu Univ 行動解析装置及びプログラム
JP2011227918A (ja) * 2011-06-24 2011-11-10 Seiko Instruments Inc 電子歩数計
JP2013022090A (ja) * 2011-07-15 2013-02-04 Tohoku Univ エネルギー消費量呈示装置およびエネルギー消費量推定方法
JP2014128459A (ja) * 2012-12-28 2014-07-10 Kddi Corp 行動対応情報を適時に提示可能なユーザインタフェース装置、プログラム及び方法


Also Published As

Publication number Publication date
JP6362521B2 (ja) 2018-07-25
JP2016097228A (ja) 2016-05-30

Similar Documents

Publication Publication Date Title
JP6362521B2 (ja) 行動分類システム、行動分類装置及び行動分類方法
EP3566231B1 (en) Apparatus and method for triggering a fall risk alert to a person
Yang et al. Lifelogging data validation model for internet of things enabled personalized healthcare
Meyer et al. Wearables and deep learning classify fall risk from gait in multiple sclerosis
US10959647B2 (en) System and method for sensing and responding to fatigue during a physical activity
Parkka et al. Activity classification using realistic data from wearable sensors
US20110246123A1 (en) Personal status monitoring
JP2017169884A (ja) 睡眠状態推定装置
JP2012527292A (ja) 着用位置を検出するためのセンシングデバイス
WO2019187099A1 (ja) 身体機能自立支援装置およびその方法
WO2017122705A1 (ja) トレーニング分類システム、トレーニング分類方法およびトレーニング分類サーバ
Zhao et al. Recognition of human fall events based on single tri-axial gyroscope
JP2016097228A5 (es)
US20230004795A1 (en) Systems and methods for constructing motion models based on sensor data
JP6569233B2 (ja) 姿勢推定装置、姿勢推定システム、姿勢推定方法及びプログラム
Vandarkuzhali et al. Hybrid RF and PCA method: The number and Posture of piezoresistive sensors in a multifunctional technology for respiratory monitoring
Andrić et al. Sensor-based activity recognition and performance assessment in climbing: A review
Vermander et al. Intelligent systems for sitting posture monitoring and anomaly detection: an overview
Nguyen et al. An instrumented measurement scheme for the assessment of upper limb function in individuals with Friedreich Ataxia
Mani et al. Evaluation of a Combined Conductive Fabric-Based Suspender System and Machine Learning Approach for Human Activity Recognition
JP6530350B2 (ja) 逐次姿勢識別装置、逐次姿勢識別方法および逐次姿勢識別プログラム
Poli et al. Identification issues associated with the use of wearable accelerometers in lifelogging
Poli et al. ADLs Monitoring by accelerometer-based wearable sensors: effect of measurement device and data uncertainty on classification accuracy
KR20150071729A (ko) 3축 가속도 센서를 이용한 실시간 운동측정장치 및 방법
JP6870733B2 (ja) 情報処理装置、情報処理システム、および情報処理方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15862514

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15862514

Country of ref document: EP

Kind code of ref document: A1