CN110826383B - Analysis system, analysis method, program, and storage medium - Google Patents

Analysis system, analysis method, program, and storage medium Download PDF

Info

Publication number
CN110826383B
Authority
CN
China
Prior art keywords
job
time series
series data
processing unit
samples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910227735.0A
Other languages
Chinese (zh)
Other versions
CN110826383A (en)
Inventor
浪冈保男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Corp filed Critical Toshiba Corp
Publication of CN110826383A publication Critical patent/CN110826383A/en
Application granted granted Critical
Publication of CN110826383B publication Critical patent/CN110826383B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • G06F2218/10Feature extraction by analysing the shape of a waveform, e.g. extracting parameters relating to peaks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Provided are an analysis system, an analysis method, a program, and a storage medium that can analyze jobs more automatically and shorten the time required for analysis. The analysis system of the embodiment includes an acquisition unit and a processing unit. The acquisition unit acquires time series data representing the actions of an operator in a 1st step that includes a plurality of jobs. The processing unit detects a plurality of change points of the state in the time series data, and uses the plurality of change points to associate the time series data with each of the plurality of jobs.

Description

Analysis system, analysis method, program, and storage medium
Technical Field
Embodiments of the present invention relate to an analysis system, an analysis method, a program, and a storage medium.
Background
Conventionally, in order to improve productivity at a manufacturing site, information related to jobs has been recorded using video recording or a stopwatch and then analyzed. When a process including a plurality of jobs is repeated, the analysis involves extracting the cycle of the process, creating a detailed time chart of it, extracting differences in actions between operators, and the like.
Tools exist to support these analyses and shorten the time they require. However, even when such a tool is used, a person still has to label the information and make judgments. Furthermore, the judgments vary depending on the skill, experience, and the like of the person performing the analysis. Accordingly, it is desirable to develop a technique that can perform the analysis more automatically and further shorten the time required for it.
Prior art literature
Patent literature
Patent document 1: japanese patent laid-open publication No. 2017-091249
Disclosure of Invention
Problems to be solved by the invention
The present invention provides an analysis system, an analysis method, a program, and a storage medium that can analyze jobs more automatically and shorten the time required for analysis.
Means for solving the problems
The analysis system of the embodiment includes an acquisition unit and a processing unit. The acquisition unit acquires time series data representing the actions of an operator in a 1st step that includes a plurality of jobs. The processing unit detects a plurality of change points of the state in the time series data, and uses the plurality of change points to associate the time series data with each of the plurality of jobs.
Drawings
Fig. 1 is a block diagram showing the structure of an analysis system according to embodiment 1.
Fig. 2 is a schematic diagram for explaining the processing in the analysis system according to embodiment 1.
Fig. 3 is a schematic diagram for explaining the processing in the analysis system according to embodiment 1.
Fig. 4 is a schematic diagram for explaining the processing in the analysis system according to embodiment 1.
Fig. 5 is a schematic diagram for explaining the processing in the analysis system according to embodiment 1.
Fig. 6 is a flowchart showing a process in the analysis system according to embodiment 1.
Fig. 7 is a schematic diagram for explaining the processing in the analysis system according to the 1 st modification of embodiment 1.
Fig. 8 is a flowchart showing a process in the analysis system according to modification 1 of embodiment 1.
Fig. 9 is a schematic diagram for explaining the processing in the analysis system according to modification 2 of embodiment 1.
Fig. 10 is a flowchart showing a process in the analysis system according to modification 2 of embodiment 1.
Fig. 11 is a schematic diagram for explaining the processing in the analysis system according to modification 3 of embodiment 1.
Fig. 12 is a flowchart showing a process in the analysis system according to modification 3 of embodiment 1.
Fig. 13 is a schematic diagram for explaining the processing in the analysis system according to modification 4 of embodiment 1.
Fig. 14 is a flowchart showing a process in the analysis system according to modification 4 of embodiment 1.
Fig. 15 is a block diagram showing the structure of an analysis system according to embodiment 2.
Fig. 16 is a schematic diagram showing a recurrent neural network.
Fig. 17 is a block diagram showing the LSTM configuration.
Fig. 18 is a flowchart showing a process in the analysis system according to embodiment 2.
Fig. 19 is a graph showing data obtained in the example.
Fig. 20 is a graph showing data and neuron activity relating to the example.
Detailed Description
Embodiments of the present application are described below with reference to the drawings.
In the present description and the drawings, the same elements as those already described are given the same reference numerals, and detailed description thereof is omitted as appropriate.
Fig. 1 is a block diagram showing the structure of an analysis system according to embodiment 1.
Fig. 2 to 5 are schematic diagrams for explaining the processing in the analysis system according to embodiment 1.
As shown in fig. 1, the analysis system 1 includes an acquisition unit 10, a processing unit 20, a storage unit 30, and a display unit 40.
The analysis system 1 is used to analyze the actions of an operator engaged in a certain process. Hereinafter, the actions of an operator in a 1st step, which includes a plurality of jobs, will be described.
The acquisition unit 10 acquires time series data representing the actions of the operator in the 1st step.
The acquisition unit 10 includes, for example, an imaging device. The acquisition unit 10 films the operator at work and extracts the operator's skeleton information from the obtained video. The acquisition unit 10 acquires the change in position of a part of the skeleton (for example, the head) over time as time series data representing the actions of the operator.
Alternatively, the acquisition unit 10 may extract joint angles from the skeleton information. The acquisition unit 10 acquires the change in the angle of a joint (for example, the elbow) over time as time series data representing the actions of the operator. Joint angles depend less on the operator's build. Therefore, by using changes in joint angle as time series data, the influence of differences in physique on the analysis can be reduced, and the accuracy of the analysis can be improved.
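For illustration, such a joint-angle series could be derived from skeleton data as in the sketch below, which computes an elbow angle from three joint positions. The joint names, the array layout, and the frame structure are assumptions for the example, not part of the embodiment.

```python
import numpy as np

def joint_angle(shoulder: np.ndarray, elbow: np.ndarray, wrist: np.ndarray) -> float:
    """Elbow angle in degrees, computed from three 3-D joint positions."""
    upper = shoulder - elbow                     # vector from elbow to shoulder
    fore = wrist - elbow                         # vector from elbow to wrist
    cos_a = np.dot(upper, fore) / (np.linalg.norm(upper) * np.linalg.norm(fore))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# One angle per video frame gives the time series used in the analysis
# (frames is a hypothetical list of dicts of joint positions):
# angles = [joint_angle(f["shoulder"], f["elbow"], f["wrist"]) for f in frames]
```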
The extraction of the skeleton information, the detection of changes in the position of the skeleton, the detection of changes in joint angle, and the like may instead be performed by the processing unit 20 described later.
Alternatively, the acquisition unit 10 may include an accelerometer. The accelerometer is attached to a part of the operator's body, such as the wrist or foot. The acquisition unit 10 acquires information such as the acceleration, angular velocity, and azimuth obtained while the operator performs the 1st step as time series data representing the actions of the operator.
The acquisition unit 10 stores the acquired data in the storage unit 30. The storage unit 30 is a hard disk drive, a flash memory, a network hard disk, or the like.
Fig. 2 (a) shows an example of time series data acquired by the acquisition unit 10 and stored in the storage unit 30. Fig. 2 (a) shows acceleration information obtained from an accelerometer attached to the wrist of an operator performing the 1st step. Fig. 2 (b) and the subsequent figures illustrate the processing performed on this time series data by the processing unit 20. In the time series data included in fig. 2 to 5, the horizontal axis represents time Ti and the vertical axis represents acceleration Ac.
The processing unit 20 (processing circuit) analyzes the time series data stored in the storage unit 30.
First, the processing unit 20 divides the time series data into a plurality of states and extracts the change points between the states. For the extraction of the change points, for example, a hierarchical Dirichlet process hidden Markov model (HDP-HMM), the k-means method, the x-means method, or spectral clustering is used. Fig. 2 (b) shows the result of the division performed by the processing unit 20. As shown in fig. 2 (b), the time series data is divided into a plurality of states A, and the change points B between the states A are extracted.
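The embodiment only names candidate algorithms for this step; as one possible realization, the sketch below clusters sliding-window features of the series with k-means and records a change point B wherever the state label switches. The window length, feature choice, and number of states are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_change_points(series: np.ndarray, window: int = 15, n_states: int = 5) -> list[int]:
    """Divide the series into states by clustering window features; return change points."""
    feats = np.array([
        [series[i:i + window].mean(),
         series[i:i + window].std(),
         series[i:i + window].max() - series[i:i + window].min()]
        for i in range(len(series) - window)
    ])
    states = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit_predict(feats)
    # a change point B is an index where the state label A switches
    return [i for i in range(1, len(states)) if states[i] != states[i - 1]]
```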
Next, the processing unit 20 refers to the standard time required for each job included in the 1st step. The standard time of each job is stored in advance in the storage unit 30, for example. The standard time may be input by a person, or may be the time recorded for the job in an operation manual or the like. Fig. 2 (c) shows an example. In the example of fig. 2 (c), the 1st step C includes a job C1 (1st job), a job C2 (2nd job), and a job C3 (3rd job). The jobs C1 to C3 require 18 seconds, 35 seconds, and 19 seconds, respectively.
Next, the processing unit 20 analyzes the correspondence between the time series data and each job using the extracted change points and the standard times of the jobs. First, the processing unit 20 sets a start point of the 1st step C (job C1) in the time series data. For example, one of the plurality of change points B is set as the start point. Alternatively, the start point may be set randomly or based on some rule.
After setting the start point of the job C1, the processing unit 20 samples candidates for the start point of the job C2. Here, a case is described in which the start point of the job C2 is regarded as coinciding with the end point of the job C1.
As an example, as shown in fig. 3 (a), a start point D1 of the job C1 is determined, and samples D2a to D2c are set as candidates for the start point of the job C2 (the end point of the job C1). For example, the samples are set around the time at which the time required for the job C1 (18 seconds) has elapsed from the start point D1. The number of samples is determined in consideration of the time required for processing and the required accuracy of the analysis.
The samples are set, for example, at regular intervals or randomly within a predetermined range from this center. Alternatively, more samples may be set near the center and fewer farther from it. As an example, a probability distribution is set in advance for each of the jobs C1 to C3, as shown in fig. 3 (b). In fig. 3 (b), the horizontal axis represents time Ti, and the vertical axis represents the probability P of each time. The peak of the probability P lies at the standard time. The processing unit 20 may set the samples according to this probability distribution.
The processing unit 20 calculates an evaluation value (2nd evaluation value) for each sample using the distance between the sample and its closest change point B. Examples of the distance include the Euclidean distance, the Manhattan distance, and the Mahalanobis distance. For example, the shorter the distance between a sample and its closest change point B, the higher the evaluation value calculated for that sample. The processing unit 20 extracts one or more samples, including the sample with the highest evaluation value. In the example shown in fig. 3 (a), the sample D2a is closest to the change point B1. The processing unit 20 therefore sets the sample D2a as the start point D2 of the job C2.
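For illustration, the candidate sampling and the 2nd evaluation value described above could look as follows. Drawing candidates from a normal distribution centered on the standard time, and using the negative distance to the nearest change point as the score, are assumptions consistent with the text but not the only options.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_candidates(prev_start: float, standard_time: float,
                      n: int = 3, spread: float = 3.0) -> np.ndarray:
    """Draw n start-point candidates centred on prev_start + standard_time (seconds)."""
    return rng.normal(loc=prev_start + standard_time, scale=spread, size=n)

def evaluate_samples(samples: np.ndarray, change_points: np.ndarray) -> np.ndarray:
    """2nd evaluation value: higher when the sample is closer to its nearest change point B."""
    dists = np.abs(samples[:, None] - change_points[None, :]).min(axis=1)
    return -dists

# e.g. candidates for the start of job C2, 18 s being the standard time of job C1:
# cands = sample_candidates(prev_start=d1, standard_time=18.0)
# d2 = cands[np.argmax(evaluate_samples(cands, change_points))]
```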
Thereafter, the sampling of the start point and the setting of the start point of each job by the processing unit 20 are repeated similarly.
That is, as shown in fig. 3 (c), the processing unit 20 sets the samples D3a to D3c as candidates for the start point of the job C3. The processing unit 20 sets the sample D3c closest to the change point B2 as the start point D3 of the job C3.
As shown in fig. 4 (a), the processing unit 20 sets the samples D4a to D4c as candidates for the end point of the job C3 (that is, the end point of the 1st step C). The processing unit 20 sets the sample D4b closest to the change point B3 as the end point D4 of the job C3.
Through this series of processing, as shown in fig. 4 (b), a sample path E1 including a plurality of samples corresponding to the start points D1 to D3 and the end point D4 is generated. The processing unit 20 repeats the same processing while changing the position (time) of the start point D1. As a result, as shown in fig. 5 (a), a plurality of sample paths E1 to Ex (x is an integer of 2 or more) are generated.
The processing unit 20 calculates an evaluation value (1st evaluation value) for each generated sample path. The evaluation value is calculated based on the degree of fit between the positions of the start points and the end point included in the sample path and the positions of the plurality of change points B. As the degree of fit, for example, the distance between each sample and its closest change point B is used. For example, for each sample included in a sample path, the distance to the closest change point B is calculated, and the evaluation value is calculated based on the sum of these distances: the shorter the total distance, the higher the evaluation value. The processing unit 20 selects one sample path from among the plurality of sample paths based on the evaluation values.
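Under the same assumptions, the 1st evaluation value of a whole sample path could be the negated sum of each point's distance to its nearest change point, and the selected path is the one with the highest score:

```python
import numpy as np

def path_score(sample_path: np.ndarray, change_points: np.ndarray) -> float:
    """1st evaluation value of a sample path (start points D1..D3 and end point D4)."""
    dists = np.abs(sample_path[:, None] - change_points[None, :]).min(axis=1)
    return float(-dists.sum())          # shorter total distance -> higher evaluation value

# selected = max(sample_paths, key=lambda p: path_score(np.asarray(p), change_points))
```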
The selected sample path indicates which portion of the time series data corresponds to which job's actions. For example, in the time series data, the data between the sample set as the start point of the job C1 and the sample set as the start point of the job C2 represents the actions of the operator during the job C1.
The processing unit 20 outputs the correspondence between the time series data and each of the plurality of jobs to the outside. Suppose, for example, that the sample path E1 is selected from among the plurality of sample paths. As shown in fig. 5 (b), the processing unit 20 causes the display unit 40 to display the start points D1 to D3 and the end point D4 of the sample path E1, the parts of the time series data corresponding to the sample path E1, and the required time of each job calculated from the start points D1 to D3 and the end point D4.
The acquisition unit 10, the processing unit 20, the storage unit 30, and the display unit 40 are connected to each other, for example, by wire or wirelessly. Alternatively, at least a part of these may be connected to each other via a network.
According to the analysis system 1 and the analysis method of embodiment 1, which part of the time series data representing the operator's actions corresponds to which job can be analyzed automatically. Thus, a person no longer needs to associate and label the time series data with each job, and the time required for analysis can be shortened. Because the analysis time is shortened, analysis closer to real time becomes possible, for example. Further, the association is performed based on the change points of the state in the time series data. Therefore, even a user without analysis skills or experience can analyze jobs with high accuracy.
Fig. 6 is a flowchart showing a process in the analysis system according to embodiment 1.
As an example, the processing unit 20 executes the processing shown in fig. 6. First, as shown in fig. 2 (b), a plurality of change points in the time series data are extracted (step S1). One of the plurality of change points is set as the start point of the nth job (step S2). In step S2, n = 1 is set; that is, the start point of the first job is set. As shown in fig. 3 (a), N candidates for the start point of the (n+1)th job following the nth job are sampled (step S3). An evaluation value is calculated for each of the set samples (step S4). Based on the evaluation values, the start point of the (n+1)th job is determined, as shown in fig. 3 (c) and the like (step S5). Then n is incremented to n+1 (step S6). It is determined whether an (n+1)th job exists (step S7). If it does, step S3 is performed again. If it does not, samples corresponding to the start points of all the jobs included in the 1st step and to the end point of the 1st step have been set. As a result, as shown in fig. 4 (b), one sample path including the start point of each job and the end point of the 1st step is generated.
Next, it is determined whether the end condition is satisfied (step S8). If the end condition is not satisfied, another one of the plurality of change points is set as the start point of the nth job (step S9). In step S9, n = 1 is set. Thus, the start point of the first job is set to a position (time) different from the previous one. After step S9, steps S3 to S8 are repeated.
The end condition is, for example, that steps S3 to S7 have been executed with each of the plurality of change points extracted in step S1 set as the start point of the 1st step. Alternatively, the end condition may be that steps S3 to S7 have been executed for each change point within a predetermined range of the time series data.
When the end condition is satisfied, a plurality of sample paths have been generated, as shown in fig. 5 (a). An evaluation value is calculated for each of these sample paths (step S10). Based on the evaluation values, one or more sample paths are determined (step S11). The result for the determined sample path is displayed (step S12). As described above, the start points and end point included in the selected sample path, the parts of the time series data corresponding to the sample path, the required time of each job, and the like are displayed.
Although this differs from the processing shown in the flowchart of fig. 6, the evaluation value of each generated sample path may instead be calculated at the time of the judgment in step S8. For example, when the evaluation value satisfies a predetermined condition, the end condition is judged to be satisfied. In this case, steps S10 and S11 are omitted, and the result for the sample path whose evaluation value satisfies the condition is displayed in step S12.
In the example above, after candidates for the start point of the next job are sampled, they are narrowed down, and the start point of the following job is sampled based on the narrowed-down sample. Alternatively, the start point of the following job may be sampled from every candidate without narrowing down. With this alternative, the number of finally generated sample paths increases, so the probability of obtaining a sample path that better fits the plurality of change points B increases. Thus, the correspondence between the time series data and the jobs can be analyzed with higher accuracy. On the other hand, narrowing down the candidates before sampling the next start point reduces the amount of computation, so the time required for analysis can be shortened.
(modification 1: resampling)
Fig. 7 is a schematic diagram for explaining the processing in the analysis system according to the 1 st modification of embodiment 1.
Candidates for the start point of the subsequent job, sampled based on the start point of a certain job, may be resampled.
Fig. 7 (a) is an enlarged view of a part of the time series data shown in fig. 2 (a). As shown in fig. 7 (a), when the samples D2a to D2c are set as candidates for the start point of the job C2, the processing unit 20 calculates the evaluation value of each sample. The processing unit 20 then defines a probability distribution based on the calculated evaluation values, and resamples candidates for the start point of the job C2 according to the defined probability distribution.
In the example shown in fig. 7 (a), the sample D2a is closest to the change point B1, and the sample D2c is farthest from it. Accordingly, the probability distribution defined from the evaluation values of these samples is high in the vicinity of the sample D2a and low in the vicinity of the sample D2c. According to this probability distribution, as shown in fig. 7 (b), the processing unit 20 sets, for example, a plurality of samples D2a1 to D2a3 centered on the sample D2a and a plurality of samples D2b1 and D2b2 centered on the sample D2b. No sample is set for the sample D2c. The number of samples set for each of the samples D2a to D2c by resampling is determined from the evaluation values: more samples are set by resampling for a sample whose distance to a change point is short.
After resampling, the processing unit 20 sets each of the samples D2a1 to D2a3, D2b1, and D2b2 as a candidate for the start point of the job C2. Based on each of these samples, the processing unit 20 then samples and resamples candidates for the start point of the next job.
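This resampling step resembles the resampling of a particle filter. A sketch, assuming the evaluation values are turned into selection probabilities with a softmax and each redrawn sample is jittered slightly:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def resample(samples: np.ndarray, scores: np.ndarray,
             m: int = 5, jitter: float = 0.5) -> np.ndarray:
    """Redraw m candidates according to a probability distribution defined by the scores."""
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                              # softmax over evaluation values
    picked = rng.choice(samples, size=m, replace=True, p=weights)
    return picked + rng.normal(0.0, jitter, size=m)       # spread new samples around the picks

# e.g. resample(np.array([t_d2a, t_d2b, t_d2c]), evaluate_samples(...)) yields more
# candidates near D2a (closest to B1) and none near D2c, as in fig. 7 (b).
```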
Fig. 8 is a flowchart showing a process in the analysis system according to modification 1 of embodiment 1.
The flowchart of modification 1 shown in fig. 8 differs from the flowchart shown in fig. 6 in that steps S20 and S21 are included instead of step S5.
When the evaluation values have been calculated in step S4, a probability distribution is defined based on them (step S20). M candidates for the start point of the (n+1)th job are then resampled according to the defined probability distribution (step S21). Thereafter, if a next job exists, its start point is sampled in step S3 for each of the M samples.
By resampling based on the sampling result, start points and end points that better coincide with the plurality of change points B are more easily obtained. This can improve the accuracy of the analysis.
(modification 2: ripple)
Fig. 9 is a schematic diagram for explaining the processing in the analysis system according to modification 2 of embodiment 1.
The processing unit 20 may detect a characteristic pattern (ripple) that appears repeatedly in the time series data. First, the processing unit 20 cuts out a part of the time series data. The range (time length) of the cut data is set based on, for example, the time required for one of the plurality of jobs. As an example, as shown in fig. 9 (b), the processing unit 20 cuts out a part of the data from the entire time series data shown in fig. 9 (a).
Next, the processing unit 20 compares the cut data with other data of the same time length in the time series data. For example, the processing unit 20 calculates the DTW (Dynamic Time Warping) distance between the cut data and the other data. Using the DTW distance, the strength of the correlation between two pieces of data can be evaluated regardless of differences in time length. The processing unit 20 continues the comparison with other data until data similar to (highly correlated with) the cut data is found. For example, the processing unit 20 compares the cut data with the other data while shifting the compared range by one frame at a time.
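A minimal dynamic-programming DTW distance, for illustration (optimized implementations exist in libraries such as dtaidistance or tslearn, but the recurrence itself is standard):

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cumulative cost of the best warping path between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# sliding comparison, shifting the window one frame at a time as described above:
# scores = [dtw_distance(cut, series[s:s + len(cut)]) for s in range(len(series) - len(cut))]
```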
The processing unit 20 extracts portions similar to the cut data. For example, portions whose DTW distance is smaller than a predetermined value are extracted. If there is no portion whose DTW distance is smaller than the predetermined value, the processing unit 20 cuts out data of another range or another time length from the time series data and repeats the comparison with other data.
Through this processing, a plurality of mutually similar portions F (ripples) are extracted, as shown in fig. 9 (c). The processing unit 20 calculates the period G of the similar portions F. The period G is calculated, for example, by averaging the times between the centers of adjacent similar portions F. The processing unit 20 cuts out a part of the time series data based on the period G. For example, the processing unit 20 cuts out data with a length of twice the period G from the entire time series data. The position of the cut may be random or based on a predetermined rule. As in the method shown in fig. 2 (b), the processing unit 20 extracts a plurality of change points B from the cut data.
Fig. 10 is a flowchart showing a process in the analysis system according to modification 2 of embodiment 1.
The flowchart of modification 2 shown in fig. 10 differs from the flowchart shown in fig. 6 in that steps S30 to S32 are included instead of step S1.
When the time series data has been acquired, similar portions (ripples) in the time series data are first extracted (step S30). Based on the extracted similar portions, a part of the time series data is cut out (step S31). A plurality of change points in the cut time series data are extracted (step S32). Thereafter, step S2 is performed based on the extracted change points.
By extracting a plurality of mutually similar portions from the time series data and cutting out a part of the entire data based on them, the time required for the subsequent processing can be greatly shortened.
As another method of cutting out a part of the entire data, the standard time of the 1st step C can be used. For example, the processing unit 20 may cut out data with a length of two or three times the standard time of the 1st step C from the entire data. In this case, too, the time required for the subsequent processing can be significantly shortened.
In reality, however, the time an operator actually needs for the 1st step C may differ significantly from its standard time. If the operator's skill is high, the time actually required for the 1st step C may be shorter than the standard time. In that case, data longer than necessary is cut out, and the time required for the subsequent processing may become longer than when the data is cut out based on the similar portions. On the other hand, if the operator's skill is low, the time actually required for the 1st step C may be longer than the standard time. In that case, the cut data may not include all of the jobs, and proper analysis may not be possible.
By using the similar portion, the time required for processing can be effectively shortened while suppressing a decline in the accuracy of analysis.
(modification 3: similarity)
Fig. 11 (a) and 11 (b) are schematic diagrams for explaining the processing in the analysis system according to modification 3 of embodiment 1.
In calculating the evaluation value of the sample path, the processing unit 20 may refer to the similarity between the jobs included in step 1. The similarity is stored in the storage unit 30, for example.
As an example, the similarities (2nd similarities) between the jobs are set as in the table shown in fig. 11 (a). The similarities may be input by a person in advance, or may be set automatically by the processing unit 20. In the case of automatic setting, the similarities can be derived from the text of an operation manual. For example, the similarity between the names of the jobs described in the operation manual is calculated and used. Alternatively, when the job contents are described in the operation manual, the similarity between the texts describing the job contents may be used.
The more similar the contents of two jobs are, the more likely it is that the waveforms of the time series data obtained during these jobs resemble each other. Fig. 11 (b) shows the data corresponding to the sample path E1 among the plurality of sample paths E1 to Ex shown in fig. 5 (a). When calculating the evaluation value of a sample path, the processing unit 20 calculates the similarity (1st similarity) between the data corresponding to each job, in addition to the distances between the start and end points and the change points. The DTW distance may be used as the similarity.
For example, when the processing unit 20 calculates the DTW distance between the data corresponding to the job C1 and the data corresponding to the job C2, it refers to the table shown in fig. 11 (a). According to the table, the similarity between the jobs C1 and C2 is low. Therefore, a long DTW distance is consistent with the preset similarity. Likewise, a short DTW distance between the data corresponding to the job C1 and the data corresponding to the job C3 is consistent with the similarity stored in the table. The processing unit 20 raises the evaluation value as the agreement between the similarities of the data corresponding to the jobs and the similarities stored in the table increases.
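One way to express this consistency check, assuming the table of fig. 11 (a) is held as a dictionary with similarities in [0, 1], DTW distances are mapped to the same range, and the dtw_distance and path_score functions from the earlier sketches are reused; the table values, the mapping, and the bonus weighting are assumptions:

```python
import numpy as np

# hypothetical preset similarities between jobs, as in the table of fig. 11 (a)
JOB_SIMILARITY = {("C1", "C2"): 0.1, ("C1", "C3"): 0.8, ("C2", "C3"): 0.2}

def consistency_bonus(segments: dict[str, np.ndarray],
                      table: dict[tuple[str, str], float]) -> float:
    """Reward sample paths whose segment similarities agree with the preset table."""
    bonus = 0.0
    for (j1, j2), expected in table.items():
        observed = 1.0 / (1.0 + dtw_distance(segments[j1], segments[j2]))  # map distance to (0, 1]
        bonus -= abs(observed - expected)       # smaller mismatch -> higher evaluation value
    return bonus

# total 1st evaluation value of a path could then be, for example:
# score = path_score(path, change_points) + consistency_bonus(segments, JOB_SIMILARITY)
```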
Fig. 12 is a flowchart showing a process in the analysis system according to modification 3 of embodiment 1.
The flowchart of modification 3 shown in fig. 12 differs from the flowchart shown in fig. 6 in that steps S40 and S41 are included instead of step S10.
If it is determined in step S8 that the end condition is satisfied, the preset similarities between the jobs are referred to (step S40). Alternatively, the similarities may be calculated in step S40 by reading an operation manual or the like. Next, the evaluation value of each sample path is calculated using the similarities (step S41).
For example, for the start and end points included in each sample path, the distances to the change points are calculated. Further, based on each sample path, the parts of the time series data corresponding to the respective jobs are extracted. The similarities between the extracted data are then calculated and compared with the similarities between the jobs. The evaluation value is calculated based on the calculated distances and the result of the similarity comparison. A sample path is then determined based on the calculated evaluation values.
With this method, the correspondence between the time series data and the jobs can be analyzed with higher accuracy.
(modification 4: branching of the operation)
Fig. 13 is a schematic diagram for explaining the processing in the analysis system according to modification 4 of embodiment 1.
Here, a case in which the flow of jobs performed in the 1st step can take a plurality of forms is described. For example, as shown in fig. 13 (a), the 1st step C further includes a job C4. In some cases the job C4 is executed between the job C2 and the job C3; when the job C4 is not executed, the job C3 is executed immediately after the job C2. That is, the job that may follow the job C2 branches into the job C3 and the job C4.
In this case, after determining the start point of the job C2, the processing unit 20 samples candidates for the start points of both the job C3 and the job C4. In the example shown in fig. 13 (a), the standard time required for the job C4 is 40 seconds. After the start point D3 is determined as shown in fig. 13 (b), the processing unit 20 sets the samples D4a to D4c around 19 seconds, the standard time of the job C3, and sets the samples D4d to D4f around 40 seconds, the standard time of the job C4.
The processing unit 20 calculates the distance between each sample and its closest change point B and calculates an evaluation value. The processing unit 20 defines, for example, a probability distribution based on the evaluation values. Next, as in modification 1, the processing unit 20 resamples according to the probability distribution. Then, based on the samples set by resampling, the start point of the next job is sampled. For a sample for which no subsequent job exists, the processing ends and the sample path is stored in the storage unit 30.
Fig. 14 is a flowchart showing a process in the analysis system according to modification 4 of embodiment 1.
The flowchart of modification 4 shown in fig. 14 differs from the flowchart shown in fig. 6 in that step S50 is included instead of step S3, steps S20 and S21 are included instead of step S5, and steps S51 to S53 are included instead of step S7.
When the start point of the nth job has been set in step S2 or S9, N candidates for the start point of the (n+1)th job are sampled in step S50. At this time, if there are a plurality of jobs that can be the (n+1)th job, N candidates are sampled for the start point of each of them. Next, the evaluation value of each sample is calculated (step S4), and resampling is performed according to a probability distribution based on the evaluation values (steps S20 and S21). Then, the samples for which an (n+1)th job exists are extracted from the samples set by the resampling in step S21, and a set S of samples is generated (step S51). For the samples for which no (n+1)th job exists, the sample paths are stored (step S52).
After steps S51 and S52, it is determined whether the set S contains any samples (step S53). If it does, N candidates for the start point of the (n+1)th job are sampled again for each sample included in the set S (step S50). If it does not, step S8 is executed in the same manner as in the flowchart shown in fig. 6.
When the flow of jobs included in the 1st step branches, the time series data cannot be associated with the jobs accurately unless the branch is taken into account in the analysis. In the analysis system and the analysis method according to this modification, even when such a branch exists, candidates are sampled for each job after the branch. Thus, the accuracy of the association between the time series data and the jobs can be improved.
(embodiment 2)
Fig. 15 is a block diagram showing the structure of an analysis system according to embodiment 2.
As shown in fig. 15, the analysis system 2 according to embodiment 2 further includes a learning data storage unit 50 and an RNN storage unit 60.
According to the analysis system 1 of embodiment 1, the part of the time series data corresponding to the 1st step can be extracted from the entire time series data. The analysis system 2 according to embodiment 2 trains a recurrent neural network (hereinafter, RNN) using the extracted part of the time series data. The analysis system 2 also analyzes the actions of an operator using the RNN. The training of the RNN by the processing unit 20 and the analysis using the RNN are described below.
(learning)
When a part of the time series data corresponding to the 1st step is extracted, the processing unit 20 stores that data in the learning data storage unit 50. In addition to the time series data, the learning data storage unit 50 stores information on the proficiency of the operator from whom the time series data was acquired. The proficiency is stored in the learning data storage unit 50 in advance by the user. Alternatively, the proficiency may be calculated from the time length of the time series data and stored; in this case, the shorter the time required for the 1st step, the higher the calculated proficiency.
The processing unit 20 trains the RNN stored in the RNN storage unit 60 using the time series data stored in the learning data storage unit 50. An RNN is one type of neural network. A neural network models a biological recognition system using a plurality of artificial neurons (nodes). The neurons are connected to each other by artificial synapses (connections) for which weights are set.
Fig. 16 is a schematic diagram showing a recurrent neural network.
As shown in fig. 16, the RNN 200 includes an input layer 201, an intermediate layer 202, and an output layer 203. In the RNN, the output of a neuron N of the intermediate layer 202 at a certain time step is fed to the neuron N of the intermediate layer 202 at the subsequent time step.
The processing unit 20 inputs the time series data for learning to the neurons N included in the input layer 201, and inputs teacher data to the neurons N included in the output layer 203. A value indicating proficiency is set as the teacher data. That is, when the input time series data is based on the actions of a skilled operator, a value indicating high proficiency is set for the corresponding neuron N of the output layer 203. The weight of each synapse included in the RNN is changed by learning so that the difference between the output obtained for the input time series data and the teacher data becomes small. The learning is performed using time series data acquired from a plurality of operators with different proficiency levels. The processing unit 20 stores the trained RNN in the RNN storage unit 60.
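A minimal PyTorch sketch of the supervised setup described here: an LSTM over the extracted time series with a proficiency label as the teacher data. The layer sizes, the number of proficiency classes, the loss, and the optimizer are assumptions; the embodiment does not prescribe a specific framework.

```python
import torch
import torch.nn as nn

class ProficiencyRNN(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)  # intermediate layer 202
        self.head = nn.Linear(hidden, n_classes)                   # output layer 203

    def forward(self, x):                        # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out)                    # activity per time step and proficiency class

model = ProficiencyRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# x: batch of segmented time series, y: proficiency label per sequence (hypothetical tensors)
# logits = model(x)[:, -1, :]                    # use the final time step for training
# loss = criterion(logits, y)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```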
The learning data storage unit 50 and the RNN storage unit 60 are a hard disk drive, a flash memory, a network hard disk, or the like. One storage device may function as the storage unit 30, the learning data storage unit 50, and the RNN storage unit 60.
(analysis)
The analysis using the RNN can, for example, examine the proficiency of a certain operator's actions and find points to be improved in those actions. For the analysis, time series data representing the actions of the operator to be analyzed in the 1st step is stored in the storage unit 30 in advance. The time series data is extracted by, for example, the processing described for the analysis system 1.
The processing unit 20 inputs the time series data to be analyzed to the trained RNN stored in the RNN storage unit 60. When the time series data is input to the input layer 201 of the RNN, a neuron N of the output layer 203 may react. The processing unit 20 detects the reaction of the neurons N of the output layer 203. For example, the processing unit 20 compares the activity of a neuron N with a predetermined threshold value, and detects that the neuron N is reacting when its activity exceeds the threshold value. The processing unit 20 may extract from the time series data the data of the period during which the neuron N reacts. Alternatively, the processing unit 20 may extract both a part of the time series data and data indicating the activity of the neuron N during that period.
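Continuing the training sketch above, the reaction detection could be a simple threshold over the per-time-step activity of the output neuron of interest; the threshold value and the tensor names are assumptions.

```python
import torch

THRESHOLD = 0.8                                   # assumed activity threshold

# model is the trained network from the sketch above; x_new holds the time series
# of the operator being analyzed, shaped (1, time, features) (hypothetical tensor).
with torch.no_grad():
    activity = torch.softmax(model(x_new), dim=-1)[0, :, 1]   # neuron for proficiency class 1
reacting = (activity > THRESHOLD).nonzero(as_tuple=True)[0]   # time steps where the neuron reacts
# the display unit can then highlight the corresponding part of the time series,
# e.g. series[reacting.min():reacting.max() + 1], as in fig. 20 (a).
```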
The display unit 40 displays the detection result of the processing unit 20. For example, the display unit 40 displays a part of the time series data extracted by the processing unit 20 so as to be distinguishable from other parts. In the case where no reaction of the neuron N is detected, the display unit 40 may display a result indicating that no reaction is detected.
The neurons N of the intermediate layer 202 of the RNN have, for example, an LSTM (Long Short-Term Memory) structure. Compared with neural networks having other structures, a neural network with the LSTM structure can improve the recognition rate for time series data of actions spanning longer periods.
Fig. 17 is a block diagram showing the LSTM configuration.
As shown in fig. 17, the LSTM structure 300 includes a forget gate 310, an input gate 320, and an output gate 330.
In fig. 17, x_t denotes the input value to the neuron N at time t, C_t denotes the state of the neuron N at time t, f_t denotes the output value of the forget gate 310 at time t, i_t denotes the output value of the input gate 320 at time t, o_t denotes the output value of the output gate 330 at time t, and h_t denotes the output value of the neuron N at time t. f_t, i_t, C_t, o_t, and h_t are given by the following formulas 1 to 5, respectively.
[Formula 1]
f_t = σ(W_f · [h_{t-1}, x_t] + b_f)
[Formula 2]
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)
[Formula 3]
C_t = f_t * C_{t-1} + i_t * tanh(W_C · [h_{t-1}, x_t] + b_C)
[Formula 4]
o_t = σ(W_o · [h_{t-1}, x_t] + b_o)
[Formula 5]
h_t = o_t * tanh(C_t)
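Formulas 1 to 5 translate directly into code; a single-step NumPy sketch with caller-supplied weights, for illustration only:

```python
import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W_f, b_f, W_i, b_i, W_C, b_C, W_o, b_o):
    """One LSTM time step implementing formulas 1 to 5."""
    z = np.concatenate([h_prev, x_t])                    # [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)                         # forget gate 310 (formula 1)
    i_t = sigmoid(W_i @ z + b_i)                         # input gate 320  (formula 2)
    C_t = f_t * C_prev + i_t * np.tanh(W_C @ z + b_C)    # cell state      (formula 3)
    o_t = sigmoid(W_o @ z + b_o)                         # output gate 330 (formula 4)
    h_t = o_t * np.tanh(C_t)                             # output          (formula 5)
    return h_t, C_t
```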
The neurons N of the intermediate layer 202 of the RNN are not limited to the example shown in fig. 17, and may instead have a Gated Recurrent Unit (GRU) structure, a bidirectional LSTM structure, or the like.
Fig. 18 is a flowchart showing a process in the analysis system according to embodiment 2.
First, the analysis processing shown in the flowcharts of fig. 6, 8, 10, 12, 14, or the like is executed, and time series data representing the actions in the 1st step is acquired (step S60). Next, the RNN is trained using the time series data acquired in step S60 (step S61). Next, the time series data to be analyzed is acquired (step S62). The acquired time series data is input to the trained RNN, and the reaction of the neurons is detected (step S63). The detection result is displayed on the display unit 40 (step S64).
Effects of embodiment 2 will be described.
According to the analysis system 2 and the analysis method of embodiment 2, whether the actions of a certain operator in the 1st step include points to be improved can be detected. Conventionally, whether an action should be improved has been confirmed by, for example, human observation. In that case, however, the observer has to watch all the jobs of every operator, which takes a long time. Moreover, since the extraction of points to be improved depends on the subjectivity, experience, and proficiency of the observer, the results may vary.
In contrast, in the analysis system 2 and the detection method according to the present embodiment, points to be improved are detected based on the RNN. The RNN is trained using data based on the actions of other operators. Thus, points to be improved are detected objectively, without depending on the experience of an observer. Further, since the analysis system 2 and the detection method detect points to be improved automatically, an observer is no longer required. Thus, according to the present embodiment, an analysis system and a detection method capable of automatically and objectively detecting operator actions to be improved are provided.
The description here has focused on detecting operator actions to be improved. However, the application of the analysis system 2 and the detection method according to the embodiment is not limited to this example. For example, they may also be used to detect excellent operator actions.
The analysis system 2 uses RNN to analyze the job. The following effects are obtained by using RNN.
The work cycle varies from one operator to another. Thus, even if operators start a job at the same time, the jobs they are performing at a given moment drift apart as time passes. By using an RNN, the influence of this variation in work cycle can be eliminated and actions to be improved can be detected during the work. Using an RNN also makes it possible to take into account the correlation between an action at a certain time step and the actions at subsequent time steps. As a result, not only an action that directly increases the working time but also the actions leading up to it can be detected.
Further, in the analysis system 2 and the detection method according to the present embodiment, it is desirable that the neurons N in the intermediate layer 202 of the RNN 200 have the LSTM structure shown in fig. 17. With the LSTM structure, the state of a neuron N included in the intermediate layer 202 can be maintained over a longer period, and the interdependence of the actions at the respective time points can be analyzed over a longer span. Thus, a neuron N included in the output layer 203 can react not only to the action at a certain time point but also to the actions before and after it. As a result, the operator actions to be improved can be detected more comprehensively.
Further, by using the processing method described in embodiment 1, time series data required for RNN learning can be automatically extracted. Thus, the time required for preparation of RNN-based analysis can be greatly shortened.
Example
Fig. 19 (a) and 19 (b) are graphs showing data obtained in the examples.
Fig. 20 (a) is a graph showing data obtained in the examples. Fig. 20 (b) and 20 (c) are graphs showing the activity of neurons in examples.
The data shown in fig. 19 (a), 19 (b), and 20 (a) were obtained from the actions of the 1st, 2nd, and 3rd operators, respectively, while performing the 1st step. The data of fig. 19 (a), 19 (b), and 20 (a) were obtained by attaching accelerometers to the right wrists of the 1st, 2nd, and 3rd operators.
In fig. 19 (a), 19 (b) and 20 (a), the horizontal axis represents time Ti and the vertical axis represents acceleration Ac. The solid line and the broken line represent the acceleration in the X-axis direction and the acceleration in the Y-axis direction, respectively.
The acquisition unit 10 acquires the data shown in fig. 19 (a) and 19 (b) and stores them in the storage unit 30. The processing unit 20 refers to the data shown in fig. 19 (a) and 19 (b) and trains the RNN stored in the RNN storage unit 60. As a comparison of fig. 19 (a) and 19 (b) shows, the time the 1st operator needs to perform the 1st step is shorter than the time the 2nd operator needs. That is, the 1st operator is more proficient than the 2nd operator.
The processing unit 20 inputs the data shown in fig. 20 (a) to the trained RNN stored in the RNN storage unit 60. Fig. 20 (b) and 20 (c) are graphs showing the activity of the 1st and 2nd neurons, respectively, when the data shown in fig. 20 (a) are input. When the activity of the 1st neuron is high, the action corresponding to the input data corresponds to the 1st proficiency of the 1st operator. When the activity of the 2nd neuron is high, the action corresponding to the input data corresponds to the 2nd proficiency of the 2nd operator. In fig. 20 (b) and 20 (c), the horizontal axis represents time Ti and the vertical axis represents the activity Act of the neuron. The greater the absolute value of a neuron's activity, the more strongly that neuron is reacting.
As fig. 20 (b) and 20 (c) show, the activity of the 2nd neuron increases during the period from time T1 to time T2. That is, the actions of the 3rd operator in the period from time T1 to time T2 correspond to the 2nd proficiency of the 2nd operator. For example, during the period from time T1 to time T2, the activity of the 2nd neuron exceeds the threshold value, and the processing unit 20 detects that the 2nd neuron is reacting. The display unit 40 displays, for example as shown in fig. 20 (a), the data of the 3rd operator from time T1 to time T2 so that it can be distinguished from the other parts. The result of this example indicates that the actions during which the reaction of the 2nd neuron is detected should preferably be improved.
By using the analysis system and the analysis method according to the embodiments described above, the jobs in the 1st step can be analyzed more automatically, and the time required for analysis can be shortened. Likewise, by using a program that causes the processing unit to execute the above processing, or a storage medium storing that program, the jobs in the 1st step can be analyzed more automatically, and the time required for analysis can be shortened.
While the present invention has been described with reference to several embodiments, these embodiments are presented by way of example and are not intended to limit the scope of the invention. These novel embodiments can be implemented in other various forms, and various omissions, substitutions, modifications, and the like can be made without departing from the spirit of the invention. These embodiments and modifications thereof are included in the scope and gist of the invention, and are included in the invention described in the claims and their equivalents. The above embodiments can be combined with each other.
Description of the reference numerals
1, 2: analysis system; 10: acquisition unit; 20: processing unit; 30: storage unit; 40: display unit; 50: learning data storage unit; 60: RNN storage unit; 201: input layer; 202: intermediate layer; 203: output layer; 300: LSTM structure; 310: forget gate; 320: input gate; 330: output gate; A: state; B1 to B3: change points; C: 1st step; C1 to C4: jobs; D1 to D3: start points; D4: end point; D2a to D2c, D2a1 to D2a3, D2b1, D2b2, D3a to D3c, D4a to D4f: samples; E1 to Ex: sample paths; F: similar portions (ripples); G: period; N: neuron; S1 to S12, S20, S21, S30 to S32, S40, S41, S50 to S53, S60 to S64: steps; Ti: time; T1, T2: times

Claims (11)

1. An analysis system, comprising:
an acquisition unit that acquires time series data representing actions of an operator in a 1st step including a plurality of jobs; and
a processing unit configured to detect a plurality of change points of the state in the time series data, and to use the plurality of change points to associate the time series data with each of the plurality of jobs,
the plurality of jobs include a 1st job and a 2nd job executed after the 1st job, and
the processing unit, in the association,
generates a sample path including a plurality of samples corresponding to start points and end points of the 1st job and the 2nd job,
calculates a 1st evaluation value based on a degree of fit between the plurality of samples and the plurality of change points, and
associates the time series data with the 1st job and the 2nd job by using the 1st evaluation value.
2. The analysis system of claim 1, wherein,
the processing unit
extracts a plurality of mutually similar portions from the time series data,
cuts out a part of the time series data based on the length of time between the similar portions, and
in the association, associates the cut time series data with each of the plurality of jobs.
3. The analysis system of claim 1, wherein,
the processing unit, in the association,
extracts 1st data corresponding to the 1st job and 2nd data corresponding to the 2nd job from the time series data based on the plurality of samples,
calculates a 1st similarity between the 1st data and the 2nd data, and
calculates the 1st evaluation value based on the degree of fit, the 1st similarity, and a 2nd similarity between the 1st job and the 2nd job.
4. The analysis system according to claim 1 or 3, wherein,
the processing unit, in generating the sample path,
sets one of the plurality of change points as a start point of the 1st job,
sets a plurality of samples as candidates for a start point of the 2nd job based on the time required for the 1st job,
calculates, for each of the plurality of samples, a distance to the closest change point, and
sets one of the plurality of samples as the start point of the 2nd job using the plurality of distances.
5. The analysis system according to claim 1 or 3, wherein,
the processing unit, in generating the sample path,
sets one of the plurality of change points as a start point of the 1st job,
sets a plurality of samples as candidates for a start point of the 2nd job based on the time required for the 1st job,
calculates, for each of the plurality of samples, a 2nd evaluation value using a distance to the closest change point, and
sets other samples as candidates for the start point of the 2nd job according to a probability distribution defined using the plurality of 2nd evaluation values.
6. The analysis system according to any one of claims 1 to 3, wherein
the processing unit
generates a plurality of sample paths while changing the change point set as the start point of the 1st job,
calculates the 1st evaluation value for each of the plurality of sample paths,
selects one of the plurality of sample paths based on the plurality of 1st evaluation values, and
sets the plurality of samples included in the selected sample path as start points and end points of the plurality of jobs.
7. The analysis system according to any one of claims 1 to 3, wherein
the processing unit calculates the time length of the part of the time series data corresponding to each of the plurality of jobs and outputs the calculated time lengths to the outside.
8. The analysis system according to any one of claims 1 to 3, wherein
the processing unit trains a recurrent neural network using the time series data associated with each of the plurality of jobs and information on the proficiency of the operator.
9. The analysis system of claim 8, wherein,
the processing unit inputs other time series data, representing the actions of another operator in the 1st step, to the trained recurrent neural network, and detects a reaction of the output layer of the recurrent neural network.
10. An analysis method, wherein,
acquiring time series data representing actions of an operator in a 1st step including a plurality of jobs;
detecting a plurality of change points of the state in the time series data; and
associating the time series data with each of the plurality of jobs using the plurality of change points, wherein
the plurality of jobs include a 1st job and a 2nd job executed after the 1st job, and
in the associating,
a sample path including a plurality of samples corresponding to start points and end points of the 1st job and the 2nd job is generated,
a 1st evaluation value is calculated based on a degree of fit between the plurality of samples and the plurality of change points, and
the time series data is associated with the 1st job and the 2nd job by using the 1st evaluation value.
11. A storage medium storing a program for causing a processing unit to execute a method including:
detecting a plurality of change points of the state in time series data representing actions of an operator in a 1st step including a plurality of jobs; and
associating the time series data with each of the plurality of jobs using the plurality of change points, wherein
the plurality of jobs include a 1st job and a 2nd job executed after the 1st job, and
in the associating,
a sample path including a plurality of samples corresponding to start points and end points of the 1st job and the 2nd job is generated,
a 1st evaluation value is calculated based on a degree of fit between the plurality of samples and the plurality of change points, and
the time series data is associated with the 1st job and the 2nd job by using the 1st evaluation value.
CN201910227735.0A 2018-08-09 2019-03-25 Analysis system, analysis method, program, and storage medium Active CN110826383B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018150213A JP7049212B2 (en) 2018-08-09 2018-08-09 Analytical systems, analytical methods, programs, and storage media
JP2018-150213 2018-08-09

Publications (2)

Publication Number Publication Date
CN110826383A CN110826383A (en) 2020-02-21
CN110826383B true CN110826383B (en) 2023-12-01

Family

ID=69547574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910227735.0A Active CN110826383B (en) 2018-08-09 2019-03-25 Analysis system, analysis method, program, and storage medium

Country Status (2)

Country Link
JP (1) JP7049212B2 (en)
CN (1) CN110826383B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113486072B (en) * 2020-03-17 2024-06-18 中国移动通信集团福建有限公司 Data analysis method, device, electronic equipment and computer readable storage medium
JP7530220B2 (en) 2020-06-23 2024-08-07 株式会社東芝 Analytical device, analytical system, analytical method, program, and storage medium
WO2023276083A1 (en) * 2021-06-30 2023-01-05 慎平 大杉 Information processing system, trained model, information processing method, and program

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106997505A (en) * 2015-11-11 2017-08-01 株式会社东芝 Analytical equipment and analysis method
JP2018045512A (en) * 2016-09-15 2018-03-22 オムロン株式会社 Workability evaluation device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5884220B2 (en) * 2011-03-07 2016-03-15 国立大学法人 筑波大学 Work management system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106997505A (en) * 2015-11-11 2017-08-01 株式会社东芝 Analytical equipment and analysis method
JP2018045512A (en) * 2016-09-15 2018-03-22 オムロン株式会社 Workability evaluation device

Also Published As

Publication number Publication date
CN110826383A (en) 2020-02-21
JP7049212B2 (en) 2022-04-06
JP2020027324A (en) 2020-02-20

Similar Documents

Publication Publication Date Title
CN110826383B (en) Analysis system, analysis method, program, and storage medium
CN108602191B (en) Operation information generating device, operation information generating method, and recording medium
JP4752721B2 (en) Movement pattern identification device, movement pattern identification method, movement pattern identification program, and recording medium recording the same
US6256033B1 (en) Method and apparatus for real-time gesture recognition
JP6897037B2 (en) Workability evaluation device
CN109740446A (en) Classroom students ' behavior analysis method and device
KR101872907B1 (en) Motion analysis appratus and method using dual smart band
US20200302593A1 (en) Analysis apparatus and analysis method
KR20190072652A (en) Information processing apparatus and information processing method
JP2012178036A (en) Similarity evaluation device and method, and similarity evaluation program and storage medium for the same
JP2008077424A (en) Operation analysis system and method
Calvo et al. Human activity recognition using multi-modal data fusion
CN110132276B (en) Self-adaptive step length estimation method based on pedestrian motion state
JP2017156978A (en) Work operation recognition system
JP2019159885A (en) Operation analysis device, operation analysis method, operation analysis program and operation analysis system
CN113379207B (en) Control method of training platform, training platform and readable storage medium
KR20210054349A (en) Method for predicting clinical functional assessment scale using feature values derived by upper limb movement of patients
Malawski et al. Real-time action detection and analysis in fencing footwork
JP7165108B2 (en) Work training system and work training support method
JPH08115408A (en) Finger language recognition device
JP7016680B2 (en) Biometric information processing equipment, biometric information processing methods and programs
CN108984507A (en) The resume generation method and device of mobility worker
CN112932469A (en) CNN + Transformer-based triaxial acceleration activity identification method
Rusydi et al. Facial Features Extraction Based on Distance and Area of Points for Expression Recognition
EP3758023B1 (en) Method and device for assessing physical movement of an operator during a work cycle execution on an industrial manufacturing work station

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant