WO2022034685A1 - Explanation presentation device, explanation presentation method, and explanation presentation program - Google Patents

Explanation presentation device, explanation presentation method, and explanation presentation program Download PDF

Info

Publication number
WO2022034685A1
WO2022034685A1 PCT/JP2020/030891
Authority
WO
WIPO (PCT)
Prior art keywords
unit
feature amount
skill
sensor data
explicit knowledge
Prior art date
Application number
PCT/JP2020/030891
Other languages
French (fr)
Japanese (ja)
Inventor
雄一 佐々木
翔貴 宮川
勝 木村
Original Assignee
三菱電機株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三菱電機株式会社
Priority to JP2022542564A priority Critical patent/JP7158633B2/en
Priority to PCT/JP2020/030891 priority patent/WO2022034685A1/en
Publication of WO2022034685A1 publication Critical patent/WO2022034685A1/en

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/418Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/04Manufacturing
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00Teaching not covered by other main groups of this subclass
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Definitions

  • This disclosure relates to an explanation presentation device, an explanation presentation method, and an explanation presentation program.
  • In loan screening, for example, an AI may present to the loan applicant a counterfactual explanation, that is, an explanation of what would happen if a situation that is not actually realized were to occur. For example, the AI presents a counterfactual explanation such as "If your annual income were ... 10,000 yen higher, you could borrow the loan you applied for" or "If the balance of your other loans were ... 10,000 yen lower, you could borrow the loan you applied for."
  • Patent Document 1 proposes a work support device for making the subject learn the work.
  • This device is a device for presenting a template of instruction content to a target person (for example, a worker) so that the movement of the target person approaches the movement of a model person (for example, a skilled person).
  • If the above AI is applied to the device described in Patent Document 1, it is possible to present to the subject, based on the difference in feature amounts between the subject and the model (for example, the difference in skill level), a counterfactual explanation such as "If ... had been done, ... would have been achieved."
  • However, when the explanation involves tacit knowledge, that is, knowledge that is used empirically but cannot easily be explained in words, the subject may not be able to understand the meaning of the presented counterfactual explanation (that is, the meaning of the behavior of the AI).
  • Here, tacit knowledge is subjective knowledge built from an individual's past experience, for example, knowledge based on experience or intuition.
  • This disclosure is made to solve the above problem, and aims to make counterfactual explanations presented based on the results of machine learning easier to understand.
  • The explanation presentation device includes a feature acquisition unit that acquires the feature amount of a behavior from a database storing sensor data obtained by detecting the behavior of a worker and explicit knowledge, which is acquired knowledge that can be interpreted by a human,
  • and an explanation extraction unit that extracts presentation information including the feature amount and the explicit knowledge associated with the feature amount.
  • The explanation presentation method is a method executed by the explanation presentation device, using a database that stores sensor data obtained by detecting the behavior of a worker and explicit knowledge, which is acquired knowledge that can be interpreted by a human.
  • FIG. 1 is a diagram showing an example of the hardware configuration of the explanation presentation device.
  • FIGS. 2(A) and 2(B) are diagrams showing an example of the configuration of a learning device that performs machine learning to generate a trained model and an inference device that performs inference using the trained model and outputs an inference result.
  • FIG. 3 is a functional block diagram showing the configuration of the explanation presentation device according to Embodiment 1.
  • FIG. 4 is a flowchart showing the generation operation of the trained model in the explanation presentation device according to Embodiment 1.
  • FIG. 5 is a diagram showing an example of a model using a fully connected neural network.
  • FIG. 6 is a flowchart showing the generation operation of the trained model that associates explicit knowledge with implicit feature amounts in the explanation presentation device according to Embodiment 1.
  • FIG. 7 is a flowchart showing an example of the operation of the explanation presentation device according to Embodiment 1.
  • FIG. 8 is a diagram showing, in table format (Table 1), an example of data generated by the explanation presentation device according to Embodiment 1.
  • FIG. 9 is a diagram showing, in table format (Table 2), an example of data generated by the explanation presentation device according to Embodiment 1.
  • FIG. 10 is a diagram showing an example of a search in an implicit feature space.
  • FIG. 11 is a diagram showing an example of a visualized explanation.
  • FIG. 12 is a functional block diagram showing the configuration of the explanation presentation device according to Embodiment 2.
  • FIG. 13 is a flowchart showing an example of the operation of the explanation presentation device according to Embodiment 2.
  • FIG. 14 is a diagram showing, in table format (Table 3), an example of data generated by the explanation presentation device according to Embodiment 2.
  • FIG. 15 is a diagram showing, in table format (Table 4), an example of the obtained correlation coefficients.
  • FIG. 16 is a diagram showing, in table format (Table 5), the behaviors recommended for acquiring the skill, obtained as the sum of the explicit knowledge associated with the behaviors weighted by the correlation coefficients.
  • FIG. 17 is a functional block diagram showing the configuration of the explanation presentation device according to Embodiment 3.
  • FIG. 18 is a diagram showing a configuration example of a network that executes multitask learning.
  • FIG. 19 is a flowchart showing an example of the operation of the explanation presentation device according to Embodiment 3.
  • FIG. 20 is a diagram showing, in table format (Table 6), an example of data generated by the explanation presentation device according to Embodiment 3.
  • FIG. 21 is a diagram in which the sensor data shown in FIG. 20 is visualized and the particularly different parts are displayed as change #1 and change #2.
  • FIG. 22 is a diagram showing an example of an Attention Branch Network.
  • FIG. 23 is a functional block diagram showing the configuration of the explanation presentation device according to Embodiment 4.
  • FIG. 24 is a functional block diagram showing the configuration of the explanation presentation device according to Embodiment 5.
  • FIG. 25 is a diagram showing an example of time-divided sensor data in the explanation presentation device according to Embodiment 5.
  • FIG. 26 is a diagram showing an example of visualized presentation information.
  • FIG. 1 is a diagram showing an example of the hardware (H/W) configuration of the explanation presentation device 1 according to Embodiment 1.
  • The explanation presentation device 1 includes a processor 11 as an information processing unit, a memory 12 for storing information, a storage device 13 such as a hard disk drive (HDD) or a solid state drive (SSD), and an operation device 14 as a user interface that accepts user operations.
  • The explanation presentation device 1 is, for example, a computer.
  • the explanation presenting device 1 may have a display 15 which is an image display unit for displaying information to a human being.
  • the display 15 may be a device having a function of audio output.
  • the explanation presenting device 1 may have a sensor 16 which is a detection unit for detecting the behavior (for example, movement) of a human (for example, a worker).
  • the sensor 16 may be a camera that is an image pickup device that captures an image.
  • the sensor 16 outputs sensor data.
  • the sensor data includes video data.
  • the processor 11 executes, for example, generation of a trained model by machine learning and inference using the trained model that is the result of machine learning.
  • In FIG. 1, the processor that performs machine learning and the processor that performs inference are a single common processor, but they may be different processors.
  • the processor 11 executes, for example, a program that is software stored in the memory 12.
  • The program stored in the memory 12 can include an explanation presentation program for causing the explanation presentation device 1 according to Embodiment 1 to carry out the explanation presentation method.
  • The explanation presentation program is provided to the explanation presentation device 1, for example, as a program recorded on a computer-readable recording medium, or by downloading via a network.
  • The explanation presentation devices 2 to 5 according to Embodiments 2 to 5 described later also have the same configuration as the hardware configuration shown in FIG. 1.
  • FIGS. 2(A) and 2(B) are diagrams showing an example of the configuration of a learning device 20 that performs machine learning to generate a trained model and an inference device 30 that performs inference using the trained model and outputs an inference result.
  • the learning device 20 has a data acquisition unit 21 and a model generation unit 22.
  • the learning device 20 may include a trained model storage unit 23.
  • the data acquisition unit 21 and the model generation unit 22 can be realized by, for example, the memory 12 and the processor 11 shown in FIG.
  • the trained model storage unit 23 can be realized by, for example, the storage device 13 shown in FIG.
  • the data acquisition unit 21 acquires learning data.
  • the training data includes, for example, sensor data (that is, input signal) and correct answer data (that is, teacher signal).
  • the model generation unit 22 generates a trained model used for inferring the optimum output based on the learning data output from the data acquisition unit 21.
  • the model generation unit 22 learns by supervised learning according to, for example, a model of a neural network.
  • Supervised learning is a method in which pairs of input data and result (label) data are given to the learning device as learning data, the features existing in the learning data are learned, and the result can then be inferred from the input.
  • the model generation unit 22 generates and outputs a trained model by executing the above learning.
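  • The following is a minimal sketch of this supervised learning step, assuming PyTorch; the network shape, the optimizer, the file name, and the variable names are illustrative and not taken from this disclosure.

```python
import torch
import torch.nn as nn

# hypothetical network: 3 inputs -> 2 hidden -> 2 outputs (e.g. proficiency, accuracy)
model = nn.Sequential(nn.Linear(3, 2), nn.ReLU(), nn.Linear(2, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

def train_step(sensor_data, teacher_signal):
    """One update: the error against the teacher signal is back-propagated."""
    optimizer.zero_grad()
    loss = loss_fn(model(sensor_data), teacher_signal)
    loss.backward()        # error back-propagation
    optimizer.step()       # adjust the model parameters
    return loss.item()

# learning data: pairs of sensor data (input signal) and correct answer data (teacher signal)
x = torch.randn(8, 3)
y = torch.randn(8, 2)
for _ in range(100):
    train_step(x, y)
torch.save(model.state_dict(), "trained_model.pt")  # store the trained model
```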
  • the trained model storage unit 23 stores the trained model output from the model generation unit 22.
  • the inference device 30 includes a data acquisition unit 31 and an inference unit 32.
  • the inference device 30 may include a trained model storage unit 33.
  • the data acquisition unit 31 and the inference unit 32 can be realized by, for example, the memory 12 and the processor 11 shown in FIG.
  • the trained model storage unit 33 can be realized by, for example, the storage device 13 shown in FIG.
  • Although the trained model storage unit 33 and the trained model storage unit 23 shown in FIG. 2(A) are provided in different storage devices in this example, they may be provided in the same storage device.
  • the data acquisition unit 31 acquires the sensor data of the worker who is the target person who wants to acquire the skill.
  • the sensor data of the worker is, for example, video data obtained by photographing the worker.
  • The inference unit 32 outputs an inference result obtained by using the trained model generated by the learning device 20. That is, the inference unit 32 can output the inference result by inputting the sensor data acquired by the data acquisition unit 31 into the trained model stored in the trained model storage unit 33.
  • FIG. 3 is a functional block diagram showing the configuration of the explanation presentation device 1 according to Embodiment 1.
  • The explanation presentation device 1 includes a data storage unit 101, a feature extraction unit 102 as a feature acquisition unit, a skill determination unit 103, an explicit knowledge linking unit 104, an explanation extraction unit 105 as a counterfactual explanation extraction unit, and an explicit knowledge selection unit 106.
  • the data storage unit 101 is, for example, a part of the storage device 13 shown in FIG.
  • the data storage unit 101 stores the database.
  • the data storage unit 101 may be an external storage device (for example, a storage device of a server on a network) capable of communicating with the explanation presentation device 1.
  • The feature extraction unit 102, the skill determination unit 103, the explicit knowledge linking unit 104, the explanation extraction unit 105, and the explicit knowledge selection unit 106 are realized by, for example, the processor 11 executing the explanation presentation program.
  • The feature extraction unit 102, the skill determination unit 103, and the explicit knowledge linking unit 104 can form, for example, the learning device 20 shown in FIG. 2(A). The explanation extraction unit 105 and the explicit knowledge selection unit 106 can form, for example, the inference device 30 shown in FIG. 2(B).
  • The data storage unit 101 stores, as teacher signals, explicit knowledge acquired from experts in various fields (that is, skilled workers who have already acquired advanced skills) and the results obtained by taking the same behavior as the experts' behavior (or similar behavior).
  • Explicit knowledge is acquired knowledge that can be interpreted by a human.
  • Explicit knowledge can include, for example, sensor data (for example, video data) obtained by photographing an expert, information obtained from an expert by hearing or the like, and the like.
  • The results obtained by taking the same actions as those of a skilled person include skill levels (for example, proficiency, processing accuracy, etc.).
  • The proficiency is a value indicating the degree of the worker's ability to perform a task; it may be subjective and qualitative.
  • The proficiency is, for example, a value obtained by interviewing the worker (including questionnaires) and evaluating the skill numerically. The proficiency may also be corrected based on the result of comparing the value obtained from the worker with quantitative data or the like.
  • Some people with the same proficiency rate their own skill highly (for example, those who answer "my skill level is very high"), while others rate it humbly (for example, those who answer "my skill level is not sufficient"). Therefore, the proficiency may also be set, for example, as a value obtained by removing each respondent's deviation from the average of the scores of all the collected questionnaires.
  • In other words, the proficiency is a quantification of the worker's skill obtained through hearings with the worker, his or her manager, or the like. At the time of learning, in addition to the qualitative data obtained through such hearings, quantitative data such as processing accuracy may also be learned together as teacher data.
  • the processing accuracy is an index for evaluating the accuracy of the dimensions and shape of the object processed by the worker, and is included in the skill level because it is a value corresponding to the skill level. Skill levels can also include the accuracy, speed, stability, etc. of the work performed by the worker.
  • Accuracy is an index showing the degree of whether or not each work is performed in the correct procedure when performing multiple operations.
  • Stability is an index showing the degree of whether or not each work is performed in a certain procedure when a plurality of works are performed.
  • Speed is an index showing the length of time spent on each work when performing multiple works.
  • FIG. 4 is a flowchart showing the operation of the explanation presentation device 1 according to Embodiment 1 during machine learning.
  • In step S101, the feature extraction unit 102 and the skill determination unit 103 acquire the learning data stored in the data storage unit 101.
  • In step S102, combinations of the sensor data and the teacher signal data stored in the data storage unit 101 are given as learning data to the learning device (for example, the learning device 20 in FIG. 2(A)) according to a neural network model (for example, the one shown in FIG. 5 described later), the features in the learning data are machine-learned, and a trained model that infers outputs (for example, proficiency, processing accuracy, etc.) from the inputs is generated.
  • In this machine learning of the neural network, the parameters of the trained model used by the skill determination unit 103 and the feature extraction unit 102 are automatically adjusted by using an error back-propagation method or the like so that the error between the teacher signal and the output of the neural network becomes small.
  • FIG. 5 is a diagram showing an example of a model using a fully connected neural network.
  • In FIG. 5, X1 to X3 form the input layer, Y1 and Y2 form the intermediate layer, Z1 and Z2 form the output layer, and w11 to w16 and w21 to w24 are weight coefficients; a sketch of this network follows.
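```python
# A sketch of the FIG. 5 network, assuming the layer sizes read off the
# figure (three inputs X1..X3, two intermediate units Y1, Y2, two outputs
# Z1, Z2); the tanh activation is an assumption, not stated in the text.
import numpy as np

rng = np.random.default_rng(0)
w1 = rng.normal(size=(3, 2))    # w11..w16: input layer -> intermediate layer
w2 = rng.normal(size=(2, 2))    # w21..w24: intermediate layer -> output layer

def forward(x):
    y = np.tanh(x @ w1)         # intermediate layer Y1, Y2
    z = y @ w2                  # output layer Z1, Z2
    return y, z

y, z = forward(np.array([0.1, 0.5, -0.2]))   # inputs X1, X2, X3
```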
  • The neural network is not limited to a fully connected network; for example, a long short-term memory (LSTM) network or a convolutional neural network (CNN) may be used.
  • In step S103 of FIG. 4, the data storage unit 101 stores the trained model M1, which is the first trained model generated by the feature extraction unit 102 and the skill determination unit 103.
  • The feature extraction unit 102 takes sensor data as input and extracts an implicit feature amount, which is the feature amount used by the skill determination unit 103 to determine a skill level such as proficiency or processing accuracy. The implicit feature amount is an example of tacit knowledge.
  • The implicit feature amount is extracted from the layer immediately before the final output layer of the neural network (the final output layer being, for example, Z1 and Z2 in FIG. 5).
  • The skill determination unit 103 obtains the implicit feature amount from the feature extraction unit 102 and determines a skill level such as proficiency or processing accuracy based on the implicit feature amount.
  • FIG. 6 is a flowchart showing the generation operation of a trained model that associates explicit knowledge with implicit features.
  • In step S201, the feature extraction unit 102 acquires the sensor data stored in the data storage unit 101.
  • In step S202, the feature extraction unit 102 inputs the sensor data to the trained model M1.
  • In step S203, the feature extraction unit 102 extracts and outputs an implicit feature amount.
  • The feature extraction unit 102 extracts, for example, the output value of the layer immediately before the final output layer of the neural network as the implicit feature amount, as in the sketch below.
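```python
# A minimal sketch of this extraction, assuming PyTorch and the toy model
# from the earlier training sketch; capturing the penultimate-layer output
# with a forward hook is one possible realization, not necessarily the one
# used in this disclosure.
import torch
import torch.nn as nn

# stand-in for the trained model M1
model = nn.Sequential(nn.Linear(3, 2), nn.ReLU(), nn.Linear(2, 2))
# model.load_state_dict(torch.load("trained_model.pt"))

captured = {}
def hook(module, inputs, output):
    captured["feature"] = output.detach()

# register on the layer just before the final output layer
model[1].register_forward_hook(hook)

sensor_data = torch.randn(1, 3)
skill_output = model(sensor_data)        # final output (e.g. proficiency, accuracy)
implicit_feature = captured["feature"]   # implicit feature amount x_a
```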
  • In step S204, the explicit knowledge linking unit 104 acquires the explicit knowledge stored in the data storage unit 101.
  • Explicit knowledge includes, for example, video data obtained by photographing an expert, information obtained from an expert by hearing or the like, and the like.
  • In step S205, for example, according to a neural network model as shown in FIG. 5, combinations of implicit feature amounts and explicit knowledge are given as learning data to the learning device (for example, the learning device 20 in FIG. 2(A)), the features in the learning data are machine-learned, and a model that infers the explicit knowledge from the implicit feature amounts is generated.
  • In this learning, the model parameters are automatically adjusted by using an error back-propagation method or the like so that the error between the teacher signal and the output of the neural network becomes small.
  • The data storage unit 101 stores the trained model M2, which is the second trained model generated by the explicit knowledge linking unit 104.
  • The trained model that takes implicit feature amounts as input and outputs explicit knowledge is not limited to a neural network; for example, missing values in the explicit knowledge part may be estimated by SVR (support vector regression), naive Bayes, or the like.
  • FIG. 7 is a flowchart showing an example of the operation of the explanation presentation device 1 according to Embodiment 1.
  • First, the worker P registers the sensor data related to the behavior of the skill to be improved in the data storage unit 101. That is, the worker P specifies, for example, through the operation device (the operation device 14 in FIG. 1), which sensor data relates to the skill whose level he or she desires to improve.
  • In step S302, the feature extraction unit 102 acquires the sensor data, registered in the data storage unit 101, related to the skill whose level the worker P wants to improve.
  • In step S303, the feature extraction unit 102 inputs the acquired sensor data to the trained model M1 stored in the data storage unit 101.
  • In step S304, the feature extraction unit 102 acquires an implicit feature amount from the trained model M1.
  • the feature extraction unit 102 extracts, for example, the output value of the layer before the final output layer of the neural network as an implicit feature amount.
  • FIG. 8 is a diagram showing, in table format (Table 1), an example of data generated by the explanation presentation device 1 according to Embodiment 1. Note that L is a positive integer and i is an integer of 1 or more and L or less.
  • The explicit knowledge S is labeled with the techniques s1, s2, s3, ..., sm (m is a positive integer).
  • FIG. 9 is a diagram showing, in table format (Table 2), an example of data generated by the explanation presentation device 1 according to Embodiment 1.
  • In step S305, the explanation extraction unit 105 changes the value of each element of the feature vector obtained based on the implicit feature amount xa acquired by the feature extraction unit 102, thereby obtaining an implicit feature amount xi.
  • In step S306, the skill determination unit 103 inputs the implicit feature amount xi acquired from the explanation extraction unit 105 into the trained model M1.
  • In step S307, the skill determination unit 103 outputs the skill level yi corresponding to the input implicit feature amount xi.
  • In step S308, the explicit knowledge linking unit 104 inputs the implicit feature amount xi into the trained model M2.
  • In step S309, the explicit knowledge linking unit 104 outputs the explicit knowledge corresponding to the input implicit feature amount xi.
  • In step S310, the data storage unit 101 stores the implicit feature amounts, skill levels, and explicit knowledge obtained in each step in the database.
  • As the changing method of the explanation extraction unit 105 in step S305, an algorithm may be used in which, when the value of each element is changed by Δx from xa, the candidate among xa + Δx having the highest skill level is stored and preferentially searched as the next xa (see, for example, FIG. 10).
  • FIG. 10 is a diagram showing an example of a search in an implicit feature space.
  • ⁇ x is added to each element, and the skill level determined by the skill determination unit 103 is stored in advance in the search queue.
  • the search queue is sorted so that the higher the skill level, the earlier it is. Then select the first element of the search queue.
  • FIG. 10 shows the case where the skill level was the highest when ⁇ x was added to the first element.
  • The search is repeated in this way to obtain the target skill level. Alternatively, an evaluation function may be used in which the evaluation becomes higher as the value Y of the specified explicit knowledge approaches 1, or in which the evaluation becomes higher the closer the implicit feature amounts are; the implicit feature amounts with high evaluations are then searched preferentially. A minimal sketch of this best-first search follows.
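```python
# A minimal sketch of such a best-first search, assuming a priority queue
# keyed on the skill level; the function skill_of stands in for the skill
# determination unit (trained model M1) and is purely illustrative.
import heapq
import numpy as np

def skill_of(x):
    # illustrative stand-in: skill peaks at x = (1, 1, ...)
    return -float(np.sum((x - 1.0) ** 2))

def best_first_search(x_a, dx=0.1, steps=100):
    counter = 0
    queue = [(-skill_of(x_a), counter, x_a)]   # max-heap via negated skill
    best = x_a
    for _ in range(steps):
        neg_skill, _, x = heapq.heappop(queue)  # highest skill level first
        if -neg_skill > skill_of(best):
            best = x
        for i in range(len(x)):                 # perturb each element by ±dx
            for sign in (+1.0, -1.0):
                x_new = x.copy()
                x_new[i] += sign * dx
                counter += 1                    # tie-breaker for the heap
                heapq.heappush(queue, (-skill_of(x_new), counter, x_new))
    return best

x_i = best_first_search(np.zeros(2))
```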
  • In step S311, based on the explicit knowledge input by the worker P to the explicit knowledge selection unit 106, the explanation extraction unit 105 selects, for example, the axes of the implicit feature amounts having a high correlation with the selected explicit knowledge in Table 2 of FIG. 9, and extracts presentation information for visualizing and displaying the plot information on the feature space resulting from the change of explicit knowledge together with the skill level information.
  • The display 15 shown in FIG. 1 displays an image based on the presentation information. A visualized example is shown in FIG. 11 described below.
  • The device may also be configured such that xa is input to the explicit knowledge linking unit 104, the obtained techniques si, ..., sm of the explicit knowledge S are used as the initial labels of the explicit knowledge selection unit 106, and the worker P can then change the explicit knowledge through the explicit knowledge selection unit 106.
  • The explanation extraction unit 105 may also extract presentation information for presenting the probability that the skill level becomes equal to or higher than the target value depending on the presence or absence of the techniques si, ..., sm of the explicit knowledge S.
  • Specifically, under the assumption that the techniques si, ..., sm are independent of each other, the explanation extraction unit 105 calculates the probability that each technique is included when the skill level exceeds the target, calculates by Bayes' rule the probability that each of si, ..., sm is included as a factor that makes the skill level exceed the target, and may extract this as presentation information to present to the worker P as a setting reference value of the explicit knowledge selection unit 106. A small sketch of this computation follows.
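```python
# A small sketch of this Bayes'-rule computation; the table of past records
# and the target value are illustrative, not taken from this disclosure.
import numpy as np

# past records: column 0 = technique s_i present (1) or absent (0),
# column 1 = achieved skill level
records = np.array([[1, 0.9], [1, 0.8], [0, 0.4], [0, 0.7], [1, 0.6]])
target = 0.7

has_s = records[:, 0] == 1
over = records[:, 1] >= target

p_s = has_s.mean()                    # P(s_i)
p_over_given_s = over[has_s].mean()   # P(skill >= target | s_i)
p_over = over.mean()                  # P(skill >= target)

# Bayes' rule: probability that s_i is a factor given the target was reached
p_s_given_over = p_over_given_s * p_s / p_over
print(p_s_given_over)                 # 2/3 for these illustrative records
```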
  • FIG. 11 is a diagram showing an example of a visualized explanation.
  • In FIG. 11, 901a and 901b are the techniques si, ..., sm of the explicit knowledge S selected by the explicit knowledge selection unit 106 based on the operation of the worker P, or the techniques si, ..., sm of the explicit knowledge S that, according to the search of the explanation extraction unit 105, contribute most to the improvement of the skill level.
  • 902a and 902b indicate changes in the techniques si, ..., sm of the explicit knowledge S.
  • 903 is a plot of the position of the implicit feature amount corresponding to the sensor data input by the worker P.
  • 904 is a visualization of the target skill level by a heat map.
  • the heat map of FIG. 11 is a visualization graph expressing the values of two-dimensional data as shading.
  • 905a and 905b are the axes of the implicit feature amounts having a high correlation with the techniques si, ..., sm of the explicit knowledge S.
  • The visualization may display only the explicit knowledge space, without displaying the feature space, as a way to help the worker P decide which techniques si, ..., sm to master. Further, the visualization may display side by side a heat map plotting the implicit feature amounts corresponding to sensor data other than the sensor data related to the skill the worker P wants to acquire, or a heat map plotting other implicit feature amounts obtained in the search of the explanation extraction unit 105.
  • As described above, the explanation extraction unit 105 may extract presentation information for presenting the probability, calculated by Bayes' rule under the assumption that the techniques si, ..., sm are independent of each other, that each technique is included as a factor that makes the skill level exceed the target, depending on the presence or absence of the techniques si, ..., sm, and present it to the worker P as a setting reference value of the explicit knowledge selection unit 106.
  • The display 15 shown in FIG. 1 displays an image based on the presentation information.
  • This makes it possible for the worker P to intuitively understand how to acquire the skill based on the techniques si, ..., sm.
  • Embodiment 2. In Embodiment 1, explicit knowledge is associated with the implicit feature amounts so that the counterfactual explanation using the implicit feature amounts is easy to understand, and combinations of implicit feature amounts and explicit knowledge are extracted and presented to the worker P who wishes to improve his or her skill. However, in Embodiment 1, the sensor data of the worker P who wishes to improve the skill is not compared with the sensor data of other workers, and no processing refers to such a comparison result. Therefore, for example, the following situations (1) to (3) may occur. (1) Even if acquiring certain explicit knowledge is the shortest path to achieving the output of the target class, there are no actual examples of it, so the proposal is unrealistic. (2) The path to acquiring the skill is long, making acquisition difficult. (3) The search space for the counterfactual explanation is wide, so the processing time becomes long. Therefore, Embodiment 2 proposes an explanation presentation device 2 that extracts presentation information for presenting the techniques to be learned in consideration of the sensor data of other workers.
  • FIG. 12 is a functional block diagram showing the configuration of the explanation presentation device 2 according to Embodiment 2.
  • The explanation presentation device 2 includes a data storage unit 201, a feature extraction unit 202 as a feature acquisition unit, a skill determination unit 203, an explicit knowledge linking unit 204, an explanation extraction unit 205 as a counterfactual explanation extraction unit, an explicit knowledge selection unit 206, and a feature comparison unit 207.
  • The explanation presentation device 2 according to Embodiment 2 includes the feature comparison unit 207, which compares the implicit feature amount obtained by inputting the sensor data of the worker P into the feature extraction unit 202 with the implicit feature amounts obtained from the sensor data of other workers.
  • The data storage unit 201, the feature extraction unit 202, the skill determination unit 203, the explicit knowledge linking unit 204, the explanation extraction unit 205, and the explicit knowledge selection unit 206 are the same as the data storage unit 101, the feature extraction unit 102, the skill determination unit 103, the explicit knowledge linking unit 104, the explanation extraction unit 105, and the explicit knowledge selection unit 106 in Embodiment 1, respectively.
  • FIG. 13 is a flowchart showing an example of the operation of the explanation presentation device 2 according to Embodiment 2.
  • The operation of the explanation presentation device 2 differs from the operation of the explanation presentation device 1 according to Embodiment 1 in that it includes step S412, in which the features are compared with those of other workers.
  • Steps S401 to S404 and S411 are the same as steps S300 to S304 and S311 in FIG. 7.
  • In step S412, the feature comparison unit 207 calculates the distance between the implicit feature amount xa obtained from the sensor data of the worker P and the implicit feature amounts of other workers stored in advance in the data storage unit 201.
  • The feature comparison unit 207 calculates, for example, a correlation-based similarity indicating the closeness of implicit feature amounts using a variance-covariance matrix, and obtains, from among the data having a skill level higher than the skill level ya of the action Aa, K pieces of data (K is a positive integer) in ascending order of distance.
  • The feature extraction unit 202 extracts the feature amounts on which this distance calculation is based. Here the distance is based on correlation, but any distance obtained by various known distance calculation methods may be used, such as the cosine distance or the Euclidean distance after dimensional compression by PCA (principal component analysis); a sketch follows.
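```python
# A sketch of this comparison using the cosine distance named above; the
# stored feature data and skill levels are illustrative.
import numpy as np

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def k_nearest_higher(x_a, y_a, features, skills, K=3):
    """Indices of the K workers closest to x_a whose skill level exceeds y_a."""
    candidates = [(cosine_distance(x_a, x), i)
                  for i, x in enumerate(features) if skills[i] > y_a]
    candidates.sort()                      # ascending distance
    return [i for _, i in candidates[:K]]

rng = np.random.default_rng(0)
features = rng.normal(size=(10, 4))   # other workers' implicit feature amounts
skills = rng.random(10)               # their skill levels
idx = k_nearest_higher(rng.normal(size=4), 0.5, features, skills)
```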
  • The explanation extraction unit 205 then performs a search by changing the implicit feature amount.
  • Based on the K pieces of data extracted by the feature comparison unit 207, the explanation extraction unit 205 preferentially changes the elements of the implicit feature amount so as to increase the expected value of the skill level, and performs the search so as not to deviate greatly from the range of the implicit feature amounts of those K pieces of data.
  • The explicit knowledge linking unit 204 can perform the linking not only by supervised learning such as a neural network, but also by a linking method that does not require model generation, such as collaborative filtering.
  • FIG. 14 is a diagram showing, in table format (Table 3), an example of data generated by the explanation presentation device 2 according to Embodiment 2.
  • FIG. 15 is a diagram showing, in table format (Table 4), an example of the obtained correlation coefficients.
  • FIG. 16 is a diagram showing, in table format (Table 5), the behaviors recommended for acquiring the skill, obtained as the sum of the explicit knowledge associated with the behaviors weighted by the correlation coefficients.
  • In this way, correlation coefficients as shown in Table 4 of FIG. 15 are obtained, for example.
  • When the action A1a and the actions having a correlation coefficient of 0.7 or more are selected, the action A3a and the action ALa are selected as data for collaborative filtering.
  • The explanation extraction unit 205 weights the explicit knowledge associated with each action by the above correlation coefficients, and extracts presentation information for presenting the sum of the weighted explicit knowledge as the behaviors recommended for the action Aa to acquire the skill, as sketched below.
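```python
# A sketch of this weighting; the correlation coefficients and the
# explicit-knowledge vectors are illustrative, not the values of
# Tables 4 and 5.
import numpy as np

# correlation of each stored action with the worker's action
corr = {"A1a": 1.0, "A2a": 0.3, "A3a": 0.8, "ALa": 0.7}
# explicit knowledge per action: 1 = technique used, 0 = not used
knowledge = {"A1a": np.array([1, 0, 1]),
             "A2a": np.array([0, 1, 0]),
             "A3a": np.array([1, 1, 0]),
             "ALa": np.array([0, 1, 1])}

# keep only actions with correlation >= 0.7 and sum their knowledge
# vectors weighted by the correlation coefficient
recommended = sum(c * knowledge[a] for a, c in corr.items() if c >= 0.7)
print(recommended)  # [1.8 1.5 1.7]: higher score = more strongly recommended
```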
  • The display 15 shown in FIG. 1 displays an image based on the presentation information.
  • By changing the threshold of the correlation coefficient and the target achievement proficiency set in step S405, the explanation extraction unit 205 can associate the implicit feature amounts with explicit knowledge while taking the explicit knowledge of other workers into consideration.
  • The explicit knowledge linking unit 204 is not limited to methods such as the above collaborative filtering; the implicit feature amounts and explicit knowledge may also be linked by estimating values based on the occurrence probabilities of events, for example by using a Bayesian network to calculate the probability that a given piece of explicit knowledge is the factor when a skill level y > ya is obtained.
  • The above description shows an example based only on a comparison between the worker and other workers.
  • However, the explanation extraction unit 205 can also extract presentation information that presents an explanation in which the worker P first learns the method of a person whose behavior is close to his or her own, as if gradually acquiring the skill, so that the learning method approaches the target class step by step.
  • The evaluation function F is designed so that the evaluation becomes lower as the distance between the implicit feature amounts of the other workers stored in the search queue of FIG. 10 and the feature amount during the search becomes larger, higher as the change ΔS of explicit knowledge needed to reach the target skill level becomes smaller, and higher as the skill level becomes higher.
  • the evaluation function F may be designed as shown in the following equation (2).
  • The feature comparison unit 207 compares the implicit feature amount xa obtained from the sensor data of the worker P with the implicit feature amounts obtained from the sensor data of other workers stored in the data storage unit 201, and extracts sensor data such that the skill level is higher than ya and K or more pieces of sensor data j that can achieve the target skill level are included.
  • The feature comparison unit 207 obtains a score by the evaluation function F for all the obtained sensor data, and adds j to the comparison target set J for distance calculation.
  • The explanation extraction unit 205 extracts implicit feature amounts xi from the search queue in descending order of the value of the evaluation function F, starting from the action (a in equation (2)); it adds a change of Δx to each value, calculates the score by the evaluation function F based on the data of the set J extracted by the feature comparison unit 207 and the cumulative amount of change in the explicit knowledge of the behavior (from a to i-1 in equation (2)), and adds the data to the search queue. By repeating this process using the evaluation function F, it is possible to perform a search in which the feature amount does not deviate significantly from actually acquired data while the target skill level is achieved. A hedged sketch of such an evaluation function follows.
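```python
# Equation (2) itself is not reproduced in this text, so this is only a
# hedged sketch of an evaluation function with the three stated properties:
# the score falls with distance from the comparison set J, falls with the
# amount of explicit-knowledge change delta_S, and rises with the skill
# level. The weights alpha and beta are assumptions.
import numpy as np

def evaluation_F(x_i, skill, delta_S, J_features, alpha=1.0, beta=1.0):
    dist_to_J = min(np.linalg.norm(x_i - x_j) for x_j in J_features)
    return skill - alpha * dist_to_J - beta * delta_S

# illustrative call: 4-dimensional features, 5 comparison workers in J
score = evaluation_F(np.zeros(4), skill=0.8, delta_S=0.2,
                     J_features=np.random.default_rng(0).normal(size=(5, 4)))
```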
  • As described above, in Embodiment 2, the feature comparison unit 207 is provided to compare the worker's behavior with the behavior of other workers, and by using the comparison result, it is possible to prevent a counterfactual explanation that is far from reality from being presented.
  • Embodiment 3. The explanation presentation device 1 according to Embodiment 1 associates explicit knowledge with the automatically obtained implicit feature amounts and presents to the worker what kind of techniques should be learned to improve the skill level. However, if it were possible to present to the worker specifically what kind of behavior change should be made, the acquisition of the worker's skill would be further accelerated. Therefore, the explanation presentation device 3 according to Embodiment 3 includes a skill data generation unit 302 as the feature acquisition unit. In Embodiment 3, the feature amount is not extracted automatically by a feature extraction unit merely so that the skill determination unit can determine a skill level such as proficiency or processing accuracy; instead, the skill data generation unit 302 extracts the feature amount so that the original sensor data can also be reproduced as generated data.
  • FIG. 17 is a functional block diagram showing the configuration of the explanation presentation device 3 according to Embodiment 3.
  • The explanation presentation device 3 includes a data storage unit 301, a skill data generation unit 302 as a feature acquisition unit, a skill determination unit 303, an explicit knowledge linking unit 304, an explanation extraction unit 305 as a counterfactual explanation extraction unit, and an explicit knowledge selection unit 306.
  • The explanation presentation device 3 according to Embodiment 3 differs from the explanation presentation device 1 according to Embodiment 1 in that it includes the skill data generation unit 302 as the feature acquisition unit.
  • The skill data generation unit 302 extracts the feature amount so that the skill determination unit 303 can determine the skill level and the processing accuracy and, at the same time, so that the original sensor data can be reproduced.
  • The data storage unit 301, the skill determination unit 303, the explicit knowledge linking unit 304, the explanation extraction unit 305, and the explicit knowledge selection unit 306 are the same as the data storage unit 101, the skill determination unit 103, the explicit knowledge linking unit 104, the explanation extraction unit 105, and the explicit knowledge selection unit 106 in Embodiment 1, respectively.
  • The skill data generation unit 302 compresses the feature amount with a neural network, after which the skill determination unit 303 determines the proficiency as the skill level. Further, a decoder is provided in the latter half of the skill data generation unit 302, and multitask learning is performed so that, for example, the original sensor data is restored.
  • FIG. 18 is a diagram showing a configuration example of a network that executes multitask learning.
  • sensor data di is given to the neural network 142a from the input layer 141a, and the skill level yi is output via the intermediate layer 145 and the neural network 142b.
  • the sensor data di is restored from the intermediate layer 145 of the neural network via the decoder 143, and is output from the output layer 141b.
  • The skill data generation unit 302 extracts the output value at this branch point as the implicit feature amount xi.
  • The learning method of FIG. 18 is the same as for the model of the basic configuration: using the skill level yi and the sensor data di as teacher signals, the parameters are adjusted so that the loss function L, composed from the skill level yi output from the neural network 142b and the sensor data di restored by the decoder 143, becomes small.
  • the data storage unit 301 stores the learned model M1 generated by the skill data generation unit 302 and the skill determination unit 303.
  • The loss function L can be defined, for example, as the weighted sum of the loss Ldecode of the decoder 143 and the loss Ly of the skill-level estimation part, as in the sketch below.
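```python
# A minimal sketch of the FIG. 18 multitask network and its weighted-sum
# loss, assuming PyTorch; the layer sizes and the weight lam are
# illustrative, not taken from this disclosure.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(16, 4), nn.ReLU())  # input layer 141a -> intermediate layer 145
skill_head = nn.Linear(4, 1)                          # neural network 142b -> skill level y_i
decoder = nn.Linear(4, 16)                            # decoder 143 -> restored sensor data d_i

params = (list(encoder.parameters()) + list(skill_head.parameters())
          + list(decoder.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
lam = 0.5                                             # weight of the decoder loss (assumed)

d = torch.randn(8, 16)   # sensor data (teacher signal for the decoder)
y = torch.randn(8, 1)    # skill level (teacher signal for the skill head)

for _ in range(100):
    optimizer.zero_grad()
    x_i = encoder(d)                                  # implicit feature at the branch point
    loss_y = nn.functional.mse_loss(skill_head(x_i), y)
    loss_decode = nn.functional.mse_loss(decoder(x_i), d)
    loss = loss_y + lam * loss_decode                 # L = Ly + lam * Ldecode
    loss.backward()
    optimizer.step()
```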
  • The decoder 143 in FIG. 18 can be any known model for generating data, such as a VAE (variational autoencoder) or a GAN (generative adversarial network).
  • FIG. 19 is a flowchart showing the operation for providing a counterfactual explanation.
  • In step S507, the skill data generation unit 302 and the skill determination unit 303 input the data acquired from the data storage unit 301 into the trained model M1 as an implicit feature amount xi. Since the model learned by the skill data generation unit 302 is configured as shown in FIG. 18, two outputs, the skill level yi and the sensor data di, are obtained from the implicit feature amount xi.
  • In step S510, the data storage unit 301 stores the implicit feature amount xi, the sensor data di, the skill level yi, and the explicit knowledge S, yielding Table 6.
  • FIG. 20 is a diagram showing, in table format (Table 6), an example of data generated by the explanation presentation device 3 according to Embodiment 3.
  • FIG. 21 is a diagram in which the sensor data shown in FIG. 20 is visualized and the particularly different parts are displayed as change #1 and change #2.
  • The explanation extraction unit 305 visualizes the sensor data di of the counterfactual actions C1 to CN in Table 6 of FIG. 20, and extracts presentation information for highlighting the parts that differ particularly from the sensor data da, like the parts of change #1 and change #2 in FIG. 21.
  • The display 15 shown in FIG. 1 displays an image based on the presentation information.
  • the skill data generation unit 302 performs both the extraction of the implicit feature amount and the restoration of the sensor data, and associates the feature expression for determining the skill level with the sensor data. This generates presentation information that shows how the sensor data actually changes when explicit knowledge is changed.
  • this correspondence (that is, the correspondence between the change of explicit knowledge and the change of sensor data) can be used to show hints of behavior change to be taken in order to master the technique.
  • In the above, a generative model for generating sensor data is used; however, as a method of indicating which part of the sensor data the behavior change should focus on, an attention mechanism may also be used.
  • FIG. 22 is a diagram showing an example of Attention Branch Network (ABN).
  • For example, the explanation presentation device 3 may be provided with an attention mechanism for extracting which of the intermediate feature amounts should be focused on, as in the ABN shown in FIG. 22.
  • In this case, presentation information for highlighting the sensor data corresponding to the locations where the attention is high may be generated. For this highlighting, for example, the method described in Non-Patent Document 1 can be used.
  • Embodiment 4. In Embodiment 3, the skill data generation unit 302 is used to identify the areas where the behavior should be changed (for example, change #1 and change #2).
  • In Embodiment 4, by applying a perturbation to the sensor data in an area related to the skill level, it becomes possible to confirm how a skill level such as proficiency or processing accuracy is likely to change, and how credible the skill level is.
  • For this purpose, the explanation presentation device 4 according to Embodiment 4 includes a perturbation confirmation unit.
  • FIG. 23 is a functional block diagram showing the configuration of the explanation presentation device 4 according to Embodiment 4.
  • The explanation presentation device 4 includes a data storage unit 401, a skill data generation unit 402 as a feature acquisition unit, a skill determination unit 403, an explicit knowledge linking unit 404, an explanation extraction unit 405 as a counterfactual explanation extraction unit, an explicit knowledge selection unit 406, and a perturbation confirmation unit 408.
  • The explanation presentation device 4 according to Embodiment 4 differs from the explanation presentation device 3 according to Embodiment 3 in that it includes the perturbation confirmation unit 408.
  • The data storage unit 401, the skill data generation unit 402, the skill determination unit 403, the explicit knowledge linking unit 404, the explanation extraction unit 405, and the explicit knowledge selection unit 406 are the same as the data storage unit 301, the skill data generation unit 302, the skill determination unit 303, the explicit knowledge linking unit 304, the explanation extraction unit 305, and the explicit knowledge selection unit 306 in Embodiment 3, respectively.
  • First, the worker P inputs the sensor data corresponding to the behavior for which he or she wants to acquire the skill, and explicit knowledge is selected through the explicit knowledge selection unit 406.
  • The explanation extraction unit 405 changes the evaluation function F of the search so that, as far as possible, only the selected explicit knowledge changes, and generates an implicit feature amount xi as a counterfactual explanation by changing the implicit feature amount.
  • The explicit knowledge linking unit 404 acquires the explicit knowledge corresponding to the generated implicit feature amount xi, the skill determination unit 403 determines the skill level yi, and the skill data generation unit 402 reproduces the sensor data di as generated data.
  • Through the above search, the explanation extraction unit 405 presents to the worker P candidate counterfactual explicit knowledge for satisfying the target skill level y.
  • The worker P then applies a perturbation (that is, changes the input sensor data) through the perturbation confirmation unit 408 to the important parts of the presented sensor data that particularly affect the skill level.
  • The perturbation confirmation unit 408 registers the sensor data changed by the perturbation in the data storage unit 401; the skill data generation unit 402 reads the perturbed sensor data, generates data from it, and registers the generated sensor data in the data storage unit 401.
  • The perturbation confirmation unit 408 compares the sensor data input to the skill data generation unit 402 with the sensor data output by the skill data generation unit 402, and performs processing for presenting to the worker P the difference between the generated data and the perturbed sensor data.
  • The explanation extraction unit 405 extracts presentation information for presenting how the skill level has changed as a result of the perturbation added to the sensor data, as sketched below.
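```python
# A sketch of this perturbation check, reusing the encoder, decoder,
# skill_head, and data d from the FIG. 18 sketch above; the perturbed
# region and perturbation size eps are illustrative.
import torch

def confirm_perturbation(d, region, eps=0.1):
    d_pert = d.clone()
    d_pert[:, region] += eps          # perturb the marked important part
    with torch.no_grad():
        x = encoder(d_pert)           # implicit feature of the perturbed data
        d_gen = decoder(x)            # data generated (restored) from it
        y_pert = skill_head(x)        # skill level after the perturbation
    gap = (d_gen - d_pert).abs().mean().item()
    return gap, y_pert

# a large gap between generated and perturbed data suggests the perturbed
# data lies outside what the trained model can handle, so the explanation
# for it is less credible
gap, y_new = confirm_perturbation(d, region=slice(0, 4))
```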
  • The display 15 shown in FIG. 1 displays an image based on the presentation information.
  • By visualizing the behavior of the sensor data generated at the time of perturbation, the explanation presentation device 4 according to Embodiment 4 takes advantage of the fact that, when data not covered by the trained model is used, appropriate explanations are difficult to generate and the accuracy of the explanations drops significantly.
  • In this way, the explanation presentation device 4 according to Embodiment 4 can determine the allowable range of data that the trained model generated by such a method can handle, and can learn how the perturbation affects the skill level.
  • Alternatively, the worker P may input to the perturbation confirmation unit 408 a range of the sensor data visualized by the explanation extraction unit 405 (for example, one of the changes shown in FIG. 21), the explanation extraction unit 405 may search for implicit feature amounts such that only the sensor data of the relevant part changes and the skill level becomes higher than the current level, and the allowable range of variation of the sensor data may be presented.
  • In this way, the worker P can confirm which behavior changes are acceptable for acquiring the skill before registering the behavior again.
  • Embodiment 5. In the above embodiments, an example is described in which the trained model formed by the learning device, which is a set of the skill determination unit 103, the feature extraction unit 102, and the explicit knowledge linking unit 104, is a single trained model M1.
  • The explanation presentation device 5 according to Embodiment 5 generates a plurality of trained models M1 and finds, from among them, a trained model trained on a skill close to the skill the worker wants to acquire (that is, a related skill).
  • FIG. 24 is a functional block diagram showing the configuration of the explanation presentation device 5 according to Embodiment 5.
  • The explanation presentation device 5 includes a data storage unit 501, a feature extraction unit 502 as a feature acquisition unit, a skill determination unit 503, an explicit knowledge linking unit 504, an explanation extraction unit 505 as a counterfactual explanation extraction unit, an explicit knowledge selection unit 506, and a model priority determination unit 509.
  • The explanation presentation device 5 according to Embodiment 5 differs from Embodiment 1 in that a plurality of trained models M1 are generated and the model priority determination unit 509 finds, from among the plurality of trained models M1, a trained model for a skill close to the skill the worker wants to acquire.
  • That is, the explanation presentation device 5 has a plurality of learning sets, each consisting of a skill determination unit 503, a feature extraction unit 502, and an explicit knowledge linking unit 504, and a model priority determination unit 509 that determines the priority of the plurality of learning sets; the plurality of learning sets acquire sensor data from the database in a time-divided manner.
  • The data storage unit 501, the feature extraction unit 502, the skill determination unit 503, the explicit knowledge linking unit 504, the explanation extraction unit 505, and the explicit knowledge selection unit 506 are the same as the data storage unit 101, the feature extraction unit 102, the skill determination unit 103, the explicit knowledge linking unit 104, the explanation extraction unit 105, and the explicit knowledge selection unit 106 in Embodiment 1, respectively.
  • The model priority determination unit 509 reads out the sensor data of the worker P registered in the data storage unit 501 and acquires sensor data having a time width suitable for each of the plurality of trained models (that is, time-divided sensor data).
  • FIG. 25 is a diagram showing an example of time-divided sensor data in the explanation presentation device 5 according to Embodiment 5. Since trained model #1 used time data of duration R1 at the time of machine learning, the sensor data is divided into windows of duration R1, advancing the time by t1 for each window. Since trained model #2 used time data of duration R2 at the time of machine learning, the sensor data is divided into windows of duration R2, advancing the time by t2 for each window. A sketch of this time division follows.
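```python
# A sketch of the time division of FIG. 25; the window widths and strides
# are illustrative values.
import numpy as np

def time_divide(sensor_data, R, t):
    """Cut the sequence into windows of width R, advancing the start by t."""
    return [sensor_data[s:s + R]
            for s in range(0, len(sensor_data) - R + 1, t)]

d = np.arange(100)                            # worker P's sensor data stream
windows_model_1 = time_divide(d, R=20, t=5)   # widths matching trained model #1
windows_model_2 = time_divide(d, R=30, t=10)  # widths matching trained model #2
```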
  • The model priority determination unit 509 inputs the acquired sensor data of each time width into all the trained models, and checks whether the distribution of the data in the final layer of the neural network of the skill determination unit 503 and the feature extraction unit 502 deviates from the normal distribution. For this check, for example, the method described in Non-Patent Document 2 can be used.
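The method of Non-Patent Document 2 is not reproduced here; as a hedged stand-in, a Mahalanobis-distance check of final-layer features against their training-time distribution could look like this (the feature shapes and the percentile threshold are assumptions):

```python
import numpy as np

# Final-layer features collected at training time (hypothetical shapes).
train_feats = np.random.randn(500, 2)
mean = train_feats.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train_feats, rowvar=False))

def outlier_score(feature: np.ndarray) -> float:
    """Mahalanobis distance of a final-layer feature from the training set."""
    diff = feature - mean
    return float(np.sqrt(diff @ cov_inv @ diff))

# A (trained model, window) pair is kept when its score stays under a
# threshold, e.g. a high percentile of the training scores.
threshold = np.percentile([outlier_score(f) for f in train_feats], 95)
keep = outlier_score(np.random.randn(2)) < threshold
```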
  • The model priority determination unit 509 preferentially selects combinations of a trained model and sensor data whose data distribution does not deviate from the normal distribution, and the explanation extraction unit 505 searches the corresponding feature space.
  • The model priority determination unit 509 narrows the selection range to trained models that include the relevant explicit knowledge in their learning data.
  • The model priority determination unit 509 may input the time-divided sensor data d_t1, d_t2, ..., d_tr to each trained model and select a trained model based on how readily the skill level fluctuates. Further, the model priority determination unit 509 may regard a trained model in which the skill level does not change at all as irrelevant, and preferentially use a trained model in which a change in skill level can be confirmed.
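One plausible reading of "how readily the skill level fluctuates" is the variance of the predicted skill level across the time-divided windows; a minimal sketch under that assumption:

```python
import numpy as np

def skill_fluctuation(model, windows) -> float:
    """Variance of the predicted skill level across time-divided windows.

    `model` is assumed to map one window to a scalar skill level; treating
    'easiness of fluctuation' as variance is one plausible interpretation.
    """
    levels = np.array([model(w) for w in windows])
    return float(levels.var())

# Models whose predictions never change (variance ~ 0) are treated as
# irrelevant; the remaining models are ranked by descending fluctuation.
```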
  • The skill data generation units 302 and 402 perform dimensional compression using a method such as a VAE (Variational Autoencoder) or GAN (Generative Adversarial Network) so that the distribution of the data output from the intermediate layer becomes a normal distribution.
  • When the model priority determination unit 509 in Embodiment 5 is configured so that the original sensor data can be restored, the combination of sensor data and trained model may be decided according to whether the distribution of the data deviates from the normal distribution.
  • FIG. 26 is a diagram showing an example of visualized presentation information.
  • As shown in Table 7, the explanation extraction unit 505 may calculate, based on the information in the database accumulated so far and according to Bayes' rule, the probability that each of explicit knowledge 1 to m is included as a factor that makes the skill level higher than the target, and display the probability on the display (shown in FIG. 1) to present it to the worker P who wants to acquire the skill as a setting reference value for operating the explicit knowledge selection unit 506.
  • Table 8 of FIG. 26 shows an example of visualization when the worker P who wants to acquire the skill selects “Skill 1” as explicit knowledge in the explicit knowledge selection unit 506.
  • In the sensor data column of Table 8, an example is shown in which portions where a large change has occurred are highlighted with a thick or colored frame, based on the data generated by the skill data generation unit 503. If the worker P does not select explicit knowledge, ten examples of counterfactual behaviors may be presented, sorted by skill level, by correlation of implicit features, or by a combination of these scores.
  • The method of visualizing explicit knowledge shown in FIG. 26 is only an example. Explicit knowledge may be visualized based on the degree of overlap of sensor data, the degree of correlation of feature amounts, and the like, and the connections between items of explicit knowledge may be presented so that the worker P can understand them. The example of presented information shown in FIG. 26 may also be applied to the other embodiments.
  • With the configuration described above, the operation in which the worker P selects, from the plurality of trained models, a trained model suitable for the data to be input becomes unnecessary or easy, and the explanation presentation device 5 can automatically select the portion of the data from which the skill is likely to be extracted and, together with this, select the corresponding trained model.
  • Further, when the worker P selects explicit knowledge in the explicit knowledge selection unit 506, the explanation presentation device 5 can select a trained model that appears to be associated with the input sensor data incorporating the corresponding explicit knowledge. Then, when dealing with a plurality of skills, an appropriate trained model is selected, and explicit knowledge suitable for acquiring each skill can be presented in addition to the implicit features.
  • 1 to 5 explanation presentation device, 11 processor, 12 memory, 13 storage device, 14 operation device, 15 display, 16 sensor, 20 learning device, 21 data acquisition unit, 22 model generation unit, 23 trained model storage unit, 30 inference device, 31 data acquisition unit, 32 inference unit, 33 trained model storage unit, 101, 201, 301, 401, 501 data storage unit, 102, 202, 502 feature extraction unit (feature acquisition unit), 302, 402 skill data generation unit (feature acquisition unit), 103, 203, 303, 403, 503 skill determination unit, 104, 204, 304, 404, 504 explicit knowledge linking unit, 105, 205, 305, 405, 505 explanation extraction unit, 106, 206, 306, 406, 506 explicit knowledge selection unit, 207 feature comparison unit, 408 perturbation confirmation unit, 509 model priority determination unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Mathematical Physics (AREA)
  • Educational Technology (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Quality & Reliability (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

An explanation presentation device (1) comprises: a feature acquisition unit (102) that acquires a feature amount of a worker's action from a database storing sensor data obtained by detecting the action and explicit knowledge, which is acquired knowledge that a human can interpret; a skill determination unit (103) that determines the skill level of the worker on the basis of the feature amount and registers the skill level in the database; an explicit knowledge linking unit (104) that links the explicit knowledge with the feature amount in the database; and an explanation extraction unit (105) that extracts presentation information including the feature amount and the explicit knowledge linked with the feature amount.

Description

Explanation presentation device, explanation presentation method, and explanation presentation program
The present disclosure relates to an explanation presentation device, an explanation presentation method, and an explanation presentation program.
One behavior of AI (Artificial Intelligence) that uses a trained model obtained by machine learning is the presentation of a counterfactual explanation such as "If you had done ..., you would have achieved ...". A counterfactual explanation describes what would be realized if a situation that has not actually been observed as data were hypothetically to occur.
For example, in a mortgage screening that uses AI, the AI may present to a loan applicant an explanation of what would be realized if a situation that has not actually occurred were to occur (that is, a counterfactual explanation). For example, the AI may present counterfactual explanations such as "If your annual income were ... yen higher, the requested loan could be granted." or "If the outstanding balance of your other loans were ... yen lower, the requested loan could be granted."
Further, Patent Document 1 proposes a work support device for helping a subject learn a task. This device presents a template of instruction content to the subject (for example, a worker) so that the subject's motion approaches that of a model person (for example, an expert). As in the mortgage screening described above, it is possible to apply AI to the device described in Patent Document 1 and, based on the difference in feature amounts between the subject and the model person (for example, the difference in skill level), present to the subject a counterfactual explanation such as "If you had done ..., you would have achieved ...".
Japanese Unexamined Patent Application Publication No. 2020-034849 (for example, abstract, paragraphs 0043 and 0054)
However, when the feature amounts used in machine learning constitute tacit knowledge, that is, knowledge that is used empirically but cannot easily be explained in words, the subject may be unable to understand the meaning of the presented counterfactual explanation (that is, the meaning of the AI's behavior). Here, tacit knowledge is subjective knowledge built up from an individual's past experience, for example, knowledge based on experience or intuition.
The present disclosure has been made to solve the above problem, and an object thereof is to make counterfactual explanations presented based on the results of machine learning easier to understand.
An explanation presentation device according to the present disclosure includes: a feature acquisition unit that acquires a feature amount of a worker's action from a database that stores sensor data obtained by detecting the action and explicit knowledge, which is acquired knowledge that a human can interpret; a skill determination unit that determines the skill level of the worker from the feature amount and registers the skill level in the database; an explicit knowledge linking unit that links explicit knowledge to the feature amount in the database; and an explanation extraction unit that extracts presentation information including the feature amount and the explicit knowledge linked to the feature amount.
An explanation presentation method according to the present disclosure is a method executed by an explanation presentation device and includes: a step of acquiring a feature amount of a worker's action from a database that stores sensor data obtained by detecting the action and explicit knowledge, which is acquired knowledge that a human can interpret; a step of determining the skill level of the worker from the feature amount and registering the skill level in the database; a step of linking explicit knowledge to the feature amount in the database; and a step of extracting presentation information including the feature amount and the explicit knowledge linked to the feature amount.
According to the present disclosure, counterfactual explanations presented based on the results of machine learning can be made easier to understand.
FIG. 1 is a diagram showing an example of the hardware configuration of the explanation presentation device.
FIG. 2(A) and FIG. 2(B) are diagrams showing examples of the configurations of a learning device that performs machine learning to generate a trained model and an inference device that performs inference using the trained model and outputs an inference result.
FIG. 3 is a functional block diagram showing the configuration of the explanation presentation device according to Embodiment 1.
FIG. 4 is a flowchart showing the trained-model generation operation in the explanation presentation device according to Embodiment 1.
FIG. 5 is a diagram showing an example of a model using a fully connected neural network.
FIG. 6 is a flowchart showing the operation of generating a trained model that links explicit knowledge to implicit feature amounts in the explanation presentation device according to Embodiment 1.
FIG. 7 is a flowchart showing an example of the operation of the explanation presentation device according to Embodiment 1.
FIG. 8 is a diagram showing an example of data generated by the explanation presentation device according to Embodiment 1 in table form (Table 1).
FIG. 9 is a diagram showing an example of data generated by the explanation presentation device according to Embodiment 1 in table form (Table 2).
FIG. 10 is a diagram showing an example of a search in the implicit feature space.
FIG. 11 is a diagram showing an example of a visualized explanation.
FIG. 12 is a functional block diagram showing the configuration of the explanation presentation device according to Embodiment 2.
FIG. 13 is a flowchart showing an example of the operation of the explanation presentation device according to Embodiment 2.
FIG. 14 is a diagram showing an example of data generated by the explanation presentation device according to Embodiment 2 in table form (Table 3).
FIG. 15 is a diagram showing an example of the obtained correlation coefficients in table form (Table 4).
FIG. 16 is a diagram showing, in table form (Table 5), the actions recommended for obtaining expert skill, obtained as the sum of the explicit knowledge linked to each action weighted by the correlation coefficients.
FIG. 17 is a functional block diagram showing the configuration of the explanation presentation device according to Embodiment 3.
FIG. 18 is a diagram showing a configuration example of a network that executes multitask learning.
FIG. 19 is a flowchart showing an example of the operation of the explanation presentation device according to Embodiment 3.
FIG. 20 is a diagram showing an example of data generated by the explanation presentation device according to Embodiment 3 in table form (Table 6).
FIG. 21 is a diagram visualizing the sensor data shown in FIG. 20, with the notably different portions displayed as change #1 and change #2.
FIG. 22 is a diagram showing an example of an Attention Branch Network.
FIG. 23 is a functional block diagram showing the configuration of the explanation presentation device according to Embodiment 4.
FIG. 24 is a functional block diagram showing the configuration of the explanation presentation device according to Embodiment 5.
FIG. 25 is a diagram showing an example of time-division sensor data in the explanation presentation device according to Embodiment 5.
FIG. 26 is a diagram showing an example of presentation information visualized in the explanation presentation device according to Embodiment 5.
Hereinafter, an explanation presentation device, an explanation presentation method, and an explanation presentation program according to embodiments will be described with reference to the drawings. According to the explanation presentation device, method, and program of the embodiments, when an explanation is presented using the results of machine learning, presenting the explanation together with explicit knowledge makes a counterfactual explanation easier to understand even when, for example, the counterfactual explanation is based on tacit knowledge. Here, explicit knowledge is knowledge that a human can understand, for example, knowledge that can be explained and expressed by sentences, charts, mathematical formulas, and the like. The following embodiments are merely examples, and the embodiments can be combined and modified as appropriate.
<< Embodiment 1 >>
FIG. 1 is a diagram showing an example of the hardware (H/W) configuration of the explanation presentation device 1 according to Embodiment 1. As shown in FIG. 1, the explanation presentation device 1 includes a processor 11 as an information processing unit, a memory 12 for storing information, a storage device 13 such as a hard disk drive (HDD) or a solid state drive (SSD), and an operation device 14 as a user interface that accepts user operations. The explanation presentation device 1 is, for example, a computer. The explanation presentation device 1 may have a display 15, which is an image display unit that displays information to a human; the display 15 may also be a device with an audio output function. Further, the explanation presentation device 1 may have a sensor 16, which is a detection unit that detects the behavior (for example, the movement) of a human (for example, a worker). The sensor 16 may be a camera, that is, an image pickup device that captures video. The sensor 16 outputs sensor data; when the sensor 16 is a camera, the sensor data includes video data.
The processor 11 executes, for example, the generation of a trained model by machine learning and inference using the trained model that results from the machine learning. In FIG. 1, the processor that performs machine learning and the processor that performs inference are a single common processor, but they may be different processors. The processor 11 executes, for example, a program, that is, software stored in the memory 12. The programs stored in the memory 12 can include an explanation presentation program for causing the explanation presentation device 1 according to Embodiment 1 to carry out the explanation presentation method. The explanation presentation program is provided to the explanation presentation device 1, for example, as a program recorded on a computer-readable recording medium, or alternatively by downloading via a network. The explanation presentation devices 2 to 5 according to Embodiments 2 to 5 described later also have the same configuration as the hardware configuration shown in FIG. 1.
FIGS. 2(A) and 2(B) are diagrams showing examples of the configurations of a learning device 20 that performs machine learning to generate a trained model and an inference device 30 that performs inference using the trained model and outputs an inference result. As shown in FIG. 2(A), the learning device 20 has a data acquisition unit 21 and a model generation unit 22, and may include a trained model storage unit 23. The data acquisition unit 21 and the model generation unit 22 can be realized by, for example, the memory 12 and the processor 11 shown in FIG. 1, and the trained model storage unit 23 can be realized by, for example, the storage device 13 shown in FIG. 1.
The data acquisition unit 21 acquires learning data. The learning data includes, for example, sensor data (that is, input signals) and correct answer data (that is, teacher signals). The model generation unit 22 generates, based on the learning data output from the data acquisition unit 21, a trained model used for inferring the optimum output.
As the learning algorithm used by the model generation unit 22, known algorithms such as supervised learning, unsupervised learning, and reinforcement learning can be used. The model generation unit 22 learns by supervised learning, for example, according to a neural network model. Supervised learning is a method in which pairs of input and result (label) data are given to a learning device as learning data, the features present in the learning data are learned, and results can then be inferred from inputs. The model generation unit 22 generates and outputs a trained model by executing such learning, and the trained model storage unit 23 stores the trained model output from the model generation unit 22.
As shown in FIG. 2(B), the inference device 30 includes a data acquisition unit 31 and an inference unit 32, and may include a trained model storage unit 33. The data acquisition unit 31 and the inference unit 32 can be realized by, for example, the memory 12 and the processor 11 shown in FIG. 1, and the trained model storage unit 33 can be realized by, for example, the storage device 13 shown in FIG. 1. Although the trained model storage unit 33 and the trained model storage unit 23 shown in FIG. 2(A) are provided in different storage devices here, they may be provided in the same storage device.
The data acquisition unit 31 acquires the sensor data of a worker, that is, a subject who wants to acquire a skill. The worker's sensor data is, for example, video data of the worker. The inference unit 32 outputs an inference result obtained by using the trained model generated by the learning device 20. That is, by inputting the sensor data acquired by the data acquisition unit 31 into the trained model stored in the trained model storage unit 33, the inference unit 32 can output the inference result.
FIG. 3 is a functional block diagram showing the configuration of the explanation presentation device 1 according to Embodiment 1. As shown in FIG. 3, the explanation presentation device 1 has a data storage unit 101, a feature extraction unit 102 serving as a feature acquisition unit, a skill determination unit 103, an explicit knowledge linking unit 104, an explanation extraction unit 105 serving as a counterfactual explanation extraction unit, and an explicit knowledge selection unit 106.
The data storage unit 101 is, for example, a part of the storage device 13 shown in FIG. 1 and stores the database. The data storage unit 101 may be an external storage device that can communicate with the explanation presentation device 1 (for example, the storage device of a server on a network). The feature extraction unit 102, the skill determination unit 103, the explicit knowledge linking unit 104, the explanation extraction unit 105, and the explicit knowledge selection unit 106 are realized by, for example, the processor 11 executing the explanation presentation program. The feature extraction unit 102, the skill determination unit 103, and the explicit knowledge linking unit 104 can constitute, for example, the learning device 20 shown in FIG. 2(A), and the explanation extraction unit 105 and the explicit knowledge selection unit 106 can constitute, for example, the inference device 30 shown in FIG. 2(B).
At the time of machine learning, the data storage unit 101 stores, as teacher signals, explicit knowledge acquired from experts in various fields (that is, skilled workers who have already mastered advanced skills) and the results obtained by taking the same actions as the experts (including similar actions). Here, explicit knowledge is acquired knowledge that a human can interpret, and can include, for example, sensor data obtained by filming an expert (for example, video data) and information elicited from the expert through interviews. The results obtained by taking the same actions as the expert include skill levels (for example, proficiency, machining accuracy, and the like). The proficiency is a value indicating the skill level of a worker, and the skill level is the degree of ability to carry out a task. Proficiency may be subjective and qualitative; it is, for example, a numerical evaluation of a skill elicited from workers through interviews (including questionnaires). The proficiency may also be corrected based on the result of comparing the value elicited from the worker against quantitative data or the like. In questionnaires, despite having the same proficiency, some people rate their own proficiency highly (for example, those who answer "I have a very high skill level"), while others evaluate their own proficiency modestly (for example, those who answer "My skill level is not sufficient"). For this reason, the proficiency can also be set, for example, to a value obtained by removing such bias using the average score of all collected questionnaires. In this way, the proficiency quantifies a worker's skill as obtained through interviews with the worker or his or her manager. Further, at the time of learning, in addition to the qualitative data obtained through interviews as described above, quantitative data such as machining accuracy can also be learned together as teacher data. The machining accuracy is an index for evaluating the dimensional and geometric accuracy of an object machined by the worker, and since it is a value corresponding to the skill level, it is included in the skill level. The skill level can also include the accuracy, speed, stability, and the like of the work performed by the worker. Accuracy is an index showing the degree to which each task is performed in the correct procedure when a task is performed multiple times. Stability is an index showing the degree to which each task is performed in a consistent procedure when a task is performed multiple times. Speed is an index showing the length of time spent on each task when a task is performed multiple times.
FIG. 4 is a flowchart showing the operation of the explanation presentation device 1 according to Embodiment 1 during machine learning. In step S101, the feature extraction unit 102 and the skill determination unit 103 acquire the learning data stored in the data storage unit 101. The learning data includes sensor data d_i (i = 1, 2, ...) and teacher signals.
In step S102, for example according to a neural network model (for example, the one shown in FIG. 5 described later), combinations of the sensor data and teacher signal data stored in the data storage unit 101 are given as learning data to a learning device (for example, the learning device 20 in FIG. 2(A)), the features present in the learning data are machine-learned, and the outputs obtained by inferring results from the inputs (for example, proficiency, machining accuracy, and the like) are machine-learned. In the machine learning of the neural network, the parameters of the trained model used by the skill determination unit 103 and the feature extraction unit 102 are automatically adjusted by using error backpropagation or the like so that the error between the teacher signal and the output of the neural network becomes small.
FIG. 5 is a diagram showing an example of a model using a fully connected neural network. In FIG. 5, X1 to X3 form the input layer, Y1 and Y2 the intermediate layer, Z1 and Z2 the output layer, and w11 to w16 and w21 to w24 are weight coefficients. However, when the sensor data is time-series data, it is preferable to use an LSTM (Long Short-Term Memory), and when the sensor data is image data, it is preferable to use a CNN (Convolutional Neural Network). Various known neural networks can be used to generate the trained model.
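As a rough, hypothetical sketch of this training step (layer sizes, the optimizer, and the loss are assumptions, and this is not the exact network of FIG. 5), a fully connected model whose penultimate layer later serves as the implicit feature could be trained as follows:

```python
import torch
import torch.nn as nn

class SkillModel(nn.Module):
    """Fully connected model: sensor features in, skill levels out.

    The penultimate block (`self.features`) plays the role of the feature
    extraction unit; the head plays the role of the skill determination unit.
    """
    def __init__(self, in_dim: int = 3, feat_dim: int = 2, out_dim: int = 2):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = SkillModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Hypothetical learning data: sensor vectors and skill-level teacher signals.
x = torch.randn(64, 3)
y = torch.randn(64, 2)
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)   # error between teacher signal and output
    loss.backward()               # error backpropagation
    opt.step()                    # automatic parameter adjustment
```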
In step S103 of FIG. 4, the data storage unit 101 stores the trained model M1, which is the first trained model generated by the feature extraction unit 102 and the skill determination unit 103.
The feature extraction unit 102 takes the sensor data as input and extracts implicit feature amounts, which are feature amounts that enable the skill determination unit 103 to determine skill levels such as proficiency and machining accuracy. An implicit feature amount is an example of tacit knowledge and is extracted from the layer immediately before the final output layer of the neural network (for example, Z1 and Z2 in FIG. 5). The skill determination unit 103 obtains the implicit feature amounts from the feature extraction unit 102 and determines skill levels such as proficiency and machining accuracy based on the implicit feature amounts.
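One way to read out such penultimate-layer activations is a forward hook; the toy model below is a hypothetical stand-in, not the document's network:

```python
import torch
import torch.nn as nn

# Minimal stand-in for a trained skill model (3 inputs, 2-dim feature layer).
model = nn.Sequential(nn.Linear(3, 2), nn.ReLU(), nn.Linear(2, 2))

features = {}
hook = model[1].register_forward_hook(
    lambda mod, inp, out: features.update(implicit=out.detach()))

_ = model(torch.randn(1, 3))       # forward pass fills features["implicit"]
implicit_x = features["implicit"]  # penultimate-layer output = implicit feature
hook.remove()
```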
Next, the explicit knowledge linking operation, performed so that a person can easily understand the behavior of the implicit feature amounts, will be described. FIG. 6 is a flowchart showing the operation of generating a trained model that links explicit knowledge to implicit feature amounts. In step S201, the feature extraction unit 102 acquires the sensor data stored in the data storage unit 101.
In step S202, the feature extraction unit 102 inputs the sensor data into the trained model M1. In step S203, the feature extraction unit 102 extracts and outputs implicit feature amounts. Here, the feature extraction unit 102 extracts, for example, the output values of the layer immediately before the final output layer of the neural network (for example, Z1 and Z2 in FIG. 5) as the implicit feature amounts.
In step S204, the explicit knowledge linking unit 104 acquires the explicit knowledge stored in the data storage unit 101. The explicit knowledge includes, for example, video data obtained by filming an expert, or information elicited from the expert through interviews.
In step S205, for example according to a neural network model such as the one shown in FIG. 5, combinations of implicit feature amounts and explicit knowledge are given to a learning device (for example, the learning device 20 in FIG. 2(A)), the features present in the learning data are machine-learned, and the output is machine-learned by inferring results from inputs. In the machine learning of the neural network, the model parameters of the skill determination unit 103 and the feature extraction unit 102 are automatically adjusted by using error backpropagation or the like so that the error between the teacher signal and the output of the neural network becomes small. In step S206, the data storage unit 101 stores the trained model M2, which is the second trained model generated by the explicit knowledge linking unit 104.
The trained model that takes implicit feature amounts as input and outputs explicit knowledge is not limited to a neural network; methods such as SVR (Support Vector Regression) or missing-value estimation of the explicit knowledge part by Naive Bayes may also be used.
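As a hedged sketch of the SVR alternative mentioned above, one regressor per explicit-knowledge item could map implicit features to a score for that item (the data shapes and labels are assumptions):

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical training pairs: implicit features -> score for one item s_i.
X = np.random.randn(200, 2)   # implicit feature vectors
y = np.random.rand(200)       # degree to which technique s_i is present

linker = SVR(kernel="rbf").fit(X, y)
s_i_hat = linker.predict(np.random.randn(1, 2))  # explicit knowledge estimate
```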
Finally, the operation in which a worker P (that is, a user) who wants to master a skill requests the presentation of a counterfactual explanation, for the model that the explicit knowledge linking unit 104 has linked and whose skill level (for example, proficiency and machining accuracy) the skill determination unit 103 determines, will be described in detail.
FIG. 7 is a flowchart showing an example of the operation of the explanation presentation device 1 according to Embodiment 1. In step S301, the worker P registers, in the data storage unit 101, sensor data related to a skill behavior that he or she wants to improve. That is, the worker P registers, for example via the operation device (the operation device 14 in FIG. 1), which sensor data relates to the skill whose level he or she wishes to improve.
In step S302, the feature extraction unit 102 acquires the sensor data, registered in the data storage unit 101, related to the skill whose level the worker P wishes to improve. In step S303, the feature extraction unit 102 inputs the acquired sensor data into the trained model M1 stored in the data storage unit 101.
In step S304, the feature extraction unit 102 acquires the implicit feature amount from the trained model M1. Here, the feature extraction unit 102 extracts, for example, the output values of the layer immediately before the final output layer of the neural network as the implicit feature amount.
FIG. 8 is a diagram showing, in table form (Table 1), an example of data generated by the explanation presentation device 1 according to Embodiment 1. Here, L is a positive integer, and i is an integer from 1 to L.
As described in Table 1 of FIG. 8, the explicit knowledge S is labeled with techniques s1, s2, s3, ..., sm (m is a positive integer). As shown in FIG. 8, the data storage unit 101 stores, in association with the worker's actions Ai (i = 1, ..., L), the techniques s1, s2, s3, ..., sm, the sensor data d, the implicit feature amount x, and the proficiency y, which is the skill level. Then, the action Aa for which the worker P wants to master the skill is input to the data storage unit 101. Here, it is assumed to be unknown which explicit knowledge the worker P's actions correspond to.
FIG. 9 is a diagram showing, in table form (Table 2), an example of data generated by the explanation presentation device 1 according to Embodiment 1. By repeating steps S305 to S310, the explanation extraction unit 105 changes the value of each element of the feature vector obtained based on the implicit feature amount xa acquired by the feature extraction unit 102, searches through the skill determination unit 103 for how the skill level changes, and obtains the data used for the counterfactual explanation as shown in Table 2 of FIG. 9.
In step S305, the explanation extraction unit 105 changes the value of each element of the feature vector obtained based on the implicit feature amount xa acquired by the feature extraction unit 102, obtaining an implicit feature amount xi. In step S306, the skill determination unit 103 inputs the implicit feature amount xi acquired from the explanation extraction unit 105 into the trained model M1. In step S307, the skill determination unit 103 outputs the proficiency yi as the skill level corresponding to the input implicit feature amount xi.
In step S308, the explicit knowledge linking unit 104 inputs the implicit feature amount xi into the trained model M2. In step S309, the explicit knowledge linking unit 104 outputs the explicit knowledge corresponding to the input implicit feature amount xi. In step S310, the data storage unit 101 stores the implicit feature amounts, proficiencies, and explicit knowledge obtained in each step in the database.
In step S305, as the method by which the explanation extraction unit 105 changes the values, an algorithm may be used that, when each element is changed by Δx from xa, memorizes which element of xa + Δx gives the largest proficiency and searches preferentially from the points that give larger proficiencies (see, for example, FIG. 10).
FIG. 10 is a diagram showing an example of a search in the implicit feature space. Starting from xa, Δx is added to each element, and the proficiency determined by the skill determination unit 103 is stored in advance in a search queue. The search queue is sorted so that entries with higher proficiency come first, and the element at the head of the queue is then selected. FIG. 10 shows the case where the proficiency was highest when Δx was added to the first element. In this case, the implicit feature amount xi = xa + Δx1 is selected as the feature amount to search next, Δx is further added to each element of the implicit feature amount xi, and the skill determination unit 103 obtains the proficiency, that is, the skill level corresponding to each feature amount. By repeating the search in this way, counterfactual implicit feature amounts can be obtained.
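A compact sketch of this best-first search over the implicit feature space (the step size, search budget, and toy skill predictor are assumptions):

```python
import heapq
import numpy as np

def counterfactual_search(x_a, predict_skill, dx=0.1, budget=200):
    """Best-first search: expand higher-proficiency points first.

    `predict_skill` stands in for the skill determination unit: it maps a
    feature vector to a scalar proficiency. heapq is a min-heap, hence the
    negated score; the integer tie-breaker avoids comparing arrays.
    """
    queue = [(-predict_skill(x_a), 0, x_a)]
    best = (predict_skill(x_a), x_a)
    tie = 1
    while queue and budget > 0:
        _, _, x = heapq.heappop(queue)
        for k in range(len(x)):            # add dx to each element in turn
            x_next = x.copy()
            x_next[k] += dx
            y_next = predict_skill(x_next)
            if y_next > best[0]:
                best = (y_next, x_next)
            heapq.heappush(queue, (-y_next, tie, x_next))
            tie += 1
            budget -= 1
    return best  # (proficiency, counterfactual implicit feature)

# Hypothetical usage with a toy skill predictor peaking at (1, 1).
result = counterfactual_search(np.zeros(2), lambda x: -np.sum((x - 1.0) ** 2))
```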
In FIG. 10, for example, an evaluation function is used such that the closer the value Y of the designated explicit knowledge is to 1, the higher the evaluation. Alternatively, an evaluation function is used such that the closer the implicit feature amounts are, the higher the evaluation. Implicit feature amounts with higher evaluations are then searched preferentially.
In step S311, based on the explicit knowledge that the worker P has input to the explicit knowledge selection unit 106, the explanation extraction unit 105 selects, for example, the axes of the implicit feature amounts highly correlated with the selected explicit knowledge in Table 2 of FIG. 9, and extracts presentation information for visualizing and displaying the plot information in the feature space accompanying the changes in explicit knowledge, together with the proficiency information. The display 15 shown in FIG. 1 displays an image based on the presentation information. A visualized example is shown in FIG. 11, described later.
Alternatively, xa may be input to the explicit knowledge linking unit 104, and the obtained explicit knowledge S, that is, the techniques si, ..., sm, may be used as the initial labels of the explicit knowledge selection unit 106 so that the worker P can change the explicit knowledge in the explicit knowledge selection unit 106. In addition, the explanation extraction unit 105 may extract presentation information for presenting the probability that the proficiency becomes equal to or higher than the target value depending on the presence or absence of the techniques si, ..., sm of the explicit knowledge S. Under the assumption that the techniques si, ..., sm are mutually independent, the explanation extraction unit 105 may obtain the probability that the techniques si, ..., sm are included when the proficiency reaches or exceeds the target, calculate by Bayes' rule the probability that the techniques si, ..., sm of the explicit knowledge S are included as factors that make the proficiency equal to or higher than the target, and extract presentation information for presenting this to the worker P as a setting reference value for the explicit knowledge selection unit 106.
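Read this way, the Bayes'-rule step can be sketched directly from the database table (the column layout and the 0/1 technique indicators are assumptions about Table 1):

```python
import numpy as np

def technique_given_high_skill(s: np.ndarray, y: np.ndarray, target: float) -> float:
    """P(technique present | proficiency >= target) via Bayes' rule.

    s: 0/1 indicator of one technique s_i per recorded action.
    y: proficiency per recorded action.
    P(s | y>=t) = P(y>=t | s) * P(s) / P(y>=t), estimated from counts.
    """
    p_high = np.mean(y >= target)
    p_s = np.mean(s)
    p_high_given_s = np.mean(y[s == 1] >= target) if p_s > 0 else 0.0
    return p_high_given_s * p_s / p_high if p_high > 0 else 0.0

# Hypothetical database columns.
s1 = np.array([1, 0, 1, 1, 0])
y = np.array([0.9, 0.4, 0.8, 0.7, 0.5])
print(technique_given_high_skill(s1, y, target=0.7))
```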
FIG. 11 is a diagram showing an example of a visualized explanation. In FIG. 11, 901a and 901b are the techniques si, ..., sm of the explicit knowledge S selected by the explicit knowledge selection unit 106 based on the operation of the worker P, or the techniques si, ..., sm of the explicit knowledge S that contribute most to improving the proficiency according to the search by the explanation extraction unit 105. 902a and 902b show changes in the techniques si, ..., sm of the explicit knowledge S. 903 plots the position of the implicit feature amount corresponding to the sensor data input by the worker P. 904 is a heat-map visualization of the target proficiency; the heat map in FIG. 11 is a visualization graph expressing the values of two-dimensional data as shading. In FIG. 11, 905a and 905b are the coordinate axes of the implicit feature amounts highly correlated with the techniques si, ..., sm of the explicit knowledge S.
However, the visualization may be performed so as to display only the explicit knowledge space, without displaying the feature space, as a way of helping the worker P decide which techniques si, ..., sm to master. The visualization may also display, side by side, thermal maps plotting the implicit feature amounts corresponding to sensor data other than the sensor data related to the skill the worker P wishes to acquire. Alternatively, the visualization may display, side by side, thermal maps plotting other implicit feature amounts obtained in the search by the explanation extraction unit 105.
In addition, the explanation extraction unit 105 may extract presentation information for presenting the probability that the proficiency becomes equal to or higher than the target value depending on the presence or absence of the techniques si, ..., sm. For example, under the assumption that the techniques si, ..., sm are mutually independent, the explanation extraction unit 105 may obtain the probability that the techniques si, ..., sm are included when the proficiency reaches or exceeds the target, calculate by Bayes' rule the probability that the techniques si, ..., sm are included as factors that make the proficiency equal to or higher than the target, and extract presentation information for presenting this to the worker P as a setting reference value for the explicit knowledge selection unit 106. The display 15 shown in FIG. 1 displays an image based on the presentation information.
By using the explanation presentation device 1 configured as described above, the worker P can intuitively understand how to master the skill based on the techniques si, ..., sm.
<< Embodiment 2 >>
In Embodiment 1, in order to make a counterfactual explanation using implicit feature amounts easy to understand, explicit knowledge is linked to the implicit feature amounts, and presentation information is extracted for presenting the implicit feature amounts and the explicit knowledge to the worker P who wishes to improve a skill. However, in Embodiment 1, no processing is performed to compare the sensor data of the worker P who wishes to improve the skill with the sensor data of other workers and refer to the comparison results. Therefore, for example, the following situations (1) to (3) may occur.
(1) Even if mastering a given piece of explicit knowledge is the shortest path to realizing the target-class output, the proposal is unrealistic because there are no actual examples of it.
(2) Mastery is difficult because the path to acquiring the skill is long.
(3) Processing time becomes long because the search space for counterfactual explanations is wide.
Therefore, Embodiment 2 proposes an explanation presentation device 2 that extracts presentation information for presenting the techniques to be mastered while also taking the sensor data of other workers into account.
FIG. 12 is a functional block diagram showing the configuration of the explanation presentation device 2 according to Embodiment 2. As shown in FIG. 12, the explanation presentation device 2 has a data storage unit 201, a feature extraction unit 202 serving as a feature acquisition unit, a skill determination unit 203, an explicit knowledge linking unit 204, an explanation extraction unit 205 serving as a counterfactual explanation extraction unit, an explicit knowledge selection unit 206, and a feature comparison unit 207. The explanation presentation device 2 according to Embodiment 2 differs from the explanation presentation device 1 according to Embodiment 1 in that it includes the feature comparison unit 207, which compares the implicit feature amounts obtained by inputting the sensor data of the worker P into the feature extraction unit 202 with the implicit feature amounts obtained from the sensor data of other workers. The data storage unit 201, the feature extraction unit 202, the skill determination unit 203, the explicit knowledge linking unit 204, the explanation extraction unit 205, and the explicit knowledge selection unit 206 are the same as the data storage unit 101, the feature extraction unit 102, the skill determination unit 103, the explicit knowledge linking unit 104, the explanation extraction unit 105, and the explicit knowledge selection unit 106 in Embodiment 1, respectively.
FIG. 13 is a flowchart showing an example of the operation of the explanation presentation device 2 according to Embodiment 2. The operation of the explanation presentation device 2 differs from that of the explanation presentation device 1 according to Embodiment 1 in that it has step S412, in which comparison with the features of other workers is performed. Steps S401 to S404 and S411 are the same as steps S301 to S304 and S311 in FIG. 7, respectively.
In step S412, the feature comparison unit 207 calculates the distance between the implicit feature amount xa obtained from the sensor data of the worker P and the implicit feature amounts of other workers stored in the data storage unit 201, and stores it in the data storage unit 201 in advance.
For example, the feature comparison unit 207 calculates a correlation-based similarity indicating the closeness between implicit feature amounts, for example by means of a variance-covariance matrix, and obtains the K items of data (K is a positive integer) closest in distance from among the data whose proficiency is higher than the proficiency ya of the action Aa. The feature extraction unit 202 extracts feature amounts based on this distance.
 ここでは、距離として、相関ベースのものを挙げた。しかし、用いられる距離は、コサイン距離、PCA(principal component analysis)によって次元圧縮をかけた上でのユークリッド距離、などのような公知の様々な距離算出法で得られるいずれの距離であってもよい。 Here, the distance is based on correlation. However, the distance used may be any distance obtained by various known distance calculation methods such as cosine distance, Euclidean distance after dimensional compression by PCA (principal component analysis), and the like. ..
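 Purely as an illustration (the patent provides no code, and the names and array shapes below are our own assumptions), a minimal Python sketch of this step, computing a correlation-based distance over implicit feature amounts and selecting the K closest higher-proficiency samples, might look as follows:

    import numpy as np

    def k_nearest_by_correlation(x_a, y_a, X_others, y_others, k=5):
        # x_a: (d,) implicit feature amount of worker P; y_a: P's proficiency.
        # X_others: (n, d) implicit feature amounts of other workers;
        # y_others: (n,) their proficiencies.
        mask = y_others > y_a                    # only higher-proficiency data
        X = X_others[mask]
        xa_c = x_a - x_a.mean()
        Xc = X - X.mean(axis=1, keepdims=True)
        # Pearson correlation per row, turned into a distance 1 - r; a cosine
        # distance or PCA + Euclidean distance would serve equally well.
        r = (Xc @ xa_c) / (np.linalg.norm(Xc, axis=1) * np.linalg.norm(xa_c) + 1e-12)
        dist = 1.0 - r
        idx = np.argsort(dist)[:k]               # K closest samples
        return X[idx], y_others[mask][idx], dist[idx]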
 In steps S405 to S410, the explanation extraction unit 205 performs a search by modifying the implicit feature amount. Here, based on the K items of data extracted by the feature comparison unit 207, the explanation extraction unit 205 preferentially varies the elements of the implicit feature amount that raise the expected proficiency, and conducts the search so as not to deviate greatly from the range of the implicit feature amounts of those K items of data.
 By extracting counterfactual explanations based on data whose implicit feature amounts are close to those of worker P in this way, the acquisition of explicit knowledge grounded in actual examples can be encouraged, making efficient learning possible and promising shorter learning times.
 Moreover, by realizing comparisons between implicit feature amounts, the explicit knowledge linking unit 204 can use not only linking by supervised learning, such as a neural network, but also linking methods that require no model generation, such as collaborative filtering.
 One example of collaborative filtering is collaborative filtering based on a correlation-based distance. FIG. 14 is a diagram showing, in table format (Table 3), an example of the data generated by the explanation presentation device 2 according to Embodiment 2. FIG. 15 is a diagram showing, in table format (Table 4), an example of the obtained correlation coefficients. FIG. 16 is a diagram showing, in table format (Table 5), the actions recommended for acquiring the expert skill, obtained as the sum of the explicit knowledge linked to each action weighted by the correlation coefficients.
 As shown in Table 3 of FIG. 14, the actions 1_a, ..., L_a whose proficiencies satisfy y_1, ..., y_L > y_a are extracted, and the correlation coefficient cov(a, j) between these feature amounts is calculated by Equation (1).
 [Equation (1), an image in the original (JPOXMLDOC01-appb-M000001): the correlation coefficient cov(a, j) between the implicit feature amounts of actions a and j.]
 As a result, correlation coefficients such as those shown in Table 4 of FIG. 15 are obtained. When action A_1a and the actions having a correlation coefficient of 0.7 or more are selected, action A_3a and action A_La are selected as the data for collaborative filtering. Based on this, as shown in Table 5 of FIG. 16, the explanation extraction unit 205 weights the explicit knowledge linked to each action with the above correlation coefficients and extracts presentation information for presenting the sum of the weighted explicit knowledge as the actions recommended for action A_a to acquire the expert skill. The display 15 shown in FIG. 1 displays an image based on the presentation information.
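 As a hedged sketch of the collaborative-filtering step just described, assuming the correlation coefficients of Table 4 are available as an array and the explicit knowledge of Table 3 as a matrix (all names are our own), the weighted sum of Table 5 could be computed as:

    import numpy as np

    def recommend_explicit_knowledge(cov_a, S, threshold=0.7):
        # cov_a: (L,) correlation coefficients cov(a, j) of Table 4;
        # S: (L, m) explicit-knowledge values linked to actions 1_a .. L_a.
        selected = cov_a >= threshold            # e.g. A_3a and A_La above
        weights = cov_a[selected]
        # Correlation-weighted sum of explicit knowledge (Table 5).
        return (weights[:, None] * S[selected]).sum(axis=0)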
 By changing the threshold set for the correlation coefficient in step S405 and the target proficiency to be reached, the explanation extraction unit 205 can link implicit feature amounts with explicit knowledge while taking the explicit knowledge of other workers into account.
 The explicit knowledge linking unit 204 is not limited to techniques such as the collaborative filtering described above; it may instead link implicit feature amounts with explicit knowledge by estimating values based on the occurrence probabilities of events, for example by using a Bayesian network to calculate the probability that a given piece of explicit knowledge is the factor when a proficiency y > y_a is obtained.
 The description above covers an example based on comparing only the worker with other workers. However, by changing how the explanation extraction unit 205 searches for counterfactual implicit feature amounts, the explanation extraction unit 205 can extract presentation information that presents explanations as a learning method in which worker P, assuming the skill has been internalized, gradually learns the approach of people whose actions are similar and moves closer to the target class.
 For example, the evaluation function F is designed so that its value is lower the larger the distance between the feature amount being searched and the implicit feature amounts of other workers already stored in the search queue of FIG. 10, higher the smaller the total variation ΣΔS of explicit knowledge until the target proficiency is reached, and higher the higher the proficiency. Specifically, the evaluation function F may be designed as shown in Equation (2) below.
 [Equation (2), an image in the original (JPOXMLDOC01-appb-M000002): the evaluation function F, combining the distance to the stored implicit feature amounts, the total explicit-knowledge variation ΣΔS, and the proficiency.]
 In step S412, the feature comparison unit 207 compares the implicit feature amount x_a obtained from the sensor data of worker P with the implicit feature amounts obtained from the sensor data of other workers stored in the data storage unit 201, and extracts sensor data so that K or more items of sensor data j are included whose values exceed the proficiency y_a and with which the target proficiency can be achieved. The feature comparison unit 207 then obtains a score from the evaluation function F for all of the obtained sensor data and adds j to the comparison target set J used for distance calculation.
 In steps S405 to S410, the explanation extraction unit 205, starting from the action (a in Equation (2)), takes the implicit feature amount x_i out of the search queue in descending order of evaluation function F value, modifies this value by Δx, calculates a score with the evaluation function F based on the J items of data extracted by the feature comparison unit 207 and the total amount of change in the explicit-knowledge actions (from a to i-1 in Equation (2)), and adds the data to the search queue. By repeating the processing with the evaluation function F in this way, a search becomes possible that achieves the target proficiency while keeping the feature amounts from departing greatly from the acquired data.
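 A minimal sketch of this best-first search is given below. It assumes the evaluation function F of Equation (2) is available as a callable and that candidate modifications Δx are supplied as vectors; the names and the termination rule are our own:

    import heapq
    import itertools

    def counterfactual_search(x_a, F, deltas, max_steps=1000):
        # Best-first search over implicit feature amounts (steps S405-S410).
        # F(x) -> score, higher is better, standing in for Equation (2);
        # deltas is a list of candidate modification vectors, e.g. +/-h on
        # one element at a time.
        counter = itertools.count()  # tie-breaker so arrays are never compared
        queue = [(-F(x_a), next(counter), x_a)]
        best_score, best_x = F(x_a), x_a
        steps = 0
        while queue and steps < max_steps:
            neg_score, _, x = heapq.heappop(queue)  # highest F first
            steps += 1
            if -neg_score > best_score:
                best_score, best_x = -neg_score, x
            for dx in deltas:
                x_new = x + dx
                heapq.heappush(queue, (-F(x_new), next(counter), x_new))
        return best_score, best_x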
 With the explanation presentation device 2 according to Embodiment 2, providing the feature comparison unit 207 to compare against the actions of other workers and using the results makes it possible to prevent counterfactual explanations far removed from reality from being presented.
<< Embodiment 3 >>
 The explanation presentation device 1 according to Embodiment 1 links explicit knowledge to automatically obtained implicit feature amounts and presents to the worker which techniques should be mastered to improve the skill level. However, if the device could present concretely what kind of behavioral change the worker should make, the acquisition of the worker's skill is expected to accelerate further. The explanation presentation device 3 according to Embodiment 3 therefore includes a skill data generation unit 302 as the feature acquisition unit. In Embodiment 3, rather than having a feature extraction unit extract feature amounts automatically, the skill data generation unit 302 extracts feature amounts such that the original sensor data can be reproduced as generated data, while still allowing the skill determination unit to determine skill levels such as proficiency and machining accuracy.
 FIG. 17 is a functional block diagram showing the configuration of the explanation presentation device 3 according to Embodiment 3. As shown in FIG. 17, the explanation presentation device 3 has a data storage unit 301, a skill data generation unit 302 as the feature acquisition unit, a skill determination unit 303, an explicit knowledge linking unit 304, an explanation extraction unit 305 serving as a counterfactual explanation extraction unit, and an explicit knowledge selection unit 306. The explanation presentation device 3 according to Embodiment 3 differs from the explanation presentation device 1 according to Embodiment 1 in including the skill data generation unit 302 as the feature acquisition unit. That is, the skill data generation unit 302 extracts feature amounts such that the original sensor data can be reproduced, so that the skill determination unit can determine proficiency and machining accuracy. The data storage unit 301, skill determination unit 303, explicit knowledge linking unit 304, explanation extraction unit 305, and explicit knowledge selection unit 306 are the same as the data storage unit 101, skill determination unit 103, explicit knowledge linking unit 104, explanation extraction unit 105, and explicit knowledge selection unit 106 in Embodiment 1, respectively.
 The skill data generation unit 302 compresses the feature amounts with a neural network, after which the skill determination unit 303 determines proficiency as the skill level. A decoder is provided in the latter half of the skill data generation unit 302, and the skill data generation unit 302 performs multitask learning that also restores the original sensor data.
 FIG. 18 is a diagram showing a configuration example of a network that performs this multitask learning. In the network of FIG. 18, the sensor data d_i is fed from the input layer 141a into the neural network 142a, and the proficiency y_i is output via the intermediate layer 145 and the neural network 142b. At the same time, the sensor data d_i is restored from the intermediate layer 145 of the neural network via the decoder 143 and output from the output layer 141b. The skill data generation unit 302 extracts the output value at this branch point as the implicit feature amount x_i.
 The training method of FIG. 18 is the same as for the model in the basic configuration diagram: using the proficiency y_i and the sensor data d_i as teacher signals, the parameters are adjusted so that the loss function L, composed of the proficiency y_i and the sensor data d_i output from the network, becomes small. After the parameter adjustment, the data storage unit 301 stores the trained model M1 generated by the skill data generation unit 302 and the skill determination unit 303.
 The loss function L can be defined, for example, as the weighted sum of the loss L_decode of the decoder 143 and the loss L_y of the proficiency estimation part.
 The decoder 143 in FIG. 18 may be any known generative model, such as a VAE (variational autoencoder) or a GAN (generative adversarial network).
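 For illustration only, a compact PyTorch sketch of the FIG. 18 topology follows: an encoder up to the branch point (intermediate layer 145), a proficiency head, a plain reconstruction decoder standing in for whatever VAE or GAN decoder is actually used, and the weighted-sum loss L = L_y + λ·L_decode. The layer sizes and names are assumptions:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F_nn

    class SkillMultitaskNet(nn.Module):
        # Encoder to the branch point (intermediate layer 145), a proficiency
        # head (142b), and a decoder (143) reconstructing the sensor data.
        def __init__(self, d_in, d_feat):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(),
                                         nn.Linear(128, d_feat))
            self.skill_head = nn.Sequential(nn.Linear(d_feat, 64), nn.ReLU(),
                                            nn.Linear(64, 1))
            self.decoder = nn.Sequential(nn.Linear(d_feat, 128), nn.ReLU(),
                                         nn.Linear(128, d_in))

        def forward(self, d):
            x = self.encoder(d)                  # implicit feature amount x_i
            return self.skill_head(x), self.decoder(x), x

    def multitask_loss(y_hat, d_hat, y, d, lam=1.0):
        # L = L_y + lam * L_decode: the weighted sum described in the text.
        return F_nn.mse_loss(y_hat.squeeze(-1), y) + lam * F_nn.mse_loss(d_hat, d)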
 Next, the operation of providing a counterfactual explanation will be described. FIG. 19 is a flowchart showing this operation.
 In step S507, the skill data generation unit 302 and the skill determination unit 303 input the data acquired from the data storage unit 301 into the trained model M1 as the implicit feature amount x_i. Since the model learned by the skill data generation unit 302 is configured as shown in FIG. 18, two outputs, the proficiency y_i and the sensor data d_i, are obtained from the implicit feature amount x_i. In step S510, the data storage unit 301 stores the implicit feature amount x_i, the sensor data d_i, the proficiency y_i, and the explicit knowledge S, yielding Table 6.
 FIG. 20 is a diagram showing, in table format (Table 6), an example of the data generated by the explanation presentation device 3 according to Embodiment 3. FIG. 21 visualizes the sensor data shown in FIG. 20, displaying the notably different parts as change #1 and change #2. In step S511, the explanation extraction unit 305 visualizes the sensor data d_i of the counterfactual actions C_1 to C_N in Table 6 of FIG. 20 and extracts presentation information for highlighting the parts that differ notably from the sensor data d_a, such as the change #1 and change #2 parts in FIG. 21. The display 15 shown in FIG. 1 displays an image based on the presentation information.
 As described above, in Embodiment 3 the skill data generation unit 302 performs both the extraction of implicit feature amounts and the restoration of sensor data, and by linking the feature representation used to discriminate proficiency with the sensor data, generates presentation information showing how the sensor data actually changes when explicit knowledge is changed. In Embodiment 3, this correspondence (that is, the correspondence between changes in explicit knowledge and changes in the sensor data) can be used to offer hints on the behavioral changes to be made in order to master a technique.
 Although Embodiment 3 uses a generative model that generates sensor data, an attention mechanism may instead be used as a way of focusing on which part of the sensor data's behavior should be changed.
 FIG. 22 is a diagram showing an example of an Attention Branch Network (ABN). The explanation presentation device 3 according to Embodiment 3 may, for example, as in the ABN shown in FIG. 22, provide an attention mechanism that extracts which of the intermediate feature amounts to focus on, and use this attention mechanism to generate presentation information that highlights the sensor data corresponding to locations with high attention. For this highlighting, the method described in Non-Patent Document 1, for example, can be used.
<< Embodiment 4 >>
 In Embodiment 3, the skill data generation unit 302 is used to confirm the regions where behavior should change (for example, change #1 and change #2). In Embodiment 4, by applying perturbations to the sensor data in regions related to the expert skill, it becomes possible to check how skill levels such as proficiency and machining accuracy are likely to change and to confirm the credibility of the skill level. To realize this, the explanation presentation device 4 according to Embodiment 4 includes a perturbation confirmation unit.
 FIG. 23 is a functional block diagram showing the configuration of the explanation presentation device 4 according to Embodiment 4. As shown in FIG. 23, the explanation presentation device 4 has a data storage unit 401, a skill data generation unit 402 as the feature acquisition unit, a skill determination unit 403, an explicit knowledge linking unit 404, an explanation extraction unit 405 serving as a counterfactual explanation extraction unit, an explicit knowledge selection unit 406, and a perturbation confirmation unit 408. The explanation presentation device 4 according to Embodiment 4 differs from the explanation presentation device 3 according to Embodiment 3 in including the perturbation confirmation unit 408. The data storage unit 401, skill data generation unit 402, skill determination unit 403, explicit knowledge linking unit 404, explanation extraction unit 405, and explicit knowledge selection unit 406 are the same as the data storage unit 301, skill data generation unit 302, skill determination unit 303, explicit knowledge linking unit 304, explanation extraction unit 305, and explicit knowledge selection unit 306 in Embodiment 3, respectively.
 The flow of perturbation confirmation is as follows. First, worker P inputs the sensor data corresponding to the action whose skill the worker wishes to acquire, and explicit knowledge is selected by the explicit knowledge selection unit 406. Receiving this result, the explanation extraction unit 405 changes the search evaluation function F so that only the selected explicit knowledge changes as much as possible, and modifies the implicit feature amounts to generate an implicit feature amount x_i as a counterfactual explanation. The explicit knowledge linking unit 404 obtains the explicit knowledge corresponding to the generated implicit feature amount x_i, the skill determination unit 403 determines the proficiency y_i, and the skill data generation unit 402 reproduces the sensor data d_i as generated data.
 The explanation extraction unit 405 presents worker P with candidates of counterfactual explicit knowledge for satisfying the target proficiency y through the above search. Next, worker P applies a perturbation through the perturbation confirmation unit 408 (that is, alters the input sensor data) to the parts of the presented sensor data that particularly affect proficiency. The perturbation confirmation unit 408 thereby registers the perturbed sensor data in the data storage unit 401, and the skill data generation unit 402 reads the perturbed sensor data, generates the original data, and registers the sensor data in the data storage unit 401.
 The perturbation confirmation unit 408 compares the sensor data input to the skill data generation unit 402 with the sensor data output by the skill data generation unit 402, and performs processing for presenting worker P with the difference between the generated data and the perturbed sensor data. In step with this, the explanation extraction unit 405 extracts presentation information for also presenting how the expert skill has changed as a result of the perturbation applied to the sensor data. The display 15 shown in FIG. 1 displays an image based on the presentation information.
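 Assuming the multitask model sketched under Embodiment 3 (returning the estimated proficiency, the reconstructed sensor data, and the implicit feature amount), the comparison performed by the perturbation confirmation unit could be sketched as follows; the threshold and the names are illustrative:

    import torch

    @torch.no_grad()
    def perturbation_check(model, d_perturbed, threshold=0.1):
        # Feed the perturbed sensor data through the trained multitask model
        # and locate where the reconstruction deviates from the input; large
        # deviations suggest data outside what the trained model can handle.
        y_hat, d_hat, _ = model(d_perturbed)
        diff = (d_hat - d_perturbed).abs()
        return y_hat, diff, diff > threshold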
 As described above, the explanation presentation device 4 according to Embodiment 4 exploits the fact that, when data not covered by the trained model is used, appropriate explanations become hard to generate and the accuracy of the explanations drops markedly, which can be observed by visualizing the behavior of the sensor data generated under perturbation. By this method, the explanation presentation device 4 can determine the allowable range of data that the generated trained model can handle, and can learn how perturbations affect proficiency.
 As a modification of Embodiment 4, when worker P inputs into the perturbation confirmation unit 408 the range of the sensor-data visualization from the explanation extraction unit 405 on which to focus in particular (for example, selecting either change #1 or change #2 in FIG. 21), the explanation extraction unit 405 may search for implicit feature amounts such that only the sensor data of that part changes and the proficiency becomes higher than at present, and may present the swing width of the change in that sensor data.
 As described above, with the explanation presentation device 4 according to Embodiment 4, by presenting the perturbation width of the sensor data, worker P can confirm the behavioral changes permissible for mastering the skill before registering actions again.
<< Embodiment 5 >>
 In Embodiment 1, as shown in FIG. 3, the trained model formed by the learning device, that is, the set consisting of the skill determination unit 103, the feature extraction unit 102, and the explicit knowledge linking unit 104, is a single trained model M1. In contrast, the explanation presentation device 5 according to Embodiment 5 generates a plurality of trained models M1 and locates among them the trained model for a skill close to the one the worker wishes to acquire (that is, a related skill).
 FIG. 24 is a functional block diagram showing the configuration of the explanation presentation device 5 according to Embodiment 5. As shown in FIG. 24, the explanation presentation device 5 has a data storage unit 501, a feature extraction unit 502 as the feature acquisition unit, a skill determination unit 503, an explicit knowledge linking unit 504, an explanation extraction unit 505 serving as a counterfactual explanation extraction unit, an explicit knowledge selection unit 506, and a model priority determination unit 509. The explanation presentation device 5 according to Embodiment 5 differs from the explanation presentation device 1 according to Embodiment 1 in generating a plurality of trained models M1 and in that the model priority determination unit 509 locates, among the plurality of trained models M1, the trained model for a skill close to the one the worker wishes to acquire. In other words, the explanation presentation device 5 has a plurality of sets of the skill determination unit 503, the feature extraction unit 502, and the explicit knowledge linking unit 504, and the model priority determination unit 509 determines the priority of the plurality of sets. That is, the explanation presentation device 5 has a plurality of learning sets, each consisting of the skill determination unit 503, the feature extraction unit 502, and the explicit knowledge linking unit 504, and a model priority determination unit 509 that determines the priority of the plurality of learning sets; the plurality of learning sets acquire sensor data from the database in a time-division manner. The data storage unit 501, feature extraction unit 502, skill determination unit 503, explicit knowledge linking unit 504, explanation extraction unit 505, and explicit knowledge selection unit 506 are the same as the data storage unit 101, feature extraction unit 102, skill determination unit 103, explicit knowledge linking unit 104, explanation extraction unit 105, and explicit knowledge selection unit 106 in Embodiment 1, respectively.
 The model priority determination unit 509 reads the sensor data of worker P registered in the data storage unit 501 and obtains sensor data of the time width suited to each of the plurality of trained models (that is, time-division sensor data).
 FIG. 25 is a diagram showing an example of the time-division sensor data in the explanation presentation device 5 according to Embodiment 5. Since trained model #1 used time data of duration R1 during machine learning, the sensor data is divided into segments of duration R1, advancing the time by t1 per segment. Since trained model #2 used time data of duration R2 during machine learning, the sensor data is divided into segments of duration R2, advancing the time by t2 per segment.
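 The time division of FIG. 25 amounts to a sliding window; a one-function Python sketch (window = R1 or R2, stride = t1 or t2, names assumed) is:

    import numpy as np

    def time_divide(d, window, stride):
        # Split sensor data d of shape (T, channels) into segments of length
        # `window` advanced by `stride`: (R1, t1) for trained model #1 and
        # (R2, t2) for trained model #2 in FIG. 25.
        starts = range(0, len(d) - window + 1, stride)
        return np.stack([d[s:s + window] for s in starts])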
 Next, the model priority determination unit 509 inputs the obtained sensor data of each time width into all of the trained models and checks whether the distribution of the data in the final layer of the neural networks of the skill determination unit 503 and the feature extraction unit 502 deviates from a normal distribution. For this check, the method described in Non-Patent Document 2, for example, can be used.
 In this final layer, the model priority determination unit 509 preferentially selects combinations of trained model and sensor data whose data distribution does not deviate from the normal distribution, and the explanation extraction unit 505 searches the feature space. If worker P has selected in advance, via the explicit knowledge selection unit 506, the explicit knowledge of interest, the model priority determination unit 509 narrows the selection to the trained models whose training data included that explicit knowledge.
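 The exact test of Non-Patent Document 2 is not reproduced here; as one common stand-in for checking deviation from a normal distribution in the final layer, a Gaussian fit with a Mahalanobis-distance threshold can be sketched (the names and the threshold are assumptions):

    import numpy as np

    def fit_final_layer_gaussian(features):
        # Fit a Gaussian to final-layer features collected on training data;
        # features has shape (n, d).
        mu = features.mean(axis=0)
        cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
        return mu, np.linalg.inv(cov)

    def deviates_from_normal(f, mu, cov_inv, threshold):
        # Mahalanobis distance of a new final-layer feature f; distances above
        # the threshold are treated as deviating from the training distribution.
        diff = f - mu
        return float(diff @ cov_inv @ diff) > threshold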
 The model priority determination unit 509 may also input the time-divided sensor data d_t1, d_t2, ..., d_tr into each trained model and select trained models on the basis of how readily the proficiency varies. A trained model whose proficiency does not vary at all may be deemed unrelated, and trained models for which variation in proficiency can be confirmed may be used preferentially.
 Further, as described in Embodiments 3 and 4, the skill data generation units 302 and 402 may be configured to use techniques such as a VAE or a GAN to apply dimensionality reduction so that the distribution of the data output from the intermediate layer becomes a normal distribution and to restore the original sensor data, and the model priority determination unit 509 in Embodiment 5 may determine correlated sets of sensor data and trained model according to whether the data distribution deviates from the normal distribution.
 FIG. 26 is a diagram showing an example of visualized presentation information. First, based on the information in the database accumulated so far, the explanation extraction unit 505 may calculate, by Bayes' rule as shown in Table 7, the probabilities that pieces of explicit knowledge 1 to m are present as factors making the proficiency meet or exceed the target, and may have the display (shown in FIG. 1) show these to worker P, who wishes to acquire the skill, as setting reference values for operating the explicit knowledge selection unit 506. Table 8 of FIG. 26 shows an example of the visualization when worker P, wishing to acquire the skill, selects "Technique 1" as the explicit knowledge in the explicit knowledge selection unit 506. Table 8 lists examples of counterfactual actions C_1 to C_10 whose "Technique 1" value is large and whose proficiency is higher than the current one. For example, in Table 8 of FIG. 26, ten counterfactual action examples are sorted and visualized in descending order of "Technique 1", their proficiencies being higher than the one determined by the skill determination unit 503 from the current worker's sensor data. The third column from the left of Table 8 presents one related skill (S_a, S_b, and so on), that is, a skill highly related to "Technique 1". The sensor data column of Table 8 shows an example in which locations where large changes occur are highlighted with thick-line or colored frames, based on the data generated by the skill data generation unit. If worker P selects no explicit knowledge, the ten counterfactual action examples may instead be sorted in descending order of proficiency, of implicit feature amount correlation, or of a score combining these. The visualization method of explicit knowledge shown in FIG. 26 is one example; explicit knowledge may be visualized based on the degree of overlap of the sensor data, the degree of correlation of the feature amounts, and the like, presenting the connections between pieces of explicit knowledge in a way worker P can understand. The presentation information example shown in FIG. 26 may also be applied to the other embodiments.
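 The Bayes-rule values of Table 7 are not given in closed form in the text; one simple empirical sketch, assuming a binary encoding of explicit knowledge per database record and using our own names, estimates P(explicit knowledge k | proficiency >= target) by counting:

    import numpy as np

    def factor_probability(S, y, y_target):
        # S: (n, m) binary matrix, S[i, k] = 1 if record i includes explicit
        # knowledge k; y: (n,) proficiencies. Empirically, P(S_k | y >= target)
        # is the fraction of target-meeting records that include knowledge k.
        hit = y >= y_target
        if not hit.any():
            return np.zeros(S.shape[1])
        return S[hit].mean(axis=0)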
 As described above, with the explanation presentation device 5 according to Embodiment 5, the operation by which worker P selects, from among the plurality of trained models, the one suited to the input data becomes unnecessary or easy; the explanation presentation device 5 can automatically select the part from which a skill is likely to be extracted and, along with it, the corresponding trained model.
 In addition, when worker P selects explicit knowledge in the explicit knowledge selection unit 506, the explanation presentation device 5 according to Embodiment 5 can select a trained model that incorporates the corresponding explicit knowledge and appears related to the input sensor data. Thus, when a plurality of skills are handled, an appropriate trained model is selected, and explicit knowledge suited to skill acquisition can be presented in addition to the implicit feature amounts.
 1-5 explanation presentation device, 11 processor, 12 memory, 13 storage device, 14 operation device, 15 display, 16 sensor, 20 learning device, 21 data acquisition unit, 22 model generation unit, 23 trained model storage unit, 30 inference device, 31 data acquisition unit, 32 inference unit, 33 trained model storage unit, 101, 201, 301, 401, 501 data storage unit, 102, 202, 502 feature extraction unit (feature acquisition unit), 302, 402 skill data generation unit (feature acquisition unit), 103, 203, 303, 403, 503 skill determination unit, 104, 204, 304, 404, 504 explicit knowledge linking unit, 105, 205, 305, 405, 505 explanation extraction unit, 106, 206, 306, 406, 506 explicit knowledge selection unit, 207 feature comparison unit, 408 perturbation confirmation unit, 509 model priority determination unit.

Claims (15)

  1.  An explanation presentation device comprising:
     a feature acquisition unit that acquires a feature amount of an action of a worker from a database that stores sensor data obtained by detecting the action of the worker and explicit knowledge, which is acquired knowledge interpretable by humans;
     a skill determination unit that determines a skill level of the worker from the feature amount and registers the skill level in the database;
     an explicit knowledge linking unit that links explicit knowledge to the feature amount in the database; and
     an explanation extraction unit that extracts presentation information including the feature amount and the explicit knowledge linked to the feature amount.
  2.  The explanation presentation device according to claim 1, wherein the feature acquisition unit and the skill determination unit generate a first trained model for determining the skill level from the sensor data, using first training data including the sensor data and correct-answer signals for the sensor data.
  3.  The explanation presentation device according to claim 2, wherein the feature acquisition unit and the skill determination unit determine the skill level from the sensor data using the first trained model.
  4.  The explanation presentation device according to any one of claims 1 to 3, wherein the feature acquisition unit, the skill determination unit, and the explicit knowledge linking unit generate a second trained model for outputting the explicit knowledge linked to the feature amount, using second training data including the feature amount and the skill level.
  5.  The explanation presentation device according to claim 4, wherein the feature acquisition unit, the skill determination unit, and the explicit knowledge linking unit output the explicit knowledge using the second trained model.
  6.  The explanation presentation device according to any one of claims 1 to 5, wherein the skill level includes at least one of proficiency and machining accuracy.
  7.  The explanation presentation device according to any one of claims 1 to 6, wherein the sensor data includes video data.
  8.  The explanation presentation device according to any one of claims 1 to 7, further comprising a display that displays an image based on the presentation information.
  9.  The explanation presentation device according to any one of claims 1 to 8, further comprising a feature comparison unit that calculates a distance between an implicit feature amount obtained from the sensor data of the worker and other implicit feature amounts obtained from sensor data of other workers and stores the distance in the database in advance,
     wherein the feature acquisition unit acquires the feature amount based on the distance.
  10.  The explanation presentation device according to claim 9, wherein the feature acquisition unit acquires a limited number of feature amounts extracted based on the distance.
  11.  The explanation presentation device according to any one of claims 1 to 10, wherein the feature acquisition unit is a skill data generation unit that acquires the feature amount and reproduces the original sensor data from the feature amount as generated data, and
     the explanation extraction unit makes the presentation information a display that highlights a range in which the difference between the sensor data and the generated data is large.
  12.  The explanation presentation device according to any one of claims 1 to 10, further comprising a perturbation confirmation unit that alters the sensor data input to the database and stores the perturbed sensor data in the database,
     wherein the feature acquisition unit is a skill data generation unit that acquires the feature amount and reproduces the original sensor data from the feature amount as generated data, and
     the explanation extraction unit presents a difference between the generated data and the perturbed sensor data.
  13.  The explanation presentation device according to any one of claims 1 to 12, comprising:
     a plurality of learning sets, each consisting of the skill determination unit, the feature acquisition unit, and the explicit knowledge linking unit; and
     a model priority determination unit that determines a priority of the plurality of learning sets,
     wherein the plurality of learning sets acquire sensor data from the database in a time-division manner.
  14.  An explanation presentation method executed by an explanation presentation device, the method comprising:
     acquiring a feature amount of an action of a worker from a database that stores sensor data obtained by detecting the action of the worker and explicit knowledge, which is acquired knowledge interpretable by humans;
     determining a skill level of the worker from the feature amount and registering the skill level in the database;
     linking explicit knowledge to the feature amount in the database; and
     extracting presentation information including the feature amount and the explicit knowledge linked to the feature amount.
  15.  An explanation presentation program that causes a computer to execute:
     acquiring a feature amount of an action of a worker from a database that stores sensor data obtained by detecting the action of the worker and explicit knowledge, which is acquired knowledge interpretable by humans;
     determining a skill level of the worker from the feature amount and registering the skill level in the database;
     linking explicit knowledge to the feature amount in the database; and
     extracting presentation information including the feature amount and the explicit knowledge linked to the feature amount.
PCT/JP2020/030891 2020-08-14 2020-08-14 Explanation presentation device, explanation presentation method, and explanation presentation program WO2022034685A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022542564A JP7158633B2 (en) 2020-08-14 2020-08-14 Explanation presentation device, explanation presentation method, and explanation presentation program
PCT/JP2020/030891 WO2022034685A1 (en) 2020-08-14 2020-08-14 Explanation presentation device, explanation presentation method, and explanation presentation program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/030891 WO2022034685A1 (en) 2020-08-14 2020-08-14 Explanation presentation device, explanation presentation method, and explanation presentation program

Publications (1)

Publication Number Publication Date
WO2022034685A1 true WO2022034685A1 (en) 2022-02-17

Family

ID=80247072

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/030891 WO2022034685A1 (en) 2020-08-14 2020-08-14 Explanation presentation device, explanation presentation method, and explanation presentation program

Country Status (2)

Country Link
JP (1) JP7158633B2 (en)
WO (1) WO2022034685A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002333826A (en) * 2001-05-10 2002-11-22 Nec Corp Skill improvement support device
JP2017146577A (en) * 2016-02-15 2017-08-24 日本電信電話株式会社 Technical support device, method, program and system
JP2020034849A (en) * 2018-08-31 2020-03-05 オムロン株式会社 Work support device, work support method, and work support program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TATSUTA, RIKI ET AL.: "Agricultural Support Considering Physical Behavior", PROCEEDINGS OF FIT 2016, 15TH IPSJ FORUM, August 2016 (2016-08-01), pages 261 - 262 *

Also Published As

Publication number Publication date
JPWO2022034685A1 (en) 2022-02-17
JP7158633B2 (en) 2022-10-21

Similar Documents

Publication Publication Date Title
Chapfuwa et al. Adversarial time-to-event modeling
US9721221B2 (en) Skill estimation method in machine-human hybrid crowdsourcing
Yang et al. Modeling task complexity in crowdsourcing
US20180232805A1 (en) User credit rating method and apparatus, and storage medium
CN110070391B (en) Data processing method and device, computer readable medium and electronic equipment
Tang et al. An exploratory analysis of the latent structure of process data via action sequence autoencoders
US11068285B2 (en) Machine-learning models applied to interaction data for determining interaction goals and facilitating experience-based modifications to interface elements in online environments
Dantas et al. Effort estimation in agile software development: An updated review
CN114254208A (en) Identification method of weak knowledge points and planning method and device of learning path
JP2021523509A (en) Expert Report Editor
Shen et al. A scenario-driven decision support system for serious crime investigation
US11238518B2 (en) Customized predictive financial advisory for a customer
WO2020206172A1 (en) Confidence evaluation to measure trust in behavioral health survey results
JP4447552B2 (en) Information providing method and apparatus, program, and computer-readable recording medium
Malik et al. When Does Beauty Pay? A Large-Scale Image-Based Appearance Analysis on Career Transitions
WO2022034685A1 (en) Explanation presentation device, explanation presentation method, and explanation presentation program
JP2019003408A (en) Evaluation method, computer, and program for hyperparameter
US20200081591A1 (en) Determining content values to render in a computer user inteface based on user feedback and information
WO2022226890A1 (en) Disease prediction method and apparatus, electronic device, and computer-readable storage medium
WO2020250810A1 (en) Information processing device, information processing method, and program
CN102959560A (en) Automatic appeal measurement method
Ajayi Interactive data visualization in accounting contexts: Impact on user attitudes, information processing, and decision outcomes
Arunkumar et al. Real-time visual feedback to guide benchmark creation: A human-and-metric-in-the-loop workflow
WO2021106111A1 (en) Learning device, inference device, learning method, inference method, and program
Zhang New funding and pricing mechanism in alternative and sustainable finance: the role of non-financial factors

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20949547; Country of ref document: EP; Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 2022542564; Country of ref document: JP; Kind code of ref document: A
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20949547; Country of ref document: EP; Kind code of ref document: A1