WO2024029411A1 - Work feature amount display device, work feature amount display method, and work feature amount display program - Google Patents

Work feature amount display device, work feature amount display method, and work feature amount display program

Info

Publication number
WO2024029411A1
Authority
WO
WIPO (PCT)
Prior art keywords
work
feature amount
worker
feature
unit
Prior art date
Application number
PCT/JP2023/027284
Other languages
French (fr)
Japanese (ja)
Inventor
健太 西行
Original Assignee
オムロン株式会社
Priority date
Filing date
Publication date
Application filed by オムロン株式会社
Publication of WO2024029411A1 publication Critical patent/WO2024029411A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion

Definitions

  • The disclosed technology relates to a work feature amount display device, a work feature amount display method, and a work feature amount display program.
  • Japanese Patent Laid-Open No. 2022-077870 discloses a motion recognition system comprising: an input receiving unit that plays back and displays a first moving image in which a person performing a specific action is photographed, and that accepts, during playback, an instruction input indicating a position related to the specific action on a frame of the first moving image being played back; and a determination unit that calculates the degree of similarity between the first moving image and a second moving image based on the result of comparing feature amounts between a second feature point group including one or more feature points included in the second moving image and a first feature point group including two or more feature points included in the first moving image, and that determines, based on the calculated similarity, whether the specific action is included in the second moving image. When calculating the similarity, the determination unit assigns a larger weight to the feature amount comparison result for feature points in the first feature point group that are closer to the position indicated by the instruction input.
  • Japanese Patent Laid-Open No. 2022-077870 thus discloses a technology that performs motion recognition with high accuracy by increasing the weight of a portion related to a specific motion specified by the user in the process of recognizing a person's motion.
  • However, with the technology described in Japanese Patent Laid-Open No. 2022-077870, it is difficult for a user without expertise in motion recognition to specify an appropriate portion that contributes to improving accuracy, making the technology difficult to use for motion analysis of work.
  • The disclosed technology has been made in view of the above points, and its purpose is to provide a work feature amount display device, a work feature amount display method, and a work feature amount display program that can select and display worker feature amounts suitable for motion analysis of work.
  • A first aspect of the disclosure is a work feature amount display device including: a feature amount acquisition unit that acquires a feature amount for each part of a worker when the worker performs a predetermined series of tasks; and a control unit that performs control so that the feature amount of a selected part selected from among the parts of the worker is displayed on a display unit.
  • A second aspect of the disclosure, in the first aspect, further includes a reduction unit that reduces the feature amount to one-dimensional series data, and the control unit performs control so that the series data is displayed on the display unit.
  • A third aspect of the disclosure, in the first or second aspect, includes a worker reception unit that receives a selection of the worker.
  • A fourth aspect of the disclosure, in any of the first to third aspects, includes a selected part reception unit that receives the selected part.
  • A fifth aspect of the disclosure, in any one of the first to third aspects, includes: a standard work time acquisition unit that acquires a standard work time for the series of tasks; a peak detection unit that detects, for each part, the positions of peaks of the feature amount; and a work time calculation unit that calculates, for each part, a work time for the series of tasks based on the intervals between the detected peaks. The control unit performs control so that the feature amount of the selected part for which the difference between the standard work time and the calculated work time is smallest is displayed on the display unit.
  • In a sixth aspect of the disclosure, in the fifth aspect, the work time calculation unit sets the median value of the intervals between the detected peaks as the work time.
  • A seventh aspect of the disclosure is a work feature amount display method in which a computer executes a process including: acquiring a feature amount for each part of a worker when the worker performs a predetermined series of tasks; and controlling so that the feature amount of a selected part selected from among the parts of the worker is displayed on a display unit.
  • An eighth aspect of the disclosure is a work feature amount display program that causes a computer to execute a process including: acquiring a feature amount for each part of a worker when the worker performs a predetermined series of tasks; and controlling so that the feature amount of a selected part selected from among the parts of the worker is displayed on a display unit.
  • FIG. 1 is a configuration diagram of a work feature amount display system.
  • FIG. 2 is a configuration diagram showing the hardware configuration of a work feature amount display device.
  • FIG. 3 is a functional block diagram of a work feature amount display device.
  • FIGS. 4 to 6 are diagrams each showing an example of a display screen.
  • FIG. 7 is a diagram for explaining estimation of a starting point.
  • FIG. 8 is a flowchart of work feature amount display processing.
  • FIG. 1 shows the configuration of a work feature amount display system 10.
  • The work feature amount display system 10 includes a work feature amount display device 20 and a camera 30.
  • The work feature amount display device 20 is a device that displays motion feature amounts calculated based on moving images captured by the camera 30.
  • The worker W takes out a work object M placed on the workbench TB and performs a predetermined series of tasks in the work space S.
  • The series of tasks performed by the worker W includes various operations in one work cycle, such as grasping, transporting, assembling, inspecting, tightening screws with a screwdriver, and attaching labels to components.
  • The camera 30 is a photographing device capable of capturing, for example, RGB color moving images.
  • The camera 30 is installed at a position from which the movement of the worker W and the entire workbench TB can be easily recognized.
  • FIG. 2 is a block diagram showing the hardware configuration of the work feature amount display device 20 according to the present embodiment.
  • The work feature amount display device 20 includes a controller 21.
  • The controller 21 is configured as a device including a general-purpose computer.
  • The controller 21 includes a CPU (Central Processing Unit) 21A, a ROM (Read Only Memory) 21B, a RAM (Random Access Memory) 21C, and an input/output interface (I/O) 21D.
  • The CPU 21A, ROM 21B, RAM 21C, and I/O 21D are connected to one another via a bus 21E.
  • The bus 21E includes a control bus, an address bus, and a data bus.
  • An operation unit 22, a display unit 23, a communication unit 24, and a storage unit 25 are connected to the I/O 21D.
  • The operation unit 22 includes, for example, a mouse and a keyboard.
  • The display unit 23 is composed of, for example, a liquid crystal display.
  • The communication unit 24 is an interface for performing data communication with an external device such as the camera 30.
  • The storage unit 25 is composed of a nonvolatile external storage device such as a hard disk. As shown in FIG. 2, the storage unit 25 stores a work feature amount display program 25A, a motion feature database 25B, and the like.
  • The CPU 21A is an example of a computer.
  • The term computer here refers to a processor in a broad sense, and may be a general-purpose processor (e.g., a CPU) or a dedicated processor (e.g., a GPU (Graphics Processing Unit), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), a programmable logic device, or the like).
  • The work feature amount display program 25A may be provided by being stored in a non-volatile, non-transitory recording medium or distributed via a network, and installed on the work feature amount display device 20 as appropriate.
  • Examples of non-volatile, non-transitory recording media include a CD-ROM (Compact Disc Read Only Memory), a magneto-optical disk, an HDD (Hard Disk Drive), a DVD-ROM (Digital Versatile Disc Read Only Memory), a flash memory, and a memory card.
  • FIG. 3 is a block diagram showing the functional configuration of the CPU 21A of the work feature amount display device 20.
  • The CPU 21A functionally includes a motion feature acquisition unit 40, a control unit 41, a reduction unit 42, a worker reception unit 43, a selected part reception unit 44, a standard work time acquisition unit 45, a peak detection unit 46, and a work time calculation unit 47.
  • The CPU 21A functions as each functional unit shown in FIG. 3 by reading and executing the work feature amount display program 25A stored in the storage unit 25.
  • The motion feature acquisition unit 40 acquires the motion features for each part of the worker W when the worker W performs a predetermined series of tasks by reading them from the motion feature database 25B stored in advance in the storage unit 25. The motion feature database 25B is a database that stores motion features for each part and for each worker.
  • The motion feature amount can be calculated, for example, as a motion vector series based on a moving image, captured by the camera 30, of the worker W performing the predetermined series of tasks. Note that instead of the camera 30, the motion feature amount of the worker W may be acquired using a motion sensor or the like.
  • The parts of the worker W are parts including the joints that make up the skeleton of the worker W, such as the face, neck, shoulders, elbows, wrists, hips, knees, and ankles.
  • The posture of the worker W is estimated based on the moving image, and the estimated posture is converted into a skeletal sequence.
  • The skeletal sequence is time-series data that includes the coordinates of feature points, such as body parts and joints of the worker W, and labels representing the body parts of the feature points.
  • The feature points include points on the face of the worker W, such as the eyes and nose, and joints such as the neck, shoulders, elbows, wrists, hips, knees, and ankles.
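As described above, a skeletal sequence is time-series data of labeled feature-point coordinates. The sketch below shows one possible in-memory layout; the class names, part labels, and fields are illustrative assumptions, not taken from the publication (the actual label set and numbering depend on the pose estimator used):

```python
from dataclasses import dataclass

# Part labels for feature points (illustrative; a real estimator such as
# OpenPose defines its own keypoint set and ordering).
PART_LABELS = ["nose", "neck", "r_shoulder", "r_elbow", "r_wrist",
               "l_shoulder", "l_elbow", "l_wrist", "hip"]

@dataclass
class Keypoint:
    x: float
    y: float
    label: str          # body-part label of the feature point
    confidence: float   # estimator confidence; 0.0 if estimation failed

# A skeletal sequence is time-series data: one list of keypoints per frame.
SkeletalSequence = list[list[Keypoint]]

def make_frame(coords: list[tuple[float, float, float]]) -> list[Keypoint]:
    """Build one frame from (x, y, confidence) triples, one per part."""
    return [Keypoint(x, y, PART_LABELS[i], c)
            for i, (x, y, c) in enumerate(coords)]
```

A sequence of such frames is what the later conversion and preprocessing steps operate on.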
  • OpenPose uses a trained model that takes a moving image as input and outputs a skeletal sequence, trained using a large number of moving images as training data.
  • As a learning method for obtaining such a trained model, a known method such as a CNN (Convolutional Neural Network) is used, for example.
  • This embodiment uses a method called Motion Retargeting, described in Reference 2 below, to convert the skeletal sequence obtained from the moving image into a motion vector series representing motion feature amounts.
  • In Motion Retargeting, a skeletal sequence is input, and an encoder outputs feature vectors of three components: motion, body shape, and camera viewpoint.
  • In this embodiment, only the feature vector of the motion component is used.
  • The skeletal sequence may be subjected to three preprocessing steps before conversion to a motion vector series: time-series interpolation, time-series smoothing, and lower-body interpolation.
  • In the time-series interpolation, if there is a joint point for which pose estimation has failed, the corresponding joint point of the previous frame is copied.
  • In the time-series smoothing, the series data is smoothed using a Gaussian filter in order to remove pose-estimation noise.
  • OpenPose, used in this embodiment, estimates not only the posture of a person's upper body but also that of the lower body. In factory work, however, workers often work at a desk-height workbench, so the lower body is often occluded by the desk, resulting in missing lower-body joint points.
  • Encoders that extract motion features, such as the one in Motion Retargeting, take the skeletal sequence of a person's entire body as input, so if lower-body joint points are missing, the motion-component feature vector cannot be output properly.
  • Therefore, interpolation processing for the lower body may be performed. Specifically, the joint points of at least one of both knees and both legs may be interpolated with a length proportional to the length of the person's torso.
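The three preprocessing steps above can be sketched as follows. This is a minimal illustration under assumed array shapes; the proportionality constant for the knee position is a placeholder, and the publication does not prescribe these exact implementations:

```python
import numpy as np

def interpolate_missing(seq, valid):
    """Time-series interpolation: where pose estimation failed
    (valid[t, j] is False), copy the joint point from the previous frame.
    seq: (T, J, 2) joint coordinates; valid: (T, J) boolean mask."""
    out = seq.astype(float).copy()
    for t in range(1, len(out)):
        missing = ~valid[t]
        out[t, missing] = out[t - 1, missing]
    return out

def smooth(seq, sigma=2.0):
    """Time-series smoothing: apply a Gaussian filter along the time axis
    to remove pose-estimation noise."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(seq, ((radius, radius), (0, 0), (0, 0)), mode="edge")
    out = np.empty(seq.shape, dtype=float)
    for j in range(seq.shape[1]):
        for c in range(seq.shape[2]):
            out[:, j, c] = np.convolve(padded[:, j, c], kernel, mode="valid")
    return out

def interpolate_knee(neck, hip, ratio=0.5):
    """Lower-body interpolation: place an occluded knee joint below the hip
    at a distance proportional to the torso (neck-to-hip) length.
    The ratio 0.5 is an assumed constant, not taken from the publication."""
    torso_len = np.linalg.norm(np.asarray(hip) - np.asarray(neck))
    return np.asarray(hip) + np.array([0.0, ratio * torso_len])
```

In practice a library filter (e.g. a Gaussian filter from an image-processing package) would replace the hand-rolled convolution; it is written out here only to keep the sketch self-contained.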
  • The motion features of each part calculated in this way are stored in advance in the storage unit 25, as the motion feature database 25B, for each worker.
  • The control unit 41 performs control so that the motion feature amount of the selected part selected from among the parts of the worker W is displayed on the display unit 23.
  • The reduction unit 42 reduces the motion features acquired by the motion feature acquisition unit 40 to one-dimensional series data.
  • The control unit 41 performs control so that the reduced one-dimensional series data is displayed on the display unit 23. Note that various known methods can be used as the reduction method.
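The publication leaves the reduction method open ("various known methods"). As one hedged example, the sketch below projects a multi-dimensional motion feature series onto its first principal component (PCA) to obtain one-dimensional series data; this is an assumed choice, not the method named by the publication:

```python
import numpy as np

def reduce_to_1d(features):
    """Reduce a (T, D) motion feature series to one-dimensional series data
    by projecting onto the first principal component."""
    centered = features - features.mean(axis=0)
    # The first right-singular vector of the centered data is the
    # direction of maximum variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]
```

The resulting (T,) series is what would be plotted as the graph G in the display area R2.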
  • The worker reception unit 43 receives, from among a plurality of workers, the worker whose motion feature amounts are to be displayed.
  • The selected part reception unit 44 receives the selected part selected from among the parts of the worker.
  • FIG. 4 shows an example of a display screen of motion feature amounts displayed on the display unit 23.
  • The display screen D shown in FIG. 4 includes: a pull-down menu M1 for accepting, from among a plurality of workers, the worker whose motion feature amounts are to be displayed; a pull-down menu M2 for accepting the selected part whose motion feature amounts are to be displayed from among the parts of the worker; a display area R1 for checking the part numbers assigned to the parts of the worker; and a display area R2 for displaying the reduced one-dimensional series data as a graph.
  • In the pull-down menu M1, the user can select a desired worker from among the plurality of workers whose motion features are stored in the motion feature database 25B.
  • The example in FIG. 4 shows a state in which the worker with worker ID "A_02" is selected.
  • In the pull-down menu M2, the user can select a desired part from among the parts of the worker selected in the pull-down menu M1.
  • In the display area R1, the part numbers assigned in advance to the parts of the worker are displayed.
  • The user refers to the display area R1 and selects a desired part number from the pull-down menu M2.
  • The example in FIG. 4 shows a state in which the set of part numbers [5, 6, 7] is selected, that is, a state in which the left arm is selected.
  • The set of part numbers may be composed of a plurality of part numbers or a single part number.
  • While a plurality of predetermined sets of part numbers are displayed in the pull-down menu M2, the user may also be able to directly input an arbitrary part number.
  • In another example, [2, 3, 4, 5, 6, 7] is selected as the part numbers, that is, the right arm and the left arm are selected. Looking at the graph G in the display area R2, it can be confirmed that the variation in the intervals between the peaks of the motion feature amount is relatively small. The user can therefore understand that the motion features of the parts represented by [2, 3, 4, 5, 6, 7] are suitable for analyzing the work of worker "A_02".
  • In yet another example, [0, 1, 2, 3, 4, 5, 6, 7, 8] is selected as the part numbers. Looking at the graph in the display area R2, it can be seen that the variation in the peak intervals of the motion feature amount is relatively large. The user can therefore understand that the motion features of the parts represented by [0, 1, 2, 3, 4, 5, 6, 7, 8] are not suitable for analyzing the work of worker "A_02".
  • Alternatively, a part suitable for the work analysis may be automatically selected, and the motion feature amount of the selected part may be displayed in the display area R2.
  • The standard work time acquisition unit 45 acquires the standard work time for the series of tasks, for example, by reading it from the storage unit 25.
  • The standard work time can be obtained, for example, by measuring the work time when a standard worker performs the series of tasks, and is stored in the storage unit 25 in advance.
  • Alternatively, the standard work time may be acquired by having the user input it on the display screen D.
  • The peak detection unit 46 detects, for each part, the positions of peaks of the motion feature amount. Note that various known methods can be used to detect the peaks.
  • FIG. 7 shows an example of peak position detection. In the example of FIG. 7, peaks P1 to P6 are detected.
  • The work time calculation unit 47 calculates, for each part, the work time of the series of tasks based on the intervals between the detected peaks. At this time, the work time calculation unit 47 may take the median value of the intervals between the detected peaks as the work time. For example, in the example of FIG. 7, if the appearance times of peaks P1 to P6 are [100, 950, 1300, 1800, 2650, 3500], the intervals between adjacent peaks are [850, 350, 500, 850, 850]. In this case, the median of the intervals between adjacent peaks is 850, which is taken as the work time for one cycle of the series of tasks. By setting the median of the intervals between adjacent peaks as the work time in this way, it is possible to eliminate the influence of noise peaks, such as peak P3 shown in FIG. 7, and calculate the work time with high accuracy.
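The median-of-intervals calculation described above can be sketched as follows; the local-maximum peak detector is a minimal stand-in for whichever known peak-detection method is actually used:

```python
import numpy as np

def detect_peaks(series):
    """Detect peak positions as simple local maxima (a minimal stand-in
    for a known peak-detection method)."""
    s = np.asarray(series)
    return np.where((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))[0] + 1

def work_time_from_peaks(peak_times):
    """Take the median of the intervals between adjacent peaks as the
    work time for one cycle, suppressing the influence of noise peaks."""
    return float(np.median(np.diff(peak_times)))
```

With the FIG. 7 values, `work_time_from_peaks([100, 950, 1300, 1800, 2650, 3500])` computes the intervals [850, 350, 500, 850, 850] and returns their median, 850, so the spurious short intervals around the noise peak do not affect the result.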
  • The control unit 41 performs control so that the motion feature amount of the selected part for which the difference between the standard work time acquired by the standard work time acquisition unit 45 and the work time calculated by the work time calculation unit 47 is smallest is displayed in the display area R2. At this time, the part number of the selected part is displayed in the pull-down menu M2, or the color of the part number of the selected part is made different from the other part numbers in the display area R1, so that the user can easily see which part has been selected. This eliminates the need for the user to select various parts from the pull-down menu M2, display the graph G, and judge whether the motion features are suitable for analysis of the work; the user can therefore quickly grasp the part suitable for analysis of the work.
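The automatic selection described here reduces to an argmin over parts of the absolute difference from the standard work time. A minimal sketch (the dictionary keys are hypothetical part-number sets, used only for illustration):

```python
def select_part(standard_time, part_work_times):
    """Return the part whose calculated work time has the smallest
    absolute difference from the standard work time."""
    return min(part_work_times,
               key=lambda p: abs(part_work_times[p] - standard_time))
```

For example, with a standard work time of 850, `select_part(850, {"[5, 6, 7]": 860.0, "[0-8]": 1200.0})` returns `"[5, 6, 7]"`, whose motion feature amount would then be displayed in the display area R2.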
  • In step S100, the CPU 21A determines whether a worker has been selected from the pull-down menu M1. If a worker has been selected, the process moves to step S101; otherwise, the process waits until a worker is selected.
  • In step S101, the CPU 21A determines whether a part has been selected from the pull-down menu M2. If a part has been selected, the process moves to step S102; otherwise, the process moves to step S105.
  • In step S102, the CPU 21A acquires the motion feature amount of the part selected in step S101 for the worker selected in step S100 by reading it from the motion feature database 25B.
  • In step S103, the CPU 21A reduces the motion feature amount acquired in step S102 to a one-dimensional motion feature amount.
  • In step S104, the CPU 21A displays the motion feature amount reduced to one dimension in step S103 as a graph G in the display area R2 of the display screen D.
  • In this way, the user can display the motion feature amount in the display area R2 by selecting a worker and a part, and can judge whether the motion feature amount of the selected part is suitable for analysis of the work by checking its periodicity.
  • In step S105, the CPU 21A determines whether the automatic button B has been pressed. If the automatic button B has been pressed, the process moves to step S106; otherwise, the process moves to step S101.
  • In step S106, the CPU 21A acquires the standard work time of the series of tasks by reading it from the storage unit 25.
  • In step S107, the CPU 21A detects the peak positions of the motion feature amount for each part. Specifically, for each part of the worker selected in step S100, the motion features are read from the motion feature database 25B and reduced to one-dimensional motion features, and the peak positions are detected based on the reduced motion features.
  • In step S108, the CPU 21A calculates the work time of the series of tasks for each part of the worker selected in step S100, based on the intervals between the peaks detected in step S107.
  • In step S109, the CPU 21A compares the standard work time acquired in step S106 with the work time for each part calculated in step S108. The motion feature amount of the part with the smallest difference between the standard work time and the calculated work time is then displayed in the display area R2.
  • In this way, the user does not have to select various parts and display their motion feature amounts to check the periodicity, and can quickly grasp the part suitable for analysis of the work.
  • As described above, in this embodiment, control is performed so that, among the motion feature amounts of each part of the worker when the worker performs a predetermined series of tasks, the motion feature amount of a selected part selected from among the parts of the worker is displayed on the display unit 23. Thereby, it is possible to display the worker's motion feature amounts suitable for motion analysis of the work.
  • In the above embodiment, a motion feature amount calculated based on a moving image of the worker working is reduced to a one-dimensional motion feature amount and displayed.
  • The motion feature amount may instead be sound recorded by a microphone installed at the place where the worker is working, because the periodicity of the work can sometimes be understood from the operating sounds of machines operated by the worker.
  • Acceleration data detected by an acceleration sensor such as a motion sensor may also be used as the motion feature amount.
  • A motion feature amount that is a combination of these motion feature amounts may also be used.
  • In the above embodiment, control is performed so that the motion feature amount of the selected part selected from among the parts of the worker is displayed on the display unit 23.
  • However, control may instead be performed so that other feature amounts, such as a skeletal coordinate series of the worker or a velocity vector series of the skeletal coordinates, are displayed on the display unit 23.
  • The user may also be able to select the feature amount to be displayed on the display unit 23 from among the motion feature amount, the skeletal coordinate series, and the velocity vector series of the skeletal coordinates.
  • The work feature amount display processing, which in the above embodiment is executed by the CPU reading software (a program), may be executed by various processors other than the CPU.
  • Examples of such processors include a PLD (Programmable Logic Device) whose circuit configuration can be changed after manufacturing, such as an FPGA (Field-Programmable Gate Array), and a dedicated electric circuit, which is a processor having a circuit configuration designed exclusively for executing specific processing, such as an ASIC (Application Specific Integrated Circuit).
  • The work feature amount display processing may be executed by one of these various processors, or by a combination of two or more processors of the same type or different types (for example, multiple FPGAs, or a combination of a CPU and an FPGA).
  • More specifically, the hardware structure of these various processors is an electric circuit in which circuit elements such as semiconductor elements are combined.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Factory Administration (AREA)
  • Image Analysis (AREA)

Abstract

This work feature amount display device comprises: a feature amount acquisition unit that acquires the feature amount for each body part of a worker when the worker carries out a series of predetermined work; and a control unit that controls a display unit to display the feature amount of a selected body part selected from among the body parts of the worker.

Description

作業特徴量表示装置、作業特徴量表示方法、及び作業特徴量表示プログラムWork feature display device, work feature display method, and work feature display program
 開示の技術は、作業特徴量表示装置、作業特徴量表示方法、及び作業特徴量表示プログラムに関する。 The disclosed technology relates to a work feature display device, a work feature display method, and a work feature display program.
 特開2022-077870号公報には、特定の動作を行う人物が撮影された第1の動画像を再生表示し、再生表示された前記第1の動画像のフレーム上に、前記特定の動作に関連する位置を指示する指示入力を再生表示中に受け付ける入力受け付け部と、第2の動画像に含まれる1以上の特徴点を含む第2特徴点群と、前記第1の動画像に含まれる2以上の特徴点を含む第1特徴点群との間における特徴量の比較結果に基づいて、前記第1の動画像と前記第2の動画像との類似度を算出し、算出された前記類似度に基づいて前記第2の動画像に前記特定の動作が含まれるか否かを判定する判定部であって、前記類似度の算出の際、前記第1特徴点群のうち前記指示入力による指示位置に近い特徴点ほど特徴量の比較結果に対して大きな重みを付与する前記判定部と、を有する動作認識システムが開示されている。 Japanese Patent Laid-Open No. 2022-077870 discloses that a first moving image in which a person performing a specific action is photographed is played back and displayed, and a frame corresponding to the specific action is displayed on the frame of the first moving image that is played back and displayed. an input receiving unit that accepts an instruction input indicating a related position during playback and display; a second feature point group including one or more feature points included in the second moving image; and a second feature point group including one or more feature points included in the first moving image. The degree of similarity between the first moving image and the second moving image is calculated based on the comparison result of the feature amount with a first feature point group including two or more feature points, and the calculated similarity between the first moving image and the second moving image is calculated. a determination unit that determines whether the specific motion is included in the second moving image based on a degree of similarity; Disclosed is a motion recognition system including the determination unit that assigns a larger weight to the comparison result of the feature amount as the feature point is closer to the indicated position.
 上記特開2022-077870号公報記載には、人物の動作を認識する処理において、ユーザーが指定した特定の動作に関連する箇所の重みを大きくすることで高精度に動作認識を行う技術が開示されている。 The above-mentioned Japanese Patent Laid-Open No. 2022-077870 discloses a technology that performs motion recognition with high accuracy by increasing the weight of a part related to a specific motion specified by a user in the process of recognizing a person's motion. ing.
 しかしながら、上記特開2022-077870号公報記載の技術では、動作認識に知見のないユーザーが精度向上に寄与する適切な箇所を指定することが難しく、作業の動作解析に用いるのは困難である、という問題があった。 However, with the technology described in Japanese Patent Application Laid-open No. 2022-077870, it is difficult for a user who has no knowledge of motion recognition to specify an appropriate location that contributes to improving accuracy, and it is difficult to use it for motion analysis of work. There was a problem.
 開示の技術は、上記の点に鑑みてなされたものであり、作業の動作解析に適した作業者の特徴量を選択して表示することができる作業特徴量表示装置、作業特徴量表示方法、及び作業特徴量表示プログラムを提供することを目的とする。 The disclosed technology has been made in view of the above points, and provides a work feature display device, a work feature display method, and a work feature display method that can select and display worker feature values suitable for work motion analysis. The purpose of the present invention is to provide a work feature value display program.
 開示の第1態様は、作業特徴量表示装置であって、作業者が予め定めた一連の作業を行ったときの前記作業者の部位毎の特徴量を取得する特徴量取得部と、前記作業者の各部位の中から選択された選択部位の特徴量が表示部に表示されるように制御する制御部と、を備える。 A first aspect of the disclosure is a work feature amount display device, which includes a feature amount acquisition unit that acquires a feature amount for each part of the worker when the worker performs a predetermined series of tasks; and a control unit that controls the display unit to display the feature amount of the selected part selected from among the parts of the person.
 開示の第2態様は、上記第1態様において、前記特徴量を一次元の系列データに縮約する縮約部を備え、前記制御部は、前記系列データが前記表示部に表示されるように制御する。 A second aspect of the disclosure is the first aspect, further comprising a reduction unit that reduces the feature amount to one-dimensional series data, and the control unit controls the control unit so that the series data is displayed on the display unit. Control.
 開示の第3態様は、上記第1態様又は第2態様において、前記作業者を受け付ける作業者受付部を備える。 A third aspect of the disclosure, in the first aspect or the second aspect, includes a worker reception unit that receives the worker.
 開示の第4態様は、上記第1~第3態様の何れかの態様において、前記選択部位を受け付ける選択部位受付部を備える。 A fourth aspect of the disclosure, in any of the first to third aspects described above, includes a selected site receiving section that receives the selected site.
 開示の第5態様は、上記第1~第3態様の何れかの態様において、前記一連の作業の標準作業時間を取得する標準作業時間取得部と、前記部位毎に、前記特徴量のピークの位置を検出するピーク検出部と、前記部位毎に、検出した各ピークの間隔に基づいて、前記一連の作業の作業時間を算出する作業時間算出部と、を備え、前記制御部は、前記標準作業時間と、前記作業時間と、の差が最も小さくなる選択部位の特徴量が前記表示部に表示されるように制御する。 A fifth aspect of the disclosure is, in any one of the first to third aspects, a standard working time acquisition unit that acquires a standard working time for the series of tasks, and a standard working time acquisition unit that acquires a standard working time for the series of tasks, and a The control unit includes a peak detection unit that detects a position, and a work time calculation unit that calculates a work time for the series of work based on the interval between each detected peak for each part, and the control unit Control is performed so that the feature amount of the selected region with which the difference between the working time and the working time is the smallest is displayed on the display unit.
 開示の第6態様は、上記第5態様において、前記作業時間算出部は、検出した各ピークの間隔の中央値を前記作業時間とする。 In a sixth aspect of the disclosure, in the fifth aspect, the working time calculation unit sets the median value of the intervals between the detected peaks as the working time.
 開示の第7態様は、作業特徴量表示方法であって、コンピュータが、作業者が予め定めた一連の作業を行ったときの前記作業者の部位毎の特徴量を取得し、前記作業者の各部位の中から選択された選択部位の特徴量が表示部に表示されるように制御することを含む処理を実行する。 A seventh aspect of the disclosure is a work feature amount display method, in which a computer acquires feature amounts for each part of the worker when the worker performs a predetermined series of tasks, and A process including controlling the feature amount of the selected part selected from each part to be displayed on the display unit is executed.
 An eighth aspect of the disclosure is a work feature amount display program that causes a computer to execute processing that includes acquiring feature amounts for each part of a worker when the worker performs a predetermined series of tasks, and performing control so that a feature amount of a selected part chosen from among the parts of the worker is displayed on a display unit.
 According to the disclosed technology, worker feature amounts suitable for motion analysis of the work can be selected and displayed.
FIG. 1 is a configuration diagram of a work feature amount display system.
FIG. 2 is a configuration diagram showing the hardware configuration of a work feature amount display device.
FIG. 3 is a functional block diagram of the work feature amount display device.
FIG. 4 is a diagram showing an example of a display screen.
FIG. 5 is a diagram showing an example of a display screen.
FIG. 6 is a diagram showing an example of a display screen.
FIG. 7 is a diagram for explaining estimation of a starting point.
FIG. 8 is a flowchart of work feature amount display processing.
 An example of an embodiment of the present disclosure will be described below with reference to the drawings. In the drawings, the same or equivalent components and parts are given the same reference numerals. The dimensional ratios in the drawings may be exaggerated for convenience of explanation and may differ from the actual ratios.
 FIG. 1 shows the configuration of a work feature amount display system 10. The work feature amount display system 10 includes a work feature amount display device 20 and a camera 30.
 The work feature amount display device 20 is a device that displays motion feature amounts calculated based on moving images captured by the camera 30.
 As an example, a worker W takes out a work object M placed on a workbench TB and performs a predetermined series of tasks in a work space S. The series of tasks performed by the worker W is work in which one work cycle includes a variety of motions, such as grasping, carrying, assembling, and inspecting a part, tightening screws with a screwdriver, and attaching labels.
 The camera 30 is an imaging device capable of capturing, for example, RGB color moving images. The camera 30 is installed at a position from which the movement of the worker W and the entire workbench TB can be easily recognized.
 Although this embodiment describes a case in which there is one camera 30, a configuration with a plurality of cameras 30 may also be used.
 FIG. 2 is a block diagram showing the hardware configuration of the work feature amount display device 20 according to this embodiment. As shown in FIG. 2, the work feature amount display device 20 includes a controller 21. The controller 21 is configured as a device including a general-purpose computer.
 As shown in FIG. 2, the controller 21 includes a CPU (Central Processing Unit) 21A, a ROM (Read Only Memory) 21B, a RAM (Random Access Memory) 21C, and an input/output interface (I/O) 21D. The CPU 21A, ROM 21B, RAM 21C, and I/O 21D are connected to one another via a bus 21E. The bus 21E includes a control bus, an address bus, and a data bus.
 An operation unit 22, a display unit 23, a communication unit 24, and a storage unit 25 are connected to the I/O 21D.
 The operation unit 22 includes, for example, a mouse and a keyboard.
 The display unit 23 is composed of, for example, a liquid crystal display.
 The communication unit 24 is an interface for performing data communication with external devices such as the camera 30.
 The storage unit 25 is composed of a non-volatile external storage device such as a hard disk. As shown in FIG. 2, the storage unit 25 stores a work feature amount display program 25A, a motion feature amount database 25B, and the like.
 The CPU 21A is an example of a computer. The term "computer" here refers to a processor in a broad sense and includes general-purpose processors (for example, a CPU) and dedicated processors (for example, a GPU: Graphics Processing Unit, an ASIC: Application Specific Integrated Circuit, an FPGA: Field Programmable Gate Array, a programmable logic device, and the like).
 Note that the work feature amount display program 25A may be stored on a non-volatile, non-transitory recording medium, or distributed via a network, and installed on the work feature amount display device 20 as appropriate.
 Examples of the non-volatile, non-transitory recording medium include a CD-ROM (Compact Disc Read Only Memory), a magneto-optical disk, an HDD (hard disk drive), a DVD-ROM (Digital Versatile Disc Read Only Memory), a flash memory, and a memory card.
 FIG. 3 is a block diagram showing the functional configuration of the CPU 21A of the work feature amount display device 20. As shown in FIG. 3, the CPU 21A functionally includes a motion feature amount acquisition unit 40, a control unit 41, a reduction unit 42, a worker reception unit 43, a selected part reception unit 44, a standard work time acquisition unit 45, a peak detection unit 46, and a work time calculation unit 47.
 The CPU 21A functions as each of the functional units shown in FIG. 3 by reading and executing the work feature amount display program 25A stored in the storage unit 25.
 The motion feature amount acquisition unit 40 acquires the motion feature amounts of each part of the worker W when the worker W performed a predetermined series of tasks, by reading them from the motion feature amount database 25B stored in advance in the storage unit 25. The motion feature amount database 25B stores the motion feature amounts of each part for each worker.
 The motion feature amounts can be calculated as a motion vector sequence based on, for example, a moving image of the worker W performing the predetermined series of tasks captured by the camera 30. Instead of the camera 30, a motion sensor or the like may be used to acquire the motion feature amounts of the worker W. Here, the parts of the worker W are parts including the joints that make up the skeleton of the worker W, such as the face, neck, shoulders, elbows, wrists, hips, knees, and ankles of the worker W.
 In calculating the motion vector sequence, in order to avoid being influenced by the background, the clothing of the worker W, and the like, the posture of the worker W is estimated based on the moving image, and the estimated posture is converted into a skeletal sequence.
 As a method of estimating the posture of the worker W and converting the estimated posture into a skeletal sequence, a known method called OpenPose, described in Reference 1 below, can be used. The skeletal sequence is time-series data that includes the coordinates of feature points, such as body parts and joints of the worker W, and labels representing the body parts of the feature points. For example, the feature points include facial points such as the eyes and nose of the worker W, and joints such as the neck, shoulders, elbows, wrists, hips, knees, and ankles.
 OpenPose uses a trained model obtained by training a learning model, which takes a moving image as input and outputs a skeletal sequence, on a large number of moving images as training data. As a learning method for obtaining such a trained model, a known method such as CNN (Convolutional Neural Networks) is used.
 (Reference 1) "OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields", Zhe Cao, Gines Hidalgo, Tomas Simon, Shih-En Wei, and Yaser Sheikh, IEEE Transactions on Pattern Analysis and Machine Intelligence.
 Here, in factory work, workers with various body types perform the tasks, so differences in body type have a large influence. To avoid being affected by differences in body type, this embodiment uses a method called MotionRetargeting, described in Reference 2 below, to convert the skeletal sequence obtained from the moving image into a motion vector sequence representing the motion feature amounts.
 (Reference 2) K. Aberman, R. Wu, D. Lischinski, B. Chen, and D. Cohen-Or, "Learning character-agnostic motion for motion retargeting in 2d," TOG, vol. 38, no. 4, p. 75, 2019.
 In MotionRetargeting, a skeletal sequence is input to an encoder that outputs feature vectors for three components: motion, body type, and camera viewpoint. In this embodiment, as an example, only the feature vector of the motion component is used in order to reduce the influence of body type and camera viewpoint.
 Note that, in order to remove the influence of noise in the posture estimation, three preprocessing steps, namely time-series interpolation, time-series smoothing, and lower-body interpolation, may be applied to the skeletal sequence before conversion into the motion vector sequence.
 In the time-series interpolation, if there is a joint point for which posture estimation has failed, the joint point from the previous frame is copied. In the time-series smoothing, the sequence data is smoothed with a Gaussian filter in order to remove posture-estimation noise. OpenPose, which is used in this embodiment, estimates the posture not only of a person's upper body but also of the lower body. In factory work, workers often work at a desk-top workbench, so the lower body is often occluded by the desk and the joint points of the lower body are frequently missing. An encoder that extracts motion feature amounts, such as MotionRetargeting, takes the skeletal sequence of the person's whole body as input, so if the joint points of the lower body are missing, it may not be able to properly output the feature vector of the motion component. For this reason, lower-body interpolation may be performed. Specifically, as the lower-body interpolation, the joint points of at least one of both knees and both feet may be interpolated with a length proportional to the length of the person's torso.
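 The interpolation and smoothing steps above can be sketched as follows. This is a minimal illustration, not the embodiment's implementation: the NaN convention for failed joints, the array layout, and the filter width are assumptions, and the lower-body interpolation step is omitted for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def preprocess_skeleton(seq, sigma=2.0):
    """Preprocess a skeletal sequence of shape (frames, joints, 2).

    A NaN coordinate marks a joint whose posture estimation failed
    in that frame (an assumed convention for this sketch).
    """
    seq = seq.copy()
    # Time-series interpolation: copy the joint point from the
    # previous frame when estimation failed.
    for t in range(1, seq.shape[0]):
        failed = np.isnan(seq[t]).any(axis=-1)
        seq[t, failed] = seq[t - 1, failed]
    # Time-series smoothing: Gaussian filter along the time axis
    # to suppress posture-estimation noise.
    return gaussian_filter1d(seq, sigma=sigma, axis=0)

# Example: the single joint fails in frame 1 and is filled from frame 0.
demo = np.array([[[0.0, 0.0]], [[np.nan, np.nan]], [[2.0, 2.0]]])
print(np.isnan(preprocess_skeleton(demo)).any())  # False
```

A joint missing in the very first frame cannot be filled by this scheme; a production version would also need a forward fill from the first valid frame.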
 The motion feature amounts of each part calculated in this way are stored in advance in the storage unit 25 as the motion feature amount database 25B for each worker.
 The control unit 41 performs control so that the motion feature amount of a selected part chosen from among the parts of the worker W is displayed on the display unit 23.
 The reduction unit 42 reduces the motion feature amounts acquired by the motion feature amount acquisition unit 40 into one-dimensional series data. In this case, the control unit 41 performs control so that the reduced one-dimensional series data is displayed on the display unit 23. Various known methods can be used for the reduction.
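 As one example of such a known reduction method, each frame's feature vector can be projected onto the first principal component of the sequence. The choice of PCA here is an illustrative assumption, not the method prescribed by the embodiment.

```python
import numpy as np

def reduce_to_1d(features):
    """Reduce a (frames, dims) motion feature sequence to
    one-dimensional series data by projecting each frame onto
    the first principal component of the sequence."""
    x = features - features.mean(axis=0)           # center per dimension
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[0]                               # shape: (frames,)

demo = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
print(reduce_to_1d(demo).shape)  # (4,)
```

Any reduction that preserves the periodic structure of the motion would serve the same purpose; PCA is simply a common, parameter-free choice.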
 The worker reception unit 43 receives, from among a plurality of workers, the worker whose motion feature amounts are to be displayed.
 The selected part reception unit 44 receives the selected part chosen from among the parts of the worker.
 FIG. 4 shows an example of a screen on which motion feature amounts are displayed on the display unit 23. The display screen D shown in FIG. 4 includes a pull-down menu M1 for receiving, from among a plurality of workers, the worker whose motion feature amounts are to be displayed; a pull-down menu M2 for receiving the selected part, from among the parts of the worker, whose motion feature amounts are to be displayed; a display area R1 for checking the part numbers assigned to the parts of the worker; and a display area R2 for displaying the reduced one-dimensional series data as a graph.
 By opening the pull-down menu M1, the user can select a desired worker from among the plurality of workers whose motion feature amounts are stored in the motion feature amount database 25B. The example in FIG. 4 shows a state in which the worker with worker ID "A_02" is selected.
 By opening the pull-down menu M2, the user can select a desired part from among the parts of the worker selected in the pull-down menu M1. The display area R1 displays the part numbers assigned in advance to the parts of the worker. The user refers to the display area R1 and selects a desired part number from the pull-down menu M2. The example in FIG. 4 shows a state in which the set of part numbers [5, 6, 7] is selected, that is, a state in which the left arm is selected. Note that a set of part numbers may consist of a plurality of part numbers or of a single part number. Although a plurality of predetermined sets of part numbers are displayed in the pull-down menu M2, the user may also be allowed to directly input arbitrary part numbers.
 When a worker is selected from the pull-down menu M1 and a part is selected from the pull-down menu M2, series data obtained by reducing the motion feature amounts of the selected part of the selected worker into one dimension is displayed as a graph G in the display area R2. The horizontal axis of the graph G is time, and the vertical axis is the one-dimensional motion feature amount.
 This allows the user to check whether the periodicity of the motion feature amount of the selected part is high or low.
 For example, in the example of FIG. 5, [2, 3, 4, 5, 6, 7] is selected as the part numbers, that is, the right arm and left arm are selected. Looking at the graph G in the display area R2, it can be confirmed that the variation in the intervals between the peaks of the motion feature amount is relatively small. The user can therefore understand that the motion feature amounts of the parts represented by [2, 3, 4, 5, 6, 7] are suitable for analyzing the work of worker "A_02".
 On the other hand, in the example of FIG. 6, [0, 1, 2, 3, 4, 5, 6, 7, 8] is selected as the part numbers. Looking at the graph in the display area R2, it can be confirmed that the variation in the intervals between the peaks of the motion feature amount is relatively large. The user can therefore understand that the motion feature amounts of the parts represented by [0, 1, 2, 3, 4, 5, 6, 7, 8] are not suitable for analyzing the work of worker "A_02".
 Instead of the user selecting a part from the pull-down menu M2, when an automatic button B is pressed as shown in FIG. 7, a part suitable for work analysis may be selected automatically and the motion feature amount of the selected part may be displayed in the display area R2.
 In this case, the standard work time acquisition unit 45 acquires the standard work time for the series of tasks by, for example, reading it from the storage unit 25. The standard work time can be obtained, for example, by measuring the work time when a standard worker performs the series of tasks, and is stored in the storage unit 25 in advance. Alternatively, the standard work time may be acquired by having the user enter it on the display screen D.
 The peak detection unit 46 then detects, for each part, the positions of the peaks in the motion feature amount. Various known methods can be used for peak detection. FIG. 7 shows an example of detected peak positions. In the example of FIG. 7, peaks P1 to P6 are detected.
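 As one of those known methods, prominence-based peak detection as provided by SciPy's `find_peaks` can be applied to the one-dimensional series. The synthetic series and the prominence threshold below are assumptions chosen for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

# A synthetic periodic series standing in for a reduced 1-D motion
# feature: three cycles of a sine wave plus mild noise.
t = np.linspace(0, 6 * np.pi, 600)
series = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

# Prominence filtering discards the small local maxima caused by noise
# while keeping one peak per work cycle.
peaks, _ = find_peaks(series, prominence=0.5)
print(len(peaks))  # one peak per sine cycle: 3
```

The prominence threshold trades off missed cycles against spurious noise peaks; in practice it would need tuning per feature series.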
 The work time calculation unit 47 calculates, for each part, the work time of the series of tasks based on the intervals between the detected peaks. At this time, the work time calculation unit 47 may take the median of the intervals between the detected peaks as the work time. For example, in the example of FIG. 7, if the appearance times of the peaks P1 to P6 are [100, 950, 1300, 1800, 2650, 3500], the intervals between adjacent peaks are [850, 350, 500, 850, 850]. In this case, the median of the intervals between adjacent peaks is 850, which is the work time for one cycle of the series of tasks. By using the median of the intervals between adjacent peaks as the work time in this way, the influence of noise peaks such as the peak P3 shown in FIG. 7 can be eliminated and the work time can be calculated with high accuracy.
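 The median-of-intervals calculation can be reproduced directly with the numbers from the FIG. 7 example:

```python
import numpy as np

def work_time_from_peaks(peak_times):
    """One work cycle = median of the intervals between adjacent
    detected peaks; the median suppresses spurious noise peaks."""
    intervals = np.diff(np.asarray(peak_times))
    return float(np.median(intervals))

# Appearance times of peaks P1 to P6 in the FIG. 7 example:
times = [100, 950, 1300, 1800, 2650, 3500]
print(np.diff(times).tolist())      # [850, 350, 500, 850, 850]
print(work_time_from_peaks(times))  # 850.0
```

The short intervals 350 and 500 come from the noise peak P3, and the median discards them without any explicit outlier handling.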
 The control unit 41 then performs control so that the motion feature amount of the selected part for which the difference between the standard work time acquired by the standard work time acquisition unit 45 and the work time calculated by the work time calculation unit 47 is smallest is displayed in the display area R2. At this time, highlighting is performed so that the user can grasp which part has been selected, for example by displaying the part numbers of the selected part in the pull-down menu M2, or by displaying the part numbers of the selected part in the display area R1 in a color different from that of the other part numbers. This eliminates the need for the user to select various parts from the pull-down menu M2, display the graph G, and judge whether the motion feature amounts are suitable for work analysis. The user can therefore quickly grasp which part is suitable for work analysis.
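 The automatic selection itself reduces to an argmin over the candidate parts; the per-part work times and labels below are hypothetical values for illustration only.

```python
def select_part(standard_time, part_work_times):
    """Return the part whose calculated work time is closest to the
    standard work time (smallest absolute difference)."""
    return min(part_work_times,
               key=lambda part: abs(part_work_times[part] - standard_time))

# Hypothetical per-part work times against a standard time of 850:
times = {"[5,6,7]": 840.0, "[2,3,4,5,6,7]": 855.0, "[0..8]": 1200.0}
print(select_part(850.0, times))  # [2,3,4,5,6,7]
```

Ties would here resolve to the first part in iteration order; a real implementation might prefer a deterministic tie-breaking rule.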
 Next, the work feature amount display processing executed by the CPU 21A of the work feature amount display device 20 will be described with reference to the flowchart shown in FIG. 8. The work feature amount display processing shown in FIG. 8 is executed repeatedly.
 In step S100, the CPU 21A determines whether a worker has been selected from the pull-down menu M1. If a worker has been selected, the processing proceeds to step S101; if not, the processing waits until a worker is selected.
 In step S101, the CPU 21A determines whether a part has been selected from the pull-down menu M2. If a part has been selected, the processing proceeds to step S102; if not, the processing proceeds to step S105.
 In step S102, the CPU 21A acquires the motion feature amounts of the part selected in step S101 for the worker selected in step S100 by reading them from the motion feature amount database 25B.
 In step S103, the CPU 21A reduces the motion feature amounts acquired in step S102 to a one-dimensional motion feature amount.
 In step S104, the CPU 21A displays the motion feature amount reduced to one dimension in step S103 as the graph G in the display area R2 of the display screen D.
 In this way, by selecting a worker and a part, the user can display the motion feature amount in the display area R2, and can judge whether the part is suitable for work analysis by checking the periodicity of the motion feature amount of the selected part.
 In step S105, the CPU 21A determines whether the automatic button B has been pressed. If the automatic button B has been pressed, the processing proceeds to step S106; if not, the processing proceeds to step S101.
 In step S106, the CPU 21A acquires the standard work time for the series of tasks by reading it from the storage unit 25.
 In step S107, the CPU 21A detects, for each part, the positions of the peaks in the motion feature amount. Specifically, for each part of the worker selected in step S100, the CPU 21A reads the motion feature amounts from the motion feature amount database 25B, reduces them to a one-dimensional motion feature amount, and detects the peak positions based on the reduced motion feature amount.
 In step S108, the CPU 21A calculates, for each part of the worker selected in step S100, the work time of the series of tasks based on the intervals between the peaks detected in step S107.
 In step S109, the CPU 21A compares the standard work time acquired in step S106 with the work time of each part calculated in step S108. The CPU 21A then displays, in the display area R2, the motion feature amount of the selected part for which the difference between the standard work time acquired in step S106 and the work time calculated in step S108 is smallest.
 As a result, the user does not have to select various parts, display their motion feature amounts, and check their periodicity, and can quickly grasp which part is suitable for work analysis.
 As described above, in this embodiment, control is performed so that, among the motion feature amounts of each part of the worker when the worker performed the predetermined series of tasks, the motion feature amount of a selected part chosen from among the parts of the worker is displayed on the display unit 23. This makes it possible to display worker motion feature amounts suitable for motion analysis of the work.
 The above embodiment is merely an illustrative example of a configuration of the present disclosure. The present disclosure is not limited to the specific forms described above, and various modifications are possible within the scope of its technical idea.
 For example, the above embodiment describes a case in which motion feature amounts calculated based on a moving image of the worker at work are reduced to a one-dimensional motion feature amount and displayed, but the disclosure is not limited to this. For example, sound recorded by a microphone installed at the place where the worker is working may be used as the motion feature amount, because it may be possible to grasp the periodicity of the work from, for example, the operating sounds of the machines operated by the worker. As described above, acceleration data detected by an acceleration sensor such as a motion sensor may also be used as the motion feature amount. A motion feature amount obtained by combining these motion feature amounts may also be used.
 The above embodiment describes a case in which control is performed so that the motion feature amount of a selected part chosen from among the parts of the worker is displayed on the display unit 23, but feature amounts other than motion feature amounts may also be displayed on the display unit 23. For example, other feature amounts such as the worker's skeletal coordinate sequence or a velocity vector sequence of the skeletal coordinates may be displayed on the display unit 23. In this case, the user may be allowed to select which feature amount to display on the display unit 23 from among the motion feature amount, the skeletal coordinate sequence, and the velocity vector sequence of the skeletal coordinates.
 The work feature amount display processing that the CPU executes in the above embodiment by reading software (a program) may also be executed by various processors other than a CPU. Examples of such processors include a PLD (Programmable Logic Device) whose circuit configuration can be changed after manufacture, such as an FPGA (Field-Programmable Gate Array), and a dedicated electric circuit, such as an ASIC (Application Specific Integrated Circuit), which is a processor having a circuit configuration designed exclusively for executing specific processing. The work feature amount display processing may be executed by one of these various processors, or by a combination of two or more processors of the same or different types (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA). More specifically, the hardware structure of these various processors is an electric circuit combining circuit elements such as semiconductor elements.
The disclosure of Japanese Patent Application No. 2022-124302 is incorporated herein by reference in its entirety. All documents, patent applications, and technical standards mentioned in this specification are incorporated herein by reference to the same extent as if each individual document, patent application, or technical standard were specifically and individually indicated to be incorporated by reference.

Claims (8)

  1.  A work feature amount display device comprising:
      a feature amount acquisition unit that acquires a feature amount for each part of a worker when the worker performs a predetermined series of tasks; and
      a control unit that performs control so that the feature amount of a selected part, selected from among the parts of the worker, is displayed on a display unit.
  2.  The work feature amount display device according to claim 1, further comprising a reduction unit that reduces the feature amount to one-dimensional series data,
      wherein the control unit performs control so that the series data is displayed on the display unit.
  3.  The work feature amount display device according to claim 1, further comprising a worker reception unit that accepts a designation of the worker.
  4.  The work feature amount display device according to any one of claims 1 to 3, further comprising a selected part reception unit that accepts a designation of the selected part.
  5.  The work feature amount display device according to any one of claims 1 to 3, further comprising:
      a standard work time acquisition unit that acquires a standard work time for the series of tasks;
      a peak detection unit that detects, for each part, the positions of peaks of the feature amount; and
      a work time calculation unit that calculates, for each part, a work time of the series of tasks based on the intervals between the detected peaks,
      wherein the control unit performs control so that the feature amount of the selected part for which the difference between the standard work time and the work time is smallest is displayed on the display unit.
  6.  The work feature amount display device according to claim 5, wherein the work time calculation unit sets the median of the intervals between the detected peaks as the work time.
  7.  A work feature amount display method in which a computer executes processing comprising:
      acquiring a feature amount for each part of a worker when the worker performs a predetermined series of tasks; and
      performing control so that the feature amount of a selected part, selected from among the parts of the worker, is displayed on a display unit.
  8.  A work feature amount display program that causes a computer to execute processing comprising:
      acquiring a feature amount for each part of a worker when the worker performs a predetermined series of tasks; and
      performing control so that the feature amount of a selected part, selected from among the parts of the worker, is displayed on a display unit.
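The work-time calculation described in claims 5 and 6 (detecting peaks of a per-part feature series and taking the median of the intervals between successive peaks) can be sketched as follows. This is an illustrative reading of the claims, not the patented implementation; the peak-detection criterion (strict local maxima) and the sampling rate are assumptions made for the example:

```python
import statistics

def detect_peak_indices(series):
    """Return indices of local maxima (strictly greater than both neighbors).

    A simple criterion assumed for illustration; the claims do not
    specify how peaks are detected.
    """
    return [i for i in range(1, len(series) - 1)
            if series[i] > series[i - 1] and series[i] > series[i + 1]]

def work_time_from_peaks(series, fps=30.0):
    """Estimate the cycle time of a repeated task as the median interval
    between successive feature-amount peaks, converted to seconds."""
    peaks = detect_peak_indices(series)
    if len(peaks) < 2:
        return None  # not enough peaks to form an interval
    intervals = [b - a for a, b in zip(peaks, peaks[1:])]
    return statistics.median(intervals) / fps

# A toy feature series with peaks every 4 samples (at indices 2, 6, 10).
series = [0, 1, 3, 1, 0, 1, 3, 1, 0, 1, 3, 1, 0]
print(work_time_from_peaks(series, fps=2.0))  # prints 2.0
```

Under claim 5, this per-part work time would be compared against the standard work time, and the part whose work time differs least from the standard would be the one whose feature amount is displayed.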
PCT/JP2023/027284 2022-08-03 2023-07-25 Work feature amount display device, work feature amount display method, and work feature amount display program WO2024029411A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022124302A JP2024021467A (en) 2022-08-03 2022-08-03 Work feature amount display device, work feature amount display method, and work feature amount display program
JP2022-124302 2022-08-03

Publications (1)

Publication Number Publication Date
WO2024029411A1 true WO2024029411A1 (en) 2024-02-08

Family

ID=89849012

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/027284 WO2024029411A1 (en) 2022-08-03 2023-07-25 Work feature amount display device, work feature amount display method, and work feature amount display program

Country Status (2)

Country Link
JP (1) JP2024021467A (en)
WO (1) WO2024029411A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019125023A (en) * 2018-01-12 2019-07-25 オムロン株式会社 Motion analysis system, motion analysis device, motion analysis method, and motion analysis program
JP2020013341A (en) * 2018-07-18 2020-01-23 コニカミノルタ株式会社 Work process management system, work process management method, and work process management program
WO2021131552A1 (en) * 2019-12-27 2021-07-01 パナソニックIpマネジメント株式会社 Operation analyzing device and operation analyzing method
JP2021174059A (en) * 2020-04-20 2021-11-01 オムロン株式会社 Estimation apparatus, learning apparatus, teacher data generation apparatus, estimation method, leaning method, teacher data generation method, and program


Also Published As

Publication number Publication date
JP2024021467A (en) 2024-02-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23849966

Country of ref document: EP

Kind code of ref document: A1