WO2020144727A1 - Motion identification device, motion identification method, and motion identification program - Google Patents

Motion identification device, motion identification method, and motion identification program

Info

Publication number
WO2020144727A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion
information
target
image data
unit
Prior art date
Application number
PCT/JP2019/000056
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
勝大 草野
尚吾 清水
奥村 誠司
Original Assignee
Mitsubishi Electric Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to JP2019524483A (JP6777819B1)
Priority to PCT/JP2019/000056 (WO2020144727A1)
Priority to DE112019006583.1T (DE112019006583T5)
Priority to CN201980087653.9A (CN113302653A)
Priority to TW108120437A (TW202026951A)
Publication of WO2020144727A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Definitions

  • The present invention relates to a technique for identifying the motion content of a target person from image data in which the target person is captured.
  • Patent Document 1 describes extracting feature amounts of a person's motion and automatically performing motion analysis using a camera and a three-dimensional sensor attached to the person's head.
  • In Patent Document 1, however, a camera must be attached to the person's head.
  • An object of the present invention is to enable processing such as cycle time measurement and work content analysis without attaching extra equipment to the worker's body.
  • To this end, the motion identification device includes: an image acquisition unit that acquires image data of the target person; a skeleton extraction unit that extracts, from the image data acquired by the image acquisition unit, target information, which is skeleton information representing the posture of the target person; and a motion identification unit that identifies the motion content indicated by motion information, which is skeleton information similar to the target information extracted by the skeleton extraction unit, as the motion content performed by the target person.
  • The device thus extracts target information, which is skeleton information representing the posture of the target person, and identifies the motion content indicated by motion information, which is skeleton information similar to the target information. Therefore, processing such as cycle time measurement and work content analysis can be performed without attaching extra equipment to the worker's body.
  • FIG. 1 is a configuration diagram of the motion identification device 10 according to the first embodiment.
  • FIG. 2 is a flowchart of the registration processing according to the first embodiment.
  • FIG. 3 is an explanatory diagram of image data according to the first embodiment.
  • FIG. 4 is an explanatory diagram of the skeleton information 43 according to the first embodiment.
  • FIG. 5 is an explanatory diagram of the registration processing according to the first embodiment.
  • FIG. 6 is an explanatory diagram of the motion information table 31 according to the first embodiment.
  • FIG. 7 is a flowchart of the identification processing according to the first embodiment.
  • FIG. 8 is an explanatory diagram of the identification processing according to the first embodiment.
  • FIG. 9 is a configuration diagram of the motion identification device 10 according to Modification 1.
  • FIG. 10 is a configuration diagram of the motion identification device 10 according to Modification 3.
  • FIG. 11 is a configuration diagram of the motion identification device 10 according to the second embodiment.
  • FIG. 12 is a flowchart of the learning processing according to the second embodiment.
  • FIG. 13 is a flowchart of the identification processing according to the second embodiment.
  • FIG. 14 is a configuration diagram of the motion identification device 10 according to Modification 5.
  • The motion identification device 10 is a computer.
  • The motion identification device 10 includes hardware such as a processor 11, a memory 12, a storage 13, and a communication interface 14.
  • The processor 11 is connected to the other hardware via signal lines and controls the other hardware.
  • The processor 11 is an IC (Integrated Circuit) that performs processing.
  • Specific examples of the processor 11 are a CPU (Central Processing Unit), a DSP (Digital Signal Processor), and a GPU (Graphics Processing Unit).
  • The memory 12 is a storage device that temporarily stores data.
  • Specific examples of the memory 12 are an SRAM (Static Random Access Memory) and a DRAM (Dynamic Random Access Memory).
  • The storage 13 is a storage device that stores data.
  • A specific example of the storage 13 is an HDD (Hard Disk Drive).
  • The storage 13 may instead be a portable recording medium such as an SD (registered trademark, Secure Digital) memory card, CF (CompactFlash, registered trademark), NAND flash, flexible disk, optical disk, compact disc, Blu-ray (registered trademark) disc, or DVD (Digital Versatile Disk).
  • The communication interface 14 is an interface for communicating with external devices.
  • Specific examples of the communication interface 14 are Ethernet (registered trademark), USB (Universal Serial Bus), and HDMI (registered trademark, High-Definition Multimedia Interface) ports.
  • A separate communication interface 14 may be provided for each type of data to be communicated. For example, an HDMI port may be provided to communicate the image data described below, and a USB port may be provided to communicate the label information described below.
  • The motion identification device 10 includes an image acquisition unit 21, a skeleton extraction unit 22, a motion information registration unit 23, a motion identification unit 24, and an output unit 25 as functional components.
  • The functions of the functional components of the motion identification device 10 are realized by software. The storage 13 stores programs that realize these functions; the programs are read into the memory 12 and executed by the processor 11, whereby the functions of the functional components are realized.
  • The storage 13 also stores a motion information table 31.
  • The motion identification device 10 may include both a CPU and a GPU as the processor 11. In that case, the skeleton extraction unit 22, which performs image processing, may be realized by the GPU, and the remaining image acquisition unit 21, motion information registration unit 23, motion identification unit 24, and output unit 25 may be realized by the CPU.
  • The operation of the motion identification device 10 according to the first embodiment will be described with reference to FIGS. 2 to 8.
  • The operation of the motion identification device 10 according to the first embodiment corresponds to the motion identification method according to the first embodiment, and to the processing of the motion identification program according to the first embodiment.
  • The operation of the motion identification device 10 according to the first embodiment includes registration processing and identification processing.
  • Step S11: Image acquisition processing
  • The image acquisition unit 21 acquires, via the communication interface 14, one or more sets of image data, in which a person 42 performing a target motion has been captured by an imaging device 41, and label information indicating that target motion.
  • The image data is acquired by the imaging device 41 capturing the whole body of the person 42 performing the target motion from the front.
  • The image acquisition unit 21 writes the acquired sets of image data and label information in the memory 12.
  • Step S12: Skeleton extraction processing
  • The skeleton extraction unit 22 reads the image data acquired in step S11 from the memory 12.
  • The skeleton extraction unit 22 extracts skeleton information 43 representing the posture of the person 42 from the image data as motion information.
  • The skeleton information 43 indicates the coordinates of a plurality of joints, such as the neck and shoulders of the person 42, or the relative positional relationship of those joints (one possible in-code representation is sketched below).
  • The skeleton extraction unit 22 writes the extracted motion information in the memory 12.
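  • As an illustration only, one way the skeleton information 43 could be laid out in code is as a mapping from joint names to image coordinates. The joint names and the 2D-coordinate layout in this minimal Python sketch are assumptions, not part of the disclosure:

    from dataclasses import dataclass

    @dataclass
    class SkeletonInfo:
        # Skeleton information 43: joint name -> (x, y) image coordinates.
        # The embodiment may equally store relative joint positions instead.
        joints: dict[str, tuple[float, float]]

    # Hypothetical example with three joints.
    example = SkeletonInfo(joints={
        "neck": (320.0, 180.0),
        "left_shoulder": (290.0, 205.0),
        "right_shoulder": (350.0, 205.0),
    })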
  • Step S13: Motion information registration processing
  • The motion information registration unit 23 reads from the memory 12 the motion information extracted in step S12 and the label information belonging to the same set as the image data from which that motion information was extracted.
  • The motion information registration unit 23 writes the read motion information and label information in the motion information table 31 in association with each other.
  • Step S14: End determination processing
  • The skeleton extraction unit 22 determines whether all the sets acquired in step S11 have been processed.
  • If all the sets have been processed, the skeleton extraction unit 22 ends the registration processing. If an unprocessed set remains, the skeleton extraction unit 22 returns the process to step S12 and executes the processing for the next set.
  • As a result, sets of motion information and label information are accumulated in the motion information table 31.
  • For the image data at each time constituting video data of a person who has performed a series of work, the image acquisition unit 21 acquires a set of the image data at that time and label information indicating the motion of the person shown by that image data. The skeleton extraction unit 22 then extracts motion information from each image data to be processed, and the motion information registration unit 23 writes the motion information and the label information of the same set in the motion information table 31 in association with each other. As a result, the associated motion information and label information are accumulated in the motion information table 31 for the motion at each time in the series of work.
  • The image acquisition unit 21 may likewise acquire, for the image data at each time constituting video data of a person who has performed non-routine work not normally performed in the series of work, a set of the image data at that time and label information indicating the motion of the person shown by that image data. As a result, the associated motion information and label information at each time are also accumulated in the motion information table 31 for the non-routine work. The overall registration loop is sketched below.
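  • As a rough sketch only, not the claimed implementation, the registration processing of steps S11 to S14 amounts to the following loop. Here extract_skeleton is a hypothetical placeholder for the skeleton extraction unit 22, and the motion information table 31 is modeled as a plain list:

    def register(sets, extract_skeleton):
        # sets: iterable of (image_data, label_info) pairs from step S11.
        motion_info_table = []                                   # models table 31
        for image_data, label_info in sets:                      # step S14 loop
            motion_info = extract_skeleton(image_data)           # step S12
            motion_info_table.append((motion_info, label_info))  # step S13
        return motion_info_table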
  • Step S21: Image acquisition processing
  • The image acquisition unit 21 acquires, via the communication interface 14, one or more pieces of image data in which the target person has been captured.
  • The image data acquired in step S21 is acquired by the imaging device 41 capturing the whole body of the target person from the front.
  • The image acquisition unit 21 writes the acquired image data in the memory 12.
  • Step S22: Skeleton extraction processing
  • The skeleton extraction unit 22 reads the image data acquired in step S21 from the memory 12.
  • The skeleton extraction unit 22 extracts skeleton information 43 representing the posture of the target person from the image data as target information.
  • The skeleton extraction unit 22 writes the extracted target information in the memory 12.
  • Step S23: Motion identification processing
  • The motion identification unit 24 identifies the motion content indicated by motion information, which is skeleton information similar to the target information extracted in step S22, as the motion content performed by the target person. Specifically, the motion identification unit 24 searches the motion information table 31 for motion information similar to the target information. When the skeleton information 43 indicates the coordinates of a plurality of joints, "similar" means that the Euclidean distance between the coordinates of the same joint in the target information and in the motion information is short. When the skeleton information 43 indicates the relative positional relationship of a plurality of joints, "similar" means that the Euclidean distances between joints indicated by the target information are close to the corresponding Euclidean distances between joints indicated by the motion information. The motion identification unit 24 then identifies the motion content indicated by the label information associated with the motion information hit in the search as the motion content being performed by the target person.
  • To do so, the motion identification unit 24 calculates the degree of similarity to the target information for all the motion information accumulated in the motion information table 31 and treats the motion information with the highest degree of similarity as the motion information hit in the search. If no motion information has a degree of similarity higher than a threshold value, the motion identification unit 24 may determine that no motion information was hit in the search.
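  • A minimal sketch of this similarity search, assuming skeleton information is a dict from joint name to (x, y) coordinates with at least one joint in common, and using the negative mean Euclidean distance over shared joints as the degree of similarity (the threshold value is an arbitrary illustration):

    import math

    def similarity(target, motion):
        # Higher is more similar: negative mean Euclidean distance between
        # the coordinates of the same joints. Assumes shared joints exist.
        shared = target.keys() & motion.keys()
        mean_dist = sum(math.dist(target[j], motion[j]) for j in shared) / len(shared)
        return -mean_dist

    def identify(target, motion_info_table, threshold=-50.0):
        # Step S23: return the label information of the most similar motion
        # information, or None if nothing exceeds the threshold
        # (threshold=-50.0 means a mean distance under 50 pixels is required).
        best_info, best_label = max(motion_info_table,
                                    key=lambda e: similarity(target, e[0]))
        if similarity(target, best_info) <= threshold:
            return None  # no motion information hit in the search
        return best_label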
  • Weighting may be performed so that the difference in the Euclidean distance for a specific joint has a greater influence on the degree of similarity. When the skeleton information 43 indicates joint coordinates, the difference in the Euclidean distance between the coordinates in the target information and the coordinates in the motion information for a specific joint may be weighted more heavily. When the skeleton information 43 indicates relative positional relationships, the difference in the Euclidean distance between specific joints may be weighted more heavily.
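  • Such weighting could be a small variation of the similarity sketch above (math is imported there); the choice of joints and the weight values below are purely illustrative:

    # Hypothetical per-joint weights; unlisted joints default to 1.0.
    JOINT_WEIGHTS = {"left_wrist": 3.0, "right_wrist": 3.0}

    def weighted_similarity(target, motion):
        # Distances at the weighted joints influence the result more.
        shared = target.keys() & motion.keys()
        total = sum(JOINT_WEIGHTS.get(j, 1.0) * math.dist(target[j], motion[j])
                    for j in shared)
        return -total / sum(JOINT_WEIGHTS.get(j, 1.0) for j in shared)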
  • Step S24: Output processing
  • The output unit 25 outputs the motion content identified in step S23 to a display device or the like connected via the communication interface 14.
  • The output unit 25 may output label information indicating the motion content. If no motion information was hit in the search, the output unit 25 outputs information indicating that the motion content could not be identified.
  • Step S25: End determination processing
  • The skeleton extraction unit 22 determines whether all the image data acquired in step S21 have been processed.
  • If all the image data have been processed, the skeleton extraction unit 22 ends the identification processing. If unprocessed image data remains, the skeleton extraction unit 22 returns the process to step S22 and executes the processing for the next image data.
  • In step S21, the image acquisition unit 21 may acquire the image data at each time constituting video data of a person who has performed a series of work. Then, in step S22, the skeleton extraction unit 22 extracts the target information from each image data to be processed, and in step S23, the motion identification unit 24 searches for motion information similar to the target information and identifies the motion content. This makes it possible to identify the motion content at each time in the series of work, and therefore to identify when the target work was started and when it was finished. Further, when the target person performs non-routine work during the series of work, it is possible to identify that the non-routine work was performed.
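  • Once a label has been identified for each frame in this way, start times, end times, and hence cycle times can be read off by grouping consecutive identical labels. A sketch, assuming frames are labeled in time order at a known frame rate:

    def segment_labels(labels_per_frame, fps=30.0):
        # Group consecutive identical labels into
        # (label, start_seconds, end_seconds) segments.
        segments = []
        start = 0
        for i in range(1, len(labels_per_frame) + 1):
            if i == len(labels_per_frame) or labels_per_frame[i] != labels_per_frame[start]:
                segments.append((labels_per_frame[start], start / fps, i / fps))
                start = i
        return segments

    # segment_labels(["pick", "pick", "place"], fps=1.0)
    # -> [("pick", 0.0, 2.0), ("place", 2.0, 3.0)]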
  • As described above, the motion identification device 10 according to the first embodiment extracts the target information, which is skeleton information representing the posture of the target person, from image data in which the target person is captured from the front, and identifies the motion content indicated by motion information, which is skeleton information similar to the target information, as the motion content performed by the target person. The motion identification device 10 according to the first embodiment can therefore analyze a series of motions by taking video data consisting of a plurality of image data as input and identifying the motion content of each image data. As a result, processing such as cycle time measurement and work content analysis can be performed without attaching extra equipment to the worker's body.
  • <Modification 1> The motion identification device 10 described above is a single device, but it may instead be a system including a plurality of devices, for example a registration device having the functions related to the registration processing and an identification device having the functions related to the identification processing.
  • In this case, the motion information table 31 may be stored in a storage device provided outside the registration device and the identification device, or in the storage of either the registration device or the identification device.
  • The hardware of the registration device and the identification device is omitted in FIG. 9. Like the motion identification device 10, the registration device and the identification device each include a processor, a memory, a storage, and a communication interface as hardware.
  • <Modification 2> In the first embodiment, data captured by the imaging device 41 is used as the image data. Alternatively, three-dimensional data obtained by a sensor such as a depth sensor may be used as the image data.
  • <Modification 3> In the first embodiment, each functional component is realized by software. As Modification 3, each functional component may instead be realized by hardware. The differences of Modification 3 from the first embodiment are described below.
  • When the functional components are realized by hardware, the motion identification device 10 includes an electronic circuit 15 instead of the processor 11, the memory 12, and the storage 13. The electronic circuit 15 is a dedicated circuit that realizes the functions of the functional components, the memory 12, and the storage 13.
  • The electronic circuit 15 is assumed to be a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, a logic IC, a GA (Gate Array), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array).
  • The functional components may be realized by one electronic circuit 15, or may be distributed over and realized by a plurality of electronic circuits 15.
  • <Modification 4> Some of the functional components may be realized by hardware, and the other functional components by software.
  • The processor 11, the memory 12, the storage 13, and the electronic circuit 15 are referred to as processing circuitry. That is, the functions of the functional components are realized by processing circuitry.
  • Embodiment 2 differs from the first embodiment in that a learning model 32 is generated based on the motion information and the label information, and the label information corresponding to the target information is identified by the learning model 32. The second embodiment describes these differences and omits the description of the common points.
  • The configuration of the motion identification device 10 according to the second embodiment will be described with reference to FIG. 11.
  • The motion identification device 10 differs from the motion identification device 10 shown in FIG. 1 in that a learning unit 26 is provided instead of the motion information registration unit 23, and in that the storage 13 stores a learning model 32 instead of the motion information table 31.
  • The operation of the motion identification device 10 according to the second embodiment will be described with reference to FIGS. 12 and 13.
  • The operation of the motion identification device 10 according to the second embodiment corresponds to the motion identification method according to the second embodiment, and to the processing of the motion identification program according to the second embodiment.
  • The operation of the motion identification device 10 according to the second embodiment includes learning processing and identification processing.
  • The processing from step S31 to step S32 is the same as the processing from step S11 to step S12 in FIG. 2, and the processing of step S34 is the same as the processing of step S14 in FIG. 2.
  • Step S33: Learning model generation processing
  • The learning unit 26 learns, as learning data, the plurality of sets of the motion information extracted in step S32 and the label information belonging to the same set as the image data from which that motion information was extracted. The learning unit 26 thereby generates a learning model 32 that, when skeleton information 43 is input, identifies the motion information similar to the input skeleton information 43 and outputs the label information corresponding to the identified motion information. An existing machine learning method or the like may be used to learn from the learning data.
  • The learning unit 26 writes the generated learning model 32 in the storage 13. When the learning model 32 has already been generated, the learning unit 26 updates it by giving the learning data to the already generated learning model 32.
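  • The embodiment leaves the learning method open. As one hedged example, the learning model 32 could be realized with an off-the-shelf nearest-neighbour classifier over flattened joint coordinates; the fixed joint order and the use of scikit-learn are assumptions of this sketch, not the disclosed method:

    from sklearn.neighbors import KNeighborsClassifier

    # Assumed fixed joint order so every skeleton flattens to the same vector.
    JOINT_ORDER = ["neck", "left_shoulder", "right_shoulder"]

    def to_vector(skeleton):
        # Flatten {joint: (x, y)} into [x1, y1, x2, y2, ...].
        return [c for j in JOINT_ORDER for c in skeleton[j]]

    def train(motion_infos, labels):
        # Step S33, sketched: fit a 1-nearest-neighbour model mapping
        # skeleton information to label information.
        model = KNeighborsClassifier(n_neighbors=1)
        model.fit([to_vector(m) for m in motion_infos], labels)
        return model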
  • In step S31, not only sets of image data and label information but also image data alone may be input. In that case, motion information is extracted from the image data in step S32, and only the motion information is given to the learning model 32 as learning data in step S33. In this way, a certain learning effect can be obtained even when no label information exists.
  • The processing from step S41 to step S42 is the same as the processing from step S21 to step S22 in FIG. 7, and the processing from step S44 to step S45 is the same as the processing from step S24 to step S25 in FIG. 7.
  • Step S43: Motion identification processing
  • The motion identification unit 24 inputs the target information extracted in step S42 into the learning model 32 stored in the storage 13 and acquires the label information output from the learning model 32. The motion identification unit 24 then identifies the motion content indicated by the acquired label information as the motion content performed by the target person. That is, the motion identification unit 24 identifies the motion content indicated by the label information that the learning model 32 infers from the target information as the motion content performed by the target person.
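  • Continuing the hypothetical scikit-learn sketch above, the inference of step S43 reduces to a single predict call:

    def identify_with_model(model, target_info):
        # Step S43, sketched: infer label information from target information.
        return model.predict([to_vector(target_info)])[0]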
  • As described above, the motion identification device 10 according to the second embodiment generates the learning model 32 and identifies the label information corresponding to the target information with the learning model 32. This makes it possible to identify the label information corresponding to the target information efficiently.
  • <Modification 5> The motion identification device 10 described above is a single device, but it may instead be a system including a plurality of devices, for example a learning device having the functions related to the learning processing and an identification device having the functions related to the identification processing.
  • In this case, the learning model 32 may be stored in a storage device provided outside the learning device and the identification device, or in the storage of either the learning device or the identification device.
  • The hardware of the learning device and the identification device is omitted in the figure. Like the motion identification device 10, the learning device and the identification device each include a processor, a memory, a storage, and a communication interface as hardware.
  • The motion identification device 10 includes a processor 11, a memory 12, a storage 13, and a communication interface 14 as hardware. The motion identification device 10 may include, as the processor 11, a CPU, a GPU, a processor for learning processing, and a processor for inference processing. In that case, the skeleton extraction unit 22, which performs image processing, may be realized by the GPU, the learning unit 26, which learns the learning model 32, by the processor for learning processing, and the motion identification unit 24, which performs inference with the learning model 32, by the processor for inference processing, with the remaining image acquisition unit 21 and output unit 25 realized by the CPU.
  • 10 motion identification device, 11 processor, 12 memory, 13 storage, 14 communication interface, 15 electronic circuit, 21 image acquisition unit, 22 skeleton extraction unit, 23 motion information registration unit, 24 motion identification unit, 25 output unit, 26 learning unit, 31 motion information table, 32 learning model, 41 imaging device, 42 person, 43 skeleton information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
PCT/JP2019/000056 2019-01-07 2019-01-07 Motion identification device, motion identification method, and motion identification program WO2020144727A1 (ja)

Priority Applications (5)

Application Number Priority Date Filing Date Title
JP2019524483A JP6777819B1 (ja) 2019-01-07 2019-01-07 Motion identification device, motion identification method, and motion identification program
PCT/JP2019/000056 WO2020144727A1 (ja) 2019-01-07 2019-01-07 Motion identification device, motion identification method, and motion identification program
DE112019006583.1T DE112019006583T5 (de) 2019-01-07 2019-01-07 Motion identification device, motion identification method, and motion identification program
CN201980087653.9A CN113302653A (zh) 2019-01-07 2019-01-07 Motion identification device, motion identification method, and motion identification program
TW108120437A TW202026951A (zh) 2019-01-07 2019-06-13 Motion identification device, motion identification method, and motion identification program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/000056 WO2020144727A1 (ja) 2019-01-07 2019-01-07 Motion identification device, motion identification method, and motion identification program

Publications (1)

Publication Number Publication Date
WO2020144727A1 (ja) 2020-07-16

Family

ID=71521505

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/000056 WO2020144727A1 (ja) 2019-01-07 2019-01-07 Motion identification device, motion identification method, and motion identification program

Country Status (5)

Country Link
JP (1) JP6777819B1 (ja)
CN (1) CN113302653A (zh)
DE (1) DE112019006583T5 (de)
TW (1) TW202026951A (zh)
WO (1) WO2020144727A1 (ja)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017174093A * 2016-03-23 2017-09-28 Hino Motors, Ltd. Driver state determination device
JP2017199303A * 2016-04-28 2017-11-02 Panasonic Intellectual Property Management Co., Ltd. Identification device, identification method, identification program, and recording medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016042332A (ja) * 2014-08-19 2016-03-31 Dai Nippon Printing Co., Ltd. Work operation inspection system
JP2016099982A (ja) 2014-11-26 2016-05-30 Nippon Telegraph and Telephone Corporation Action recognition device, action learning device, method, and program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017174093A * 2016-03-23 2017-09-28 Hino Motors, Ltd. Driver state determination device
JP2017199303A * 2016-04-28 2017-11-02 Panasonic Intellectual Property Management Co., Ltd. Identification device, identification method, identification program, and recording medium

Also Published As

Publication number Publication date
CN113302653A (zh) 2021-08-24
JP6777819B1 (ja) 2020-10-28
DE112019006583T5 (de) 2021-12-16
TW202026951A (zh) 2020-07-16
JPWO2020144727A1 (ja) 2021-02-18

Similar Documents

Publication Publication Date Title
JP5243529B2 (ja) Camera pose estimation device and method for augmented-reality images
JP6293386B2 (ja) Data processing device, data processing method, and data processing program
US10146992B2 (en) Image processing apparatus, image processing method, and storage medium that recognize an image based on a designated object type
CN112446363A (zh) Image stitching and deduplication method and device based on video frame extraction
JP7080285B2 (ja) Motion identification device, motion identification method, and motion identification program
JP2013510462A5 (de)
US20210338109A1 (en) Fatigue determination device and fatigue determination method
US20220207266A1 (en) Methods, devices, electronic apparatuses and storage media of image processing
JP6786015B1 (ja) Motion analysis system and motion analysis program
WO2020144727A1 (ja) Motion identification device, motion identification method, and motion identification program
US20230326251A1 (en) Work estimation device, work estimation method, and non-transitory computer readable medium
US20230100238A1 (en) Methods and systems for determining the 3d-locations, the local reference frames and the grasping patterns of grasping points of an object
JP2014199519A (ja) Object identification device, object identification method, and program
JP6393495B2 (ja) Image processing device and object recognition method
JP2007140729A (ja) Method and device for detecting the position and orientation of an article
US20170069138A1 (en) Information processing apparatus, method for controlling information processing apparatus, and storage medium
WO2022003981A1 (ja) Action identification device, action identification method, and action identification program
JP7048347B2 (ja) Positional relationship determination device
JP7158534B1 (ja) Behavior analysis device, behavior analysis method, and behavior analysis program
Saini et al. Improvement in copy-move forgery detection using hybrid approach
JP7111694B2 (ja) Inspection support device, inspection support method, and program
JP2008146132A (ja) Image detection device, program, and image detection method
JP2010160707A (ja) Image recognition method and device therefor
JP7350222B1 (ja) Motion analysis device, motion analysis method, and motion analysis program
JP7376446B2 (ja) Work analysis program and work analysis device

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019524483

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19908991

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 19908991

Country of ref document: EP

Kind code of ref document: A1