WO2023062762A1 - Estimation program, estimation method, and information processing device - Google Patents
Estimation program, estimation method, and information processing device
- Publication number
- WO2023062762A1 (PCT/JP2021/037972)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- head
- information
- joints
- unit
- image
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B71/00—Games or sports accessories not covered in groups A63B1/00 - A63B69/00
- A63B71/06—Indicating or scoring devices for games or players, or for other sports activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Definitions
- the present invention relates to an estimation program and the like.
- 3D sensing technology has been established to detect the 3D skeletal coordinates of a person with an accuracy of ⁇ 1 cm from multiple 3D laser sensors. This 3D sensing technology is expected to be applied to a gymnastics scoring support system and expanded to other sports and other fields.
- a method using a 3D laser sensor is referred to as a laser method.
- the laser is emitted approximately 2 million times per second, and the depth of each irradiation point, including points on the target person, is obtained based on the laser's round-trip time (Time of Flight: ToF).
- there is also a method in which 3D skeleton recognition is performed using an image-based system instead of a laser-based system.
- a CMOS (Complementary Metal Oxide Semiconductor) imager is used to obtain RGB (Red Green Blue) data of each pixel, and an inexpensive RGB camera can be used.
- 2D features include 2D skeleton information and heatmap information.
- FIG. 37 is a diagram showing an example of a human body model.
- the human body model M1 is composed of 21 joints.
- each joint is indicated by a node and assigned a number from 0 to 20.
- the relationship between node numbers and joint names is shown in table Te1.
- the joint name corresponding to node 0 is "SPINE_BASE". Descriptions of the joint names for nodes 1 to 20 are omitted.
- FIG. 38 is a diagram for explaining a method using machine learning.
- 2D features 22 representing each joint feature are acquired by applying 2D backbone processing 21a to each input image 21 captured by each camera.
- aggregated volumes 23 are obtained by back-projecting each 2D feature 22 onto a 3D cube according to the camera parameters.
- Processed volumes 25 correspond to a heatmap representing the 3D likelihood of each joint.
- 3D skeleton information 27 is obtained by executing soft-argmax 26 on processed volumes 25 .
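The soft-argmax 26 applied to the processed volumes 25 can be illustrated with a minimal numpy sketch, assuming each processed volume is a per-joint 3D likelihood grid; the function name and array shapes here are hypothetical, not from the patent.

```python
import numpy as np

def soft_argmax_3d(volume):
    """Differentiable argmax: softmax-weighted average of voxel coordinates.

    volume: (D, H, W) array of per-voxel likelihoods for one joint.
    Returns the expected (z, y, x) coordinate as floats.
    """
    d, h, w = volume.shape
    probs = np.exp(volume - volume.max())      # numerically stable softmax
    probs /= probs.sum()
    zs, ys, xs = np.meshgrid(np.arange(d), np.arange(h), np.arange(w),
                             indexing="ij")
    return (float((probs * zs).sum()),
            float((probs * ys).sum()),
            float((probs * xs).sum()))

# A volume sharply peaked at voxel (2, 3, 1) yields coordinates close to (2, 3, 1).
vol = np.zeros((4, 5, 6))
vol[2, 3, 1] = 20.0
print(soft_argmax_3d(vol))
```

Unlike a hard argmax, this weighted average is differentiable, which is why it can sit at the end of a machine-learned pipeline such as the one in FIG. 38.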
- the condition for hopping is that the top of the athlete's head be lower than the feet.
- the contour of the head obtained as a result of 3D skeleton recognition differs from the contour of the actual head, making it impossible to accurately identify the position of the top of the head.
- FIG. 39 is a diagram showing an example of an image in which the position of the top of the head cannot be specified with high accuracy.
- an image 10a with "appearance”, an image 10b with “disordered hair”, and an image 10c with “occlusion” are used for explanation.
- Appearance is defined as the state in which the competitor's head blends into the background, making it difficult even for humans to discern the area of the head.
- Disordered hair is defined as the state in which the player's hair is disheveled.
- Occlusion is defined as the state in which the top of the head is covered by the player's torso or arms.
- If the position of the top of the head is specified by 3D skeleton recognition of the image 10a based on the conventional technology, the position 1a will be specified due to the influence of appearance.
- the correct position of the top of the head is 1b.
- If the position of the top of the head is specified by performing 3D skeleton recognition of the image 10b, the position 1c will be specified due to the effect of disheveled hair.
- the correct position of the top of the head is 1d.
- If the position of the top of the head is specified by 3D skeleton recognition of the image 10c, the position 1e will be specified due to the influence of occlusion.
- the correct position of the top of the head is 1f.
- an object of the present invention is to provide an estimation program, an estimation method, and an information processing device capable of accurately estimating the position of the top of the head of a player.
- the computer executes the following processing.
- the computer inputs an image of the player's head in a predetermined state to the machine learning model to identify the positions of multiple joints included in the player's face.
- the computer uses each of the multiple joint positions to estimate the position of the top of the player's head.
- FIG. 1 is a diagram showing an example of a gymnastics scoring support system according to the first embodiment.
- FIG. 2 is a diagram for explaining an example of source information.
- FIG. 3 is a diagram for explaining an example of target information.
- FIG. 4 is a diagram for supplementary explanation of a technique for calculating conversion parameters.
- FIG. 5 is a diagram for supplementary explanation of the method of estimating the top of the player's head.
- FIG. 6 is a diagram for explaining the effect of the information processing apparatus according to the first embodiment.
- FIG. 7 is a functional block diagram showing the configuration of the learning device according to the first embodiment.
- FIG. 8 is a diagram showing an example of the data structure of learning data.
- FIG. 9 is a functional block diagram showing the configuration of the information processing apparatus according to the first embodiment.
- FIG. 10 is a diagram showing an example of the data structure of the measurement table.
- FIG. 11 is a diagram showing an example of the data structure of the skeleton recognition result table.
- FIG. 12 is a diagram for explaining the second feature.
- FIG. 13 is a diagram showing one second feature.
- FIG. 14 is a diagram for supplementary explanation of RANSAC.
- FIG. 15 is a diagram for explaining the problem of RANSAC.
- FIG. 16 is a diagram for explaining the processing of the estimation unit according to the first embodiment;
- FIG. 17 is a diagram for explaining the process of detecting bone length abnormality.
- FIG. 18 is a diagram for explaining the process of detecting a reverse/sideways curve abnormality.
- FIG. 19 is a diagram (1) for supplementary explanation of each vector used in reverse/sideways abnormality detection.
- FIG. 20 is a diagram (2) for supplementary explanation of each vector used in reverse/sideways abnormality detection.
- FIG. 21 is a diagram (3) for supplementary explanation of each vector used in the reverse/sideways curve abnormality detection.
- FIG. 22 is a diagram (4) for supplementary explanation of each vector used in reverse/side-turn abnormality detection.
- FIG. 23 is a diagram for explaining the process of detecting an over-bending abnormality.
- FIG. 24 is a diagram for explaining bone length correction.
- FIGS. 25A and 25B are diagrams for explaining the reverse/sideways bending correction.
- FIG. 26 is a diagram for explaining the excessive bending correction.
- FIG. 27 is a flow chart showing the processing procedure of the learning device according to the first embodiment.
- FIG. 28 is a flowchart illustrating the processing procedure of the information processing apparatus according to the first embodiment.
- FIG. 29 is a flowchart (1) showing a processing procedure of transformation parameter estimation processing.
- FIG. 30 is a flowchart (2) showing a processing procedure of transformation parameter estimation processing.
- FIG. 31 is a diagram for explaining comparison results of errors in parietal region estimation.
- FIG. 32 is a diagram illustrating an example of source information according to the second embodiment.
- FIGS. 33A and 33B are diagrams for explaining the process of identifying the top of the head.
- FIG. 34 is a functional block diagram showing the configuration of the information processing apparatus according to the second embodiment.
- FIG. 35 is a flow chart showing the processing procedure of the information processing apparatus according to the second embodiment.
- FIG. 36 is a diagram illustrating an example of a hardware configuration of a computer that implements functions similar to those of the information processing apparatus.
- FIG. 37 is a diagram showing an example of a human body model.
- FIG. 38 is a diagram for explaining a method using machine learning.
- FIG. 39 is a diagram showing an example of an image in which the position of the top of the head cannot be specified with high accuracy.
- FIG. 1 is a diagram showing an example of a gymnastics scoring support system according to the first embodiment.
- this gymnastics scoring support system 35 has cameras 30a, 30b, 30c, and 30d, a learning device 50, and an information processing device 100.
- the cameras 30a to 30d and the information processing apparatus 100 are connected by wire or wirelessly.
- the learning device 50 and the information processing device 100 are connected by wire or wirelessly.
- the gymnastics scoring support system 35 may further have other cameras.
- in this example, the contestant H1 performs a series of performances on the equipment, but the performance is not limited to this.
- the contestant H1 may perform a performance in a place where no equipment exists, or may perform actions other than performance.
- the camera 30a is a camera that takes an image of the competitor H1.
- the camera 30a corresponds to a CMOS imager, an RGB camera, or the like.
- the camera 30a continuously captures images at a predetermined frame rate (frames per second: FPS) and transmits image data to the information processing apparatus 100 in chronological order.
- An image frame is the data of one image among the data of a plurality of continuous images.
- Image frames are given frame numbers in chronological order.
- the description of the cameras 30b, 30c, and 30d is the same as that of the camera 30a.
- the cameras 30a to 30d are collectively referred to as "camera 30" as appropriate.
- the learning device 50 machine-learns a machine-learning model for estimating the positions of facial joints from image frames based on learning data prepared in advance.
- Facial joints include left and right eyes, left and right ears, nose, chin, mouth, and the like.
- a machine learning model for estimating the positions of facial joints from image frames is referred to as a "facial joint estimation model.”
- the learning device 50 outputs information on the machine-learned facial joint estimation model to the information processing device 100 .
- the information processing apparatus 100 estimates the position of the top of the head of the player H1 based on source information prepared in advance and target information, which is the facial joint recognition result obtained using the facial joint estimation model.
- the source information and target information will be described below.
- FIG. 2 is a diagram for explaining an example of source information.
- the positions of a plurality of face joints p1 and the position of the parietal joint tp1 are set for the 3D human body model M2.
- the source information 60a is set in the information processing apparatus 100 in advance.
- FIG. 3 is a diagram for explaining an example of target information.
- the target information is generated by inputting the image frames acquired from the camera into the facial joint estimation model. As shown in FIG. 3, a plurality of facial joints p2 are specified in the target information 60b.
- the information processing apparatus 100 calculates transformation parameters for matching each position of the facial joint of the source information 60a with each position of the facial joint of the target information 60b.
- the information processing apparatus 100 estimates the position of the top of the head of the player H1 by applying the calculated transformation parameter to the position of the top of the head of the source information 60a.
- FIG. 4 is a diagram for supplementary explanation of the method of calculating the transformation parameters. The transformation parameters include rotation R, translation t, and scale c. Rotation R is a rotation matrix, translation t is a vector, and scale c is a scalar value. Steps S1 to S5 will be described in order.
- Step S1 will be explained. Assume that the positions of the plurality of face joints p1 included in the source information 60a are x (x is a vector value).
- Step S2 will be explained.
- the position of the face joint p1 becomes "Rx".
- Step S3 will be explained. By multiplying the rotated position "Rx" of the face joint p1 by the scale c, the position of the face joint p1 becomes "cRx".
- Step S4 will be explained.
- By adding the translation t, the position of the face joint p1 becomes "cRx + t".
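Steps S2 to S4 compose the transformation "cRx + t" in order. The following is a minimal numpy sketch with hypothetical values for R, c, and t, chosen only for illustration.

```python
import numpy as np

# Hypothetical transformation parameters for illustration:
# a 90-degree rotation about the z-axis, scale 2, and a small translation.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
c = 2.0
t = np.array([1.0, 0.0, 0.0])

x = np.array([1.0, 0.0, 0.0])   # one facial joint position of the source
step2 = R @ x                    # Step S2: "Rx"
step3 = c * step2                # Step S3: "cRx"
step4 = step3 + t                # Step S4: "cRx + t"
print(step4)                     # → [1. 2. 0.]
```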
- Step S5 will be explained. Assuming that the positions of the facial joints p2 of the target information 60b are y, the difference between the source information 60a to which the transformation parameters are applied and the target information 60b can be specified by calculating the difference between y and "cRx + t".
- the difference e² between the source information 60a to which the transformation parameters are applied and the target information 60b is defined by Equation (1): e² = Σ ||y − (cRx + t)||² … (1)
- x indicates the position of the facial joint of the source information 60a.
- y indicates the position of the facial joint in the target information 60b.
- the information processing apparatus 100 uses the method of least squares or the like to calculate the transformation parameters R, t, and c that minimize the difference e² in Equation (1).
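The least-squares minimization of e² over R, c, and t has a well-known closed-form solution (Umeyama's method). The patent does not specify the solver, so the following numpy sketch, with a hypothetical function name, shows one standard way to compute these transformation parameters from corresponding facial joint positions.

```python
import numpy as np

def similarity_transform(x, y):
    """Least-squares R, c, t with y ≈ c * R @ x + t (Umeyama's closed form).

    x, y: (N, 3) arrays of corresponding source/target joint positions.
    """
    mx, my = x.mean(axis=0), y.mean(axis=0)
    xc, yc = x - mx, y - my                     # centered point sets
    cov = yc.T @ xc / len(x)                    # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])                  # guard against reflections
    R = U @ D @ Vt
    var_x = (xc ** 2).sum() / len(x)
    c = np.trace(np.diag(S) @ D) / var_x
    t = my - c * (R @ mx)
    return R, c, t

# Recover a known transform from noiseless correspondences.
rng = np.random.default_rng(0)
src = rng.normal(size=(6, 3))
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
dst = 1.5 * src @ R_true.T + np.array([0.3, -0.2, 0.8])
R, c, t = similarity_transform(src, dst)
```

Applying the recovered (R, c, t) to the head-top position tp1 of the source then yields the estimated head-top position, in line with the application step described below.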
- After calculating the transformation parameters, the information processing device 100 applies them to the position of the top of the head of the source information 60a, thereby estimating the position of the top of the head of the player H1.
- FIG. 5 is a diagram for supplementary explanation of the method of estimating the top of the player's head.
- Using Equation (2), y = cRx + t, the information processing apparatus 100 calculates the athlete's facial joint positions y (including the position tp2 of the top of the head) from the facial joint positions x (including the position tp1 of the top of the head) of the source information 60a.
- the transformation parameters in Equation (2) are those that minimize the difference e² calculated by the above process.
- the information processing apparatus 100 acquires the position tp2 of the top of the head included in the calculated position y.
- the information processing apparatus 100 calculates transformation parameters for matching the positions of the facial joints of the source information 60a with the positions of the facial joints of the target information 60b.
- the information processing apparatus 100 calculates the position of the top of the head of the player by applying the calculated transformation parameters to the top of the head of the source information 60a. Since the relationship between the facial joints and the parietal region is rigid, the position of the parietal region of the player can be estimated using this relationship, thereby improving the estimation accuracy.
- FIG. 6 is a diagram for explaining the effect of the information processing apparatus according to the first embodiment.
- an image 10a with “appearance”, an image 10b with “disordered hair”, and an image 10c with “occlusion” are used for explanation.
- If the position of the top of the head is specified by performing 3D skeleton recognition of the image 10a based on the conventional technology, the position 1a of the top of the head will be specified due to the influence of the appearance.
- the information processing apparatus 100 executes the above process, thereby specifying the position 2a of the top of the head.
- the correct position of the top of the head is 1b, so the accuracy of estimating the top of the head is improved compared to the conventional technique.
- If the position of the top of the head is specified by performing 3D skeleton recognition of the image 10b based on the conventional technology, the position 1c of the top of the head will be specified due to the effect of disheveled hair.
- the information processing apparatus 100 executes the above-described process to specify the position 2b of the top of the head.
- the correct position of the top of the head is 1d, so the accuracy of estimating the top of the head is improved compared to the conventional technique.
- the information processing apparatus 100 executes the above process to specify the position 2c of the top of the head.
- the correct position of the top of the head is 1f, so the accuracy of estimating the top of the head is improved compared to the conventional technology.
- the information processing apparatus 100 can improve the accuracy of parietal region estimation by using facial joints that are less affected by poor observation. Also, when evaluating the performance of a contestant using the top of the head, it is possible to appropriately evaluate whether the performance is successful or not. Competitor performances using the crown of the head include hops on the balance beam and some floor exercises.
- FIG. 7 is a functional block diagram showing the configuration of the learning device according to the first embodiment.
- the learning device 50 has a communication section 51 , an input section 52 , a display section 53 , a storage section 54 and a control section 55 .
- the communication unit 51 performs data communication with the information processing device 100 .
- the communication unit 51 transmits information on the machine-learned facial joint estimation model 54 b to the information processing apparatus 100 .
- the communication unit 51 may receive learning data 54a used in machine learning from an external device.
- the input unit 52 corresponds to an input device for inputting various types of information to the learning device 50.
- the display unit 53 displays information output from the control unit 55.
- the storage unit 54 stores learning data 54a and a facial joint estimation model 54b.
- the storage unit 54 corresponds to semiconductor memory devices such as RAM (Random Access Memory) and flash memory (Flash Memory), and storage devices such as HDD (Hard Disk Drive).
- the learning data 54a holds information for machine learning the facial joint estimation model 54b.
- image frames with facial joint annotations are held as information for machine learning.
- FIG. 8 is a diagram showing an example of the data structure of learning data. As shown in FIG. 8, the learning data associates item numbers, input data, and correct data (labels). An image frame containing a face image of a person is set as input data. As correct data, the positions of facial joints included in the image frame are set.
- the facial joint estimation model 54b corresponds to an NN (Neural Network) or the like.
- the facial joint estimation model 54b outputs the positions of the facial joints based on machine-learned parameters when an image frame is input.
- the control unit 55 has an acquisition unit 55a, a learning unit 55b, and an output unit 55c.
- the control unit 55 corresponds to a CPU (Central Processing Unit) or the like.
- the acquisition unit 55a acquires the learning data 54a from the communication unit 51 or the like.
- the acquiring unit 55 a registers the acquired learning data 54 a in the storage unit 54 .
- the learning unit 55b performs machine learning of the facial joint estimation model 54b using the learning data 54a based on the error backpropagation method. For example, the learning unit 55b trains the parameters of the facial joint estimation model 54b so that the result of inputting the input data of the learning data 54a into the facial joint estimation model 54b approaches the correct data paired with the input data.
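The training principle just described, updating parameters so that the model output for each input approaches the paired correct data, can be illustrated with a minimal gradient-descent sketch. This is an illustration only: the actual facial joint estimation model 54b is a neural network trained by error backpropagation, and every name and size below is hypothetical.

```python
import numpy as np

# Toy stand-in for supervised training: a 1-layer linear model replaces the
# NN so the sketch stays self-contained. X stands in for input data (image
# features) and Y for the correct data (facial joint positions).
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
W_true = rng.normal(size=(4, 2))
Y = X @ W_true

W = np.zeros((4, 2))                       # model parameters to be trained
lr = 0.1
for _ in range(300):
    pred = X @ W
    grad = X.T @ (pred - Y) / len(X)       # gradient of the mean squared error
    W -= lr * grad                         # gradient-descent update

print(np.abs(W - W_true).max())            # small: outputs now match the labels
```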
- the output unit 55c outputs information of the facial joint estimation model 54b for which machine learning has been completed to the information processing apparatus 100.
- FIG. 9 is a functional block diagram showing the configuration of the information processing apparatus according to the first embodiment.
- the information processing apparatus 100 has a communication section 110 , an input section 120 , a display section 130 , a storage section 140 and a control section 150 .
- the communication unit 110 performs data communication with the camera 30 and the learning device 50 .
- communication unit 110 receives image frames from camera 30 .
- the communication unit 110 receives information on the machine-learned facial joint estimation model 54 b from the learning device 50 .
- the input unit 120 corresponds to an input device for inputting various types of information to the information processing device 100 .
- the display unit 130 displays information output from the control unit 150 .
- the storage unit 140 has a facial joint estimation model 54b, source information 60a, a measurement table 141, a skeleton recognition result table 142, and a technique recognition table 143.
- the storage unit 140 corresponds to semiconductor memory elements such as RAM and flash memory, and storage devices such as HDD.
- the facial joint estimation model 54b is a facial joint estimation model 54b for which machine learning has already been performed.
- the facial joint estimation model 54b is trained by the learning device 50 described above.
- the source information 60a is information in which the positions of a plurality of facial joints p1 and the positions of the parietal joints tp1 are set, as described with reference to FIG.
- the measurement table 141 is a table that stores image frames captured by the camera 30 in chronological order.
- FIG. 10 is a diagram showing an example of the data structure of the measurement table. As shown in FIG. 10, the measurement table 141 associates camera identification information with image frames.
- Camera identification information is information that uniquely identifies a camera.
- camera identification information “C30a” corresponds to camera 30a
- camera identification information “C30b” corresponds to camera 30b
- camera identification information “C30c” corresponds to camera 30c
- camera identification information “C30d” corresponds to the camera 30d.
- the image frames are time-series image frames captured by the corresponding camera 30 . Assume that each image frame is assigned a frame number in chronological order.
- the skeleton recognition result table 142 is a table that stores the recognition results of the 3D skeleton of player H1.
- FIG. 11 is a diagram showing an example of the data structure of the skeleton recognition result table. As shown in FIG. 11, this skeleton recognition result table 142 associates frame numbers with 3D skeleton information.
- a frame number is a frame number given to an image frame used when estimating 3D skeleton information.
- the 3D skeleton information includes the positions of joints defined in each node 0-20 shown in FIG. 37 and the positions of a plurality of facial joints including the top of the head.
- the technique recognition table 143 is a table that associates time-series changes in each joint position included in each piece of 3D skeleton information with types of techniques.
- the technique recognition table 143 also associates combinations of technique types with scores.
- the score is calculated as the sum of the D (Difficulty) score and the E (Execution) score.
- the D score is a score calculated based on the difficulty of the technique.
- the E-score is a score calculated by a deduction method according to the degree of perfection of the technique.
- the technique recognition table 143 also includes information that associates time-series changes of the top of the head with the type of technique, such as a jump on the balance beam or part of a floor exercise.
- the control unit 150 has an acquisition unit 151 , a preprocessing unit 152 , a target information generation unit 153 , an estimation unit 154 , an abnormality detection unit 155 , a correction unit 156 and a technique recognition unit 157 .
- the control unit 150 corresponds to the CPU and the like.
- the acquisition unit 151 acquires the facial joint estimation model 54b for which machine learning has been performed from the learning device 50 via the communication unit 110, and registers the facial joint estimation model 54b in the storage unit 140.
- the acquisition unit 151 acquires image frames in time series from the camera 30 via the communication unit 110 .
- the acquisition unit 151 stores the image frames acquired from the camera 30 in the measurement table 141 in association with the camera identification information.
- the preprocessing unit 152 executes 3D skeleton recognition of the athlete H1 from the image frames (multi-viewpoint image frames) registered in the measurement table 141.
- the preprocessing unit 152 may use any conventional technique to generate the 3D skeleton information of the player H1. An example of the processing of the preprocessing unit 152 will be described below.
- the preprocessing unit 152 acquires the image frames of the camera 30 from the measurement table 141, and based on the image frames, generates a plurality of second features respectively corresponding to the joints of the athlete H1.
- the second feature is a heatmap that indicates the likelihood of each joint position.
- a second feature corresponding to each joint is generated from one image frame acquired from one camera. For example, with 21 joints and 4 cameras, 84 second features are generated for each image frame.
- FIG. 12 is a diagram for explaining the second feature.
- An image frame Im30a1 shown in FIG. 12 is an image frame captured by the camera 30a.
- An image frame Im30b1 is an image frame captured by the camera 30b.
- An image frame Im30c1 is an image frame captured by the camera 30c.
- An image frame Im30d1 is an image frame captured by the camera 30d.
- the preprocessing unit 152 generates the second feature group information G1a based on the image frame Im30a1.
- the second feature group information G1a includes 21 second features corresponding to each joint.
- the preprocessing unit 152 generates second feature group information G1b based on the image frame Im30b1.
- the second feature group information G1b includes 21 second features corresponding to each joint.
- the preprocessing unit 152 generates the second feature group information G1c based on the image frame Im30c1.
- the second feature group information G1c includes 21 second features corresponding to each joint.
- the preprocessing unit 152 generates second feature group information G1d based on the image frame Im30d1.
- the second feature group information G1d includes 21 second features corresponding to each joint.
- FIG. 13 is a diagram showing one second feature.
- a second feature Gc1-3 shown in FIG. 13 is a second feature corresponding to the joint "HEAD" among the second features included in the second feature group information G1d.
- a likelihood is set for each pixel of the second features Gc1-3.
- colors are set according to likelihood values. The location where the likelihood is maximum becomes the coordinates of the corresponding joint. For example, in the feature Gc1-3, the area Ac1-3 with the maximum likelihood value can be identified as the coordinates of the joint "HEAD".
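Since the coordinates of a joint are taken as the location where the likelihood of its second feature (heatmap) is maximum, the extraction can be sketched as follows; a minimal numpy illustration with a hypothetical function name and heatmap values.

```python
import numpy as np

def joint_from_heatmap(heatmap):
    """Return the (row, col) of the maximum-likelihood pixel of one
    second feature (per-joint heatmap)."""
    idx = np.unravel_index(int(np.argmax(heatmap)), heatmap.shape)
    return tuple(int(i) for i in idx)

# Hypothetical 8x8 heatmap peaked where the joint "HEAD" is observed.
hm = np.zeros((8, 8))
hm[5, 2] = 0.9
print(joint_from_heatmap(hm))   # → (5, 2)
```

With 21 joints and 4 cameras, this lookup would run once per second feature, i.e. 84 times per frame time.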
- the preprocessing unit 152 detects an abnormal second feature from the second features included in the second feature group information G1a, and removes the detected abnormal second feature from the second feature group information G1a.
- the preprocessing unit 152 detects abnormal second features from the second features included in the second feature group information G1b, and removes the detected abnormal second features from the second feature group information G1b.
- the preprocessing unit 152 detects an abnormal second feature from the second features included in the second feature group information G1c, and removes the detected abnormal second feature from the second feature group information G1c.
- the preprocessing unit 152 detects abnormal second features from the second features included in the second feature group information G1d, and removes the detected abnormal second features from the second feature group information G1d.
- the preprocessing unit 152 integrates the second feature group information G1a, G1b, G1c, and G1d excluding the abnormal second feature, and generates 3D skeleton information of the athlete H1 based on the integrated result.
- the 3D skeleton information generated by the preprocessing unit 152 includes the positions (three-dimensional coordinates) of each joint described with reference to FIG. Note that the preprocessing unit 152 may generate the 3D skeleton information of the player H1 using the conventional technology described with reference to FIG. Also, in the description of FIG. 37, the joint numbered 3 is "HEAD", but it may be a plurality of face joints including the top of the head.
- the preprocessing unit 152 outputs the 3D skeleton information to the estimation unit 154 each time it generates 3D skeleton information.
- the preprocessing unit 152 also outputs the image frames used for generating the 3D skeleton information to the target information generation unit 153.
- the target information generation unit 153 generates target information by inputting an image frame to the facial joint estimation model 54b. Such target information corresponds to the target information 60b described with reference to FIG. The target information generation unit 153 outputs the target information to the estimation unit 154.
- the target information generation unit 153 selects one of the image frames and inputs it to the facial joint estimation model 54b.
- the target information generation unit 153 repeatedly executes the above process each time an image frame is acquired.
- the estimation unit 154 estimates the position of the top of the head of the player H1 based on the source information 60a and the target information 60b (target information specific to the image frame).
- RANSAC (RANdom SAmple Consensus)
- Fig. 14 is a diagram for supplementary explanation of RANSAC. Steps S10 to S13 in FIG. 14 will be described in order.
- Step S10 will be explained.
- facial joints p3-1, p3-2, p3-3, and p3-4 are included in the target information obtained by inputting the image frame to the facial joint estimation model 54b or the like.
- facial joint p3-1 is the right ear facial joint.
- the facial joint p3-2 is the nasal facial joint.
- the facial joint p3-3 is the facial joint of the neck.
- Facial joints p3-4 are facial joints of the left ear.
- Step S11 will be explained.
- RANSAC randomly samples facial joints.
- in this example, three face joints are sampled: face joints p3-2, p3-3, and p3-4.
- Step S12 will be explained.
- RANSAC performs alignment based on the rigid-body relationship between source information and target information, and calculates rotation, translation, and scale.
- facial joints p4-1, p4-2, p4-3, and p4-4 are identified by applying the calculation results (rotation, translation, scale) to the source information and reprojecting.
- Step S13 will be explained.
- circles cir1, cir2, cir3 and cir4 centered on facial joints p4-1 to p4-4 are set.
- the radii (threshold values) of the circles cir1 to cir4 are preset.
- the facial joints included in circles cir1, cir2, cir3, and cir4 are defined as inliers.
- the facial joints not included in any of the circles cir1, cir2, cir3, and cir4 are defined as outliers.
- in this example, facial joints p3-2, p3-3, and p3-4 are inliers, and facial joint p3-1 is an outlier.
- RANSAC counts the number of inliers (hereinafter referred to as the number of inliers).
- the number of inliers is "3".
- the processing of steps S11 to S13 is repeatedly executed while changing the sampling target described in step S11, and the combination of sampled facial joints that maximizes the number of inliers is specified. For example, if the number of inliers is maximized when facial joints p3-2, p3-3, and p3-4 are sampled in step S11, these facial joints are output as the result after outlier removal.
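The sampling loop of steps S10 to S13 can be sketched as follows. Note that `fit_transform` here is an assumed placeholder standing in for the rigid alignment of step S12, not the document's actual solver:

```python
import itertools
import numpy as np

def count_inliers(reprojected, targets, radius):
    """Inliers are target joints within `radius` of their reprojected joint."""
    dists = np.linalg.norm(reprojected - targets, axis=1)
    return int(np.sum(dists < radius))

def ransac_select(source, target, fit_transform, radius=0.5):
    """Try every 3-joint sample; keep the combination with the most inliers."""
    best_count, best_combo = -1, None
    for combo in itertools.combinations(range(len(target)), 3):
        transform = fit_transform(source[list(combo)], target[list(combo)])
        n = count_inliers(transform(source), target, radius)
        if n > best_count:
            best_count, best_combo = n, combo
    return best_count, best_combo

# Toy data: source and target agree except for one outlier joint.
source = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
target = source.copy()
target[0] += 10.0                                # outlier, analogous to p3-1
identity_fit = lambda s, t: (lambda pts: pts)    # placeholder for step S12
n_inliers, combo = ransac_select(source, target, identity_fit)
```
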
- FIG. 15 is a diagram for explaining the problem of RANSAC.
- with RANSAC, it is difficult to determine which combination is better when the numbers of inliers are tied.
- in step S11 of case 1, face joints p3-1, p3-2, and p3-3 are sampled. Description of step S12 is omitted.
- Step S13 of Case 1 will be explained. Circles cir1, cir2, cir3, and cir4 centered on facial joints p4-1 to p4-4 obtained by reprojecting the source information are set.
- facial joints p3-1, p3-2, and p3-3 are inliers, and the number of inliers is "3".
- in step S11 of case 2, face joints p3-2, p3-3, and p3-4 are sampled. Description of step S12 is omitted.
- Step S13 of case 2 will be explained. Circles cir1, cir2, cir3, and cir4 centered on facial joints p4-1 to p4-4 obtained by reprojecting the source information are set.
- facial joints p3-2, p3-3, and p3-4 are inliers, and the number of inliers is "3".
- comparing case 1 and case 2, facial joints p3-2, p3-3, and p3-4 lie close to the center positions of circles cir2, cir3, and cir4, so it can be said that case 2 gives the better result overall. However, RANSAC cannot automatically adopt the result of case 2 because the number of inliers in case 1 and the number of inliers in case 2 are the same.
- the estimating unit 154 compares the positions of the facial joints in the source information 60a with the positions of the facial joints in the target information 60b, and calculates the transformation parameters (rotation R, translation t, scale c).
- the estimation unit 154 randomly samples three facial joints from the facial joints included in the target information 60b, and computes the transformation parameters for the sampled facial joints.
- the three sampled face joints will be referred to as "three joints" as appropriate.
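The least-squares alignment that yields (R, t, c) is not spelled out in this excerpt. A standard closed-form solution for this kind of problem is Umeyama's method, sketched below under the assumption that Equation (1) is the usual sum-of-squared-residuals objective; this is an illustration, not necessarily the document's exact formulation:

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form least-squares (R, t, c) such that dst_i ~= c * R @ src_i + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(src.shape[1])
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[-1, -1] = -1.0              # keep R a proper rotation (det = +1)
    R = U @ S @ Vt
    c = np.trace(np.diag(D) @ S) / xs.var(axis=0).sum()
    t = mu_d - c * R @ mu_s
    return R, t, c

# Recover a known transform from four 3D "facial joints" (made-up data).
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about z
dst = 2.0 * src @ R_true.T + np.array([1., 2., 3.])
R, t, c = similarity_transform(src, dst)
```
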
- FIG. 16 is a diagram for explaining the processing of the estimating unit according to the first embodiment.
- face joints p1-1, p1-2, p1-3, and p1-4 are set in the source information 60a.
- facial joints p2-1, p2-2, p2-3, and p2-4 are set in the target information 60b.
- p2-1, p2-2, and p2-3 are sampled among face joints p2-1, p2-2, p2-3, and p2-4.
- the estimation unit 154 applies the transformation parameters to the facial joints p1-1, p1-2, p1-3, and p1-4 of the source information 60a and reprojects them onto the target information 60b. The facial joints p1-1, p1-2, p1-3, and p1-4 of the source information 60a are thereby reprojected to the positions pr1-1, pr1-2, pr1-3, and pr1-4 on the target information 60b, respectively.
- the estimation unit 154 compares the facial joints p2-1, p2-2, p2-3, and p2-4 on the target information 60b with the positions pr1-1, pr1-2, pr1-3, and pr1-4 to count the number of inliers. For example, if the distance between facial joint p2-1 and position pr1-1, the distance between facial joint p2-2 and position pr1-2, and the distance between facial joint p2-3 and position pr1-3 are each less than the threshold, while the distance between facial joint p2-4 and position pr1-4 is greater than or equal to the threshold, the number of inliers is "3".
- the distance between a corresponding pair of facial joint and reprojected position (for example, between the reprojected position pr1-1 of the right-ear facial joint p1-1 in the source information 60a and the right-ear joint position p2-1 in the target information 60b) is defined as the reprojection error ε.
- the estimation unit 154 calculates the outlier evaluation index E based on Equation (3).
- ⁇ max corresponds to the maximum value among multiple reprojection errors ⁇ .
- ⁇ indicates the average value of the remaining reprojection errors ⁇ after excluding ⁇ max among a plurality of reprojection errors ⁇ .
- the estimating unit 154 repeatedly executes the processing of sampling facial joints from the target information 60b, calculating the transformation parameters, and calculating the number of inliers and the outlier evaluation index E, while changing the combination of the three joints.
- the estimating unit 154 specifies, as the final transformation parameter, the transformation parameter when the number of inliers takes the maximum value among the combinations of the three joints.
- when a plurality of combinations of three joints give the same maximum number of inliers, the estimating unit 154 identifies the combination of three joints with the smaller outlier evaluation index E, and specifies the transformation parameters obtained from the identified three joints as the final transformation parameters.
- the final transformation parameter identified by the estimation unit 154 from among multiple transformation parameters based on the number of inliers and the outlier evaluation index E is simply referred to as a transformation parameter.
- the estimating unit 154 applies the transformation parameters to Equation (2), and calculates the positions y (including the position tp2 of the top of the head).
- the processing of the estimating unit 154 corresponds to the processing described using FIG.
- the estimating unit 154 estimates the face coordinates of the player H1 (the positions of the facial joints and the position of the top of the head), and generates 3D skeleton information by replacing the corresponding positional information with the estimated face coordinates.
- the estimation unit 154 outputs the generated 3D skeleton information to the abnormality detection unit 155 .
- the estimation unit 154 also outputs the 3D skeleton information before being replaced with the position information of the face coordinates to the abnormality detection unit 155 .
- the estimation unit 154 repeatedly executes the above process.
- the 3D skeleton information generated by replacing the head information in the 3D skeleton information estimated by the preprocessing unit 152 with the positional information of the face coordinates is hereinafter referred to as "post-replacement skeleton information" as appropriate.
- the 3D skeleton information before replacement is referred to as "pre-replacement skeleton information”.
- when the post-replacement skeleton information and the pre-replacement skeleton information are not distinguished from each other, they are simply referred to as 3D skeleton information.
- the abnormality detection unit 155 detects an abnormality in the parietal region of the 3D skeleton information generated by the estimation unit 154 .
- the types of abnormality detection include "bone length abnormality detection”, “reverse/side bending abnormality detection”, and “overbending abnormality detection”.
- the joint numbers shown in FIG. 37 will be used. In the following description, the joint numbered n will be referred to as joint n.
- FIG. 17 is a diagram for explaining the process of detecting bone length abnormality.
- the abnormality detection unit 155 calculates a vector b head directed from the joint 18 to the joint 3 among the joints included in the pre-replacement skeleton information.
- the anomaly detection unit 155 calculates the norm |b head | of the vector b head.
- let C1 be the result of bone length abnormality detection for the pre-replacement skeleton information. For example, if the norm |b head | falls within the range from Th 1 low to Th 1 high, the abnormality detection unit 155 sets C1 to 0 as being normal; otherwise, it sets C1 to 1 as being abnormal.
- the anomaly detection unit 155 similarly calculates the norm |b head | for the post-replacement skeleton information.
- let C'1 be the result of bone length abnormality detection for the post-replacement skeleton information. For example, if the norm |b head | falls within the range from Th 1 low to Th 1 high, the abnormality detection unit 155 sets C'1 to 0 as being normal; otherwise, it sets C'1 to 1 as being abnormal.
- Th 1 low to Th 1 high can be defined using the 3 ⁇ method. Using the average ⁇ and standard deviation ⁇ calculated from the head length data of multiple persons, Th 1 low can be defined as in Equation (4). Th 1 high can be defined as in Equation (5).
- the 3 ⁇ method is a method of determining abnormal when the target data are separated by three times or more of the standard deviation.
- normality is 99.74%, which applies to almost all human head lengths, so abnormalities such as extremely long or short heads can be detected.
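The 3σ check of Equations (4) and (5) can be sketched as follows; the mean and standard deviation values are illustrative assumptions, not values from the document:

```python
import numpy as np

def bone_length_flag(joint18, joint3, mu, sigma):
    """Return C1: 0 if |b_head| lies within [mu - 3*sigma, mu + 3*sigma], else 1."""
    b_head = joint3 - joint18
    norm = np.linalg.norm(b_head)
    return 0 if (mu - 3 * sigma) <= norm <= (mu + 3 * sigma) else 1

mu, sigma = 0.25, 0.01        # head-length statistics in metres (assumed)
c1_normal = bone_length_flag(np.zeros(3), np.array([0., 0., 0.24]), mu, sigma)
c1_abnormal = bone_length_flag(np.zeros(3), np.array([0., 0., 0.50]), mu, sigma)
```
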
- FIG. 18 is a diagram for explaining the process of detecting a reverse/sideways curve abnormality.
- the abnormality detection unit 155 calculates a vector b head directed from the joint 18 to the joint 3 among the joints included in the pre-replacement skeleton information.
- the abnormality detection unit 155 calculates a vector b neck from joint 2 to joint 18 among the joints included in the pre-replacement skeleton information.
- the abnormality detection unit 155 calculates a vector b shoulder directed from joint 4 to joint 7 among the joints included in the pre-replacement skeleton information.
- the anomaly detection unit 155 calculates a normal vector b neck × b head from b neck and b head . "×" indicates a cross product. The anomaly detection unit 155 calculates the formed angle ∠(b neck × b head , b shoulder ) from "b neck × b head " and "b shoulder ".
- let C2 be the result of the reverse/sideways bending abnormality detection for the pre-replacement skeleton information. For example, when the formed angle ∠(b neck × b head , b shoulder ) is less than or equal to Th 2 , the abnormality detection unit 155 sets C2 to 0 as being normal. The anomaly detection unit 155 sets C2 to 1 as an anomaly when the formed angle ∠(b neck × b head , b shoulder ) is greater than Th 2 .
- the anomaly detection unit 155 similarly calculates the formed angle ⁇ (b neck ⁇ b head , b shoulder ) for the post-replacement skeleton information.
- C'2 be the result of reverse/sideways bending abnormality detection for the post-replacement skeleton information.
- the abnormality detection unit 155 sets C′2 to 0 as being normal.
- the anomaly detection unit 155 sets C′2 to 1 as an anomaly when the formed angle ⁇ ( bneck ⁇ bhead , bshoulder ) is greater than Th2 .
- FIGS. 19 to 22 are diagrams for supplementary explanation of each vector used in reverse/sideways bending abnormality detection.
- the x coordinate system corresponds to the frontal direction of player H1.
- the y coordinate system corresponds to the left direction of player H1.
- the z coordinate system points in the same direction as the b neck .
- the relationship between b neck , b head and b shoulder shown in FIG. 18 becomes the relationship between b neck , b head and b shoulder shown in FIG. 19 .
- FIG. 20 shows an example of "normal".
- Each coordinate system shown in FIG. 20 is the same as the coordinate system described in FIG.
- the formed angle ⁇ (b neck ⁇ b head , b shoulder ) is 0 (deg).
- FIG. 21 shows an example of "reverse bending".
- Each coordinate system shown in FIG. 21 is the same as the coordinate system described in FIG.
- the formed angle ⁇ (b neck ⁇ b head , b shoulder ) is 180 (deg).
- FIG. 22 shows an example of "lateral bending".
- Each coordinate system shown in FIG. 22 is the same as the coordinate system described in FIG.
- the formed angle ⁇ (b neck ⁇ b head , b shoulder ) is 90 (deg).
- the angle ∠(b neck × b head , b shoulder ) to be compared with the threshold Th 2 takes 0 (deg) for the backward bend that should be regarded as normal, 180 (deg) for the reverse bend that should be regarded as abnormal, and 90 (deg) for the sideways bend. Therefore, if both the reverse bend and the sideways bend are to be treated as abnormal, Th 2 is set to 90 (deg).
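The angle test can be sketched with numpy. The coordinate directions follow FIGS. 19 to 22 (x toward the front, y toward the left, z along b neck); which tilt direction maps to 0 deg versus 180 deg depends on the joint-direction conventions, so the example vectors here are illustrative assumptions:

```python
import numpy as np

def formed_angle_deg(u, v):
    """Angle between two vectors, in degrees."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

TH2 = 90.0                                  # threshold from the text
b_neck = np.array([0.0, 0.0, 1.0])          # z axis along the neck
b_shoulder = np.array([0.0, 1.0, 0.0])      # shoulders along y (assumed)

b_head_a = np.array([0.3, 0.0, 1.0])        # head tilted one way
b_head_b = np.array([-0.3, 0.0, 1.0])       # head tilted the opposite way

# One tilt yields 0 deg and the opposite tilt 180 deg; Th2 = 90 separates them.
angle_a = formed_angle_deg(np.cross(b_neck, b_head_a), b_shoulder)
angle_b = formed_angle_deg(np.cross(b_neck, b_head_b), b_shoulder)
```
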
- FIG. 23 is a diagram for explaining the process of detecting an over-bending abnormality.
- a vector b head directed from joint 18 to joint 3 among the joints included in the pre-replacement skeleton information is calculated.
- the abnormality detection unit 155 calculates a vector b neck from joint 2 to joint 18 among the joints included in the pre-replacement skeleton information.
- the anomaly detection unit 155 calculates an angle ⁇ (b neck , b head ) formed from b neck and b head .
- let C3 be the result of the over-bending abnormality detection for the pre-replacement skeleton information. For example, when the formed angle ∠(b neck , b head ) is less than Th 3 , the abnormality detection unit 155 sets C3 to 0 as being normal. The anomaly detection unit 155 sets C3 to 1 as an anomaly when the formed angle ∠(b neck , b head ) is greater than Th 3 .
- Th 3 is set to 60 (deg).
- the anomaly detection unit 155 similarly calculates the angle ⁇ (b neck , b head ) formed by the post-replacement skeleton information.
- C'3 be the result of over-curving anomaly detection for the post-replacement skeleton information. For example, when the formed angle ⁇ (b neck , b head ) is less than or equal to Th 3 , the abnormality detection unit 155 sets C′3 to 0 as normal. The anomaly detection unit 155 sets C'3 to 1 as an anomaly when the formed angle ⁇ (b neck , b head ) is greater than Th3 .
- the abnormality detection unit 155 sets a value to C 1 (C′ 1 ) based on the condition of Equation (6) for bone length abnormality detection.
- the abnormality detection unit 155 sets a value to C 2 (C′ 2 ) based on the condition of Expression (7) for reverse/sideways abnormality detection.
- the abnormality detection unit 155 sets a value to C 3 (C′ 3 ) based on the condition of Expression (8) for excessive bending abnormality detection.
- the abnormality detection unit 155 calculates determination results D 1 , D 2 , and D 3 after executing “bone length abnormality detection”, “reverse/lateral bending abnormality detection”, and “overbending abnormality detection”.
- the abnormality detection unit 155 calculates the determination result D1 based on Equation (9).
- the abnormality detection unit 155 calculates the determination result D2 based on Equation (10).
- the abnormality detection unit 155 calculates the determination result D3 based on Equation (11).
- the anomaly detection unit 155 detects an anomaly of the top of the head with respect to the 3D skeleton information when one of the determination results D 1 to D 3 is set to “1”.
- the abnormality detection unit 155 outputs the 3D skeleton information to the correction unit 156 when the abnormality of the top of the head is detected.
- when none of the determination results D 1 to D 3 is set to "1", the abnormality detection unit 155 determines that there is no abnormality in the parietal region of the 3D skeleton information. In this case, the abnormality detection unit 155 associates the frame number with the 3D skeleton information (post-replacement skeleton information) and registers them in the skeleton recognition result table 142.
- the anomaly detection unit 155 repeats the above process each time it acquires 3D skeleton information from the estimation unit 154 .
- when the correction unit 156 acquires, from the abnormality detection unit 155, 3D skeleton information in which an abnormality of the parietal region has been detected, the correction unit 156 corrects the acquired 3D skeleton information.
- description will be made using post-replacement skeleton information as 3D skeleton information.
- the corrections performed by the correcting unit 156 include “bone length correction”, “reverse/lateral bending correction”, and “overbending correction”.
- FIG. 24 is a diagram for explaining bone length correction. As shown in FIG. 24, the correction unit 156 performs processing in order of step S20, step S21, and step S22.
- Step S20 will be described.
- the correction unit 156 calculates a vector b head directed from the joint 18 to the joint 3 among the joints included in the post-replacement skeleton information.
- Step S22 will be described.
- using the joint 18 as a reference, the correction unit 156 outputs, as the corrected parietal region, the point extended in the direction of the unit vector n head by the average μ of the bone lengths calculated from past image frames (the position of the top of the head in the post-replacement skeleton information is updated). Since μ is in the normal range, the bone length becomes normal.
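Steps S20 to S22 can be sketched as follows; the average bone length μ below is an assumed illustrative value:

```python
import numpy as np

def correct_bone_length(joint18, joint3, mu):
    """Move the top of the head to joint18 + mu * unit(b_head)."""
    b_head = joint3 - joint18                  # step S20
    n_head = b_head / np.linalg.norm(b_head)   # unit vector of b_head
    return joint18 + mu * n_head               # step S22: corrected parietal point

joint18 = np.array([0.0, 0.0, 1.50])
joint3 = np.array([0.0, 0.0, 2.50])            # abnormally long head bone (1.0 m)
corrected = correct_bone_length(joint18, joint3, mu=0.25)
```
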
- FIGS. 25A and 25B are diagrams for explaining the reverse/sideways bending correction.
- the correction unit 156 performs processing in order of step S30, step S31, and step S32.
- Step S30 will be described.
- the correction unit 156 calculates a vector b neck from the joint 2 to the joint 18 among the joints included in the post-replacement skeleton information.
- Step S31 will be described.
- Step S32 will be described.
- the correction unit 156 extends the standard bone length μ in the direction of the unit vector n neck , and outputs the corrected result, which falls within the threshold, as the parietal region (the position of the top of the head in the post-replacement skeleton information is updated). Since the head extends in the same direction as the neck, the reverse and sideways bending anomalies are corrected.
- FIG. 26 is a diagram for explaining excessive bending correction. As shown in FIG. 26, the correction unit 156 performs processing in order of step S40, step S41, and step S42.
- Step S40 will be described.
- the correction unit 156 calculates a vector b head directed from the joint 18 to the joint 3 among the joints included in the post-replacement skeleton information.
- the correction unit 156 calculates a vector b neck from the joint 2 to the joint 18 among the joints included in the post-replacement skeleton information.
- the correction unit 156 calculates a vector b shoulder directed from joint 4 to joint 7 among the joints included in the post-replacement skeleton information.
- Step S41 will be explained.
- the correction unit 156 calculates the normal vector b neck ⁇ b head from the vector b neck and the vector b head .
- Step S42 will be explained.
- the normal vector b neck ⁇ b head is a vector extending from the front to the back.
- the correction unit 156 rotates the vector b head about the normal vector b neck ⁇ b head by the residual "Th 3 -angle ⁇ (b neck , b head )" (deg) from the threshold Th 3 .
- the result corrected so that the angle falls within the threshold is output as the parietal region (the position of the top of the head in the post-replacement skeleton information is updated). Since the angle is within the threshold, the over-bending anomaly is corrected.
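The rotation in steps S40 to S42 can be sketched with Rodrigues' rotation formula: rotating b head about the normal b neck × b head by the (negative) residual Th 3 − θ brings the neck-head angle back to Th 3. This is a sketch under the stated geometry, not the document's implementation:

```python
import numpy as np

def rotate_about(v, axis, angle_rad):
    """Rodrigues rotation of v about a (normalized) axis."""
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(angle_rad)
            + np.cross(axis, v) * np.sin(angle_rad)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle_rad)))

def angle_deg(u, v):
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def correct_overbend(b_neck, b_head, th3_deg=60.0):
    theta = angle_deg(b_neck, b_head)
    if theta <= th3_deg:
        return b_head                           # already within the threshold
    axis = np.cross(b_neck, b_head)             # step S41: normal vector
    residual = np.radians(th3_deg - theta)      # negative: rotate back toward b_neck
    return rotate_about(b_head, axis, residual) # step S42

b_neck = np.array([0.0, 0.0, 1.0])
theta0 = np.radians(80.0)                       # over-bent: 80 deg > Th3 = 60 deg
b_head = np.array([np.sin(theta0), 0.0, np.cos(theta0)])
b_head_fixed = correct_overbend(b_neck, b_head)
```
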
- the correction unit 156 executes "bone length correction”, “reverse/side bending correction”, and “over bending correction” to correct the 3D skeleton information.
- the correction unit 156 associates the frame number with the corrected 3D skeleton information and registers them in the skeleton recognition result table 142 .
- the technique recognition unit 157 acquires the 3D skeleton information from the skeleton recognition result table 142 in order of frame number, and identifies the time series change of each joint coordinate based on the continuous 3D skeleton information.
- the technique recognition unit 157 identifies the type of technique by comparing the chronological change in each joint position with the technique recognition table 145 . Also, the technique recognition unit 157 compares the combination of technique types with the technique recognition table 143 to calculate the performance score of the competitor H1.
- the performance score of the player H1 calculated by the technique recognition unit 157 also includes scores for performances in which the time-series movement of the top of the head is evaluated, such as jumps on the balance beam and parts of the floor exercise.
- the technique recognition unit 157 generates screen information based on the score of the performance and the 3D skeleton information from the start to the end of the performance.
- the technique recognition unit 157 outputs the generated screen information to the display unit 130 for display.
- FIG. 27 is a flow chart showing the processing procedure of the learning device according to the first embodiment.
- the acquisition unit 55a of the learning device 50 acquires learning data 54a and registers it in the storage unit 54 (step S101).
- the learning unit 55b of the learning device 50 executes machine learning corresponding to the facial joint estimation model 54b based on the learning data 54a (step S102).
- the output unit 55c of the learning device 50 transmits the facial joint estimation model to the information processing device 100 (step S103).
- FIG. 28 is a flowchart illustrating the processing procedure of the information processing apparatus according to the first embodiment.
- the acquisition unit 151 of the information processing device 100 acquires the facial joint estimation model 54b from the learning device 50 and registers it in the storage unit 140 (step S201).
- the acquisition unit 151 receives time-series image frames from the camera and registers them in the measurement table 141 (step S202).
- the preprocessing unit 152 of the information processing device 100 estimates and generates 3D skeleton information based on the multi-viewpoint image frames in the measurement table 141 (step S203).
- the target information generation unit 153 of the information processing apparatus 100 inputs the image frame to the facial joint estimation model 54b and generates target information (step S204).
- the estimation unit 154 of the information processing device 100 executes transformation parameter estimation processing (step S205).
- the estimation unit 154 applies the transformation parameters to the source information 60a to estimate the top of the head (step S206).
- the estimation unit 154 replaces the information of the parietal region of the 3D skeleton information with the information of the estimated parietal region (step S207).
- the anomaly detection unit 155 of the information processing device 100 determines whether or not an anomaly of the parietal region is detected (step S208). If the abnormality detection unit 155 does not detect an abnormality in the parietal region (step S208, No), the abnormality detection unit 155 registers the post-replacement skeleton information in the skeleton recognition result table 142 (step S209), and proceeds to step S212.
- step S208 when the abnormality detection unit 155 detects an abnormality in the parietal area (step S208, Yes), the process proceeds to step S210.
- the correction unit 156 of the information processing apparatus 100 corrects the post-replacement skeleton information (step S210).
- the correction unit 156 registers the corrected post-replacement skeleton information in the skeleton recognition result table 142 (step S211), and proceeds to step S212.
- the technique recognition unit 157 of the information processing device 100 reads the time-series 3D skeleton information from the skeleton recognition result table 142, and executes technique recognition based on the technique recognition table 143 (step S212).
- FIGS. 29 and 30 are flowcharts showing the processing procedure of the transformation parameter estimation process.
- the estimation unit 154 of the information processing device 100 sets initial values for the maximum inlier number and the reference evaluation index (step S301). For example, the estimation unit 154 sets the maximum inlier number to “0” and the reference evaluation index to “ ⁇ (large value)”.
- the estimation unit 154 acquires target information and source information (step S302).
- the estimation unit 154 samples three joints from the target information (step S303).
- the estimating unit 154 calculates the transformation parameters (R, t, c) that minimize the difference e2 between the target information and the source information based on the equation (1) (step S304).
- the estimating unit 154 applies the transformation parameters to the source information and reprojects to match the target information (step S305).
- the estimation unit 154 calculates a reprojection error ⁇ between the projection result of the source information and the three joints of the target information (step S306).
- the estimation unit 154 sets the number of facial joints for which the reprojection error ⁇ is equal to or less than the threshold to the inlier number (step S307).
- the estimation unit 154 calculates an outlier evaluation index (step S308).
- the estimation unit 154 proceeds to step S309 in FIG. 30 .
- in step S309, if the number of inliers is greater than the maximum number of inliers (step S309, Yes), the estimation unit 154 proceeds to step S312. On the other hand, if the number of inliers is not greater than the maximum number of inliers (step S309, No), the estimation unit 154 proceeds to step S310.
- in step S310, when the number of inliers and the maximum number of inliers are the same (step S310, Yes), the estimation unit 154 proceeds to step S311. On the other hand, when they are not the same (step S310, No), the estimation unit 154 proceeds to step S314.
- in step S311, when the outlier evaluation index E is not smaller than the reference evaluation index (step S311, No), the estimation unit 154 proceeds to step S314. On the other hand, when the outlier evaluation index E is smaller than the reference evaluation index (step S311, Yes), the estimation unit 154 proceeds to step S312.
- the estimation unit 154 updates the maximum number of inliers to the number of inliers calculated this time, and updates the reference evaluation index with the value of the outlier evaluation index (step S312).
- the estimation unit 154 updates the transformation parameter corresponding to the maximum number of inliers (step S313).
- in step S314, if the number of sampling iterations has not reached the upper limit (step S314, No), the estimation unit 154 returns to step S303 in FIG. On the other hand, when the number of sampling iterations reaches the upper limit (step S314, Yes), the estimation unit 154 outputs the transformation parameters corresponding to the maximum number of inliers (step S315).
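The update rule of steps S309 to S313, with the initial values of step S301, can be sketched in plain Python; the candidate tuples below are made-up examples:

```python
def update_best(best, candidate):
    """best/candidate: (n_inliers, E, params). Prefer more inliers; on a tie
    in the inlier count, prefer the smaller outlier evaluation index E."""
    n, E, _ = candidate
    best_n, best_E, _ = best
    if n > best_n or (n == best_n and E < best_E):
        return candidate
    return best

best = (0, float("inf"), None)     # step S301: initial values
candidates = [(3, 0.5, "T1"), (3, 0.2, "T2"), (2, 0.1, "T3")]
for cand in candidates:
    best = update_best(best, cand)
```

Here "T2" wins: it ties "T1" on inliers but has the smaller index E, which is exactly the tie-break that plain RANSAC (FIG. 15) cannot perform.
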
- the information processing apparatus 100 calculates transformation parameters for aligning the positions of the facial joints of the source information 60a with the positions of the facial joints of the target information 60b.
- the information processing apparatus 100 calculates the position of the top of the head of the player by applying the calculated conversion parameter to the top of the head of the source information 60a. Since the relationship between the face joint and the parietal region is a rigid body relationship, the position of the parietal region of the player can be estimated using this relationship, thereby improving the estimation accuracy.
- the accuracy of estimating the top of the head is improved compared to the conventional technology. Since the information processing apparatus 100 improves the estimation accuracy of the parietal region, it is possible to appropriately evaluate the success or failure of the performance even when evaluating the performance of the contestant using the parietal region. Competitor performances using the crown of the head include hops on the balance beam and some floor exercises.
- the information processing apparatus 100 identifies the transformation parameters based on the number of inliers and the outlier evaluation index E. Therefore, even when there are a plurality of transformation parameters with the same number of inliers, the outlier evaluation index E can be used to select the optimum transformation parameters.
- FIG. 31 is a diagram for explaining the results of comparison of parietal region estimation errors.
- a graph G1 in FIG. 31 shows an error when the parietal region is estimated without executing RANSAC.
- Graph G2 shows the error when performing RANSAC and estimating the top of the head.
- a graph G3 indicates an error when the estimation unit 154 according to the first embodiment estimates the top of the head.
- the horizontal axes of the graphs G1 and G2 correspond to the maximum value of the error between the face joint of the target information and the GT (correct position of the face joint).
- the vertical axes of the graphs G1 and G2 indicate the error between the estimation result of the parietal region and the GT (correct position of the parietal region).
- in graph G1, the average error between the estimation result of the parietal region and the GT is "30 mm".
- in graph G2, the average error between the estimation result of the parietal region and the GT is "22 mm".
- in graph G3, the average error between the estimation result of the top of the head and the GT is "15 mm". That is, the information processing apparatus 100 according to the first embodiment can estimate the position of the top of the head with high accuracy compared to conventional techniques such as RANSAC. For example, the area ar1 of the graph G2 indicates that outlier removal has failed.
- The information processing apparatus 100 executes processing for correcting the position of the top of the head when it detects an abnormality of the top of the head in the 3D skeleton information. This makes it possible to further improve the estimation accuracy of the 3D skeleton information.
- In the above, the correction unit 156 corrects the post-replacement skeleton information, but it may instead correct the pre-replacement skeleton information and output the corrected result. The correction unit 156 may also output the pre-replacement skeleton information as it is as the post-correction skeleton information, without actually performing the correction.
- A system according to the second embodiment is similar to the system of the first embodiment.
- An information processing apparatus according to the second embodiment will now be described.
- The information processing apparatus according to the second embodiment uses source information that has a plurality of parietal region candidates.
- FIG. 32 is a diagram showing an example of the source information according to the second embodiment.
- This source information 60c includes a plurality of parietal joint candidates tp1-1, tp1-2, tp1-3, tp1-4, tp1-5, and tp1-6 in a 3D human body model M2.
- The source information 60c sets the positions of a plurality of facial joints in the same manner as the source information 60a described in the first embodiment.
- The information processing apparatus calculates the transformation parameters in the same manner as in the first embodiment.
- The information processing apparatus applies the calculated transformation parameters to the source information 60c, compares the values of the plurality of parietal joint candidates tp1-1 to tp1-6 in the z-axis direction, and identifies the parietal joint candidate with the smallest value in the z-axis direction as the parietal region.
- FIG. 33 is a diagram for explaining the process of identifying the top of the head.
- The example shown in FIG. 33 shows the result of applying the transformation parameters to the source information 60c.
- The information processing apparatus selects the parietal joint candidate tp1-2 as the top of the head because tp1-2 has the smallest value in the z-axis direction among the plurality of parietal joint candidates tp1-1 to tp1-6.
- In this way, the information processing apparatus applies the transformation parameters to the source information 60c, compares the values of the plurality of parietal joint candidates tp1-1 to tp1-6 in the z-axis direction, and specifies the position of the parietal joint candidate with the smallest value in the z-axis direction as the parietal position.
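This candidate selection can be sketched as follows, assuming the transformation parameters are a scale c, a rotation matrix R, and a translation t applied as c·R·x + t; the coordinate values below are made up for illustration and do not come from the 3D human body model M2:

```python
import numpy as np

def pick_parietal(candidates, R, t, c):
    """Apply the similarity transform to the parietal joint candidates
    and return the index and position of the candidate whose z value
    (vertical direction) is smallest."""
    transformed = c * candidates @ R.T + t   # (N, 3) transformed points
    idx = int(np.argmin(transformed[:, 2]))  # smallest z-axis value
    return idx, transformed[idx]

# Hypothetical candidate positions standing in for tp1-1 to tp1-6.
cands = np.array([[0.00, 0.00, 1.00],
                  [0.10, 0.00, 0.20],   # lowest after the (identity) transform
                  [0.00, 0.10, 0.80]])
idx, top = pick_parietal(cands, np.eye(3), np.zeros(3), 1.0)
```

With the identity transform used for the sketch, the second candidate has the smallest z value and would be identified as the parietal region.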
- FIG. 34 is a functional block diagram showing the configuration of the information processing apparatus according to the second embodiment.
- This information processing apparatus 200 has a communication unit 110, an input unit 120, a display unit 130, a storage unit 240, and a control unit 250.
- The descriptions of the communication unit 110, the input unit 120, and the display unit 130 are the same as those given for the first embodiment.
- the storage unit 240 has a facial joint estimation model 54b, source information 60c, a measurement table 141, a skeleton recognition result table 142, and a technique recognition table 143.
- the storage unit 240 corresponds to semiconductor memory elements such as RAM and flash memory, and storage devices such as HDD.
- The descriptions of the facial joint estimation model 54b, the measurement table 141, the skeleton recognition result table 142, and the technique recognition table 143 are the same as those given for the first embodiment.
- The source information 60c is information in which the positions of a plurality of facial joints and the positions of a plurality of parietal joint candidates are set, as described with reference to FIG. 32.
- the control unit 250 has an acquisition unit 151, a preprocessing unit 152, a target information generation unit 153, an estimation unit 254, an anomaly detection unit 155, a correction unit 156, and a technique recognition unit 157.
- the control unit 250 corresponds to the CPU and the like.
- The acquisition unit 151, the preprocessing unit 152, the target information generation unit 153, the abnormality detection unit 155, the correction unit 156, and the technique recognition unit 157 are the same as the corresponding units described in the first embodiment.
- The estimation unit 254 estimates the position of the top of the head of the player H1 based on the source information 60c and the target information 60b (target information unique to the image frame).
- The estimation unit 254 compares the positions of the facial joints in the source information 60c with the positions of the facial joints (three joints) in the target information 60b, and calculates the transformation parameters (rotation R, translation t, scale c) such that the difference e2 in the above equation (1) is minimized. The processing by which the estimation unit 254 calculates the transformation parameters is the same as that of the estimation unit 154 of the first embodiment.
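This least-squares fit of (R, t, c) between two corresponding point sets is commonly solved in closed form with an Umeyama-style SVD solution; the sketch below makes that assumption and is not necessarily the exact procedure behind equation (1). Six synthetic points are used for a well-conditioned check, whereas the embodiment matches three facial joints:

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form estimate of scale c, rotation R, translation t
    minimizing sum_i || dst_i - (c * R @ src_i + t) ||^2
    (Umeyama-style SVD solution). src, dst: (N, 3) arrays of
    corresponding joint positions."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)              # cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # avoid reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_s = (src_c ** 2).sum() / len(src)         # variance of src
    c = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - c * R @ mu_s
    return c, R, t

# Hypothetical check: recover a known transform from synthetic joints.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0], [1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
dst = 1.5 * src @ R_true.T + np.array([0.2, -0.1, 0.4])
c, R, t = similarity_transform(src, dst)
```

The determinant check on U and Vt forces R to be a proper rotation rather than a reflection, which matters when fitting near-planar joint configurations.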
- The estimation unit 254 applies the transformation parameters to the source information 60c as described with reference to FIG. 33.
- The estimation unit 254 compares the values of the plurality of parietal joint candidates tp1-1 to tp1-6 in the z-axis direction and identifies the position of the parietal joint candidate with the smallest value in the z-axis direction as the position of the parietal region.
- After estimating the face coordinates of the player H1 (the positions of the facial joints and the position of the top of the head), the estimation unit 254 generates 3D skeleton information by replacing the positional information of the face coordinates.
- The estimation unit 254 outputs the generated 3D skeleton information to the abnormality detection unit 155.
- The estimation unit 254 also outputs the 3D skeleton information before the replacement with the positional information of the face coordinates to the abnormality detection unit 155.
- FIG. 35 is a flow chart showing the processing procedure of the information processing apparatus according to the second embodiment.
- the acquisition unit 151 of the information processing device 200 acquires the facial joint estimation model 54b from the learning device 50 and registers it in the storage unit 240 (step S401).
- the acquisition unit 151 receives time-series image frames from the camera and registers them in the measurement table 141 (step S402).
- the preprocessing unit 152 of the information processing device 200 estimates and generates 3D skeleton information based on the multi-viewpoint image frames in the measurement table 141 (step S403).
- the target information generation unit 153 of the information processing device 200 inputs the image frame to the facial joint estimation model 54b and generates target information (step S404).
- the estimation unit 254 of the information processing device 200 executes transformation parameter estimation processing (step S405).
- The estimation unit 254 applies the transformation parameters to the source information 60c and estimates the parietal region from the multiple parietal joint candidates (step S406).
- the estimation unit 254 replaces the information of the parietal region of the 3D skeleton information with the information of the estimated parietal region (step S407).
- the anomaly detection unit 155 of the information processing device 200 determines whether or not an anomaly of the parietal region has been detected (step S408). If the abnormality detection unit 155 does not detect an abnormality in the parietal region (step S408, No), it registers the post-replacement skeleton information in the skeleton recognition result table 142 (step S409), and proceeds to step S412.
- When the abnormality detection unit 155 detects an abnormality in the parietal region (step S408, Yes), the process proceeds to step S410.
- the correction unit 156 of the information processing device 200 corrects the post-replacement skeleton information (step S410).
- the correction unit 156 registers the corrected post-replacement skeleton information in the skeleton recognition result table 142 (step S411), and proceeds to step S412.
- the technique recognition unit 157 of the information processing device 200 reads the time-series 3D skeleton information from the skeleton recognition result table 142, and executes technique recognition based on the technique recognition table 143 (step S412).
- the transformation parameter estimation process shown in step S405 of FIG. 35 corresponds to the transformation parameter estimation process shown in FIGS. 29 and 30 of the first embodiment.
- The information processing apparatus 200 applies the transformation parameters to the source information 60c, compares the values of the plurality of parietal joint candidates in the z-axis direction, and identifies the candidate with the smallest value in the z-axis direction as the top of the head. As a result, the position of the top of the head can be selected more appropriately when evaluating performances in which the top of the head is directed downward, such as the loop jump.
- FIG. 36 is a diagram illustrating an example of a hardware configuration of a computer that implements functions similar to those of the information processing apparatus.
- the computer 300 has a CPU 301 that executes various arithmetic processes, an input device 302 that receives data input from the user, and a display 303 .
- the computer 300 also has a communication device 304 that receives distance image data from the camera 30, and an interface device 305 that connects to various devices.
- The computer 300 has a RAM 306 that temporarily stores various information, and a hard disk device 307. The devices 301 to 307 are connected to a bus 308.
- the hard disk device 307 has an acquisition program 307a, a preprocessing program 307b, a target information generation program 307c, an estimation program 307d, an anomaly detection program 307e, a correction program 307f, and a technique recognition program 307g.
- The CPU 301 reads the acquisition program 307a, the preprocessing program 307b, the target information generation program 307c, the estimation program 307d, the abnormality detection program 307e, the correction program 307f, and the technique recognition program 307g, and loads them into the RAM 306.
- The acquisition program 307a functions as an acquisition process 306a.
- The preprocessing program 307b functions as a preprocessing process 306b.
- The target information generation program 307c functions as a target information generation process 306c.
- The estimation program 307d functions as an estimation process 306d.
- The anomaly detection program 307e functions as an anomaly detection process 306e.
- The correction program 307f functions as a correction process 306f.
- The technique recognition program 307g functions as a technique recognition process 306g.
- The processing of the acquisition process 306a corresponds to the processing of the acquisition unit 151.
- The processing of the preprocessing process 306b corresponds to the processing of the preprocessing unit 152.
- The processing of the target information generation process 306c corresponds to the processing of the target information generation unit 153.
- The processing of the estimation process 306d corresponds to the processing of the estimation units 154 and 254.
- The processing of the anomaly detection process 306e corresponds to the processing of the anomaly detection unit 155.
- The processing of the correction process 306f corresponds to the processing of the correction unit 156.
- The processing of the technique recognition process 306g corresponds to the processing of the technique recognition unit 157.
- Each program does not necessarily have to be stored in the hard disk device 307 from the beginning.
- For example, each program may be stored in a "portable physical medium" such as a flexible disk (FD), CD-ROM, DVD, magneto-optical disk, or IC card inserted into the computer 300.
- The computer 300 may then read and execute each of the programs 307a to 307g.
Abstract
Description
110 Communication unit
120 Input unit
130 Display unit
140, 240 Storage unit
141 Measurement table
142 Skeleton recognition result table
143 Technique recognition table
150, 250 Control unit
151 Acquisition unit
152 Preprocessing unit
153 Target information generation unit
154, 254 Estimation unit
155 Abnormality detection unit
156 Correction unit
157 Technique recognition unit
Claims (18)
1. An estimation program causing a computer to execute a process comprising: identifying positions of a plurality of joints included in a face of a competitor by inputting, to a machine learning model, an image in which a head of the competitor is in a predetermined state; and estimating a position of a top of the head of the competitor by using each of the positions of the plurality of joints.
2. The estimation program according to claim 1, causing the computer to further execute a process of estimating, based on definition information that defines positions of a plurality of joints included in a face of a person and a top of the head of the person, and on recognition information that indicates the positions of the plurality of joints included in the face of the competitor, parameters that align the positions of the plurality of joints in the definition information with the positions of the plurality of joints in the recognition information, wherein the process of estimating the position of the top of the head estimates the position of the top of the head of the competitor based on the parameters and coordinates of the top of the head in the definition information.
3. The estimation program according to claim 1, wherein the image input to the machine learning model is any one of: an image in which a color of a background is similar to a color of the competitor's hair; an image in which the competitor's hair is disheveled; and an image in which the competitor's head is hidden.
4. The estimation program according to claim 1, causing the computer to further execute a process of evaluating a performance on a balance beam or in a floor exercise based on the position of the top of the head.
5. The estimation program according to claim 1, causing the computer to further execute a process of determining whether the position of the top of the head of the competitor estimated by the estimating process is abnormal, and correcting the position of the top of the head of the competitor when the position is abnormal.
6. The estimation program according to claim 2, wherein the definition information has a plurality of candidates for the top of the head, and the process of estimating the position of the top of the head estimates, as the position of the top of the head of the competitor, a position of a candidate having a smallest value in a vertical direction among the plurality of candidates when the parameters are applied to the definition information.
7. An estimation method in which a computer executes a process comprising: identifying positions of a plurality of joints included in a face of a competitor by inputting, to a machine learning model, an image in which a head of the competitor is in a predetermined state; and estimating a position of a top of the head of the competitor by using each of the positions of the plurality of joints.
8. The estimation method according to claim 7, wherein the computer further executes a process of estimating, based on definition information that defines positions of a plurality of joints included in a face of a person and a top of the head of the person, and on recognition information that indicates the positions of the plurality of joints included in the face of the competitor, parameters that align the positions of the plurality of joints in the definition information with the positions of the plurality of joints in the recognition information, and the process of estimating the position of the top of the head estimates the position of the top of the head of the competitor based on the parameters and coordinates of the top of the head in the definition information.
9. The estimation method according to claim 7, wherein the image input to the machine learning model is any one of: an image in which a color of a background is similar to a color of the competitor's hair; an image in which the competitor's hair is disheveled; and an image in which the competitor's head is hidden.
10. The estimation method according to claim 7, wherein the computer further executes a process of evaluating a performance on a balance beam or in a floor exercise based on the position of the top of the head.
11. The estimation method according to claim 7, wherein the computer further executes a process of determining whether the position of the top of the head of the competitor estimated by the estimating process is abnormal, and correcting the position of the top of the head of the competitor when the position is abnormal.
12. The estimation method according to claim 8, wherein the definition information has a plurality of candidates for the top of the head, and the process of estimating the position of the top of the head estimates, as the position of the top of the head of the competitor, a position of a candidate having a smallest value in a vertical direction among the plurality of candidates when the parameters are applied to the definition information.
13. An information processing apparatus comprising: a generation unit that identifies positions of a plurality of joints included in a face of a competitor by inputting, to a machine learning model, an image in which a head of the competitor is in a predetermined state; and an estimation unit that estimates a position of a top of the head of the competitor by using each of the positions of the plurality of joints.
14. The information processing apparatus according to claim 13, wherein the estimation unit estimates, based on definition information that defines positions of a plurality of joints included in a face of a person and a top of the head of the person, and on recognition information that indicates the positions of the plurality of joints included in the face of the competitor, parameters that align the positions of the plurality of joints in the definition information with the positions of the plurality of joints in the recognition information, and estimates the position of the top of the head of the competitor based on the parameters and coordinates of the top of the head in the definition information.
15. The information processing apparatus according to claim 13, wherein the image input to the machine learning model is any one of: an image in which a color of a background is similar to a color of the competitor's hair; an image in which the competitor's hair is disheveled; and an image in which the competitor's head is hidden.
16. The information processing apparatus according to claim 13, further comprising a technique recognition unit that evaluates a performance on a balance beam or in a floor exercise based on the position of the top of the head.
17. The information processing apparatus according to claim 13, further comprising: an abnormality detection unit that determines whether the position of the top of the head of the competitor estimated by the estimation unit is abnormal; and a correction unit that corrects the position of the top of the head of the competitor when the position is abnormal.
18. The information processing apparatus according to claim 14, wherein the definition information has a plurality of candidates for the top of the head, and the estimation unit estimates, as the position of the top of the head of the competitor, a position of a candidate having a smallest value in a vertical direction among the plurality of candidates when the parameters are applied to the definition information.
Priority Applications (3)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202180103161.1A CN118103866A (zh) | 2021-10-13 | 2021-10-13 | Estimation program, estimation method, and information processing device |
| JP2023553833A JPWO2023062762A1 (ja) | 2021-10-13 | 2021-10-13 | |
| PCT/JP2021/037972 WO2023062762A1 (ja) | 2021-10-13 | 2021-10-13 | Estimation program, estimation method, and information processing apparatus |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2021/037972 WO2023062762A1 (ja) | 2021-10-13 | 2021-10-13 | Estimation program, estimation method, and information processing apparatus |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2023062762A1 (ja) | 2023-04-20 |

Family

ID=85987666

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2021/037972 WO2023062762A1 (ja) | Estimation program, estimation method, and information processing apparatus | 2021-10-13 | 2021-10-13 |

Country Status (3)

| Country | Link |
|---|---|
| JP | JPWO2023062762A1 (ja) |
| CN | CN118103866A (zh) |
| WO | WO2023062762A1 (ja) |
Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH0877334A * | 1994-09-09 | 1996-03-22 | Konica Corp | Method for automatically extracting feature points from a face image |
| JP2007252617A * | 2006-03-23 | 2007-10-04 | Kao Corp | Method for forming a hairstyle simulation image |
| JP2018057596A | 2016-10-05 | 2018-04-12 | Konica Minolta, Inc. | Joint position estimation device and joint position estimation program |
| JP2021026265A | 2019-07-31 | 2021-02-22 | Fujitsu Ltd. | Image processing device, image processing program, and image processing method |
2021
- 2021-10-13 WO PCT/JP2021/037972 patent/WO2023062762A1 active Application Filing
- 2021-10-13 JP JP2023553833A patent/JPWO2023062762A1 active Pending
- 2021-10-13 CN CN202180103161.1A patent/CN118103866A active Pending
Non-Patent Citations (1)

ITO, KENTARO; TSURUTA, SEIYA; CHOI, WOONG; SEKIGUCHI, HIROYUKI; HACHIMURA, KOZABURO: "Recognition of Dance Steps with KANSEI Information", Jinmonkon 2009 Proceedings, vol. 16, 11 December 2009, pages 147-154, XP009545542.
Also Published As

| Publication number | Publication date |
|---|---|
| JPWO2023062762A1 (ja) | 2023-04-20 |
| CN118103866A (zh) | 2024-05-28 |
Legal Events

- 121 (EP): The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 21960620; Country: EP; Kind code: A1.
- WWE: WIPO information: entry into national phase. Ref document number: 2023553833; Country: JP.
- WWE: WIPO information: entry into national phase. Ref document number: 2021960620; Country: EP.
- NENP: Non-entry into the national phase. Country: DE.
- ENP: Entry into the national phase. Ref document number: 2021960620; Country: EP; Effective date: 2024-05-13.
Ref document number: 2021960620 Country of ref document: EP Effective date: 20240513 |