WO2018207351A1 - Distance image processing device, distance image processing system, distance image processing method, and distance image processing program - Google Patents
Distance image processing device, distance image processing system, distance image processing method, and distance image processing program
- Publication number
- WO2018207351A1 (PCT/JP2017/018034)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- distance image
- human body
- distance
- target person
- learning
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Definitions
- the present invention relates to a distance image processing apparatus and the like.
- FIG. 14 is a diagram for explaining a conventional system that performs posture recognition.
- the conventional system acquires a distance image 7 of the subject 5 a using the distance sensor 6.
- the conventional system identifies the skeleton position 5b of the subject 5a by estimating the joint position based on the distance image 7, and estimates the posture of the subject 5a.
- FIG. 15 is a diagram for explaining an application example of the conventional system.
- the distance sensor 6 is used to acquire a distance image of the subject 5a and recognize the posture of the subject 5a, so that the avatar 5c in the game takes the same posture as the subject 5a.
- FIG. 16 is a diagram for explaining an example of a conventional technique for posture recognition.
- the prior art acquires one or more distance images including a human body (step S10).
- the distance image 1 includes foreground pixels 1a corresponding to the human body and other background pixels 1b.
- the conventional technique separates the background pixels 1b from the distance image 1 to obtain a distance image 1c including only the foreground pixels (step S11).
- the human body region of the distance image 1c is divided into a plurality of part labels bp1 to bp14 (step S12).
- the conventional technique proposes a plurality of human skeleton models having a plurality of three-dimensional skeleton positions based on the respective body part labels bp1 to bp14 (step S13).
- the conventional technique selects a skeleton model having the highest likelihood from a plurality of skeleton models, and recognizes the posture of the person based on the selected skeleton model (step S14).
- FIG. 17 is a flowchart showing a processing procedure for learning a conventional classifier.
- the conventional technique acquires motion capture data (step S20).
- the prior art generates a plurality of human body models having various postures by performing human body model retargeting based on the motion capture data (step S21).
- the conventional technology removes redundancy by removing similar human body model postures from each human body model and leaving only unique human body model postures (step S22).
- the conventional technology generates a part label image and a distance image based on the assumed position of the distance sensor based on the unique human body model posture (step S23).
- the conventional technology generates a discriminator by repeatedly learning the correspondence between the feature at each position of the distance image (and the features at peripheral positions) and the part label, based on pairs of part label images and distance images (step S24).
- Occlusion by an object is a state in which a part of the human body to be recognized is hidden by another object.
- FIG. 18 is a diagram illustrating an example of occlusion by an object.
- a part of the body of the subject 8a is hidden behind the horse 8b.
- in this state, normal part labels are not assigned, and accurate posture recognition cannot be performed.
- FIGS. 19 and 20 are diagrams for explaining the problems of the prior art.
- when a distance image including the subject 8a and the horse 8b is acquired, the background is removed from the distance image, and part labels are assigned, the part label recognition result 9A shown in FIG. 19 is obtained.
- in the recognition result 9A, the region 8c containing both the subject 8a and the horse 8b is treated as the region of the subject 8a, and human body part labels are assigned even to the horse 8b.
- alternatively, a distance image of only the fixed horse 8b can be captured in advance, in a state where the subject 8a is not present, and subtracted from the distance image captured while the subject 8a is actually exercising on the horse 8b. If the distance image of only the horse 8b is removed in this way, however, the distance image of the foot portions hidden behind the horse 8b cannot be detected, so only a distance image divided by the horse 8b is obtained.
- when part labels are assigned to the distance image obtained by removing the distance image of the horse 8b, the part label recognition result 9B shown in FIG. 20 is obtained.
- in the recognition result 9B, each of the divided regions B1 and B2 is recognized as a single subject region, and part labels are assigned accordingly.
- for example, a foot part label may be assigned to the region B1, while for the region B2 a part label other than a human foot (for example, a hand) may be assigned.
- an object of the present invention is to provide a distance image processing device, a distance image processing system, a distance image processing method, and a distance image processing program that can appropriately determine a part of a human body.
- the distance image processing apparatus includes a generation unit and a learning unit.
- the generation unit generates, based on a combined model obtained by combining a three-dimensional model of a human body and a three-dimensional model of an object, a plurality of learning images in which a distance image indicating the distance from a reference position to each position of the human body or each position of the object is associated with a part image identifying each part of the human body or the part of the object.
- the learning unit learns a discriminator in which a feature of the distance image is associated with a human body part or an object part based on a plurality of learning images.
- the present invention can appropriately determine the part of the human body.
- FIG. 1 is a diagram illustrating an example of a distance image processing system according to the present embodiment.
- FIG. 2 is a diagram illustrating an example of the configuration of the learning device.
- FIG. 3 is a diagram for explaining the capture camera.
- FIG. 4 is a diagram illustrating an example of object model data.
- FIG. 5 is a diagram illustrating an example of the data structure of the composite model table.
- FIG. 6 is a diagram illustrating an example of the data structure of the learning image table.
- FIG. 7 is a diagram for explaining the relationship between the part label image and the distance image.
- FIG. 8 is a diagram illustrating an example of the data structure of the discriminator data.
- FIG. 9 is a diagram illustrating an example of the configuration of the recognition apparatus.
- FIG. 10 is a flowchart illustrating the processing procedure of the learning device according to the present embodiment.
- FIG. 11 is a flowchart illustrating the processing procedure of the recognition apparatus according to the present embodiment.
- FIG. 12 is a diagram illustrating an example of a hardware configuration of a computer that implements the same function as the learning device.
- FIG. 13 is a diagram illustrating an example of a hardware configuration of a computer that implements the same function as the recognition device.
- FIG. 14 is a diagram for explaining a conventional system that performs posture recognition.
- FIG. 15 is a diagram for explaining an application example of the conventional system.
- FIG. 16 is a diagram for explaining an example of a conventional technique for posture recognition.
- FIG. 17 is a flowchart showing a processing procedure for learning a conventional classifier.
- FIG. 18 is a diagram illustrating an example of occlusion by an object.
- FIG. 19 is a diagram (1) for explaining the problems of the prior art.
- FIG. 20 is a diagram (2) for explaining the problems of the prior art.
- FIG. 1 is a diagram illustrating an example of a distance image processing system according to the present embodiment.
- the distance image processing system includes a learning device 100 and a recognition device 200.
- the learning device 100 is connected to the motion capture device 10.
- the recognition device 200 is connected to the distance sensor 20. Further, the learning device 100 and the recognition device 200 are connected to each other.
- the learning device 100 is a device that learns the discriminator data used when the recognition device 200 recognizes the posture of the target person.
- the recognition device 200 is a device that recognizes the posture of the target person using the classifier data learned by the learning device 100.
- the learning device 100 and the recognition device 200 are examples of a distance image processing device.
- FIG. 2 is a diagram illustrating an example of the configuration of the learning device. As shown in FIG. 2, the learning device 100 is connected to the motion capture device 10.
- the learning apparatus 100 includes an input unit 110, a display unit 120, a storage unit 130, and a control unit 140.
- the motion capture device 10 is connected to a plurality of capture cameras 10a.
- FIG. 3 is a diagram for explaining the capture cameras. As shown in FIG. 3, the capture cameras 10a are arranged around the subject 11. A marker 12 is attached to each joint position of the subject 11.
- the motion capture device 10 records the movement of the markers 12 of the subject 11 using each capture camera 10a and obtains three-dimensional joint positions from the markers 12.
- the motion capture device 10 generates motion capture data by sequentially recording the three-dimensional joint positions obtained from the position coordinates of each marker 12.
- the motion capture device 10 outputs motion capture data to the learning device 100.
- the input unit 110 is an input device for inputting various types of information to the learning device 100.
- the input unit 110 corresponds to a keyboard, a mouse, a touch panel, and the like.
- the display unit 120 is a display device that displays information output from the control unit 140.
- the display unit 120 corresponds to a liquid crystal display, a touch panel, or the like.
- the storage unit 130 includes motion capture data 130a, human body model data 130b, object model data 130c, a synthesized model table 130d, a learning image table 130e, and classifier data 130f.
- the storage unit 130 corresponds to a semiconductor memory device such as a RAM (Random Access Memory), a ROM (Read Only Memory), and a flash memory (Flash Memory), and a storage device such as an HDD (Hard Disk Drive).
- the motion capture data 130a is data that is generated by the motion capture device 10 and records the movement of the three-dimensional joint position of the person.
- the motion capture data 130a has information on joint positions for each frame.
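- as a concrete illustration, a minimal sketch of how such per-frame joint data could be represented follows; the type and field names are assumptions for illustration and are not specified by the patent:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]  # a 3D joint position (x, y, z)

@dataclass
class Frame:
    """One frame of motion capture data: a 3D position for each named joint."""
    timestamp: float
    joints: Dict[str, Vec3]  # e.g. {"head": (0.0, 1.7, 0.2), "l_wrist": (...)}

@dataclass
class MotionCaptureData:
    """Plays the role of the motion capture data 130a: joint positions per frame."""
    frames: List[Frame]
```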
- the human body model data 130b is data of a three-dimensional model of the human body.
- the human body model data 130b is information generated by combining a three-dimensional human body model with a skeleton based on each joint position of the person in the motion capture data 130a.
- the object model data 130c is a three-dimensional model of an object different from a person.
- FIG. 4 is a diagram illustrating an example of object model data.
- in this example, the object is a horse (a pommel horse), but in gymnastics the rings, the horizontal bar, the parallel bars, the vaulting horse, and the like are equally applicable, and the object is not limited to these.
- the synthesized model table 130d is a table having a plurality of synthesized model data obtained by synthesizing the human body model data 130b and the object model data 130c.
- FIG. 5 is a diagram illustrating an example of the data structure of the composite model table. As shown in FIG. 5, this synthetic model table 130d associates a synthetic model number with synthetic model data.
- the composite model number is a number for identifying the composite model data.
- the synthesized model data is data obtained as a result of synthesizing the human body model data 130b at a timing (frame) in a series of movements and the object model data 130c.
- the learning image table 130e is a table having a plurality of learning image data for generating the discriminator data 130f.
- FIG. 6 is a diagram illustrating an example of the data structure of the learning image table. As shown in FIG. 6, the learning image table 130e associates learning image numbers, part label image data, and distance image data.
- the learning image number is a number that uniquely identifies a set of the part label image data that becomes the learning image and the distance image data.
- the part label image data is information indicating each part and object of the combined model data (human body + object) with a unique part label.
- the distance image data is a distance image generated from the combined model data (human body + object). Corresponding part label image data and distance image data are generated from the same combined model data.
- FIG. 7 is a diagram for explaining the relationship between the part label image and the distance image.
- FIG. 7 shows a set of part label image data 131A and distance image data 131B corresponding to a certain learning image number.
- the distance image data 131B is distance image data indicating, for each pixel, a distance from a reference position such as a camera to each position of the synthesized model data.
- the part label image data 131A is information indicating each part and object of the person included in the distance image 131B with a unique part label. For example, based on a predetermined division policy, a person's area is divided into a plurality of parts, and a unique part label is assigned to the area corresponding to each part. For the object, a part label different from the part of the person is assigned to a region corresponding to the object.
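- as a minimal sketch, such a learning-image pair can be held as two pixel-aligned arrays; the resolution and label values below are assumptions for illustration:

```python
import numpy as np

H, W = 424, 512  # assumed depth-sensor resolution

# Distance image data 131B: one distance value (in metres) per pixel;
# 0.0 marks pixels where the combined model is not visible.
distance_image = np.zeros((H, W), dtype=np.float32)

# Part label image data 131A: one label per pixel, aligned with distance_image.
# Here labels 1..14 are assumed human body parts, 15 the object (horse),
# and 0 the background.
BACKGROUND_LABEL, HORSE_LABEL = 0, 15
part_label_image = np.zeros((H, W), dtype=np.uint8)
```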
- the discriminator data 130f constitutes a discriminator that associates each pixel of the distance image with a part label based on, for example, a feature amount around a position of the distance image data.
- FIG. 8 is a diagram illustrating an example of the data structure of the discriminator data.
- the discriminator data 130f includes a plurality of branch nodes f1-1, f2-1 to f2-n, and f3-1 to f3-n, and leaf nodes R1 to Rn.
- the branch nodes f1-1, f2-1 to f2-n, and f3-1 to f3-n are collectively referred to as branch nodes f.
- the leaf nodes R1 to Rn are collectively referred to as leaf nodes R.
- a branch node f is a node that designates one of its subordinate branch nodes f as the branch destination, based on the feature amount around a position in the distance image data.
- the branch nodes f3-1 to f3-n in the lowest layer designate one of the subordinate leaf nodes R as the branch destination, according to the feature amount at a certain position of the distance image data and the feature amounts around that position.
- the leaf node R is a node that stores data indicating a human body part or an object part.
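- a minimal sketch of classifying one pixel by traversing such branch and leaf nodes is shown below; the node layout and the depth-difference feature are assumptions for illustration, since the patent leaves the concrete feature amount open:

```python
from dataclasses import dataclass
from typing import Optional, Tuple
import numpy as np

@dataclass
class Node:
    # Branch-node fields (unused on leaf nodes)
    offset_u: Tuple[float, float] = (0.0, 0.0)  # pixel offsets probed by the split
    offset_v: Tuple[float, float] = (0.0, 0.0)
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    # Leaf-node field: the human body part or object part stored at a leaf node R
    part_label: Optional[int] = None

def depth_difference_feature(depth, x, y, u, v):
    """Depth difference between two offset probes around (x, y), scaled by the
    depth at (x, y) -- a common feature for depth-image classifiers, used here
    as an assumed stand-in for the patent's 'feature amount'. (x, y) is
    assumed to lie on the foreground, so depth[y, x] > 0."""
    d = depth[y, x]
    def probe(off):
        px, py = x + int(off[0] / d), y + int(off[1] / d)
        if 0 <= py < depth.shape[0] and 0 <= px < depth.shape[1] and depth[py, px] > 0:
            return depth[py, px]
        return 1e6  # large constant for background / out-of-image probes
    return probe(u) - probe(v)

def classify_pixel(root: Node, depth: np.ndarray, x: int, y: int) -> int:
    """Follow branch nodes f down to a leaf node R and return its part label."""
    node = root
    while node.part_label is None:
        f = depth_difference_feature(depth, x, y, node.offset_u, node.offset_v)
        node = node.left if f < node.threshold else node.right
    return node.part_label
```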
- the control unit 140 includes an acquisition unit 140a, a generation unit 140b, a learning unit 140c, and a notification unit 140d.
- the control unit 140 can be realized by a CPU (Central Processing Unit), an MPU (Micro Processing Unit), or the like.
- the control unit 140 can also be realized by hard wired logic such as ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array).
- the acquisition unit 140a is a processing unit that acquires the motion capture data 130a from the motion capture device 10.
- the acquisition unit 140a stores the acquired motion capture data 130a in the storage unit 130.
- the generation unit 140b is a processing unit that generates the learning image table 130e.
- the generation unit 140b executes processing for generating the human body model data 130b, processing for generating the composite model table 130d, and processing for generating the learning image table 130e.
- the generation unit 140b may newly generate the object model data 130c, or may use existing object model data as the object model data 130c.
- the generation unit 140b acquires information on the joint positions of the person from a series of movements of the joint positions included in the motion capture data 130a, and generates the skeleton information of the person by connecting the joint positions with a skeleton.
- the generation unit 140b generates a human body model corresponding to the skeleton information by combining parts of the human body model prepared in advance with the skeleton information. That is, the process performed by the generation unit 140b corresponds to the process of combining the motion capture data 130a and the human body model.
- for a frame of the series of motion capture data 130a, the generation unit 140b acquires a human body model from the human body model data 130b, and combines the acquired human body model with the object model of the object model data 130c to generate combined model data.
- the generation unit 140b generates a plurality of combined model data by repeating a process of combining a human body model corresponding to another frame and an object model.
- the generation unit 140b registers the synthesized model data in the synthesized model table 130d in association with the synthesized model number.
- the generation unit 140b may perform a process of removing redundancy when similar composite model data is included among a plurality of composite model data registered in the composite model table 130d. For example, the generation unit 140b determines that the combined model data in which the total value of the joint position differences in the combined model data is less than the threshold is similar combined model data. The generation unit 140b performs a process of leaving one composite model data out of similar composite model data and deleting other composite model data.
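- a short sketch of this redundancy removal, under the assumption that each combined model is represented by an (N, 3) array of joint positions and that the threshold is given in metres:

```python
import numpy as np

def is_similar(joints_a, joints_b, threshold=0.1):
    """Two combined models are judged similar when the total of their
    joint-position differences falls below a threshold (value assumed)."""
    diff = np.asarray(joints_a, float) - np.asarray(joints_b, float)
    return float(np.sum(np.linalg.norm(diff, axis=1))) < threshold

def remove_redundancy(models):
    """Keep one combined model out of each group of similar models and
    delete the others, as the generation unit 140b does."""
    kept = []
    for joints in models:
        if not any(is_similar(joints, k) for k in kept):
            kept.append(joints)
    return kept
```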
- the generation unit 140b refers to the synthesis model table 130d and acquires the synthesis model data having a certain synthesis model number.
- the generation unit 140b generates part label image data and distance image data based on the acquired combined model data.
- the generation unit 140b registers the part label image data and the distance image data in the learning image table 130e in association with the learning image number.
- the generation unit 140b previously arranges a part label for identifying a part of the human body in the synthesized model data.
- the generation unit 140b sets a virtual reference position in three dimensions, and generates distance image data when the synthesized model data is viewed from this reference position.
- the generation unit 140b generates part label image data by classifying the region of the composite model data when the composite model data is viewed from the reference position into a plurality of part labels.
- the part label image data and the distance image data generated from a certain composite model data correspond to the part label image data 131A and the distance image data 131B described with reference to FIG.
- the generating unit 140b generates the part label image data and the distance image data by repeatedly executing the above processing for the other combined model data stored in the combined model table 130d, and stores it in the learning image table 130e.
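- a minimal sketch of generating one such pair by projecting a labeled combined model from the virtual reference position follows; a point-based z-buffer stands in for proper mesh rasterization, and the camera intrinsics (fx, fy, cx, cy) are assumptions:

```python
import numpy as np

def render_learning_pair(points, labels, fx, fy, cx, cy, h, w):
    """points: (N, 3) coordinates of the combined model (human body + object)
    in the reference-position camera frame, with z > 0; labels: (N,) part
    label per point. Returns (distance image, part label image)."""
    depth = np.full((h, w), np.inf, dtype=np.float32)
    part = np.zeros((h, w), dtype=np.uint8)  # 0 = background
    for (X, Y, Z), lab in zip(points, labels):
        u, v = int(round(fx * X / Z + cx)), int(round(fy * Y / Z + cy))
        if 0 <= v < h and 0 <= u < w and Z < depth[v, u]:
            depth[v, u] = Z   # the nearest surface wins, so occlusion of the
            part[v, u] = lab  # human body by the object appears in the data
    depth[np.isinf(depth)] = 0.0
    return depth, part
```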
- the learning unit 140c is a processing unit that repeatedly executes machine learning based on a set of a plurality of part label image data and distance image data included in the learning image table 130e to generate discriminator data 130f.
- for example, the learning unit 140c specifies the feature amount around a certain position (x1, y1) in the distance image data and the part label corresponding to that position (x1, y1).
- the feature amount around a certain position (x1, y1) may be, for example, the unevenness of the distance image data around the position (x1, y1), or some other feature amount.
- the part label corresponding to a certain position (x1, y1) is the part label assigned to the position (x1, y1) in the part label image data.
- similarly, for each different position (xn, yn) of the distance image data, the learning unit 140c specifies the pattern of the feature amount around that position and the corresponding part label.
- the learning unit 140c generates the discriminator data 130f by repeatedly machine-learning these patterns at the different positions.
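- as a hedged sketch of this step, the following samples (feature pattern, part label) pairs from one learning image and fits a random forest from scikit-learn in place of the patent's unspecified learner; it reuses the depth_difference_feature sketched above, and the names and parameter values are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sample_pairs(depth, part_label, offsets, n=2000, seed=0):
    """Pair the feature pattern around sampled positions (x, y) in the
    distance image with the part label at (x, y) in the part label image."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(part_label)  # restrict to labelled (foreground) pixels
    idx = rng.choice(len(xs), size=min(n, len(xs)), replace=False)
    X = [[depth_difference_feature(depth, xs[i], ys[i], u, v) for u, v in offsets]
         for i in idx]
    y = [int(part_label[ys[i], xs[i]]) for i in idx]
    return np.asarray(X), np.asarray(y)

# Pool samples over every (part label image, distance image) pair in the
# learning image table 130e, then fit once; the fitted forest plays the
# role of the discriminator data 130f.
# X_all, y_all = ...
# forest = RandomForestClassifier(n_estimators=3, max_depth=20).fit(X_all, y_all)
```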
- the notification unit 140d is a processing unit that transmits the discriminator data 130f generated by the learning unit 140c to the recognition device 200.
- FIG. 9 is a diagram illustrating an example of the configuration of the recognition apparatus. As shown in FIG. 9, the recognition device 200 is connected to the distance sensor 20.
- the recognition apparatus 200 includes an input unit 210, a display unit 220, a storage unit 230, and a control unit 240.
- the distance sensor 20 measures a distance image of the target person and a predetermined object (such as a horse), and outputs the measured distance image data to the recognition device 200 during the posture recognition process.
- in the following, the distance image data acquired from the distance sensor 20 is referred to as recognition distance image data 230a.
- the description assumes that the predetermined object is a horse.
- the input unit 210 is an input device for inputting various information to the recognition device 200.
- the input unit 210 corresponds to a keyboard, a mouse, a touch panel, or the like.
- the display unit 220 is a display device that displays information output from the control unit 240.
- the display unit 220 corresponds to a liquid crystal display, a touch panel, or the like.
- the storage unit 230 includes recognition distance image data 230a, background distance image data 230b, and classifier data 130f.
- the storage unit 230 corresponds to a semiconductor memory element such as a RAM, a ROM, or a flash memory, or a storage device such as an HDD.
- the recognition distance image data 230a is distance image data measured by the distance sensor 20 at the time of recognition.
- the recognition distance image data 230a is data indicating the distance from the distance sensor 20 to the subject and the object for each position (pixel).
- the background distance image data 230b is distance image data of only the background photographed by the distance sensor 20 in a state where the target person and the predetermined object do not exist.
- the acquisition unit 240a acquires the background distance image data 230b from the distance sensor 20 and stores it in the storage unit 230 in advance.
- the discriminator data 130f is discriminator data generated by the learning device 100.
- the data structure of the discriminator data 130f corresponds to the data structure described with reference to FIG.
- the control unit 240 includes an acquisition unit 240a, a removal unit 240b, a determination unit 240c, and a recognition unit 240d.
- the control unit 240 can be realized by a CPU, MPU, or the like.
- the control unit 240 can also be realized by a hard wired logic such as ASIC or FPGA.
- the acquisition unit 240a acquires the recognition distance image data 230a from the distance sensor 20 and stores it in the storage unit 230.
- the acquisition unit 240a acquires the discriminator data 130f from the learning device 100, and stores the acquired discriminator data 130f in the storage unit 230.
- the removal unit 240b is a processing unit that removes background information from the recognition distance image data 230a by taking a difference between the recognition distance image data 230a and the background distance image data 230b.
- the removal unit 240b outputs the distance image data from which the background information is removed from the recognition distance image data 230a to the determination unit 240c.
- the distance image data obtained by removing background information from the recognized distance image data 230a is simply referred to as “distance image data”.
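- a minimal sketch of this background subtraction, assuming metre-valued depth arrays and a small tolerance for sensor noise:

```python
import numpy as np

def remove_background(recognition_depth, background_depth, tol=0.05):
    """Subtract the pre-captured background distance image data 230b from the
    recognition distance image data 230a; pixels whose depth matches the
    background within `tol` (assumed 5 cm) are cleared, leaving only the
    target person and the object."""
    foreground = recognition_depth.copy()
    matches_background = np.abs(recognition_depth - background_depth) < tol
    foreground[matches_background | (recognition_depth == 0)] = 0.0
    return foreground
```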
- the determination unit 240c is a processing unit that determines, for each position (pixel) of the distance image data, the corresponding part label, based on the distance image data acquired from the removal unit 240b and the discriminator data 130f. For example, the determination unit 240c compares the feature amounts around a position of the distance image data with each branch node f of the discriminator data 130f, traces the branch nodes f, and takes the part label indicated by the leaf node R that is reached as the part label of the determination result. The determination unit 240c determines the part labels corresponding to the whole distance image data by repeating this process for the other pixels.
- the part label corresponding to each position includes a part label that uniquely identifies a part of the human body and a part label that indicates an object (horse).
- the determination unit 240c outputs a determination result in which each position of the distance image data is associated with the part label to the recognition unit 240d.
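- reusing the classify_pixel traversal sketched in the learning device section, this per-pixel determination could look like the following (a sketch, not the patent's exact procedure):

```python
import numpy as np

def determine_part_labels(depth, root):
    """Apply the discriminator to every foreground pixel of the distance
    image data and return a per-pixel part label map (0 = background)."""
    labels = np.zeros(depth.shape, dtype=np.uint8)
    ys, xs = np.nonzero(depth > 0)
    for x, y in zip(xs, ys):
        labels[y, x] = classify_pixel(root, depth, x, y)
    return labels
```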
- the recognition unit 240d is a processing unit that recognizes the posture of the target person based on the determination result of the determination unit 240c. For example, the recognition unit 240d removes the part label of the object and proposes a plurality of human skeleton models having a plurality of three-dimensional positions based on the part label of the human body. The recognition unit 240d selects a skeleton model with the highest likelihood from a plurality of skeleton models, and recognizes the posture of the person based on the selected skeleton model.
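- in outline, with propose_skeletons and likelihood standing in for the proposal and scoring steps that the description leaves open, the recognition step looks like this (labels is a NumPy array of per-pixel part labels):

```python
def recognize_posture(depth, labels, object_label, propose_skeletons, likelihood):
    """Remove the object's part label, propose candidate skeleton models from
    the remaining human part labels, and keep the most likely candidate."""
    human = labels.copy()
    human[labels == object_label] = 0               # drop the object (horse)
    candidates = propose_skeletons(depth, human)    # candidate 3D joint sets
    return max(candidates, key=lambda s: likelihood(s, depth, human))
```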
- FIG. 10 is a flowchart illustrating the processing procedure of the learning device according to the present embodiment.
- the acquisition unit 140a of the learning device 100 acquires motion capture data 130a from the motion capture device 10 (step S101).
- the generation unit 140b of the learning device 100 generates the human body model data 130b (step S102a).
- the generation unit 140b generates object model data 130c (step S102b).
- the generation unit 140b may use previously generated object model data as the object model data 130c.
- the generation unit 140b generates composite model data obtained by combining a plurality of human body models and object models according to movement (step S103).
- the generation unit 140b removes redundancy from the combined model table 130d (step S104).
- the generation unit 140b registers the part label image data and the distance image data in the learning image table 130e based on the synthesized model data (step S105).
- the learning unit 140c of the learning device 100 refers to the learning image table 130e, performs machine learning on the relationship between the feature of the distance image data and the part label, and generates discriminator data 130f (step S106).
- the notification unit 140d of the learning device 100 notifies the recognition device 200 of the discriminator data 130f (step S107).
- FIG. 11 is a flowchart showing the processing procedure of the recognition apparatus according to the present embodiment. As illustrated in FIG. 11, the acquisition unit 240a of the recognition device 200 acquires recognition distance image data 230a from the distance sensor 20 (step S201).
- the removal unit 240b of the recognition device 200 removes the background from the recognition distance image data 230a (step S202). Based on the discriminator data 130f and the distance image data, the determination unit 240c of the recognition device 200 determines each part label of the human body and the part label of the object included in the distance image data (step S203).
- the recognition unit 240d of the recognition device 200 removes the part label of the object (step S204).
- the recognition unit 240d recognizes the posture of the target person based on each part label of the human body (step S205).
- the generation unit 140b of the learning device 100 generates a plurality of learning images in which the distance image data and the part label image are associated with each other based on the combined model data obtained by combining the human body model data 130b and the object model data 130c.
- the learning device 100 machine-learns a plurality of learning images to generate discriminator data 130f in which the features of the distance image data are associated with the human body part label or the object part label.
- because the discriminator data 130f constitutes a classifier that associates features of the distance image data with part labels of the human body or part labels of the object, the distance image data can be classified into human body part labels and object part labels even when the human body and the object are present at the same time when the distance image is acquired.
- the recognition device 200 determines the part labels of the subject and the object using the discriminator data 130f and the distance image data obtained by removing the background from the recognition distance image data 230a acquired from the distance sensor 20. For this reason, even when both the human body and the object are included in the distance image data, the distance image data can be classified into human body part labels and object part labels. That is, correct part recognition can be performed even when occlusion by an object exists.
- since the recognition device 200 identifies the posture of the target person after removing the part label of the object from the determined part labels of the target person and the object, it can accurately recognize the posture of the target person.
- the content of the above-described embodiment is an example, and the processing of the learning device 100 and the recognition device 200 is not limited to the above processing.
- the other processes 1 to 3 will be described.
- in the embodiment, the learning device 100 uses, as the object model data 130c, the object model of a horse that exists at a fixed position, but the object model is not limited to this.
- an object that moves with a specific relationship with a human part may be used as the object model.
- the generation unit 140b generates a three-dimensional model of a suspension ring in the same manner as the horse.
- the generation unit 140b moves the suspension ring to the hand part of the human body model frame by frame, and places the ring on the hand based on the direction of the hand (upward, downward, sideways, and so on).
- the generation unit 140b repeatedly executes the above process for each frame, thereby generating a plurality of combined model data and storing it in the combined model table 130d.
- Other processing is the same as the processing described in the embodiment.
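- as a sketch of that per-frame placement, the helper below orients a ring model by the hand direction and translates it to the hand position; the axis conventions and function names are assumptions for illustration:

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues form)."""
    a = np.asarray(a, float) / np.linalg.norm(a)
    b = np.asarray(b, float) / np.linalg.norm(b)
    v, c = np.cross(a, b), float(np.dot(a, b))
    if np.isclose(c, -1.0):  # opposite directions: a 180-degree rotation
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def attach_ring(ring_vertices, ring_axis, hand_position, hand_direction):
    """Orient the suspension-ring model by the hand direction (upward,
    downward, sideways, ...) and translate it to the hand part for one frame."""
    R = rotation_between(ring_axis, hand_direction)
    return np.asarray(ring_vertices, float) @ R.T + np.asarray(hand_position, float)
```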
- in the embodiment described above, the learning device 100 generates the discriminator data 130f, and the recognition device 200 recognizes the posture of the subject using the discriminator data 130f.
- the present invention is not limited to this.
- the distance image processing device that performs the processing of the learning device 100 and the recognition device 200 may execute processing corresponding to the above-described embodiment.
- the distance image processing apparatus generates the discriminator data 130f by executing the same processing as the control unit 140 of FIG. 2 in the “learning phase”.
- in the "recognition phase", the distance image processing apparatus executes processing similar to that of the control unit 240 shown in FIG. 9 using the discriminator data 130f learned in the learning phase, and recognizes the posture of the target person.
- in FIG. 8, a method of part label recognition using a binary tree has been described. However, a plurality of binary trees may be used, and part labels including an object may be recognized from a distance image by deep learning, without using a binary tree.
- FIG. 12 is a diagram illustrating an example of a hardware configuration of a computer that implements the same function as the learning device.
- the computer 300 includes a CPU 301 that executes various arithmetic processes, an input device 302 that receives data input from a user, and a display 303.
- the computer 300 also includes a reading device 304 that reads a program or the like from a storage medium, and an interface device 305 that exchanges data with another computer (such as the motion capture device 10) via a wired or wireless network.
- the computer 300 also includes a RAM 306 that temporarily stores various types of information and a hard disk device 307.
- the devices 301 to 307 are connected to the bus 308.
- the hard disk device 307 has an acquisition program 307a, a generation program 307b, a learning program 307c, and a notification program 307d.
- the CPU 301 reads the acquisition program 307a, the generation program 307b, the learning program 307c, and the notification program 307d, and expands them in the RAM 306.
- the acquisition program 307a functions as the acquisition process 306a.
- the generation program 307b functions as a generation process 306b.
- the learning program 307c functions as a learning process 306c.
- the notification program 307d functions as a notification process 306d.
- the processing of the acquisition process 306a corresponds to the processing of the acquisition unit 140a.
- the process of the generation process 306b corresponds to the process of the generation unit 140b.
- the process of the learning process 306c corresponds to the process of the learning unit 140c.
- the process of the notification process 306d corresponds to the process of the notification unit 140d.
- each program may be stored in a "portable physical medium" such as a flexible disk (FD), a CD-ROM, a DVD, a magneto-optical disk, or an IC card inserted into the computer 300, and the computer 300 may read and execute each of the programs 307a to 307d.
- FIG. 13 is a diagram illustrating an example of a hardware configuration of a computer that realizes the same function as the recognition device.
- the computer 400 includes a CPU 401 that executes various arithmetic processes, an input device 402 that receives input of data from a user, and a display 403.
- the computer 400 also includes a reading device 404 that reads a program or the like from a storage medium, and an interface device 405 that exchanges data with another computer (such as the motion capture device 10) via a wired or wireless network.
- the computer 400 also includes a RAM 406 that temporarily stores various types of information and a hard disk device 407.
- the devices 401 to 407 are connected to the bus 408.
- the hard disk device 407 has an acquisition program 407a, a removal program 407b, a determination program 407c, and a recognition program 407d.
- the CPU 401 reads out the acquisition program 407a, the removal program 407b, the determination program 407c, and the recognition program 407d, and expands them in the RAM 406.
- the acquisition program 407a functions as the acquisition process 406a.
- the removal program 407b functions as a removal process 406b.
- the determination program 407c functions as a determination process 406c.
- the recognition program 407d functions as a recognition process 406d.
- the processing of the acquisition process 406a corresponds to the processing of the acquisition unit 240a.
- the process of the removal process 406b corresponds to the process of the removal unit 240b.
- the process of the determination process 406c corresponds to the process of the determination unit 240c.
- the processing of the recognition process 406d corresponds to the processing of the recognition unit 240d.
- each program may be stored in a "portable physical medium" such as a flexible disk (FD), a CD-ROM, a DVD, a magneto-optical disk, or an IC card inserted into the computer 400, and the computer 400 may read and execute each of the programs 407a to 407d.
Description
20 distance sensor
100 learning device
200 recognition device
Claims (10)
- 1. A distance image processing apparatus comprising: a generation unit that generates, based on a combined model obtained by combining a three-dimensional model of a human body and a three-dimensional model of an object, a plurality of learning images each associating a distance image, which indicates distances from a reference position to each position of the human body or each position of the object, with a part image identifying each part of the human body or the part of the object; and a learning unit that learns, based on the plurality of learning images, a discriminator that associates features of the distance image with the parts of the human body or the part of the object.
- 2. The distance image processing apparatus according to claim 1, further comprising: an acquisition unit that acquires, from a distance sensor, a distance image including a target person, an object, and a background; a removal unit that generates a target person distance image by removing the background from the distance image including the target person, the object, and the background; and a determination unit that determines, based on the target person distance image and the discriminator, the relationship between positions of the target person distance image and the parts of the human body or the part of the object.
- 3. The distance image processing apparatus according to claim 2, further comprising a recognition unit that identifies each part of the human body included in the target person distance image based on a determination result of the determination unit, and recognizes the posture of the target person from the relationship among the remaining parts excluding the part of the object.
- 4. A distance image processing system comprising a learning device and a recognition device, wherein the learning device includes: a generation unit that generates, based on a combined model obtained by combining a three-dimensional model of a human body and a three-dimensional model of an object, a plurality of learning images each associating a distance image, which indicates distances from a reference position to each position of the human body or each position of the object, with a part image identifying each part of the human body or the part of the object; and a learning unit that learns, based on the plurality of learning images, a discriminator that associates features of the distance image with the parts of the human body or the part of the object; and the recognition device includes: an acquisition unit that acquires, from a distance sensor, a distance image including a target person and a background; a removal unit that generates a target person distance image by removing the background from the distance image including the target person and the background; and a determination unit that determines, based on the target person distance image and the discriminator, the relationship between positions of the target person distance image and the parts of the human body or the part of the object.
- 5. A distance image processing method executed by a computer, the method comprising: generating, based on a combined model obtained by combining a three-dimensional model of a human body and a three-dimensional model of an object, a plurality of learning images each associating a distance image, which indicates distances from a reference position to each position of the human body or each position of the object, with a part image identifying each part of the human body or the part of the object; and learning, based on the plurality of learning images, a discriminator that associates features of the distance image with the parts of the human body or the part of the object.
- 6. The distance image processing method according to claim 5, further comprising: acquiring, from a distance sensor, a distance image including a target person, an object, and a background; generating a target person distance image by removing the background from the distance image including the target person, the object, and the background; and determining, based on the target person distance image and the discriminator, the relationship between positions of the target person distance image and the parts of the human body or the part of the object.
- 7. The distance image processing method according to claim 6, further comprising identifying each part of the human body included in the target person distance image based on a result of the determining, and recognizing the posture of the target person from the relationship among the remaining parts excluding the part of the object.
- 8. A distance image processing program causing a computer to execute a process comprising: generating, based on a combined model obtained by combining a three-dimensional model of a human body and a three-dimensional model of an object, a plurality of learning images each associating a distance image, which indicates distances from a reference position to each position of the human body or each position of the object, with a part image identifying each part of the human body or the part of the object; and learning, based on the plurality of learning images, a discriminator that associates features of the distance image with the parts of the human body or the part of the object.
- 9. The distance image processing program according to claim 8, the process further comprising: acquiring, from a distance sensor, a distance image including a target person, an object, and a background; generating a target person distance image by removing the background from the distance image including the target person, the object, and the background; and determining, based on the target person distance image and the discriminator, the relationship between positions of the target person distance image and the parts of the human body or the part of the object.
- 10. The distance image processing program according to claim 9, the process further comprising identifying each part of the human body included in the target person distance image based on a result of the determining, and recognizing the posture of the target person from the relationship among the remaining parts excluding the part of the object.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019516851A JP6860066B2 (ja) | 2017-05-12 | 2017-05-12 | 距離画像処理装置、距離画像処理システム、距離画像処理方法および距離画像処理プログラム |
PCT/JP2017/018034 WO2018207351A1 (ja) | 2017-05-12 | 2017-05-12 | 距離画像処理装置、距離画像処理システム、距離画像処理方法および距離画像処理プログラム |
EP17909348.9A EP3624051A4 (en) | 2017-05-12 | 2017-05-12 | DISTANCE IMAGE PROCESSING DEVICE, DISTANCE IMAGE PROCESSING SYSTEM, DISTANCE IMAGE PROCESSING METHOD AND DISTANCE IMAGE PROCESSING PROGRAM |
CN201780090521.2A CN110622217B (zh) | 2017-05-12 | 2017-05-12 | 距离图像处理装置以及距离图像处理系统 |
US16/676,404 US11087493B2 (en) | 2017-05-12 | 2019-11-06 | Depth-image processing device, depth-image processing system, depth-image processing method, and recording medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2017/018034 WO2018207351A1 (ja) | 2017-05-12 | 2017-05-12 | 距離画像処理装置、距離画像処理システム、距離画像処理方法および距離画像処理プログラム |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/676,404 Continuation US11087493B2 (en) | 2017-05-12 | 2019-11-06 | Depth-image processing device, depth-image processing system, depth-image processing method, and recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018207351A1 true WO2018207351A1 (ja) | 2018-11-15 |
Family
ID=64105072
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/018034 WO2018207351A1 (ja) | 2017-05-12 | 2017-05-12 | 距離画像処理装置、距離画像処理システム、距離画像処理方法および距離画像処理プログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US11087493B2 (ja) |
EP (1) | EP3624051A4 (ja) |
JP (1) | JP6860066B2 (ja) |
CN (1) | CN110622217B (ja) |
WO (1) | WO2018207351A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020178957A1 (ja) * | 2019-03-04 | 2020-09-10 | 日本電気株式会社 | 画像処理装置、画像処理方法及びプログラム記録媒体 |
CN111753632A (zh) * | 2019-03-29 | 2020-10-09 | 本田技研工业株式会社 | 驾驶辅助装置 |
JP2021099666A (ja) * | 2019-12-23 | 2021-07-01 | 住友ゴム工業株式会社 | 学習モデルの生成方法 |
CN113822182A (zh) * | 2021-09-08 | 2021-12-21 | 河南理工大学 | 一种运动动作检测方法和系统 |
US20220334674A1 (en) * | 2019-10-17 | 2022-10-20 | Sony Group Corporation | Information processing apparatus, information processing method, and program |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018207365A1 (ja) | 2017-05-12 | 2018-11-15 | 富士通株式会社 | 距離画像処理装置、距離画像処理システム、距離画像処理方法および距離画像処理プログラム |
EP3579138B1 (en) * | 2018-06-04 | 2020-11-11 | CogVis Software und Consulting GmbH | Method for determining a type and a state of an object of interest |
US11120280B2 (en) * | 2019-11-15 | 2021-09-14 | Argo AI, LLC | Geometry-aware instance segmentation in stereo image capture processes |
CN113096337B (zh) * | 2021-04-08 | 2022-11-11 | 中国人民解放军军事科学院国防工程研究院工程防护研究所 | 用于复杂背景的移动目标识别处理方法及智能安防系统 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012120647A (ja) | 2010-12-07 | 2012-06-28 | Alpha Co | 姿勢検出装置 |
US20150036879A1 (en) | 2013-07-30 | 2015-02-05 | Canon Kabushiki Kaisha | Posture estimating apparatus, posture estimating method and storing medium |
WO2015186436A1 (ja) * | 2014-06-06 | 2015-12-10 | コニカミノルタ株式会社 | 画像処理装置、画像処理方法、および、画像処理プログラム |
US20160125243A1 (en) | 2014-10-30 | 2016-05-05 | Panasonic Intellectual Property Management Co., Ltd. | Human body part detection system and human body part detection method |
JP2016212688A (ja) | 2015-05-11 | 2016-12-15 | 日本電信電話株式会社 | 関節位置推定装置、方法、及びプログラム |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000251078A (ja) * | 1998-12-22 | 2000-09-14 | Atr Media Integration & Communications Res Lab | 人物の3次元姿勢推定方法および装置ならびに人物の肘の位置推定方法および装置 |
JP2004226197A (ja) | 2003-01-22 | 2004-08-12 | Seiko Epson Corp | 物体識別方法および物体識別装置、並びに物体識別プログラム |
JP2007310707A (ja) | 2006-05-19 | 2007-11-29 | Toshiba Corp | 姿勢推定装置及びその方法 |
US20110227923A1 (en) * | 2008-04-14 | 2011-09-22 | Xid Technologies Pte Ltd | Image synthesis method |
US8638985B2 (en) | 2009-05-01 | 2014-01-28 | Microsoft Corporation | Human body pose estimation |
EP2430614B1 (de) * | 2009-05-11 | 2013-09-18 | Universität zu Lübeck | Verfahren zur echtzeitfähigen, rechnergestützten analyse einer eine veränderliche pose enthaltenden bildsequenz |
US8213680B2 (en) * | 2010-03-19 | 2012-07-03 | Microsoft Corporation | Proxy training data for human body tracking |
US8625897B2 (en) | 2010-05-28 | 2014-01-07 | Microsoft Corporation | Foreground and background image segmentation |
US8571263B2 (en) | 2011-03-17 | 2013-10-29 | Microsoft Corporation | Predicting joint positions |
KR101815975B1 (ko) * | 2011-07-27 | 2018-01-09 | 삼성전자주식회사 | 객체 자세 검색 장치 및 방법 |
JP2013058174A (ja) * | 2011-09-09 | 2013-03-28 | Fujitsu Ltd | 画像処理プログラム、画像処理方法および画像処理装置 |
KR101763778B1 (ko) * | 2011-09-30 | 2017-08-01 | 인텔 코포레이션 | 깊이 영상들에서 사람의 머리 부위 검출 |
KR101283262B1 (ko) * | 2011-10-21 | 2013-07-11 | 한양대학교 산학협력단 | 영상 처리 방법 및 장치 |
US8666149B2 (en) * | 2012-08-01 | 2014-03-04 | Chevron U.S.A. Inc. | Method for editing a multi-point facies simulation |
US10248993B2 (en) * | 2015-03-25 | 2019-04-02 | Optitex Ltd. | Systems and methods for generating photo-realistic images of virtual garments overlaid on visual images of photographic subjects |
US10659773B2 (en) * | 2017-04-13 | 2020-05-19 | Facebook, Inc. | Panoramic camera systems |
WO2018207365A1 (ja) | 2017-05-12 | 2018-11-15 | 富士通株式会社 | 距離画像処理装置、距離画像処理システム、距離画像処理方法および距離画像処理プログラム |
- 2017-05-12: EP application EP17909348.9A filed (publication EP3624051A4, active, pending)
- 2017-05-12: CN application CN201780090521.2A filed (publication CN110622217B, active)
- 2017-05-12: WO application PCT/JP2017/018034 filed (publication WO2018207351A1)
- 2017-05-12: JP application JP2019516851A filed (publication JP6860066B2, active)
- 2019-11-06: US application US16/676,404 filed (publication US11087493B2, active)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012120647A (ja) | 2010-12-07 | 2012-06-28 | Alpha Co | 姿勢検出装置 |
US20150036879A1 (en) | 2013-07-30 | 2015-02-05 | Canon Kabushiki Kaisha | Posture estimating apparatus, posture estimating method and storing medium |
JP2015167008A (ja) | 2013-07-30 | 2015-09-24 | キヤノン株式会社 | 姿勢推定装置、姿勢推定方法およびプログラム |
WO2015186436A1 (ja) * | 2014-06-06 | 2015-12-10 | コニカミノルタ株式会社 | 画像処理装置、画像処理方法、および、画像処理プログラム |
US20160125243A1 (en) | 2014-10-30 | 2016-05-05 | Panasonic Intellectual Property Management Co., Ltd. | Human body part detection system and human body part detection method |
JP2016091108A (ja) | 2014-10-30 | 2016-05-23 | パナソニックIpマネジメント株式会社 | 人体部位検出システムおよび人体部位検出方法 |
JP2016212688A (ja) | 2015-05-11 | 2016-12-15 | 日本電信電話株式会社 | 関節位置推定装置、方法、及びプログラム |
Non-Patent Citations (1)
Title |
---|
See also references of EP3624051A4 |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020178957A1 (ja) * | 2019-03-04 | 2020-09-10 | 日本電気株式会社 | 画像処理装置、画像処理方法及びプログラム記録媒体 |
JPWO2020178957A1 (ja) * | 2019-03-04 | 2021-10-21 | 日本電気株式会社 | 画像処理装置、画像処理方法及びプログラム |
JP7294402B2 (ja) | 2019-03-04 | 2023-06-20 | 日本電気株式会社 | 画像処理装置、画像処理方法及びプログラム |
US11803615B2 (en) | 2019-03-04 | 2023-10-31 | Nec Corporation | Generating 3D training data from 2D images |
CN111753632A (zh) * | 2019-03-29 | 2020-10-09 | 本田技研工业株式会社 | 驾驶辅助装置 |
US11380120B2 (en) | 2019-03-29 | 2022-07-05 | Honda Motor Co., Ltd. | Driving assistance device |
CN111753632B (zh) * | 2019-03-29 | 2024-03-26 | 本田技研工业株式会社 | 驾驶辅助装置 |
US20220334674A1 (en) * | 2019-10-17 | 2022-10-20 | Sony Group Corporation | Information processing apparatus, information processing method, and program |
JP2021099666A (ja) * | 2019-12-23 | 2021-07-01 | 住友ゴム工業株式会社 | 学習モデルの生成方法 |
JP7482471B2 (ja) | 2019-12-23 | 2024-05-14 | 住友ゴム工業株式会社 | 学習モデルの生成方法 |
CN113822182A (zh) * | 2021-09-08 | 2021-12-21 | 河南理工大学 | 一种运动动作检测方法和系统 |
Also Published As
Publication number | Publication date |
---|---|
CN110622217A (zh) | 2019-12-27 |
JPWO2018207351A1 (ja) | 2020-03-12 |
EP3624051A1 (en) | 2020-03-18 |
US20200074679A1 (en) | 2020-03-05 |
US11087493B2 (en) | 2021-08-10 |
CN110622217B (zh) | 2023-04-18 |
EP3624051A4 (en) | 2020-03-25 |
JP6860066B2 (ja) | 2021-04-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018207351A1 (ja) | 距離画像処理装置、距離画像処理システム、距離画像処理方法および距離画像処理プログラム | |
JP6809604B2 (ja) | 距離画像処理装置、距離画像処理システム、距離画像処理方法および距離画像処理プログラム | |
US11232556B2 (en) | Surgical simulator providing labeled data | |
US11281896B2 (en) | Physical activity quantification and monitoring | |
JP5016602B2 (ja) | モーションキャプチャに使用されるラベリング | |
CN102725038B (zh) | 组合多传感输入以用于数字动画 | |
CN103996184B (zh) | 增强现实应用中的可变形表面跟踪 | |
JP2009265732A (ja) | 画像処理装置及びその方法 | |
JP6760491B2 (ja) | 認識装置、認識システム、認識方法および認識プログラム | |
JP2023109570A (ja) | 情報処理装置、学習装置、画像認識装置、情報処理方法、学習方法、画像認識方法 | |
JP6713422B2 (ja) | 学習装置、イベント検出装置、学習方法、イベント検出方法、プログラム | |
JP6393495B2 (ja) | 画像処理装置および物体認識方法 | |
WO2022024294A1 (ja) | 行動特定装置、行動特定方法及び行動特定プログラム | |
JP6892844B2 (ja) | 情報処理装置、情報処理方法、透かし検出装置、透かし検出方法、及びプログラム | |
JP2021144359A (ja) | 学習装置、推定装置、学習方法、及びプログラム | |
JPWO2020184006A1 (ja) | 画像処理装置、画像処理方法及びプログラム | |
WO2022003981A1 (ja) | 行動特定装置、行動特定方法及び行動特定プログラム | |
WO2023012915A1 (ja) | 姿勢特定プログラム、姿勢特定方法および情報処理装置 | |
WO2023062762A1 (ja) | 推定プログラム、推定方法および情報処理装置 | |
WO2023162223A1 (ja) | 学習プログラム、生成プログラム、学習方法および生成方法 | |
WO2023039527A1 (en) | Body pose tracking of players from sports broadcast video feed | |
Mathur et al. | Real Time Multi-Object Detection for Helmet Safety | |
Khan et al. | Classification of markers in the ARTool kit library to reduce inter-marker confusion |
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17909348; Country of ref document: EP; Kind code of ref document: A1
ENP | Entry into the national phase | Ref document number: 2019516851; Country of ref document: JP; Kind code of ref document: A
NENP | Non-entry into the national phase | Ref country code: DE
WWE | Wipo information: entry into national phase | Ref document number: 2017909348; Country of ref document: EP
ENP | Entry into the national phase | Ref document number: 2017909348; Country of ref document: EP; Effective date: 20191212