WO2018087853A1 - Stereoscopic image generation system, stereoscopic image generation method, and stereoscopic image generation program - Google Patents
Stereoscopic image generation system, stereoscopic image generation method, and stereoscopic image generation program
- Publication number
- WO2018087853A1 (PCT/JP2016/083296)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- subject
- display
- stereoscopic
- data
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H1/00—Apparatus for passive exercising; Vibrating apparatus; Chiropractic devices, e.g. body impacting devices, external devices for briefly extending or aligning unbroken bones
- A61H1/02—Stretching or bending or torsioning apparatus for exercising
Definitions
- The present invention relates to a stereoscopic image generation system that photographs the human body of a subject in time series and generates a real-time stereoscopic image from which the subject's motion state can be confirmed.
- An image generation system has also been proposed that supports rehabilitation (hereinafter also simply “rehabilitation”) by photographing the human body (all or part) of a subject and automatically measuring the patient's range of motion (see, for example, Patent Document 1).
- However, the technique disclosed in Patent Document 1 measures the range of motion by determining joint positions on the front or side of the subject; it does not determine the subject's three-dimensional physique and therefore cannot handle a three-dimensional range of motion that includes the depth direction.
- The range of joint motion addressed in rehabilitation includes the depth direction, such as flexion, extension, and rotation about the target joint. If at least the time-series change of the range of motion can be displayed as an image and a recovery rate or the like can be calculated from it, diagnosis by a doctor or the like can be performed efficiently.
- The present invention therefore aims to provide a stereoscopic image generation system that generates a stereoscopic image from a human body image taken without using a position sensor, and that can easily identify the subject and determine the subject's physique.
- A stereoscopic image generation system according to the present invention comprises: a reception unit that receives image data of an image, including the subject, captured by an imaging device; an image processing unit that displays an image based on the received image data on the display screen of a display device; an image conversion unit that converts the received image data into stereoscopic display data for displaying, on the display screen, a stereoscopic image whose apparent depth can be identified; and a determination unit that executes, based on the stereoscopic display data, at least one of identification of the subject and determination of the subject's physique. The image conversion unit converts the stereoscopic display data so that the display state of the subject on the display screen differs according to the determination result.
- With this configuration, a stereoscopic image can be generated from a human body image of the subject photographed without using a position sensor, and the subject can easily be identified and the subject's physique easily determined.
- (A) is a conceptual diagram of the stereoscopic image generation system, and (B) is a block circuit diagram of the stereoscopic image generation system.
- The remaining figures are explanatory drawings showing: node marks as joint positions and link marks as inter-joint bones on a human body (subject); a display example on the display screen in motion recording mode; a display example in joint angle recording mode; a display example in post-recording measurement mode; and a display example on the initial display screen in tracking mode.
- the stereoscopic image generation system 1 uses an imaging device 2 for imaging a subject and a general-purpose computer 3.
- The general-purpose computer 3 may be a desktop computer provided with a computer main body 4, a display device (monitor) 5, and a keyboard 6 and mouse 7 as input devices, as shown in the figure, but it is not particularly limited as long as it can capture the image data taken by the imaging device 2; it may be, for example, a notebook computer or a tablet terminal functionally integrated with the imaging device 2. The number of imaging devices 2 installed is also not limited to one.
- The computer main body 4 includes a storage circuit unit 41 as a storage unit using various storage media, such as a mass storage device (HDD) in which programs such as an operating system (OS) and various applications are installed, a read-only memory (ROM), and a random access memory (RAM), and a control circuit unit 42 such as a microprocessor (CPU) as a control unit that executes programs stored in the storage circuit unit 41.
- the computer body 4 includes a receiving circuit unit 43 as a receiving unit that receives image data of an image including the subject imaged by the imaging device 2.
- Here, “the image including the subject” means that the image also includes the background and the like captured within the angle of view of the imaging device 2.
- The computer main body 4 further includes: an image processing circuit unit 45 as an image processing unit that displays an image based on the image data received by the receiving circuit unit 43 on the display screen 51 of the display device 5 via the output circuit unit 44; an image conversion circuit unit 46 as an image conversion unit that converts the received image data into stereoscopic display data for displaying, on the display screen 51, a stereoscopic image whose apparent depth can be identified; and a determination circuit unit 47 as a determination unit that executes, based on the stereoscopic display data, at least one of identification of the subject and determination of the subject's physique.
- Here, “stereoscopic display” means that an image is displayed, by image processing, so as to appear visually three-dimensional (hereinafter also “3D”) on a two-dimensional (hereinafter also “2D”) screen, which is a substantially flat surface.
- In the present embodiment, the image processing circuit unit 45 displays a real-time image based on the image data in the unconverted image display area 51A assigned to the display screen 51, and the image conversion circuit unit 46 displays a stereoscopic image based on the stereoscopic display data in the converted image display area 51B assigned to the display screen 51.
- This allows side-by-side confirmation of the current image from the imaging device 2, that is, a real-time image similar to what the naked eye sees, and the converted stereoscopic image.
- The image processing circuit unit 45 and the image conversion circuit unit 46 display, in the unconverted image display area 51A and the converted image display area 51B respectively, images based on the image data received by the receiving circuit unit 43 at the same point in time.
- “Real-time” is not limited to strictly real-time display; it includes moving images, as opposed to still images, that is, the time-series motion of a moving person.
- Real-time also covers the case where the target is the operation of continuously identifying the subject's motion on the moving image in time series.
- The reception circuit unit 43 receives, in time series (continuously or intermittently), image data of images including the subject captured by the imaging device 2, and stores the received image data for each time point in the storage circuit unit 41.
- The determination circuit unit 47 determines whether the time-series image data before and after a given time stored in the storage circuit unit 41 includes a motion in which at least part of the subject's human body changes. When the determination circuit unit 47 determines that the subject's motion is included, the image conversion circuit unit 46 can convert the stereoscopic display data so as to change the display state of the subject on the display device 5 according to that motion.
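The frame-to-frame motion check described above can be sketched as follows. This is a minimal illustration, not the patent's actual method: the function name, pixel threshold, and changed-pixel ratio are all assumptions.

```python
# Minimal sketch (not the patent's method) of deciding whether the subject
# moved between two consecutive frames: count how many pixels changed by
# more than a threshold. All names and thresholds here are illustrative.
import numpy as np

def subject_moved(prev_frame, curr_frame, pixel_thresh=25, ratio_thresh=0.01):
    """Return True when the fraction of pixels whose intensity changed by
    more than pixel_thresh exceeds ratio_thresh."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return bool((diff > pixel_thresh).mean() > ratio_thresh)

# A static frame versus one with a bright 20x20 patch (4% of the pixels).
prev = np.zeros((100, 100), dtype=np.uint8)
curr = prev.copy()
curr[10:30, 10:30] = 200
print(subject_moved(prev, curr))  # True
print(subject_moved(prev, prev))  # False
```

A real implementation would work on the depth or color frames received by the reception unit; the thresholds would need tuning to the sensor's noise level.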
- The time series may be obtained either by continuous shooting of moving images, such as video recording, or by continuous shooting of still images using a camera shutter function.
- Here, the “joint range of motion” means the movable region of a joint.
- If the joint range of motion of an injured (impaired) site is used in the subject's rehabilitation to calculate a recovery rate or the like, diagnosis by a doctor or the like can be performed efficiently.
- The computer main body 4 includes an image composition circuit unit 48 as an image composition unit that superimposes, on the image of the subject P displayed on the display screen 51 (with joint positions matched), node marks corresponding to the joint positions of the subject P, detected automatically or designated manually (for example, as shown in FIG. 2), and link marks extending between adjacent joints of the subject P and from the joints of the subject P to the body ends (for example, the tips of the hands and feet and the head).
- the node mark and the link mark may be simply referred to as “node” or “link”.
- The target positions of the node marks are the vicinity of the forehead of the subject P (specifying the head position), the neck, the base of the neck (center of both shoulders), both shoulders, both elbows, both wrists, the fingertips of both hands, the center of the body, the vicinity of the center of the sacrum, both hip joints, both knees, both ankles, and the tips of the feet; the node marks are connected by link marks.
- the entire image including the node mark and the link mark may be referred to as a “bone image”.
- the bone image can be displayed as a three-dimensional image (hereinafter also referred to as “3D bone image”).
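For illustration only, the node/link (“bone image”) structure described above can be represented as a simple data structure. The joint names follow the list in the text; the coordinates, identifiers, and function name are invented for this sketch.

```python
# Illustrative sketch of the "bone image": node marks are named joint
# positions, link marks connect adjacent joints. Names are taken from the
# joint list in the text; everything else is an invented example.
JOINTS = ["head", "neck", "shoulder_center", "shoulder_l", "shoulder_r",
          "elbow_l", "elbow_r", "wrist_l", "wrist_r", "spine",
          "hip_center", "hip_l", "hip_r", "knee_l", "knee_r",
          "ankle_l", "ankle_r", "foot_l", "foot_r"]

LINKS = [("head", "neck"), ("neck", "shoulder_center"),
         ("shoulder_center", "shoulder_l"), ("shoulder_center", "shoulder_r"),
         ("shoulder_l", "elbow_l"), ("elbow_l", "wrist_l"),
         ("shoulder_r", "elbow_r"), ("elbow_r", "wrist_r"),
         ("shoulder_center", "spine"), ("spine", "hip_center"),
         ("hip_center", "hip_l"), ("hip_center", "hip_r"),
         ("hip_l", "knee_l"), ("knee_l", "ankle_l"), ("ankle_l", "foot_l"),
         ("hip_r", "knee_r"), ("knee_r", "ankle_r"), ("ankle_r", "foot_r")]

def bone_image(positions):
    """Build a bone image: node marks at detected joint positions plus the
    link marks whose two endpoint joints are both present."""
    nodes = {j: positions[j] for j in JOINTS if j in positions}
    links = [(a, b) for a, b in LINKS if a in nodes and b in nodes]
    return {"nodes": nodes, "links": links}

# Partial detection: only the left-arm joints are visible in this frame.
arm = {"shoulder_l": (0, 0, 0), "elbow_l": (0, -30, 0), "wrist_l": (0, -55, 5)}
print(len(bone_image(arm)["links"]))  # 2: shoulder-elbow and elbow-wrist
```

Building only the links whose endpoints were detected mirrors the text's point that invisible parts yield an incomplete skeleton rather than a full one.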
- The image composition circuit unit 48 can selectively superimpose such a bone image on the image of the subject P displayed in the unconverted image display area 51A or the converted image display area 51B. The determination circuit unit 47 determines the setting, selection, or change of the work content, and according to the determination result the image conversion circuit unit 46 converts the stereoscopic display data so as to change the display state of the subject P in the converted image display area 51B: a stereoscopic image only, a stereoscopic image with a bone image superimposed, or a 3D bone image only.
- That is, the image composition circuit unit 48 shares part of the function of the image conversion circuit unit 46 of converting the stereoscopic display data so as to change the display state of the subject P on the display screen 51 according to the determination result.
- The image conversion circuit unit 46 and the image composition circuit unit 48 also have a function of converting the stereoscopic image and the 3D bone image of the subject P into stereoscopic display data in different display states, for example into a rotated image viewed as if from multiple directions.
- The determination circuit unit 47 determines whether at least a part of the subject P's human body corresponding to a node mark or link mark has moved between time-series frames of the image data stored in the storage circuit unit 41.
- When motion is detected, it is desirable that the determination circuit unit 47 output movement trajectory information to the image composition circuit unit 48 so that a movement trajectory mark indicating the movement trajectory is displayed following the corresponding node mark or link mark. As a result, the joint range of motion changing in time series can be easily confirmed with the naked eye.
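The movement trajectory mark that follows a node can be sketched as a bounded trail of past positions. The class name and buffer length below are invented for illustration; they are not part of the patent.

```python
# Hedged sketch of a movement-locus mark: keep a trailing history of a
# tracked node's positions so the joint's path can be drawn over the image.
from collections import deque

class TrajectoryMark:
    def __init__(self, max_points=50):
        self.points = deque(maxlen=max_points)  # oldest points drop off

    def update(self, position):
        self.points.append(position)

    def path(self):
        return list(self.points)

# With a 3-point buffer, only the newest three positions remain drawable.
mark = TrajectoryMark(max_points=3)
for p in [(0, 0), (1, 2), (2, 4), (3, 6)]:
    mark.update(p)
print(mark.path())  # [(1, 2), (2, 4), (3, 6)]
```

A bounded buffer keeps the drawn locus readable; an unbounded history could instead be stored, matching the text's suggestion of saving range-of-motion information for the rehabilitation record.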
- The bone image (or 3D bone image) can be superimposed on either the subject P in the image displayed in the unconverted image display area 51A or the subject P in the image displayed in the converted image display area 51B, and the superimposed display can be switched on and off.
- The storage circuit unit 41 stores a stereoscopic image generation program that includes: a reception function for receiving image data of an image including the subject P photographed by the imaging device 2; an image processing function for displaying an image based on the received image data on the display screen 51 of the display device 5; an image conversion function for converting the received image data into stereoscopic display data for displaying a stereoscopic image whose apparent depth can be identified; a determination function that executes at least one of identification of the subject P and determination of the subject P's physique; and a function for converting the stereoscopic display data so that the display state of the subject P on the display screen 51 differs according to the determination result.
- The image processing circuit unit 45 receives, via the reception circuit unit 43 and the control circuit unit 42, image data captured by a two-dimensional color image sensor (not shown) of the imaging device 2, and outputs the processed image data to the control circuit unit 42.
- The control circuit unit 42 outputs the image data to the output circuit unit 44 so that a real-time (color) image including the subject P is displayed in the unconverted image display area 51A assigned to the display screen 51 of the display device 5, for example as shown in the figures. Thus, an image including the subject P can be displayed in the unconverted image display area 51A of the display screen 51.
- The detailed display states of FIGS. 3 to 7 will be described later.
- The image processing circuit unit 45 stores the captured image data in the storage circuit unit 41 in parallel with output to the display device 5 via the control circuit unit 42 and the output circuit unit 44. The control circuit unit 42 can therefore display a fixed image in the unconverted image display area 51A based on image data recalled from the storage circuit unit 41 and fixed (for example, one frame). This makes it possible, for example, to designate the joint positions of the subject P, described later, using the mouse 7 (see FIG. 5).
- The image conversion circuit unit 46 captures image data from either the control circuit unit 42 or the storage circuit unit 41, via the image processing circuit unit 45 and the reception circuit unit 43, and converts it into stereoscopic display data for displaying on the display screen 51 a stereoscopic image whose apparent depth can be identified (see FIGS. 8 and 9). Accordingly, “the image data received by the receiving circuit unit” means image data identical to that processed by the image processing circuit unit 45, regardless of where it is acquired from.
- Image conversion by the image conversion circuit unit 46 is performed by a known method based on the image data. For example, as shown in FIG. 8 or FIG. 9, the stereoscopic display data is converted so as to change the display state in the converted image display area 51B of the display screen 51 by displaying a 3D image expressed as a point cloud so that at least the node marks can be identified and a sense of depth is easy to perceive (see FIG. 8), or a color stereoscopic image expressed as a point cloud so that colors, such as the color of the subject P's clothes, can be identified (for example, FIG. 9). The user can thus recognize the depth direction from the density (including color) of the point cloud.
- It is also possible to convert the stereoscopic display data so as to change the display state by deleting, through automatic recognition or designation, persons other than the subject P and surrounding objects, or by replacing them with a pseudo point cloud (including a color change).
- Here, “point cloud” means generating stereoscopic image data of a point cloud (stipple) image-processed so as to appear visually three-dimensional (hereinafter also “3D” or “solid”) on the two-dimensional screen, and displaying a stereoscopic image based on the generated data.
- The point cloud can also generate and display color stereoscopic image data that shows, for example, the subject's clothes in their real colors.
- The point cloud makes it possible to discriminate, in the depth direction orthogonal to the display screen, between the subject and other objects (including people): for objects other than the subject, a color stereoscopic image with colors corresponding to distance is generated and displayed. Note that stereoscopic image display by the point cloud applies to still image data.
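The color-by-distance display can be illustrated with a simple depth-to-color mapping. The linear white-to-blue blend and the near/far limits below are assumptions made for this sketch, not the patent's actual scheme.

```python
# Illustrative depth-to-colour mapping for a point cloud: nearer points
# render differently from farther ones. The linear near->far blend from
# white to blue and the range limits are invented for this example.
def depth_to_rgb(depth, near=0.5, far=4.0):
    """Blend from white (near) to blue (far), clamping depth to [near, far]."""
    t = (min(max(depth, near), far) - near) / (far - near)
    return (int(255 * (1 - t)), int(255 * (1 - t)), 255)

print(depth_to_rgb(0.5))  # (255, 255, 255) -- nearest: white
print(depth_to_rgb(4.0))  # (0, 0, 255)     -- farthest: blue
```

Applying such a mapping per point gives the viewer a depth cue from color density, as the text describes for recognizing the depth direction.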
- For identifying the overlapping state of superimposed images (which is in front), a method such as layers or depth used in the image processing field can be employed.
- With this method, in the example shown in FIGS. 8 and 9, the rear edge and front edge of the door located behind the subject P are displayed in colors different from the subject P (for example, yellow and white). It is also possible to convert the stereoscopic display data so as to change the display state, for example by deleting the display of the door edges.
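Deleting background clutter such as the door edges can be sketched as a depth cutoff applied to the depth map. The function name, cutoff value, and sample depths are all invented for this illustration.

```python
# Hypothetical sketch of removing background clutter (e.g. the door edges
# behind the subject) from the stereoscopic display: keep only points whose
# depth is nearer than a cutoff. The cutoff and sample values are invented.
import numpy as np

def mask_background(depth_map, cutoff):
    """Boolean mask keeping only points nearer than cutoff (in metres)."""
    return depth_map < cutoff

depth = np.array([[1.2, 1.3, 3.5],
                  [1.1, 1.2, 3.6]])       # subject ~1.2 m, door ~3.5 m
keep = mask_background(depth, cutoff=2.0)
print(int(keep.sum()))  # 4 subject points kept; the 2 door points dropped
```

In practice the cutoff could be set automatically from the subject's detected depth rather than hard-coded.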
- Here, “apparent” reflects that the image displayed on the display screen 51 is a two-dimensional (2D) planar image, not a true stereoscopic display, so the depth direction must be made recognizable to the naked eye. When a display device capable of true stereoscopic (3D) display, such as a hologram, is used, stereoscopic image data for that 3D display can of course be generated. In addition, as shown in FIG. 8, it is possible to display only the nodes (or only the links).
- “Image conversion” means that, when the display screen 51 includes the unconverted image display area 51A and the converted image display area 51B, stereoscopic display data is generated for displaying an apparent stereoscopic image in the converted image display area 51B based on the original image data. When the display screen 51 has no unconverted image display area 51A, the display state of the display screen 51 is switched from image display based on the image data to stereoscopic image display based on the stereoscopic display data.
- the image conversion means generating stereoscopic display data for displaying on the display screen 51 a stereoscopic image having a sense of depth in the form of a single color grid or a point group based on the original image data. .
- Converting the stereoscopic display data so as to change the display state by the image conversion circuit unit 46 means generating stereoscopic display data for a stereoscopic image having a sense of depth; specifically, varying the hue according to depth within a single color. In this case, for example, the display can make the subject P stand out by excluding bright or dark colors above a predetermined level. The conversion can also include switching between the point-cloud stereoscopic image displays shown in FIGS. 8 and 9.
- the determination circuit unit 47 executes identification of the subject P from other persons and determination of the physique of the subject P based on the stereoscopic display data.
- Pattern data, such as person lattice patterns or person point cloud patterns obtained when a subject P is photographed, are stored in advance, classified for example by gender, height, and posture (standing, lying down).
- The subject P can thus be specified by comparing the pattern data with the stereoscopic display data, and persons other than the subject P (such as helpers) and objects (chairs, desks, doors, beds, etc.) can be excluded.
- The physique of the subject P can also be determined by comparing the pattern data with the stereoscopic display data.
- the determination of the physique can include the determination of the motion of the subject P and the specification of the joint position.
- the identification (automatic) of the joint position of the subject P will be described later.
- Motion determination after the subject P has been specified (it is possible even before specification) compares, for example, the image data of the subject P before and after a given time (for example, image patterns in the X-Y directions). This makes it possible to determine the presence or absence of motion for the whole of the subject P or for a part of it.
- When the determination circuit unit 47 determines that the subject P is moving, it performs pattern analysis on the image data or stereoscopic display data corresponding to the motion changing in time series, and can generate movement trajectory data by superimposing the pattern changes. Furthermore, the determination circuit unit 47 specifies changes in node and link positions from the node data indicating joint positions and the link data connecting nodes to each other or to the body ends, thereby determining the joint range of motion; for example, the range of motion can be specified from the movement trajectory with the node position or link tip as reference. Node data and link data are processed as mark image data, that is, bone image data (3D bone image data).
- The determination circuit unit 47 can store the time-series change of the specified joint range of motion in the storage circuit unit 41 as range-of-motion information for the rehabilitation process of the subject P. It also outputs movement trajectory information to the image composition circuit unit 48 so that a movement trajectory mark indicating the movement trajectory is displayed following the corresponding node mark or link mark.
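Deriving a joint's range of motion from node data can be sketched as computing the angle at a middle joint from three node positions and taking the time-series extremes. This is a minimal illustration; the recovery-rate formula against an assumed 150-degree healthy range is invented, not the patent's calculation.

```python
# Minimal sketch of measuring a joint range of motion from node data: the
# angle at a middle joint is computed from three node positions, and its
# time-series extremes give the measured range. The recovery-rate formula
# against an assumed 150-degree healthy range is an invented example.
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# Elbow angles sampled over time from shoulder/elbow/wrist node marks.
angles = [joint_angle((0, 1), (0, 0), w) for w in [(1, 0), (1, 1), (0.2, 1)]]
rom = max(angles) - min(angles)          # measured range of motion
recovery = rom / 150 * 100               # percent of assumed healthy range
print(round(rom))  # 79
```

Repeating this over rehabilitation sessions and storing each session's range would give exactly the time-series range-of-motion record the text describes.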
- “Determination” means determining the physique of the subject P, including identification of the subject P (determining whether a person is the subject P). The determination can include: excluding persons other than the subject P at the same time as identifying the subject P; determining the motion of the subject P and specifying joint positions (determining whether a point is a joint); comparing the image data or stereoscopic display data of the subject P before and after a given time; and specifying the joint range of motion by pattern analysis (determining whether the coordinates of operating points, inflection points, and the like have changed).
- The image composition circuit unit 48 superimposes the node marks corresponding to the joint positions of the subject P, and the link marks extending between adjacent joints of the subject P and from the joints to the body ends (limbs/tips), on the image of the subject P displayed in the unconverted image display area 51A of the display screen 51, with positions matched. Note that the shape of the node mark and the line segment of the link mark are not limited to the circles and solid lines shown in the figures.
- “Image composition” means generating composite image data for displaying a composite image (superimposed image) on the display image shown on the display screen 51, based on the image data or stereoscopic display data; specifically, displaying node marks and link marks set by automatic recognition or manual designation.
- the image composition means displaying a movement locus (see FIG. 7) as a joint movable range in rehabilitation. A specific movement locus will be described later.
- Step S1: the control circuit unit 42 executes a reception step of determining whether image data has been received from the imaging device 2 via the reception circuit unit 43. If it determines that image data has been received, it proceeds to step S2; otherwise it continues to monitor in this routine.
- Step S2: the control circuit unit 42 causes the image processing circuit unit 45 to execute an image processing step of displaying an image based on the received image data on the display screen 51 of the display device 5, and proceeds to step S3.
- Step S3: the control circuit unit 42 causes the image conversion circuit unit 46 to execute an image conversion step of converting the received image data into stereoscopic display data for displaying on the display screen 51 a stereoscopic image whose apparent depth can be identified, and proceeds to step S4.
- Step S4: the control circuit unit 42 causes the determination circuit unit 47 to execute a determination step of executing at least one of identification of the subject P and determination of the subject P's physique based on the stereoscopic display data, and proceeds to step S5.
- Step S5: the control circuit unit 42 causes the image conversion circuit unit 46 to execute a stereoscopic image display step of converting the stereoscopic display data so as to change the display state of the subject on the display screen according to the determination result, and then continues with the following routine.
- Step S6: the control circuit unit 42 causes the image composition circuit unit 48 to display, on the display screen 51, the node marks and link marks determined automatically or by designation, and proceeds to step S7.
- Step S7: the control circuit unit 42 causes the determination circuit unit 47 to determine whether the subject P has moved, based on the image data or stereoscopic display data stored in the storage circuit unit 41. If the determination circuit unit 47 determines that there is motion, the process proceeds to step S8; otherwise this routine ends.
- Step S8: the control circuit unit 42 causes the determination circuit unit 47 to analyze the motion and, for example, causes the image composition circuit unit 48 to generate a movement trajectory mark (not shown) for displaying the movement trajectory, and then proceeds to step S9.
- Step S9: the control circuit unit 42 causes the image composition circuit unit 48 to display the movement trajectory mark corresponding to the part where the subject P moved on the image displayed in the unconverted image display area 51A or the converted image display area 51B, and this routine ends.
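Steps S1 to S9 above can be condensed into a single processing loop, sketched here with stand-in stubs for the circuit units. Everything below is illustrative structure only, not the actual implementation.

```python
# Illustrative condensation of steps S1-S9 into one loop; every helper is
# a stand-in stub for the corresponding circuit unit, not real hardware.
def run_pipeline(frames, detect_motion, to_stereo, judge, draw_bones,
                 draw_trajectory):
    shown, prev = [], None
    for frame in frames:                                  # S1: receive data
        stereo = to_stereo(frame)                         # S3: convert
        result = judge(stereo)                            # S4: determine
        view = draw_bones(stereo, result)                 # S6: node/link marks
        if prev is not None and detect_motion(prev, frame):   # S7: moved?
            view = draw_trajectory(view)                  # S8-S9: locus mark
        shown.append(view)                                # S2/S5: display
        prev = frame
    return shown

# Stub run: the third frame differs from the second, so it gets a trace.
out = run_pipeline(
    frames=[0, 0, 1],
    detect_motion=lambda a, b: a != b,
    to_stereo=lambda f: ("stereo", f),
    judge=lambda s: "subject",
    draw_bones=lambda s, r: [s, r],
    draw_trajectory=lambda v: v + ["trace"],
)
print(out[2])  # [('stereo', 1), 'subject', 'trace']
```

The trajectory branch fires only when the motion check between consecutive frames succeeds, mirroring the S7 decision in the flow.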
- It is calculated by the determination circuit unit 47 in step S8, and the result is displayed on the display screen 51 by the image composition circuit unit 48.
- The node corresponding to a joint position of the subject P can be designated by moving the mouse pointer to the joint position in the image of the subject P displayed in the unconverted image display area 51A using the mouse 7, as described above; automatic recognition is also possible.
- For automatic recognition, the determination circuit unit 47 generates node information indicating each joint position of the subject P based on image data obtained by photographing the subject P. Specifically, it is assumed here that the determination circuit unit 47 is configured using Microsoft (registered trademark) Kinect (registered trademark). Kinect is a so-called motion capture device that can recognize the subject's motion: it functions as a non-contact controller that recognizes the movement and posture of the subject P in real time based on image data captured by a non-contact camera.
- The non-contact camera functions as a distance (depth) image sensor and accurately recognizes the posture of the subject P by applying posture estimation software to the captured image data.
- The image data is treated as a distance image in units of frames; a decision tree prepared in advance identifies which body part each pixel belongs to, and tracking control is executed for each of a predetermined number of body parts. The decision-tree inference can be processed pixel by pixel.
- The 3D arrangement of the parts (joint articulation) is extracted so that kinematic constraints and temporal consistency are maintained.
- Parts that are not visible at this stage are not considered; the result is not a complete skeleton but a hypothesis assembled only from the surface parts that can be identified in the distance image. Furthermore, when several people including the subject P are within the angle of view, the subject P is not yet distinguished at this point.
- Finally, the motion of the actual human skeleton is estimated: the most probable 3D arrangement is calculated from the hypotheses, the skeleton estimate for each subject P is fixed, and a stereoscopic image can be displayed in the converted image display area 51B.
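The hypothesis-selection step above can be sketched as follows. This is an illustrative reconstruction, not the patent's actual algorithm: the scoring rule (confidence penalized by distance from the previous frame, enforcing temporal consistency), the weight, and the coordinates are all assumptions.

```python
# Illustrative sketch: choose the most probable 3D joint position from several
# hypotheses while keeping temporal consistency with the previous frame.
# Score = confidence - smooth * distance_from_previous (assumed rule).

def pick_joint(hypotheses, prev, smooth=0.5):
    """hypotheses: list of ((x, y, z), confidence); prev: (x, y, z) estimate
    from the previous frame. Returns the winning 3D position."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return max(hypotheses, key=lambda h: h[1] - smooth * dist(h[0], prev))[0]

prev_head = (0.0, 1.6, 2.0)
candidates = [((0.02, 1.61, 2.01), 0.7),   # close to the previous frame
              ((0.50, 1.20, 2.50), 0.8)]   # higher confidence but far away
print(pick_joint(candidates, prev_head))   # temporal consistency wins
```

With these numbers the nearby, slightly less confident hypothesis is selected, because the distance penalty outweighs the 0.1 confidence gap.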
- The method of three-dimensional posture estimation is not limited to the above.
- Once each joint position has been determined by automatic recognition or manual designation in this way, the links between adjacent joints and from each joint to the extremities of the body can be determined, and the node marks and link marks can be superimposed on the image of the subject P displayed in the unconverted image display area 51A.
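The link determination described above, connecting adjacent joints and running from joints to the body extremities, can be sketched as a fixed adjacency list over the recognized joints. The joint names, coordinates, and the left-side-only list below are illustrative assumptions, not identifiers used by the system.

```python
# Hypothetical adjacency list of link marks between adjacent joints
# (left side only, for brevity; a real skeleton mirrors this on the right).
SKELETON_LINKS = [
    ("head", "neck"), ("neck", "spine"), ("spine", "hip_center"),
    ("neck", "shoulder_left"), ("shoulder_left", "elbow_left"),
    ("elbow_left", "wrist_left"), ("wrist_left", "hand_left"),
    ("hip_center", "hip_left"), ("hip_left", "knee_left"),
    ("knee_left", "ankle_left"), ("ankle_left", "foot_left"),
]

def link_segments(joints):
    """joints: dict name -> (x, y) screen position of a node mark.
    Returns coordinate pairs to draw as link marks, skipping missing joints."""
    return [(joints[a], joints[b]) for a, b in SKELETON_LINKS
            if a in joints and b in joints]

joints = {"hip_left": (100, 200), "knee_left": (105, 260), "ankle_left": (103, 320)}
print(link_segments(joints))  # two segments: hip->knee and knee->ankle
```

Because undetected joints are simply skipped, the same routine also covers the partial, surface-only hypotheses mentioned earlier.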
- The work mode shown in FIG. 3 is the operation recording mode. This mode can be used while shooting a moving image (and also when not shooting).
- In the unconverted image display area 51A, the characters “operation record” indicating the work mode, a brief description, a captured image based on the image data, and a bone image superimposed on the image of the subject P are displayed. Various icons corresponding to the work contents are also displayed in the unconverted image display area 51A.
- These include: a mirror-image display icon that horizontally inverts the displayed image for when the subject P views the screen; a grid display icon that displays grid lines scaled to a predetermined dimension (for example, 10 cm) on the display screen 51; a protractor icon that displays angles, such as the arm angle measured from the vertical direction; a marker icon for infrared reflection marks and the like; a bone icon that switches the bone image between display and non-display; a cloud icon that selects display or non-display of the 3D display image rendered as a point cloud; and a mosaic icon that applies image processing so that the face of the subject P cannot be identified when privacy protection is required.
- In the converted image display area 51B, on the other hand, at least the bone image is displayed. In the same manner as in the unconverted image display area 51A, in addition to the mirror-image display icon, grid display icon, protractor icon, marker icon, bone icon, and cloud icon, an image superimposition icon is displayed that can superimpose a moving image taken in the past (for example, one month ago) on a stereoscopic image taken at a different time, such as the present, for on-screen comparison. The unconverted image display area 51A and the converted image display area 51B therefore allow independent operations on their displayed images.
- Even when the distance between the subject P and the imaging device changes, the image composition circuit unit 48 changes only the displayed motion without changing the size of the bone image. The image composition circuit unit 48 can also display the bone image as a 3D bone image and change its orientation.
- The image conversion circuit unit 46 or the image composition circuit unit 48 converts (generates) the stereoscopic image data and the 3D bone image data based on the image data in order to display the stereoscopic image or the 3D bone image. Thus, although the lengths of the limbs do not change when the subject P bends and stretches them while approaching the camera, a two-dimensional display makes the limbs appear shortened; in the stereoscopic image data and the 3D bone image data, however, calculation processing is performed so that the angle change at that time is grasped accurately.
- With respect to a walking motion, the image composition circuit unit 48 does not translate the 3D bone image forward (in the walking direction) but displays only the movement of the body in place.
- When the 3D bone image is displayed with the orientation of the bone image in the converted image display area 51B changed, grid lines can also be displayed in the depth direction. Accordingly, when the 3D bone image is tilted from the front view, for example, the stride and the like can easily be confirmed. The bone image is displayed larger than the image of the subject P displayed in the unconverted image display area 51A.
- In this way, the image composition circuit unit 48 can convert the stereoscopic display data so as to change the display state of the subject on the display screen 51, as part of the function of the image conversion circuit unit 46. Since recording is possible in this operation recording mode, the same image processing as described above can be reproduced on the recorded moving image.
- The work mode shown in FIG. 4 is the joint angle recording mode. This mode can be used after shooting a moving image (and also while shooting).
- In the unconverted image display area 51A, the characters “joint angle recording” indicating the work mode, a brief description, a captured image based on the captured image data, and a bone image superimposed on the image of the subject P are displayed, together with the various icons described above and various video operation icons for reproducing the captured image data.
- In the converted image display area 51B, the bone image and the various icons described above are displayed; the bone image is displayed larger than the image of the subject P in the unconverted image display area 51A.
- An angle display image is shown on the bone image by operating the protractor icon. The image data of the angle display image may be generated by either the image conversion circuit unit 46 or the image composition circuit unit 48.
- The work mode shown in FIG. 5 is the post-recording measurement mode. This mode can be used after shooting a moving image (a still image being shot is also possible).
- When the operator uses the mouse or the like on the image of the subject P displayed in the unconverted image display area 51A (or on the stereoscopic image displayed in the converted image display area 51B) to designate, for example, the left and right toes and the vicinity of the hip joint, the leg spread can be measured, and the angle can be displayed in the same manner as described above.
- An angle display screen showing the numerically calculated angles can also be displayed. The calculation processing function for the angle display will be described later.
- The work mode shown in FIGS. 6 and 7 is the tracking mode. This mode can be used after shooting a moving image (and also simultaneously with shooting).
- In the unconverted image display area 51A, the characters “tracking” indicating the work mode, a brief description, a captured image based on the captured image data, the various icons described above, and various video operation icons for reproducing the captured image data are displayed.
- A work designation screen is also displayed. The operator designates “tracking” on the work designation screen and then designates, with the mouse or the like, the node mark at the position to be tracked (for example, the head). The designated node mark differs in size and color, for example, so that it can be distinguished from the other node marks.
- The movement trajectory of the node mark (the two-dot chain line in the figure) is then displayed according to the motion of the subject P, as shown in the figure.
- Playback at several times normal speed and rotation of the 3D bone image are possible, so the movement trajectory can be confirmed three-dimensionally.
- The display form of the movement trajectory is arbitrary and need not be a simple line. For example, the position of the node mark can be displayed at predetermined intervals (for example, every 0.2 seconds), which also makes it possible to measure the movement speed.
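Sampling the node-mark position at a fixed interval, as described above, directly yields the movement speed. A minimal sketch, assuming 3D positions in metres and the 0.2-second interval mentioned in the text; the head positions are made-up values:

```python
def speeds(samples, dt=0.2):
    """samples: node-mark positions (x, y, z) captured every dt seconds.
    Returns the speed over each interval, in units of metres per second."""
    out = []
    for p, q in zip(samples, samples[1:]):
        d = sum((qi - pi) ** 2 for pi, qi in zip(p, q)) ** 0.5
        out.append(d / dt)
    return out

head = [(0.0, 1.6, 2.0), (0.1, 1.6, 2.0), (0.3, 1.6, 2.0)]
print(speeds(head))  # 0.1 m then 0.2 m per 0.2 s interval
```

The same per-interval positions are what the trajectory display draws as discrete marks, so the speed measurement falls out of data the mode already records.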
- The determination circuit unit 47 stores in the storage circuit unit 41 the image data of a moving image in which the subject P rotates a toe and, based on the stored image data, detects the link L extending from the left ankle joint to the toe as a movement trajectory that moves with the rotation of the toe (for example, the two-dot chain ellipse shown in FIG. 10A).
- The determination circuit unit 47 determines, from the time-series image data stored in the storage circuit unit 41, whether at least a part of the body of the subject P is moving; when it determines that at least a part of the body is moving, it identifies the movement trajectory and stores it in the storage circuit unit 41 as range-of-motion information for the rehabilitation process of the subject P.
- The determination circuit unit 47 compares the past range-of-motion information of the same subject P stored in the storage circuit unit 41 with the current range-of-motion information to determine whether the range of motion has expanded; when it determines that it has expanded, it outputs the determination result information to the image processing circuit unit 45 or the image conversion circuit unit 46 so that the result is displayed on the display screen 51.
- The determination circuit unit 47 may also calculate the degree of recovery of the current range-of-motion information with reference to average range-of-motion information stored in advance in the storage circuit unit 41 (for example, by gender and age for each joint, or by link length) or a target value set in advance for the doctor to judge recovery (for example, the end of rehabilitation), and output the calculation result information to the image processing circuit unit 45 or the image conversion circuit unit 46 so that the result is displayed on the display screen 51.
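The recovery-degree calculation described above can be sketched as a simple ratio of the current range of motion to a stored average or target value. The reference value and joint below are illustrative assumptions, not figures from the source:

```python
def recovery_degree(current_rom, reference_rom):
    """Degree of recovery as a percentage of the reference (average or target)
    range of motion; both arguments in degrees."""
    return 100.0 * current_rom / reference_rom

# Assumed target: 130 degrees of knee flexion marks the end of rehabilitation.
target_knee_flexion = 130.0
print(recovery_degree(91.0, target_knee_flexion))  # prints 70.0
```

A value of 100 % or more against the target would correspond to the recovery criterion being met; against a gender/age average it instead situates the subject relative to the population.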
- The calculation result information may be output via the image composition circuit unit 48, or output from the control circuit unit 42 to the output circuit unit 44 and displayed at an arbitrary position on the display screen 51 (for example, in a display area separate from the areas 51A and 51B).
- Accordingly, a medical worker such as an orthopedic doctor, or a physical or occupational therapist supporting the rehabilitation, can easily recognize the degree of recovery of the subject P numerically and use it in diagnosing the future treatment (rehabilitation) policy.
- The control circuit unit 42 (or the determination circuit unit 47) can also display auxiliary functions, such as guidelines and courses for the rehabilitation of the subject P, on the display screen 51 based on, for example, the result of determining the physique.
- It is also possible to display the rehabilitation content according to a rehabilitation program on the display screen 51 or another monitor, execute the rehabilitation, identify the motion of the subject P at that time, and automatically measure quantities such as joint distortion (GLAB).
- Since the captured image data and the stereoscopic display data can be stored in the storage circuit unit 41 in time series, they can also be used, for example, as information for obtaining detailed analysis results at a later date.
- In this way, stereoscopic display data can be generated for a high-precision stereoscopic composite image whose stereoscopic display image can be rotated and inverted. The generated stereoscopic display data can be expected to gain versatility as data for subsequent rehabilitation, not merely for display.
- As described above, the three-dimensional image generation system generates a three-dimensional image from an image of the human body of a subject photographed without using position sensors, and performs identification of the subject and determination of the subject's physique.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Pathology (AREA)
- Physical Education & Sports Medicine (AREA)
- Pain & Pain Management (AREA)
- Physiology (AREA)
- Dentistry (AREA)
- Epidemiology (AREA)
- Physics & Mathematics (AREA)
- Biophysics (AREA)
- Rehabilitation Therapy (AREA)
- Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Processing Or Creating Images (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Rehabilitation Tools (AREA)
- Image Processing (AREA)
Abstract
Description
In step S1, the control circuit unit 42 executes a reception step of determining whether image data has been received from the imaging device 2 via the reception circuit unit 43. When it determines that image data has been received, the control circuit unit 42 proceeds to step S2; otherwise, it continues to monitor this routine.
In step S2, the control circuit unit 42 causes the image processing circuit unit 45 to execute an image processing step of displaying an image based on the received image data on the display screen 51 of the display device 5, and proceeds to step S3.
In step S3, the control circuit unit 42 causes the image conversion circuit unit 46 to execute an image conversion step of converting the received image data into stereoscopic display data for displaying on the display screen 51 a stereoscopic image in which apparent depth can be discerned, and proceeds to step S4.
In step S4, the control circuit unit 42 causes the determination circuit unit 47 to execute a determination step of performing at least one of identification of the subject P and determination of the physique of the subject P based on the stereoscopic display data, and proceeds to step S5.
In step S5, the control circuit unit 42 causes the image conversion circuit unit 46 to execute a stereoscopic image display step of converting the stereoscopic display data so that the display state of the subject on the display screen changes according to the determination result.
In step S6, the control circuit unit 42 causes the image composition circuit unit 48 to display the node marks and link marks, determined automatically or by designation, on the display screen 51, and proceeds to step S7.
In step S7, the control circuit unit 42 causes the determination circuit unit 47 to determine whether the subject P has moved, based on the image data or stereoscopic display data stored in the storage circuit unit 41. If the determination circuit unit 47 determines that there is motion, the process proceeds to step S8; otherwise, this routine ends.
In step S8, the control circuit unit 42 causes the determination circuit unit 47 to analyze the motion and, for example, causes the image composition circuit unit 48 to generate a movement locus mark (not shown) for displaying the movement locus, and proceeds to step S9.
In step S9, the control circuit unit 42 causes the image composition circuit unit 48 to display the movement locus mark, corresponding to the part of the subject P that moved, over the image displayed in the unconverted image display area 51A or the converted image display area 51B, and then ends this routine. When numerical values such as the joint range of motion are calculated, the determination circuit unit 47 calculates them in step S8 and the image composition circuit unit 48 displays the result on the display screen 51.
2 Imaging device
3 Computer
4 Computer main body
41 Storage circuit unit (storage unit)
42 Control circuit unit (control unit)
43 Reception circuit unit (reception unit)
44 Output circuit unit
45 Image processing circuit unit (image processing unit)
46 Image conversion circuit unit (image conversion unit)
47 Determination circuit unit (determination unit)
48 Image composition circuit unit (image composition unit)
5 Display device
51 Display screen
51A Unconverted image display area
51B Converted image display area
Claims (11)
- A stereoscopic image generation system comprising:
a reception unit that receives image data of an image, including a subject, photographed by an imaging device;
an image processing unit that displays an image based on the image data received by the reception unit on a display screen of a display device;
an image conversion unit that converts the image data received by the reception unit into stereoscopic display data for displaying, on the display screen, a stereoscopic image in which apparent depth can be discerned; and
a determination unit that performs at least one of identification of the subject and determination of the physique of the subject based on the stereoscopic display data,
wherein the image conversion unit converts the stereoscopic display data so as to change the display state of the subject on the display screen according to the determination result of the determination unit.
- The stereoscopic image generation system according to claim 1, wherein
the image processing unit displays the image based on the image data in an unconverted image display area assigned on the display screen, and
the image conversion unit displays an image based on the stereoscopic display data in a converted image display area assigned on the display screen.
- The stereoscopic image generation system according to claim 2, wherein,
with respect to the image of the subject displayed in the unconverted image display area, the image conversion unit displays the image of the subject in the converted image display area at the same size, based on the image data received by the reception unit at the same time.
- The stereoscopic image generation system according to claim 2 or claim 3, further comprising
a storage unit that stores the image data received by the reception unit and the stereoscopic display data converted by the image conversion unit, wherein
the reception unit receives the image data of images including the subject photographed by the imaging device in time series and stores the received image data for each time series in the storage unit,
the determination unit determines whether the time-series image data stored in the storage unit contains a motion in which at least a part of the subject's body changes, and,
when the determination unit determines that the subject's motion is contained, the image conversion unit converts the stereoscopic display data so as to change the display state of the subject on the display screen according to the subject's motion.
- The stereoscopic image generation system according to any one of claims 2 to 4, further comprising
an image composition unit that generates mark image data corresponding to node marks corresponding to the joint positions of the subject, and to link marks extending between adjacent joints of the subject and from the joints of the subject to the extremities of the body, and superimposes the marks in correspondence with the image of the subject displayed on the display screen.
- The stereoscopic image generation system according to claim 5, wherein
the image composition unit superimposes and displays the node marks and the link marks so that they coincide with the corresponding joint parts of the image of the subject displayed in the unconverted image display area, and
the determination unit determines whether at least a part of the subject's body corresponding to the node marks and the link marks is moving in the time-series image data stored in the storage unit and, when it determines that at least a part of the body is moving, outputs movement trajectory information to the image composition unit so that a movement trajectory mark indicating the movement trajectory is displayed while the corresponding node mark or link mark follows the motion.
- The stereoscopic image generation system according to any one of claims 4 to 6, wherein
the determination unit determines whether at least a part of the subject's body is moving in the time-series image data stored in the storage unit and, when it determines that at least a part of the body is moving, identifies the movement trajectory and stores it in the storage unit as range-of-motion information for the subject's rehabilitation process.
- The stereoscopic image generation system according to claim 7, wherein
the determination unit compares the past range-of-motion information of the same subject stored in the storage unit with the current range-of-motion information to determine whether the range of motion has expanded and, when it determines that the range of motion has expanded, outputs determination result information to the image processing unit or the image conversion unit so that the result is displayed on the display screen.
- The stereoscopic image generation system according to claim 7, wherein
the determination unit calculates the degree of recovery of the current range-of-motion information with reference to average range-of-motion information or a target value stored in advance in the storage unit, and outputs calculation result information to the image processing unit or the image conversion unit so that the result is displayed on the display screen.
- A stereoscopic image generation method comprising:
a reception step of receiving image data of an image, including a subject, photographed by an imaging device;
an image processing step of displaying an image based on the received image data on a display screen of a display device;
an image conversion step of converting the received image data into stereoscopic display data for displaying, on the display screen, a stereoscopic image in which apparent depth can be discerned;
a determination step of performing at least one of identification of the subject and determination of the physique of the subject based on the stereoscopic display data; and
a stereoscopic image display step of converting the stereoscopic display data so as to change the display state of the subject on the display screen according to the determination result.
- A stereoscopic image generation program that causes a computer to realize:
a reception function of receiving image data of an image, including a subject, photographed by an imaging device;
an image processing function of displaying an image based on the received image data on a display screen of a display device;
an image conversion function of converting the received image data into stereoscopic display data for displaying, on the display screen, a stereoscopic image in which apparent depth can be discerned; and
a determination function of performing at least one of identification of the subject and determination of the physique of the subject based on the stereoscopic display data,
wherein the image conversion function converts the stereoscopic display data so as to change the display state of the subject on the display screen according to the determination result.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/083296 WO2018087853A1 (ja) | 2016-11-09 | 2016-11-09 | 立体画像生成システム、立体画像生成方法及び立体画像生成プログラム |
JP2018549686A JP6930995B2 (ja) | 2016-11-09 | 2016-11-09 | 立体画像生成システム、立体画像生成方法及び立体画像生成プログラム |
JP2021082785A JP2021128794A (ja) | 2016-11-09 | 2021-05-14 | 立体画像生成システム、立体画像生成方法及び立体画像生成プログラム |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2016/083296 WO2018087853A1 (ja) | 2016-11-09 | 2016-11-09 | 立体画像生成システム、立体画像生成方法及び立体画像生成プログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018087853A1 true WO2018087853A1 (ja) | 2018-05-17 |
Family
ID=62109508
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/083296 WO2018087853A1 (ja) | 2016-11-09 | 2016-11-09 | 立体画像生成システム、立体画像生成方法及び立体画像生成プログラム |
Country Status (2)
Country | Link |
---|---|
JP (2) | JP6930995B2 (ja) |
WO (1) | WO2018087853A1 (ja) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019219989A (ja) * | 2018-06-21 | 2019-12-26 | 日本電信電話株式会社 | 姿勢推定装置、姿勢推定方法、およびプログラム |
JP2019219836A (ja) * | 2018-06-19 | 2019-12-26 | Kddi株式会社 | 映像データから人の骨格位置の変位の軌跡を描写するプログラム、装置及び方法 |
WO2020021873A1 (ja) * | 2018-07-24 | 2020-01-30 | 日本電気株式会社 | 処理装置、処理方法及びプログラム |
JP2020126568A (ja) * | 2019-01-31 | 2020-08-20 | ユインケア コーポレーション | Rgb−dカメラを利用したリハビリ訓練システム及び方法 |
JPWO2019229818A1 (ja) * | 2018-05-28 | 2021-02-25 | 富士通株式会社 | 表示方法、表示プログラムおよび情報処理装置 |
WO2021149629A1 (ja) * | 2020-01-21 | 2021-07-29 | Posen株式会社 | 姿勢診断システム、姿勢診断方法及び姿勢診断用データセット |
GB2598825A (en) * | 2020-06-26 | 2022-03-16 | Agile Kinetic Ltd | Method of monitoring mobility |
CN114984450A (zh) * | 2022-05-26 | 2022-09-02 | 苏州景昱医疗器械有限公司 | 控制器、植入式神经刺激系统及计算机可读存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09273946A (ja) * | 1996-04-05 | 1997-10-21 | Anima Kk | 動作解析装置 |
JP2003162720A (ja) * | 2001-11-27 | 2003-06-06 | Victor Co Of Japan Ltd | モデリングデータの生成方法、肖像権侵害の判定方法、肖像権侵害判定装置、肖像権判定用プログラム、及びモデリングデータの記録媒体 |
JP2004041511A (ja) * | 2002-07-12 | 2004-02-12 | Seiko Epson Corp | 負荷動作診断装置 |
WO2012046392A1 (ja) * | 2010-10-08 | 2012-04-12 | パナソニック株式会社 | 姿勢推定装置及び姿勢推定方法 |
JP2013103010A (ja) * | 2011-11-15 | 2013-05-30 | Sony Corp | 画像処理装置、画像処理方法及びプログラム |
JP2014137725A (ja) * | 2013-01-17 | 2014-07-28 | Canon Inc | 情報処理装置、情報処理方法及びプログラム |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10149445A (ja) * | 1996-11-19 | 1998-06-02 | Image Joho Kagaku Kenkyusho | 身体動作解析可視化装置 |
JP2004089355A (ja) * | 2002-08-30 | 2004-03-25 | Taito Corp | 歩行運動装置 |
JP5641222B2 (ja) * | 2010-12-06 | 2014-12-17 | セイコーエプソン株式会社 | 演算処理装置、運動解析装置、表示方法及びプログラム |
JP2014068714A (ja) * | 2012-09-28 | 2014-04-21 | Kitasato Institute | 関節角度測定システム |
WO2014104360A1 (ja) * | 2012-12-28 | 2014-07-03 | 株式会社東芝 | 動作情報処理装置及び方法 |
WO2014112632A1 (ja) * | 2013-01-18 | 2014-07-24 | 株式会社東芝 | 動作情報処理装置及び方法 |
WO2014115817A1 (ja) * | 2013-01-23 | 2014-07-31 | 株式会社東芝 | 動作情報処理装置 |
JP6359343B2 (ja) * | 2013-07-01 | 2018-07-18 | キヤノンメディカルシステムズ株式会社 | 動作情報処理装置及び方法 |
JP6433149B2 (ja) * | 2013-07-30 | 2018-12-05 | キヤノン株式会社 | 姿勢推定装置、姿勢推定方法およびプログラム |
JP6251544B2 (ja) * | 2013-11-05 | 2017-12-20 | 株式会社システムフレンド | リハビリ支援画像生成装置、リハビリ支援システム及びプログラム |
JP2015102913A (ja) * | 2013-11-21 | 2015-06-04 | キヤノン株式会社 | 姿勢推定装置及び姿勢推定方法 |
GB2551238B (en) * | 2014-09-30 | 2019-04-10 | 270 Vision Ltd | Mapping trajectories of the anatomy of the human or animal body for comparitive analysis |
JP6466139B2 (ja) * | 2014-10-20 | 2019-02-06 | 有限会社テレビジネス | 人間の動きを測定するロボット計測器 |
- 2016-11-09: JP application, patent JP2018549686A / JP6930995B2 (active)
- 2016-11-09: WO application PCT/JP2016/083296 / WO2018087853A1 (application filing)
- 2021-05-14: JP application, patent JP2021082785A / JP2021128794A (pending)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09273946A (ja) * | 1996-04-05 | 1997-10-21 | Anima Kk | 動作解析装置 |
JP2003162720A (ja) * | 2001-11-27 | 2003-06-06 | Victor Co Of Japan Ltd | モデリングデータの生成方法、肖像権侵害の判定方法、肖像権侵害判定装置、肖像権判定用プログラム、及びモデリングデータの記録媒体 |
JP2004041511A (ja) * | 2002-07-12 | 2004-02-12 | Seiko Epson Corp | 負荷動作診断装置 |
WO2012046392A1 (ja) * | 2010-10-08 | 2012-04-12 | パナソニック株式会社 | 姿勢推定装置及び姿勢推定方法 |
JP2013103010A (ja) * | 2011-11-15 | 2013-05-30 | Sony Corp | 画像処理装置、画像処理方法及びプログラム |
JP2014137725A (ja) * | 2013-01-17 | 2014-07-28 | Canon Inc | 情報処理装置、情報処理方法及びプログラム |
Non-Patent Citations (2)
Title |
---|
EIJIRO ADACHI: "KINECT applications for the physical rehabilitation", IMAGE LAB, vol. 24, no. 11, 10 November 2013 (2013-11-10), pages 1 - 7 * |
HIROYUKI ADACHI ET AL.: "Real time measurement of large joints from 3D co-ordinates using KINECT", IEICE TECHNICAL REPORT, vol. 114, no. 153, July 2014 (2014-07-01), pages 25 - 29 * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2019229818A1 (ja) * | 2018-05-28 | 2021-02-25 | 富士通株式会社 | 表示方法、表示プログラムおよび情報処理装置 |
JP7070675B2 (ja) | 2018-05-28 | 2022-05-18 | 富士通株式会社 | 表示方法、表示プログラムおよび情報処理装置 |
JP2019219836A (ja) * | 2018-06-19 | 2019-12-26 | Kddi株式会社 | 映像データから人の骨格位置の変位の軌跡を描写するプログラム、装置及び方法 |
JP2019219989A (ja) * | 2018-06-21 | 2019-12-26 | 日本電信電話株式会社 | 姿勢推定装置、姿勢推定方法、およびプログラム |
JP7066122B2 (ja) | 2018-06-21 | 2022-05-13 | 日本電信電話株式会社 | 姿勢推定装置、姿勢推定方法、およびプログラム |
WO2020021873A1 (ja) * | 2018-07-24 | 2020-01-30 | 日本電気株式会社 | 処理装置、処理方法及びプログラム |
JPWO2020021873A1 (ja) * | 2018-07-24 | 2021-08-12 | 日本電気株式会社 | 処理装置、処理方法及びプログラム |
JP2020126568A (ja) * | 2019-01-31 | 2020-08-20 | ユインケア コーポレーション | Rgb−dカメラを利用したリハビリ訓練システム及び方法 |
WO2021149629A1 (ja) * | 2020-01-21 | 2021-07-29 | Posen株式会社 | 姿勢診断システム、姿勢診断方法及び姿勢診断用データセット |
GB2598825A (en) * | 2020-06-26 | 2022-03-16 | Agile Kinetic Ltd | Method of monitoring mobility |
CN114984450A (zh) * | 2022-05-26 | 2022-09-02 | 苏州景昱医疗器械有限公司 | 控制器、植入式神经刺激系统及计算机可读存储介质 |
Also Published As
Publication number | Publication date |
---|---|
JP2021128794A (ja) | 2021-09-02 |
JPWO2018087853A1 (ja) | 2020-05-28 |
JP6930995B2 (ja) | 2021-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6930995B2 (ja) | 立体画像生成システム、立体画像生成方法及び立体画像生成プログラム | |
MassirisFernández et al. | Ergonomic risk assessment based on computer vision and machine learning | |
US9761011B2 (en) | Motion information processing apparatus obtaining motion information of a subject performing a motion | |
Viswakumar et al. | Human gait analysis using OpenPose | |
JP6381918B2 (ja) | 動作情報処理装置 | |
Diego-Mas et al. | Using Kinect™ sensor in observational methods for assessing postures at work | |
US9700242B2 (en) | Motion information processing apparatus and method | |
Bonnechere et al. | Determination of the precision and accuracy of morphological measurements using the Kinect™ sensor: comparison with standard stereophotogrammetry | |
WO2014112645A1 (ja) | 動作情報表示装置及びプログラム | |
Skals et al. | A musculoskeletal model driven by dual Microsoft Kinect Sensor data | |
Kurillo et al. | Upper extremity reachable workspace evaluation with Kinect | |
JP2016140591A (ja) | 動作解析評価装置、動作解析評価方法、及びプログラム | |
WO2015162158A1 (en) | Human motion tracking | |
KR20160076488A (ko) | 근골격계 질환의 발생가능성의 판단 장치 및 방법 | |
Liu et al. | Simple method integrating OpenPose and RGB-D camera for identifying 3D body landmark locations in various postures | |
Kuryło et al. | Machine vision system measuring the trajectory of upper limb motion applying the matlab software | |
JP6558820B2 (ja) | 計測装置、計測方法、及びプログラム | |
KR102310964B1 (ko) | 근골격계 증상을 진단하기 위한 전자 장치, 방법, 및 시스템 | |
JP2002063579A (ja) | 画像解析装置及びその方法 | |
JP6940139B2 (ja) | 身体特性分析装置、身体特性分析方法、及びプログラム | |
JP2014117409A (ja) | 身体関節位置の計測方法および装置 | |
KR20140013662A (ko) | 캘리브레이션 장치 및 방법 | |
JP2021099666A (ja) | 学習モデルの生成方法 | |
JP2019197278A (ja) | 画像処理装置、画像処理装置の制御方法およびプログラム | |
KR102626551B1 (ko) | 체형분석장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16921293 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2018549686 Country of ref document: JP Kind code of ref document: A |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.08.2019) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16921293 Country of ref document: EP Kind code of ref document: A1 |