CN109003301B - Human body posture estimation method based on OpenPose and Kinect and rehabilitation training system

Info

Publication number: CN109003301B
Application number: CN201810737327.5A
Authority: CN (China)
Prior art keywords: dimensional, kinect, coordinates, rgb, point
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN109003301A
Inventors: 宋爱国, 唐心宇, 石珂, 陈大鹏, 李会军, 曾洪, 徐宝国
Current Assignee: Southeast University
Original Assignee: Southeast University
Priority date / filing date: 2018-07-06
Application filed by Southeast University; priority to CN201810737327.5A
Publication of application: CN109003301A (2018-12-14)
Publication of grant: CN109003301B (2022-03-15)


Classifications

    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras
    • G06T 7/80: Image analysis; analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • A61H 1/0214: Stretching or bending or torsioning apparatus for exercising, by rotating cycling movement
    • A61H 1/0237: Stretching or bending or torsioning apparatus for exercising, for the lower limbs
    • A63B 23/04: Exercising apparatus specially adapted for the lower limbs
    • A63B 23/0405: Lower-limb exercising involving a bending of the knee and hip joints simultaneously
    • A63B 23/0464: Walk exercisers without moving parts
    • A63B 24/0087: Electric or electronic controls for exercising apparatus, e.g. controlling load
    • A63B 71/0622: Visual, audio or audio-visual systems for entertaining, instructing or motivating the user
    • G16H 20/30: ICT specially adapted for therapies or health-improving plans, relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • A61H 2201/5043: Control means; interfaces to the user; displays
    • A61H 2201/5058: Control means; sensors or detectors
    • A61H 2201/5092: Control means; optical sensor
    • A61H 2205/10: Devices for specific parts of the body; leg
    • A63B 2024/0096: Controls using performance-related parameters for controlling electronic or video games or avatars
    • A63B 2071/0638: Displaying moving images of recorded environment, e.g. virtual environment
    • A63B 2220/34: Measuring of physical parameters relating to sporting activity; angular speed
    • A63B 2220/806: Special sensors, transducers or devices; video cameras

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physical Education & Sports Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Epidemiology (AREA)
  • Rehabilitation Therapy (AREA)
  • Public Health (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Pain & Pain Management (AREA)
  • General Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Theoretical Computer Science (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Rehabilitation Tools (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a human body posture estimation method and a rehabilitation training system based on OpenPose and Kinect. The human body posture estimation module based on OpenPose and Kinect fuses the two-dimensional human joint points obtained by the OpenPose algorithm with the depth information of the Kinect to obtain three-dimensional human joint points. The scene interactive rehabilitation training virtual scene part builds progressive rehabilitation training virtual scenes on the Unity3D platform and implements functions such as motion control of a virtual agent. The three-dimensional joint point motion track database stores and reads the motion track data of each joint point of the patient during rehabilitation training. The system can capture the space coordinates of three-dimensional human joint points in real time; the scene interactive rehabilitation training is vivid and engaging, and targeted rehabilitation training can be provided according to the patient's condition.

Description

Human body posture estimation method based on OpenPose and Kinect and rehabilitation training system
Technical Field
The invention belongs to the interdisciplinary field of computer vision and rehabilitation robotics, and relates to a human body posture estimation method and a rehabilitation training system based on OpenPose and Kinect.
Background
Since the 1990s, robot-assisted rehabilitation training technology has developed rapidly and attracted broad attention in developed countries. Existing research and clinical applications show that rehabilitation training robots can provide safe, reliable, highly targeted, and adaptive rehabilitation training for patients with limb motor dysfunction caused by stroke, spinal cord injury, and the like, and are of great significance for improving the quality of rehabilitation training, promoting early recovery, and reducing the burden on families and society.
In recent years, intelligent rehabilitation has been pursued to broaden rehabilitation training means and further improve rehabilitation efficiency. Virtual-scene human-computer interaction technology is introduced to stimulate the patient's active participation, thereby increasing training time, intensity, and frequency and improving the training effect. In addition, human body posture estimation technology captures the three-dimensional motion data of the patient's limbs during rehabilitation training, which is used to control a virtual agent in a rehabilitation game and realize human-computer interaction. Extensive applied research on scene interactive virtual environment technology and human body posture estimation technology for rehabilitation training robots has been carried out at home and abroad, and various rehabilitation robot scene interaction systems integrating the two technologies have been developed.
Capturing human posture with the Kinect to realize human-computer interaction with virtual games has gained increasing acceptance owing to its low cost and ease of use. However, the Kinect's built-in skeleton-tracking (bone binding) algorithm is highly susceptible to illumination, foreground occlusion, and self-occlusion of the human body, leading to missed or false recognition. A stroke patient, moreover, often needs to wear supporting or fixing devices to maintain body balance because of partial limb disability, so a scene interaction system based on the Kinect's built-in skeleton-tracking algorithm cannot be applied to rehabilitation robots that partially occlude the body. Furthermore, most existing rehabilitation training scene interaction systems do not provide target-oriented virtual game scenes for different disabled limbs and different rehabilitation stages, and therefore cannot offer a targeted rehabilitation training scheme according to the patient's recovery status.
Disclosure of Invention
The technical problem to be solved by the invention is as follows:
the invention aims to solve the defects in the prior art and provides a human body posture estimation method based on OpenPose and Kinect and a scene interactive rehabilitation training system based on OpenPose and Kinect.
The invention adopts the following technical scheme for solving the technical problems:
a human body posture estimation method based on OpenPose and Kinect comprises the following steps:
(1) calibrating a depth camera and a color camera of the Kinect to obtain internal reference matrixes of the color camera and the depth camera and a rotation matrix and a translation vector from a depth camera coordinate system to a color camera coordinate system;
(2) generating a point cloud array of a three-dimensional space by combining a depth image and a color image of Kinect according to the internal reference matrix, the rotation matrix and the translation vector obtained in the step (1);
(3) synchronizing the Kinect color image with the point cloud array through the timestamp;
(4) obtaining a two-dimensional joint point image coordinate according to the Kinect color image by using an OpenPose algorithm;
(5) searching the point cloud array synchronized in the step (3) for the three-dimensional joint point space coordinates corresponding to the two-dimensional joint point image coordinates;
(6) smoothing and predicting the human body three-dimensional joint point space coordinates obtained in the step (5) by using a median filtering method and the Holt two-parameter exponential smoothing method.
Preferably, the internal reference (intrinsic) matrices of the color camera and the depth camera obtained in the step (1) are, respectively,

$$K_{RGB}=\begin{bmatrix} f_{x\_RGB} & 0 & c_{x\_RGB}\\ 0 & f_{y\_RGB} & c_{y\_RGB}\\ 0 & 0 & 1 \end{bmatrix}
\qquad\text{and}\qquad
K_{D}=\begin{bmatrix} f_{x\_D} & 0 & c_{x\_D}\\ 0 & f_{y\_D} & c_{y\_D}\\ 0 & 0 & 1 \end{bmatrix}$$

where $(f_{x\_RGB}, f_{y\_RGB})$ is the focal length of the color camera, $(c_{x\_RGB}, c_{y\_RGB})$ is the center point coordinate of the color camera, $(f_{x\_D}, f_{y\_D})$ is the focal length of the depth camera, and $(c_{x\_D}, c_{y\_D})$ is the center point coordinate of the depth camera; the rotation matrix and translation vector obtained from the depth camera coordinate system to the color camera coordinate system are $R_{D-RGB}$ and $t_{D-RGB}$, respectively.
Preferably, generating the point cloud array of the three-dimensional space in the step (2) comprises performing the following steps:

I. According to the internal reference matrix of the depth camera, map the two-dimensional image coordinates of the depth image to three-dimensional space coordinates in the depth camera coordinate system. Let a point on the depth image be $(x_D, y_D)$ with depth value $depth(x_D, y_D)$; then the three-dimensional coordinates $(X_D, Y_D, Z_D)$ of the point in the depth camera coordinate system are:

$$X_D=\frac{(x_D-c_{x\_D})\,Z_D}{f_{x\_D}},\qquad Y_D=\frac{(y_D-c_{y\_D})\,Z_D}{f_{y\_D}},\qquad Z_D=depth(x_D,y_D)$$

II. Convert the three-dimensional coordinates $(X_D, Y_D, Z_D)$ in the depth camera coordinate system to the three-dimensional coordinates $(X_{RGB}, Y_{RGB}, Z_{RGB})$ in the color camera coordinate system:

$$\begin{bmatrix}X_{RGB}\\ Y_{RGB}\\ Z_{RGB}\end{bmatrix}=R_{D-RGB}\begin{bmatrix}X_D\\ Y_D\\ Z_D\end{bmatrix}+t_{D-RGB}$$

III. Further project the three-dimensional coordinates $(X_{RGB}, Y_{RGB}, Z_{RGB})$ in the color camera coordinate system onto the two-dimensional color image plane to obtain the two-dimensional color image coordinates $(x_{RGB}, y_{RGB})$:

$$x_{RGB}=\frac{f_{x\_RGB}\,X_{RGB}}{Z_{RGB}}+c_{x\_RGB},\qquad y_{RGB}=\frac{f_{y\_RGB}\,Y_{RGB}}{Z_{RGB}}+c_{y\_RGB}$$

Take the RGB value of the point at $(x_{RGB}, y_{RGB})$ in the color image as the RGB value of the three-dimensional point $(X_{RGB}, Y_{RGB}, Z_{RGB})$.

IV. Repeat steps I to III for each point in the depth image, thereby generating a point cloud array of the three-dimensional space in XYZRGB format.
Preferably, the smoothing and prediction of the human body three-dimensional joint point space coordinates by using a median filtering method and the Holt two-parameter exponential smoothing method in the step (6) comprises the following steps:

The point cloud array coordinates (X, Y, Z) of a joint point have n points within a neighborhood window S, with coordinates $(U_i, V_i, W_i)$, $i = 1, \ldots, n$; the coordinates of the joint point are modified to the per-axis median of the coordinates of the n points, i.e.

$$X=\mathrm{med}\{U_i\},\qquad Y=\mathrm{med}\{V_i\},\qquad Z=\mathrm{med}\{W_i\}$$

The Holt two-parameter exponential smoothing method comprises two basic smoothing formulas and a prediction model, namely:

Smoothing formulas:

$$S_t=\alpha P_t+(1-\alpha)(S_{t-1}+b_{t-1})$$
$$b_t=\beta(S_t-S_{t-1})+(1-\beta)b_{t-1}$$

Prediction model:

$$F_{t+m}=S_t+b_t\,m$$

where α and β are smoothing coefficients taking values in (0, 1); the delay and mean-square-error characteristics are observed by plotting curves of the predicted and actual values, and α and β are adjusted to select the optimal prediction model, completing the filtering of the three-dimensional joint point coordinates.

For the time series of human body three-dimensional joint point space coordinates $P_t=\{P_1, P_2, P_3, \ldots\}$: $P_t$ is the three-dimensional joint point coordinate of the t-th period of the time series, $S_t$ is the smoothed value of the t-th period, $b_t$ is the smoothed trend value of the t-th period, m is the number of lead periods to predict, and $F_{t+m}$ is the predicted value for period t+m. $S_1$ is initialized to $P_1$ and $b_1$ to $P_2-P_1$; subsequent $S_t$ and $b_t$ are obtained by iterating from the preceding $S_{t-1}$ and $b_{t-1}$, and the predicted value $F_{t+m}$ of period t+m is calculated from $S_t$ and $b_t$ of the t-th period.
In another embodiment, a scene interactive rehabilitation training system based on OpenPose and Kinect is provided, which includes:
the human body posture estimation module based on OpenPose and Kinect identifies three-dimensional joint point data of a patient in real time according to a depth image and a color image of the Kinect;
the scene interactive rehabilitation training virtual scene module is used for building a progressive rehabilitation training virtual scene based on a Unity3D platform, and realizing the functions of motion control of a virtual agent, drawing of a joint point chart, visual and auditory feedback, calculation of collision acting force and user basic information input; and
and the three-dimensional joint point motion track database is used for storing the basic information of the user and the space coordinates of the three-dimensional joint points.
Preferably, the human body posture estimation module based on OpenPose and Kinect comprises a Kinect point cloud array generation node, an OpenPose node, a human body three-dimensional joint point mapping and filtering node, an ROS master controller, a Unity3D communication node and a database communication node,
the Kinect point cloud array generating node generates a point cloud array of a three-dimensional space according to the internal reference matrix of the Kinect depth camera and the color camera, the rotation matrix and the translation vector from the depth camera coordinate system to the color camera coordinate system, and the depth image and the color image of the Kinect;
the OpenPose node obtains a two-dimensional joint point image coordinate according to the color image of the Kinect;
the human body three-dimensional joint point mapping and filtering node synchronizes the color image of the Kinect with the point cloud array through the time stamp, searches the synchronized point cloud array for the three-dimensional joint point space coordinates corresponding to the two-dimensional joint point image coordinates, and smooths and predicts the three-dimensional joint point space coordinates by using a median filtering method and the Holt two-parameter exponential smoothing method;
the database communication node acquires the space coordinates of the three-dimensional joint points for rehabilitation evaluation and stores the space coordinates to the three-dimensional joint point motion track database;
the Unity3D communication node acquires three-dimensional joint point space coordinates for controlling the movement of the virtual agent and sends the three-dimensional joint point space coordinates to the scene interactive rehabilitation training virtual scene module;
the ROS master controller realizes the intercommunication of the Kinect point cloud array generation node, the OpenPose node, the human body three-dimensional joint point mapping and filtering node, the Unity3D communication node and the database communication node.
Preferably, the scene interactive rehabilitation training virtual scene module comprises:
a user login interface for a patient to enter basic information;
the progressive rehabilitation training virtual scene generation module is used for providing a target-oriented virtual game environment for different disabled parts and different rehabilitation training stages based on a Unity3D platform;
the virtual agent control module is used for controlling the action of a virtual agent in the rehabilitation training virtual game through the obtained three-dimensional joint point space coordinates and simultaneously displaying the motion parameters in a virtual scene in real time; and
and the feedback module is used for triggering visual and auditory feedback according to the events in the scene and calculating the acting force to be sent to the rehabilitation training robot so as to provide force feedback for the patient.
Compared with the prior art, the invention has the following technical effects:
(1) The invention combines the two-dimensional human skeleton joint points obtained by the OpenPose algorithm with the depth data of the Kinect to obtain three-dimensional human skeleton joint points, which solves, to a certain extent, the problem that the Kinect's built-in skeleton-tracking algorithm fails to recognize, or misrecognizes, the patient when the body is partially occluded by the rehabilitation training robot.
(2) The three-dimensional human joint point data based on OpenPose and Kinect are used to control the actions of a virtual agent in the virtual environment, and the three-dimensional joint point data are stored in a MySQL database, providing quantitative digital data for subsequent rehabilitation evaluation and facilitating rehabilitation-status tracking by the rehabilitation physician.
(3) For different rehabilitation stages of a patient, the invention designs several progressive, rehabilitation-targeted virtual game environments to match the scene interaction requirements of the rehabilitation training robot in different rehabilitation stages.
(4) The invention adopts a modular design: the rehabilitation training scene interaction system is designed independently of the rehabilitation training robot, and the Kinect captures the space coordinates of human joint points as the system's input. The system can thus be conveniently applied to existing rehabilitation training robots, improving the portability and extensibility of the software.
Drawings
FIG. 1 is the skeleton structure of OpenPose;
FIG. 2 is a flow chart of the human body posture estimation method based on OpenPose and Kinect of the present invention;
FIG. 3 is a diagram of the ROS-based node communication software framework;
FIGS. 4(a) to 4(c) are screenshots of the progressive scene interactive rehabilitation training virtual scenes of the invention.
Detailed Description
The technical scheme of the invention is described below more clearly and specifically with reference to the accompanying drawings.
As shown in FIG. 2, the human body posture estimation method based on OpenPose and Kinect includes the following steps:
(1) calibrating a depth camera and a color camera of the Kinect to obtain internal reference matrixes of the color camera and the depth camera and a rotation matrix and a translation vector from a depth camera coordinate system to a color camera coordinate system;
(2) generating a point cloud array of a three-dimensional space by combining a depth image and a color image of Kinect according to the internal reference matrix, the rotation matrix and the translation vector obtained in the step (1);
(3) synchronizing the Kinect color image with the point cloud array through the timestamp;
(4) obtaining a two-dimensional joint point image coordinate according to the Kinect color image by using an OpenPose algorithm;
(5) searching the point cloud array synchronized in the step (3) for the three-dimensional joint point space coordinates corresponding to the two-dimensional joint point image coordinates;
(6) smoothing and predicting the human body three-dimensional joint point space coordinates obtained in the step (5) by using a median filtering method and the Holt two-parameter exponential smoothing method.
In another embodiment, the scene interactive rehabilitation training system based on OpenPose and Kinect comprises human body posture estimation based on OpenPose and Kinect, a progressive scene interactive rehabilitation training virtual scene, and a three-dimensional joint point motion track database. The human body posture estimation part based on OpenPose and Kinect captures the patient's three-dimensional joint point data in real time; the progressive scene interactive rehabilitation training virtual scene part designs progressive rehabilitation training virtual scenes based on the Unity3D platform; and the three-dimensional joint point motion track database part builds a database based on MySQL, so that joint point data can be conveniently stored and recalled.
The workflow of the scene interactive rehabilitation training system based on OpenPose and Kinect comprises the following steps:
step 1: firstly, calibrating a depth camera and a color camera of Kinect to obtain internal reference matrixes of the color camera and the depth camera respectively
Figure BDA0001722360280000061
And
Figure BDA0001722360280000062
wherein (f)x_RGB,fy_RGB) Is the focal length of the color camera, (c)x_RGB,cy_RGB) Is the center point coordinates of the color camera. (f)x_D,fy_D) Is the focal length of the depth camera, (c)x_D,cy_D) Is the center point coordinates of the depth camera. Then, the transformation relation between the color camera and the depth camera is calibrated, and R is setD-RGBAnd tD-RGBRespectively, the rotation matrix and translation vector of the depth camera coordinate system to the color camera coordinate system.
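This calibration is standard stereo camera calibration. Below is a minimal sketch of how it could be carried out with OpenCV; the OpenCV functions are real, while the helper name calibrate_kinect, the board geometry, and the assumption that both image streams share one resolution are illustrative, not part of the patent.

```python
import cv2
import numpy as np

def calibrate_kinect(rgb_images, ir_images, image_size,
                     board_size=(9, 6), square=0.025):
    """Calibrate the Kinect color and depth (IR) cameras with a checkerboard
    and recover the rotation R and translation t from the depth camera
    coordinate system to the color camera coordinate system."""
    # 3D corner positions in the board's own coordinate frame (Z = 0 plane)
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0],
                           0:board_size[1]].T.reshape(-1, 2) * square

    obj_pts, rgb_pts, ir_pts = [], [], []
    for rgb, ir in zip(rgb_images, ir_images):   # simultaneously captured pairs
        ok_rgb, c_rgb = cv2.findChessboardCorners(rgb, board_size)
        ok_ir, c_ir = cv2.findChessboardCorners(ir, board_size)
        if ok_rgb and ok_ir:
            obj_pts.append(objp)
            rgb_pts.append(c_rgb)
            ir_pts.append(c_ir)

    # Internal reference matrices K_RGB and K_D (plus distortion coefficients)
    _, K_rgb, dist_rgb, _, _ = cv2.calibrateCamera(
        obj_pts, rgb_pts, image_size, None, None)
    _, K_d, dist_d, _, _ = cv2.calibrateCamera(
        obj_pts, ir_pts, image_size, None, None)

    # Stereo calibration: R, t map points from the first camera (depth)
    # into the frame of the second camera (color)
    _, _, _, _, _, R_d_rgb, t_d_rgb, _, _ = cv2.stereoCalibrate(
        obj_pts, ir_pts, rgb_pts, K_d, dist_d, K_rgb, dist_rgb,
        image_size, flags=cv2.CALIB_FIX_INTRINSIC)
    return K_rgb, K_d, R_d_rgb, t_d_rgb
```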
Step 2: Generate a point cloud array of the three-dimensional space from the internal reference matrices, rotation matrix, and translation vector obtained in step 1, combined with the depth image and the color image of the Kinect. The specific steps are as follows.

Step 2.1: According to the internal reference matrix of the depth camera and the pinhole imaging principle, map the two-dimensional image coordinates of the depth image to three-dimensional space coordinates in the depth camera coordinate system. Let a point on the depth image be $(x_D, y_D)$ with depth value $depth(x_D, y_D)$; then the three-dimensional coordinates $(X_D, Y_D, Z_D)$ of the point in the depth camera coordinate system are:

$$X_D=\frac{(x_D-c_{x\_D})\,Z_D}{f_{x\_D}},\qquad Y_D=\frac{(y_D-c_{y\_D})\,Z_D}{f_{y\_D}},\qquad Z_D=depth(x_D,y_D) \tag{1}$$

Step 2.2: Convert the three-dimensional coordinates $(X_D, Y_D, Z_D)$ in the depth camera coordinate system to the three-dimensional coordinates $(X_{RGB}, Y_{RGB}, Z_{RGB})$ in the color camera coordinate system:

$$\begin{bmatrix}X_{RGB}\\ Y_{RGB}\\ Z_{RGB}\end{bmatrix}=R_{D-RGB}\begin{bmatrix}X_D\\ Y_D\\ Z_D\end{bmatrix}+t_{D-RGB} \tag{2}$$

Step 2.3: Further project the three-dimensional coordinates $(X_{RGB}, Y_{RGB}, Z_{RGB})$ in the color camera coordinate system onto the two-dimensional color image plane to obtain the two-dimensional color image coordinates $(x_{RGB}, y_{RGB})$:

$$x_{RGB}=\frac{f_{x\_RGB}\,X_{RGB}}{Z_{RGB}}+c_{x\_RGB},\qquad y_{RGB}=\frac{f_{y\_RGB}\,Y_{RGB}}{Z_{RGB}}+c_{y\_RGB} \tag{3}$$

Take the RGB value of the point at $(x_{RGB}, y_{RGB})$ in the color image as the RGB value of the three-dimensional point $(X_{RGB}, Y_{RGB}, Z_{RGB})$.

Step 2.4: Repeat steps 2.1 to 2.3 for each point in the depth image to generate a point cloud array of the three-dimensional space in XYZRGB format.
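Steps 2.1 to 2.4 amount to a per-pixel back-projection, a rigid transform, and a re-projection, which vectorize naturally. Below is a NumPy sketch under the assumption of a metric depth image and the calibration results of step 1; the function name and array layout are illustrative. Each row of the returned array is one XYZRGB point, matching the point cloud format described above.

```python
import numpy as np

def generate_point_cloud(depth, rgb, K_d, K_rgb, R, t):
    """Fuse a depth image and a color image into an XYZRGB point cloud.

    depth : (H, W) metric depth image from the depth camera
    rgb   : (H', W', 3) color image
    K_d, K_rgb : 3x3 internal reference matrices of depth and color cameras
    R, t  : rotation (3, 3) and translation (3,) from depth to color frame
    """
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))

    # Step 2.1: back-project depth pixels into the depth camera frame
    Z = depth
    X = (xs - K_d[0, 2]) * Z / K_d[0, 0]
    Y = (ys - K_d[1, 2]) * Z / K_d[1, 1]
    pts_d = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)

    # Step 2.2: rigid transform into the color camera frame
    pts_rgb = pts_d @ R.T + t

    # Step 2.3: project onto the color image plane and sample RGB values
    with np.errstate(divide='ignore', invalid='ignore'):
        u = np.round(K_rgb[0, 0] * pts_rgb[:, 0] / pts_rgb[:, 2] + K_rgb[0, 2])
        v = np.round(K_rgb[1, 1] * pts_rgb[:, 1] / pts_rgb[:, 2] + K_rgb[1, 2])
    valid = ((pts_rgb[:, 2] > 0)
             & (u >= 0) & (u < rgb.shape[1])
             & (v >= 0) & (v < rgb.shape[0]))
    u, v = u[valid].astype(int), v[valid].astype(int)

    # Step 2.4: assemble the (N, 6) XYZRGB array over all valid pixels
    return np.hstack([pts_rgb[valid], rgb[v, u].astype(float)])
```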
Step 3: The intercommunication of the Kinect node, the OpenPose node, the human body three-dimensional joint point mapping and filtering node, the Unity3D communication node, and the database communication node is realized through the ROS platform, implementing the mapping of two-dimensional human joint points onto the synchronized point cloud array to obtain three-dimensional human joint points. Meanwhile, the three-dimensional human joint data are sent to the progressive scene interactive rehabilitation training virtual scene module and to the three-dimensional joint point motion track database module, respectively.
The ROS-based node communication software framework is shown in fig. 3, and the specific steps are as follows:
step 3.1: the software framework can be divided into three layers, namely a sensing layer, an attitude estimation and data storage layer and an application layer. The Kinect nodes of the attitude estimation and data storage layer are communicated with the Kinect nodes of the perception layer and calculate to generate a point cloud array, the human body three-dimensional joint point mapping and filtering nodes on the same layer subscribe a color image and a point cloud array topic issued by the Kinect nodes, and synchronization of the Kinect nodes and the point cloud array topic is completed through a timestamp. Step 3.2: and after the human body three-dimensional joint point mapping and filtering nodes are synchronized, the color image of the Kinect is sent to the OpenPose node in a request mode, and after a two-dimensional joint point image coordinate response returned by the OpenPose node is obtained, the three-dimensional joint point space coordinate corresponding to the two-dimensional joint point image coordinate is searched in the synchronized point cloud array. And finally, finishing filtering the space coordinates of the three-dimensional joint points by using a median filtering method and a Hott two-parameter exponential smoothing method.
Step 3.3: After the human body three-dimensional joint point mapping and filtering node completes the smoothing filter, the space coordinates of the 18 human body three-dimensional joint points shown in FIG. 1 are published as a topic, and the nodes subscribing to this topic extract the joint point information of interest. The database communication node selects the joint point motion track information relevant to rehabilitation evaluation and stores it in the three-dimensional joint point motion track database, providing quantitative digital data for subsequent evaluation of the rehabilitation effect. The Unity3D communication node selects the joint point information used to control the motion of the virtual agent and sends it to the progressive scene interactive rehabilitation training virtual scene by means of UDP communication.
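The hand-off to Unity3D in step 3.3 is plain UDP. A minimal sender sketch follows; the address, port, and JSON payload layout are assumptions for illustration, since the patent does not specify the wire format.

```python
import json
import socket

UNITY_ADDR = ('127.0.0.1', 8888)   # hypothetical Unity3D endpoint
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_joints(joints):
    """joints: dict mapping joint name to smoothed (x, y, z) in meters."""
    sock.sendto(json.dumps(joints).encode('utf-8'), UNITY_ADDR)

send_joints({'left_knee': [0.12, -0.40, 1.85],
             'right_knee': [0.10, -0.41, 1.83]})
```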
Step 4: Smooth and predict the three-dimensional joint point space coordinates obtained in step 3 by using a median filtering method and the Holt two-parameter exponential smoothing method.

Step 4.1: The median filtering method can be expressed by equation (4). The point cloud array coordinates (X, Y, Z) of a joint point have n points within a neighborhood window S, with coordinates $(U_i, V_i, W_i)$, $i = 1, \ldots, n$. The coordinates of the joint point are modified to the per-axis median of the coordinates of the n points, i.e.

$$X=\mathrm{med}\{U_i\},\qquad Y=\mathrm{med}\{V_i\},\qquad Z=\mathrm{med}\{W_i\} \tag{4}$$

Step 4.2: The Holt two-parameter exponential smoothing method comprises two basic smoothing formulas and a prediction model, namely:

Smoothing formulas:

$$S_t=\alpha P_t+(1-\alpha)(S_{t-1}+b_{t-1}) \tag{5}$$
$$b_t=\beta(S_t-S_{t-1})+(1-\beta)b_{t-1}$$

Prediction model:

$$F_{t+m}=S_t+b_t\,m \tag{6}$$

where α and β are smoothing coefficients taking values in (0, 1). The delay and mean-square-error characteristics are observed by plotting curves of the predicted and actual values, and α and β are adjusted to select the optimal prediction model, completing the filtering of the three-dimensional joint point coordinates.

For the time series of human body three-dimensional joint point space coordinates $P_t=\{P_1, P_2, P_3, \ldots\}$: $P_t$ is the three-dimensional joint point coordinate of the t-th period of the time series, $S_t$ is the smoothed value of the t-th period, $b_t$ is the smoothed trend value of the t-th period, m is the number of lead periods to predict, and $F_{t+m}$ is the predicted value for period t+m. $S_1$ is initialized to $P_1$ and $b_1$ to $P_2-P_1$; subsequent $S_t$ and $b_t$ are obtained by iterating from the preceding $S_{t-1}$ and $b_{t-1}$, and the predicted value $F_{t+m}$ of period t+m is calculated from $S_t$ and $b_t$ of the t-th period.
Step 5: The user logs in at the start interface of the progressive scene interactive rehabilitation training virtual scene and enters basic information.
Step 6: The rehabilitation training system provides target-oriented games for different disabled parts and different rehabilitation training stages. For example, different rehabilitation training virtual games are provided for the elbow joint and the knee joint, and different virtual game environments are provided for passive rehabilitation training in the early stage of rehabilitation, active rehabilitation training in the middle stage, and resistance rehabilitation training in the later stage. Taking the knee joint as an example, FIG. 4(a) shows a bicycle riding scene for early-stage passive rehabilitation training, FIG. 4(b) a lakeside walking scene for middle-stage active rehabilitation training of the lower limbs, and FIG. 4(c) a hill-climbing scene for later-stage resistance rehabilitation training of the lower limbs.
In passive rehabilitation training, the rehabilitation robot drives the patient's lower limbs, and the angular velocity of the patient's knee joint controls the riding speed of the virtual character. Meanwhile, leftward and rightward hand-wave gestures are recognized from the motion tracks of the left and right arm joints and used to steer the bicycle left or right, so as to collect gold coins in the game scene and earn bonus points. In active rehabilitation training, the angular velocity of the knee joint during active walking is mapped to the walking speed of the virtual character in the scene. In resistance rehabilitation training, whether the virtual character is climbing is judged from the height of the ground it stands on; if it is climbing, a command is sent to the rehabilitation robot requesting it to provide resistance feedback to the patient.
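The knee angular velocity that drives the riding or walking speed can be derived from three of the tracked joint points. Below is a sketch assuming the knee angle is the hip-knee-ankle angle and that the gain k mapping angular velocity to avatar speed is scene-specific; both assumptions are illustrative.

```python
import numpy as np

def joint_angle(hip, knee, ankle):
    """Knee flexion angle (radians) from three 3D joint positions."""
    u = np.asarray(hip, dtype=float) - np.asarray(knee, dtype=float)
    v = np.asarray(ankle, dtype=float) - np.asarray(knee, dtype=float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def angular_velocity(angle_now, angle_prev, dt):
    """Finite-difference angular velocity between two frames."""
    return (angle_now - angle_prev) / dt

# e.g. at 30 fps, map knee angular velocity to the avatar's riding speed
k = 0.8                                     # scene-specific gain (illustrative)
omega = angular_velocity(1.45, 1.38, 1.0 / 30.0)
ride_speed = k * omega
```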
Step 7: The smoothed three-dimensional human joint point information obtained in step 4 controls the movement, rotation, animation playback, and other behaviors of the virtual agent in the rehabilitation training virtual game environment of step 6; meanwhile, motion parameters such as the joint angle change curve, the reachable space of the limb, and the movement speed are displayed in the virtual training scene in real time in the form of charts.
Step 8: Visual and auditory feedback is triggered by events such as collisions between the virtual agent and objects in the virtual scene, and the collision force is calculated and sent to the rehabilitation training robot so as to provide force feedback to the patient.
Step 9: During rehabilitation training, the smoothed three-dimensional joint point data obtained in step 4 are stored in real time in a database built on the MySQL platform; when the data are stored, the rehabilitation training data are linked to the corresponding patient according to the basic information obtained in step 5. After training, historical rehabilitation training data can be retrieved as needed.
The technical idea of the present invention is described in the above technical solutions, and the protection scope of the present invention is not limited thereto, and any changes and modifications made to the above technical solutions according to the technical essence of the present invention belong to the protection scope of the technical solutions of the present invention.

Claims (4)

1. A human body posture estimation method based on OpenPose and Kinect is characterized by comprising the following steps:
(1) calibrating a depth camera and a color camera of the Kinect to obtain internal reference matrixes of the color camera and the depth camera and a rotation matrix and a translation vector from a depth camera coordinate system to a color camera coordinate system;
wherein the internal reference matrices of the color camera and the depth camera are obtained, respectively, as

$$K_{RGB}=\begin{bmatrix} f_{x\_RGB} & 0 & c_{x\_RGB}\\ 0 & f_{y\_RGB} & c_{y\_RGB}\\ 0 & 0 & 1 \end{bmatrix}
\qquad\text{and}\qquad
K_{D}=\begin{bmatrix} f_{x\_D} & 0 & c_{x\_D}\\ 0 & f_{y\_D} & c_{y\_D}\\ 0 & 0 & 1 \end{bmatrix}$$

wherein $(f_{x\_RGB}, f_{y\_RGB})$ is the focal length of the color camera, $(c_{x\_RGB}, c_{y\_RGB})$ is the center point coordinate of the color camera, $(f_{x\_D}, f_{y\_D})$ is the focal length of the depth camera, and $(c_{x\_D}, c_{y\_D})$ is the center point coordinate of the depth camera; and the rotation matrix and translation vector obtained from the depth camera coordinate system to the color camera coordinate system are $R_{D-RGB}$ and $t_{D-RGB}$, respectively;
(2) Generating a point cloud array of a three-dimensional space by combining a depth image and a color image of Kinect according to the internal reference matrix, the rotation matrix and the translation vector obtained in the step (1);
the generating of the point cloud array of the three-dimensional space comprises performing the following steps:
I. according to the internal reference matrix of the depth camera, mapping the two-dimensional image coordinates of the depth image to three-dimensional space coordinates in the depth camera coordinate system: letting a point on the depth image be $(x_D, y_D)$ with depth value $depth(x_D, y_D)$, the three-dimensional coordinates $(X_D, Y_D, Z_D)$ of the point in the depth camera coordinate system are:

$$X_D=\frac{(x_D-c_{x\_D})\,Z_D}{f_{x\_D}},\qquad Y_D=\frac{(y_D-c_{y\_D})\,Z_D}{f_{y\_D}},\qquad Z_D=depth(x_D,y_D)$$

II. converting the three-dimensional coordinates $(X_D, Y_D, Z_D)$ in the depth camera coordinate system to the three-dimensional coordinates $(X_{RGB}, Y_{RGB}, Z_{RGB})$ in the color camera coordinate system:

$$\begin{bmatrix}X_{RGB}\\ Y_{RGB}\\ Z_{RGB}\end{bmatrix}=R_{D-RGB}\begin{bmatrix}X_D\\ Y_D\\ Z_D\end{bmatrix}+t_{D-RGB}$$

III. further projecting the three-dimensional coordinates $(X_{RGB}, Y_{RGB}, Z_{RGB})$ in the color camera coordinate system onto the two-dimensional color image plane to obtain the two-dimensional color image coordinates $(x_{RGB}, y_{RGB})$:

$$x_{RGB}=\frac{f_{x\_RGB}\,X_{RGB}}{Z_{RGB}}+c_{x\_RGB},\qquad y_{RGB}=\frac{f_{y\_RGB}\,Y_{RGB}}{Z_{RGB}}+c_{y\_RGB}$$

taking the RGB value of the point at $(x_{RGB}, y_{RGB})$ in the color image as the RGB value of the three-dimensional point $(X_{RGB}, Y_{RGB}, Z_{RGB})$;

IV. repeating steps I to III for each point in the depth image, thereby generating a point cloud array of the three-dimensional space in XYZRGB format;
(3) synchronizing the Kinect color image with the point cloud array through the timestamp;
(4) obtaining a two-dimensional joint point image coordinate according to the Kinect color image by using an OpenPose algorithm;
(5) searching the point cloud array synchronized in the step (3) for the three-dimensional joint point space coordinates corresponding to the two-dimensional joint point image coordinates;
(6) smoothing and predicting the human body three-dimensional joint point space coordinates obtained in the step (5) by using a median filtering method and the Holt two-parameter exponential smoothing method.
2. The OpenPose and Kinect-based human body posture estimation method according to claim 1, wherein the smoothing and prediction of the space coordinates of the three-dimensional joint points of the human body by using a median filtering method and the Holt two-parameter exponential smoothing method in step (6) comprises:

the point cloud array coordinates (X, Y, Z) of a joint point have n points within a neighborhood window S, with coordinates $(U_i, V_i, W_i)$, $i = 1, \ldots, n$; the coordinates of the joint point are modified to the per-axis median of the coordinates of the n points, i.e.

$$X=\mathrm{med}\{U_i\},\qquad Y=\mathrm{med}\{V_i\},\qquad Z=\mathrm{med}\{W_i\}$$

the Holt two-parameter exponential smoothing method comprises two basic smoothing formulas and a prediction model, namely:

smoothing formulas:

$$S_t=\alpha P_t+(1-\alpha)(S_{t-1}+b_{t-1})$$
$$b_t=\beta(S_t-S_{t-1})+(1-\beta)b_{t-1}$$

prediction model:

$$F_{t+m}=S_t+b_t\,m$$

wherein α and β are smoothing coefficients taking values in (0, 1); the delay and mean-square-error characteristics are observed by plotting curves of the predicted and actual values, and α and β are adjusted to select the optimal prediction model, completing the filtering of the three-dimensional joint point coordinates;

for the time series of human body three-dimensional joint point space coordinates $P_t=\{P_1, P_2, P_3, \ldots\}$, $P_t$ is the three-dimensional joint point coordinate of the t-th period of the time series, $S_t$ is the smoothed value of the t-th period, $b_t$ is the smoothed trend value of the t-th period, m is the number of lead periods to predict, and $F_{t+m}$ is the predicted value for period t+m; $S_1$ is initialized to $P_1$ and $b_1$ to $P_2-P_1$; subsequent $S_t$ and $b_t$ are obtained by iterating from the preceding $S_{t-1}$ and $b_{t-1}$, and the predicted value $F_{t+m}$ of period t+m is calculated from $S_t$ and $b_t$ of the t-th period.
3. A scene interactive rehabilitation training system based on OpenPose and Kinect is characterized by comprising:
the human body posture estimation module based on OpenPose and Kinect identifies three-dimensional joint point data of a patient in real time according to a depth image and a color image of the Kinect;
the scene interactive rehabilitation training virtual scene module is used for building a progressive rehabilitation training virtual scene based on a Unity3D platform, and realizing the functions of motion control of a virtual agent, drawing of a joint point chart, visual and auditory feedback, calculation of collision acting force and user basic information input; and
the three-dimensional joint point motion track database is used for storing the basic information of the user and the space coordinates of the three-dimensional joint points;
the human body posture estimation module based on OpenPose and Kinect comprises a Kinect point cloud array generation node, an OpenPose node, a human body three-dimensional joint point mapping and filtering node, an ROS master controller, a Unity3D communication node and a database communication node,
the Kinect point cloud array generating node generates a point cloud array of a three-dimensional space according to the internal reference matrix of the Kinect depth camera and the color camera, the rotation matrix and the translation vector from the depth camera coordinate system to the color camera coordinate system, and the depth image and the color image of the Kinect;
the OpenPose node obtains a two-dimensional joint point image coordinate according to the color image of the Kinect;
the human body three-dimensional joint point mapping and filtering node synchronizes the color image of the Kinect with the point cloud array through the time stamp, searches the synchronized point cloud array for the three-dimensional joint point space coordinates corresponding to the two-dimensional joint point image coordinates, and smooths and predicts the three-dimensional joint point space coordinates by using a median filtering method and the Holt two-parameter exponential smoothing method;
the database communication node acquires the space coordinates of the three-dimensional joint points for rehabilitation evaluation and stores the space coordinates to the three-dimensional joint point motion track database;
the Unity3D communication node acquires three-dimensional joint point space coordinates for controlling the movement of the virtual agent and sends the three-dimensional joint point space coordinates to the scene interactive rehabilitation training virtual scene module;
the ROS master controller realizes the intercommunication of the Kinect point cloud array generation node, the OpenPose node, the human body three-dimensional joint point mapping and filtering node, the Unity3D communication node and the database communication node.
4. The OpenPose and Kinect based contextual interactive rehabilitation training system of claim 3, wherein the contextual interactive rehabilitation training virtual scene module comprises:
a user login interface for a patient to enter basic information;
the progressive rehabilitation training virtual scene generation module is used for providing a target-oriented virtual game environment for different disabled parts and different rehabilitation training stages based on a Unity3D platform;
the virtual agent control module is used for controlling the action of a virtual agent in the rehabilitation training virtual game through the obtained three-dimensional joint point space coordinates and simultaneously displaying the motion parameters in a virtual scene in real time; and
and the feedback module is used for triggering visual and auditory feedback according to the events in the scene and calculating the acting force to be sent to the rehabilitation training robot so as to provide force feedback for the patient.
CN201810737327.5A 2018-07-06 2018-07-06 Human body posture estimation method based on OpenPose and Kinect and rehabilitation training system Active CN109003301B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810737327.5A CN109003301B (en) 2018-07-06 2018-07-06 Human body posture estimation method based on OpenPose and Kinect and rehabilitation training system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810737327.5A CN109003301B (en) 2018-07-06 2018-07-06 Human body posture estimation method based on OpenPose and Kinect and rehabilitation training system

Publications (2)

Publication Number | Publication Date
CN109003301A (en) | 2018-12-14
CN109003301B (en) | 2022-03-15

Family

ID=64598431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810737327.5A Active CN109003301B (en) 2018-07-06 2018-07-06 Human body posture estimation method based on OpenPose and Kinect and rehabilitation training system

Country Status (1)

Country Link
CN (1) CN109003301B (en)

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109828658B (en) * 2018-12-17 2022-03-08 彭晓东 Man-machine co-fusion remote situation intelligent sensing system
CN109710071B (en) * 2018-12-26 2022-05-17 青岛小鸟看看科技有限公司 Screen control method and device
CN111382613B (en) * 2018-12-28 2024-05-07 中国移动通信集团辽宁有限公司 Image processing method, device, equipment and medium
CN109740513B (en) * 2018-12-29 2020-11-27 青岛小鸟看看科技有限公司 Action behavior analysis method and device
CN109949368B (en) * 2019-03-14 2020-11-06 郑州大学 Human body three-dimensional attitude estimation method based on image retrieval
CN110009717B (en) * 2019-04-01 2020-11-03 江南大学 Animation figure binding recording system based on monocular depth map
US11107236B2 (en) * 2019-04-22 2021-08-31 Dag Michael Peter Hansson Projected augmented reality interface with pose tracking for directing manual processes
CN110070573B (en) * 2019-04-25 2021-07-06 北京卡路里信息技术有限公司 Joint map determination method, device, equipment and storage medium
CN110097024B (en) * 2019-05-13 2020-12-25 河北工业大学 Human body posture visual recognition method of transfer, transportation and nursing robot
CN110379480B (en) * 2019-07-18 2022-09-16 合肥工业大学 Rehabilitation training evaluation method and system
CN110480634B (en) * 2019-08-08 2020-10-02 北京科技大学 Arm guide motion control method for mechanical arm motion control
CN110503077B (en) * 2019-08-29 2022-03-11 郑州大学 Real-time human body action analysis method based on vision
CN110544302A (en) * 2019-09-06 2019-12-06 广东工业大学 Human body action reconstruction system and method based on multi-view vision and action training system
CN110728739B (en) * 2019-09-30 2023-04-14 杭州师范大学 Virtual human control and interaction method based on video stream
GB2589843B (en) * 2019-11-19 2022-06-15 Move Ai Ltd Real-time system for generating 4D spatio-temporal model of a real-world environment
CN111199576B (en) * 2019-12-25 2023-08-18 中国人民解放军军事科学院国防科技创新研究院 Outdoor large-range human body posture reconstruction method based on mobile platform
CN111145865A (en) * 2019-12-26 2020-05-12 中国科学院合肥物质科学研究院 Vision-based hand fine motion training guidance system and method
CN111241936A (en) * 2019-12-31 2020-06-05 浙江工业大学 Human body posture estimation method based on depth and color image feature fusion
CN111259749A (en) * 2020-01-10 2020-06-09 上海大学 Real-time human body posture recognition method in complex environment based on bidirectional LSTM
CN111553229B (en) * 2020-04-21 2021-04-16 清华大学 Worker action identification method and device based on three-dimensional skeleton and LSTM
CN111597976A (en) * 2020-05-14 2020-08-28 杭州相芯科技有限公司 Multi-person three-dimensional attitude estimation method based on RGBD camera
CN111798995A (en) * 2020-06-28 2020-10-20 四川大学 OpenPose algorithm-based postoperative rehabilitation method and data acquisition device support thereof
CN111754620B (en) * 2020-06-29 2024-04-26 武汉市东旅科技有限公司 Human body space motion conversion method, conversion device, electronic equipment and storage medium
CN111968723A (en) * 2020-07-30 2020-11-20 宁波羽扬科技有限公司 Kinect-based upper limb active rehabilitation training method
CN112109090A (en) * 2020-09-21 2020-12-22 金陵科技学院 Multi-sensor fusion search and rescue robot system
CN112215172A (en) * 2020-10-17 2021-01-12 西安交通大学 Human body prone position three-dimensional posture estimation method fusing color image and depth information
CN112101326A (en) * 2020-11-18 2020-12-18 北京健康有益科技有限公司 Multi-person posture recognition method and device
CN112617810A (en) * 2021-01-04 2021-04-09 重庆大学 Virtual scene parameter self-adaption method for restraining upper limb shoulder elbow rehabilitation compensation
CN113041092B (en) * 2021-03-11 2022-12-06 山东大学 Remote rehabilitation training system and method based on multi-sensor information fusion
CN112906653A (en) * 2021-03-26 2021-06-04 河北工业大学 Multi-person interactive exercise training and evaluation system
CN113100755B (en) * 2021-03-26 2023-01-24 河北工业大学 Limb rehabilitation training and evaluating system based on visual tracking control
CN113192206B (en) * 2021-04-28 2023-04-07 华南理工大学 Three-dimensional model real-time reconstruction method and device based on target detection and background removal
CN113505735B (en) * 2021-05-26 2023-05-02 电子科技大学 Human body key point stabilization method based on hierarchical filtering
CN114271814A (en) * 2021-12-24 2022-04-05 安徽大学 Kinect-based rehabilitation training and evaluation method and system for stroke patient
CN114359328B (en) * 2021-12-28 2022-08-12 山东省人工智能研究院 Motion parameter measuring method utilizing single-depth camera and human body constraint
CN115115810B (en) * 2022-06-29 2023-06-02 广东工业大学 Multi-person cooperative focus positioning and enhanced display method based on space gesture capturing
CN115496863B (en) * 2022-11-01 2023-03-21 之江实验室 Short video generation method and system for scene interaction of movie and television intelligent creation
CN117095472B (en) * 2023-10-18 2024-02-20 广州华夏汇海科技有限公司 Swimming foul action judging method and system based on AI

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489214A (en) * 2013-09-10 2014-01-01 北京邮电大学 Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system
CN104731307A (en) * 2013-12-20 2015-06-24 孙伯元 Somatic action identifying method and man-machine interaction device
CN105740450A (en) * 2016-02-03 2016-07-06 浙江大学 Multi-Kinect based 3D human body posture database construction method
CN106779045A (en) * 2016-11-30 2017-05-31 东南大学 Rehabilitation training robot system and its application method based on virtual scene interaction
WO2017158569A1 (en) * 2016-03-18 2017-09-21 Tata Consultancy Services Limited Kinect based balance analysis using single leg stance (sls) exercise
CN107908288A (en) * 2017-11-30 2018-04-13 沈阳工业大学 A kind of quick human motion recognition method towards human-computer interaction

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8953909B2 (en) * 2006-01-21 2015-02-10 Elizabeth T. Guckenberger System, method, and computer software code for mimic training
US20140081659A1 (en) * 2012-09-17 2014-03-20 Depuy Orthopaedics, Inc. Systems and methods for surgical and interventional planning, support, post-operative follow-up, and functional recovery tracking

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489214A (en) * 2013-09-10 2014-01-01 北京邮电大学 Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system
CN104731307A (en) * 2013-12-20 2015-06-24 孙伯元 Somatic action identifying method and man-machine interaction device
CN105740450A (en) * 2016-02-03 2016-07-06 浙江大学 Multi-Kinect based 3D human body posture database construction method
WO2017158569A1 (en) * 2016-03-18 2017-09-21 Tata Consultancy Services Limited Kinect based balance analysis using single leg stance (sls) exercise
CN106779045A (en) * 2016-11-30 2017-05-31 东南大学 Rehabilitation training robot system and its application method based on virtual scene interaction
CN107908288A (en) * 2017-11-30 2018-04-13 沈阳工业大学 A kind of quick human motion recognition method towards human-computer interaction

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
""互联网+"背景下老年骨折病人康复训练研究进展";赖榕霏 等;《护理研究》;20170211;第31卷(第4期);388-391 *
"Real-time human gesture grading based on OpenPose";S. Qiao 等;《IEEE》;20180228;1-6 *
"一种基于Kinect的虚拟现实姿态交互工具";鲁明 等;《系统仿真学报》;20130908;第25卷(第9期);2124-2130 *
"基于振动触觉的移动机器人为人导航系统";刘杰;《东南大学学报(自然科学版) 》;20160920;第46卷(第5期);1013-1019 *

Also Published As

Publication number Publication date
CN109003301A (en) 2018-12-14

Similar Documents

Publication Publication Date Title
CN109003301B (en) Human body posture estimation method based on OpenPose and Kinect and rehabilitation training system
CN110570455B (en) Whole body three-dimensional posture tracking method for room VR
CN111460875B (en) Image processing method and apparatus, image device, and storage medium
CN110930483B (en) Role control method, model training method and related device
WO2021169839A1 (en) Action restoration method and device based on skeleton key points
US20170038829A1 (en) Social interaction for remote communication
CN104057450B (en) A kind of higher-dimension motion arm teleoperation method for service robot
KR102065687B1 (en) Wireless wrist computing and control device and method for 3d imaging, mapping, networking and interfacing
CN106527709B (en) Virtual scene adjusting method and head-mounted intelligent device
KR20220025023A (en) Animation processing method and apparatus, computer storage medium, and electronic device
WO2012106978A1 (en) Method for controlling man-machine interaction and application thereof
CN111553968A (en) Method for reconstructing animation by three-dimensional human body
CN107930048B (en) Space somatosensory recognition motion analysis system and motion analysis method
CN103207667A (en) Man-machine interaction control method and application thereof
CN105107200A (en) Face change system and method based on real-time deep somatosensory interaction and augmented reality technology
CN102184342B (en) Virtual-real fused hand function rehabilitation training system and method
CN113221726A (en) Hand posture estimation method and system based on visual and inertial information fusion
CN109395375A (en) A kind of 3d gaming method of interface interacted based on augmented reality and movement
CN108908353B (en) Robot expression simulation method and device based on smooth constraint reverse mechanical model
Hwang et al. Monoeye: Monocular fisheye camera-based 3d human pose estimation
CN110348359A (en) The method, apparatus and system of hand gestures tracking
CN109407826A (en) Ball game analogy method, device, storage medium and electronic equipment
CN116248920A (en) Virtual character live broadcast processing method, device and system
CN116485953A (en) Data processing method, device, equipment and readable storage medium
CN115222847A (en) Animation data generation method and device based on neural network and related products

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant