CN109086755B - Virtual reality display method and system of rehabilitation robot based on image segmentation - Google Patents

Virtual reality display method and system of rehabilitation robot based on image segmentation

Info

Publication number
CN109086755B
CN109086755B CN201811319242.1A
Authority
CN
China
Prior art keywords
image
patient
image segmentation
model
virtual reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811319242.1A
Other languages
Chinese (zh)
Other versions
CN109086755A (en)
Inventor
徐胤
汤雪华
王永波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Electric Group Corp
Original Assignee
Shanghai Electric Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Electric Group Corp filed Critical Shanghai Electric Group Corp
Priority to CN201811319242.1A priority Critical patent/CN109086755B/en
Publication of CN109086755A publication Critical patent/CN109086755A/en
Application granted granted Critical
Publication of CN109086755B publication Critical patent/CN109086755B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses a virtual reality display method and system of a rehabilitation robot based on image segmentation. The virtual reality display method comprises the following steps: acquiring facial features of a patient; acquiring shooting information containing the patient while the patient performs rehabilitation exercise with the rehabilitation robot, the shooting information comprising a plurality of frames of shot pictures; segmenting the content representing the same object in each frame of shot picture based on an image segmentation model to generate a plurality of block images; identifying, from the plurality of block images, a target block image containing the facial features; and generating VR data of the patient according to the target block image and displaying the VR data. When the virtual reality display module of the rehabilitation robot extracts the moving-limb image of the patient during rehabilitation training, the invention effectively eliminates interference from external factors, so that the virtual reality display shows only the movement of the designated patient.

Description

Virtual reality display method and system of rehabilitation robot based on image segmentation
Technical Field
The invention belongs to the field of rehabilitation instruments, and particularly relates to a virtual reality display method and system of a rehabilitation robot based on image segmentation.
Background
As a rehabilitation medical device, the rehabilitation robot assists the patient in rehabilitation training so that rehabilitation proceeds scientifically and effectively, with the aim of restoring the patient's motor function. It strengthens and encourages the patient's active intention to move, helps keep the patient mentally engaged, and reinforces the neuromuscular motor pathways. Meanwhile, with the application of VR (virtual reality) technology, VR equipment can be added to the rehabilitation robot: skeleton images of the patient's limbs are collected during training and the joint points are marked, so that the patient can see intuitively how his or her limbs move during rehabilitation training. However, during rehabilitation training a rehabilitation physician usually assists at the patient's side, and other people may pass close by. When existing VR equipment collects images of the patient's limbs, it cannot effectively identify its subject and is easily disturbed by the limbs of other people, so the extracted skeleton information often contains the skeleton information of people other than the patient. This affects the display of the VR module and results in a poor user experience.
Disclosure of Invention
The invention aims to overcome the defect in the prior art that the VR module of a rehabilitation robot easily collects information about people other than the patient, making the VR display unfriendly and the user experience poor, and provides a virtual reality display method and system of a rehabilitation robot based on image segmentation.
The invention solves the technical problems through the following technical scheme:
a virtual reality display method of a rehabilitation robot based on image segmentation comprises the following steps:
acquiring facial features of a patient;
acquiring shooting information containing the patient in the process of rehabilitation movement of the patient by using a rehabilitation robot, wherein the shooting information comprises a plurality of frames of shooting pictures;
segmenting the content representing the same object in each frame of shot picture based on an image segmentation model to generate a plurality of block images;
identifying a target block image including the facial features from the plurality of block images;
and generating VR data of the patient according to the target block image and displaying the VR data.
Preferably, the step of acquiring facial features of the patient specifically comprises:
acquiring a facial image of a patient;
identifying the facial image to obtain the facial features, wherein the facial features comprise at least one of the eyes, nose tip, mouth corners, eyebrows and facial contour.
Preferably, the step of segmenting the content representing the same object in each frame of shot picture based on the image segmentation model to generate a plurality of block images specifically comprises:
presetting an image element set, wherein the image element set comprises a plurality of standard images, and each standard image comprises an object;
inputting the standard images into an FCN (fully convolutional network) model for training, and outputting the image segmentation model and model parameters of the image segmentation model;
and segmenting the content representing the same object in each frame of shot picture based on the model parameters and the image segmentation model and generating the plurality of block images.
Preferably, after the step of inputting the standard image into an FCN model for training and outputting the image segmentation model and the model parameters of the image segmentation model, the virtual reality display method further includes:
inputting the standard image, the model parameters and the image segmentation model into a CRF (conditional random field) algorithm to obtain an optimized image segmentation model and optimized model parameters;
in the step of segmenting each frame of shot picture based on the model parameters and the image segmentation model and generating the plurality of block images, each frame of shot picture is segmented based on the optimized model parameters and the optimized image segmentation model.
Preferably, the step of generating VR data of the patient according to the target block image and displaying the VR data specifically includes:
extracting skeleton information of a patient in the target block image;
and generating the VR data based on the skeleton information and displaying the VR data.
A virtual reality display system of a rehabilitation robot based on image segmentation comprises a facial feature acquisition module, a camera module, an image segmentation module, an image recognition module and a VR module;
the facial feature acquisition module is used for acquiring facial features of a patient;
the camera module is used for acquiring shooting information containing the patient in the process that the patient uses the rehabilitation robot to perform rehabilitation movement, and the shooting information comprises a plurality of frames of shooting pictures;
the image segmentation module is used for segmenting the content representing the same object in each frame of shot picture based on an image segmentation model to generate a plurality of block images;
the image identification module is used for identifying a target block image containing the facial features from the plurality of block images;
the VR module is used for generating VR data of the patient according to the target block image and displaying the VR data.
Preferably, the facial feature acquisition module comprises a facial image acquisition unit and a facial feature recognition unit;
the facial image acquisition unit is used for acquiring a facial image of a patient;
the facial feature recognition unit is used for recognizing the facial image to obtain the facial features, and the facial features comprise at least one of eyes, nose tips, mouth corners, eyebrows and facial contours.
Preferably, the image segmentation module comprises a preset unit, a training unit and a segmentation unit;
the preset unit is used for presetting an image element set, wherein the image element set comprises a plurality of standard images, and each standard image comprises an object;
the training unit is used for inputting the standard image into an FCN model for training and outputting the image segmentation model and model parameters of the image segmentation model;
the segmentation unit is configured to segment content representing the same object in each frame of the captured picture based on the model parameters and the image segmentation model, and generate the block images.
Preferably, the image segmentation module further comprises an optimization unit;
the optimization unit is used for inputting the standard image, the model parameters and the image segmentation model into a CRF algorithm to obtain an optimized image segmentation model and optimized model parameters;
the segmentation unit is used for segmenting each frame of shot picture based on the optimized model parameters and the optimized image segmentation model.
Preferably, the VR module includes a skeleton information extraction unit, a VR data generation unit, and a display unit;
the skeleton information extraction unit is used for extracting skeleton information of the patient in the target block image;
the VR data generation unit is used for generating the VR data based on the skeleton information;
the display unit is used for displaying the VR data.
The positive effects of the invention are as follows: when the virtual reality display module of the rehabilitation robot extracts the moving-limb image of the patient during rehabilitation training, the invention effectively eliminates interference from external factors, so that the virtual reality display shows only the movement of the designated patient.
Drawings
Fig. 1 is a flowchart of a virtual reality display method of a rehabilitation robot based on image segmentation according to embodiment 1 of the present invention.
Fig. 2 is a flowchart of step 10 in a virtual reality display method of a rehabilitation robot based on image segmentation according to embodiment 1 of the present invention.
Fig. 3 is a flowchart of step 50 in the method for displaying virtual reality of a rehabilitation robot based on image segmentation according to embodiment 1 of the present invention.
Fig. 4 is a flowchart of step 30 in the method for displaying virtual reality of a rehabilitation robot based on image segmentation according to embodiment 2 of the present invention.
Fig. 5 is a schematic block diagram of a virtual reality display system of a rehabilitation robot based on image segmentation according to embodiment 3 of the present invention.
Fig. 6 is a schematic block diagram of an image segmentation module in a virtual reality display system of a rehabilitation robot based on image segmentation according to embodiment 4 of the present invention.
Detailed Description
The invention is further illustrated by the following examples, which are not intended to limit the scope of the invention.
Example 1
A virtual reality display method of a rehabilitation robot based on image segmentation, as shown in fig. 1, the virtual reality display method includes:
step 10, acquiring facial features of a patient;
step 20, acquiring shooting information containing a patient in the process of using the rehabilitation robot to perform rehabilitation movement by the patient; the shooting information comprises a plurality of frames of shooting pictures;
step 30, segmenting the content representing the same object in each frame of shot picture based on an image segmentation model to generate a plurality of block images;
step 40, identifying a target block image containing facial features from a plurality of block images;
step 50, generating VR data of the patient according to the target block image;
and step 60, displaying VR data.
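The six steps above can be sketched as a minimal pipeline. This is an illustrative stand-in, not the patent's implementation: the "segmentation model" is replaced by a pre-labelled 2-D grid, the facial features by a single pixel coordinate, and the skeleton by the block centroid.

```python
import numpy as np

def segment_frame(frame):
    """Stand-in for step 30: 'frame' is already a 2-D array of object
    labels, so each nonzero label is treated as one block image."""
    return {int(lbl): np.argwhere(frame == lbl)
            for lbl in np.unique(frame) if lbl != 0}

def find_target_block(blocks, face_xy):
    """Step 40: the target block is the one containing the location of
    the patient's facial features (a single (row, col) in this sketch)."""
    for lbl, pixels in blocks.items():
        if any((p == face_xy).all() for p in pixels):
            return lbl, pixels
    return None, None

def generate_vr_data(pixels):
    """Steps 50-60: real skeleton extraction (e.g. via a depth sensor)
    is reduced here to the block centroid."""
    centroid = pixels.mean(axis=0)
    return {"skeleton": centroid.tolist()}

# Toy frame: label 1 = patient, label 2 = a bystander, 0 = background.
frame = np.zeros((6, 6), dtype=int)
frame[1:4, 1:3] = 1
frame[2:5, 4:6] = 2
face_xy = np.array([1, 1])          # facial features found at this pixel

blocks = segment_frame(frame)
label, pixels = find_target_block(blocks, face_xy)
vr = generate_vr_data(pixels)
print(label)                        # block 1 (the patient) is selected
```

The bystander's block (label 2) is segmented but never reaches the VR data, which is exactly the interference-rejection effect the method claims.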
Referring to fig. 2, step 10 specifically includes:
step 101, acquiring a facial image of a patient;
step 102, identifying a facial image to obtain facial features; the facial features include at least one of eyes, nose tip, mouth corners, eyebrows, and facial contours.
In addition, referring to fig. 3, step 50 specifically includes:
step 501, extracting skeleton information of a patient in a target block image;
and 502, generating VR data based on the skeleton information.
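A hedged sketch of steps 501 and 502 follows. The joint names and the VR-data layout are illustrative assumptions, since the patent does not fix a data format; a real system would obtain the joints from a depth sensor such as the Kinect.

```python
# Hypothetical joint set; a Kinect-style sensor supplies ~20-25 joints.
skeleton = {
    "shoulder_left": (120, 80),
    "elbow_left":    (140, 130),
    "wrist_left":    (150, 180),
}

def generate_vr_data(skeleton):
    """Package skeleton joints as VR display data: each joint becomes a
    marked point, and consecutive joints are connected as limb segments."""
    joints = list(skeleton.items())
    bones = [(joints[i][0], joints[i + 1][0]) for i in range(len(joints) - 1)]
    return {"joints": skeleton, "bones": bones}

vr_data = generate_vr_data(skeleton)
print(vr_data["bones"])  # [('shoulder_left', 'elbow_left'), ('elbow_left', 'wrist_left')]
```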
In this embodiment, the virtual reality display method effectively eliminates interference from external factors when the virtual reality display module of the rehabilitation robot extracts the moving-limb image of the patient during rehabilitation training, so that the virtual reality display shows only the movement of the designated patient.
Example 2
The virtual reality display method of the rehabilitation robot based on image segmentation of the present embodiment is further improved on the basis of embodiment 1, as shown in fig. 4, step 30 specifically includes:
step 301, presetting an image element set; the image element set comprises a plurality of standard images, and each standard image comprises an object;
step 302, inputting the standard image into an FCN model for training, and outputting an image segmentation model and model parameters of the image segmentation model;
step 303, inputting the standard image, the model parameters and the image segmentation model into a CRF algorithm to obtain an optimized image segmentation model and optimized model parameters;
and step 304, segmenting the content representing the same object in each frame of shot picture based on the optimized model parameters and the optimized image segmentation model, and generating a plurality of block images.
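Steps 301 to 304 hinge on the FCN producing a per-pixel class map: unlike a classification CNN, it keeps the spatial dimensions all the way to the output. The toy forward pass below shows only that property, using hand-picked 1x1-convolution weights instead of trained model parameters; the CRF refinement of step 303 is reduced to a comment.

```python
import numpy as np

# Toy "image": 2 feature channels per pixel, 4x4 pixels.
x = np.zeros((2, 4, 4))
x[0, :, :2] = 1.0            # left half activates channel 0
x[1, :, 2:] = 1.0            # right half activates channel 1

# A fully convolutional layer with 1x1 kernels is a per-pixel linear map:
# weights of shape (num_classes, in_channels). These weights are
# hand-picked for illustration, not trained parameters.
W = np.array([[ 2.0, -1.0],   # class 0 score
              [-1.0,  2.0]])  # class 1 score
scores = np.einsum('ci,ihw->chw', W, x)   # (classes, H, W) score map

# Per-pixel segmentation: argmax over the class axis. The CRF
# post-processing of step 303 would refine these labels using pairwise
# terms between neighbouring pixels.
labels = scores.argmax(axis=0)
print(labels[0])   # left-half pixels -> class 0, right-half -> class 1
```

The output keeps the input's height and width, so each pixel gets its own label — the property that lets the method cut one shot picture into per-object block images.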
A specific example is described below:
A Pascal VOC image segmentation data set is prepared in advance, and the downloaded data set is input into the FCN (fully convolutional network) for training to obtain the model parameters and the image segmentation model used for image segmentation; the model parameters and the image segmentation model can also be optimized with the CRF (conditional random field) algorithm to ensure the accuracy of the image segmentation.
When a patient performs rehabilitation exercise with the rehabilitation robot, a Kinect first captures a facial image of the patient, and the patient's facial features are identified. The Kinect then shoots pictures of the patient in real time during the rehabilitation exercise, and each shot picture is input into the image segmentation model to obtain a segmented image. The segmented block images in the picture contain contents such as the people and the treadmill, and these contents form a set of connected regions (area_1, area_2, …, area_i, …, area_n), each content being the region of one block image. The region area_i containing the patient's facial features is then found among all the block images; in practice only the minimum rectangular or irregular region a_region_i containing that region needs to be found, and the VR data are generated by identifying the patient's skeleton information within it. When the next frame is shot, a_region_i can serve as the region in which the Kinect detects the moving-limb image for that frame, so the block to which the patient belongs can be tracked and identified.
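The tracking idea at the end of the example — reuse the minimum rectangle a_region_i around the patient's block as the next frame's search region — can be sketched as follows. The region representation (an array of pixel coordinates) and the margin parameter are assumptions for illustration.

```python
import numpy as np

def min_bounding_rect(pixels):
    """Minimum axis-aligned rectangle (r0, c0, r1, c1) enclosing the
    patient's block; this plays the role of a_region_i in the example."""
    rows, cols = pixels[:, 0], pixels[:, 1]
    return int(rows.min()), int(cols.min()), int(rows.max()), int(cols.max())

def crop_next_frame(frame, rect, margin=1):
    """Restrict next-frame limb detection to the tracked region, padded
    by a small margin so slow limb motion stays inside it."""
    r0, c0, r1, c1 = rect
    return frame[max(r0 - margin, 0):r1 + 1 + margin,
                 max(c0 - margin, 0):c1 + 1 + margin]

# Patient block found in the current frame (pixel coordinates).
area_i = np.array([[2, 3], [2, 4], [3, 3], [3, 4], [4, 4]])
rect = min_bounding_rect(area_i)           # -> (2, 3, 4, 4)

next_frame = np.arange(100).reshape(10, 10)
roi = crop_next_frame(next_frame, rect)
print(rect, roi.shape)                     # (2, 3, 4, 4) (5, 4)
```

Searching only the cropped region keeps per-frame cost proportional to the patient's size rather than the full picture, and naturally excludes bystanders outside the rectangle.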
Example 3
As shown in fig. 5, the virtual reality display system of the rehabilitation robot based on image segmentation comprises a facial feature acquisition module 1, a camera module 2, an image segmentation module 3, an image recognition module 4 and a VR module 5;
the facial feature acquisition module 1 is used for acquiring facial features of a patient;
the camera module 2 is used for acquiring shooting information containing the patient in the process that the patient uses the rehabilitation robot to perform rehabilitation movement, and the shooting information comprises a plurality of frames of shooting pictures;
the image segmentation module 3 is used for segmenting the content representing the same object in each frame of shot picture based on an image segmentation model to generate a plurality of block images;
the image recognition module 4 is configured to recognize a target block image including the facial feature from the plurality of block images;
the VR module 5 is configured to generate VR data of the patient from the target block image and display the VR data.
Wherein the facial feature acquisition module 1 comprises a facial image acquisition unit 11 and a facial feature recognition unit 12;
the facial image acquisition unit 11 is used for acquiring a facial image of a patient;
the facial feature recognition unit 12 is configured to recognize the facial image to obtain the facial features, wherein the facial features include at least one of eyes, nose tip, mouth corner, eyebrows, and facial contours.
The VR module 5 includes a skeleton information extraction unit 51, a VR data generation unit 52, and a display unit 53;
the skeleton information extraction unit 51 is configured to extract skeleton information of a patient in the target block image;
the VR data generating unit 52 is configured to generate the VR data based on the skeleton information;
the display unit 53 is configured to display the VR data.
In this embodiment, the virtual reality display system effectively eliminates interference from external factors when the virtual reality display module of the rehabilitation robot extracts the moving-limb image of the patient during rehabilitation training, so that the virtual reality display shows only the movement of the designated patient.
Example 4
The virtual reality display system of the rehabilitation robot based on image segmentation of the present embodiment is a further improvement on the basis of embodiment 3, as shown in fig. 6, the image segmentation module 3 includes a preset unit 31, a training unit 32 and a segmentation unit 33;
the presetting unit 31 is configured to preset an image element set, where the image element set includes a plurality of standard images, and each standard image includes an object;
the training unit 32 is configured to input the standard image into an FCN model for training, and output the image segmentation model and model parameters of the image segmentation model;
the segmentation unit 33 is configured to segment the content representing the same object in each of the captured pictures based on the model parameters and the image segmentation model, and generate the block images.
In addition, referring to fig. 6, the image segmentation module 3 further comprises an optimization unit 34;
the optimization unit 34 is configured to input the standard image, the model parameters, and the image segmentation model into a CRF algorithm to obtain an optimized image segmentation model and optimized model parameters;
the segmentation unit 33 is configured to segment the content representing the same object in each frame of the captured picture based on the optimized model parameters and the optimized image segmentation model, and generate the plurality of block images.
A specific example is described below:
A Pascal VOC image segmentation data set is prepared in advance, and the downloaded data set is input into the FCN (fully convolutional network) for training to obtain the model parameters and the image segmentation model used for image segmentation; the model parameters and the image segmentation model can also be optimized with the CRF (conditional random field) algorithm to ensure the accuracy of the image segmentation.
When a patient performs rehabilitation exercise with the rehabilitation robot, a Kinect first captures a facial image of the patient, and the patient's facial features are identified. The Kinect then shoots pictures of the patient in real time during the rehabilitation exercise, and each shot picture is input into the image segmentation model to obtain a segmented image. The segmented block images in the picture contain contents such as the people and the treadmill, and these contents form a set of connected regions (area_1, area_2, …, area_i, …, area_n), each content being the region of one block image. The region area_i containing the patient's facial features is then found among all the block images; in practice only the minimum rectangular or irregular region a_region_i containing that region needs to be found, and the VR data are generated by identifying the patient's skeleton information within it. When the next frame is shot, a_region_i can serve as the region in which the Kinect detects the moving-limb image for that frame, so the block to which the patient belongs can be tracked and identified.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that this is by way of example only, and that the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and scope of the invention, and these changes and modifications are within the scope of the invention.

Claims (8)

1. A virtual reality display method of a rehabilitation robot based on image segmentation is characterized by comprising the following steps:
acquiring facial features of a patient;
acquiring shooting information containing the patient in the process of rehabilitation movement of the patient by using a rehabilitation robot, wherein the shooting information comprises a plurality of frames of shooting pictures;
segmenting the content representing the same object in each frame of shot picture based on an image segmentation model to generate a plurality of block images;
identifying a target block image including the facial features from the plurality of block images;
generating VR data for the patient from the target block image and displaying the VR data;
the step of generating VR data of the patient from the target block image and displaying the VR data specifically includes:
extracting skeleton information of a patient in the target block image;
and generating the VR data based on the skeleton information and displaying the VR data.
2. The method for displaying virtual reality of a rehabilitation robot based on image segmentation as claimed in claim 1, wherein said step of acquiring facial features of the patient specifically comprises:
acquiring a facial image of a patient;
identifying the facial image to obtain the facial features, wherein the facial features comprise at least one of the eyes, nose tip, mouth corners, eyebrows and facial contour.
3. The virtual reality display method of a rehabilitation robot based on image segmentation as claimed in claim 1, wherein the step of segmenting the content representing the same object in each frame of shot picture based on the image segmentation model to generate a plurality of block images specifically comprises:
presetting an image element set, wherein the image element set comprises a plurality of standard images, and each standard image comprises an object;
inputting the standard image into an FCN model for training, and outputting the image segmentation model and model parameters of the image segmentation model;
and segmenting the content representing the same object in each frame of shot picture based on the model parameters and the image segmentation model and generating the plurality of block images.
4. The virtual reality display method of an image segmentation-based rehabilitation robot according to claim 3, wherein after the step of inputting the standard image to the FCN model for training and outputting the image segmentation model and the model parameters of the image segmentation model, the virtual reality display method further comprises:
inputting the standard image, the model parameters and the image segmentation model into a CRF algorithm to obtain an optimized image segmentation model and optimized model parameters;
in the step of segmenting each frame of shot picture based on the model parameters and the image segmentation model and generating the plurality of block images, each frame of shot picture is segmented based on the optimized model parameters and the optimized image segmentation model.
5. A virtual reality display system of a rehabilitation robot based on image segmentation is characterized by comprising a facial feature acquisition module, a camera module, an image segmentation module, an image recognition module and a VR module;
the facial feature acquisition module is used for acquiring facial features of a patient;
the camera module is used for acquiring shooting information containing the patient in the process that the patient uses the rehabilitation robot to perform rehabilitation movement, and the shooting information comprises a plurality of frames of shooting pictures;
the image segmentation module is used for segmenting the content representing the same object in each frame of shot picture based on an image segmentation model to generate a plurality of block images;
the image identification module is used for identifying a target block image containing the facial features from the plurality of block images;
the VR module is used for generating VR data of the patient according to the target block image and displaying the VR data;
the VR module comprises a skeleton information extraction unit, a VR data generation unit and a display unit;
the skeleton information extraction unit is used for extracting skeleton information of the patient in the target block image;
the VR data generation unit is used for generating the VR data based on the skeleton information;
the display unit is used for displaying the VR data.
6. The virtual reality display system of an image segmentation-based rehabilitation robot according to claim 5, wherein the facial feature acquisition module includes a facial image acquisition unit and a facial feature recognition unit;
the facial image acquisition unit is used for acquiring a facial image of a patient;
the facial feature recognition unit is used for recognizing the facial image to obtain the facial features, and the facial features comprise at least one of eyes, nose tips, mouth corners, eyebrows and facial contours.
7. The virtual reality display system of a rehabilitation robot based on image segmentation as claimed in claim 5, wherein the image segmentation module includes a preset unit, a training unit and a segmentation unit;
the preset unit is used for presetting an image element set, wherein the image element set comprises a plurality of standard images, and each standard image comprises an object;
the training unit is used for inputting the standard image into an FCN model for training and outputting the image segmentation model and model parameters of the image segmentation model;
the segmentation unit is configured to segment content representing the same object in each frame of the captured picture based on the model parameters and the image segmentation model, and generate the block images.
8. The virtual reality display system of an image segmentation-based rehabilitation robot according to claim 7, wherein the image segmentation module further includes an optimization unit;
the optimization unit is used for inputting the standard image, the model parameters and the image segmentation model into a CRF algorithm to obtain an optimized image segmentation model and optimized model parameters;
the segmentation unit is used for segmenting each frame of shot picture based on the optimized model parameters and the optimized image segmentation model.
CN201811319242.1A 2018-11-07 2018-11-07 Virtual reality display method and system of rehabilitation robot based on image segmentation Active CN109086755B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811319242.1A CN109086755B (en) 2018-11-07 2018-11-07 Virtual reality display method and system of rehabilitation robot based on image segmentation


Publications (2)

Publication Number Publication Date
CN109086755A CN109086755A (en) 2018-12-25
CN109086755B true CN109086755B (en) 2022-07-08

Family

ID=64844828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811319242.1A Active CN109086755B (en) 2018-11-07 2018-11-07 Virtual reality display method and system of rehabilitation robot based on image segmentation

Country Status (1)

Country Link
CN (1) CN109086755B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104103090A (en) * 2013-04-03 2014-10-15 Beijing Samsung Telecom R&D Center Image processing method, customized human body display method and image processing system
CN204406327U (en) * 2015-02-06 2015-06-17 Changchun University Limb rehabilitation simulation training system based on a three-dimensional motion-sensing camera
CN106937531A (en) * 2014-06-14 2017-07-07 Magic Leap, Inc. Method and system for producing virtual and augmented reality
CN107462994A (en) * 2017-09-04 2017-12-12 Zhejiang University Immersive VR head-mounted display device and immersive VR display method
CN108010049A (en) * 2017-11-09 2018-05-08 South China University of Technology Method for segmenting human hand regions in stop-motion animation using a fully convolutional neural network
CN108305266A (en) * 2017-12-26 2018-07-20 Zhejiang University of Technology Semantic image segmentation method based on conditional random field graph structure learning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7257237B1 (en) * 2003-03-07 2007-08-14 Sandia Corporation Real time markerless motion tracking using linked kinematic chains
CN104722056A (en) * 2015-02-05 2015-06-24 Beijing Computing Center Rehabilitation training system and method using virtual reality technology
CN107844190B (en) * 2016-09-20 2020-11-06 Tencent Technology (Shenzhen) Co., Ltd. Image display method and device based on a virtual reality (VR) device
CN108211242A (en) * 2016-12-10 2018-06-29 Shanghai Bangbang Robotics Co., Ltd. Interactive lower limb rehabilitation training system and training method
CN108597584A (en) * 2018-03-06 2018-09-28 Shanghai University Three-stage brain-controlled upper limb rehabilitation method combining steady-state visual evoked potentials and motor imagery


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Human skeleton extraction and 3D reconstruction based on video sequences; Xiao Xue; China Master's Theses Full-text Database, Information Science and Technology; 2010-07-15; pp. I138-940 *

Also Published As

Publication number Publication date
CN109086755A (en) 2018-12-25

Similar Documents

Publication Publication Date Title
CN103336576B (en) A kind of moving based on eye follows the trail of the method and device carrying out browser operation
CN112184705B (en) Human body acupuncture point identification, positioning and application system based on computer vision technology
CN105426827A (en) Living body verification method, device and system
CN110298286B (en) Virtual reality rehabilitation training method and system based on surface myoelectricity and depth image
KR102106135B1 (en) Apparatus and method for providing application service by using action recognition
DE102015206110A1 (en) SYSTEM AND METHOD FOR PRODUCING COMPUTER CONTROL SIGNALS FROM ATATOR ATTRIBUTES
CN103815890A (en) Method for detecting heart rate by utilizing intelligent mobile phone camera
CN110427900B (en) Method, device and equipment for intelligently guiding fitness
CN111291674B (en) Method, system, device and medium for extracting expression actions of virtual figures
CN104821010A (en) Binocular-vision-based real-time extraction method and system for three-dimensional hand information
CN110837580A (en) Pedestrian picture marking method and device, storage medium and intelligent device
CN114550027A (en) Vision-based motion video fine analysis method and device
CN107247466B (en) Robot head gesture control method and system
CN109086755B (en) Virtual reality display method and system of rehabilitation robot based on image segmentation
Pantic et al. Facial action recognition in face profile image sequences
CN111145082A (en) Face image processing method and device, electronic equipment and storage medium
Cerrolaza et al. Fully-automatic glottis segmentation with active shape models
CN115530814A (en) Child motion rehabilitation training method based on visual posture detection and computer deep learning
CN109492585A (en) A kind of biopsy method and electronic equipment
CN111046848B (en) Gait monitoring method and system based on animal running platform
CN113100755A (en) Limb rehabilitation training and evaluating system based on visual tracking control
CN113870639A (en) Training evaluation method and system based on virtual reality
CN107578396A (en) Interactive image processing device
CN111968723A (en) Kinect-based upper limb active rehabilitation training method
Tsuruta et al. Real-time recognition of body motion for virtual dance collaboration system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20181225

Assignee: SHANGHAI ELECTRIC INTELLIGENT REHABILITATION MEDICAL TECHNOLOGY Co.,Ltd.

Assignor: Shanghai Electric Group Co.,Ltd.

Contract record no.: X2023310000146

Denomination of invention: Virtual reality display method and system for rehabilitation robots based on image segmentation

Granted publication date: 20220708

License type: Exclusive License

Record date: 20230919