WO2017161733A1 - Rehabilitation training via television and somatosensory accessory, and system therefor - Google Patents

Rehabilitation training via television and somatosensory accessory, and system therefor

Info

Publication number
WO2017161733A1
WO2017161733A1 (application PCT/CN2016/088188, CN2016088188W)
Authority
WO
WIPO (PCT)
Prior art keywords
somatosensory
motion
human body
rehabilitation training
accessory
Prior art date
Application number
PCT/CN2016/088188
Other languages
English (en)
French (fr)
Inventor
李水旺
Original Assignee
乐视控股(北京)有限公司
乐视致新电子科技(天津)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司 and 乐视致新电子科技(天津)有限公司
Publication of WO2017161733A1 publication Critical patent/WO2017161733A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/23 - Recognition of whole body movements, e.g. for sport training
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 - Sound input; Sound output
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 - Matching configurations of points or features
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Z - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS, NOT OTHERWISE PROVIDED FOR
    • G16Z99/00 - Subject matter not provided for in other main groups of this subclass

Definitions

  • the embodiments of the present application relate to the field of image recognition technologies, and in particular to a method and system for rehabilitation training through a television and a somatosensory accessory.
  • the inventor found that, because professional rehabilitation is expensive, some users choose to follow rehabilitation actions shown on a television at home. Such exercisers cannot obtain professional rehabilitation guidance and often cannot observe their own movements, and actions performed incorrectly greatly reduce the efficiency of rehabilitation training. As a result, effective rehabilitation can currently be carried out only in professional rehabilitation institutions, the rehabilitation cycle is long, and daily rehabilitation and family rehabilitation for patients cannot be achieved.
  • the embodiments of the present application provide a method and system for performing rehabilitation training through a television and a somatosensory accessory, realizing home rehabilitation care with the television and the somatosensory accessory as the carrier.
  • An embodiment of the present application provides a method for performing rehabilitation training through a television and a somatosensory accessory.
  • the television and the somatosensory accessory are communicatively coupled.
  • the method includes: the somatosensory accessory identifies a human body motion in the current frame picture and acquires a simulated action corresponding to the human body motion; the somatosensory accessory compares the acquired simulated action with a preset rehabilitation training action specimen and determines a motion difference between the simulated action and the rehabilitation training action specimen; the somatosensory accessory acquires user feedback; and a rehabilitation training evaluation is performed according to the motion difference and the user feedback.
  • the simulated action includes a human skeleton corresponding to the human body motion; identifying the human body motion in the current frame picture and acquiring the simulated action corresponding to the human body motion includes: using a preset human body part classifier to identify a preset number of target parts corresponding to the human body motion in the current frame picture; clustering the pixel points in the identified target parts according to a preset clustering algorithm to acquire a skeletal point corresponding to each target part; and forming the simulated action corresponding to the human body motion from the acquired skeletal points.
  • using the preset human body part classifier to identify the preset number of target parts corresponding to the human body motion in the current frame picture includes: acquiring a human body part training set, the training set including a preset number of human body part sample maps; extracting a feature value vector from each human body part sample map in the training set; calculating a classification condition for the human body part sample maps based on the extracted feature value vectors; and identifying, based on the classification condition, the preset number of target parts corresponding to the human body motion in the current frame picture.
  • comparing the acquired simulated action with the preset rehabilitation training action specimen and determining the motion difference between them includes: making the center point of the acquired simulated action coincide with the center point of the preset rehabilitation training action specimen, and determining the motion difference between the simulated action and the rehabilitation training action specimen at preset positions.
  • the user feedback is voice feedback or touch feedback
  • the content of the user feedback includes a bodily sensation and a sensation level
  • the somatosensory accessory acquires user feedback as follows: the somatosensory device notifies the user through the television to give voice feedback, and the television collects the user's voice feedback and sends it to the somatosensory device; or the somatosensory device is externally connected to a touch device in a wired or wireless manner, and the touch device collects the user's touch feedback and sends it to the somatosensory device.
  • performing the rehabilitation training evaluation according to the motion difference and the user feedback includes: evaluating the rehabilitation training according to the motion difference between the simulated action and the rehabilitation training action specimen together with the bodily sensation and sensation level obtained from user feedback, and adjusting the action specimen of the rehabilitation training accordingly.
  • An embodiment of the present application provides a computer readable recording medium having recorded thereon a program configured to execute the above method.
  • An embodiment of the present application provides a system for performing rehabilitation training through a television and a somatosensory accessory, the system comprising: a somatosensory accessory configured to identify a human body motion in the current frame picture, acquire a simulated action corresponding to the human body motion, compare the acquired simulated action with a preset rehabilitation training action specimen, determine a motion difference between the simulated action and the rehabilitation training action specimen, acquire user feedback, and perform a rehabilitation training evaluation according to the motion difference and the user feedback; and a television, in communication with the somatosensory accessory, configured to display the rehabilitation training action specimen provided by the somatosensory accessory.
  • the simulated action includes a human skeleton corresponding to the human body motion; the somatosensory accessory identifies the human body motion in the current frame picture and acquires the simulated action corresponding to the human body motion as follows: the somatosensory accessory uses a preset human body part classifier to identify a preset number of target parts corresponding to the human body motion in the current frame picture; clusters the pixel points in the identified target parts according to a preset clustering algorithm to acquire the skeletal point corresponding to each target part; and forms the simulated action corresponding to the human body motion from the acquired skeletal points.
  • the somatosensory accessory uses the preset human body part classifier to identify the preset number of target parts as follows: the somatosensory accessory acquires a human body part training set including a preset number of human body part sample maps; extracts a feature value vector from each human body part sample map in the training set; calculates a classification condition for the human body part sample maps based on the extracted feature value vectors; and identifies, based on the classification condition, the preset number of target parts corresponding to the human body motion in the current frame picture.
  • the user feedback is voice feedback or touch feedback
  • the content of the user feedback includes a bodily sensation and a sensation level
  • the somatosensory accessory acquires user feedback as follows: the somatosensory device notifies the user through the television to give voice feedback, and the television collects the user's voice feedback and sends it to the somatosensory device; or the somatosensory device is externally connected to a touch device in a wired or wireless manner, and the touch device collects the user's touch feedback and sends it to the somatosensory device.
  • the present invention provides a method and system for performing rehabilitation training through a television and a somatosensory accessory.
  • the television and the somatosensory accessory, working as a whole, implement a series of clinically proven rehabilitation actions in the form of somatosensory games and human-machine interaction, replacing the auxiliary instruments used in traditional rehabilitation occupational therapy and providing comprehensive rehabilitation training for the upper and lower limbs and for brain cognition.
  • the program improves the exercise ability, coordination ability, cognitive ability, work ability and psychological state of the rehabilitation subject, stimulates the subject's desire to train, raises enthusiasm for rehabilitation and multiplies the treatment effect; the patient's rehabilitation is captured through the somatosensory camera, and the user can give feedback on the training situation during the rehabilitation process.
  • the somatosensory accessory can perform a rehabilitation training evaluation and moderately adjust the action specimen of the rehabilitation training according to the motion difference and the user feedback.
  • doctors can remotely monitor rehabilitation data through a rehabilitation medical service platform, view the treatment effect in real time and provide remote auxiliary treatment for home rehabilitation patients, saving substantial hospitalization costs.
  • FIG. 1 is a flow chart of a method for performing rehabilitation training through a television and a somatosensory accessory according to an embodiment of the present application
  • FIG. 2 is a flowchart of a method for identifying a human skeleton corresponding to a human body motion in a current frame picture according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of calculating a classification condition by a support vector machine according to an embodiment of the present application.
  • FIG. 1 is a flow chart of a method for performing rehabilitation training through a television and a somatosensory accessory according to an embodiment of the present application. As shown in FIG. 1, the method includes:
  • Step S1 The somatosensory accessory recognizes the human body motion in the current frame picture, and acquires a simulated motion corresponding to the human body motion.
  • the body motion of the exerciser in front of the television can be monitored by installing the somatosensory accessory on the television, where the somatosensory accessory may be a somatosensory camera.
  • the human body motion in the current frame picture may be identified to acquire a simulated action corresponding to the human body motion.
  • the process of recognizing the human body motion may be performed in the somatosensory accessory or in a processor connected to the somatosensory accessory. For example, after acquiring the current frame picture, the somatosensory accessory can send the picture to a processor connected thereto, and then the current frame picture can be identified by the processor.
  • the simulated action may be an action that is completely consistent with the action of the human body, or may be only a human bone corresponding to the action of the human body.
  • considering that the human body can be represented by 20 skeletal points, the human skeleton corresponding to the human body motion can be generated after recognizing the human body motion in the current frame picture. The skeleton may include multiple bone points, for example at least 20.
  • the human bone can reflect the movements of various parts of the human body in front of the television, thereby facilitating comparison of the human body motion with the standard motion in the television.
  • the human skeleton corresponding to the human motion in the current frame picture may be specifically identified by the following steps.
  • Step S11 Identify a preset number of target parts corresponding to the human body motion in the current frame picture by using a preset body part classifier.
  • a human body part classifier may be preset, and the human body part classifier may analyze a picture of the human body to identify various parts included in the human body.
  • the human body can be divided into the head, shoulders, arms, elbows, hips, feet, wrists, hands, trunk, legs and knees, and each of these parts can be further divided into upper, lower, left and right sub-parts for more accurate identification of the human body.
  • the human body part classifier can be established by a machine learning method, that is, the classifier is trained with pictures of various human body parts so that it can generate the classification conditions that separate the different parts; the picture to be processed can then be input to the body part classifier, and each body part in the picture can be identified according to the classification conditions.
  • the human body part training set may be acquired in advance, and the human body part training set includes a preset number of human body part sample maps.
  • the human body part sample map can cover the above various human body parts.
  • the feature value vector of each human body part sample map in the training set may be extracted.
  • the feature value vector may be a pixel value vector corresponding to the body part sample map.
  • the body part sample map is composed of a plurality of pixel points
  • the RGB values corresponding to the respective pixel points may be extracted, and the feature values extracted from each pixel point are arranged in sequence to form the feature value vector. For example, the feature value vector formed after the arrangement is a series of values of the form (R1, G1, B1, R2, G2, B2, ..., Rn, Gn, Bn), where each of the R, G and B components is an integer from 0 to 255 representing one RGB value of a pixel point.
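The arrangement described above can be sketched as follows (Python; the two-pixel patch is a hypothetical example, not from the specification):

```python
def feature_vector(pixels):
    # Flatten (R, G, B) tuples in pixel order into one feature value vector,
    # matching the (R1, G1, B1, R2, G2, B2, ...) arrangement described above.
    vec = []
    for r, g, b in pixels:
        vec.extend((r, g, b))
    return vec

# A hypothetical 2-pixel patch from a body part sample map.
patch = [(255, 0, 0), (0, 128, 64)]
print(feature_vector(patch))  # [255, 0, 0, 0, 128, 64]
```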
  • the classification condition of the human body part training concentrated body part sample map may be calculated based on the extracted feature value vector.
  • an implementation of calculating the classification condition of the body part sample maps in the training set is introduced below, using the support vector machine (SVM) algorithm as an example.
  • the support vector machine was first proposed by Cortes and Vapnik in 1995. It shows many unique advantages in solving small-sample, nonlinear and high-dimensional pattern recognition problems, and it can also be applied to other machine learning problems such as function fitting. In short, the support vector machine can solve the problem of classifying complex objects and of deriving the classification criterion.
  • the points in the left graph represent the input training samples
  • the points represented by the forks in the right graph represent the calculated C1 training samples
  • the points represented by the circles represent the calculated C2 training samples.
  • after training, the classified sample sets C1 and C2 are obtained, together with the classification condition separating the two classes.
  • the classification condition (the line vv' in the figure, also known as the hyperplane) can be represented by a linear function, for example f(x) = w·x + b, where w and b are parameters obtained by the support vector machine from the set of feature value vectors (a process called "training" in support vector machine terminology), and x represents the feature value vector of a picture.
  • the input feature value vectors are, for example, two-dimensional vectors, each corresponding to a point in the coordinate plane of FIG. 3.
  • the support vector machine algorithm continuously searches for a straight line within the range of the input feature value vectors, computing the distance between each candidate line and each feature value vector (the points in the figure), until it finds the line whose distance to the nearest feature value vectors on both sides is largest and equal.
  • the resulting straight line vv' is the hyperplane. As the right graph in FIG. 3 shows, in the two-dimensional case the hyperplane vv' is a straight line whose distance to the nearest feature value vector on each side is maximal and equal, that distance being L.
  • the classification conditions of the different human body parts in the training set can thus be obtained by the support vector machine algorithm.
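Applying such a linear classification condition can be sketched as below; the values of w and b are hypothetical, standing in for parameters that an SVM would produce by training:

```python
def classify(x, w, b):
    # Evaluate the linear classification condition f(x) = w·x + b;
    # f(x) = 0 is the hyperplane vv', and the sign selects class C1 or C2.
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "C1" if score > 0 else "C2"

# Hypothetical trained parameters for two-dimensional feature value vectors:
# the hyperplane here is the line y = x.
w, b = (1.0, -1.0), 0.0
print(classify((3.0, 1.0), w, b))  # C1 (below the line y = x)
print(classify((1.0, 3.0), w, b))  # C2 (above the line y = x)
```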
  • a preset number of target parts corresponding to the human body motion in the current frame picture may be identified based on the categorization condition.
  • the human body part classifier may classify the human body motion in the current frame picture using the classification condition, thereby identifying the preset number of target parts contained in the human body motion in the current frame picture.
  • Step S12 Cluster the pixel points in the preset number of identified target parts according to a preset clustering algorithm, and acquire the skeletal point corresponding to each target part.
  • the pixel points in the preset number of identified target parts may be clustered according to the preset clustering algorithm.
  • the clustering algorithm may include at least one of the K-MEANS algorithm, an agglomerative hierarchical clustering algorithm, or the DBSCAN algorithm.
  • the clustering algorithm can gather the pixel points in the identified target part to a point, and the final gathered point can be used as the skeletal point corresponding to the target part.
  • each identified target part is clustered, so that the skeletal points corresponding to the respective target parts can be obtained.
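The collapse of one part's pixels into a bone point can be sketched as follows. The centroid is used directly here, since K-MEANS with a single cluster (k = 1) converges to the mean coordinate; the part name and pixel coordinates are illustrative, not from the specification:

```python
def bone_point(part_pixels):
    # With a single cluster (k = 1), K-MEANS converges to the centroid,
    # so the mean pixel coordinate serves as the part's skeletal point.
    n = len(part_pixels)
    return (sum(x for x, _ in part_pixels) / n,
            sum(y for _, y in part_pixels) / n)

# Hypothetical pixel coordinates classified as the "left elbow" part.
elbow = [(10, 20), (12, 22), (14, 24)]
print(bone_point(elbow))  # (12.0, 22.0)
```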
  • Step S13 The acquired skeleton points constitute a simulation action corresponding to the human body motion.
  • after the skeletal points corresponding to the respective target parts are acquired, the skeletal points are connected in sequence to obtain a skeleton map corresponding to the human body motion, and this skeleton map can be used as the acquired simulated action.
  • the connection between two adjacent bone points can outline part of the human body motion; for example, the line between the left shoulder bone point and the left elbow bone point outlines the left upper arm of the human body, and the outlined line serves as the simulated action corresponding to the left upper arm in the human body motion.
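The connection of bone points into a skeleton map can be sketched as below; the part names, coordinates and adjacency list are hypothetical examples:

```python
# Hypothetical bone points (image coordinates) for part of a skeleton.
points = {
    "left_shoulder": (100, 80),
    "left_elbow": (120, 130),
    "left_wrist": (125, 180),
}
# Adjacent bone points to connect; each edge outlines one body segment,
# e.g. shoulder-to-elbow outlines the left upper arm.
edges = [("left_shoulder", "left_elbow"), ("left_elbow", "left_wrist")]

# The skeleton map: a list of line segments between adjacent bone points.
skeleton = [(points[a], points[b]) for a, b in edges]
print(skeleton)
```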
  • Step S2 Compare the acquired simulated action with a preset rehabilitation training action specimen, and determine the motion difference between the simulated action and the rehabilitation training action specimen.
  • the acquired simulated action can be compared with the preset rehabilitation training action specimen to determine whether the current human body motion is consistent with the specimen, that is, the comparison reveals whether the human body action is performed in place at the current moment.
  • specifically, the center point of the acquired simulated action may be made to coincide with the center point of the preset rehabilitation training action specimen.
  • the center point may be the center point of the human torso, such as the center point of the chest cavity.
  • the preset position may be pre-specified for different rehabilitation training action specimens.
  • the focus is on the accuracy of the position of the arms and feet.
  • the arm and the foot in the specimen of the rehabilitation training action can be determined as the preset position, and when the simulated action and the rehabilitation exercise action specimen are compared, only the positions of the arm and the foot can be compared.
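The comparison in Step S2 can be sketched as follows, assuming two-dimensional bone points and Euclidean distance as the measure of motion difference (the specification does not fix a particular metric); all coordinates are hypothetical:

```python
import math

def motion_difference(simulated, specimen, preset_parts):
    # Translate the simulated action so its center point coincides with the
    # specimen's, then measure the per-part distance at the preset positions.
    dx = specimen["center"][0] - simulated["center"][0]
    dy = specimen["center"][1] - simulated["center"][1]
    return {part: math.hypot(simulated[part][0] + dx - specimen[part][0],
                             simulated[part][1] + dy - specimen[part][1])
            for part in preset_parts}

# Hypothetical poses; only the arm and foot are preset positions to compare.
specimen = {"center": (0, 0), "arm": (50, 40), "foot": (20, -90)}
simulated = {"center": (5, 5), "arm": (50, 25), "foot": (25, -85)}
print(motion_difference(simulated, specimen, ["arm", "foot"]))
# The foot matches perfectly after alignment; the arm falls short.
```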
  • Step S3 The somatosensory accessory acquires user feedback.
  • the somatosensory accessory can obtain user feedback in real time during the rehabilitation training.
  • the user feedback may take various forms, such as voice feedback or touch feedback, and the content of the feedback may include a bodily sensation and a sensation level.
  • the somatosensory device can notify the user through the television to perform voice feedback and collect the user's voice feedback through the television.
  • for example, the user raises an arm and is informed to give a predetermined voice feedback, such as "pain", if pain is felt. Levels of pain can also be set, for example "very painful", "generally painful" and "no pain"; the television's voice collection system collects the user's voice feedback and sends it to the somatosensory device.
  • the somatosensory device can be externally connected to a touch device in a wired or wireless manner, and the touch device can be placed beside the user; a user for whom speech is inconvenient can then give touch feedback instead. For example, the user raises the arm and is informed to give preset touch feedback if pain is felt, such as touching an area indicating pain; specifically, areas corresponding to the different levels of pain can be set. The touch device collects the touch feedback and sends it to the somatosensory device.
  • Step S4 Perform a rehabilitation training evaluation according to the determined motion difference and the acquired user feedback.
  • the motion difference between the simulated action and the rehabilitation training action specimen at the preset positions, combined with the user feedback during the rehabilitation training, is used to evaluate the rehabilitation training, and the action specimen of the rehabilitation training can be adjusted at any time.
  • for example, the positional relationship between the arm of the simulated action and the arm of the rehabilitation training specimen can be determined; if the arm of the simulated action is below the arm of the rehabilitation training specimen and the user's feedback reports severe pain, the position of the arm in the rehabilitation training specimen can be adjusted.
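One way the adjustment in Step S4 might look is sketched below. The pain-level encoding (2 = "very painful", 1 = "generally painful", 0 = "no pain"), the deviation threshold and the halving rule are all illustrative assumptions, not from the specification:

```python
def adjust_specimen(specimen, diffs, pain_level, threshold=15.0):
    # For each preset part, if its motion difference exceeds the threshold
    # while the user reports level 2 ("very painful"), relax the specimen
    # by lowering that part's target by half the deviation.
    adjusted = dict(specimen)
    for part, d in diffs.items():
        if d > threshold and pain_level == 2:
            x, y = specimen[part]
            adjusted[part] = (x, y - d / 2)  # e.g. lower a too-high arm target
    return adjusted

spec = {"arm": (50, 40)}
print(adjust_specimen(spec, {"arm": 20.0}, pain_level=2))  # arm target lowered
print(adjust_specimen(spec, {"arm": 20.0}, pain_level=0))  # left unchanged
```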
  • rehabilitation training matters greatly to the user. Compared with ordinary somatosensory accessories, the connection with the television lets the user see the contrast between their own actions and the standard actions during use, and the television can also give vocal encouragement, which improves the effectiveness of rehabilitation training.
  • An embodiment of the present application provides a computer readable recording medium having recorded thereon a program configured to execute the above method.
  • the computer readable recording medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer).
  • a machine-readable medium includes read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash storage media, and electrical, optical, acoustic or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).
  • Embodiments of the present application also provide a system for performing rehabilitation training through a television and a somatosensory accessory.
  • the system can include:
  • the somatosensory accessory is configured to identify a human body motion in the current frame picture, acquire a simulated action corresponding to the human body motion, compare the acquired simulated action with a preset rehabilitation training action specimen, determine a motion difference between the simulated action and the rehabilitation training action specimen, acquire user feedback, and perform a rehabilitation training evaluation according to the motion difference and the user feedback;
  • the television is communicatively coupled to the somatosensory accessory and configured to display a rehabilitation training specimen provided by the somatosensory accessory.
  • the simulated action includes a human skeleton corresponding to the human body motion.
  • the somatosensory accessory identifies the human body motion in the current frame picture and acquires the simulated action corresponding to the human body motion as follows: the somatosensory accessory uses a preset human body part classifier to identify a preset number of target parts corresponding to the human body motion in the current frame picture; clusters the pixel points in the identified target parts according to a preset clustering algorithm to acquire the skeletal point corresponding to each target part; and forms the simulated action corresponding to the human body motion from the acquired skeletal points.
  • the somatosensory accessory uses the preset human body part classifier to identify the preset number of target parts as follows: the somatosensory accessory acquires a human body part training set including a preset number of human body part sample maps; extracts a feature value vector from each human body part sample map in the training set; calculates a classification condition for the human body part sample maps based on the extracted feature value vectors; and identifies, based on the classification condition, the preset number of target parts corresponding to the human body motion in the current frame picture.
  • the somatosensory accessory compares the acquired simulated action with the preset rehabilitation training action specimen and determines the motion difference as follows: the somatosensory accessory makes the center point of the acquired simulated action coincide with the center point of the preset rehabilitation training action specimen, and determines the motion difference between the simulated action and the rehabilitation training action specimen at the preset positions.
  • the user feedback is voice feedback or touch feedback
  • the content of the user feedback includes a bodily sensation and a sensation level.
  • the somatosensory accessory acquires user feedback as follows: the somatosensory device notifies the user through the television to give voice feedback, and the television collects the user's voice feedback and sends it to the somatosensory device; or the somatosensory device is externally connected to a touch device in a wired or wireless manner, and the touch device collects the user's touch feedback and sends it to the somatosensory device.
  • the storage medium includes: a U disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, and the like, which can store program codes.
  • in the method and system for performing rehabilitation training through a television and a somatosensory accessory provided by the embodiments of the present application, the television and the somatosensory accessory, working as a whole, implement a series of clinically proven rehabilitation actions in the form of somatosensory games and human-computer interaction, replacing the auxiliary instruments used in traditional rehabilitation occupational therapy and providing all-round rehabilitation training for the upper and lower limbs and for brain cognition.
  • the program has the functions of improving the exercise ability, coordination ability, cognitive ability, work ability and psychological treatment of the rehabilitation object, stimulating the training desire of the rehabilitation object, improving the enthusiasm for rehabilitation and multiplying the treatment effect; and rehabilitating the patient through the somatosensory camera
  • the somatosensory accessory can perform rehabilitation training evaluation and moderately adjust the action specimen of the rehabilitation training according to the action difference and the user feedback.
  • doctors can remotely monitor rehabilitation data through the rehabilitation medical service platform, view the treatment effect in real time, provide remote auxiliary treatment for home rehabilitation patients, and save a large hospitalization cost.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Social Psychology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Psychiatry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Databases & Information Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A method and system for rehabilitation training through a television and a somatosensory accessory, belonging to the technical field of somatosensory recognition. The method includes: the somatosensory accessory recognizes the human motion in the current frame and acquires a simulated motion corresponding to the human motion (S1); the somatosensory accessory compares the acquired simulated motion with a preset rehabilitation-training motion specimen and determines the motion difference between the simulated motion and the specimen (S2); the somatosensory accessory obtains user feedback (S3); and a rehabilitation-training evaluation is performed according to the motion difference and the user feedback (S4). This embodiment uses the television and the somatosensory accessory as the carrier to bring rehabilitation care into the home.

Description

Rehabilitation training through a television and a somatosensory accessory, and system
This application claims priority to Chinese patent application No. 201610173309.X, filed with the Chinese Patent Office on March 24, 2016 and entitled "Correcting human motion through a television and a somatosensory accessory, and system", the entire contents of which are incorporated herein by reference.
Technical field
The embodiments of this application relate to the technical field of image recognition, and in particular to rehabilitation training through a television and a somatosensory accessory, and a corresponding system.
Background
Under a medical system centered on basic treatment, rehabilitation has long been a weak link in China's healthcare system, with persistent problems such as backward technology, scarce facilities, inconvenient day-to-day rehabilitation, low rehabilitation standards, and high costs.
At present, if a patient goes to a traditional hospital or a professional rehabilitation institution for training, then on the one hand the small number of such institutions and their high fees make rehabilitation training expensive and increase the user's daily expenses; on the other hand, the number of rehabilitation professionals is limited, so it is impossible to draw up a suitable rehabilitation plan or guide the exercise movements for every user.
In the course of implementing this application, the inventor found that, because of the high cost, some users choose to exercise at home by following rehabilitation movements shown on television. Such users receive no professional rehabilitation guidance and usually cannot observe their own movements; inaccurate movements can greatly reduce the efficiency of rehabilitation training. As a result, rehabilitation training can currently only be carried out at professional institutions, rehabilitation cycles are long, and daily, home-based rehabilitation is not available to patients.
Summary
The embodiments of this application provide a method and system for rehabilitation training through a television and a somatosensory accessory, using the television and the accessory as the carrier to bring rehabilitation care into the home.
An embodiment of this application provides a method for rehabilitation training through a television and a somatosensory accessory, the television and the somatosensory accessory being communicatively connected. The method includes: the somatosensory accessory recognizes the human motion in the current frame and acquires a simulated motion corresponding to the human motion; the somatosensory accessory compares the acquired simulated motion with a preset rehabilitation-training motion specimen and determines the motion difference between the simulated motion and the specimen; the somatosensory accessory obtains user feedback; and a rehabilitation-training evaluation is performed according to the motion difference and the user feedback.
Specifically, the simulated motion includes a human skeleton corresponding to the human motion. Recognizing the human motion in the current frame and acquiring the corresponding simulated motion includes: using a preset body-part classifier to identify a preset number of target parts corresponding to the human motion in the current frame; clustering the pixels of the identified target parts with a preset clustering algorithm to obtain a skeleton point for each target part; and assembling the obtained skeleton points into the simulated motion corresponding to the human motion.
Specifically, using the preset body-part classifier to identify the preset number of target parts includes: obtaining a body-part training set that contains a preset number of body-part sample images; extracting feature value vectors from the sample images in the training set; computing classification conditions for the sample images from the extracted feature value vectors; and identifying, according to the classification conditions, the preset number of target parts corresponding to the human motion in the current frame.
Specifically, comparing the acquired simulated motion with the preset rehabilitation-training motion specimen and determining the motion difference between them includes: overlaying the center point of the acquired simulated motion on the center point of the preset specimen; and determining the motion difference between the simulated motion and the specimen at preset positions.
Specifically, the user feedback is voice feedback or touch feedback, and its content includes a bodily sensation and the level of that sensation. The somatosensory accessory obtains the user feedback as follows: the somatosensory device asks the user, via the television, to give voice feedback, and the television collects the voice feedback and sends it to the somatosensory device; or the somatosensory device connects to an external touch device by wire or wirelessly, and the touch device collects the user's touch feedback and sends it to the somatosensory device.
Specifically, performing the rehabilitation-training evaluation according to the motion difference and the user feedback includes: evaluating the rehabilitation training according to the determined motion difference between the simulated motion and the specimen, together with the bodily sensation and its level obtained from the user feedback, and adjusting the rehabilitation-training motion specimen accordingly.
An embodiment of this application provides a computer-readable recording medium on which a program configured to execute the above method is recorded.
An embodiment of this application provides a system for rehabilitation training through a television and a somatosensory accessory. The system includes: the somatosensory accessory, configured to recognize the human motion in the current frame and acquire a simulated motion corresponding to it, compare the acquired simulated motion with a preset rehabilitation-training motion specimen and determine the motion difference between them, obtain user feedback, and perform a rehabilitation-training evaluation according to the motion difference and the user feedback; and the television, communicatively connected to the somatosensory accessory and configured to display the rehabilitation-training motion specimen provided by the accessory.
Specifically, the simulated motion includes a human skeleton corresponding to the human motion. The somatosensory accessory recognizes the human motion in the current frame and acquires the corresponding simulated motion by: using a preset body-part classifier to identify a preset number of target parts corresponding to the human motion in the current frame; clustering the pixels of the identified target parts with a preset clustering algorithm to obtain a skeleton point for each target part; and assembling the obtained skeleton points into the simulated motion corresponding to the human motion.
Specifically, the somatosensory accessory uses the preset body-part classifier to identify the preset number of target parts by: obtaining a body-part training set that contains a preset number of body-part sample images; extracting feature value vectors from the sample images in the training set; computing classification conditions from the extracted feature value vectors; and identifying, according to those conditions, the preset number of target parts corresponding to the human motion in the current frame.
Specifically, the user feedback is voice feedback or touch feedback, and its content includes a bodily sensation and the level of that sensation. The somatosensory accessory obtains the user feedback as follows: the somatosensory device asks the user, via the television, to give voice feedback, and the television collects the voice feedback and sends it to the somatosensory device; or the somatosensory device connects to an external touch device by wire or wirelessly, and the touch device collects the user's touch feedback and sends it to the somatosensory device.
In the method and system for rehabilitation training through a television and a somatosensory accessory provided by the embodiments of this application, the television and the somatosensory device work as a whole to present a series of clinically validated rehabilitation movements in the form of somatosensory games and human-computer interaction, replacing the assistive devices of traditional occupational therapy and providing all-round rehabilitation training for the upper and lower limbs and for brain cognition. The scheme improves the patient's motor ability, coordination, cognition, occupational ability, and psychological state; it stimulates the desire to train, raises enthusiasm for rehabilitation, and multiplies the therapeutic effect. The somatosensory camera compares the patient's rehabilitation movement with the standard movement; during rehabilitation the user can give feedback on their own training, and the somatosensory accessory can evaluate the training and moderately adjust the motion specimen according to the motion difference and the user feedback. In addition, doctors can remotely monitor rehabilitation data through a rehabilitation medical service platform, view treatment effects in real time, provide remote auxiliary treatment for patients rehabilitating at home, and save substantial hospitalization costs.
Brief description of the drawings
To explain the technical solutions of the embodiments of this application or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show some embodiments of this application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for rehabilitation training through a television and a somatosensory accessory provided by an embodiment of this application;
Fig. 2 is a flowchart of a method for identifying the human skeleton corresponding to the human motion in the current frame, provided by an embodiment of this application;
Fig. 3 is a schematic diagram of computing classification conditions with a support vector machine in an embodiment of this application.
Detailed description
To make the purpose, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of this application.
Although the processes described below comprise operations appearing in a specific order, it should be clearly understood that the processes may include more or fewer operations, and that these operations may be executed sequentially or in parallel, for example using parallel processors or a multi-threaded environment.
Fig. 1 is a flowchart of a method for rehabilitation training through a television and a somatosensory accessory provided by an embodiment of this application. As shown in Fig. 1, the method includes:
Step S1: the somatosensory accessory recognizes the human motion in the current frame and acquires a simulated motion corresponding to the human motion.
In this embodiment, a somatosensory accessory can be installed on the television to monitor the motion of the exerciser in front of the set; the somatosensory accessory may be a somatosensory camera.
In this embodiment, the human motion in the current frame can be recognized to acquire a corresponding simulated motion. The recognition may be performed in the somatosensory accessory itself or in a processor connected to it. For example, after capturing the current frame, the accessory may send it to a connected processor, which then performs the recognition.
In this embodiment, the simulated motion may be a motion fully identical to the human motion, or merely a human skeleton corresponding to it. Considering that the human body can be represented by 20 skeleton points, a corresponding human skeleton can be generated after recognizing the human motion in the current frame; the skeleton may comprise multiple skeleton points, for example at least 20. The skeleton reflects the motion of each part of the body in front of the television, making it convenient to compare the human motion with the standard motion shown on the television.
In this embodiment, as shown in Fig. 2, the human skeleton corresponding to the human motion in the current frame can be identified through the following steps.
Step S11: use a preset body-part classifier to identify a preset number of target parts corresponding to the human motion in the current frame.
In this embodiment, a body-part classifier can be set up in advance. The classifier analyzes an image of a human body and identifies the parts it contains. For example, the body can be divided into head, shoulders, arms, elbows, ankles, feet, wrists, hands, torso, legs, and knees, and each of these parts can be further divided into upper, lower, left, and right portions for more precise recognition.
In this embodiment, the body-part classifier can be built by machine learning: images of the various body parts are used to train the classifier so that it generates classification conditions for distinguishing the different parts. An image to be processed can then be fed to the classifier, and the body parts in it are recognized according to those classification conditions.
Specifically, in this embodiment a body-part training set can be obtained in advance, containing a preset number of body-part sample images. To ensure that the resulting classification conditions are accurate, the training set should contain as many sample images as possible, covering each of the body parts listed above. After obtaining the training set, the feature value vector of each sample image can be extracted. The feature value vector may be the pixel-value vector of the sample image: since a sample image consists of pixels, the RGB value of each pixel can be extracted and the extracted values arranged in order to form the feature value vector. For example, the assembled feature value vector is a sequence of values of the following form:
(RGB(1,1), RGB(1,2), …, RGB(1,120), RGB(2,1), RGB(2,2), …, RGB(2,120), …, RGB(200,1), RGB(200,2), …, RGB(200,120))
where RGB(m,n) = Ra, Gb, Bc, and m and n denote the row and column of a pixel in the sample image. For a 200×120-pixel image, m ranges from 1 to 200 and n from 1 to 120. Ra, Gb, and Bc are integers in the range 0–255 representing the pixel's R, G, and B values.
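The pixel-flattening scheme above can be sketched as follows. This is a minimal illustration, not code from the application; the 200×120 resolution and the row-major ordering follow the example in the text, and the function name is made up:

```python
import numpy as np

def feature_vector(img):
    """Flatten an H x W x 3 RGB image into a single feature value vector.

    Pixels are read row by row, and each pixel contributes its (R, G, B)
    triple in order, matching RGB(1,1), RGB(1,2), ... in the text.
    """
    h, w, c = img.shape
    assert c == 3, "expected an RGB image"
    return img.reshape(h * w * 3)

# A 200 x 120 sample image yields a 200 * 120 * 3 = 72000-dimensional vector.
sample = np.zeros((200, 120, 3), dtype=np.uint8)
vec = feature_vector(sample)
```

Each sample image in the training set would be flattened this way before being handed to the classifier.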
In this embodiment, after the feature value vectors of the sample images have been extracted, the classification conditions of the sample images in the training set can be computed from them. Here, the support vector machine (SVM) algorithm is taken as an example of how to compute the classification conditions for the body-part sample images in the training set. The support vector machine, first proposed by Cortes and Vapnik in 1995, shows many distinctive advantages in small-sample, nonlinear, and high-dimensional pattern recognition, and can be extended to other machine-learning problems such as function fitting. In general, a support vector machine can solve the problem of classifying complex items and of finding the classification criterion.
The linear-classification example shown in Fig. 3 illustrates the basic principle of classification with a support vector machine. In the left-hand plot, the points represent the input training samples; in the right-hand plot, the points marked with crosses represent the computed class-C1 training samples and the circled points the computed class-C2 training samples. After the training samples are processed by the SVM algorithm, the two classified groups C1 and C2 are obtained, together with the classification condition that separates the two classes.
For the linear classification of Fig. 3, the classification condition (the line vv' in the figure, also called the hyperplane) can be expressed by a linear function, for example:
f(x) = wx + b
where w and b are parameters obtained after the support vector machine has processed (in SVM terms, "trained" on) the set of feature value vectors, and x is the feature value vector of an image.
f(x) expresses the mapping in the support vector machine. When f(x) = 0, the feature value vector x lies exactly on the hyperplane. When f(x) > 0, x lies on the upper-right side of the hyperplane in the right-hand plot of Fig. 3; when f(x) < 0, x lies on the lower-left side.
Suppose the input feature value vectors are all two-dimensional, i.e. each corresponds to a point in the plots of Fig. 3. The SVM algorithm then searches over the lines within the range of the input vectors, computing the distance from each candidate line to every feature value vector (point), until it finds the line whose distance to the nearest vectors on either side is maximal and equal. As shown in the right-hand plot of Fig. 3, the computed line vv' is that hyperplane: in two dimensions it is a straight line, and its distance L to the nearest feature value vectors on either side is maximal and equal.
In this way, the SVM algorithm yields the classification condition that separates the different body parts in the training samples. Then, in this embodiment, the preset number of target parts corresponding to the human motion in the current frame can be identified according to the classification condition: the body-part classifier uses the classification condition to classify the human motion in the current frame and identifies the preset number of target parts it contains.
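How the learned hyperplane f(x) = wx + b assigns a vector to C1 or C2 can be sketched as follows. The 2-D weights here are made-up values for illustration, not parameters from any real training run:

```python
import numpy as np

def svm_predict(w, b, x):
    """Classify a feature value vector by the sign of f(x) = w.x + b.

    f(x) > 0 maps to class C1, f(x) < 0 to class C2; a point with
    f(x) = 0 lies exactly on the hyperplane vv'.
    """
    f = float(np.dot(w, x)) + b
    if f > 0:
        return "C1"
    if f < 0:
        return "C2"
    return "on hyperplane"

# Hypothetical 2-D hyperplane x + y - 1 = 0.
w, b = np.array([1.0, 1.0]), -1.0
```

In the actual scheme, w and b would come from SVM training on the flattened body-part sample images, and x would be the feature value vector of the current frame's candidate region.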
Step S12: cluster the pixels of the identified target parts with a preset clustering algorithm to obtain a skeleton point for each target part.
In this embodiment, once the target parts corresponding to the human motion in the current frame have been identified, the pixels of the identified target parts can be clustered with a preset clustering algorithm to obtain the skeleton point of each target part. Specifically, the clustering algorithm may include at least one of the K-MEANS algorithm, agglomerative hierarchical clustering, or the DBSCAN algorithm. The clustering algorithm gathers the pixels of an identified target part into a single point, and this final point serves as the skeleton point of that target part.
Applying the clustering to every identified target part in this way yields the skeleton point corresponding to each target part.
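A minimal version of step S12 is sketched below, collapsing one part's pixels to a single skeleton point. With one cluster, the K-MEANS update reduces to the centroid (mean) of the pixel coordinates; this is a simplification for illustration, not the application's exact algorithm, and the function name is made up:

```python
import numpy as np

def skeleton_point(pixels, iters=10):
    """Cluster a target part's (row, col) pixel coordinates into one point.

    With k = 1, the K-MEANS centroid update converges immediately to the
    mean of the pixel coordinates, which serves as the skeleton point.
    """
    pts = np.asarray(pixels, dtype=float)
    center = pts[0].copy()            # initial centroid
    for _ in range(iters):
        center = pts.mean(axis=0)     # k = 1: all points form one cluster
    return center

# Pixel coordinates of a toy "left shoulder" region.
pt = skeleton_point([(10, 10), (10, 12), (12, 10), (12, 12)])
```

Running this for each identified part gives one point per part, ready to be connected into the skeleton.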
Step S13: assemble the obtained skeleton points into the simulated motion corresponding to the human motion.
In this embodiment, after the skeleton point of each target part has been obtained, connecting the points in order yields the skeleton diagram corresponding to the human motion, and this skeleton diagram serves as the acquired simulated motion.
In the simulated motion, the line between two adjacent skeleton points forms part of the body's pose. For example, the line between the left-shoulder skeleton point and the left-elbow skeleton point outlines the left upper arm, and this outline serves as the simulated motion corresponding to the left upper arm of the human motion.
Step S2: compare the acquired simulated motion with the preset rehabilitation-training motion specimen and determine the motion difference between the simulated motion and the specimen.
In this embodiment, once the simulated motion corresponding to the human motion has been acquired, it can be compared with the preset rehabilitation-training motion specimen to judge whether the current human motion is consistent with the specimen; in other words, the comparison determines whether the human motion at the current moment is executed properly.
In this embodiment, the center point of the acquired simulated motion can be overlaid on the center point of the preset specimen. The center point may be the center of the torso, for example the center of the chest. Once the center point of the simulated motion coincides with that of the preset specimen, it can be judged whether the other parts of the simulated motion match the corresponding parts of the specimen. In this way, the motion difference between the simulated motion and the specimen at preset positions can be determined.
In this embodiment, the preset positions may be designated in advance for each motion specimen. For example, for a given specimen the emphasis may be on whether the arms and feet are positioned accurately. In that case, the arms and feet of the specimen are set as the preset positions; when comparing the simulated motion with the specimen, only the positions of the arms and feet are compared, thereby determining the motion difference between the simulated motion and the specimen at the arm and foot positions.
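The center-point alignment and per-joint comparison described above can be sketched as follows. The joint names and the dictionary representation are illustrative assumptions, not details from the application:

```python
import numpy as np

def motion_difference(simulated, specimen, preset_joints, center="torso"):
    """Align the two poses at their center point, then report the
    displacement of each preset joint (e.g. arms and feet).

    Both poses are dicts mapping joint name -> (x, y) coordinates.
    """
    sim_c = np.asarray(simulated[center], dtype=float)
    spec_c = np.asarray(specimen[center], dtype=float)
    shift = spec_c - sim_c  # translation that overlays the center points
    diffs = {}
    for joint in preset_joints:
        moved = np.asarray(simulated[joint], dtype=float) + shift
        diffs[joint] = moved - np.asarray(specimen[joint], dtype=float)
    return diffs

sim = {"torso": (0, 0), "left_hand": (1, 2)}
spec = {"torso": (5, 5), "left_hand": (6, 6)}
d = motion_difference(sim, spec, ["left_hand"])
```

Only the joints listed in `preset_joints` are compared, mirroring the idea that each specimen designates which positions matter.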
Step S3: the somatosensory accessory obtains user feedback.
In this step, the somatosensory accessory can obtain user feedback in real time during rehabilitation training. In a specific implementation, the user feedback may take various forms, such as voice feedback or touch feedback, and its content may include a bodily sensation and the level of that sensation.
For example, the somatosensory device may ask the user, via the television, to give voice feedback, and collect that feedback through the television. During training, while the user raises an arm, the user is told to give a preset voice response if pain is felt, for example "painful". Specifically, pain levels can be defined, such as "very painful", "somewhat painful", and "not painful". The television's voice-capture system collects the user's voice feedback and sends it to the somatosensory device.
As another example, the somatosensory device may connect to an external touch device, by wire or wirelessly, placed within the user's reach; users with speech difficulties can respond by touch instead. During training, while the user raises an arm, the user is told to give a preset touch response if pain is felt, for example touching a button that indicates pain. Specifically, buttons corresponding to the different pain levels can be provided. The touch device collects the touch feedback and sends it to the somatosensory device.
Step S4: perform a rehabilitation-training evaluation according to the determined motion difference and the obtained user feedback.
In this embodiment, the motion difference between the simulated motion and the specimen at the preset positions is determined and combined with the user feedback obtained during training to evaluate the rehabilitation training; moreover, the motion specimen of the rehabilitation training can be adjusted at any time.
For example, when the arm of the simulated motion is inconsistent with the arm of the specimen, the positional relationship between the two arms can be judged. If the arm of the simulated motion lies below the arm of the specimen and the user's feedback reports severe pain, then the arm position in the rehabilitation-training motion specimen can be adjusted downward.
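The adjustment rule in this example (lowering the specimen's arm target when the user's arm is below it and the user reports severe pain) could be sketched as follows. The threshold labels and the adjustment step are illustrative assumptions:

```python
def adjust_specimen(specimen_y, user_y, pain_level, step=5):
    """Lower the specimen's arm target when the user's arm is below it
    and the reported pain is severe; otherwise keep it unchanged.

    Coordinates use the image convention: larger y means lower on screen.
    """
    below = user_y > specimen_y            # user's arm is below the target
    if below and pain_level == "very painful":
        return specimen_y + step           # move the target downward
    return specimen_y

new_y = adjust_specimen(specimen_y=100, user_y=130, pain_level="very painful")
```

The same pattern could be repeated per preset position, combining each joint's motion difference with the reported sensation level.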
In addition, encouragement is very important in rehabilitation training. Compared with ordinary somatosensory accessories, the connection to the television lets the user see the comparison between their own movement and the standard movement while exercising, and spoken encouragement can be given through the television, which improves the effect of the rehabilitation training.
An embodiment of this application provides a computer-readable recording medium on which a program configured to execute the above method is recorded.
The computer-readable recording medium includes any mechanism for storing or transmitting information in a form readable by a computer. For example, a machine-readable medium includes read-only memory (ROM), random access memory (RAM), magnetic-disk storage media, optical storage media, flash storage media, and electrical, optical, acoustic, or other forms of propagated signals (e.g. carrier waves, infrared signals, digital signals).
An embodiment of this application further provides a system for rehabilitation training through a television and a somatosensory accessory. The system may include:
a somatosensory accessory, configured to recognize the human motion in the current frame and acquire a simulated motion corresponding to the human motion; compare the acquired simulated motion with a preset rehabilitation-training motion specimen and determine the motion difference between the simulated motion and the specimen; obtain user feedback; and perform a rehabilitation-training evaluation according to the motion difference and the user feedback;
a television, communicatively connected to the somatosensory accessory and configured to display the rehabilitation-training motion specimen provided by the accessory.
In a preferred embodiment of this application, the simulated motion includes a human skeleton corresponding to the human motion.
Accordingly, the somatosensory accessory recognizes the human motion in the current frame and acquires the corresponding simulated motion, specifically:
the somatosensory accessory uses a preset body-part classifier to identify a preset number of target parts corresponding to the human motion in the current frame; clusters the pixels of the identified target parts with a preset clustering algorithm to obtain a skeleton point for each target part; and assembles the obtained skeleton points into the simulated motion corresponding to the human motion.
The somatosensory accessory uses the preset body-part classifier to identify the preset number of target parts, specifically: the accessory obtains a body-part training set containing a preset number of body-part sample images; extracts feature value vectors from the sample images in the training set; computes classification conditions from the extracted feature value vectors; and identifies, according to those conditions, the preset number of target parts corresponding to the human motion in the current frame.
The somatosensory accessory compares the acquired simulated motion with the preset rehabilitation-training motion specimen and determines the motion difference between them, specifically: the accessory overlays the center point of the acquired simulated motion on the center point of the preset specimen, and determines the motion difference between the simulated motion and the specimen at the preset positions.
In addition, the user feedback is voice feedback or touch feedback, and its content includes a bodily sensation and the level of that sensation.
The somatosensory accessory obtains the user feedback, specifically: the somatosensory device asks the user, via the television, to give voice feedback, and the television collects the voice feedback and sends it to the somatosensory device; or the somatosensory device connects to an external touch device by wire or wirelessly, and the touch device collects the user's touch feedback and sends it to the somatosensory device.
It should be noted that the specific implementations of the above functional modules are consistent with the descriptions of steps S1 to S4, and are not repeated here.
Those skilled in the art will understand that all or part of the steps of the above method embodiments can be completed by a program instructing the relevant hardware. The program is stored in a storage medium and includes instructions for causing a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
As can be seen from the above, in the method and system for rehabilitation training through a television and a somatosensory accessory provided by the embodiments of this application, the television and the somatosensory device work as a whole to present a series of clinically validated rehabilitation movements in the form of somatosensory games and human-computer interaction, replacing the assistive devices of traditional occupational therapy and providing all-round rehabilitation training for the upper and lower limbs and for brain cognition. The scheme improves the patient's motor ability, coordination, cognition, occupational ability, and psychological state; it stimulates the desire to train, raises enthusiasm for rehabilitation, and multiplies the therapeutic effect. The somatosensory camera compares the patient's rehabilitation movement with the standard movement; during rehabilitation the user can give feedback on their own training, and the somatosensory accessory can evaluate the training and moderately adjust the motion specimen according to the motion difference and the user feedback. In addition, doctors can remotely monitor rehabilitation data through a rehabilitation medical service platform, view treatment effects in real time, provide remote auxiliary treatment for patients rehabilitating at home, and save substantial hospitalization costs.
The above description of the various embodiments of this application is provided for descriptive purposes to those skilled in the art. It is not intended to be exhaustive or to limit this application to a single disclosed embodiment. As noted above, various alternatives and variations of this application will be apparent to those skilled in the relevant art. Therefore, although some alternative embodiments have been discussed specifically, other embodiments will be apparent to, or relatively easily derived by, those skilled in the art. This application is intended to cover all alternatives, modifications, and variations discussed herein, as well as other embodiments falling within the spirit and scope of the above application.
The embodiments in this specification are described in a progressive manner; for identical or similar parts the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, since the method embodiment is substantially similar to the system embodiment, its description is relatively brief; for related details, refer to the description of the system embodiment.
Although this application has been described through embodiments, those of ordinary skill in the art will appreciate that many variations and changes are possible without departing from the spirit of this application; it is intended that the appended claims cover such variations and changes without departing from that spirit.

Claims (11)

  1. A method for rehabilitation training through a television and a somatosensory accessory, the television and the somatosensory accessory being communicatively connected, characterized in that the method comprises:
    the somatosensory accessory recognizing the human motion in the current frame and acquiring a simulated motion corresponding to the human motion;
    the somatosensory accessory comparing the acquired simulated motion with a preset rehabilitation-training motion specimen and determining the motion difference between the simulated motion and the specimen;
    the somatosensory accessory obtaining user feedback;
    performing a rehabilitation-training evaluation according to the motion difference and the user feedback.
  2. The method for rehabilitation training through a television and a somatosensory accessory of claim 1, characterized in that the simulated motion comprises a human skeleton corresponding to the human motion;
    recognizing the human motion in the current frame and acquiring the simulated motion corresponding to the human motion comprises:
    using a preset body-part classifier to identify a preset number of target parts corresponding to the human motion in the current frame;
    clustering the pixels of the identified target parts with a preset clustering algorithm to obtain a skeleton point for each target part;
    assembling the obtained skeleton points into the simulated motion corresponding to the human motion.
  3. The method for rehabilitation training through a television and a somatosensory accessory of claim 2, characterized in that using the preset body-part classifier to identify the preset number of target parts corresponding to the human motion in the current frame comprises:
    obtaining a body-part training set containing a preset number of body-part sample images;
    extracting feature value vectors from the sample images in the training set;
    computing classification conditions for the sample images from the extracted feature value vectors;
    identifying, according to the classification conditions, the preset number of target parts corresponding to the human motion in the current frame.
  4. The method for rehabilitation training through a television and a somatosensory accessory of claim 1, characterized in that comparing the acquired simulated motion with the preset rehabilitation-training motion specimen and determining the motion difference between the simulated motion and the specimen comprises:
    overlaying the center point of the acquired simulated motion on the center point of the preset rehabilitation-training motion specimen;
    determining the motion difference between the simulated motion and the specimen at preset positions.
  5. The method for rehabilitation training through a television and a somatosensory accessory of claim 4, characterized in that the user feedback is voice feedback or touch feedback, and the content of the user feedback includes a bodily sensation and the level of that sensation;
    the somatosensory accessory obtaining the user feedback comprises:
    the somatosensory device asking the user, via the television, to give voice feedback, the television collecting the user's voice feedback and sending it to the somatosensory device; or
    the somatosensory device connecting to an external touch device by wire or wirelessly, the touch device collecting the user's touch feedback and sending it to the somatosensory device.
  6. The method for rehabilitation training through a television and a somatosensory accessory of claim 5, characterized in that performing the rehabilitation-training evaluation according to the motion difference and the user feedback comprises:
    evaluating the rehabilitation training according to the determined motion difference between the simulated motion and the specimen, together with the bodily sensation and its level obtained from the user feedback, and adjusting the rehabilitation-training motion specimen.
  7. A system for rehabilitation training through a television and a somatosensory accessory, characterized in that the system comprises:
    a somatosensory accessory, configured to recognize the human motion in the current frame and acquire a simulated motion corresponding to the human motion; compare the acquired simulated motion with a preset rehabilitation-training motion specimen and determine the motion difference between the simulated motion and the specimen; obtain user feedback; and perform a rehabilitation-training evaluation according to the motion difference and the user feedback;
    a television, communicatively connected to the somatosensory accessory and configured to display the rehabilitation-training motion specimen provided by the accessory.
  8. The system for rehabilitation training through a television and a somatosensory accessory of claim 7, characterized in that the simulated motion comprises a human skeleton corresponding to the human motion;
    the somatosensory accessory recognizes the human motion in the current frame and acquires the simulated motion corresponding to the human motion, specifically:
    the somatosensory accessory uses a preset body-part classifier to identify a preset number of target parts corresponding to the human motion in the current frame; clusters the pixels of the identified target parts with a preset clustering algorithm to obtain a skeleton point for each target part; and assembles the obtained skeleton points into the simulated motion corresponding to the human motion.
  9. The system for rehabilitation training through a television and a somatosensory accessory of claim 8, characterized in that the somatosensory accessory uses the preset body-part classifier to identify the preset number of target parts corresponding to the human motion in the current frame, specifically:
    the somatosensory accessory obtains a body-part training set containing a preset number of body-part sample images; extracts feature value vectors from the sample images in the training set; computes classification conditions for the sample images from the extracted feature value vectors; and identifies, according to the classification conditions, the preset number of target parts corresponding to the human motion in the current frame.
  10. The system for rehabilitation training through a television and a somatosensory accessory of claim 7, characterized in that the user feedback is voice feedback or touch feedback, and the content of the user feedback includes a bodily sensation and the level of that sensation;
    the somatosensory accessory obtains the user feedback, specifically:
    the somatosensory device asks the user, via the television, to give voice feedback, and the television collects the user's voice feedback and sends it to the somatosensory device; or
    the somatosensory device connects to an external touch device by wire or wirelessly, and the touch device collects the user's touch feedback and sends it to the somatosensory device.
  11. A computer-readable recording medium on which a program configured to execute the method of claim 1 is recorded.
PCT/CN2016/088188 2016-03-24 2016-07-01 通过电视和体感配件进行康复训练及系统 WO2017161733A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610173309.XA CN105844100A (zh) 2016-03-24 2016-03-24 通过电视和体感配件进行康复训练及系统
CN201610173309.X 2016-03-24

Publications (1)

Publication Number Publication Date
WO2017161733A1 true WO2017161733A1 (zh) 2017-09-28

Family

ID=56584470

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/088188 WO2017161733A1 (zh) 2016-03-24 2016-07-01 通过电视和体感配件进行康复训练及系统

Country Status (2)

Country Link
CN (1) CN105844100A (zh)
WO (1) WO2017161733A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110415783A (zh) * 2018-04-26 2019-11-05 北京新海樱科技有限公司 一种基于体感的作业疗法康复方法
CN111544290A (zh) * 2020-05-13 2020-08-18 北京金林高科科技有限公司 一种用于健康理疗的智能装置
CN115337607A (zh) * 2022-10-14 2022-11-15 佛山科学技术学院 一种基于计算机视觉的上肢运动康复训练方法

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106648112A (zh) * 2017-01-07 2017-05-10 武克易 一种体感动作识别方法
CN108538400A (zh) * 2017-03-06 2018-09-14 顾以群 一种用于远程康复系统的动作指示处理方法及装置
CN107422852A (zh) * 2017-06-27 2017-12-01 掣京机器人科技(上海)有限公司 手功能康复训练评估方法和系统
CN107133489A (zh) * 2017-07-03 2017-09-05 广东工业大学 一种基于体感设备的康复训练评估方法及系统
CN107491648A (zh) * 2017-08-24 2017-12-19 清华大学 基于Leap Motion体感控制器的手部康复训练方法
CN109815776B (zh) * 2017-11-22 2023-02-10 腾讯科技(深圳)有限公司 动作提示方法和装置、存储介质及电子装置
CN109191588B (zh) * 2018-08-27 2020-04-07 百度在线网络技术(北京)有限公司 运动教学方法、装置、存储介质及电子设备
CN109821218A (zh) * 2019-02-15 2019-05-31 中国人民解放军总医院 一种康复训练系统及其方法
WO2020223944A1 (zh) * 2019-05-09 2020-11-12 深圳大学 生理机能评估系统和方法

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7970176B2 (en) * 2007-10-02 2011-06-28 Omek Interactive, Inc. Method and system for gesture classification
CN102324041A (zh) * 2011-09-09 2012-01-18 深圳泰山在线科技有限公司 像素归类方法、关节体姿态识别方法及鼠标指令生成方法
CN103230664A (zh) * 2013-04-17 2013-08-07 南通大学 一种基于Kinect传感器的上肢运动康复训练系统及其训练方法
TWM479133U (zh) * 2014-02-11 2014-06-01 Chinese Culture University 體感互動式智力訓練與復健系統
CN204406327U (zh) * 2015-02-06 2015-06-17 长春大学 基于三维体感摄影机的肢体康复模拟仿真训练系统
CN104722056A (zh) * 2015-02-05 2015-06-24 北京市计算中心 一种运用虚拟现实技术的康复训练系统及方法
CN105031908A (zh) * 2015-07-16 2015-11-11 于希萌 一种平衡矫正式训练装置
CN204759525U (zh) * 2015-06-30 2015-11-11 长春大学 一种应用体感互动模式的康复训练系统


Also Published As

Publication number Publication date
CN105844100A (zh) 2016-08-10

Similar Documents

Publication Publication Date Title
WO2017161733A1 (zh) 通过电视和体感配件进行康复训练及系统
Islam et al. Yoga posture recognition by detecting human joint points in real time using microsoft kinect
CN107485844B (zh) 一种肢体康复训练方法、系统及嵌入式设备
CN104274183B (zh) 动作信息处理装置
WO2020257777A1 (en) Wearable joint tracking device with muscle activity and methods thereof
Li et al. Human pose estimation based in-home lower body rehabilitation system
US20150320343A1 (en) Motion information processing apparatus and method
Díaz et al. DTCoach: your digital twin coach on the edge during COVID-19 and beyond
WO2017161734A1 (zh) 通过电视和体感配件矫正人体动作及系统
US11403882B2 (en) Scoring metric for physical activity performance and tracking
JP2015514467A (ja) 筋肉活動の取得および分析用のシステムならびにその動作方法
Leightley et al. Benchmarking human motion analysis using kinect one: An open source dataset
Dehbandi et al. Using data from the Microsoft Kinect 2 to quantify upper limb behavior: a feasibility study
CN115346670A (zh) 基于姿态识别的帕金森病评级方法、电子设备及介质
Marusic et al. Evaluating kinect, openpose and blazepose for human body movement analysis on a low back pain physical rehabilitation dataset
Sharma et al. iYogacare: real-time Yoga recognition and self-correction for smart healthcare
Cheng et al. Periodic physical activity information segmentation, counting and recognition from video
Jawed et al. Rehabilitation posture correction using neural network
Amorim et al. Recent trends in wearable computing research: A systematic review
Ilyas et al. Deep transfer learning in human–robot interaction for cognitive and physical rehabilitation purposes
Rodrigues et al. Supervised classification of motor-rehabilitation body movements with rgb cameras and pose tracking data
Zhao et al. Motor function assessment of children with cerebral palsy using monocular video
CN113842622A (zh) 一种运动教学方法、装置、系统、电子设备及存储介质
Zaher et al. A framework for assessing physical rehabilitation exercises
Venugopalan et al. MotionTalk: personalized home rehabilitation system for assisting patients with impaired mobility

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16895084

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16895084

Country of ref document: EP

Kind code of ref document: A1