CN109271918B - Method for distinguishing people with balance ability disorder based on gravity center shift model - Google Patents

Method for distinguishing people with balance ability disorder based on gravity center shift model

Info

Publication number
CN109271918B
CN109271918B (application CN201811052076.3A)
Authority
CN
China
Prior art keywords
gravity
center
image
human body
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811052076.3A
Other languages
Chinese (zh)
Other versions
CN109271918A (en)
Inventor
金海燕
肖聪
肖照林
蔡磊
李秀秀
石俊飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN201811052076.3A priority Critical patent/CN109271918B/en
Publication of CN109271918A publication Critical patent/CN109271918A/en
Application granted granted Critical
Publication of CN109271918B publication Critical patent/CN109271918B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • G06V40/25Recognition of walking or running movements, e.g. gait recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for distinguishing people with balance ability disorder based on a center-of-gravity shift model. The steps are as follows: first, walking-posture videos of normal and abnormal subjects in a constructed virtual reality scene are collected from a 45° viewpoint; next, the collected videos of the two groups are loaded, video frames are extracted as pictures, and the pictures are processed to obtain the center-of-gravity coordinates of each group; finally, from the obtained center-of-gravity data, the center-of-gravity angle and the mean and variance of the upper- and lower-body centers of gravity are extracted, the center-of-gravity data are classified with an SVM classifier, and, combined with the angle data and the mean and variance, people with impaired balance are identified quickly. The disclosed method addresses the problems that traditional subjective methods are too coarse and that scale-based assessments are too complex and costly; the video and images are processed in several steps, and the final classification accuracy exceeds 85%.

Description

Method for distinguishing people with balance ability disorder based on gravity center shift model
Technical Field
The invention belongs to the technical field of computer digital image processing, and relates to a method for distinguishing people with balance ability disorder based on a gravity center shift model.
Background
Human balance ability refers to the body's capacity to maintain its own stability and resist disturbances to balance, including holding a given posture or regulating the body to stay balanced under external force; it is one of the body's important physiological functions. The main factors affecting balance include the support area, the height of the center of gravity and body weight, and balance is also influenced by vision, bodily organs and the responsiveness of the sensory system. Good balance ability helps improve the function of the motor and vestibular organs and enhances the central nervous system's regulation of muscle tissue and internal organs, thereby ensuring that physical activities proceed smoothly and improving the ability to adapt to complex environments and to protect oneself.
At present, the traditional subjective methods for testing human balance mainly comprise the Romberg test, the enhanced Romberg test and the one-leg standing test (OLST). Although these subjective observation methods are simple to perform, they are too coarse and subjective, lack objectivity and unified standards, cannot clearly and intuitively grade the degree of balance disorder, and are only suitable for preliminary clinical screening of patients with suspected balance disorders. Other approaches, such as scale-based assessments, including the Berg balance scale, the Tinetti gait and balance scale, the activities-specific balance confidence scale and the Brunel balance scale, require complex equipment and a large number of patient tests; they obtain activity data from different postural tasks, such as continuous unsupported standing and sitting, standing-to-sitting transitions, bed-chair transfers and standing up to pick objects off the ground, in order to judge human balance ability. Compared with traditional subjective testing, these methods improve reliability, but the procedures are too cumbersome and too costly to implement in practice.
To improve on the prior art, human balance ability should not have to be tested with the traditional subjective observation or scale evaluation methods. Virtual reality (VR) systems, which draw on computer graphics, human-computer interaction, sensing technology and artificial intelligence, are therefore well suited and are expected to yield considerable economic and social benefits. A computer generates vivid three-dimensional visual, auditory, olfactory and other sensations so that participants experience and interact with the virtual world naturally, producing a sense of presence through accurate 3D imagery. By simulating different virtual scenes, the VR system lets participants react and interact according to each scene, while the computer, using the data obtained from those reactions together with a scientific basis and a measurement and calculation method, can quickly judge the quality of human balance ability with higher accuracy, reliability and efficiency.
Disclosure of Invention
The invention aims to provide a method for distinguishing people with balance ability disorder based on a gravity center shift model, and solves the problems that a traditional subjective method is too rough, and a scale evaluation method is too complex and too costly.
The invention adopts the technical scheme that the method for distinguishing the crowd with balance ability disorder based on the gravity center shift model comprises the following specific operation steps:
step 1, the camera lens faces the position midway between the front and the side of the human body, i.e. walking-posture videos of normal and abnormal subjects in the constructed virtual reality scene are collected from 45 degrees;
step 2, loading the two groups of collected walking-posture videos, extracting the video frames as pictures, and processing the pictures to obtain the center-of-gravity coordinates of the two classes of subjects;
and step 3, extracting the center-of-gravity angle data and the mean square deviation of the upper and lower centers of gravity of the body from the obtained center-of-gravity coordinate data, classifying the extracted center-of-gravity data with a Support Vector Machine (SVM) classifier, and identifying people with balance disorders by combining the angle data with the mean square deviation of the upper and lower centers of gravity.
Yet another feature of the present invention is that,
the operation process of the step 2 is as follows:
step 2.1, reading the two collected posture videos with the cvLoadImage function and setting the start and end times of the video to read; if the capture function does not read a video containing a person, the start and end times are reset, and if it does, the next step is carried out;
step 2.2, splitting the two extracted videos into frames with the CvCapture interface in OpenCV: first the blank scene in each video is extracted and saved as a picture, and then one frame of the recorded posture video is taken every two seconds and saved as a picture;
step 2.3, differencing each group of saved posture pictures against the blank scene to obtain pictures containing only the human posture;
step 2.4, denoising the differenced images;
step 2.5, further applying morphological erosion to the denoised pictures, finally obtaining black-and-white images containing only the human figure;
step 2.6, performing edge extraction on the eroded images: the Canny edge detection operator is applied to obtain the connected region of the image; the image is first convolved with a Gaussian filter to reduce obvious noise affecting the edge detector; then the gradient magnitude and direction of every pixel are computed and non-maximum suppression is applied to eliminate spurious responses; finally, double-threshold detection determines the true and potential edges, isolated weak edges are suppressed to complete edge detection, and the human contour is extracted with findContours;
and step 2.7, computing the moments of the human contour image and deriving the center-of-gravity coordinates of the body from those moments.
The specific process of the differential processing in step 2.3 is as follows:
firstly, the two groups of saved posture pictures and the blank-scene pictures are binarized so that all images are black and white;
then the binarized posture pictures are differenced against the blank-scene picture: let the image containing the person extracted at time k be x_k and the blank-scene image be x_j; differencing the two images gives the difference image Δx_k = x_k − x_j.
The image denoising in step 2.4 proceeds as follows: a median filter replaces the pixel value at each point of the differenced image with the median of the pixel values in its neighborhood, yielding the denoised picture. Specifically, let f(x, y) and g(x, y) be the differenced image and the denoised image respectively; the median-filter output is g(x, y) = med{ f(x − k, y − l), (k, l) ∈ W }, where W is a two-dimensional template and k and l index the rows and columns of the image.
The specific operation of step 2.5 is as follows: the differenced and denoised image is eroded; the size of the erosion window is defined with the getStructuringElement function, a rectangular window MORPH_RECT is selected, the erosion kernel size is set to 3 × 3, and the erode function is then applied through the MORPH_RECT window to obtain a picture containing only the human figure.
The center-of-gravity coordinates of the body in step 2.7 are computed as follows:
firstly, the moments of the human contour image are computed. The human figure in the picture is treated as a planar object, the pixel value at each point is treated as the density at that point, and the expected value is the moment; the center-of-gravity coordinates are computed from the first-order moments of the image, as in formulas 1-3:

M00 = Σ_i Σ_j V(i, j)          (1)
M10 = Σ_i Σ_j i · V(i, j),   M01 = Σ_i Σ_j j · V(i, j)          (2)

The coordinates of the center of gravity of the human image are then:

x_c = M10 / M00,   y_c = M01 / M00          (3)

where V(i, j) is the gray value of the human image at point (i, j). When the image is a binary image and V(i, j) takes only the two values 0 (black) and 1 (white), M00 is the sum over the white region of the image, i.e. the area of the binary image, M10 is the accumulation of the abscissa values over the white region, and likewise M01 is the accumulation of the ordinate values over the white region; x_c is the abscissa and y_c the ordinate of the center of gravity.
The operation steps of step 3 are as follows:
step 3.1, classifying the obtained center-of-gravity coordinates as input data with an SVM classifier to obtain labelled center-of-gravity coordinate data for normal and abnormal subjects;
step 3.2, extracting the center-of-gravity angles of normal and abnormal subjects respectively;
step 3.3, computing the mean square deviation of the center of gravity for normal and abnormal subjects respectively;
and step 3.4, combining steps 3.1-3.3 to obtain the center-of-gravity coordinates, center-of-gravity angles and center-of-gravity mean square deviations of the body, and thereby distinguishing people with balance disorders.
The process of classification by the SVM classifier in step 3.1 is as follows:
firstly, the training and test data sets are loaded; they contain the training data and training labels and the test data and test labels, i.e. the human center-of-gravity coordinate data and the correct labels, divided into a training part and a test part. The optimal parameters for the current data are obtained with the SVMcgForRegress parameter-search function, the trained model is obtained by passing the optimized parameters and the training data to the svmtrain function, and finally the svmpredict function is used for testing, yielding center-of-gravity coordinate data labelled "1" or "-1".
The specific calculation process of step 3.2 is as follows:
From the computed upper, middle and lower center-of-gravity coordinates of the body, the angle between them is obtained with the atan2 function and converted to degrees. Let the upper center of gravity P1 have coordinates (x1, y1), the middle center of gravity P2 coordinates (x2, y2) and the lower center of gravity P3 coordinates (x3, y3), and let the center-of-gravity angle be θ. The angle of P1P2 to the positive x-axis is atan2(y2 − y1, x2 − x1) in radians, so the inclination θ1 is atan2(y2 − y1, x2 − x1) · 180/π; likewise the angle θ2 of P2P3 to the positive x-axis is atan2(y3 − y2, x3 − x2) · 180/π. The center-of-gravity angle is θ = θ1 − θ2, and formula 7 is evaluated for the normal and abnormal subjects separately:

θ_t = [atan2(p2.y − p1.y, p2.x − p1.x) − atan2(p3.y − p2.y, p3.x − p2.x)] · 180/π,   t = 1, …, n          (7)

where n is the number of extracted center-of-gravity angles, p1.y, p2.y, p3.y are the ordinates of the three center-of-gravity points, and p1.x, p2.x, p3.x are their abscissas.
The specific process of step 3.3 is as follows:
one frame is extracted every two seconds from the walking video, 20 frames in total; for the ordinates of the upper-body and lower-body centers of gravity extracted after the computation, the mean and the mean square deviation are calculated with formula 8:

ȳ = (1/n) Σ_{t=1..n} y_t,    CGS = sqrt( (1/n) Σ_{t=1..n} (y_t − ȳ)² )          (8)

where n is the number of samples, t ∈ [1, n], y_t denotes the ordinate of the upper- or lower-body center of gravity at time t, ȳ is the mean of that ordinate, and CGS is the mean square deviation of the center of gravity;
the method of step 3.4 for distinguishing people with balance disorders is as follows:
the judgment is made from the label on the center-of-gravity data: if the output balance-ability label is "1", the subject is classified as normal, indicating good balance; if the label is "-1", the subject is classified as abnormal, indicating a balance disorder. A relatively large center-of-gravity angle indicates good balance, while a relatively small angle indicates a balance disorder; a relatively large center-of-gravity variance indicates that the body's center of gravity fluctuates more and is less stable, i.e. the balance ability is worse.
The beneficial effect of the invention is that the method for distinguishing people with balance ability disorder based on the center-of-gravity shift model solves the problems that the traditional subjective method is too coarse and the scale evaluation method is too complex and too costly. Center-of-gravity data are obtained by processing videos of human walking, and people with impaired balance are distinguished through the center-of-gravity shift model without any balance-measuring instrument, so one's own or others' balance ability can be judged objectively; the video and images are processed in several steps, and the final classification accuracy is guaranteed to be above 85%.
Drawings
FIG. 1 is a flow chart of the operation of the method of the present invention for distinguishing persons with balance impairment based on a center of gravity shift model;
FIG. 2 is an overall flow chart of the method of the present invention for distinguishing persons with balance impairment based on a center of gravity shift model;
FIG. 3 is a flow chart of balance ability determination and analysis of the method for distinguishing persons with balance ability disorders based on a center of gravity shift model according to the present invention;
FIG. 4 is an abnormal human frontal pose model;
FIG. 5 is an abnormal human lateral pose model;
FIG. 6 is a normal human frontal pose model;
FIG. 7 is a normal human lateral pose model;
FIG. 8 is a diagram of the difference result of walking images of a human body, the left side is a diagram of the difference result of walking images of a normal person, and the right side is a diagram of the difference result of walking images of an abnormal person;
FIG. 9 is a graph of the difference image erosion denoising result, with the left side being a normal person and the right side being an abnormal person;
FIG. 10 is a barycentric coordinate extraction diagram with a normal person on the left and an abnormal person on the right;
FIG. 11 is a chart of centroid angle analysis;
FIG. 12 is a variance analysis diagram of the upper-body center of gravity;
fig. 13 is a lower body center of gravity variance analysis diagram.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The method for distinguishing people with balance ability disorder based on the center-of-gravity shift model, disclosed by the invention, is shown in fig. 1 and fig. 2, and comprises the following specific operation steps:
When the camera lens faces the front of the body, the acquisition angle is defined as 0°; when it faces the side of the body, the acquisition angle is defined as 90°. The lens is placed facing the position midway between the front and the side of the body, and the acquisition angle in this case is defined as 45°.
Step 1, collecting human body walking posture videos of normal people and abnormal people in a built virtual reality scene from 45 degrees;
step 2, loading the two collected human body walking posture videos, respectively extracting video images into pictures, and then processing the pictures to respectively obtain the gravity center coordinates of the two types of human bodies;
and step 3, extracting the center-of-gravity angle data and the mean square deviations of the upper and lower centers of gravity of the body from the obtained center-of-gravity coordinate data, classifying the extracted center-of-gravity data with the SVM classifier, and identifying people with balance disorders by combining the angle data with the mean square deviations of the upper and lower centers of gravity.
The operation process of the step 2 is as follows:
step 2.1, reading the posture videos of the normal and abnormal subjects with the cvLoadImage function and setting the start and end times of the video to read; if the capture function does not read a video containing a person, the start and end times are reset, and if it does, the next step is carried out;
step 2.2, splitting the two extracted videos into frames with the CvCapture interface in OpenCV: first the blank scene in each video is extracted and saved as a picture, and then one frame of the recorded posture video is taken every two seconds and saved as a picture (a code sketch of this frame-sampling step follows this list);
step 2.3, differencing each group of saved posture pictures against the blank scene to obtain pictures containing only the human posture;
step 2.4, denoising the differenced images;
step 2.5, further applying morphological erosion to the denoised pictures, finally obtaining black-and-white images containing only the human figure;
step 2.6, performing edge extraction on the eroded images: the Canny edge detection operator is applied to obtain the connected region of the image; the image is first convolved with a Gaussian filter to reduce obvious noise affecting the edge detector; then the gradient magnitude and direction of every pixel are computed and non-maximum suppression is applied to eliminate spurious responses; finally, double-threshold detection determines the true and potential edges, isolated weak edges are suppressed to complete edge detection, and the human contour is extracted with findContours;
and step 2.7, computing the moments of the human contour image and deriving the center-of-gravity coordinates of the body from those moments.
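The frame-sampling step above can be illustrated with the modern OpenCV C++ API. The patent names the legacy cvLoadImage/CvCapture C interface, so the sketch below is a rough equivalent rather than the inventors' exact code; the two-second interval comes from the text, while the file names are assumptions.

```cpp
#include <opencv2/opencv.hpp>
#include <string>

// Sketch of step 2.2: grab one frame every `intervalSec` seconds from a
// walking-posture video and save it as a numbered picture.
int extractFrames(const std::string& videoPath, double intervalSec = 2.0) {
    cv::VideoCapture cap(videoPath);        // legacy code would use CvCapture
    if (!cap.isOpened()) return -1;         // caller may re-set the video and retry

    const double fps = cap.get(cv::CAP_PROP_FPS);
    const int step = static_cast<int>(fps * intervalSec);

    cv::Mat frame;
    int saved = 0;
    for (int idx = 0; cap.read(frame); ++idx) {
        if (step > 0 && idx % step != 0) continue;   // keep one frame per interval
        cv::imwrite("pose_" + std::to_string(saved++) + ".png", frame);
    }
    return saved;                           // number of pictures stored
}
```

The blank scene can be saved the same way by grabbing a single frame in which no person is present.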
The specific process of the differential processing in step 2.3 is as follows:
firstly, the two groups of saved posture pictures and the blank-scene pictures are binarized so that all images are black and white;
then the binarized posture pictures are differenced against the blank-scene picture: let the image containing the person extracted at time k be x_k and the blank-scene image be x_j; differencing the two images gives the difference image
Δx_k = x_k − x_j.
The image denoising in step 2.4 proceeds as follows: a median filter replaces the pixel value at each point of the differenced image with the median of the pixel values in its neighborhood, yielding the denoised picture. Specifically, let f(x, y) and g(x, y) be the differenced image and the denoised image respectively; the median-filter output is g(x, y) = med{ f(x − k, y − l), (k, l) ∈ W }, where W is a two-dimensional template and k and l index the rows and columns of the image.
The specific operation of step 2.5 is as follows: the differenced and denoised image is eroded; the size of the erosion window is defined with the getStructuringElement function, a rectangular window MORPH_RECT is selected, the erosion kernel size is set to 3 × 3, and the erode function is then applied through the MORPH_RECT window to obtain a picture containing only the human figure.
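A minimal sketch of steps 2.3-2.5, under the assumption that the modern OpenCV C++ API is used; getStructuringElement, MORPH_RECT, the 3 × 3 kernel and erode come from the text, while the binarization threshold (128) and the median window (5) are illustrative assumptions.

```cpp
#include <opencv2/opencv.hpp>

// Sketch of steps 2.3-2.5: binarize, difference against the blank scene,
// median-filter, then erode with a 3x3 rectangular kernel.
cv::Mat extractSilhouette(const cv::Mat& poseFrame, const cv::Mat& blankScene) {
    cv::Mat grayPose, grayBlank, binPose, binBlank, diff, denoised, eroded;

    cv::cvtColor(poseFrame, grayPose, cv::COLOR_BGR2GRAY);
    cv::cvtColor(blankScene, grayBlank, cv::COLOR_BGR2GRAY);

    // Step 2.3: binarize both pictures, then take the difference dx_k = x_k - x_j.
    cv::threshold(grayPose, binPose, 128, 255, cv::THRESH_BINARY);
    cv::threshold(grayBlank, binBlank, 128, 255, cv::THRESH_BINARY);
    cv::absdiff(binPose, binBlank, diff);

    // Step 2.4: median filtering removes isolated noise from the difference image.
    cv::medianBlur(diff, denoised, 5);

    // Step 2.5: erosion through a 3x3 MORPH_RECT window, as named in the text.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::erode(denoised, eroded, kernel);
    return eroded;   // black-and-white image containing only the human figure
}
```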
The center-of-gravity coordinates of the body in step 2.7 are computed as follows:
firstly, the moments of the human contour image are computed. The human figure in the video picture is treated as a planar object, the pixel value at each point is treated as the density at that point, and the expected value is the moment; the center-of-gravity coordinates are computed from the first-order moments of the image, as in formulas 1-3:

M00 = Σ_i Σ_j V(i, j)          (1)
M10 = Σ_i Σ_j i · V(i, j),   M01 = Σ_i Σ_j j · V(i, j)          (2)

The coordinates of the center of gravity of the human image are then:

x_c = M10 / M00,   y_c = M01 / M00          (3)

where V(i, j) is the gray value of the human image at point (i, j). When the image is a binary image and V(i, j) takes only the two values 0 (black) and 1 (white), M00 is the sum over the white region of the image, i.e. the area of the binary image, M10 is the accumulation of the abscissa values over the white region, and likewise M01 is the accumulation of the ordinate values over the white region; x_c is the abscissa and y_c the ordinate of the center of gravity.
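Steps 2.6-2.7 map onto standard OpenCV calls; the sketch below is one possible realization, not the inventors' code. The Canny thresholds (50, 150) are illustrative assumptions, and the centroid is taken from the image moments exactly as in formulas 1-3 (x_c = M10/M00, y_c = M01/M00).

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of steps 2.6-2.7: Canny edges, largest contour, centroid from moments.
bool bodyCentroid(const cv::Mat& silhouette, cv::Point2d& centroid) {
    cv::Mat edges;
    cv::Canny(silhouette, edges, 50, 150);   // double-threshold edge detection

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return false;

    // Keep the largest contour as the human outline.
    size_t best = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[best])) best = i;

    cv::Moments m = cv::moments(contours[best]);
    if (m.m00 == 0.0) return false;
    centroid = cv::Point2d(m.m10 / m.m00, m.m01 / m.m00);  // x_c = M10/M00, y_c = M01/M00
    return true;
}
```

One plausible way (not specified in the text) to obtain the three center-of-gravity points P1, P2 and P3 used later is to run the same routine on the upper, middle and lower portions of the silhouette.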
The operation steps of step 3 are as follows:
step 3.1, classifying the acquired barycentric coordinates as input data by an SVM classifier to obtain barycentric coordinate data of normal persons and abnormal persons with marks;
step 3.2, respectively extracting the gravity center included angle of the normal person and the abnormal person;
step 3.3, calculating the mean square error of the gravity centers of the upper and lower half bodies of the normal person and the abnormal person respectively;
and 3.4, combining the steps 3.1-3.3 to obtain the barycentric coordinates, barycentric included angles, barycentric mean values and variances of the human body, and classifying the pictures of normal people and abnormal people.
The process of classification by the SVM classifier in step 3.1 is as follows:
firstly, the training and test data sets are loaded; they contain the training data and training labels and the test data and test labels, i.e. the human center-of-gravity coordinate data and the correct labels, divided into a training part and a test part. The optimal parameters for the current data are obtained with the SVMcgForRegress parameter-search function, the trained model is obtained by passing the optimized parameters and the training data to the svmtrain function, and finally the svmpredict function is used for testing, yielding center-of-gravity coordinate data labelled "1" or "-1".
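The text relies on the libsvm MATLAB interface (SVMcgForRegress for the parameter search, svmtrain and svmpredict). Staying in the same C++/OpenCV setting as the other sketches, the stand-in below uses cv::ml::SVM, whose trainAuto performs a comparable cross-validated grid search over C and gamma; it is not the inventors' code, and the RBF kernel is an assumed choice.

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>

// Sketch of step 3.1: train an SVM on center-of-gravity coordinates labelled
// +1 (normal) / -1 (balance-impaired), then predict labels for the test samples.
cv::Mat classifyCentroids(const cv::Mat& trainData,    // CV_32F, one (x_c, y_c) row per sample
                          const cv::Mat& trainLabels,  // CV_32S, +1 or -1
                          const cv::Mat& testData) {
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setType(cv::ml::SVM::C_SVC);
    svm->setKernel(cv::ml::SVM::RBF);    // kernel choice is an assumption

    // trainAuto cross-validates C and gamma, roughly playing the role of the
    // SVMcgForRegress parameter search mentioned in the text.
    svm->trainAuto(cv::ml::TrainData::create(trainData, cv::ml::ROW_SAMPLE, trainLabels));

    cv::Mat predicted;                   // one +1 / -1 label per test row
    svm->predict(testData, predicted);
    return predicted;
}
```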
The basic principle of the SVM classifier is as follows: the support vector machine is a supervised learning method widely used for statistical classification and regression analysis. In this experiment, the center-of-gravity coordinates of the body are mapped to a high-dimensional space, and the maximum-margin hyperplane

w · x + γ = 0

is sought in that space to separate the center-of-gravity coordinates x_i of the experimental group from those of the control group, where w is the normal of the hyperplane separating the feature vectors of the two groups and γ is a displacement term added for flexibility of the method. The feature vectors of the control group are required to satisfy

w · x_i + γ ≥ 1,

and those of the experimental group to satisfy

w · x_i + γ ≤ −1.

When w and γ satisfying these conditions exist, the feature vectors are said to be separable. In practical problems it may not be possible to separate all feature vectors completely, and the best achievable hyperplane is taken as the optimal solution. The distance from the separating hyperplane to each of the bounding planes w · x + γ = 1 and w · x + γ = −1 is 1/||w||, so finding the optimal hyperplane translates into minimizing ||w||. The mathematical formulation is

min_{w, γ}  (1/2) ||w||²

subject to the constraints y_i (w · x_i + γ) ≥ 1, i = 1, 2, …, n. Using the mature theory of convex quadratic programming, when the feature vectors of the center-of-gravity coordinates are not completely separable, a suitable kernel function K can be chosen that implicitly maps the feature space formed by the input center-of-gravity coordinates into a high-dimensional space in which they become linearly separable. By duality, the optimization problem can be converted into the corresponding dual problem and solved.
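For reference, the dual problem alluded to above takes the standard textbook form (it is not stated explicitly in the patent): maximize over the multipliers α_i

Σ_{i=1..n} α_i − (1/2) Σ_{i=1..n} Σ_{j=1..n} α_i α_j y_i y_j K(x_i, x_j)

subject to Σ_{i=1..n} α_i y_i = 0 and α_i ≥ 0, the resulting decision function being f(x) = sgn( Σ_i α_i y_i K(x_i, x) + γ ).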
The specific calculation process of step 3.2 is as follows:
From the computed upper, middle and lower center-of-gravity coordinates of the body, the angle between them is obtained with the atan2 function and converted to degrees. Let the upper center of gravity P1 have coordinates (x1, y1), the middle center of gravity P2 coordinates (x2, y2) and the lower center of gravity P3 coordinates (x3, y3), and let the center-of-gravity angle be θ. The angle of P1P2 to the positive x-axis is atan2(y2 − y1, x2 − x1) in radians, so the inclination θ1 is atan2(y2 − y1, x2 − x1) · 180/π; likewise the angle θ2 of P2P3 to the positive x-axis is atan2(y3 − y2, x3 − x2) · 180/π. The center-of-gravity angle is θ = θ1 − θ2, and formula 7 is evaluated for the normal and abnormal subjects separately:

θ_t = [atan2(p2.y − p1.y, p2.x − p1.x) − atan2(p3.y − p2.y, p3.x − p2.x)] · 180/π,   t = 1, …, n          (7)

where n is the number of extracted center-of-gravity angles, p1.y, p2.y, p3.y are the ordinates of the three center-of-gravity points, and p1.x, p2.x, p3.x are their abscissas.
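A small sketch of the angle computation in step 3.2, using std::atan2 exactly as described; the point structure is an assumed helper, not a type from the patent.

```cpp
#include <cmath>

struct Pt { double x, y; };   // one center-of-gravity point (assumed helper type)

// Sketch of step 3.2: angle between segments P1P2 and P2P3, in degrees.
// theta1 and theta2 are the inclinations of the two segments to the positive x-axis.
double centerOfGravityAngle(const Pt& p1, const Pt& p2, const Pt& p3) {
    const double deg = 180.0 / std::acos(-1.0);   // 180 / pi
    double theta1 = std::atan2(p2.y - p1.y, p2.x - p1.x) * deg;
    double theta2 = std::atan2(p3.y - p2.y, p3.x - p2.x) * deg;
    return theta1 - theta2;   // center-of-gravity angle theta = theta1 - theta2
}
```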
The specific calculation process of step 3.3 is as follows:
one frame is extracted every two seconds from the walking video, 20 frames in total; the mean square deviations of the ordinates of the upper-body and lower-body centers of gravity extracted after the computation are calculated with formula 8:

ȳ = (1/n) Σ_{t=1..n} y_t,    CGS = sqrt( (1/n) Σ_{t=1..n} (y_t − ȳ)² )          (8)

where n is the number of pictures, t ∈ [1, n], y_t denotes the ordinate of the upper- or lower-body center of gravity at time t, ȳ is the mean of that ordinate, and CGS is the mean square deviation of the center of gravity;
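A sketch of the mean and mean square deviation (CGS) computation of step 3.3 over the sampled frames; the container type is an assumption.

```cpp
#include <cmath>
#include <vector>

// Sketch of step 3.3 (formula 8): mean and mean square deviation (CGS) of the
// center-of-gravity ordinate over the sampled frames.
void centerOfGravityStats(const std::vector<double>& ordinates,
                          double& mean, double& cgs) {
    if (ordinates.empty()) { mean = cgs = 0.0; return; }
    const double n = static_cast<double>(ordinates.size());

    mean = 0.0;
    for (double y : ordinates) mean += y;
    mean /= n;

    double sq = 0.0;
    for (double y : ordinates) sq += (y - mean) * (y - mean);
    cgs = std::sqrt(sq / n);   // root-mean-square deviation of the ordinate
}
```

The routine is applied separately to the upper-body and lower-body ordinates of each subject; a larger CGS indicates a less stable center of gravity.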
step 3.4 the method for distinguishing people with balance ability disorder comprises the following steps:
as shown in fig. 3, the judgment is made from the output label on the center-of-gravity coordinate data: after classification by the SVM classifier, if the output balance-ability label is "1" the subject is classified as normal, indicating good balance; if the label is "-1" the subject is classified as abnormal, indicating a balance disorder;
the judgment by center-of-gravity angle reflects the posture-control ability associated with proprioception and refers to the size of the angle between the upper, middle and lower center-of-gravity points of the body during motion: a relatively large angle indicates better balance, while a smaller angle indicates weaker posture control and worse balance;
the judgment by center-of-gravity variance reflects the dispersion of the variable within a group: the larger the variance, the larger and less stable the fluctuation of the body's center of gravity, and the worse the balance ability.
According to the swaying of the human body, the invention designs a balance posture model based on the center of gravity, as shown in figures 4-7, where P1, P2 and P3 are the three center-of-gravity points. If a normal person's balance is good, the angle θ between the three centers of gravity is larger and the distance L from the center of gravity to the central axis is smaller. When a person's center-of-gravity angle is large and the distance from the center of gravity to the central axis is small, the posture tends towards a straight line, and the person is judged to be normal with good balance. The factors affecting human balance are therefore the center-of-gravity angle, the mean of the upper and lower centers of gravity, and the variance of the upper and lower centers of gravity.
The center-of-gravity angle of normal subjects during balance training in the virtual environment is larger than that of abnormal subjects. At the same time, because of their weaker balance, the center-of-gravity fluctuation of impaired subjects training in the virtual environment is larger than that of normal subjects, and their body sway is noticeably greater. Consequently, during training the mean of the upper- and lower-body centers of gravity of a normal subject is higher than that of an abnormal subject, and the variances of the upper- and lower-body centers of gravity are generally smaller than those of impaired subjects. These criteria are therefore combined to measure human balance ability.
The specific implementation mode is as follows:
the implementation process of the method for classifying people with balance disorder based on the center-of-gravity shift model is described below by extracting the center of gravity and the included angle from a group of human walking videos.
Table 1. Two-class test data and correct labels (data supplied as an image in the original publication)

Table 2. SVM test output labels (data supplied as an image in the original publication)
Table 3. Center-of-gravity angles (degrees) from video analysis of 5 normal subjects

           Object A   Object B   Object C   Object D   Object E
Angle 1    165.193    143.47     168.943    163.919    159.417
Angle 2    117.495    168.986    144.627    148.402    149.246
Angle 3    127.619    164.707    148.088    157.817    167.652
Angle 4    164.163    166.13     133.002    124.572    134.112
Angle 5    124.437    164.973    148.754    163.538    146.965
Angle 6    120.294    142.347    156.048    142.739    129.503
Angle 7    130.256    140.419    132.031    139.283    145.452
Angle 8    121.957    122.756    143.362    119.525    143.944
Angle 9    122.95     167.096    160.831    135.912    142.712
Angle 10   118.602    157.364    123.537    157.193    159.853
Angle 11   126.105    177.533    132.831    146.615    135.736
Angle 12   159.543    152.654    167.266    152.064    151.242
Angle 13   154.525    146.652    132.284    163.148    148.865
Angle 14   125.446    128.54     128.439    141.804    137.409
Angle 15   122.481    118.704    142.858    148.155    132.536
Angle 16   119.33     153.988    131.764    129.525    146.693
Angle 17   131.288    131.373    131.764    142.731    139.557
Angle 18   118.496    152.749    156.048    155.912    153.752
Angle 19   116.124    119.282    129.324    149.381    171.355
Angle 20   125.705    120.03     135.617    135.485    128.542
Table 4. Center-of-gravity angles (degrees) from video analysis of 5 abnormal (balance-impaired) subjects

           Object A   Object B   Object C   Object D   Object E
Angle 1    81.6148    111.057    116.515    118.419    121.318
Angle 2    109.372    112.516    113.043    105.832    128.467
Angle 3    120.73     109.153    114.948    127.089    112.709
Angle 4    100.799    107.415    104.563    105.07     102.759
Angle 5    109.668    108.88     101.023    124.502    106.706
Angle 6    72.7893    105.285    118.588    107.603    125.628
Angle 7    114.999    101.03     108.091    122.447    97.4247
Angle 8    104.504    104.219    112.477    106.48     115.254
Angle 9    112.66     109.335    95.686     106.073    124.882
Angle 10   133.608    123.346    105.898    114.945    116.532
Angle 11   122.385    118.258    107.626    103.139    128.328
Angle 12   103.523    119.35     106.692    123.779    113.849
Angle 13   86.4591    119.693    112.627    123.904    98.7091
Angle 14   131.69     104.884    120.48     126.066    122.961
Angle 15   121.503    107.016    119.959    125.433    118.74
Angle 16   109.578    110.361    116.628    96.6743    122.095
Angle 17   123.69     109.257    110.754    121.265    125.428
Angle 18   115.133    118.679    111.425    106.873    111.713
Angle 19   109.111    109.078    114.059    116.375    120.102
Angle 20   102.369    121.584    108.353    106.241    125.246
(1) First, three situations — the empty scene, a normal subject and an abnormal subject — are filmed with camera-to-target angles of 0°, 45° and 90°, and pictures are extracted from the videos.
(2) The pictures are differenced; the result is shown in fig. 8, with the normal subject on the left and the abnormal subject on the right.
(3) The differenced pictures are median-filtered and eroded for denoising; the result is shown in fig. 9.
(4) The center-of-gravity coordinates of the person are extracted, as shown in fig. 10.
(5) The center-of-gravity coordinates are classified with the SVM; the correct labels of the test data are shown in Table 1, where the center-of-gravity data of normal subjects are labelled 1 and those of abnormal subjects −1, and the labels of the test data after SVM classification are shown in Table 2.
(6) The center-of-gravity angles of 5 normal and 5 abnormal subjects are computed and analysed; as shown in Tables 3 and 4 and fig. 11, the angles of the normal subjects are generally larger than 120°, while those of the abnormal subjects are comparatively smaller.
(7) The means of the centers of gravity and the variances of the upper and lower body are analysed and compared; as shown in figs. 12 and 13, the variances of the upper and lower centers of gravity of normal subjects during walking are generally smaller than those of the impaired group, which shows that the three center-of-gravity points of a normal subject lie close to the central axis, the body does not sway noticeably while walking, and the balance ability is good.
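Putting the pieces together, a hypothetical driver might wire the earlier sketches into the experimental procedure (1)-(7) roughly as follows; it assumes the helper functions sketched above are declared in scope, and the file names are placeholders.

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // (1) sample frames from the 45-degree videos of both groups
    extractFrames("normal_45deg.avi");            // file names are assumptions
    extractFrames("impaired_45deg.avi");

    // (2)-(4) per saved frame: silhouette -> centroid
    cv::Mat blank = cv::imread("blank_scene.png");
    cv::Mat frame = cv::imread("pose_0.png");
    cv::Point2d c;
    if (bodyCentroid(extractSilhouette(frame, blank), c)) {
        // (5) collect (c.x, c.y) rows into trainData / testData and call
        //     classifyCentroids(...) to obtain the +1 / -1 labels;
        // (6)-(7) compute centerOfGravityAngle(...) and centerOfGravityStats(...)
        //     per subject and compare the two groups.
    }
    return 0;
}
```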

Claims (7)

1. A method for distinguishing people with balance ability disorder based on a center-of-gravity shift model, characterized in that the specific operation steps comprise:
Step 1: the camera lens faces the position midway between the front and the side of the human body, i.e. walking-posture videos of normal and abnormal subjects in the constructed virtual reality scene are collected from 45°;
Step 2: the two groups of collected walking-posture videos are loaded, the video frames are extracted as pictures, and the pictures are processed to obtain the upper, middle and lower center-of-gravity coordinates of the two classes of subjects;
Step 3: from the obtained center-of-gravity coordinate data, the center-of-gravity angle data and the mean square deviation of the upper and lower centers of gravity of the body are extracted; the extracted center-of-gravity data are classified with an SVM classifier, and people with balance disorders are identified by combining the angle data with the mean square deviation of the upper and lower centers of gravity; the operation steps are as follows:
Step 3.1: the obtained center-of-gravity coordinates are classified as input data with the SVM classifier to obtain labelled center-of-gravity coordinate data for normal and abnormal subjects;
Step 3.2: the center-of-gravity angles of normal and abnormal subjects are extracted respectively; the specific calculation is as follows: from the computed upper, middle and lower center-of-gravity coordinates of the body, the angle between them is obtained with the atan2 function and converted to degrees. Let the upper center of gravity P1 have coordinates (x1, y1), the middle center of gravity P2 coordinates (x2, y2) and the lower center of gravity P3 coordinates (x3, y3), and let the center-of-gravity angle be θ; the angle of P1P2 to the positive x-axis is atan2(y2 − y1, x2 − x1) in radians, so the inclination θ1 is atan2(y2 − y1, x2 − x1) · 180/π; likewise the angle θ2 of P2P3 to the positive x-axis is atan2(y3 − y2, x3 − x2) · 180/π, and the center-of-gravity angle is θ = θ1 − θ2, as in formula 7, evaluated for the normal and abnormal subjects separately:

θ_t = [atan2(p2.y − p1.y, p2.x − p1.x) − atan2(p3.y − p2.y, p3.x − p2.x)] · 180/π,   t = 1, …, N          (7)

where N is the number of extracted center-of-gravity angles, p1.y, p2.y, p3.y are the ordinates of the three center-of-gravity points, and p1.x, p2.x, p3.x are their abscissas;
Step 3.3: the mean square deviations of the centers of gravity of normal and abnormal subjects are computed respectively; the specific process is as follows: one frame is extracted every two seconds from the walking video, 20 frames in total, and for the ordinates of the extracted upper-body and lower-body centers of gravity the mean and the mean square deviation are calculated with formula 8:

ȳ = (1/n) Σ_{t=1..n} y_t,    CGS = sqrt( (1/n) Σ_{t=1..n} (y_t − ȳ)² )          (8)

where n is the number of experimental subjects, t ∈ [1, n], y_t denotes the ordinate of the upper- or lower-body center of gravity at time t, ȳ is the mean of that ordinate, and CGS is the mean square deviation of the center of gravity;
Step 3.4: steps 3.1-3.3 are combined to obtain the center-of-gravity coordinates, center-of-gravity angle and center-of-gravity mean square deviation of the body, and people with balance disorders are distinguished; the method of distinguishing is: the judgment is made from the label on the center-of-gravity data — if the output balance-ability label is "1", the subject is classified as normal, indicating good balance; if the label is "-1", the subject is classified as abnormal, indicating a balance disorder; a relatively large center-of-gravity angle indicates good balance, while a relatively small angle indicates a balance disorder; a relatively large center-of-gravity variance indicates that the body's center of gravity fluctuates more and is less stable, and the balance ability is worse.

2. The method for distinguishing people with balance ability disorder based on a center-of-gravity shift model according to claim 1, characterized in that step 2 operates as follows:
Step 2.1: the two classes of collected posture videos are read with the cvLoadImage function and the start and end times of the video to read are set; if the capture function does not read a video containing a person, the start and end times are reset; if it does, the next step is carried out;
Step 2.2: the two extracted videos are split into frames with the CvCapture interface in OpenCV: first the blank scene in each video is extracted and saved as a picture, and then one frame of the recorded posture video is taken every two seconds and saved as a picture;
Step 2.3: each group of saved posture pictures is differenced against the blank scene to obtain pictures containing only the human posture;
Step 2.4: the differenced images are denoised;
Step 2.5: morphological erosion is further applied to the denoised pictures, finally yielding black-and-white images containing only the human figure;
Step 2.6: edge extraction is performed on the eroded images, the Canny edge detection operator being used to obtain the connected region of the image: the image is first convolved with a Gaussian filter to reduce obvious noise affecting the edge detector; then the gradient magnitude and direction of every pixel are computed and non-maximum suppression is applied to eliminate spurious responses; finally, double-threshold detection determines the true and potential edges, isolated weak edges are suppressed to complete edge detection, and the human contour is extracted with findContours;
Step 2.7: the moments of the human contour image are computed, and the center-of-gravity coordinates of the body are derived from those moments.

3. The method for distinguishing people with balance ability disorder based on a center-of-gravity shift model according to claim 2, characterized in that the specific process of the difference processing in step 2.3 is as follows: firstly, the two groups of saved posture pictures and the blank-scene picture are binarized so that all images are black and white; then the binarized posture pictures are differenced against the blank-scene picture: let the image containing the person extracted at time k be x_k and the blank-scene image be x_j; differencing the two images gives the difference image Δx_k = x_k − x_j.

4. The method for distinguishing people with balance ability disorder based on a center-of-gravity shift model according to claim 2, characterized in that the image denoising of step 2.4 is: a median filter replaces the pixel value at each point of the differenced image with the median of the pixel values in its neighborhood, yielding the denoised picture; specifically, let f(x, y) and g(x, y) be the differenced image and the denoised image respectively, then the median-filter output is g(x, y) = med{ f(x − k, y − l), (k, l) ∈ W }, where W is a two-dimensional template and k and l index the rows and columns of the image.

5. The method for distinguishing people with balance ability disorder based on a center-of-gravity shift model according to claim 2, characterized in that the specific operation of step 2.5 is as follows: the differenced and denoised image is eroded; the size of the erosion window is defined with the getStructuringElement function, a rectangular window MORPH_RECT is selected, the erosion kernel size is set to 3 × 3, and the erode function is then applied through the MORPH_RECT window to obtain a picture containing only the human figure.

6. The method for distinguishing people with balance ability disorder based on a center-of-gravity shift model according to claim 2, characterized in that the center-of-gravity coordinates of the body in step 2.7 are computed as follows: firstly, the moments of the human contour image are computed; the human figure in the picture is treated as a planar object, the pixel value at each point is treated as the density at that point, and the expected value is the moment; the center-of-gravity coordinates are computed from the first-order moments of the image, as in formulas 1-3:

M00 = Σ_i Σ_j V(i, j)          (1)
M10 = Σ_i Σ_j i · V(i, j),   M01 = Σ_i Σ_j j · V(i, j)          (2)

the coordinates of the center of gravity of the human image then being:

x_c = M10 / M00,   y_c = M01 / M00          (3)

where V(i, j) is the gray value of the human image at point (i, j); when the image is a binary image and V(i, j) takes only the two values 0 (black) and 1 (white), M00 is the sum over the white region of the image, i.e. the area of the binary image, M10 is the accumulation of the abscissa values over the white region, and likewise M01 is the accumulation of the ordinate values over the white region; x_c is the abscissa and y_c the ordinate of the center of gravity.

7. The method for distinguishing people with balance ability disorder based on a center-of-gravity shift model according to claim 1, characterized in that the classification process of the SVM classifier in step 3.1 is as follows: firstly, the training and test data sets are loaded; they contain the training data and training labels and the test data and test labels, i.e. the human center-of-gravity coordinate data and the correct labels, divided into a training part and a test part; the optimal parameters for the current data are obtained with the SVMcgForRegress parameter-search function, the trained model is obtained by passing the optimized parameters and the training data to the svmtrain function, and finally the svmpredict function is used for testing, yielding center-of-gravity coordinate data labelled "1" or "-1".
CN201811052076.3A 2018-09-10 2018-09-10 Method for distinguishing people with balance ability disorder based on gravity center shift model Active CN109271918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811052076.3A CN109271918B (en) 2018-09-10 2018-09-10 Method for distinguishing people with balance ability disorder based on gravity center shift model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811052076.3A CN109271918B (en) 2018-09-10 2018-09-10 Method for distinguishing people with balance ability disorder based on gravity center shift model

Publications (2)

Publication Number Publication Date
CN109271918A CN109271918A (en) 2019-01-25
CN109271918B (en) 2021-11-16

Family

ID=65187690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811052076.3A Active CN109271918B (en) 2018-09-10 2018-09-10 Method for distinguishing people with balance ability disorder based on gravity center shift model

Country Status (1)

Country Link
CN (1) CN109271918B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555394A (en) * 2019-08-19 2019-12-10 西安理工大学 Fall risk assessment method based on human body shape characteristics
CN110705367A (en) * 2019-09-05 2020-01-17 西安理工大学 Human body balance ability classification method based on three-dimensional convolutional neural network
JP6813206B1 (en) * 2019-11-20 2021-01-13 株式会社Taos研究所 Biological condition determination system, biological condition determination method and biological condition determination program
CN110931131B (en) * 2019-12-30 2023-04-28 华中科技大学鄂州工业技术研究院 Balance capability evaluation method and device
CN113017571A (en) * 2021-03-16 2021-06-25 西南交通大学 Balance capability evaluation method and system based on image recognition and balance beam test

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009093631A1 (en) * 2008-01-23 2009-07-30 Panasonic Electric Works Co., Ltd. Device for evaluating center of gravity balancing
CN106491088A (en) * 2016-11-01 2017-03-15 吉林大学 A kind of balanced ability of human body appraisal procedure based on smart mobile phone
CN108309236A (en) * 2018-01-15 2018-07-24 新绎健康科技有限公司 Total balance of the body appraisal procedure and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009093631A1 (en) * 2008-01-23 2009-07-30 Panasonic Electric Works Co., Ltd. Device for evaluating center of gravity balancing
CN106491088A (en) * 2016-11-01 2017-03-15 吉林大学 A kind of balanced ability of human body appraisal procedure based on smart mobile phone
CN108309236A (en) * 2018-01-15 2018-07-24 新绎健康科技有限公司 Total balance of the body appraisal procedure and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hsu, Su-Yi et al., "Three-dimensional, virtual reality vestibular rehabilitation for chronic imbalance problem caused by Ménière's disease: a pilot study", Disability & Rehabilitation, vol. 39, no. 16, 2017 *
Yang Zaixing, "Human body center-of-gravity calculation method based on somatosensory interaction devices", China Master's Theses Full-text Database, Basic Sciences, no. 2, 15 Feb 2018, pp. A002-793 *

Also Published As

Publication number Publication date
CN109271918A (en) 2019-01-25

Similar Documents

Publication Publication Date Title
CN109271918B (en) Method for distinguishing people with balance ability disorder based on gravity center shift model
Hesse et al. Computer vision for medical infant motion analysis: State of the art and rgb-d data set
US10417775B2 (en) Method for implementing human skeleton tracking system based on depth data
CN105279483B (en) A kind of tumble behavior real-time detection method based on depth image
Zeng et al. Silhouette-based gait recognition via deterministic learning
CN104794463B (en) The system and method for indoor human body fall detection is realized based on Kinect
CN105022982B (en) Hand motion recognition method and apparatus
CN105335725B (en) A Gait Recognition Identity Authentication Method Based on Feature Fusion
CN105740780B (en) Method and device for detecting living human face
JP5675229B2 (en) Image processing apparatus and image processing method
CN109670396A (en) A kind of interior Falls Among Old People detection method
CN108549886A (en) A kind of human face in-vivo detection method and device
JP2017033469A (en) Image identification method, image identification device and program
JP2013089252A (en) Video processing method and device
CN102722715A (en) Tumble detection method based on human body posture state judgment
Hu et al. Surveillance video face recognition with single sample per person based on 3D modeling and blurring
CN106951826B (en) Face detection method and device
CN110032932B (en) Human body posture identification method based on video processing and decision tree set threshold
Yun et al. Human fall detection via shape analysis on Riemannian manifolds with applications to elderly care
CN112395977A (en) Mammal posture recognition method based on body contour and leg joint skeleton
CN108470178B (en) A depth map saliency detection method combined with depth reliability evaluation factor
CN104200200A (en) System and method for realizing gait recognition by virtue of fusion of depth information and gray-scale information
CN108256462A (en) A kind of demographic method in market monitor video
CN107122711A (en) A kind of night vision video gait recognition method based on angle radial transformation and barycenter
CN109558797B (en) Method for distinguishing human body balance disorder based on gravity center area model under visual stimulation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant