CN114495177A - Scene interactive human body action and balance intelligent evaluation method and system - Google Patents

Scene interactive human body action and balance intelligent evaluation method and system Download PDF

Info

Publication number
CN114495177A
CN114495177A (application number CN202210356000.XA)
Authority
CN
China
Prior art keywords
motion
balance
target
scene
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210356000.XA
Other languages
Chinese (zh)
Inventor
何玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Lantian Medical Equipment Co ltd
Original Assignee
Beijing Lantian Medical Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Lantian Medical Equipment Co ltd filed Critical Beijing Lantian Medical Equipment Co ltd
Priority claimed from application CN202210356000.XA
Publication of CN114495177A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063: Operations research, analysis or management
    • G06Q10/0639: Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393: Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18: Status alarms
    • G08B21/24: Reminder alarms, e.g. anti-loss alarms

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Game Theory and Decision Science (AREA)
  • Software Systems (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a scene-interactive intelligent evaluation method and system for human body action and balance. The method comprises: constructing a three-dimensional model of the target human body based on body-shape feature information; performing motion joint point identification on the three-dimensional model according to the motion evaluation scenario to obtain a set of motion-associated joint points; capturing the motion posture of the target user based on that joint point set to obtain a motion posture data set; calling a first balance index evaluation model according to the target user's basic information; inputting the motion posture data set and the reference balance point information into the first balance index evaluation model to obtain a target dynamic balance index; and, if the target dynamic balance index does not reach the preset balance index, reminding the target user to correct the action according to a first reminder instruction. This addresses the technical problem that prior-art assessment of motion balance ability is insufficiently reasonable and accurate, so that motion training effects fall short of the standard.

Description

Scene interactive human body action and balance intelligent evaluation method and system
Technical Field
The invention relates to the field of motion evaluation, and in particular to a scene-interactive human body action and balance intelligent evaluation method and system.
Background
Balance is an important physiological function of the human body: everyday activities such as moving, standing, and walking all depend on it. Balance ability is the capacity of the body to return to its original position regardless of its posture, its motion, or an external push. Therefore, to guarantee the training effect and improve motor ability, evaluation and training of the balance function are indispensable.
However, prior-art assessment of motion balance ability is insufficiently reasonable and accurate, and motion training effects therefore fail to reach the standard.
Disclosure of Invention
The method and system solve the technical problem that prior-art assessment of motion balance ability is insufficiently reasonable and accurate, causing motion training effects to fall short of the standard. By capturing the motion posture in real time in combination with the motion scenario and calling a balance index evaluation model to perform personalized balance ability evaluation for the target user, they improve the accuracy and rationality of the motion balance ability evaluation result and thereby improve the user's motion training effect.
In view of the above problems, the present invention provides a method and system for intelligent assessment of scene interactive human body actions and balance.
In a first aspect, the present application provides a scene interactive human body action and balance intelligent assessment method, including: setting a motion evaluation scenario based on the motion training target; obtaining body-shape feature information of a target user through an image acquisition device, and constructing a target human body three-dimensional model based on the body-shape feature information; according to the motion evaluation scenario, performing motion joint point identification on the target human body three-dimensional model to obtain a motion-associated joint point set; based on the motion-associated joint point set, performing motion posture capture on the target user through the image acquisition device to obtain a motion posture data set; obtaining reference balance point information according to the body-shape feature information and the basic information of the target user; calling a first balance index evaluation model from a motion balance index evaluation model library according to the basic information of the target user; inputting the motion posture data set and the reference balance point information into the first balance index evaluation model to obtain a target dynamic balance index; and if the target dynamic balance index does not reach a preset balance index, generating a first reminder instruction, the first reminder instruction being used to remind the target user to correct the action.
In a second aspect, the present application further provides a scene interactive human body action and balance intelligent evaluation system, the system including: a first setting unit configured to set a motion evaluation scenario based on a motion training target; a first construction unit configured to obtain body-shape feature information of a target user through an image acquisition device and to construct a target human body three-dimensional model based on the body-shape feature information; a first obtaining unit configured to perform, according to the motion evaluation scenario, motion joint point identification on the target human body three-dimensional model to obtain a motion-associated joint point set; a second obtaining unit configured to capture, through the image acquisition device, the motion posture of the target user based on the motion-associated joint point set to obtain a motion posture data set; a third obtaining unit configured to obtain reference balance point information according to the body-shape feature information and basic information of the target user; a first calling unit configured to call a first balance index evaluation model from a motion balance index evaluation model library according to the basic information of the target user; a fourth obtaining unit configured to input the motion posture data set and the reference balance point information into the first balance index evaluation model to obtain a target dynamic balance index; and a first processing unit configured to generate a first reminder instruction if the target dynamic balance index does not reach a preset balance index, the first reminder instruction being used to remind the target user to correct the action.
In a third aspect, the present application provides an electronic device comprising a bus, a transceiver, a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the transceiver, the memory, and the processor are connected via the bus, and the computer program implements the steps of any of the methods when executed by the processor.
In a fourth aspect, the present application also provides a computer-readable storage medium having a computer program stored thereon, which when executed by a processor, performs the steps of any of the methods described above.
One or more technical solutions provided in the present application have at least the following technical effects or advantages:
The method sets a motion evaluation scenario based on the motion training target, acquires the body-shape feature information of the target user through image acquisition, and constructs a three-dimensional model of the target human body from that information. Motion joint point identification is performed on the three-dimensional model, and, based on the identified set of motion-associated joint points, the target user's motion posture is captured to obtain a corresponding motion posture data set. A first balance index evaluation model is called from a motion balance index evaluation model library according to the target user's basic information; the motion posture data set and the user's reference balance point information are input into that model, and its output is the target dynamic balance index. If the target dynamic balance index does not reach the preset balance index, the target user is reminded via the first reminder instruction to correct the action. By capturing the motion posture in real time in combination with the motion scenario and calling a balance index evaluation model for personalized balance ability evaluation of the target user, the accuracy and rationality of the motion balance ability evaluation result are improved, and the user's motion training effect improves in turn.
The foregoing is merely an overview of the technical solutions of the present application. To make its technical means clearer, so that the application can be implemented according to the description, and to make the above and other objects, features, and advantages more readily understandable, a detailed description of the application follows.
Drawings
Fig. 1 is a schematic flowchart of a scene interactive human body action and balance intelligent assessment method according to the present application;
FIG. 2 is a schematic flow chart illustrating a motion gesture data set obtained by the method for intelligent evaluation of scene interactive human body motion and balance according to the present application;
FIG. 3 is a schematic flow chart illustrating a process of obtaining a scene pose coordinate data set in the context-interactive human body action and balance intelligent assessment method according to the present application;
fig. 4 is a schematic flow chart illustrating a process of correcting a target dynamic balance index in the context-interactive intelligent evaluation method for human body movement and balance according to the present application;
FIG. 5 is a schematic structural diagram of a scene interactive human body action and balance intelligent evaluation system according to the present application;
fig. 6 is a schematic structural diagram of an exemplary electronic device of the present application.
Description of the reference numerals: a first setting unit 11, a first constructing unit 12, a first obtaining unit 13, a second obtaining unit 14, a third obtaining unit 15, a first calling unit 16, a fourth obtaining unit 17, a first processing unit 18, a bus 1110, a processor 1120, a transceiver 1130, a bus interface 1140, a memory 1150, an operating system 1151, an application 1152 and a user interface 1160.
Detailed Description
In the description of the present application, it will be appreciated by those skilled in the art that the present application may be embodied as methods, apparatuses, electronic devices, and computer-readable storage media. Thus, the present application may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), a combination of hardware and software. Furthermore, in some embodiments, the present application may also be embodied in the form of a computer program product in one or more computer-readable storage media having computer program code embodied therein.
The computer-readable storage media described above may take any combination of one or more computer-readable storage media. The computer-readable storage medium includes: an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, a flash memory, an optical fiber, a compact disc read-only memory, an optical storage device, a magnetic storage device, or any combination thereof. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, device, or system.
According to the technical scheme, the data acquisition, storage, use, processing and the like meet relevant regulations of national laws.
The method, the device and the electronic equipment are described by the flow chart and/or the block diagram.
It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions. These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer or other programmable data processing apparatus to function in a particular manner. Thus, the instructions stored in the computer-readable storage medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The present application is described below with reference to the drawings attached hereto.
Example one
As shown in fig. 1, the present application provides a scene interactive human body motion and balance intelligent evaluation method, which is applied to a motion balance evaluation system, the system includes an image acquisition device, and the method includes:
step S100: setting a motion evaluation scenario based on the motion training target;
specifically, the balance ability is an important physiological function of the human body, and various activities such as standing and walking in daily life depend on the balance ability, and is the ability of the human body to return to its original position regardless of the position, movement, or pushing action by external force. Therefore, in order to ensure the training effect and improve the exercise ability, the evaluation and the training of the balance function are indispensable links. The exercise training target is an exercise target which needs to be achieved by a user, such as patient training and rehabilitation, upper body fitness, running exercise, badminton exercise, gymnastics training and the like, an exercise evaluation scenario is set based on the exercise training target, the exercise evaluation scenario carries out action balance evaluation on the user exercise in a scenario interaction mode, and evaluation applicable scenario comprehensiveness is improved.
Step S200: obtaining body type characteristic information of a target user through the image acquisition device, and constructing a target human body three-dimensional model based on the body type characteristic information;
specifically, the image acquisition device is used for acquiring body shape characteristic information of a moving target user, including height, body shape, limb proportion and the like, and the image acquisition device can be a camera in an evaluation scene and is used for shooting and acquiring the motion state of the user in real time. And constructing a target human body three-dimensional model based on the body type characteristic information, namely performing computer modeling on the body of the user through the collected human body type data, and realizing more visual and accurate follow-up capture of human body motion.
Step S300: according to the motion evaluation situation, performing motion joint point identification on the target human body three-dimensional model to obtain a motion associated joint point set;
specifically, according to the motion assessment scenario, joint point recognition of relevant motions is performed on the target human body three-dimensional model, for example, in a gymnastic training, motion joints such as hip joints, knee joints, ankle joints, neck joints, shoulder joints, and elbow joints of a human body need to be recognized and acquired, and in a leg rehabilitation process, knee joints and ankle joints are recognized and acquired, so that a corresponding motion-related joint point set is obtained for capturing subsequent motions in real time.
Step S400: based on the motion-related joint point set, performing motion gesture capture on the target user through the image acquisition device to obtain a motion gesture data set;
as shown in fig. 2, further to obtain the motion posture data set, step S400 of the present application further includes:
step S410: constructing a reference motion coordinate system according to the target human body three-dimensional model;
step S420: capturing the motion attitude of the target user according to the reference motion coordinate system to obtain a reference attitude coordinate data set;
step S430: constructing a scene motion coordinate system based on the motion estimation scene;
step S440: mapping and fusing the reference attitude coordinate data set in the scene motion coordinate system to obtain a scene attitude coordinate data set;
step S450: and generating the motion attitude data set according to the scene attitude coordinate data set.
Specifically, the motion postures at the target user's motion-associated joint points are collected in real time by the image acquisition device to capture human motion posture data. First, a reference motion coordinate system is constructed from the target human body three-dimensional model; this is a three-dimensional coordinate system whose base point is the target user's three-dimensional body model. The target user's motion posture is captured in this coordinate system, yielding the corresponding reference posture coordinate data set. A scene motion coordinate system is then constructed based on the motion evaluation scenario; this is a three-dimensional coordinate system whose base point is the interactive scene.
The reference posture coordinate data set is mapped and fused into the scene motion coordinate system, i.e., the coordinates are converted from the reference motion coordinate system to the scene motion coordinate system, producing the corresponding scene posture coordinate data set. From this set, the motion posture data set corresponding to each motion posture in the scene motion coordinate system is generated, including data such as motion amplitude, motion angle, and motion frequency. This captures the user's motion posture comprehensively and accurately and provides a more precise data basis for the subsequent evaluation of motion balance ability.
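One concrete element of a motion posture data set is the angle at a joint, computable from three captured joint coordinates. The hip-knee-ankle example and the pure-stdlib formulation below are illustrative assumptions; the patent does not specify how motion angles are derived.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by points a-b-c, e.g. the knee
    angle from hip, knee, and ankle coordinates. Points are (x, y, z)."""
    v1 = tuple(ai - bi for ai, bi in zip(a, b))
    v2 = tuple(ci - bi for ci, bi in zip(c, b))
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))
```

Applied frame by frame to the scene posture coordinates, such angles (together with amplitudes and frequencies) would populate the motion posture data set.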
Step S500: obtaining reference balance point information according to the body type feature information and the basic information of the target user;
specifically, the basic information of the target user includes the weight, height, age, sex, etc. of the user, and the basic information of the target user is used to determine the basic balance point information, which is the static barycentric balance position of the target user and is the resultant force point of the gravity borne by the whole human body, and because of the difference in sex, age and body type, the barycentric position of the human body is slightly different, and the relative height of the barycentric position of a man is generally higher than that of a woman, and when the man stands naturally, the barycentric height of the man is about 56% of the height, and the woman is about 55% of the height.
Step S600: calling a first balance index evaluation model from a motion balance index evaluation model library according to the basic information of the target user;
specifically, the exercise balance index assessment model library is a exercise balance index assessment model library including training of data of different ages and different body states, and the user personal basic information is different and the corresponding exercise balance assessment models are also different. And calling a first balance index evaluation model corresponding to the body state data of the target user from a motion balance index evaluation model library through the basic information of the target user, wherein the first balance index evaluation model is a neural network model and is used for evaluating the motion balance ability of the target user.
Step S700: inputting the motion attitude data set and the reference balance point information into the first balance index evaluation model to obtain a target dynamic balance index;
specifically, the exercise posture data set and the reference balance point information are input into the first balance index evaluation model, and a target dynamic balance index, which is a training output result of the model, is obtained, the target dynamic balance index is used for indicating the balance ability of the target user during exercise, and the larger the dynamic balance index is, the better the exercise training balance ability of the target user is.
Step S800: and if the target dynamic balance index does not reach the preset balance index, generating a first reminding instruction, wherein the first reminding instruction is used for reminding the target user to correct the action.
Specifically, the preset balance index is a standard that the balance needs to be achieved under the scene interaction, and if the target dynamic balance index does not reach the preset balance index, it indicates that the balance ability of the target user is not enough, and the exercise training effect does not reach the standard. And reminding the target user to correct the action according to the first reminding instruction, for example, adjusting the serving angle of the hand or the step changing action in badminton. And the target user is subjected to personalized balance ability evaluation by combining the motion scene, so that the accuracy and the rationality of the motion balance ability evaluation result are improved, and the motion training effect of the target user is further improved.
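The threshold check and first reminder instruction in step S800 reduce to a simple comparison. The instruction shape (a dict with a message) is an assumption for illustration; in the described system it would drive an alarm or on-screen prompt.

```python
def check_balance(target_index: float, preset_index: float):
    """Return a first reminder instruction when the target dynamic balance
    index falls short of the preset balance index; None means no correction
    is needed."""
    if target_index < preset_index:
        return {"type": "first_reminder",
                "message": "balance below standard: correct your action"}
    return None
```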
As shown in fig. 3, further to obtain the scene pose coordinate data set, step S440 of the present application further includes:
step S441: acquiring reference coordinate information and scene coordinate information of the reference balance point information;
step S442: matching feature points of the reference coordinate information and the scene coordinate information to determine a feature conversion type;
step S443: obtaining a reference feature point conversion matrix based on the feature conversion type;
step S444: and mapping and converting the reference attitude coordinate data set according to the reference characteristic point conversion matrix to obtain the scene attitude coordinate data set.
Specifically, in order to accurately convert the scene coordinate system, first, reference coordinate information of a reference balance point in the reference motion coordinate system and scene coordinate information in the scene motion coordinate system are obtained, respectively. And then, performing feature point matching on the reference coordinate information and the scene coordinate information, and determining a feature conversion type of a reference balance point, namely a change feature of coordinate system conversion, such as translation conversion, rigid body conversion, rotation conversion, projection conversion and the like.
And determining a reference feature point conversion matrix according to the feature conversion types, wherein the corresponding conversion matrixes are different in different feature conversion types. And finally, carrying out coordinate mapping conversion on the reference attitude coordinate data set according to the reference characteristic point conversion matrix, and obtaining the scene attitude coordinate data set corresponding to each motion attitude in the scene motion coordinate system based on a product result of the coordinates and the conversion matrix. Through the conversion from the reference coordinate system to the scene coordinate system, the action scene interaction is realized, so that the scale invariance and the conversion accuracy of the motion attitude are ensured.
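The mapping in step S444 (multiplying coordinates by the reference feature point conversion matrix) can be sketched in homogeneous coordinates, which cover the translation, rigid-body, rotation, and projection cases named above with a single 4×4 matrix. The specific matrix form is a standard convention, an assumption rather than the patent's notation.

```python
import numpy as np

def map_to_scene(points, transform):
    """Apply a 4x4 homogeneous conversion matrix to Nx3 reference posture
    coordinates, returning scene posture coordinates."""
    pts = np.asarray(points, dtype=float)
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # to homogeneous
    mapped = homo @ transform.T
    return mapped[:, :3] / mapped[:, 3:4]                # back to Cartesian

# Example conversion matrix: pure translation by (1, 2, 3).
T = np.eye(4)
T[:3, 3] = [1, 2, 3]
```

For a rigid-body conversion the upper-left 3×3 block would hold a rotation; for a projective conversion the bottom row would differ from (0, 0, 0, 1).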
As shown in fig. 4, further, the steps of the present application further include:
step S910: carrying out unsupervised learning classification on the motion attitude data set to generate a region attitude data set;
step S920: performing fit degree analysis based on the region attitude data set to obtain motion characteristic fit degree;
step S930: acquiring the stress distribution information of the supporting surface of the target user through a force sensor;
step S940: and correcting the target dynamic balance index based on the bearing surface stress distribution information and the motion characteristic matching degree.
Specifically, to evaluate the whole-body dynamic balance of the target user during movement, the motion posture data set is classified with an unsupervised learning method such as cluster analysis or principal component analysis, for example by body part, generating a region posture data set that contains the motion posture data of regions such as the hands, legs, and shoulders. A fit-degree analysis across the body parts, for example an analysis of the correlation between the regions' posture data, yields the corresponding motion characteristic fit degree: the higher the fit degree, the more standard the target user's motion posture.
The force distribution information of the supporting surface of the target user is obtained through the force sensor, the supporting surface refers to a contact surface on which a human body depends under various movement positions, for example, the supporting surface when the human body stands is an area between two feet, and the stress of the supporting surface can indicate the movement stability of the target user. And correcting the target dynamic balance index based on the bearing surface stress distribution information and the motion characteristic matching degree, so that the motion balance capability evaluation result is more comprehensive and accurate, and the motion training effect of the target user is improved.
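The support-surface force distribution feeds naturally into a center-of-pressure calculation: a force-weighted mean of the sensor positions. The flat (x, y, force) reading layout is an assumed sensor format; the patent only states that a force sensor measures the support-surface force distribution.

```python
def center_of_pressure(readings):
    """Centre of pressure from (x, y, force) readings on the support
    surface; drift of this point away from the reference balance point
    indicates reduced movement stability."""
    total = sum(f for _, _, f in readings)
    x = sum(x * f for x, _, f in readings) / total
    y = sum(y * f for _, y, f in readings) / total
    return x, y
```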
Further, the step S940 of modifying the target dynamic balance index based on the information of the stress distribution of the supporting surface and the degree of matching of the motion characteristics further includes:
step S941: obtaining a balance ability coefficient of the target user;
step S942: determining a target stability limit according to the balance capacity coefficient;
step S943: determining a target offset angle based on the bearing surface stress distribution information and the motion characteristic matching degree;
step S944: and obtaining a balance offset coefficient based on the difference value between the target offset angle and the target stability limit, and correcting the target dynamic balance index according to the balance offset coefficient.
Specifically, the balance ability of the target user is evaluated, for example by a balance function evaluation scale method, to obtain a corresponding balance ability coefficient, where a larger coefficient indicates a better balance ability of the user. A target stability limit is determined according to the balance ability coefficient; the target stability limit refers to the maximum angle formed between the human body and the vertical when the body tilts within the range in which it can still regain balance. A target offset angle, namely the whole-body offset angle of the target user during movement, is determined based on the supporting surface force distribution information and the motion characteristic fit degree. A balance offset coefficient is then determined based on the difference between the target offset angle and the target stability limit, where a larger difference corresponds to a smaller coefficient and a better stability of the user. The target dynamic balance index is corrected according to the balance offset coefficient, so that the motion balance capability evaluation result is more comprehensive and accurate, thereby improving the motion training effect of the target user.
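The application does not give a formula for the balance offset coefficient, so the sketch below assumes one simple mapping that satisfies the stated property (a larger margin between the stability limit and the offset angle yields a smaller coefficient) and applies a linear correction to the index; both the mapping and the correction weight are illustrative assumptions.

```python
def balance_offset_coefficient(offset_angle, stability_limit):
    # Larger margin (stability_limit - offset_angle) -> smaller coefficient.
    # Clamped to [0, 1]: 0 means fully upright, 1 means at (or past) the limit.
    ratio = offset_angle / stability_limit
    return max(0.0, min(1.0, ratio))

def correct_balance_index(raw_index, coefficient, weight=0.5):
    # Assumed linear correction: a large offset coefficient reduces the index.
    return raw_index * (1.0 - weight * coefficient)

coef = balance_offset_coefficient(offset_angle=4.0, stability_limit=8.0)
corrected = correct_balance_index(raw_index=80.0, coefficient=coef)
```

Any monotone mapping with the same direction would serve; the clamped ratio is merely the simplest choice.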
Further, the step S920 of obtaining the fitting degree of the motion characteristic further includes:
step S921: constructing a motion scene posture database, wherein the motion scene posture database comprises multi-category motion scene posture information;
step S922: extracting a feature vector from the region attitude data set to obtain region attitude vector information;
step S923: and performing action matching on the region attitude vector information and the motion scene and posture database to obtain the motion characteristic matching degree.
Specifically, a motion scene posture database is constructed that includes multi-category motion scene posture information for sports such as table tennis, badminton, volleyball and gymnastics. Feature vectors are extracted from the region posture data set to obtain region posture vector information corresponding to each body part of the target user, and action matching is performed between the region posture vector information and the motion scene posture database to obtain the fit degree of each body part under the motion action. A higher motion characteristic fit degree indicates a more standard motion posture of the target user and a better coordination among the body parts during movement. By evaluating the movement fit degree of the target user in this way, the target dynamic balance index becomes more reasonable and accurate, improving the motion training effect of the target user.
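The action matching between region posture vectors and database templates can be sketched as a per-region similarity score against the template of the same motion, averaged into one fit degree. The sketch below uses cosine similarity; the vector dimensionality, region names and template values are assumptions, since the application does not fix the similarity measure.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def motion_feature_fit(region_vectors, templates):
    """region_vectors: {region: observed feature vector} for the user.
    templates: {region: reference vector} for one motion in the database.
    Returns the mean similarity over all regions as the fit degree."""
    scores = [cosine_similarity(region_vectors[r], templates[r])
              for r in region_vectors]
    return sum(scores) / len(scores)

# Hypothetical template for one motion and one observed performance.
templates = {"hand": (1.0, 0.0, 0.5), "shoulder": (0.2, 0.9, 0.1)}
observed = {"hand": (1.0, 0.0, 0.5), "shoulder": (0.2, 0.8, 0.2)}
fit = motion_feature_fit(observed, templates)
```

A perfectly matching region scores 1.0, so a fit degree near 1.0 corresponds to a standard posture in the sense described above.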
Further, step S923 of the present application includes:
step S9231: carrying out feature point marking on the motion scene and posture database to obtain a posture feature key point set;
step S9232: determining a posture change coefficient according to the posture feature key point set;
step S9233: and performing image enhancement on the motion scene posture database based on the posture change coefficient to obtain an augmented motion scene posture database.
Specifically, feature point labeling is performed on the motion scene posture database, for example labeling the feature points of the serving hand and shoulder in a badminton serve, to obtain a corresponding posture feature key point set. A posture change coefficient is then determined according to the posture feature key point set; the posture change coefficient is used to apply changes such as scaling and image interference to the user posture while keeping the posture feature key points unchanged.
Image enhancement is performed on the motion scene posture database based on the posture change coefficient, expanding the data capacity of the motion scene postures to obtain an augmented motion scene posture database, so that the amount of data available for matching motion postures is more comprehensive. The feature point labels selectively highlight important features in the images, enhancing the sample data capacity and the anti-interference capability of the motion scene posture database, and further improving the evaluation accuracy of the target dynamic balance index.
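One way to read this augmentation step: transform each stored pose (for example, scale it about its centroid) while pinning the labeled key points, so the features that define the action are preserved. A minimal sketch under that reading; the scale-about-centroid choice and the sample pose are assumptions, not the application's method.

```python
def augment_pose(points, key_indices, scale):
    """Scale a pose about its centroid, leaving key feature points unchanged.
    points: list of (x, y) landmarks; key_indices: set of pinned landmarks."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    out = []
    for i, (x, y) in enumerate(points):
        if i in key_indices:
            out.append((x, y))  # posture feature key points are kept fixed
        else:
            out.append((cx + scale * (x - cx), cy + scale * (y - cy)))
    return out

pose = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
augmented = augment_pose(pose, key_indices={0}, scale=1.5)
```

Repeating this with several scales (or with added jitter) grows the database while every augmented sample still carries the original key points.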
In summary, the method and system for intelligently evaluating scene interactive human body actions and balance provided by the present application have the following technical effects:
The method sets a motion evaluation scene based on the motion training target; acquires images of the body type characteristic information of the target user and constructs a target human body three-dimensional model based on the body type characteristic information; performs action joint point identification on the target human body three-dimensional model; captures the motion posture of the target user based on the identified motion-associated joint point set to obtain a corresponding motion posture data set; calls a first balance index evaluation model from a motion balance index evaluation model library according to the basic information of the target user; inputs the motion posture data set and the reference balance point information of the user into the first balance index evaluation model, the output result being the target dynamic balance index; and, if the target dynamic balance index does not reach the preset balance index, reminds the target user to correct the action through a first reminding instruction. By capturing the motion posture in real time in combination with the motion scene and calling the balance index evaluation model to perform personalized balance capability evaluation of the target user, the accuracy and rationality of the motion balance capability evaluation result are improved, which in turn improves the motion training effect of the user.
Example two
Based on the same inventive concept as the scene interactive human body action and balance intelligent evaluation method in the foregoing embodiment, the present invention further provides a scene interactive human body action and balance intelligent evaluation system, as shown in fig. 5, the system includes:
a first setting unit 11, the first setting unit 11 being configured to set a motion evaluation scenario based on a motion training target;
the first construction unit 12 is configured to obtain body type feature information of a target user through the image acquisition device, and construct a target human body three-dimensional model based on the body type feature information;
a first obtaining unit 13, where the first obtaining unit 13 is configured to perform, according to the motion evaluation scenario, motion joint point identification on the target human body three-dimensional model to obtain a motion-related joint point set;
a second obtaining unit 14, where the second obtaining unit 14 is configured to capture a motion gesture of the target user through the image capturing apparatus based on the motion-related joint point set, and obtain a motion gesture data set;
a third obtaining unit 15, where the third obtaining unit 15 is configured to obtain reference balance point information according to the body type feature information and the basic information of the target user;
a first calling unit 16, where the first calling unit 16 is configured to call a first balance index evaluation model from a motion balance index evaluation model library according to the basic information of the target user;
a fourth obtaining unit 17, where the fourth obtaining unit 17 is configured to input the motion posture data set and the reference balance point information into the first balance index evaluation model to obtain a target dynamic balance index;
the first processing unit 18, the first processing unit 18 is configured to generate a first prompting instruction if the target dynamic balance index does not reach a preset balance index, where the first prompting instruction is used to prompt the target user to perform action correction.
Further, the system further comprises:
the second construction unit is used for constructing a reference motion coordinate system according to the target human body three-dimensional model;
a fifth obtaining unit, configured to capture a motion gesture of the target user according to the reference motion coordinate system, and obtain a reference gesture coordinate data set;
a third constructing unit configured to construct a scene motion coordinate system based on the motion evaluation scene;
a sixth obtaining unit, configured to perform mapping fusion on the reference attitude coordinate data set in the scene motion coordinate system to obtain a scene attitude coordinate data set;
a first generating unit configured to generate the motion attitude data set according to the scenario attitude coordinate data set.
Further, the system further comprises:
a seventh obtaining unit configured to obtain reference coordinate information and scene coordinate information of the reference balance point information;
a first determining unit, configured to perform feature point matching on the reference coordinate information and the scene coordinate information, and determine a feature conversion type;
an eighth obtaining unit, configured to obtain a reference feature point transformation matrix based on the feature transformation type;
a ninth obtaining unit, configured to perform mapping conversion on the reference posture coordinate data set according to the reference feature point conversion matrix, so as to obtain the scene posture coordinate data set.
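The mapping conversion performed by the seventh to ninth obtaining units amounts to estimating a transformation from matched feature points and applying it to every reference coordinate. The sketch below assumes the simplest conversion type, a uniform scale plus translation recovered from two matched point pairs; the application does not fix the transformation type, and rotation or full affine fits would be handled analogously.

```python
import math

def fit_scale_translation(ref_pts, scene_pts):
    """Recover s and (tx, ty) with scene = s * ref + t from two matched pairs.
    Assumes no rotation between the reference and scene coordinate systems."""
    (r0, r1), (s0, s1) = ref_pts, scene_pts
    s = math.dist(s0, s1) / math.dist(r0, r1)
    tx = s0[0] - s * r0[0]
    ty = s0[1] - s * r0[1]
    return s, (tx, ty)

def apply_transform(points, s, t):
    # Map every reference posture coordinate into the scene coordinate system.
    tx, ty = t
    return [(s * x + tx, s * y + ty) for x, y in points]

# Hypothetical matched feature points (reference -> scene).
ref = [(0.0, 0.0), (1.0, 0.0)]
scene = [(2.0, 2.0), (4.0, 2.0)]
s, t = fit_scale_translation(ref, scene)
mapped = apply_transform([(0.5, 1.0)], s, t)
```

Applying the recovered matrix to the whole reference posture coordinate data set yields the scene posture coordinate data set.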
Further, the system further comprises:
a second generation unit, configured to perform unsupervised learning classification on the motion posture data set, and generate a region posture data set;
a tenth obtaining unit, configured to perform a fitness analysis based on the region posture data set, to obtain a motion feature fitness;
an eleventh obtaining unit, configured to obtain, through a force sensor, force distribution information of a supporting surface of the target user;
the first correcting unit is used for correcting the target dynamic balance index based on the bearing surface stress distribution information and the motion characteristic matching degree.
Further, the system further comprises:
a twelfth obtaining unit, configured to obtain a balance ability coefficient of the target user;
a second determination unit for determining a target stability limit according to the balance capability coefficient;
a third determination unit, configured to determine a target offset angle based on the supporting surface stress distribution information and the motion characteristic matching degree;
and the second correction unit is used for obtaining a balance offset coefficient based on the difference value between the target offset angle and the target stability limit, and correcting the target dynamic balance index according to the balance offset coefficient.
Further, the system further comprises:
a fourth construction unit configured to construct a motion scene pose database, the motion scene pose database including multi-category motion scene pose information;
a thirteenth obtaining unit, configured to perform feature vector extraction on the region attitude data set, and obtain region attitude vector information;
a fourteenth obtaining unit, configured to perform action matching on the region posture vector information and the motion scene posture database, to obtain the motion feature matching degree.
Further, the system further comprises:
a fifteenth obtaining unit, configured to perform feature point labeling on the motion scene pose database, and obtain a pose feature key point set;
a fourth determining unit, configured to determine a posture change coefficient according to the posture feature key point set;
a sixteenth obtaining unit, configured to perform image enhancement on the motion scene pose database based on the pose change coefficient, and obtain an augmented motion scene pose database.
Various variations and specific examples of the scenario-interactive human body action and balance intelligent evaluation method in the first embodiment of fig. 1 are also applicable to the scenario-interactive human body action and balance intelligent evaluation system in the present embodiment, and through the foregoing detailed description of the scenario-interactive human body action and balance intelligent evaluation method, those skilled in the art can clearly know an implementation method of the scenario-interactive human body action and balance intelligent evaluation system in the present embodiment, so for the sake of brevity of the description, detailed descriptions are not repeated here.
In addition, the present application further provides an electronic device, which includes a bus, a transceiver, a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the transceiver, the memory, and the processor are respectively connected through the bus, and when the computer program is executed by the processor, the processes of the above-mentioned method embodiment for controlling output data are implemented, and the same technical effects can be achieved, and are not described herein again to avoid repetition.
Exemplary electronic device
In particular, referring to fig. 6, the present application further provides an electronic device comprising a bus 1110, a processor 1120, a transceiver 1130, a bus interface 1140, a memory 1150, and a user interface 1160.
In this application, the electronic device further includes: a computer program stored on the memory 1150 and executable on the processor 1120, the computer program, when executed by the processor 1120, implementing the various processes of the method embodiments of controlling output data described above.
A transceiver 1130 for receiving and transmitting data under the control of the processor 1120.
In this application, a bus architecture is represented by bus 1110. Bus 1110 may include any number of interconnected buses and bridges, and links together various circuits, including one or more processors, represented by processor 1120, and memory, represented by memory 1150.
Bus 1110 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include: the industry standard architecture (ISA) bus, the micro channel architecture (MCA) bus, the extended ISA (EISA) bus, the video electronics standards association (VESA) local bus, and the peripheral component interconnect (PCI) bus.
Processor 1120 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits in hardware or instructions in software in a processor. The processor described above includes: general purpose processors, central processing units, network processors, digital signal processors, application specific integrated circuits, field programmable gate arrays, complex programmable logic devices, programmable logic arrays, micro-control units or other programmable logic devices, discrete gates, transistor logic devices, discrete hardware components. The various methods, steps and logic blocks disclosed in this application may be implemented or performed. For example, the processor may be a single core processor or a multi-core processor, which may be integrated on a single chip or located on multiple different chips.
Processor 1120 may be a microprocessor or any conventional processor. The method steps disclosed in connection with the present application may be performed directly by a hardware decoding processor or by a combination of hardware and software modules within the decoding processor. The software modules may reside in random access memory, flash memory, read only memory, programmable read only memory, erasable programmable read only memory, registers, and the like, as is known in the art. The readable storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
The bus 1110 may also connect various other circuits, such as peripheral devices, voltage regulators, or power management circuits. The bus interface 1140 provides an interface between the bus 1110 and the transceiver 1130, as is well known in the art, and is therefore not further described in this application.
The transceiver 1130 may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. For example: the transceiver 1130 receives external data from other devices, and the transceiver 1130 transmits data processed by the processor 1120 to other devices. Depending on the nature of the computer device, a user interface 1160 may also be provided, such as: touch screen, physical keyboard, display, mouse, speaker, microphone, trackball, joystick, stylus.
It is to be appreciated that in the subject application, the memory 1150 can further include memory remotely located from the processor 1120, which can be connected to a server via a network. One or more portions of the above-described network may be an ad hoc network, an intranet, an extranet, a virtual private network, a local area network, a wireless local area network, a wide area network, a wireless wide area network, a metropolitan area network, the internet, a public switched telephone network, a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi network, or a combination of two or more of the above-described networks. For example, the cellular telephone network and the wireless network may be global system for mobile communications (GSM) devices, code division multiple access (CDMA) devices, worldwide interoperability for microwave access (WiMAX) devices, general packet radio service (GPRS) devices, wideband CDMA (WCDMA) devices, long term evolution (LTE) devices, LTE frequency division duplex devices, LTE time division duplex devices, long term evolution advanced devices, universal mobile telecommunications devices, enhanced mobile broadband devices, massive machine type communications devices, ultra-reliable low-latency communications devices, and the like.
It will be appreciated that the memory 1150 in the subject application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. Wherein the nonvolatile memory includes: a read only memory, a programmable read only memory, an erasable programmable read only memory, an electrically erasable programmable read only memory, or a flash memory.
The volatile memory includes: random access memory, which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as: static random access memory, dynamic random access memory, synchronous dynamic random access memory, double data rate synchronous dynamic random access memory, enhanced synchronous dynamic random access memory, synchronous link dynamic random access memory, and direct memory bus random access memory. The memory 1150 of the electronic device described herein includes, but is not limited to, the above-described and any other suitable types of memory.
In the present application, memory 1150 stores the following elements of operating system 1151 and application programs 1152: an executable module, a data structure, or a subset thereof, or an expanded set thereof.
Specifically, the operating system 1151 includes various device programs, such as: a framework layer, a core library layer, a driver layer, etc. for implementing various basic services and processing hardware-based tasks. Applications 1152 include various applications such as: media player, browser, used to realize various application services. A program implementing the method of the present application may be included in the application 1152. The application programs 1152 include: applets, objects, components, logic, data structures, and other computer device-executable instructions that perform particular tasks or implement particular abstract data types.
In addition, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements each process of the above method for controlling output data, and can achieve the same technical effect, and is not described herein again to avoid repetition.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A scene interactive human body action and balance intelligent evaluation method is applied to a motion balance evaluation system, the system comprises an image acquisition device, and the method comprises the following steps:
setting a motion evaluation scenario based on the motion training target;
obtaining body type characteristic information of a target user through the image acquisition device, and constructing a target human body three-dimensional model based on the body type characteristic information;
according to the motion evaluation situation, performing motion joint point identification on the target human body three-dimensional model to obtain a motion associated joint point set;
based on the motion-related joint point set, performing motion gesture capture on the target user through the image acquisition device to obtain a motion gesture data set;
obtaining reference balance point information according to the body type feature information and the basic information of the target user;
calling a first balance index evaluation model from a motion balance index evaluation model library according to the basic information of the target user;
inputting the motion attitude data set and the reference balance point information into the first balance index evaluation model to obtain a target dynamic balance index;
and if the target dynamic balance index does not reach the preset balance index, generating a first reminding instruction, wherein the first reminding instruction is used for reminding the target user to correct the action.
2. The method of claim 1, wherein the obtaining a set of motion gesture data comprises:
constructing a reference motion coordinate system according to the target human body three-dimensional model;
capturing the motion attitude of the target user according to the reference motion coordinate system to obtain a reference attitude coordinate data set;
constructing a scene motion coordinate system based on the motion evaluation scene;
mapping and fusing the reference attitude coordinate data set in the scene motion coordinate system to obtain a scene attitude coordinate data set;
and generating the motion attitude data set according to the scene attitude coordinate data set.
3. The method of claim 2, wherein the obtaining a scene pose coordinate data set comprises:
acquiring reference coordinate information and scene coordinate information of the reference balance point information;
matching feature points of the reference coordinate information and the scene coordinate information to determine a feature conversion type;
obtaining a reference feature point conversion matrix based on the feature conversion type;
and mapping and converting the reference attitude coordinate data set according to the reference characteristic point conversion matrix to obtain the scene attitude coordinate data set.
4. The method of claim 1, wherein the method further comprises:
carrying out unsupervised learning classification on the motion attitude data set to generate a region attitude data set;
performing fit degree analysis based on the region attitude data set to obtain motion characteristic fit degree;
acquiring the stress distribution information of the supporting surface of the target user through a force sensor;
and correcting the target dynamic balance index based on the bearing surface stress distribution information and the motion characteristic matching degree.
5. The method of claim 4, wherein the modifying the target dynamic balance index based on the bearing surface force distribution information and the kinematic feature fitness comprises:
obtaining a balance ability coefficient of the target user;
determining a target stability limit according to the balance capacity coefficient;
determining a target offset angle based on the bearing surface stress distribution information and the motion characteristic matching degree;
and obtaining a balance offset coefficient based on the difference value between the target offset angle and the target stability limit, and correcting the target dynamic balance index according to the balance offset coefficient.
6. The method of claim 4, wherein said obtaining a motion feature fit comprises:
constructing a motion scene posture database, wherein the motion scene posture database comprises multi-category motion scene posture information;
extracting a feature vector from the region attitude data set to obtain region attitude vector information;
and performing action matching on the region attitude vector information and the motion scene posture database to obtain the motion characteristic matching degree.
7. The method of claim 6, wherein the method further comprises:
carrying out feature point marking on the motion scene posture database to obtain a posture feature key point set;
determining a posture change coefficient according to the posture feature key point set;
and performing image enhancement on the motion scene posture database based on the posture change coefficient to obtain an augmented motion scene posture database.
8. A situational interactive human body motion and balance intelligent assessment system, said system comprising:
a first setting unit configured to set a motion evaluation scenario based on a motion training target;
the first construction unit is used for obtaining body type characteristic information of a target user through an image acquisition device and constructing a target human body three-dimensional model based on the body type characteristic information;
a first obtaining unit, configured to perform, according to the motion evaluation scenario, motion joint point identification on the target human body three-dimensional model to obtain a motion-associated joint point set;
a second obtaining unit, configured to capture, by the image acquisition device, a motion pose of the target user based on the motion-related joint point set, and obtain a motion pose data set;
a third obtaining unit, configured to obtain reference balance point information according to the body type feature information and basic information of a target user;
the first calling unit is used for calling a first balance index evaluation model from a motion balance index evaluation model library according to the basic information of the target user;
a fourth obtaining unit, configured to input the motion posture data set and the reference balance point information into the first balance index evaluation model, so as to obtain a target dynamic balance index;
the first processing unit is used for generating a first reminding instruction if the target dynamic balance index does not reach a preset balance index, and the first reminding instruction is used for reminding the target user to correct the action.
9. A situational interactive human body movement and balance intelligent assessment electronic device comprising a bus, a transceiver, a memory, a processor and a computer program stored on the memory and executable on the processor, the transceiver, the memory and the processor being connected via the bus, characterized in that the computer program realizes the steps of the method according to any one of claims 1-7 when executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1-7.
CN202210356000.XA 2022-04-06 2022-04-06 Scene interactive human body action and balance intelligent evaluation method and system Pending CN114495177A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210356000.XA CN114495177A (en) 2022-04-06 2022-04-06 Scene interactive human body action and balance intelligent evaluation method and system


Publications (1)

Publication Number Publication Date
CN114495177A true CN114495177A (en) 2022-05-13

Family

ID=81488849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210356000.XA Pending CN114495177A (en) 2022-04-06 2022-04-06 Scene interactive human body action and balance intelligent evaluation method and system

Country Status (1)

Country Link
CN (1) CN114495177A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205740A (en) * 2022-07-08 2022-10-18 温州医科大学 Body-building exercise auxiliary teaching method and system
CN115240367A (en) * 2022-09-23 2022-10-25 杭州中芯微电子有限公司 UWB (ultra wide band) intelligent positioning based user management early warning method and system
CN115240856A (en) * 2022-08-29 2022-10-25 成都体育学院 Exercise health assessment method, system and equipment based on exercise posture
CN115813377A (en) * 2023-01-05 2023-03-21 北京蓝田医疗设备有限公司 Intelligent posture assessment method and system
CN116246780A (en) * 2022-12-14 2023-06-09 北京诺亦腾科技有限公司 Method and device for evaluating fitness, electronic equipment and storage medium
WO2024020838A1 (en) * 2022-07-27 2024-02-01 Intel Corporation Apparatus, method, device and medium for dynamic balance ability evaluation
CN117558457A (en) * 2024-01-11 2024-02-13 合肥特斯塔信息科技有限公司 Customer portrait analysis method in customer relationship management system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105797350A (en) * 2016-03-18 2016-07-27 深圳大学 Intelligent method and system for body building posture recognition, evaluation, early-warning and intensity estimation
US20180085044A1 (en) * 2016-09-27 2018-03-29 Diversified Healthcare Development, Llc Clinical assessment of balance on a platform with controlled stability
CN109731292A (en) * 2018-12-29 2019-05-10 北京工业大学 A kind of balanced capacity test and training system and method based on virtual reality technology
CN113100708A (en) * 2020-01-13 2021-07-13 北京大学 Quantitative evaluation system for human body balance function
US20210322856A1 (en) * 2018-09-14 2021-10-21 Mirrorar Llc Systems and methods for assessing balance and form during body movement

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205740A (en) * 2022-07-08 2022-10-18 温州医科大学 Auxiliary teaching method and system for fitness exercise
WO2024020838A1 (en) * 2022-07-27 2024-02-01 Intel Corporation Apparatus, method, device and medium for dynamic balance ability evaluation
CN115240856A (en) * 2022-08-29 2022-10-25 成都体育学院 Exercise health assessment method, system and equipment based on exercise posture
CN115240367A (en) * 2022-09-23 2022-10-25 杭州中芯微电子有限公司 User management and early-warning method and system based on UWB (ultra-wideband) intelligent positioning
CN116246780A (en) * 2022-12-14 2023-06-09 北京诺亦腾科技有限公司 Method and device for evaluating fitness, electronic equipment and storage medium
CN115813377A (en) * 2023-01-05 2023-03-21 北京蓝田医疗设备有限公司 Intelligent posture assessment method and system
CN117558457A (en) * 2024-01-11 2024-02-13 合肥特斯塔信息科技有限公司 Customer profile analysis method in a customer relationship management system
CN117558457B (en) * 2024-01-11 2024-03-26 合肥特斯塔信息科技有限公司 Customer profile analysis method in a customer relationship management system

Similar Documents

Publication Publication Date Title
CN114495177A (en) Scene interactive human body action and balance intelligent evaluation method and system
US11468612B2 (en) Controlling display of a model based on captured images and determined information
CN108153421B (en) Somatosensory interaction method and device and computer-readable storage medium
KR102594938B1 (en) Apparatus and method for comparing and correcting sports posture using neural network
CN110428486B (en) Virtual interaction fitness method, electronic equipment and storage medium
CN107930048B (en) Space somatosensory recognition motion analysis system and motion analysis method
KR102241414B1 (en) Electronic device for providing a feedback for a specivic motion using a machine learning model a and machine learning model and method for operating thereof
Kwon et al. Real-Time workout posture correction using OpenCV and MediaPipe
CN113569753B (en) Method, device, storage medium and electronic equipment for comparing actions in video
JP2022043264A (en) Motion evaluation system
Yang et al. Human exercise posture analysis based on pose estimation
CN113409651B (en) Live broadcast body building method, system, electronic equipment and storage medium
Pham et al. A study on skeleton-based action recognition and its application to physical exercise recognition
JP7241004B2 (en) Body motion analysis device, body motion analysis system, body motion analysis method, and program
CN111353347B (en) Action recognition error correction method, electronic device, and storage medium
CN113139506A (en) Step counting method, step counting equipment and storage medium
CN111353345B (en) Method, apparatus, system, electronic device, and storage medium for providing training feedback
JP7103998B2 (en) Skeleton extraction method, equipment and program
CN113569775B (en) Mobile terminal real-time 3D human motion capturing method and system based on monocular RGB input, electronic equipment and storage medium
CN113842622A (en) Motion teaching method, device, system, electronic equipment and storage medium
CN112257642B (en) Human body continuous motion similarity evaluation method and evaluation device
JP7482471B2 (en) How to generate a learning model
CN211180839U (en) Motion teaching equipment and motion teaching system
CN110148202B (en) Method, apparatus, device and storage medium for generating image
KR20230009676A (en) 3d human pose estimation apparatus and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination