CN114373531A - Behavior action monitoring and correcting method, behavior action monitoring and correcting system, electronic equipment and medium - Google Patents


Info

Publication number
CN114373531A
CN114373531A (application CN202210186769.1A); granted as CN114373531B
Authority
CN
China
Prior art keywords
action
current
information
video data
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210186769.1A
Other languages
Chinese (zh)
Other versions
CN114373531B (en)
Inventor
黄金叶
陈磊
陈予涵
陈予琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qiyang Special Equipment Technology Engineering Co ltd
Original Assignee
Shenzhen Qiyang Special Equipment Technology Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qiyang Special Equipment Technology Engineering Co ltd filed Critical Shenzhen Qiyang Special Equipment Technology Engineering Co ltd
Priority to CN202210186769.1A priority Critical patent/CN114373531B/en
Publication of CN114373531A publication Critical patent/CN114373531A/en
Application granted granted Critical
Publication of CN114373531B publication Critical patent/CN114373531B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to the technical field of action detection, and provides a behavior action monitoring and correcting method, a behavior action monitoring and correcting system, an electronic device, and a medium. The method comprises the following steps: acquiring action video data; obtaining all human body key point information in the current action video data according to the current action video data; obtaining action information corresponding to the current action video data according to all the human body key point information; obtaining an action type corresponding to the current action information according to the current action information and a pre-stored skeleton database; and performing action evaluation on the current action information according to the action data of that action type, and outputting an action evaluation result. The system comprises an action video data acquisition module, a human body key point information acquisition module, an action information calculation module, an action type calculation module, and an action evaluation module, which are communicatively connected in sequence. The invention improves the convenience and accuracy of fitness action evaluation.

Description

Behavior action monitoring and correcting method, behavior action monitoring and correcting system, electronic equipment and medium
Technical Field
The present invention relates to the field of motion detection technologies, and in particular, to a behavior motion monitoring and correcting method, system, electronic device, and medium.
Background
With the continuous improvement of living standards, more and more people pay attention to their physical health, and the gymnasium has become the first choice for exercise. Many fitness novices, however, are unfamiliar with gym exercises: their movements fail to achieve the intended conditioning and shaping effect, and over time incorrect form can even injure the body. This demand gave rise to the profession of the fitness coach. A fitness coach solves the above problems, but is expensive and unaffordable for many users.
To address the cost problem, various fitness applications such as Keep and fitness accessories such as Xiaomi smart bands are now available at low cost: a user can follow instructional videos in the fitness software and record fitness data with the accessory. In implementing the present invention, however, the inventors found at least the following problem in the prior art: existing fitness software and fitness accessories provide no action detection, i.e. action correction, function. Whether a user's movement is standard can only be judged by the user's own observation or with the assistance of others, which is inconvenient for the user's exercise; moreover, human visual judgment carries a certain error, which is detrimental to improving the safety of exercise.
Disclosure of Invention
The present invention aims to solve the above technical problems at least to some extent, and provides a behavior action monitoring and correcting method, system, electronic device, and medium.
The technical scheme adopted by the invention is as follows:
in a first aspect, the present invention provides a behavior monitoring and correcting method, including:
acquiring action video data;
obtaining all human body key point information in the current action video data according to the current action video data;
obtaining action information corresponding to the current action video data according to all human body key point information in the current action video data;
obtaining an action type corresponding to the current action information according to the current action information and a pre-stored skeleton database; wherein the skeleton database stores action data for a plurality of action types, and the action data of each action type comprises all the standard joint angle data of the standard action of that type;
and performing action evaluation on the current action information according to the action data of the action type corresponding to the current action information, and outputting an action evaluation result.
The invention can acquire action video data with an acquisition module as simple as a monocular camera, then sequentially obtain the human body key point information, the corresponding action information, and the action type from the action video data, and finally evaluate the current action information against the action data of the corresponding action type in the skeleton database to obtain an action evaluation result. In this process, only a module capable of capturing video of the user's movement is required for the subsequent action evaluation, which is performed automatically, avoiding dependence on manual judgment while achieving high evaluation accuracy and a wide range of usage scenarios, giving the invention value for popularization and application.
In one possible design, obtaining all the human body key point information in the current motion video data according to the current motion video data includes:
calling a human body key point detection model, and inputting current motion video data into the current human body key point detection model to obtain all human body key point information in the current motion video data; and all the human body key point information in the current action video data is three-dimensional position information.
In one possible design, the three-dimensional human body key point detection model employs the VideoPose3D model.
In one possible design, after obtaining all the human body key point information in the current motion video data, the method further includes:
judging whether the four-limb key point information is present in the current action video data; if so, obtaining the action information corresponding to the current action video data according to all the human body key point information in the current action video data; if not, outputting angle adjustment prompt information.
In one possible design, obtaining an action type corresponding to the current action information according to the current action information and a pre-stored skeleton database includes:
calculating joint angle data in the current action information;
respectively calculating the similarity between the current joint angle data and the action data of all action types in the skeleton database according to a pre-stored skeleton database to obtain a plurality of similarities;
obtaining the maximum similarity according to the plurality of similarities; wherein, the maximum similarity is:
[maximum-similarity formula: present only as an embedded image in the original publication]
where A_i denotes the i-th joint angle datum in the current action information, A_i' denotes the standard joint angle datum of the corresponding joint for a given action type in the skeleton database, i indexes the joint angle data in the current action information, and n denotes the total number of joint angle data;
and obtaining the action type corresponding to the current action information in the skeleton database according to the maximum similarity.
In one possible design, performing action evaluation on the current action information according to action data of an action type corresponding to the current action information, and outputting an action evaluation result, includes:
based on the action type corresponding to the current action information, carrying out coordinate system transformation on all human body key point information in the current action video data to obtain transformed human body key point information;
obtaining transformed action information corresponding to the current action video data according to the transformed human body key point information;
calculating joint angle data in the current transformed motion information, and obtaining a space vector of each bone segment in the current transformed motion information according to the joint angle data in the current transformed motion information;
acquiring space vectors of any two bone segments in the space vectors of all the bone segments in the current transformed action information, and calculating an included angle of the space vectors of the two current bone segments; wherein, the included angle of the space vectors of the current two bone segments is:
θ = arccos( (a · b) / (|a| |b|) )
where a and b respectively denote the space vectors of the current two bone segments;
and calculating the angle difference between the included angle of the space vectors of the two current bone segments and the standard included angle of the space vector of the corresponding bone segment in the action type corresponding to the current action information in the bone database, and obtaining an action evaluation result according to the current angle difference.
In one possible design, after outputting the action evaluation result, the method further includes:
and outputting action standard degree prompt information.
In a second aspect, the invention provides a behavior monitoring and correcting system, which is used for implementing any one of the behavior monitoring and correcting methods; the behavior and action monitoring and correcting system comprises an action video data acquisition module, a human body key point information acquisition module, an action information calculation module, an action type calculation module and an action evaluation module which are sequentially connected in a communication manner,
the action video data acquisition module is used for acquiring action video data;
the human body key point information acquisition module is used for acquiring all human body key point information in the current action video data according to the current action video data;
the action information calculation module is used for obtaining action information corresponding to the current action video data according to all human body key point information in the current action video data;
the action type calculation module is used for obtaining an action type corresponding to the current action information according to the current action information and a pre-stored skeleton database; wherein the skeleton database stores action data for a plurality of action types, and the action data of each action type comprises all the standard joint angle data of the standard action of that type;
and the action evaluation module is used for carrying out action evaluation on the current action information according to the action data of the action type corresponding to the current action information and outputting an action evaluation result.
In a third aspect, the present invention provides an electronic device, comprising:
a memory for storing computer program instructions; and
a processor for executing the computer program instructions to perform the operations of the behavior action monitoring and correcting method according to any one of the above designs.
In a fourth aspect, the invention provides a computer-readable storage medium for storing computer-readable computer program instructions, which are configured to, when executed, perform the operations of the behavior action monitoring and correcting method according to any one of the above designs.
Drawings
FIG. 1 is a flow chart of a behavior monitoring and correction method of the present invention;
FIG. 2 is an example frame of image data in motion video data;
FIG. 3 is a schematic diagram of human key point information obtained from the image data shown in FIG. 2;
FIG. 4 is an exemplary graphical representation of human key point information when a human is squatting against a wall;
FIG. 5 is a schematic diagram of the human body key point information after the coordinate system transformation of FIG. 4;
FIG. 6 is a block diagram of a behavior monitoring and correcting system according to the present invention;
fig. 7 is a block diagram of an electronic device in the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, B exists alone, or A and B exist simultaneously.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
Example 1:
In a first aspect, the present embodiment provides a behavior action monitoring and correcting method, which may be executed by, but is not limited to, a computer device or a virtual machine with certain computing resources, for example a personal computer (PC; desktop computers, notebook computers, mini-notebooks, tablet computers, and ultrabooks all belong to this category), a smart phone, a tablet (PAD), an electronic device such as a wearable device, or a virtual machine hypervisor, so as to improve the convenience and accuracy of fitness action evaluation.
As shown in fig. 1, the behavior action monitoring and correcting method may include, but is not limited to, the following steps:
S1, acquiring action video data;
S2, obtaining all human body key point information in the current action video data according to the current action video data;
Specifically, in this embodiment, obtaining all the human body key point information in the current action video data according to the current action video data in step S2 includes: calling a human body key point detection model and inputting the current action video data into it to obtain all the human body key point information in the current action video data, where all the human body key point information is three-dimensional position information.
Specifically, after the current action video data is input into the human body key point detection model, the model first obtains two-dimensional human body key point information from the video, and each two-dimensional key point is then lifted to a three-dimensional key point through back projection and semi-supervised learning.
As an example, fig. 2 is a schematic diagram of a certain frame of image data in the action video data, and the human body key point information obtained from this frame is shown in fig. 3.
In this embodiment, the three-dimensional human body key point detection model may be, but is not limited to, any model that lifts two-dimensional human body key points to three-dimensional ones, such as SemGCN, VideoPose3D, or Simple3DPose, and may also combine a two-dimensional key point model with a three-dimensional one.
S3, judging whether the four-limb key point information is present in the current action video data; if so, proceeding to step S4; if not, outputting angle adjustment prompt information so that the user can adjust the shooting angle until the four-limb key point information appears in the action video data. It should be noted that human body key points usually correspond to joints of the human body with a certain degree of freedom, such as the neck, shoulder, elbow, wrist, waist, knee, and ankle joints;
It should be noted that when the four limbs of the human body are not fully within the action video data, directly performing the subsequent action detection reduces its precision and makes the final action evaluation result inaccurate. In this embodiment, before action detection is performed, it is determined whether the four-limb key point information is present in the action video data, and subsequent action detection is performed only when all four limbs are in frame, thereby improving the accuracy of subsequent action detection.
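The limb-visibility check of step S3 can be sketched in Python as follows; the joint index map, the 17-joint layout, and the confidence threshold are illustrative assumptions rather than values taken from this disclosure:

```python
import numpy as np

# Hypothetical joint index map; real detection models (e.g. COCO-style
# skeletons used by VideoPose3D) define their own joint ordering.
LIMB_JOINTS = {"l_elbow": 7, "r_elbow": 8, "l_wrist": 9, "r_wrist": 10,
               "l_knee": 13, "r_knee": 14, "l_ankle": 15, "r_ankle": 16}

def limbs_visible(keypoints, confidences, threshold=0.3):
    """Return True if every limb key point was detected with sufficient confidence.

    keypoints:   (J, 3) array of 3-D joint positions.
    confidences: (J,)   array of per-joint detection confidences.
    """
    return all(confidences[idx] >= threshold for idx in LIMB_JOINTS.values())

conf = np.full(17, 0.9)
kps = np.zeros((17, 3))
print(limbs_visible(kps, conf))            # all limbs in frame
conf[LIMB_JOINTS["l_ankle"]] = 0.1
print(limbs_visible(kps, conf))            # left ankle missing -> prompt angle adjustment
```

In practice the confidence values would come from the key point detection model; whenever the check fails, the angle adjustment prompt of step S3 is emitted instead of proceeding to step S4.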
S4, obtaining action information corresponding to the current action video data according to all human body key point information in the current action video data;
S5, obtaining an action type corresponding to the current action information according to the current action information and a pre-stored skeleton database; wherein the skeleton database stores action data for a plurality of action types, and the action data of each action type comprises all the standard joint angle data of the standard action of that type;
In this embodiment, obtaining the action type corresponding to the current action information according to the current action information and the pre-stored skeleton database in step S5 includes:
s501, calculating joint angle data in the current action information;
s502, respectively calculating the similarity between the current joint angle data and the action data of all action types in the skeleton database according to a pre-stored skeleton database to obtain a plurality of similarities;
s503, obtaining the maximum similarity according to the multiple similarities; wherein, the maximum similarity is:
[maximum-similarity formula: present only as an embedded image in the original publication]
where A_i denotes the i-th joint angle datum in the current action information, A_i' denotes the standard joint angle datum of the corresponding joint for a given action type in the skeleton database, i indexes the joint angle data in the current action information, and n denotes the total number of joint angle data;
and S504, obtaining the action type corresponding to the current action information in the skeleton database according to the maximum similarity.
And S6, performing action evaluation on the current action information according to the action data of the action type corresponding to the current action information, and outputting an action evaluation result.
In this embodiment, the step S6 of evaluating the motion of the current motion information based on the motion data of the motion type corresponding to the current motion information and outputting the result of evaluating the motion includes:
s601, based on the action type corresponding to the current action information, performing coordinate system transformation on all human body key point information in the current action video data to obtain transformed human body key point information;
for example, fig. 4 is a schematic diagram showing the initially obtained key point information of the human body when the human body squats against a wall, and in order to facilitate the motion evaluation, the key point information of the human body in fig. 4 is transformed by a coordinate system so as to enable the position of the human body to be parallel to the ground or the wall; after the coordinate system transformation, the human body key point information schematic diagram shown in fig. 5 can be obtained.
It should be appreciated that the transformed human body keypoint information is keypoint information in a coordinate system matched with the motion type corresponding to the current motion information, so that the current motion video data is matched with the coordinates of the information in the skeleton database, and the subsequent motion detection is facilitated.
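The coordinate-system transformation of step S601 amounts to rigidly rotating the key points into a reference frame aligned with the action type; the sketch below shows a rotation about the vertical axis, with the axis choice and angle assumed for illustration (in practice they would be derived from the reference pose of the matched action type):

```python
import numpy as np

def rotate_about_z(points, theta):
    """Rigidly rotate an (N, 3) array of key points about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T

# A key point on the x-axis rotated by 90 degrees ends up on the y-axis,
# e.g. aligning the subject's frontal plane with a wall-parallel reference frame.
keypoints = np.array([[1.0, 0.0, 0.5]])
aligned = rotate_about_z(keypoints, np.pi / 2)
```

After this alignment, the transformed key points of fig. 5 can be compared directly against the skeleton-database entries.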
S602, obtaining transformed action information corresponding to the current action video data according to the transformed human body key point information;
s603, calculating joint angle data in the current transformed motion information, and obtaining a space vector of each bone segment in the current transformed motion information according to the joint angle data in the current transformed motion information;
s604, acquiring space vectors of any two bone segments in the space vectors of the bone segments in the current transformed action information, and calculating an included angle of the space vectors of the two current bone segments; wherein, the included angle of the space vectors of the current two bone segments is:
θ = arccos( (a · b) / (|a| |b|) )
where a and b respectively denote the space vectors of the current two bone segments;
For example, to judge whether the knee is straightened, the included angle corresponding to the thigh and shank segments is calculated; the included angle of the current two joint vectors is:
θ = arccos( (v_thigh · v_shank) / (|v_thigh| |v_shank|) )
where v_thigh and v_shank respectively denote the space vectors of the thigh segment and the shank segment;
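The included-angle computation of step S604 and the knee example above is the standard arccosine of the normalized dot product; a self-contained sketch (the thigh and shank vectors are hypothetical values):

```python
import math

def vector_angle(a, b):
    """Included angle, in degrees, between two bone-segment space vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    cos_theta = max(-1.0, min(1.0, dot / (norm_a * norm_b)))
    return math.degrees(math.acos(cos_theta))

# Hypothetical thigh and shank segment vectors: a 90-degree knee angle.
thigh = (0.0, 1.0, 0.0)
shank = (0.0, 0.0, 1.0)
knee_angle = vector_angle(thigh, shank)
```

The angle difference of step S605 is then simply the absolute difference between this included angle and the standard included angle stored in the skeleton database.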
s605, calculating an angle difference between an included angle of the space vectors of the two current bone segments and a standard included angle of the space vector of the corresponding bone segment in the action type corresponding to the current action information in the bone database, and obtaining an action evaluation result according to the current angle difference.
And S7, outputting action standard degree prompt information. Specifically, the prompt information is derived from the current angle difference and can be output in the form of voice, text, or the like, so that the user learns in time how standard the fitness action is; details are not repeated here.
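The standard-degree prompt of step S7 can be derived from the angle difference of step S605 as below; the tolerance value and the message wording are assumptions, since the disclosure does not fix them:

```python
def standard_prompt(angle_diff, tolerance=10.0):
    """Map an angle difference (degrees) to an action standard-degree prompt."""
    if angle_diff <= tolerance:
        return "Action is standard."
    return f"Deviation of {angle_diff:.1f} degrees - please adjust your form."

# The returned string could equally be routed to a text-to-speech engine for voice output.
print(standard_prompt(4.2))    # within tolerance
print(standard_prompt(23.5))   # exceeds tolerance
```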
This embodiment can acquire action video data with an acquisition module as simple as a monocular camera, then sequentially obtain the human body key point information, the corresponding action information, and the action type from the action video data, and finally evaluate the current action information against the action data of the corresponding action type in the skeleton database to obtain an action evaluation result. In this process, only a module capable of capturing video of the user's movement is required for the subsequent action evaluation, which is performed automatically, avoiding dependence on manual judgment while achieving high evaluation accuracy and a wide range of usage scenarios, giving the invention value for popularization and application.
Example 2:
the embodiment provides a behavior monitoring and correcting system, which is used for implementing the behavior monitoring and correcting method in embodiment 1; as shown in fig. 6, the behavior and action monitoring and correcting system comprises an action video data acquisition module, a human body key point information acquisition module, an action information calculation module, an action type calculation module and an action evaluation module, which are sequentially connected in a communication manner,
the action video data acquisition module is used for acquiring action video data;
the human body key point information acquisition module is used for acquiring all human body key point information in the current action video data according to the current action video data;
the action information calculation module is used for obtaining action information corresponding to the current action video data according to all human body key point information in the current action video data;
the action type calculation module is used for obtaining an action type corresponding to the current action information according to the current action information and a pre-stored skeleton database; wherein the skeleton database stores action data for a plurality of action types, and the action data of each action type comprises all the standard joint angle data of the standard action of that type;
and the action evaluation module is used for carrying out action evaluation on the current action information according to the action data of the action type corresponding to the current action information and outputting an action evaluation result.
Example 3:
On the basis of embodiment 1 or 2, this embodiment discloses an electronic device, which may be a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like. The electronic device may also be referred to as a terminal, a portable terminal, a desktop terminal, or the like. As shown in fig. 7, the electronic device includes:
a memory for storing computer program instructions; and
a processor for executing the computer program instructions to perform the operations of the behavior action monitoring and correcting method according to embodiment 1.
In particular, the processor 301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen.
Memory 302 may include one or more computer-readable storage media, which may be non-transitory. Memory 302 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 302 is used to store at least one instruction for execution by the processor 301 to implement the behavior action monitoring and correcting method provided in embodiment 1 herein.
In some embodiments, the terminal may further include: a communication interface 303 and at least one peripheral device. The processor 301, the memory 302 and the communication interface 303 may be connected by a bus or signal lines. Various peripheral devices may be connected to communication interface 303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, a display screen 305, and a power source 306.
The communication interface 303 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 301 and the memory 302. In some embodiments, processor 301, memory 302, and communication interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the communication interface 303 may be implemented on a single chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 304 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 304 communicates with communication networks and other communication devices via electromagnetic signals.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof.
The power supply 306 is used to power various components in the electronic device.
Embodiment 4:
On the basis of any one of Embodiments 1 to 3, this embodiment discloses a computer-readable storage medium for storing computer-readable computer program instructions, the computer program instructions being configured to, when executed, perform the operations of the behavior action monitoring and correcting method according to Embodiment 1.
It should be noted that the functions described herein, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; alternatively, they may be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that modifications may still be made to the technical solutions described in the embodiments, or equivalent replacements may be made to some of their technical features, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Finally, it should be noted that the present invention is not limited to the above alternative embodiments, and various other forms of products can be derived by anyone in light of the present invention. The above detailed description should not be construed as limiting the scope of protection of the present invention, which is defined by the claims; the description may be used to interpret the claims.

Claims (10)

1. A behavior action monitoring and correcting method, characterized by comprising the following steps:
acquiring action video data;
obtaining all human body key point information in the current action video data according to the current action video data;
obtaining action information corresponding to the current action video data according to all human body key point information in the current action video data;
obtaining an action type corresponding to the current action information according to the current action information and a pre-stored skeleton database; wherein the skeleton database comprises action data of a plurality of action types, and the action data of each action type comprises all standard joint angle data of the standard action of the current action type;
and performing action evaluation on the current action information according to the action data of the action type corresponding to the current action information, and outputting an action evaluation result.
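As an illustrative sketch only (not the patent's implementation), the five claimed steps can be viewed as a pipeline in which each stage is a pluggable component; all function names here are assumptions, and the stand-in stages merely show the data flow:

```python
def monitor_and_correct(video_data, skeleton_db, detect_keypoints, extract_action,
                        classify_action, evaluate_action):
    """Run the claimed steps in order; each stage is injected as a callable."""
    keypoints = detect_keypoints(video_data)                  # step 2: human key points
    action_info = extract_action(keypoints)                   # step 3: action information
    action_type = classify_action(action_info, skeleton_db)   # step 4: action type
    return evaluate_action(action_info, skeleton_db[action_type])  # step 5: evaluation

# Minimal stand-in stages: one joint angle, one action type in the database.
result = monitor_and_correct(
    video_data="frames",
    skeleton_db={"squat": [90.0]},
    detect_keypoints=lambda v: [[(0.0, 0.0, 0.0)]],
    extract_action=lambda kps: [92.0],
    classify_action=lambda info, db: "squat",
    evaluate_action=lambda info, std: abs(info[0] - std[0]),
)
print(result)  # 2.0 (angle deviation from the standard action)
```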
2. The behavior action monitoring and correcting method according to claim 1, characterized in that obtaining all human body key point information in the current action video data according to the current action video data comprises the following steps:
calling a human body key point detection model, and inputting the current action video data into the current human body key point detection model to obtain all human body key point information in the current action video data; wherein all the human body key point information in the current action video data is three-dimensional position information.
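The three-dimensional key point positions obtained here feed the joint-angle computations in the later claims. A minimal sketch of deriving one joint angle from three 3D key points (the point layout and names are assumptions for illustration, not the patent's code):

```python
import math

def joint_angle(p_a, p_b, p_c):
    """Angle at key point p_b (degrees), formed by segments p_b->p_a and p_b->p_c."""
    v1 = [a - b for a, b in zip(p_a, p_b)]
    v2 = [c - b for c, b in zip(p_c, p_b)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

# Example: shoulder, elbow, wrist positions forming a right angle at the elbow.
shoulder, elbow, wrist = (0.0, 1.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
print(joint_angle(shoulder, elbow, wrist))  # 90.0
```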
3. The behavior action monitoring and correcting method according to claim 2, characterized in that the three-dimensional human body key point detection model adopts a video 3D model.
4. The behavior action monitoring and correcting method according to claim 1, characterized in that, after obtaining all the human body key point information in the current action video data, the method further comprises the following steps:
judging whether key point information of the four limbs exists in the current action video data; if so, obtaining the action information corresponding to the current action video data according to all the human body key point information in the current action video data; if not, outputting angle adjustment prompt information.
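A sketch of this visibility check, assuming the detector reports a per-key-point confidence score (the key point names and the 0.5 threshold are assumptions for illustration):

```python
# Limb key points that must all be detected before action analysis can proceed.
LIMB_KEYPOINTS = {"left_elbow", "right_elbow", "left_wrist", "right_wrist",
                  "left_knee", "right_knee", "left_ankle", "right_ankle"}

def limbs_visible(detections, min_confidence=0.5):
    """Return True if every limb key point is detected with sufficient confidence."""
    return all(detections.get(name, 0.0) >= min_confidence for name in LIMB_KEYPOINTS)

dets = {name: 0.9 for name in LIMB_KEYPOINTS}
print(limbs_visible(dets))   # True: proceed to compute action information
dets["left_wrist"] = 0.1
print(limbs_visible(dets))   # False: output the angle adjustment prompt instead
```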
5. The behavior action monitoring and correcting method according to claim 1, characterized in that obtaining the action type corresponding to the current action information according to the current action information and the pre-stored skeleton database comprises the following steps:
calculating joint angle data in the current action information;
respectively calculating the similarity between the current joint angle data and the action data of all action types in the skeleton database according to a pre-stored skeleton database to obtain a plurality of similarities;
obtaining the maximum similarity according to the plurality of similarities; wherein, the maximum similarity is:
[Formula image: the maximum similarity is computed from the similarities between the current joint angle data A_i and the corresponding standard joint angle data A_i′, for i = 1 to n.]
wherein A_i represents a joint angle datum in the current action information, A_i′ represents the standard joint angle data corresponding to that joint in any action type in the skeleton database, i denotes the index of a joint angle datum in the current action information, and n represents the total number of joint angle data;
and obtaining the action type corresponding to the current action information in the skeleton database according to the maximum similarity.
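The patent's similarity formula is given only as an image, so the sketch below substitutes an assumed normalized absolute-difference similarity as a stand-in; only the overall "score every action type, keep the maximum" structure follows the claim:

```python
def similarity(angles, standard):
    """Similarity in [0, 1] between two angle sequences.
    The 1 - mean(|A_i - A_i'|)/180 formula is an assumption, not the patent's."""
    n = len(angles)
    return 1.0 - sum(abs(a, ) if False else abs(a - s) for a, s in zip(angles, standard)) / (180.0 * n)

def classify(angles, skeleton_db):
    """Return the action type whose standard angles give the maximum similarity."""
    return max(skeleton_db, key=lambda t: similarity(angles, skeleton_db[t]))

db = {"squat": [80.0, 85.0, 170.0], "lunge": [120.0, 95.0, 140.0]}
print(classify([82.0, 88.0, 168.0], db))  # squat
```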
6. The behavior action monitoring and correcting method according to claim 1, characterized in that performing action evaluation on the current action information according to the action data of the action type corresponding to the current action information and outputting an action evaluation result comprises the following steps:
based on the action type corresponding to the current action information, carrying out coordinate system transformation on all human body key point information in the current action video data to obtain transformed human body key point information;
obtaining transformed action information corresponding to the current action video data according to the transformed human body key point information;
calculating joint angle data in the current transformed action information, and obtaining a space vector of each bone segment in the current transformed action information according to the joint angle data in the current transformed action information;
acquiring the space vectors of any two bone segments from the space vectors of all the bone segments in the current transformed action information, and calculating the included angle between the space vectors of the current two bone segments; wherein the included angle between the space vectors of the current two bone segments is:
θ = arccos( (a · b) / (|a| · |b|) )
wherein a and b respectively represent the space vectors of the current two bone segments, and θ represents the included angle;
and calculating the angle difference between the included angle of the space vectors of the two current bone segments and the standard included angle of the space vector of the corresponding bone segment in the action type corresponding to the current action information in the bone database, and obtaining an action evaluation result according to the current angle difference.
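The included-angle computation above is the standard dot-product formula; a sketch of it together with a deviation-based evaluation (the 10-degree tolerance and the two-level grading are assumptions for illustration, not the patent's criteria):

```python
import math

def vector_angle(a, b):
    """Included angle (degrees) between bone-segment vectors a and b: arccos(a.b / (|a||b|))."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

def evaluate(angle, standard_angle, tolerance=10.0):
    """Grade the deviation from the standard included angle (threshold is an assumption)."""
    return "standard" if abs(angle - standard_angle) <= tolerance else "needs correction"

theta = vector_angle((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(theta)                  # 90.0
print(evaluate(theta, 95.0))  # standard (within the assumed 10-degree tolerance)
```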
7. The behavior action monitoring and correcting method according to claim 1, characterized in that, after outputting the action evaluation result, the method further comprises the following steps:
and outputting action standard degree prompt information.
8. A behavior action monitoring and correcting system, characterized in that it is used for implementing the behavior action monitoring and correcting method according to any one of claims 1 to 7; the behavior action monitoring and correcting system comprises an action video data acquisition module, a human body key point information acquisition module, an action information calculation module, an action type calculation module and an action evaluation module which are sequentially connected in communication, wherein:
the action video data acquisition module is used for acquiring action video data;
the human body key point information acquisition module is used for acquiring all human body key point information in the current action video data according to the current action video data;
the action information calculation module is used for obtaining action information corresponding to the current action video data according to all human body key point information in the current action video data;
the action type calculation module is used for obtaining the action type corresponding to the current action information according to the current action information and the pre-stored skeleton database; wherein the skeleton database comprises action data of a plurality of action types, and the action data of each action type comprises all standard joint angle data of the standard action of the current action type;
and the action evaluation module is used for carrying out action evaluation on the current action information according to the action data of the action type corresponding to the current action information and outputting an action evaluation result.
9. An electronic device, characterized by comprising:
a memory for storing computer program instructions; and
a processor for executing the computer program instructions to perform the operations of the behavior action monitoring and correcting method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing computer-readable computer program instructions, characterized in that the computer program instructions are configured to, when executed, perform the operations of the behavior action monitoring and correcting method according to any one of claims 1 to 7.
CN202210186769.1A 2022-02-28 2022-02-28 Behavior action monitoring and correcting method, behavior action monitoring and correcting system, electronic equipment and medium Active CN114373531B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210186769.1A CN114373531B (en) 2022-02-28 2022-02-28 Behavior action monitoring and correcting method, behavior action monitoring and correcting system, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN114373531A true CN114373531A (en) 2022-04-19
CN114373531B CN114373531B (en) 2022-10-25

Family

ID=81145558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210186769.1A Active CN114373531B (en) 2022-02-28 2022-02-28 Behavior action monitoring and correcting method, behavior action monitoring and correcting system, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN114373531B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111144217A (en) * 2019-11-28 2020-05-12 重庆邮电大学 Motion evaluation method based on human body three-dimensional joint point detection
CN113486771A (en) * 2021-06-30 2021-10-08 福州大学 Video motion uniformity evaluation method and system based on key point detection
CN113947811A (en) * 2021-10-15 2022-01-18 湘潭大学 Taijiquan action correction method and system based on generation of confrontation network
CN113947809A (en) * 2021-09-18 2022-01-18 杭州电子科技大学 Dance action visual analysis system based on standard video
CN113989929A (en) * 2021-10-28 2022-01-28 中国电信股份有限公司 Human body action recognition method and device, electronic equipment and computer readable medium
CN114092971A (en) * 2021-11-26 2022-02-25 重庆大学 Human body action evaluation method based on visual image
CN114092862A (en) * 2021-11-26 2022-02-25 重庆大学 Action evaluation method based on optimal frame selection
CN114093032A (en) * 2021-11-26 2022-02-25 重庆大学 Human body action evaluation method based on action state information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Liutao et al., "Human action recognition method based on key-frame contour feature extraction", Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition) *

Also Published As

Publication number Publication date
CN114373531B (en) 2022-10-25

Similar Documents

Publication Publication Date Title
US11468612B2 (en) Controlling display of a model based on captured images and determined information
EP3885967A1 (en) Object key point positioning method and apparatus, image processing method and apparatus, and storage medium
EP3828765A1 (en) Human body detection method and apparatus, computer device, and storage medium
CN108205655B (en) Key point prediction method and device, electronic equipment and storage medium
CN110349081B (en) Image generation method and device, storage medium and electronic equipment
US10489956B2 (en) Robust attribute transfer for character animation
CN109063584B (en) Facial feature point positioning method, device, equipment and medium based on cascade regression
WO2022032823A1 (en) Image segmentation method, apparatus and device, and storage medium
CN112597933B (en) Action scoring method, device and readable storage medium
EP3617934A1 (en) Image recognition method and device, electronic apparatus, and readable storage medium
EP4053736B1 (en) System and method for matching a test frame sequence with a reference frame sequence
CN107481280A (en) The antidote and computing device of a kind of skeleton point
WO2021217937A1 (en) Posture recognition model training method and device, and posture recognition method and device
CN114782497B (en) Motion function analysis method and electronic device
CN114742925A (en) Covering method and device for virtual object, electronic equipment and storage medium
CN116580211B (en) Key point detection method, device, computer equipment and storage medium
CN114373531B (en) Behavior action monitoring and correcting method, behavior action monitoring and correcting system, electronic equipment and medium
WO2020147797A1 (en) Image processing method and apparatus, image device, and storage medium
CN115346074B (en) Training method, image processing device, electronic equipment and storage medium
CN113345069A (en) Modeling method, device and system of three-dimensional human body model and storage medium
CN116403285A (en) Action recognition method, device, electronic equipment and storage medium
CN111062279A (en) Picture processing method and picture processing device
CN113724176A (en) Multi-camera motion capture seamless connection method, device, terminal and medium
Asokan et al. IoT based Pose detection of patients in Rehabilitation Centre by PoseNet Estimation Control
CN115830640B (en) Human body posture recognition and model training method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant