CN112151194A - Fitness training monitoring system and method, storage medium and electronic equipment - Google Patents

Fitness training monitoring system and method, storage medium and electronic equipment

Info

Publication number
CN112151194A
Authority
CN
China
Prior art keywords
data
user
fitness training
monitoring
platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011024552.8A
Other languages
Chinese (zh)
Other versions
CN112151194B (en)
Inventor
Liu Yan (刘岩)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taikang Insurance Group Co Ltd
Original Assignee
Taikang Insurance Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taikang Insurance Group Co Ltd filed Critical Taikang Insurance Group Co Ltd
Priority to CN202011024552.8A priority Critical patent/CN112151194B/en
Publication of CN112151194A publication Critical patent/CN112151194A/en
Application granted granted Critical
Publication of CN112151194B publication Critical patent/CN112151194B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H80/00: ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00: Speaker identification or verification techniques
    • G10L17/22: Interactive procedures; Man-machine interfaces
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30: ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/08: Network architectures or network communication protocols for network security for authentication of entities
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/02: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L67/025: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP], for remote control or remote monitoring of applications
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/12: Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Public Health (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Computing Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Alarm Systems (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Acoustics & Sound (AREA)
  • Life Sciences & Earth Sciences (AREA)

Abstract

The disclosure provides a fitness training monitoring system and method, a storage medium and an electronic device, and relates to the field of computer technology. The fitness training monitoring system comprises: a front-end processing subsystem comprising: a video data processing module, used for collecting and processing video data of the fitness training process to obtain an evaluation result; an interactive data processing module, used for acquiring interactive data during the fitness training and calling the corresponding platform of a server through the central control subsystem to analyze the interactive data; and a display module, used for displaying the evaluation result and enabling interaction between the user and the system; the central control subsystem, used for calling the corresponding platform of the server according to the call request of the front-end processing subsystem to analyze and process the interactive data; and the server, which comprises a voice analysis platform, an identity authentication platform and a question-answer dialogue platform. The present disclosure can analyze the fitness training effect in real time based on video analysis techniques.

Description

Fitness training monitoring system and method, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a fitness training monitoring system and method, an electronic device, and a computer-readable storage medium.
Background
Fitness training plays an important role in the rehabilitation of a user's limbs, and standardized rehabilitation actions can help the user better recover limb function.
Generally, a rehabilitation doctor or fitness coach is required to track and guide a user's rehabilitation training one-to-one; if the user chooses to perform rehabilitation on his or her own, neither standardized actions nor the treatment effect can be guaranteed. As a result, the labor and time costs of rehabilitation training are high, which is not conducive to the wide application of this treatment. Meanwhile, because effective means of recording and evaluation are currently lacking in the rehabilitation process, the rehabilitation effect cannot be well assessed.
Therefore, it is desirable to provide a fitness training monitoring system, which can analyze the fitness training effect in real time and prompt the user to correct the irregular actions in time.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the embodiments of the present disclosure is to provide a fitness training monitoring system and method, an electronic device, and a computer-readable storage medium, where the system detects and tracks key points of a human body based on a video analysis technology, so as to analyze a fitness training effect in real time and prompt a user to correct an irregular action in time.
According to a first aspect of the present disclosure, there is provided a fitness training monitoring system, comprising:
the front-end processing subsystem comprises a video data processing module, an interactive data processing module and a display module, wherein:
the video data processing module is used for acquiring video data in a fitness training process in real time and analyzing the video data to obtain an evaluation result of the fitness training;
the interactive data processing module is used for acquiring interactive data in a fitness training process and calling platform services of a server side through the central control subsystem to analyze the interactive data;
the display module is used for displaying the evaluation result to the user of the fitness training and interacting with the user;
the central control subsystem is used for calling a corresponding platform of a server side according to the calling request of the front-end processing subsystem to analyze and process the interactive data;
the server comprises a voice analysis platform, an identity authentication platform and a question and answer dialogue platform, wherein the voice analysis platform is used for voice recognition and analysis; the identity authentication platform is used for carrying out identity authentication on a user; the question-answer dialogue platform is used for question-answer interaction.
In an exemplary embodiment of the present disclosure, the video data processing module includes a data acquisition sub-module and a data processing sub-module, wherein:
the data acquisition submodule acquires the video data in real time through a camera device;
the data processing submodule comprises a monitoring unit and a data analysis unit, wherein:
the monitoring unit is used for detecting a human body target from the video data, carrying out image segmentation on the human body target, monitoring predefined key points based on the result of the image segmentation and calculating to obtain monitoring data;
the data analysis unit is used for performing characteristic processing on the monitoring data to obtain characteristic data, and classifying the characteristic data based on a machine learning algorithm to obtain the evaluation result.
In an exemplary embodiment of the present disclosure, the image pickup apparatus includes a binocular camera and an adjustment lever, wherein:
the binocular camera is used for collecting the video data, and the adjusting rod is used for adjusting the position and the angle of the binocular camera, so that the binocular camera collects the video data.
In an exemplary embodiment of the present disclosure, the interactive data processing module includes a data acquisition sub-module and a data processing sub-module, wherein:
the data acquisition submodule is used for acquiring interactive data in a fitness training process through voice interactive equipment;
and the data processing submodule is used for analyzing the interactive data by calling a platform of a server side through the central control subsystem.
In an exemplary embodiment of the present disclosure, the interaction data includes voice interaction data, interface presentation data, and tactile interaction data; the display module comprises a result prompting unit and an interaction unit, wherein:
the result prompting unit is used for displaying the evaluation result in a system interface through the interface display data and playing the evaluation result through the voice interaction data based on the analysis result of the voice analysis platform;
the interaction unit is used for receiving the touch interaction data, starting the fitness training monitoring system based on the touch interaction data and carrying out interaction so as to monitor the fitness training;
the interaction unit is further used for obtaining processing results of the voice interaction data by the voice analysis platform and the question-answer dialogue platform so as to carry out interaction between the user and the fitness training monitoring system.
In an exemplary embodiment of the present disclosure, the front-end processing subsystem further includes an identity authentication module, and the identity authentication module is configured to receive an identity authentication request of the user, and invoke the identity authentication platform through the central control subsystem to perform identity authentication on the user.
According to a second aspect of the present disclosure, there is provided a fitness training monitoring method, including:
acquiring video data of user fitness training in real time through camera equipment, detecting a human body target in the video data, and carrying out image segmentation on the human body target;
detecting predefined key nodes in the human body target based on the image segmentation result, and monitoring and calculating the key nodes to obtain monitoring data;
and performing characterization processing on the monitoring data, inputting a pre-established fitness training monitoring model, obtaining an evaluation result and sending the evaluation result to the user.
In an exemplary embodiment of the present disclosure, the obtaining of monitoring data by monitoring and calculating the key node includes:
acquiring coordinate data of the key node at a first moment and a second moment and central coordinate data of the human body target;
and calculating the monitoring data based on the coordinate data.
In an exemplary embodiment of the present disclosure, the method further comprises:
and acquiring interactive data in the fitness training, and calling a corresponding platform of a server side through a central control subsystem to process the interactive data so as to complete the interaction between the user and the system.
In an exemplary embodiment of the present disclosure, the interaction data includes voice interaction data, and the obtaining and sending the evaluation result to the user further includes:
and obtaining the action standardization of the user based on the evaluation result, and calling a question-answer dialogue platform of the server side through the central control subsystem when the action of the user is not standardized, and sending voice prompt and guidance suggestion to the user.
In an exemplary embodiment of the present disclosure, the collecting interactive data in the fitness training, and calling a corresponding platform of a server through a central control subsystem to process the interactive data to complete the interaction between the user and the system includes:
and voice questions of the user in the fitness training are collected, a voice analysis platform of the server side and the question-answer dialogue platform are called through the central control subsystem, and the voice questions are answered through voice interaction.
In an exemplary embodiment of the present disclosure, before the acquiring, by the image capturing apparatus, video data of user fitness training in real time, the method further includes:
carrying out face recognition on the user, and adjusting the camera equipment to acquire the video data after the face recognition is passed;
and starting system monitoring in response to a start operation performed by the user on the system, wherein the start operation is a voice-based start or a touch-based start.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the fitness training monitoring method described above.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the above-described fitness training monitoring method via execution of the executable instructions.
Exemplary embodiments of the present disclosure may have some or all of the following benefits:
An exemplary embodiment of the present disclosure provides a fitness training monitoring system comprising a front-end processing subsystem, a central control subsystem, and a server. The front-end processing subsystem comprises a video data processing module, an interactive data processing module, and a display module. The video data processing module is used for acquiring video data of the fitness training process in real time and analyzing the video data to obtain an evaluation result of the fitness training; the interactive data processing module is used for acquiring interactive data in the fitness training process and calling the platform services of the server through the central control subsystem to analyze the interactive data; the display module is used for displaying the evaluation result to the user and interacting with the user; the central control subsystem is used for calling the corresponding platform of the server, according to the call request of the front-end processing subsystem, to analyze and process the interactive data; and the server comprises a voice analysis platform, an identity authentication platform, and a question-answer dialogue platform. The voice analysis platform is used for voice recognition and analysis; the identity authentication platform is used for authenticating the user's identity; the question-answer dialogue platform is used for question-answer interaction. On one hand, in the system provided by this exemplary embodiment, the front-end processing subsystem can acquire video data of a user's fitness training process in real time and obtain a corresponding evaluation result by analyzing the acquired video data; the standardization of the user's fitness training actions can be assessed based on the evaluation result and the user can be reminded, so one-to-one tracking guidance is not needed, which reduces the workload of a rehabilitation doctor or fitness coach and saves labor cost. On the other hand, since the processing of the video data is performed in the local front-end processing subsystem, the monitoring of fitness training can be realized in the offline state and the speed of data processing can be improved. On yet another hand, because the video data are collected in real time, the system can guide the user's fitness training process in real time, improving both the training effect and the user experience. Meanwhile, the monitoring of fitness training by the system provided in this embodiment is data-driven, which makes the management, evaluation, and improvement of the system more convenient.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
FIG. 1 illustrates a system architecture diagram of a fitness training monitoring system to which embodiments of the present disclosure may be applied;
FIG. 2 shows a schematic diagram of predefined human key nodes to which embodiments of the present disclosure may be applied;
FIG. 3 shows a schematic diagram of a coordinate representation of predefined human key nodes to which embodiments of the present disclosure may be applied;
FIG. 4 is a schematic diagram illustrating coordinate data of a key node at a first time and a second time to which embodiments of the present disclosure may be applied;
FIG. 5 schematically shows a schematic diagram of a left wrist node time series motion monitoring curve according to one embodiment of the present disclosure;
FIG. 6 schematically shows a schematic of a frequency domain monitoring curve corresponding to a left wrist node time series motion monitoring curve according to one embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a system architecture of a fitness training monitoring system for a specific application scenario to which embodiments of the present disclosure may be applied;
FIG. 8 schematically illustrates a schematic diagram of a flow of a fitness training monitoring method according to one embodiment of the present disclosure;
FIG. 9 is a schematic diagram illustrating a flow of a fitness training monitoring method for a specific application scenario to which an embodiment of the present disclosure may be applied;
FIG. 10 illustrates a schematic structural diagram of a computer system suitable for use in implementing an electronic device of an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fitness training plays an important role in the rehabilitation of a user's limbs, and standardized rehabilitation actions can help the user better recover limb function.
However, the existing fitness training needs one-to-one tracking guidance of a rehabilitation doctor or a fitness coach, so that the cost of manpower and time required by fitness training is high. Meanwhile, due to the lack of effective recording and evaluation means, the rehabilitation effect cannot be well evaluated.
In order to solve the above problem, the present exemplary embodiment first provides a fitness training monitoring system for monitoring a user's fitness training process, so that the user can complete the fitness rehabilitation process more regularly without the guidance of a rehabilitation doctor or a fitness coach. Referring to fig. 1, which shows an architecture diagram of the fitness training monitoring system provided by the present exemplary embodiment, the system architecture 100 may include a front-end processing subsystem 110, a central control subsystem 120, and a server 130. Wherein:
the front-end processing subsystem 110 may include a video data processing module 111, an interactive data processing module 112, and a presentation module 113. Wherein:
the video data processing module 111 may be configured to collect video data in a fitness training process in real time, and analyze the video data to obtain an evaluation result of the fitness training;
the interactive data processing module 112 may be configured to collect interactive data in a fitness training process, and invoke a platform service of a server to analyze the interactive data through the central control subsystem;
the display module 113 may be configured to display the evaluation result to a user of fitness training, and interact with the user;
the central control subsystem 120 may be configured to invoke a platform corresponding to the server to perform analysis processing on the interactive data according to a call request of the front-end processing subsystem;
the server 130 may include a voice analysis platform 131, an identity authentication platform 132, and a question and answer dialog platform 133. Wherein:
the speech analysis platform 131 can be used for speech recognition and analysis;
the authentication platform 132 may be used to authenticate a user;
question-answer dialog platform 133 may be used for question-answer interaction.
In the fitness training monitoring system provided by the exemplary embodiment of the present disclosure, on one hand, the front-end processing subsystem may collect video data of a user's fitness training process in real time, obtain a corresponding evaluation result by analyzing the collected video data, assess the standardization of the user's fitness training actions based on the evaluation result, and remind the user, all without one-to-one tracking guidance, thereby reducing the workload of a rehabilitation doctor or fitness coach and saving labor cost. On the other hand, since the processing of the video data is performed in the local front-end processing subsystem, the monitoring of fitness training can be realized in the offline state and the speed of data processing can be improved. On yet another hand, because the video data are collected in real time, the system can guide the user's fitness training process in real time, improving both the training effect and the user experience. Meanwhile, the monitoring of fitness training by the system provided in this embodiment is data-driven, which makes the management, evaluation, and improvement of the system more convenient.
The following further describes the details of the above fitness training monitoring system:
in this exemplary embodiment, the fitness training monitoring system may perform identity authentication on the user through an identity authentication platform of the server, and after the authentication is passed, may acquire and process video data of fitness training of the user through a video data processing module of the front-end processing subsystem, so as to obtain an evaluation result of the fitness training. Meanwhile, interactive data in the training process can be collected, and the collected interactive data are analyzed by calling a platform in the server through the central control subsystem, so that interaction between a user and the system is realized.
The fitness training monitoring system provided in this exemplary embodiment may further include a training assisting device, in addition to the front-end processing subsystem, the central control subsystem and the server, for assisting a user in performing fitness training, for example, the type of the training assisting device may be an upright type, a squat type or a lying type, and this exemplary embodiment is not particularly limited thereto.
In this exemplary embodiment, the video data processing module includes a data acquisition sub-module and a data processing sub-module, where:
the data acquisition submodule can acquire video data in real time through the camera equipment. For example, the image capturing device may include a binocular camera and an adjusting rod: the binocular camera may be used for collecting the video data, and the adjusting rod is used for adjusting the position and angle of the camera, so as to ensure that the binocular camera can capture usable video data.
For example, when the training assisting device is of the upright, squat, or lying type, the corresponding fitness training is standing, squatting, or lying training, respectively, and the emphasis and range of motion of the exercised limbs differ for each training user. Therefore, before the binocular camera collects video data in real time, the position and angle of the camera need to be confirmed, and during training the shooting height and angle need to be adjusted according to the content of the fitness training, so that the camera directly faces the user and its field of view covers the user's primary exercised limbs. It should be noted that the above scenario is only an exemplary illustration, and other image capturing apparatuses and methods for confirming the shooting range also belong to the protection scope of the present exemplary embodiment.
The data processing sub-module can analyze and process the acquired video data based on a video analysis technology to obtain an evaluation result and realize interaction with a user.
Specifically, the data processing module may include a monitoring unit and a data analysis unit. Wherein:
the monitoring unit is used for monitoring a human body target from video data acquired by the camera equipment in real time, carrying out image segmentation on the monitored human body target, monitoring predefined human body key points based on the image segmentation result, and calculating to obtain monitoring data.
Specifically, the monitoring unit may obtain the monitoring data by performing the following processes: detecting the video data based on a deep learning algorithm, detecting a human body target region in the video data, and identifying and segmenting the region; detecting human body limbs based on the segmented images, and further detecting key nodes based on the results of the limb detection; and tracking and calculating the detected key nodes to obtain the monitoring data.
The detection of the human body target region through the deep learning algorithm may be implemented as follows: an image recognition model is trained on historical video data and used to recognize the human body region in each frame of the video; the video data acquired in real time by the camera equipment are then input into the model to obtain the corresponding human body target region. After the human body target region is obtained, the region can be marked with a rectangular bounding box, and image segmentation is performed based on the marked region. The human form in the target human body region may be of different types, such as standing, lying, or squatting. It should be noted that the above scenario is only an exemplary illustration, and other methods for detecting a human target area and identifying the area also belong to the protection scope of the present exemplary embodiment.
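For illustration only (not part of the disclosure), the following Python sketch shows how a front end might wrap such a detector: frames are read in real time, the returned human target region is marked with a rectangle, and the marked region is cropped out for subsequent key-node detection. The callable detect_person_bbox is a hypothetical stand-in for the image recognition model trained on historical video data.

```python
import cv2  # OpenCV, used here for real-time video capture and cropping


def human_regions(detect_person_bbox, video_source=0):
    """Yield the segmented human-target region of each frame for key-node detection.

    `detect_person_bbox` is an assumed stand-in for the trained image recognition
    model; it should return an (x, y, w, h) rectangle, or None when no human
    target is present in the frame.
    """
    cap = cv2.VideoCapture(video_source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        bbox = detect_person_bbox(frame)
        if bbox is None:
            continue  # no human target in this frame
        x, y, w, h = bbox
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # mark the region
        yield frame[y:y + h, x:x + w]  # image segmentation based on the marked region
    cap.release()
```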
After the target human body area is identified and the image segmentation is carried out, human body limb detection can be carried out based on the segmented image, and predefined key nodes in the limbs are obtained. The key nodes are main nodes involved in human body movement, and may include elbow joint nodes, knee joint nodes, head nodes and the like. The specific number and location of the key nodes may be predefined according to the actual situation. As shown in fig. 2, 17 key nodes are predefined, and the monitoring data can be calculated by tracking the key nodes. Wherein, the calculation may be performed based on coordinates, and the coordinate representation of the key node may be as shown in fig. 3. It should be noted that the above scenario is only an exemplary illustration, and the number and the location of the above key nodes may also be other situations, which also belongs to the protection scope of the present exemplary embodiment.
After the key nodes are obtained, the monitoring unit can also track the key nodes and measure their data to finally obtain the monitoring data. Specifically, the process may be as follows: acquiring the coordinate data of each key node at a first moment and a second moment, as well as the central coordinate data of the human body target; and calculating the monitoring data based on the coordinate data.
In the following, taking the node P as a key node as an example, the data measurement and calculation process is described in more detail:
the coordinate of the node P at the first time t1 is assumed to be S (S ═ S)x,Sy,Sz) At the second time t2, the coordinate is W ═ W (W)x,Wy,Wz) The detected center coordinate of the human body is E ═ E (E)x,Ey,Ez)。
The vector of the node P is expressed by using the center of the human body as the origin of coordinates and the time t1
Figure BDA0002701751740000101
the vector for node P at time t2 is represented as
Figure BDA0002701751740000102
As shown in fig. 4, then
Figure BDA0002701751740000103
And
Figure BDA0002701751740000104
the vector of (d) is represented as:
Figure BDA0002701751740000105
Figure BDA0002701751740000106
then, based on the obtained coordinate data, the motion angle θ of the node P may be calculated as:
Figure BDA0002701751740000107
the angular velocity ω of motion of the node P is:
ω=θ/|t2-t1|
after the movement angle and the movement angular velocity of the node P are obtained through calculation, the movement angular velocity, the movement angle, and the coordinate data are obtained through calculation, and the movement velocity V of the node P in the x direction from the first time to the second time can be further calculatedxDistance of movement Dx(ii) a Speed of movement V in the y directionyDistance of movement Dy(ii) a And a speed of movement V in the z directionzDistance of movement Dz. Taking the x direction as an example, the specific calculation process can be as follows, and the y direction and z direction parameter calculation processes are the same as those in the x direction:
distance D of movement in x directionxCalculating the formula:
Figure BDA0002701751740000111
speed of movement V in x-directionxCalculating the formula:
Figure BDA0002701751740000112
in summary, the monitoring data obtained by the node P at the time t2 includes:
F={θ,ω,Dx,Vx,Dy,Vy,Dz,Vz}
when the predetermined key nodes are as shown in fig. 2, then 17 nodes can obtain more than 17 sets of the above-mentioned monitoring data. It should be noted that the above scenario is only an exemplary illustration, and the scope of protection of the exemplary embodiment is not limited thereto.
After the monitoring data is obtained by the monitoring unit, the data analysis unit may be configured to perform characterization processing on the obtained monitoring data to obtain feature data, and obtain an evaluation result of the fitness training of the user through a machine learning algorithm.
Specifically, the above-described characterization process may include, for example, filtering and Fourier transform. The filtering can remove noise and high-frequency interference caused by human body jitter. In addition, to avoid losing data when some nodes are briefly occluded during the fitness training motion, which would affect the detection effect, data filling may be performed through Kalman filtering in this exemplary embodiment. It should be noted that the above scenario is only an exemplary illustration, and other filtering methods also belong to the protection scope of the present exemplary embodiment.
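As a hedged illustration of the gap-filling step, the sketch below applies a simple one-dimensional constant-velocity Kalman filter to one coordinate track of a key node. The constant-velocity model and the noise parameters are assumptions; the disclosure only states that Kalman filtering is used for data filling.

```python
import numpy as np


def kalman_fill(series, dt=0.04, q=1e-3, r=1e-2):
    """Fill None gaps in one coordinate track of a key node.

    A minimal constant-velocity Kalman filter: q and r are illustrative
    process/measurement noise values, not values from the disclosure.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: position, velocity
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q, R = q * np.eye(2), np.array([[r]])
    x = np.array([[series[0] if series[0] is not None else 0.0], [0.0]])
    P = np.eye(2)
    filled = []
    for z in series:
        x, P = F @ x, F @ P @ F.T + Q       # predict
        if z is not None:                   # update only when the node is visible
            y = np.array([[z]]) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        filled.append(float(x[0, 0]))       # prediction fills the occluded frames
    return filled


# e.g. a left-wrist x track with two briefly occluded frames
print(kalman_fill([0.32, 0.33, None, None, 0.37, 0.38]))
```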
The Fourier transform may be a fast Fourier transform. During fitness training, while the human body is in continuous motion, the data of each key node forms an irregular curve. Generally, the fitness training process is mainly cyclic and reciprocating; therefore, to avoid the difficulties of traditional methods in action labeling and threshold decisions and their poor generality, this exemplary embodiment can use the Fourier transform to move the motion feature data into the frequency-domain space. Because the actions are periodic, the motion data change gently and periodically in the frequency domain. Taking the left wrist node as an example, fig. 5 shows the left wrist node time-series motion monitoring curve and fig. 6 shows the corresponding frequency-domain monitoring curve. The amplitude and phase of each element of a key node's original data vector are obtained from the Fourier spectrum, so the original 8-dimensional data feature becomes a 16-dimensional frequency-domain data feature after the Fourier transform. It should be noted that the above scenario is only an exemplary illustration, and the scope of protection of the exemplary embodiment is not limited thereto.
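A minimal sketch of this characterization might look as follows. Keeping the amplitude and phase of the dominant non-DC frequency bin per channel is an assumption; the disclosure only states that amplitude and phase turn the 8-dimensional monitoring feature into a 16-dimensional frequency-domain feature.

```python
import numpy as np


def frequency_domain_features(window):
    """window: array of shape (T, 8), one node's monitoring data
    F = (theta, omega, Dx, Vx, Dy, Vy, Dz, Vz) over T frames.

    Returns a 16-dimensional vector: (amplitude, phase) per channel at the
    dominant non-DC frequency bin (bin choice is an assumption).
    """
    window = np.asarray(window, dtype=float)
    feats = []
    for ch in range(window.shape[1]):
        spectrum = np.fft.rfft(window[:, ch])
        k = 1 + int(np.argmax(np.abs(spectrum[1:])))   # dominant non-DC bin
        feats.extend([np.abs(spectrum[k]), np.angle(spectrum[k])])
    return np.array(feats)   # 8 channels x (amplitude, phase) = 16 dimensions
```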
After the monitoring data is subjected to characterization processing in the process to obtain corresponding characteristic data, an evaluation result can be obtained through a machine learning algorithm based on the characteristic data. Specifically, the process may be as follows: obtaining a classification model based on machine learning training through the frequency domain feature vectors of the historical data; and inputting the frequency domain feature vector obtained by the monitoring data through the characterization processing process into the model to obtain an evaluation result.
For example, the classification model may be obtained by training a 4-class model on the frequency-domain feature vectors of the historical data based on a convolutional neural network. The classification model grades the monitored training into 4 levels as the evaluation result: standard, general, non-standard, and dangerous action. The recognition accuracy of the convolutional neural network is about 93%, roughly 6 percentage points higher than that of traditional machine learning methods such as SVM. It should be noted that the above scenario is only an exemplary illustration, and other methods for generating the above classification model also belong to the protection scope of the present exemplary embodiment.
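For illustration, a small one-dimensional convolutional classifier over the per-node frequency-domain features could be sketched as below. The layer sizes and the 17×16 input shape are assumptions; only the four output grades come from the description above.

```python
import torch
import torch.nn as nn


class ActionQualityNet(nn.Module):
    """Sketch of a 4-class classifier over 17 nodes x 16 frequency-domain features.

    Maps to the grades: standard, general, non-standard, dangerous action.
    Layer sizes are illustrative, not taken from the disclosure.
    """

    def __init__(self, num_nodes=17, feat_dim=16, num_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(num_nodes, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):            # x: (batch, 17, 16)
        return self.fc(self.conv(x).squeeze(-1))


model = ActionQualityNet()
logits = model(torch.randn(2, 17, 16))   # two training windows -> 4-class logits
```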
In this exemplary embodiment, the interactive data processing module includes a data acquisition sub-module and a data processing sub-module, where:
the data acquisition submodule can acquire interactive data in a fitness training process through voice interactive equipment. For example, the voice interaction device may include a speaker and a microphone array device, wherein the microphone array may be used for voice collection, and the speaker may be used for playing voice. It should be noted that the above scenario is only an exemplary illustration, and other voice interaction devices and scenarios applied to the voice interaction devices also belong to the protection scope of the present exemplary embodiment.
The data processing sub-module can call a platform of the server side through the central control subsystem to analyze and process the interactive data so as to realize the interaction between the user and the system. By analyzing the interactive data, the corresponding system function can be realized. For example, the user may control the system by voice, the system may send the evaluation result to the user by voice, and so on.
Specifically, the interaction data may be voice interaction data. For example, the voice interaction data may be voice question-answer interaction data: the data acquisition sub-module may collect the user's question about a certain fitness training action through the microphone array device and transmit the question data to the data processing sub-module; the data processing sub-module calls the voice analysis platform and the question-answer dialogue platform of the server through the central control subsystem, performs voice recognition and analysis on the user's question data to obtain the user's intention, generates an answer corresponding to the question, and plays the answer to the user through the speaker. The interaction data may also be tactile interaction data, by which the system can be controlled. It should be noted that the above scenario is only an exemplary illustration, and other voice interaction data also belongs to the protection scope of the present exemplary embodiment.
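A hedged sketch of this question-answer flow is shown below; the central-control endpoint URLs and payload fields are assumptions, not interfaces defined by the disclosure.

```python
import requests

CENTRAL_CONTROL = "http://central-control.example/api"   # hypothetical endpoint


def answer_voice_question(audio_bytes: bytes) -> bytes:
    """Forward a recorded question through the central control subsystem.

    The speech analysis platform transcribes it, the question-answer dialogue
    platform produces a reply, and the synthesized answer audio is returned
    for playback on the speaker.  All routes and fields are assumed.
    """
    asr = requests.post(f"{CENTRAL_CONTROL}/speech/recognize",
                        files={"audio": audio_bytes}, timeout=5).json()
    reply = requests.post(f"{CENTRAL_CONTROL}/dialogue/answer",
                          json={"question": asr["text"]}, timeout=5).json()
    tts = requests.post(f"{CENTRAL_CONTROL}/speech/synthesize",
                        json={"text": reply["answer"]}, timeout=5)
    return tts.content   # raw audio for the speaker to play
```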
In this exemplary embodiment, the display module may include a result prompting unit and an interaction unit, and is configured to display the evaluation result to a user of fitness training and interact with the user. The interactive data may include voice interactive data, interface display data, tactile interactive data, and the like.
The result prompting unit can be used for displaying the evaluation result in the system interface through interface display data, and playing the evaluation result through voice interaction data based on the analysis result of the voice analysis platform. For example, after the video data is acquired and processed by the video data processing module to obtain the evaluation result of fitness training, the evaluation result can be displayed on the system interface in the form of interface display data for the user to refer to and adjust the training action and mode of the user in time. The interface display data can also comprise a real-time evaluation report generated based on the classification result of the classification model and the process monitoring data vector. The system interface may be a display or other display devices having a display function, and the present exemplary embodiment is not particularly limited thereto.
In the process, when the result prompting unit displays the evaluation result on the system interface in the form of interface display data, the evaluation result can be sent to the user in the form of voice. In addition, when the evaluation result obtained by the video data processing module indicates that the user action is not standard, voice prompt and action guidance can be performed on the user through the module.
The interaction unit is used for receiving touch interaction data, starting the fitness training monitoring system based on the touch interaction data and carrying out interaction so as to monitor fitness training. For example, the user may trigger the monitoring function by clicking a corresponding control in the system display device, and may control the activation of the system by a non-contact gesture.
The interaction unit can also be used for acquiring the processing results of the voice interaction data from the voice analysis platform and the question-answer dialogue platform, so as to carry out interaction between the user and the fitness training monitoring system. For example, the interaction unit may obtain the user intention derived by the interactive data processing module through the voice analysis platform and the question-answer dialogue platform called via the central control subsystem, generate an answer corresponding to the question, and play the answer to the user through the speaker.
It should be noted that the above-mentioned scenarios are only exemplary, and other processes of performing evaluation result notification and user interaction with the system through the above-mentioned display module also belong to the protection scope of the present exemplary embodiment.
In this exemplary embodiment, the front-end processing subsystem may further include an identity authentication module. The identity authentication module is used for receiving an identity authentication request of a user and calling an identity authentication platform of a server side through the central control subsystem to authenticate the identity of the user. The identity authentication process may be performed by face recognition, for example. Specifically, the process may be as follows: and responding to the operation of starting the system, sending an identity authentication request to the central control system, and calling an identity authentication platform of the server side to perform face recognition on the user after the central control system receives the identity authentication request.
When the user is a new user of the system, the user's identity needs to be registered first. Taking face recognition authentication as an example, the registration process may be: the user is prompted to face the camera equipment, and a face photo of the user is collected for identity registration.
In addition, after identity registration or authentication, the fitness training monitoring system provided by the present exemplary embodiment further requires the user to authorize video acquisition of the fitness training, and the subsequent monitoring start unit is entered after the authorization passes. Meanwhile, the system can also establish a user data information base and record the user's information, so as to realize functions such as automatic archiving and statistical data analysis. It should be noted that the above scenario is only an exemplary illustration, and other software and hardware methods for implementing identity authentication also belong to the protection scope of the present exemplary embodiment.
After the user's identity authentication is passed and video data acquisition is authorized, the system's monitoring function for the fitness training process is started. Specifically, this can be realized through a start operation performed by the user on the system. For example, the user may start the monitoring function by voice: after the voice interaction device captures the user's voice instruction, it calls the voice analysis platform of the server through the central control subsystem to obtain the user's intention and starts the monitoring function. Furthermore, the start operation may also be achieved by a direct or indirect touch-screen operation. It should be noted that the above scenario is only an exemplary illustration, and other software and hardware methods for starting the monitoring function also belong to the protection scope of the present exemplary embodiment.
In this exemplary embodiment, when the front-end processing subsystem performs the above processing, a platform of a server needs to be called to obtain a corresponding service, so as to complete data analysis and processing. The server can comprise a voice analysis platform, an identity authentication platform and a question and answer dialogue platform. Wherein:
the identity authentication platform is used for providing identity authentication and registration service of the user. For example, the identity authentication platform can provide a face recognition function, can recognize a face region in an acquired image through the face recognition function, can complete identity authentication, archiving and other operations through comparison with a user in a user information database in the system, and can provide an identity registration function for the user when user information does not exist. It should be noted that the above scenario is only an exemplary illustration, and the scope of protection of the exemplary embodiment is not limited thereto.
The voice analysis platform is used for voice recognition and analysis. Specifically, the voice analysis platform may recognize the voice interaction data collected by the voice interaction device. For example, when the voice interaction device captures the user saying "start monitoring" to start the monitoring unit, the voice analysis platform can recognize and analyze the voice interaction data to determine that the user's intention is to start the monitoring unit, so that the system executes the action. It should be noted that the above scenario is only an exemplary illustration, and the scope of protection of the exemplary embodiment is not limited thereto.
The question-answer dialogue platform is used for question-answer interaction and is used for providing corresponding answers based on data input by a user, specifically, the interaction process may be performed based on presentation data in a display or based on voice interaction data acquired by voice interaction equipment, and this is not particularly limited in this example embodiment.
In the fitness training monitoring system provided in this exemplary embodiment, the central control subsystem is configured to receive a call request from the front-end processing subsystem, call each platform of the server, so that the front-end subsystem performs analysis processing on the video data and the interactive data, and complete monitoring on fitness training. The specific calling details are already described in detail in the corresponding modules and platforms of the front-end processing subsystem and the server, and are not described herein again.
In this embodiment, the fitness training monitoring system may further include an operation and maintenance management subsystem, and the operation and maintenance management subsystem is connected to the central control subsystem, and may be used to implement functions such as authorization management, log management, user profile management, and statistical monitoring.
Specifically, when the front-end processing subsystem sends a call request to the central control subsystem, the operation and maintenance management subsystem may authenticate the identity of the source of the service, store the authentication content in the operation and maintenance management subsystem, and provide the authentication service to the central control subsystem by the operation and maintenance management subsystem.
The operation and maintenance management subsystem can also monitor and statistically analyze logs of the working state of the front-end processing subsystem, such as the on/off time and the equipment inspection state; in addition, it can also monitor and statistically analyze logs of each task state of the central control subsystem, such as each task's authentication password, request time, request content, and returned result.
For the identity authentication module, the operation and maintenance management subsystem may also manage user identification, such as the mapping relationship between the identity ID and the face feature code ID. Meanwhile, it can also manage the user archive, which may include the following. Uploading and updating of archives: the front-end processing subsystem actively initiates the archive operation and sends it to the central control subsystem. Local download of the archive: after the front-end processing subsystem is started and identity recognition is completed, the user's previous archive is first searched in the front-end processing subsystem; if it does not exist there, the operation and maintenance management subsystem is queried through the central control subsystem. If the query is not successful, a new user archive is created at the front-end processing subsystem and synchronized to the operation and maintenance management subsystem through the central control subsystem, as sketched below.
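The archive lookup order described above could be expressed as in the following illustrative sketch; local_store and central_control are assumed interfaces, not components defined by the disclosure.

```python
def load_user_profile(user_id, local_store, central_control):
    """Sketch of the archive lookup order: local copy first, then the operation
    and maintenance subsystem via central control, and finally a newly created
    profile that is synchronized back.  Both stores are assumed interfaces."""
    profile = local_store.get(user_id)
    if profile is not None:
        return profile                                    # previous local archive
    profile = central_control.query_profile(user_id)      # ops-subsystem lookup
    if profile is None:
        profile = {"user_id": user_id, "sessions": []}    # new archive at the front end
        central_control.sync_profile(profile)             # push to the ops subsystem
    local_store[user_id] = profile
    return profile
```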
It should be noted that the above scenario is only an exemplary illustration, and the scope of protection of the exemplary embodiment is not limited thereto.
Fig. 7 illustrates a specific application scenario of the fitness training monitoring system according to the exemplary embodiment. Wherein:
the binocular camera 710 and the adjusting rod 711 are camera equipment, the binocular camera 710 is used for collecting video data of user fitness training in real time, and the adjusting rod 711 is used for adjusting the position, the height and the angle of the binocular camera, so that the main body moving limbs of the user are in the visual field range capable of being shot by the camera.
The microphone array 720 and the speaker 721 are the voice interaction devices: the microphone array 720 is used for collecting the user's voice data, and the speaker 721 is used for voice interaction between the system and the user, so as to realize functions such as starting the monitoring and giving voice prompts.
The screen 730 is a display device, and is used for displaying the evaluation result and the evaluation report, and may also interact with the user, for example, the user may start the system through a touch screen operation.
The edge workstation 740 corresponds to the data processing module, and is configured to process video data, voice interaction data, and display data to implement a monitoring function of the system.
The central control system computing cluster 750 is connected to the edge workstation 740 through a network, corresponds to the central control subsystem, and is configured to receive a call request of the front-end processing subsystem and call each corresponding platform in the server.
The details of the above-mentioned parts have already been described in the above embodiments of the fitness training monitoring system and are therefore not repeated here. In practical application, the system in this specific application scenario needs to be deployed and debugged before use, specifically as follows:
the system deployment comprises the following steps: (1) front-end equipment deployment: installing front-end monitoring and display equipment, mainly large-screen equipment, and integrating a camera, a microphone array, a sound box, an edge workstation and other devices inside; (2) back-end service deployment: and deploying central control system software, connecting a network and connecting the central control system software to external platforms such as face recognition platforms.
The system debugging comprises the following steps: (1) verifying the connections between the front-end equipment, the back-end equipment and the system, so as to ensure that the network is unobstructed; (2) verifying that functions such as the camera, the microphone array, the speaker and the question-and-answer dialogue are complete.
After the debugging is successful, the system can be started to register the user identity, start fitness training and monitoring, and standardize fitness training actions based on the evaluation report generated in real time.
Further, the present exemplary embodiment also provides a fitness training monitoring method, which, as shown in fig. 8, may include the following steps:
step S810: the method comprises the steps of collecting video data of user fitness training in real time through camera equipment, detecting a human body target in the video data, and carrying out image segmentation on the human body target.
In this exemplary embodiment, detecting the above-mentioned human body target in the video data can be realized by the following process: an image recognition model is trained on historical video data and used to recognize the human body region in each frame of the video data; the video data acquired in real time by the camera equipment is then input into the model to obtain the corresponding human target region. After the human target region is obtained, the region can be marked with a rectangular bounding box, and image segmentation is performed on the marked region. The posture of the human body in the target region may be of different types, such as standing, lying or squatting. It should be noted that the above scenario is only an exemplary illustration, and other methods for detecting and marking the human target region also fall within the protection scope of the present exemplary embodiment.
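As a rough illustration of this per-frame flow, the Python sketch below reads frames, obtains a human region, marks it with a rectangle, and crops it for downstream segmentation. The function detect_person is only a placeholder for the image recognition model trained on historical video data, and its (x, y, w, h) output format is an assumption made for illustration.

```python
import cv2

def detect_person(frame):
    """Placeholder for the trained human-region detector (returns x, y, w, h)."""
    h, w = frame.shape[:2]
    return (w // 4, h // 8, w // 2, (3 * h) // 4)  # dummy box covering the centre

cap = cv2.VideoCapture(0)                 # e.g. the binocular camera of Fig. 7
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = detect_person(frame)     # human target region in this frame
    roi = frame[y:y + h, x:x + w]         # segmented image passed downstream
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # mark the region
    cv2.imshow("monitoring", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```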
Step S820: and detecting predefined key nodes in the human body target based on the image segmentation result, and monitoring and calculating the key nodes to obtain monitoring data.
In the present exemplary embodiment, after the target human body region is identified and image segmentation is performed, human limb detection may be carried out and the predefined key nodes of the limbs may be acquired from the segmented image. The key nodes are the main nodes involved in human motion and may include elbow joint nodes, knee joint nodes, head nodes and the like. The specific number and locations of the key nodes may be predefined according to the actual situation. The process of monitoring and calculating the defined key nodes to obtain the monitoring data is explained in detail in the corresponding module of the fitness training monitoring system and is therefore not repeated here.
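For illustration only, the following Python sketch shows one possible measurement over a key node: given its coordinates at a first and a second moment and the centre coordinates of the human target, it computes displacement, speed and position relative to the body centre. The choice of these particular quantities is an assumption; the description above only requires monitoring and calculation of the key nodes.

```python
import numpy as np

def node_monitoring_data(p_t1, p_t2, center, dt):
    """p_t1, p_t2: (x, y) of a key node at the first and second moments;
    center: (x, y) of the human target; dt: time between the two moments (s)."""
    p_t1, p_t2, center = map(np.asarray, (p_t1, p_t2, center))
    displacement = p_t2 - p_t1
    speed = np.linalg.norm(displacement) / dt      # pixels per second
    rel_position = p_t2 - center                   # node position w.r.t. body centre
    return {"displacement": displacement, "speed": speed, "rel_position": rel_position}

# Example: an elbow joint node moving between two frames 40 ms apart.
print(node_monitoring_data((320, 180), (332, 171), (310, 240), dt=0.04))
```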
Step S830: and performing characterization processing on the monitoring data, inputting a pre-established fitness training monitoring model, obtaining an evaluation result and sending the evaluation result to the user.
In this exemplary embodiment, the monitoring data may be characterized to obtain feature data, and the evaluation result may be obtained by a machine learning algorithm based on the feature data. Specifically, the process may be as follows: a classification model is trained by machine learning on the frequency domain feature vectors of the historical data; the frequency domain feature vector obtained by applying the characterization processing to the monitoring data is then input into the model to obtain the evaluation result.
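A minimal sketch of the characterization step is given below: it turns a time series of monitoring data (for example, a joint's speed over a training repetition) into a fixed-length frequency-domain feature vector. Using the magnitude of an FFT is an assumption for illustration; the embodiment only states that frequency-domain feature vectors are produced.

```python
import numpy as np

def frequency_features(signal, n_features=32):
    signal = np.asarray(signal, dtype=float)
    signal = (signal - signal.mean()) / (signal.std() + 1e-8)  # normalise
    spectrum = np.abs(np.fft.rfft(signal))                     # one-sided magnitude spectrum
    return spectrum[:n_features]                               # fixed-length feature vector

# Example: a noisy periodic speed trace sampled at 25 Hz for 4 seconds.
t = np.arange(0, 4, 1 / 25)
trace = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)
print(frequency_features(trace).shape)  # (32,)
```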
For example, the classification model may be obtained by training a 4-class model on the frequency domain feature vectors of the historical data using a convolutional neural network. The classification model classifies the normativity of the monitored training into 4 grades as the evaluation result: standard, general, non-standard and dangerous actions. The recognition accuracy of the convolutional neural network is about 93%, roughly 6% higher than that of traditional machine learning methods such as SVM. It should be noted that the above scenario is only an exemplary illustration, and other methods for generating the above classification model also fall within the protection scope of the present exemplary embodiment.
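For concreteness, the following is a hedged PyTorch sketch of such a 4-class classifier: a small 1-D convolutional network over frequency-domain feature vectors predicting one of the four grades. The choice of framework, architecture and hyperparameters are illustrative assumptions, not the model described in this embodiment.

```python
import torch
import torch.nn as nn

CLASSES = ["standard", "general", "non-standard", "dangerous"]

class ActionGradeCNN(nn.Module):
    def __init__(self, feature_len=32, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                 # x: (batch, feature_len)
        x = x.unsqueeze(1)                # -> (batch, 1, feature_len)
        x = self.features(x).squeeze(-1)  # -> (batch, 32)
        return self.classifier(x)         # logits over the 4 grades

model = ActionGradeCNN()
logits = model(torch.randn(8, 32))        # a batch of 8 feature vectors
print(CLASSES[logits.argmax(dim=1)[0].item()])
```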
In the fitness training monitoring method provided by the example embodiment, the normativity of the user's actions can be determined based on the evaluation result, and when an action is non-standard a voice prompt is issued, so that the user can correct errors and dangerous actions in time.
Before the video data is collected in real time, the method can further comprise the following steps: responding to the system starting operation, carrying out face recognition on a user, and adjusting the camera equipment to acquire video data after the face recognition is passed; and starting system monitoring in response to a starting operation acted on the system by a user. In addition, after the user identity is registered or authenticated, a user file can be established and data can be filed, so that the fitness training progress of the user can be conveniently tracked.
In this exemplary embodiment, the above-mentioned fitness training monitoring method may further collect interactive data in fitness training, and call a corresponding platform of the server through the central control subsystem to process the interactive data, so as to complete interaction between the user and the system. For example, the system sends an evaluation result to the user through question and answer interaction between the user and the system, and performs real-time voice guidance when the user action is not standard. The details of the process are described in detail in the corresponding modules of the fitness training system, and therefore are not described herein again.
Fig. 9 is a flowchart of a specific embodiment of a fitness training monitoring method according to this exemplary embodiment, and referring to fig. 9, the specific embodiment includes the following steps:
step S901: and carrying out face recognition authentication on the user.
Step S903: and judging whether the verification is passed or not, and jumping to the step S905 when the verification is passed.
Step S905: and adjusting the position and the angle of the camera to be opposite to the monitored object.
Step S907: the monitoring function is initiated through voice interaction.
Step S909: and starting the camera to collect video data in real time.
Step S911: and carrying out human body target detection on the video data.
Step S913: predefined key nodes in the human target are detected.
Step S915: and tracking and data measurement and calculation are carried out on the key nodes to obtain monitoring data.
Step S917: and performing characterization processing on the monitoring data to obtain characteristic data.
Step S919: and obtaining an evaluation result based on the characteristic data.
Step S921: and displaying the evaluation result on a screen, and outputting the result in a voice mode.
On the one hand, the fitness training monitoring method provided by this specific embodiment can acquire video data of the user's fitness training process in real time, analyze the acquired video data to obtain a corresponding evaluation result, judge the normativity of the user's fitness training actions based on the evaluation result, and remind the user accordingly, without requiring one-to-one tracking and guidance, thereby reducing the workload of rehabilitation doctors or fitness coaches and saving labor cost. On the other hand, since the video data is processed locally, fitness training can be monitored even in an off-network state, and the speed of data processing is improved. In addition, since the video data is collected in real time, the user's fitness training process can be guided in real time, improving the fitness training effect and the user experience.
FIG. 10 illustrates a schematic structural diagram of a computer system suitable for use in implementing an electronic device of an embodiment of the present disclosure.
It should be noted that the computer system 1000 of the electronic device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the computer system 1000 includes a Central Processing Unit (CPU)1001 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for system operation are also stored. The CPU 1001, ROM 1002, and RAM 1003 are connected to each other via a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage portion 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a LAN card, a modem, or the like. The communication section 1009 performs communication processing via a network such as the internet. A drive 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 1010 as necessary, so that a computer program read out therefrom is installed into the storage section 1008 as necessary.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method as described in the above embodiments. For example, the electronic device may implement the steps shown in fig. 8 to 9, and the like.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
It should be noted that the computer readable media shown in the present disclosure may be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. A fitness training monitoring system comprising:
the front-end processing subsystem comprises a video data processing module, an interactive data processing module and a display module, wherein:
the video data processing module is used for acquiring video data in a fitness training process in real time and analyzing the video data to obtain an evaluation result of the fitness training;
the interactive data processing module is used for acquiring interactive data in a fitness training process and calling platform services of a server side through the central control subsystem to analyze the interactive data;
the display module is used for displaying the evaluation result to the user of the fitness training and interacting with the user;
the central control subsystem is used for calling a corresponding platform of a server side according to the calling request of the front-end processing subsystem to analyze and process the interactive data;
the server comprises a voice analysis platform, an identity authentication platform and a question and answer dialogue platform, wherein the voice analysis platform is used for voice recognition and analysis; the identity authentication platform is used for carrying out identity authentication on a user; the question-answer dialogue platform is used for question-answer interaction.
2. The fitness training monitoring system of claim 1, wherein the video data processing module comprises a data acquisition sub-module and a data processing sub-module, wherein:
the data acquisition submodule acquires the video data in real time through a camera device;
the data processing submodule comprises a monitoring unit and a data analysis unit, wherein:
the monitoring unit is used for detecting a human body target from the video data, carrying out image segmentation on the human body target, monitoring predefined key points based on the result of the image segmentation and calculating to obtain monitoring data;
the data analysis unit is used for performing characteristic processing on the monitoring data to obtain characteristic data, and classifying the characteristic data based on a machine learning algorithm to obtain the evaluation result.
3. The fitness training monitoring system of claim 2, wherein the camera device comprises a binocular camera and an adjustment lever, wherein:
the binocular camera is used for collecting the video data, and the adjusting rod is used for adjusting the position and the angle of the binocular camera, so that the binocular camera collects the video data.
4. The fitness training monitoring system of claim 1, wherein the interactive data processing module comprises a data acquisition sub-module and a data processing sub-module, wherein:
the data acquisition submodule is used for acquiring interactive data in a fitness training process through voice interactive equipment;
and the data processing submodule is used for analyzing the interactive data by calling a platform of a server side through the central control subsystem.
5. The fitness training monitoring system of claim 1, wherein the interaction data comprises voice interaction data, interface presentation data, and tactile interaction data; the display module comprises a result prompting unit and an interaction unit, wherein:
the result prompting unit is used for displaying the evaluation result in a system interface through the interface display data and playing the evaluation result through the voice interaction data based on the analysis result of the voice analysis platform;
the interaction unit is used for receiving the touch interaction data, starting the fitness training monitoring system based on the touch interaction data, and interacting the user with the fitness training monitoring system;
the interaction unit is further used for obtaining processing results of the voice interaction data by the voice analysis platform and the question-answer dialogue platform so as to carry out interaction between the user and the fitness training monitoring system.
6. The fitness training monitoring system of claim 1, wherein the front-end processing subsystem further comprises an identity authentication module, the identity authentication module being configured to receive an identity authentication request from the user and invoke the identity authentication platform to authenticate the user via the central control subsystem.
7. A fitness training monitoring method comprises the following steps:
acquiring video data of user fitness training in real time through camera equipment, detecting a human body target in the video data, and carrying out image segmentation on the human body target;
detecting predefined key nodes in the human body target based on the image segmentation result, and monitoring and calculating the key nodes to obtain monitoring data;
and performing characterization processing on the monitoring data, inputting a pre-established fitness training monitoring model, obtaining an evaluation result and sending the evaluation result to the user.
8. The fitness training monitoring method of claim 7, wherein the obtaining of monitoring data by monitoring and calculating the key nodes comprises:
acquiring coordinate data of the key node at a first moment and a second moment and central coordinate data of the human body target;
and calculating the monitoring data based on the coordinate data.
9. The fitness training monitoring method of claim 7, further comprising:
and acquiring interactive data in the fitness training, and calling a corresponding platform of a server side through a central control subsystem to process the interactive data so as to complete the interaction between the user and the system.
10. The fitness training monitoring method of claim 9, wherein the interaction data comprises voice interaction data, and the obtaining and sending the assessment results to the user further comprises:
and obtaining the action standardization of the user based on the evaluation result, and calling a question-answer dialogue platform of the server side through the central control subsystem when the action of the user is not standardized, and sending voice prompt and guidance suggestion to the user.
11. The fitness training monitoring method of claim 10, wherein the collecting of the interaction data during the fitness training and the processing of the interaction data by the central control subsystem invoking a corresponding platform of a server to complete the interaction between the user and the system comprises:
and voice questions of the user in the fitness training are collected, a voice analysis platform of the server side and the question-answer dialogue platform are called through the central control subsystem, and the voice questions are answered through voice interaction.
12. The fitness training monitoring method of claim 7, wherein prior to the acquiring, by the camera device, video data of the user fitness training in real-time, the method further comprises:
carrying out face recognition on the user, and adjusting the camera equipment to acquire the video data after the face recognition is passed;
and starting system monitoring in response to a starting operation acted on the system by the user, wherein the starting operation is a voice start or a touch start.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 7 to 12.
14. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 7 to 12 via execution of the executable instructions.
CN202011024552.8A 2020-09-25 2020-09-25 Health training monitoring system and method, storage medium and electronic equipment Active CN112151194B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011024552.8A CN112151194B (en) 2020-09-25 2020-09-25 Health training monitoring system and method, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011024552.8A CN112151194B (en) 2020-09-25 2020-09-25 Health training monitoring system and method, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112151194A true CN112151194A (en) 2020-12-29
CN112151194B CN112151194B (en) 2023-12-19

Family

ID=73897237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011024552.8A Active CN112151194B (en) 2020-09-25 2020-09-25 Health training monitoring system and method, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112151194B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113689591A (en) * 2021-08-23 2021-11-23 浙江中体数联科技有限公司 Physical exercise teaching system
CN114042296A (en) * 2021-09-22 2022-02-15 广州医科大学附属第一医院(广州呼吸中心) Intelligent training system for rehabilitation of lung surgery patient
CN115798676A (en) * 2022-11-04 2023-03-14 中永(广东)网络科技有限公司 Interactive experience analysis management method and system based on VR technology

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105536205A (en) * 2015-12-08 2016-05-04 天津大学 Upper limb training system based on monocular video human body action sensing
US20160257000A1 (en) * 2015-03-04 2016-09-08 The Johns Hopkins University Robot control, training and collaboration in an immersive virtual reality environment
CN108853946A (en) * 2018-07-10 2018-11-23 燕山大学 A kind of exercise guide training system and method based on Kinect
CN110472554A (en) * 2019-08-12 2019-11-19 南京邮电大学 Table tennis action identification method and system based on posture segmentation and crucial point feature
CN111641699A (en) * 2020-05-25 2020-09-08 安徽大学 Local area rehabilitation Internet of things system for rehabilitation station

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160257000A1 (en) * 2015-03-04 2016-09-08 The Johns Hopkins University Robot control, training and collaboration in an immersive virtual reality environment
CN105536205A (en) * 2015-12-08 2016-05-04 天津大学 Upper limb training system based on monocular video human body action sensing
CN108853946A (en) * 2018-07-10 2018-11-23 燕山大学 A kind of exercise guide training system and method based on Kinect
CN110472554A (en) * 2019-08-12 2019-11-19 南京邮电大学 Table tennis action identification method and system based on posture segmentation and crucial point feature
CN111641699A (en) * 2020-05-25 2020-09-08 安徽大学 Local area rehabilitation Internet of things system for rehabilitation station

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113689591A (en) * 2021-08-23 2021-11-23 浙江中体数联科技有限公司 Physical exercise teaching system
CN113689591B (en) * 2021-08-23 2024-04-19 浙江中体数联科技有限公司 Sports teaching system
CN114042296A (en) * 2021-09-22 2022-02-15 广州医科大学附属第一医院(广州呼吸中心) Intelligent training system for rehabilitation of lung surgery patient
CN115798676A (en) * 2022-11-04 2023-03-14 中永(广东)网络科技有限公司 Interactive experience analysis management method and system based on VR technology
CN115798676B (en) * 2022-11-04 2023-11-17 中永(广东)网络科技有限公司 Interactive experience analysis management method and system based on VR technology

Also Published As

Publication number Publication date
CN112151194B (en) 2023-12-19

Similar Documents

Publication Publication Date Title
CN112151194B (en) Health training monitoring system and method, storage medium and electronic equipment
CN109726624B (en) Identity authentication method, terminal device and computer readable storage medium
CN108898086A (en) Method of video image processing and device, computer-readable medium and electronic equipment
CN105426827A (en) Living body verification method, device and system
CN108960090A (en) Method of video image processing and device, computer-readable medium and electronic equipment
CN105518708A (en) Method and equipment for verifying living human face, and computer program product
CN111079791A (en) Face recognition method, face recognition device and computer-readable storage medium
WO2020019591A1 (en) Method and device used for generating information
WO2020006964A1 (en) Image detection method and device
CN109271762B (en) User authentication method and device based on slider verification code
US11017253B2 (en) Liveness detection method and apparatus, and storage medium
CN109887187A (en) A kind of pickup processing method, device, equipment and storage medium
CN108847941B (en) Identity authentication method, device, terminal and storage medium
CN109389098B (en) Verification method and system based on lip language identification
CN112333165B (en) Identity authentication method, device, equipment and system
CN108154111A (en) Biopsy method, system, electronic equipment and computer-readable medium
CN109934191A (en) Information processing method and device
CN110059624A (en) Method and apparatus for detecting living body
CN112597850A (en) Identity recognition method and device
CN109031201A (en) The voice localization method and device of Behavior-based control identification
CN105450664A (en) Information processing method and terminal
CN113989929A (en) Human body action recognition method and device, electronic equipment and computer readable medium
CN111062022B (en) Slider verification method and device based on disturbance visual feedback and electronic equipment
CN111949965A (en) Artificial intelligence-based identity verification method, device, medium and electronic equipment
Kang et al. Frontal-view human gait recognition based on Kinect features and deterministic learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant