CN114627559A - Exercise plan planning method, device, equipment and medium based on big data analysis - Google Patents

Exercise plan planning method, device, equipment and medium based on big data analysis

Info

Publication number
CN114627559A
Authority
CN
China
Prior art keywords
search
motion
preset
points
video sequence
Prior art date
Legal status
Granted
Application number
CN202210510021.2A
Other languages
Chinese (zh)
Other versions
CN114627559B (en)
Inventor
Pang Gang (庞刚)
Current Assignee
Shenzhen Qianhai Sports Insurance Network Technology Co ltd
Original Assignee
Shenzhen Qianhai Sports Insurance Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Qianhai Sports Insurance Network Technology Co ltd filed Critical Shenzhen Qianhai Sports Insurance Network Technology Co ltd
Priority to CN202210510021.2A
Publication of CN114627559A
Application granted
Publication of CN114627559B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/10 — Pre-processing; Data cleansing
    • G — PHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H — HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 — ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/30 — ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Abstract

The invention relates to artificial intelligence technology and discloses an exercise plan planning method based on big data analysis, which comprises the following steps: dividing each frame of a human motion video sequence into rectangular blocks, centering a search window on each rectangular block, selecting a plurality of initial search points at the center of each rectangular block according to a search step length, performing three-step search processing in each rectangular block based on the plurality of initial search points to obtain a plurality of standard search points, selecting the compressed video sequence determined by the plurality of standard search points in each rectangular block, and extracting human skeleton data features from the compressed video sequence; constructing and training a motion analysis model by using big data, analyzing the human skeleton data features based on the motion analysis model to obtain a motion analysis result, and planning an exercise plan accordingly. The invention further provides an exercise plan planning device based on big data analysis, an electronic device and a computer-readable storage medium. The invention can solve the problem of low efficiency in exercise plan planning.

Description

Exercise plan planning method, device, equipment and medium based on big data analysis
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a motion plan planning method and device based on big data analysis, electronic equipment and a computer-readable storage medium.
Background
In recent years, the physical condition of teenagers and of some middle-aged and elderly people has not been optimistic. Although the related problems have attracted widespread attention, problems remain in how extracurricular physical activities for teenagers and physical activities for middle-aged and elderly people are organized; for example, the activity mode, the amount of exercise and the completion quality lack targeted and scientific guidance. An exercise plan planning method therefore needs to be provided.
Existing exercise planning methods generally rely on experts watching the recognized motion videos and giving recommendations, which consumes a large amount of manpower and material resources, and the efficiency of exercise plan planning is not high enough.
Disclosure of Invention
The invention provides a method and a device for planning an exercise plan based on big data analysis and a computer readable storage medium, and mainly aims to solve the problem of low efficiency of planning the exercise plan.
In order to achieve the above object, the present invention provides a motion planning method based on big data analysis, which includes:
acquiring a human motion video sequence at a target selected position by using a pre-constructed three-dimensional system capture module, and dividing each frame of motion video sequence in the human motion video sequence into rectangular blocks with preset sizes;
acquiring a preset search step length and a search window, taking the search window as the center of the rectangular block, selecting a plurality of initial search points in the center of the rectangular block according to the search step length, and performing three-step search processing in the rectangular block based on the plurality of initial search points to obtain a plurality of standard search points;
selecting a plurality of compressed video sequences determined by the standard search points from the rectangular blocks with the preset size respectively, and extracting human skeleton data characteristics in the compressed video sequences;
constructing and training a motion analysis model by using pre-acquired big data, and analyzing the human skeleton data characteristics based on the motion analysis model to obtain a motion analysis result;
and performing motion planning on the motion crowd corresponding to the target selected position based on the motion analysis result to obtain a motion planning result.
Optionally, the performing three-step search processing in the rectangular block based on the plurality of initial search points to obtain a plurality of standard search points includes:
respectively calculating initial block matching errors between the initial search points and the search window, and taking the initial search point corresponding to the minimum initial block matching error as a secondary search center;
performing two rounds of searching by using the secondary searching center to obtain a plurality of newly added searching points, calculating corresponding newly added block matching errors based on the newly added searching points, and taking the newly added searching point corresponding to the minimum newly added block matching error as a target searching center;
and converting the preset search step length into a reference search step length according to a preset conversion scale factor, and searching a plurality of reference points as a plurality of standard search points in the target search center according to the reference search step length.
Optionally, performing two rounds of search with the secondary search center to obtain a plurality of new search points includes:
identifying a center category of the secondary search center;
when the center category of the secondary search center is the corner point, selecting search points in a preset shape range expanded by the secondary search center as a plurality of newly-added search points;
and when the center category of the secondary search center is a non-corner point, randomly selecting a preset number of grid points as a plurality of newly-added search points from the secondary search center.
Optionally, the selecting a plurality of compressed video sequences determined by the standard search points in the rectangular blocks with the preset size respectively includes:
mapping the standard search points to a preset two-dimensional rectangular coordinate system, and carrying out closed connection on the standard search points in the two-dimensional rectangular coordinate system to obtain a search connection graph;
respectively calculating the coincidence degrees between the search connection graph and the plurality of rectangular blocks, and taking the rectangular block with the highest coincidence degree as a target rectangular block;
and taking the obtained motion video sequence corresponding to the target rectangular block as a compressed video sequence.
Optionally, the extracting human skeleton data features in the compressed video sequence includes:
identifying bone location data in the compressed video sequence using a bone identification device;
vectorizing the bone position data by using a preset vector conversion formula to obtain bone vector data;
calculating joint angle characteristics corresponding to the bone position data based on the bone vector data and a preset joint angle calculation formula;
calculating joint included angle characteristics corresponding to the bone position data according to the joint angle characteristics and a preset joint included angle calculation formula;
and summarizing the joint angle characteristics and the joint included angle characteristics into human skeleton data characteristics.
Optionally, the preset vector conversion formula (given only as equation images in the original publication) converts the bone position data, together with two preset fixed parameters, into the bone vector data.
Optionally, the dividing each frame of the motion video sequence in the human motion video sequence into rectangular blocks with preset sizes includes:
identifying a plurality of motion video frames in the human motion video sequence, and extracting key frames from the plurality of motion video frames to obtain a plurality of motion key frames;
and acquiring a reference human body block with a preset size, and performing image interception in the plurality of motion key frames according to the sequence from left to right and from top to bottom to obtain a rectangular block with a preset size.
In order to solve the above problems, the present invention further provides an exercise plan planning device based on big data analysis, the device comprising:
the search point selection module is used for acquiring a human motion video sequence at a target selected position by using a pre-constructed three-dimensional system capture module, dividing each frame of motion video sequence in the human motion video sequence into rectangular blocks with preset sizes, acquiring preset search step lengths and search windows, taking the search windows as the centers of the rectangular blocks, selecting a plurality of initial search points in the centers of the rectangular blocks by using the search step lengths, and performing three-step search processing in the rectangular blocks based on the plurality of initial search points to obtain a plurality of standard search points;
the data feature extraction module is used for selecting a plurality of compressed video sequences determined by the standard search points from the rectangular blocks with the preset sizes respectively and extracting human skeleton data features in the compressed video sequences;
the motion analysis module is used for constructing and training a motion analysis model by utilizing the pre-acquired big data, and analyzing the human skeleton data characteristics based on the motion analysis model to obtain a motion analysis result;
and the plan planning module is used for carrying out movement plan planning on the movement crowd corresponding to the target selected position based on the movement analysis result to obtain a movement plan planning result.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one instruction; and
and the processor executes the instructions stored in the memory to realize the motion planning method based on the big data analysis.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, which stores at least one instruction, where the at least one instruction is executed by a processor in an electronic device to implement the motion planning method based on big data analysis.
In the embodiment of the invention, the human motion video sequence at the target selected position is accurately acquired through the three-dimensional system capturing module, and the motion video sequence is compressed by using the three-step searching processing method, so that the information redundancy in the motion video sequence is removed, and the compressed video sequence is obtained. And extracting human skeleton data characteristics in the compressed video sequence, analyzing the human skeleton data characteristics in a motion analysis model to obtain a motion analysis result, and planning a motion plan of a motion crowd corresponding to the target selected position according to the motion analysis result to obtain a motion plan planning result. The efficiency of movement plan planning is improved. Therefore, the exercise plan planning method, the exercise plan planning device, the electronic equipment and the computer readable storage medium based on big data analysis provided by the invention can solve the problem of low efficiency of exercise plan planning.
Drawings
Fig. 1 is a schematic flow chart of a motion planning method based on big data analysis according to an embodiment of the present invention;
FIG. 2 is a functional block diagram of an exercise planning apparatus based on big data analysis according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device for implementing the motion planning method based on big data analysis according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides an exercise plan planning method based on big data analysis. The execution subject of the exercise planning method based on big data analysis includes, but is not limited to, at least one of electronic devices such as a server and a terminal that can be configured to execute the method provided by the embodiments of the present application. In other words, the exercise planning method based on big data analysis may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Fig. 1 is a schematic flow chart of a motion planning method based on big data analysis according to an embodiment of the present invention. In this embodiment, the exercise plan planning method based on big data analysis includes:
s1, acquiring a human motion video sequence of a target selected position by using a pre-constructed three-dimensional system capture module, and dividing each frame of motion video sequence in the human motion video sequence into rectangular blocks with preset sizes.
In the embodiment of the invention, the pre-constructed three-dimensional system capture module can obtain a human motion video, which is a special kind of time series reflecting the motion state and posture of the human body as they change over time. The target selected position can be any place where people exercise, for example a school playground where students exercise or practice during breaks, or a square where a crowd gathers to dance.
Specifically, the dividing each frame of the motion video sequence in the human motion video sequence into rectangular blocks with preset sizes includes:
identifying a plurality of motion video frames in the human motion video sequence, and extracting key frames from the plurality of motion video frames to obtain a plurality of motion key frames;
and acquiring a reference human body block with a preset size, and performing image interception in the plurality of motion key frames according to the sequence from left to right and from top to bottom to obtain a rectangular block with a preset size.
In detail, the motion key frames are the video frames that contain human motion among the motion video frames; the reference human body block with the preset size is a standard human body reference frame obtained by summarizing people of different age groups and body shapes, and non-overlapping image cropping is performed in the motion key frames from left to right and from top to bottom using the reference human body block, so as to obtain the rectangular blocks with the preset size.
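For illustration only, the key-frame selection and block division described above can be sketched as follows in Python; the frame-difference criterion, the 64×32 reference block size and all function names are assumptions made for this sketch, not details taken from the patent.

```python
import numpy as np

def extract_key_frames(frames, motion_threshold=12.0):
    """Keep frames whose mean absolute difference from the previous frame
    exceeds a threshold, as a simple stand-in for key-frame extraction."""
    key_frames = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        if np.abs(cur.astype(np.int16) - prev.astype(np.int16)).mean() > motion_threshold:
            key_frames.append(cur)
    return key_frames

def split_into_blocks(frame, block_h=64, block_w=32):
    """Crop non-overlapping rectangular blocks of a preset size, scanning
    the frame from left to right and from top to bottom."""
    blocks = []
    height, width = frame.shape[:2]
    for top in range(0, height - block_h + 1, block_h):
        for left in range(0, width - block_w + 1, block_w):
            blocks.append(frame[top:top + block_h, left:left + block_w])
    return blocks
```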
S2, obtaining a preset search step length and a preset search window, taking the search window as the center of the rectangular block, selecting a plurality of initial search points in the center of the rectangular block according to the search step length, and performing three-step search processing in the rectangular block based on the plurality of initial search points to obtain a plurality of standard search points.
In the embodiment of the invention, the human motion video sequence at the target selected position acquired by the three-dimensional system capturing module has the characteristics of high dimensionality, dense sampling and large occupied storage space, so that information redundancy in data needs to be removed through data compression, and the data is encoded by using the least bits. The data compression method comprises a triangular mesh compression method, a three-step search compression method, a four-step search compression method and a non-rectangular search mode method. The non-rectangular search pattern method comprises a diamond search method, a hexagon search method, an edge-based inner search hexagon search method and the like.
Preferably, the preset search step length may be 2 units, the search window is in a grid shape, the coordinate of the central pixel point in the search window is set as an initial position of search, a plurality of initial search points are selected at the center of the rectangular block according to the search step length, and in the scheme, 9 initial search points may be selected near the center of the search window.
Specifically, the performing three-step search processing in the rectangular block based on the plurality of initial search points to obtain a plurality of standard search points includes:
respectively calculating initial block matching errors between the initial search points and the search window, and taking the initial search point corresponding to the minimum initial block matching error as a secondary search center;
performing two rounds of searching by using the secondary searching center to obtain a plurality of newly added searching points, calculating corresponding newly added block matching errors based on the newly added searching points, and taking the newly added searching point corresponding to the minimum newly added block matching error as a target searching center;
and converting the preset search step length into a reference search step length according to a preset conversion scale factor, and searching a plurality of reference points as a plurality of standard search points in the target search center according to the reference search step length.
Specifically, the conversion scale factor is 0.5 and the preset search step may be 2 units; converting the preset search step into the reference search step according to the preset conversion scale factor means multiplying the preset search step by the conversion scale factor, so in this scheme the reference search step is 1.
Further, the performing two rounds of search with the secondary search center to obtain a plurality of newly added search points includes:
identifying a center category of the secondary search center;
when the center category of the secondary search center is the corner point, selecting search points in a preset shape range expanded by the secondary search center as a plurality of newly-added search points;
and when the center category of the secondary search center is a non-corner point, randomly selecting a preset number of grid points as a plurality of newly-added search points from the secondary search center.
In detail, the preset shape range may be an L-shaped region around the secondary search center.
Preferably, the search range can be gradually reduced from coarse to fine in the search process by using a four-step search method, and meanwhile, a rectangular search mode is adopted, so that the search efficiency is improved.
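A minimal sketch of the coarse-to-fine block-matching search described in this step is given below, using the sum of absolute differences as the block matching error; the concrete error measure, the loop structure and the function names are assumptions for illustration rather than the exact procedure claimed above.

```python
import numpy as np

def block_matching_error(block, window, dx, dy):
    """Sum of absolute differences between the block and the search-window
    region whose top-left corner is shifted to (dx, dy)."""
    h, w = block.shape
    candidate = window[dy:dy + h, dx:dx + w]
    return int(np.abs(block.astype(np.int16) - candidate.astype(np.int16)).sum())

def coarse_to_fine_search(block, window, step=2, scale=0.5):
    """Evaluate the nine points spaced by the current step around the centre,
    move the centre to the point with the smallest matching error, then
    shrink the step by the conversion scale factor and repeat."""
    h, w = block.shape
    cy, cx = (window.shape[0] - h) // 2, (window.shape[1] - w) // 2
    best_points = []
    while step >= 1:
        candidates = [(cx + i * step, cy + j * step)
                      for j in (-1, 0, 1) for i in (-1, 0, 1)
                      if 0 <= cx + i * step <= window.shape[1] - w
                      and 0 <= cy + j * step <= window.shape[0] - h]
        errors = [block_matching_error(block, window, x, y) for x, y in candidates]
        cx, cy = candidates[int(np.argmin(errors))]
        best_points.append((cx, cy))
        step = int(step * scale)   # e.g. 2 -> 1, matching the 0.5 scale factor above
    return best_points             # the final points stand in for the standard search points
```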
S3, selecting a plurality of compressed video sequences determined by the standard search points from the rectangular blocks with the preset sizes respectively, and extracting human skeleton data characteristics in the compressed video sequences.
In this embodiment of the present invention, the selecting a plurality of compressed video sequences determined by the standard search points from the rectangular blocks with the preset size respectively includes:
mapping the standard search points to a preset two-dimensional rectangular coordinate system, and carrying out closed connection on the standard search points in the two-dimensional rectangular coordinate system to obtain a search connection graph;
respectively calculating the coincidence degrees between the search connection graph and the plurality of rectangular blocks, and taking the rectangular block with the highest coincidence degree as a target rectangular block;
and taking the obtained motion video sequence corresponding to the target rectangular block as a compressed video sequence.
In detail, the embodiment of the present invention may calculate the coincidence degree between the search connection graph and each rectangular block by the following formula:

D = |S ∩ R| / |S ∪ R|

wherein D is the coincidence degree, S is the search connection graph, R is the rectangular block, ∩ is the intersection operation, and ∪ is the union operation.
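Read this way, the coincidence degree is an intersection-over-union of two regions. A short illustrative sketch follows; representing the search connection graph and each rectangular block as boolean masks of the frame is an assumption made here, not a detail from the patent.

```python
import numpy as np

def coincidence_degree(connection_mask, block_mask):
    """Ratio of the overlapping area to the combined area of two regions,
    both given as boolean masks over the same frame."""
    intersection = np.logical_and(connection_mask, block_mask).sum()
    union = np.logical_or(connection_mask, block_mask).sum()
    return float(intersection) / float(union) if union else 0.0

def select_target_block(connection_mask, block_masks):
    """Return the index of the rectangular block whose coincidence degree
    with the search connection graph is highest."""
    degrees = [coincidence_degree(connection_mask, m) for m in block_masks]
    return int(np.argmax(degrees))
```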
Further, the extracting human skeleton data features in the compressed video sequence includes:
identifying bone location data in the compressed video sequence using a bone identification device;
vectorizing the bone position data by using a preset vector conversion formula to obtain bone vector data;
calculating joint angle characteristics corresponding to the bone position data based on the bone vector data and a preset joint angle calculation formula;
calculating joint included angle characteristics corresponding to the bone position data according to the joint angle characteristics and a preset joint included angle calculation formula;
and summarizing the joint angle characteristics and the joint included angle characteristics into human skeleton data characteristics.
In detail, the bone recognition device may include a depth sensor, an infrared emitter, an RGB color camera, and a microphone array. The system can perform human skeleton tracking, voice recognition, identity recognition and the like in real time, can clearly recognize the positions of human skeletons, can recognize 25 joint points, and can display skeleton structure diagrams of six persons at the same time. The RGB color camera is used for acquiring color images, 30 frames of images can be acquired every second, the number of the depth sensors is two, the microphone array is used for sound source positioning and voice recognition, and the infrared emitter can be used for detecting the relative position of a human body.
The skeleton position data comprise the three-dimensional coordinates and the quaternion of each node of the human skeleton. The three-dimensional coordinates are coordinates in the global coordinate system; the quaternion, defined as (x, y, z, w), represents the rotation relation between a child node and its parent node, i.e. the transformation between their local coordinate systems. The quaternion is a unit quaternion, i.e. its modulus is 1.
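The joint record described above, a three-dimensional position in the global coordinate system plus a unit quaternion for the child-to-parent rotation, can be held in a small data structure such as the following sketch; the field and class names are illustrative assumptions.

```python
from dataclasses import dataclass
import math

@dataclass
class JointSample:
    """One skeleton node reported by the bone recognition device."""
    x: float   # global three-dimensional coordinates
    y: float
    z: float
    qx: float  # quaternion (x, y, z, w): rotation of the child node
    qy: float  # relative to its parent node
    qz: float
    qw: float

    def is_unit_quaternion(self, tol: float = 1e-6) -> bool:
        """The quaternion is expected to have modulus 1."""
        norm = math.sqrt(self.qx ** 2 + self.qy ** 2 + self.qz ** 2 + self.qw ** 2)
        return abs(norm - 1.0) < tol
```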
Specifically, the preset vector conversion formula (given only as equation images in the original publication) converts the bone position data, together with two preset fixed parameters, into the bone vector data.
Further, the preset joint angle calculation formula (given only as equation images in the original publication) computes a first joint angle feature and a second joint angle feature from the coordinates of the corresponding joint points.
Specifically, the preset joint included angle calculation formula relates the joint included angle feature θ to the first joint angle feature a₁ and the second joint angle feature a₂ through their dot product and moduli:

cos θ = (a₁ · a₂) / (‖a₁‖ ‖a₂‖)

wherein θ is the joint included angle feature, a₁ is the first joint angle feature, a₂ is the second joint angle feature, and ‖a₁‖ and ‖a₂‖ are their moduli.
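Consistent with the relation reconstructed above, the joint included angle can be computed from the two joint angle features as in the sketch below; representing the features as NumPy vectors and the function name are assumptions for illustration.

```python
import numpy as np

def joint_included_angle(first_feature, second_feature):
    """Included angle (in radians) between two joint angle features,
    computed from their dot product and moduli."""
    a = np.asarray(first_feature, dtype=float)
    b = np.asarray(second_feature, dtype=float)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Example: perpendicular features give an included angle of about 1.5708 rad (90 degrees).
# joint_included_angle([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```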
S4, constructing and training a motion analysis model by utilizing the pre-acquired big data, and analyzing the human skeleton data characteristics based on the motion analysis model to obtain a motion analysis result.
In the embodiment of the invention, the motion analysis model can be a random forest model, a convolutional neural network model, a bidirectional long-short term memory network model or the like. The pre-acquired big data are motion-related data.
Specifically, the constructing and training by using the pre-acquired big data to obtain the motion analysis model includes:
obtaining motion characteristic data based on big data and constructing an initial motion sequence by utilizing the motion characteristic data;
performing integral transformation on the initial motion sequence based on a preset integral transformation formula to obtain a standard motion sequence;
constructing a motion analysis formula according to the standard motion sequence, and generating an initial analysis model by taking the motion analysis formula as a reference formula;
inputting preset training related data into the initial analysis model to obtain an initial analysis result;
comparing the initial analysis result with a pre-acquired standard reference result, and taking the initial analysis model as a motion analysis model when the initial analysis result is consistent with the pre-acquired standard reference result;
and when the comparison is inconsistent, performing parameter adjustment on the initial analysis model, inputting the training related data into the initial analysis model after the parameter adjustment to obtain a new analysis result, and when the new analysis result is consistent with the standard reference result, taking the initial analysis model after the parameter adjustment as a motion analysis model.
In detail, the big data may be a plurality of pieces of historical motion data contained in a motion database, from which the motion feature data are extracted in the embodiment of the present invention.
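The compare-and-adjust training procedure listed above can be summarized by the following sketch; the callables analyse and adjust, the round limit and all other names stand in for the (unspecified) analysis formula and parameter-tuning rule and are assumptions, not the disclosed model.

```python
def train_motion_analysis_model(model_params, analyse, adjust,
                                training_data, reference_result, max_rounds=100):
    """Compare-and-adjust loop: run the initial analysis model on the training
    data, compare its result with the standard reference result, and adjust
    the parameters until the two are consistent or the round limit is hit."""
    for _ in range(max_rounds):
        result = analyse(model_params, training_data)
        if result == reference_result:   # comparison is consistent
            return model_params          # keep this model as the motion analysis model
        model_params = adjust(model_params, result, reference_result)
    return model_params
```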
Specifically, the preset integral transformation formula (given only as equation images in the original publication) obtains the standard motion sequence by applying a transformation matrix to the initial motion sequence; a preferred value of the transformation matrix is likewise given only as an image.
in detail, the final purpose of the integral transformation is to better process the motion information.
Further, the motion analysis formula (given only as equation images in the original publication) expresses the horizontal motion component and the vertical motion component of the standard motion sequence in terms of eight preset constraint parameters.
Specifically, the human body bone data features are input into the motion analysis model to obtain a motion analysis result, and the motion analysis result mainly comprises identification of a motion state, monitoring and judgment of a motion process and the like.
And S5, performing motion plan planning on the motion crowd corresponding to the target selected position based on the motion analysis result to obtain a motion plan planning result.
In the embodiment of the invention, the motion analysis result comprises the identification of the motion state, the monitoring and judgment of the motion process, and the like. In order to ensure that the exercise is regular and scientific, a plan can be made for the exercising crowd corresponding to the target selected position, and a specific exercise plan is formulated according to expert knowledge and the crowd's interests.
For example, if the exercise analysis result indicates that the teenagers lack exercise and their exercise state is mild exercise, a corresponding exercise plan can be formulated to increase their exercise time and intensity.
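Purely as an illustration of mapping an analysis result to a plan (the categories, thresholds and plan contents below are assumptions, not values from the patent):

```python
def plan_exercise(analysis_result):
    """Map a motion analysis result to a simple exercise plan suggestion."""
    state = analysis_result.get("motion_state", "unknown")
    if state == "mild":        # e.g. a crowd shown to lack exercise
        return {"sessions_per_week": 4, "minutes_per_session": 40,
                "note": "increase exercise time and intensity gradually"}
    if state == "adequate":
        return {"sessions_per_week": 3, "minutes_per_session": 30,
                "note": "maintain the current routine"}
    return {"sessions_per_week": 3, "minutes_per_session": 20,
            "note": "start with light activity and reassess"}
```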
In the embodiment of the invention, the human motion video sequence at the target selected position is accurately acquired through the three-dimensional system capturing module, and the motion video sequence is compressed by using the three-step searching processing method, so that the information redundancy in the motion video sequence is removed, and the compressed video sequence is obtained. And extracting human skeleton data characteristics in the compressed video sequence, performing motion analysis on the human skeleton data characteristics in a motion analysis model to obtain a motion analysis result, and performing motion plan planning on the motion crowd corresponding to the target selected position according to the motion analysis result to obtain a motion plan planning result. The efficiency of movement plan planning is improved. Therefore, the exercise planning method based on big data analysis provided by the invention can solve the problem of low efficiency of exercise planning.
Fig. 2 is a functional block diagram of an exercise planning apparatus based on big data analysis according to an embodiment of the present invention.
The exercise planning device 100 based on big data analysis according to the present invention can be installed in an electronic device. According to the realized functions, the motion planning device 100 based on big data analysis may include a search point selection module 101, a data feature extraction module 102, a motion analysis module 103, and a planning module 104. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions of the respective modules/units are as follows:
the search point selection module 101 is configured to acquire a human motion video sequence at a target selected position by using a pre-constructed three-dimensional system capture module, divide each frame of motion video sequence in the human motion video sequence into rectangular blocks with a preset size, acquire a preset search step size and a preset search window, use the search window as the center of each rectangular block, select a plurality of initial search points at the center of each rectangular block by using the search step size, and perform three-step search processing in the rectangular blocks based on the plurality of initial search points to obtain a plurality of standard search points;
the data feature extraction module 102 is configured to select a compressed video sequence determined by the plurality of standard search points from the rectangular blocks of the preset size, and extract human skeleton data features in the compressed video sequence;
the motion analysis module 103 is configured to construct and train a motion analysis model by using the pre-acquired big data, and analyze the human skeleton data characteristics based on the motion analysis model to obtain a motion analysis result;
the planning module 104 is configured to perform motion planning on the motion crowd corresponding to the target selected position based on the motion analysis result, so as to obtain a motion planning result.
In detail, the specific implementation of each module of the exercise planning apparatus 100 based on big data analysis is as follows:
the method comprises the steps of firstly, acquiring a human motion video sequence of a target selected position by using a pre-constructed three-dimensional system capture module, and dividing each frame of motion video sequence in the human motion video sequence into rectangular blocks with preset sizes.
In the embodiment of the invention, the pre-constructed three-dimensional system capture module can obtain a human motion video, which is a special kind of time series reflecting the motion state and posture of the human body as they change over time. The target selected position can be any place where people exercise, for example a school playground where students exercise or practice during breaks, or a square where a crowd gathers to dance.
Specifically, the dividing each frame of the motion video sequence in the human motion video sequence into rectangular blocks with preset sizes includes:
identifying a plurality of motion video frames in the human motion video sequence, and extracting key frames from the plurality of motion video frames to obtain a plurality of motion key frames;
and acquiring a reference human body block with a preset size, and performing image interception in the plurality of motion key frames according to the sequence from left to right and from top to bottom to obtain a rectangular block with a preset size.
In detail, the motion key frames are the video frames that contain human motion among the motion video frames; the reference human body block with the preset size is a standard human body reference frame obtained by summarizing people of different age groups and body shapes, and non-overlapping image cropping is performed in the motion key frames from left to right and from top to bottom using the reference human body block, so as to obtain the rectangular blocks with the preset size.
And secondly, acquiring a preset search step length and a preset search window, taking the search window as the center of the rectangular block, selecting a plurality of initial search points in the center of the rectangular block according to the search step length, and performing three-step search processing in the rectangular block based on the plurality of initial search points to obtain a plurality of standard search points.
In the embodiment of the invention, the human motion video sequence at the target selected position acquired by the three-dimensional system capturing module has the characteristics of high dimensionality, dense sampling and large occupied storage space, so that information redundancy in data needs to be removed through data compression, and the data is encoded by using the least bits. The data compression method comprises a triangular mesh compression method, a three-step search compression method, a four-step search compression method and a non-rectangular search mode method. The non-rectangular search pattern method comprises a diamond search method, a hexagon search method, an inner search hexagon search method based on edges and the like. In the embodiment of the invention, a four-step search compression method is adopted.
Preferably, the preset search step length may be 2 units, the search window is in a grid shape, the coordinate of the central pixel point in the search window is set as an initial position of search, a plurality of initial search points are selected at the center of the rectangular block according to the search step length, and in the scheme, 9 initial search points may be selected near the center of the search window.
Specifically, the performing three-step search processing in the rectangular block based on the plurality of initial search points to obtain a plurality of standard search points includes:
respectively calculating initial block matching errors between the initial search points and the search window, and taking the initial search point corresponding to the minimum initial block matching error as a secondary search center;
performing two rounds of searching by using the secondary searching center to obtain a plurality of newly added searching points, calculating corresponding newly added block matching errors based on the newly added searching points, and taking the newly added searching point corresponding to the minimum newly added block matching error as a target searching center;
and converting the preset search step length into a reference search step length according to a preset conversion scale factor, and searching a plurality of reference points as a plurality of standard search points in the target search center according to the reference search step length.
In detail, the conversion scale factor is 0.5 and the preset search step may be 2 units; converting the preset search step into the reference search step according to the preset conversion scale factor means multiplying the preset search step by the conversion scale factor, so in this scheme the reference search step is 1.
Further, the performing two rounds of search with the secondary search center to obtain a plurality of newly added search points includes:
identifying a center category of the secondary search center;
when the center category of the secondary search center is the corner point, selecting search points in a preset shape range expanded by the secondary search center as a plurality of newly-added search points;
and when the center category of the secondary search center is a non-corner point, randomly selecting a preset number of grid points as a plurality of newly-added search points from the secondary search center.
In detail, the preset shape range may be an L-shaped region around the secondary search center.
Preferably, the search range can be gradually reduced from coarse to fine in the search process by using a four-step search method, and meanwhile, a rectangular search mode is adopted, so that the search efficiency is improved.
And thirdly, selecting a plurality of compressed video sequences determined by the standard search points from the rectangular blocks with the preset sizes respectively, and extracting human skeleton data characteristics in the compressed video sequences.
In this embodiment of the present invention, the selecting a plurality of compressed video sequences determined by the standard search points from the rectangular blocks with the preset size respectively includes:
mapping the standard search points to a preset two-dimensional rectangular coordinate system, and carrying out closed connection on the standard search points in the two-dimensional rectangular coordinate system to obtain a search connection graph;
respectively calculating the coincidence degrees between the search connection graph and the plurality of rectangular blocks, and taking the rectangular block with the highest coincidence degree as a target rectangular block;
and taking the obtained motion video sequence corresponding to the target rectangular block as a compressed video sequence.
In detail, the embodiment of the present invention may calculate the coincidence degree between the search connection graph and each rectangular block by the following formula:

D = |S ∩ R| / |S ∪ R|

wherein D is the coincidence degree, S is the search connection graph, R is the rectangular block, ∩ is the intersection operation, and ∪ is the union operation.
Further, the extracting human skeleton data features in the compressed video sequence includes:
identifying bone location data in the compressed video sequence using a bone identification device;
vectorizing the bone position data by using a preset vector conversion formula to obtain bone vector data;
calculating joint angle characteristics corresponding to the bone position data based on the bone vector data and a preset joint angle calculation formula;
calculating joint included angle characteristics corresponding to the bone position data according to the joint angle characteristics and a preset joint included angle calculation formula;
and summarizing the joint angle characteristics and the joint included angle characteristics into human skeleton data characteristics.
In detail, the bone recognition device may include a depth sensor, an infrared emitter, an RGB color camera, and a microphone array. The system can perform human skeleton tracking, voice recognition, identity recognition and the like in real time, can clearly recognize the positions of human skeletons, can recognize 25 joint points, and can display skeleton structure diagrams of six persons at the same time. The RGB color camera is used for acquiring color images, 30 frames of images can be acquired every second, the number of the depth sensors is two, the microphone array is used for sound source positioning and voice recognition, and the infrared emitter can be used for detecting the relative position of a human body.
The skeleton position data comprise the three-dimensional coordinates and the quaternion of each node of the human skeleton. The three-dimensional coordinates are coordinates in the global coordinate system; the quaternion, defined as (x, y, z, w), represents the rotation relation between a child node and its parent node, i.e. the transformation between their local coordinate systems. The quaternion is a unit quaternion, i.e. its modulus is 1.
Specifically, the preset vector conversion formula (given only as equation images in the original publication) converts the bone position data, together with two preset fixed parameters, into the bone vector data.
Further, the preset joint angle calculation formula (given only as equation images in the original publication) computes a first joint angle feature and a second joint angle feature from the coordinates of the corresponding joint points.
Specifically, the preset joint included angle calculation formula relates the joint included angle feature θ to the first joint angle feature a₁ and the second joint angle feature a₂ through their dot product and moduli:

cos θ = (a₁ · a₂) / (‖a₁‖ ‖a₂‖)

wherein θ is the joint included angle feature, a₁ is the first joint angle feature, a₂ is the second joint angle feature, and ‖a₁‖ and ‖a₂‖ are their moduli.
And step four, constructing and training a motion analysis model by using the pre-acquired big data, and analyzing the human skeleton data characteristics based on the motion analysis model to obtain a motion analysis result.
In the embodiment of the invention, the motion analysis model can be a random forest model, a convolutional neural network model, a bidirectional long-short term memory network model or the like. The pre-acquired big data are motion-related data.
Specifically, the constructing and training by using the pre-acquired big data to obtain the motion analysis model includes:
acquiring motion characteristic data based on big data, and constructing an initial motion sequence by using the motion characteristic data;
performing integral transformation on the initial motion sequence based on a preset integral transformation formula to obtain a standard motion sequence;
constructing a motion analysis formula according to the standard motion sequence, and generating an initial analysis model by taking the motion analysis formula as a reference formula;
inputting preset training related data into the initial analysis model to obtain an initial analysis result;
comparing the initial analysis result with a pre-acquired standard reference result, and taking the initial analysis model as a motion analysis model when the initial analysis result is consistent with the pre-acquired standard reference result;
and when the comparison is inconsistent, performing parameter adjustment on the initial analysis model, inputting the training related data into the initial analysis model after the parameter adjustment to obtain a new analysis result, and when the new analysis result is consistent with the standard reference result, taking the initial analysis model after the parameter adjustment as a motion analysis model.
In detail, the big data may be a plurality of pieces of historical motion data contained in a motion database, from which the motion feature data are extracted in the embodiment of the present invention.
Specifically, the preset integral transformation formula (given only as equation images in the original publication) obtains the standard motion sequence by applying a transformation matrix to the initial motion sequence; a preferred value of the transformation matrix is likewise given only as an image.
in detail, the final purpose of the integral transformation is to better process the motion information.
Further, the motion analysis formula (given only as equation images in the original publication) expresses the horizontal motion component and the vertical motion component of the standard motion sequence in terms of eight preset constraint parameters.
Specifically, the human body bone data features are input into the motion analysis model to obtain a motion analysis result, and the motion analysis result mainly comprises identification of a motion state, monitoring and judgment of a motion process and the like.
And fifthly, carrying out movement plan planning on the movement crowd corresponding to the target selected position based on the movement analysis result to obtain a movement plan planning result.
In the embodiment of the invention, the motion analysis result comprises the identification of the motion state, the monitoring and judgment of the motion process, and the like. In order to ensure that the exercise is regular and scientific, a plan can be made for the exercising crowd corresponding to the target selected position, and a specific exercise plan is formulated according to expert knowledge and the crowd's interests.
For example, if the exercise analysis result indicates that the teenagers lack exercise and their exercise state is mild exercise, a corresponding exercise plan can be formulated to increase their exercise time and intensity.
In the embodiment of the invention, the human motion video sequence at the target selected position is accurately acquired through the three-dimensional system capturing module, and the motion video sequence is compressed by using the three-step searching processing method, so that the information redundancy in the motion video sequence is removed, and the compressed video sequence is obtained. And extracting human skeleton data characteristics in the compressed video sequence, performing motion analysis on the human skeleton data characteristics in a motion analysis model to obtain a motion analysis result, and performing motion plan planning on the motion crowd corresponding to the target selected position according to the motion analysis result to obtain a motion plan planning result. The efficiency of movement plan planning is improved. Therefore, the exercise planning device based on big data analysis can solve the problem of low efficiency of exercise planning.
Fig. 3 is a schematic structural diagram of an electronic device for implementing a motion planning method based on big data analysis according to an embodiment of the present invention.
The electronic device may include a processor 10, a memory 11, a communication interface 12 and a bus 13, and may further include a computer program, such as a motion planning program based on big data analysis, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as codes of a motion planning program based on big data analysis, etc., but also to temporarily store data that has been output or will be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the whole electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device by running or executing programs or modules (e.g., a motion planning program based on big data analysis, etc.) stored in the memory 11 and calling data stored in the memory 11.
The communication interface 12 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which are commonly used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a Display (Display), an input unit, such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
The bus 13 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 13 may be divided into an address bus, a data bus, a control bus, etc. The bus 13 is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 3 shows only an electronic device with certain components; those skilled in the art will appreciate that the structure shown in fig. 3 does not constitute a limitation of the electronic device, which may include fewer or more components than those shown, a combination of some components, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The big data analysis-based movement plan planning program stored in the memory 11 of the electronic device is a combination of a plurality of instructions, which when executed in the processor 10, can realize:
acquiring a human motion video sequence of a target selected position by using a pre-constructed three-dimensional system capturing module, and dividing each frame of motion video sequence in the human motion video sequence into rectangular blocks with preset sizes;
acquiring a preset search step length and a search window, taking the search window as the center of the rectangular block, selecting a plurality of initial search points in the center of the rectangular block according to the search step length, and performing three-step search processing in the rectangular block based on the plurality of initial search points to obtain a plurality of standard search points;
selecting a plurality of compressed video sequences determined by the standard search points from the rectangular blocks with the preset size respectively, and extracting human skeleton data characteristics in the compressed video sequences;
constructing and training a motion analysis model by using pre-acquired big data, and analyzing the human skeleton data characteristics based on the motion analysis model to obtain a motion analysis result;
and performing motion planning on the motion crowd corresponding to the target selected position based on the motion analysis result to obtain a motion planning result.
Specifically, for the implementation of the above instructions by the processor 10, reference may be made to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not repeated here.
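The fourth of these instructions, constructing and training a motion analysis model from pre-acquired big data, is left model-agnostic in the text. As a minimal sketch, the snippet below stands in a scikit-learn random forest for the motion analysis model and synthetic rows of skeleton features for the big data; the feature layout, the labels and the choice of classifier are all assumptions made for illustration, not details fixed by the patent.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical "big data": each row is a vector of human skeleton data
# characteristics (e.g. joint angles and joint included angles), each label a
# coarse motion-analysis class. Both are synthetic and for illustration only.
rng = np.random.default_rng(0)
big_data_features = rng.normal(size=(1000, 8))
big_data_labels = (big_data_features[:, 0] + big_data_features[:, 3] > 0).astype(int)

# "Constructing and training a motion analysis model" with the pre-acquired data.
motion_analysis_model = RandomForestClassifier(n_estimators=50, random_state=0)
motion_analysis_model.fit(big_data_features, big_data_labels)

# Analysing newly extracted skeleton features to obtain a motion analysis result.
new_skeleton_features = rng.normal(size=(5, 8))
motion_analysis_result = motion_analysis_model.predict(new_skeleton_features)
print(motion_analysis_result)
```

In practice the motion analysis result would then drive the exercise plan planning for the motion crowd at the target selected position, which is the fifth instruction above.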
Further, the integrated modules/units of the electronic device, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. The computer-readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
acquiring a human motion video sequence of a target selected position by using a pre-constructed three-dimensional system capturing module, and dividing each frame of motion video sequence in the human motion video sequence into rectangular blocks with preset sizes;
acquiring a preset search step length and a search window, taking the search window as the center of the rectangular block, selecting a plurality of initial search points in the center of the rectangular block according to the search step length, and performing three-step search processing in the rectangular block based on the plurality of initial search points to obtain a plurality of standard search points;
selecting a plurality of compressed video sequences determined by the standard search points from the rectangular blocks with the preset size respectively, and extracting human skeleton data characteristics in the compressed video sequences;
constructing and training a motion analysis model by using pre-acquired big data, and analyzing the human skeleton data characteristics based on the motion analysis model to obtain a motion analysis result;
and performing motion planning on the motion crowd corresponding to the target selected position based on the motion analysis result to obtain a motion planning result.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative; the division of the modules is only one kind of logical functional division, and other divisions may be adopted in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks associated by cryptographic methods, where each data block contains information on a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it will be obvious that the term "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. An exercise plan planning method based on big data analysis, the method comprising:
acquiring a human motion video sequence of a target selected position by using a pre-constructed three-dimensional system capturing module, and dividing each frame of motion video sequence in the human motion video sequence into rectangular blocks with preset sizes;
acquiring a preset search step length and a search window, taking the search window as the center of the rectangular block, selecting a plurality of initial search points in the center of the rectangular block according to the search step length, and performing three-step search processing in the rectangular block based on the plurality of initial search points to obtain a plurality of standard search points;
selecting a plurality of compressed video sequences determined by the standard search points from the rectangular blocks with the preset size respectively, and extracting human skeleton data characteristics in the compressed video sequences;
constructing and training a motion analysis model by using pre-acquired big data, and analyzing the human skeleton data characteristics based on the motion analysis model to obtain a motion analysis result;
and performing motion planning on the motion crowd corresponding to the target selected position based on the motion analysis result to obtain a motion planning result.
2. The exercise plan planning method based on big data analysis of claim 1, wherein the performing three-step search processing in the rectangular block based on the plurality of initial search points to obtain a plurality of standard search points comprises:
respectively calculating initial block matching errors between the initial search points and the search window, and taking the initial search point corresponding to the minimum initial block matching error as a secondary search center;
performing two rounds of searching by using the secondary searching center to obtain a plurality of newly added searching points, calculating corresponding newly added block matching errors based on the newly added searching points, and taking the newly added searching point corresponding to the minimum newly added block matching error as a target searching center;
and converting the preset search step length into a reference search step length according to a preset conversion scale factor, and searching a plurality of reference points as a plurality of standard search points in the target search center according to the reference search step length.
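Claim 2 packs the whole search into three clauses. The following Python sketch shows one way such a three-step block-matching search can be realized, using the sum of absolute differences as the block matching error; the SAD criterion, the nine-point search pattern and the 0.5 conversion scale factor are illustrative assumptions rather than details fixed by the claim.

```python
import numpy as np

def sad(block_a, block_b):
    """Block matching error as the sum of absolute differences (one common choice)."""
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def three_step_search(frame, ref_block, center, step=4, scale=0.5):
    """Sketch of claim 2: initial points -> secondary search center ->
    two further rounds -> reduced step -> standard search points."""
    h, w = ref_block.shape
    fh, fw = frame.shape

    def error_at(pt):
        y, x = pt
        if y < 0 or x < 0 or y + h > fh or x + w > fw:
            return float("inf")
        return sad(frame[y:y + h, x:x + w], ref_block)

    def points_around(pt, s):
        y, x = pt
        return [(y + dy * s, x + dx * s) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

    # Initial search points chosen around the search-window centre; the one with
    # the smallest initial block matching error becomes the secondary search centre.
    secondary_center = min(points_around(center, step), key=error_at)

    # Two further rounds of search around the current best point give the
    # target search centre.
    target_center = secondary_center
    for _ in range(2):
        target_center = min(points_around(target_center, step), key=error_at)

    # Convert the preset step with the conversion scale factor and collect the
    # reference points around the target search centre as standard search points.
    reference_step = max(1, int(step * scale))
    return sorted(set(points_around(target_center, reference_step)), key=error_at)

# Toy usage: a 64x64 frame, a 16x16 reference block, search centred at (24, 24).
frame = np.zeros((64, 64), dtype=np.uint8)
frame[20:36, 28:44] = 255
print(three_step_search(frame, frame[20:36, 28:44], (24, 24)))
```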
3. The exercise plan planning method based on big data analysis of claim 2, wherein the performing two rounds of searching by using the secondary search center to obtain a plurality of newly added search points comprises:
identifying a center category of the secondary search center;
when the center category of the secondary search center is a corner point, selecting search points within a preset shape range expanded from the secondary search center as the plurality of newly added search points;
and when the center category of the secondary search center is a non-corner point, randomly selecting, from the secondary search center, a preset number of grid points as the plurality of newly added search points.
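The branch in claim 3 can be pictured with a small helper. In the sketch below the preset shape range for corner points is a diamond of offsets and the grid for non-corner points is a plain integer lattice around the centre; both are placeholders, since the claim fixes neither.

```python
import numpy as np

def new_search_points(center, is_corner, count=5, rng=None):
    """Sketch of claim 3: corner centres take points from an expanded preset
    shape; non-corner centres take a preset number of random grid points."""
    rng = rng if rng is not None else np.random.default_rng(0)
    y, x = center
    if is_corner:
        # Preset shape range expanded around the secondary search centre
        # (a small diamond here, chosen only for illustration).
        offsets = [(-2, 0), (2, 0), (0, -2), (0, 2), (-1, -1), (1, 1), (-1, 1), (1, -1)]
        return [(y + dy, x + dx) for dy, dx in offsets]
    # Non-corner: random grid points from a lattice around the centre.
    grid = [(y + dy, x + dx) for dy in range(-4, 5) for dx in range(-4, 5)]
    idx = rng.choice(len(grid), size=min(count, len(grid)), replace=False)
    return [grid[i] for i in idx]

print(new_search_points((10, 10), is_corner=True))
print(new_search_points((10, 10), is_corner=False))
```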
4. The exercise plan planning method based on big data analysis according to claim 1, wherein the selecting a plurality of compressed video sequences determined by the standard search points from the rectangular blocks with the preset size respectively comprises:
mapping the standard search points to a preset two-dimensional rectangular coordinate system, and carrying out closed connection on the standard search points in the two-dimensional rectangular coordinate system to obtain a search connection graph;
respectively calculating the coincidence degrees between the search connection graph and the plurality of rectangular blocks, and taking the rectangular block with the highest coincidence degree as a target rectangular block;
and taking the motion video sequence corresponding to the target rectangular block as the compressed video sequence.
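Claim 4 does not define the coincidence degree beyond its use, so the sketch below adopts one plausible reading: the fraction of a rectangular block's area covered by the closed polygon drawn through the standard search points. The sampling grid and the area-fraction criterion are assumptions for illustration.

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test for whether a point lies inside a closed polygon."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def coincidence_degree(search_points, block, samples=20):
    """Fraction of the block (x, y, width, height) covered by the closed
    connection of the standard search points."""
    bx, by, bw, bh = block
    hits = 0
    for i in range(samples):
        for j in range(samples):
            px = bx + (i + 0.5) * bw / samples
            py = by + (j + 0.5) * bh / samples
            hits += point_in_polygon((px, py), search_points)
    return hits / (samples * samples)

def select_target_block(search_points, blocks):
    """Take the rectangular block with the highest coincidence degree."""
    return max(blocks, key=lambda b: coincidence_degree(search_points, b))

# Toy usage: a triangle of search points against two candidate 10x10 blocks.
points = [(2, 2), (9, 2), (5, 9)]
print(select_target_block(points, [(0, 0, 10, 10), (20, 20, 10, 10)]))
```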
5. The exercise plan planning method based on big data analysis according to claim 1, wherein the extracting human skeleton data characteristics in the compressed video sequence comprises:
identifying bone position data in the compressed video sequence by using a bone identification device;
vectorizing the bone position data by using a preset vector conversion formula to obtain bone vector data;
calculating joint angle characteristics corresponding to the bone position data based on the bone vector data and a preset joint angle calculation formula;
calculating joint included angle characteristics corresponding to the bone position data according to the joint angle characteristics and a preset joint included angle calculation formula;
and summarizing the joint angle characteristics and the joint included angle characteristics into human skeleton data characteristics.
6. The exercise plan planning method based on big data analysis according to claim 5, wherein the preset vector conversion formula is given by a set of equations that appear as embedded images in the original publication, in which four symbols denote the bone vector data, four further symbols represent the bone position data, and two symbols are preset fixed parameters.
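Since the vector conversion formula of claim 6 (and the angle features it feeds in claim 5) survives only as embedded images, the snippet below shows a generic way such features are usually obtained: bone vectors as differences of joint positions and joint included angles from the dot product. It is a stand-in under those assumptions, not the patent's exact formula.

```python
import numpy as np

def bone_vectors(joint_positions, bones):
    """Bone vectors as differences between connected joint positions."""
    return {(a, b): np.asarray(joint_positions[b], dtype=float)
                    - np.asarray(joint_positions[a], dtype=float)
            for a, b in bones}

def included_angle(v1, v2):
    """Included angle (degrees) between two bone vectors via the dot product."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Toy usage: the elbow angle from shoulder, elbow and wrist positions.
joints = {"shoulder": (0.0, 0.0), "elbow": (0.3, -0.2), "wrist": (0.6, 0.1)}
vectors = bone_vectors(joints, [("shoulder", "elbow"), ("elbow", "wrist")])
elbow = included_angle(-vectors[("shoulder", "elbow")], vectors[("elbow", "wrist")])
print(round(elbow, 1))
```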
7. The exercise plan planning method based on big data analysis according to claim 1, wherein the dividing each frame of motion video sequence in the human motion video sequence into rectangular blocks of a preset size comprises:
identifying a plurality of motion video frames in the human motion video sequence, and extracting key frames from the plurality of motion video frames to obtain a plurality of motion key frames;
and acquiring a reference human body block of a preset size, and performing image cropping on the plurality of motion key frames in a left-to-right, top-to-bottom order to obtain rectangular blocks of the preset size.
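Claim 7's left-to-right, top-to-bottom cropping amounts to tiling each motion key frame with the reference block size. A minimal sketch follows, in which the block size and the non-overlapping stride are assumptions for illustration.

```python
import numpy as np

def crop_blocks(key_frame, block_h, block_w):
    """Cut a key frame into preset-size rectangular blocks, scanning
    left to right and top to bottom."""
    h, w = key_frame.shape[:2]
    blocks = []
    for top in range(0, h - block_h + 1, block_h):
        for left in range(0, w - block_w + 1, block_w):
            blocks.append(key_frame[top:top + block_h, left:left + block_w])
    return blocks

# Toy usage on a synthetic 64x64 key frame with a 16x16 reference human body block.
print(len(crop_blocks(np.zeros((64, 64), dtype=np.uint8), 16, 16)))  # 16 blocks
```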
8. An exercise plan planning apparatus based on big data analysis, the apparatus comprising:
the search point selection module is used for acquiring a human motion video sequence at a target selected position by using a pre-constructed three-dimensional system capture module, dividing each frame of motion video sequence in the human motion video sequence into rectangular blocks with preset sizes, acquiring preset search step lengths and search windows, taking the search windows as the centers of the rectangular blocks, selecting a plurality of initial search points in the centers of the rectangular blocks by using the search step lengths, and performing three-step search processing in the rectangular blocks based on the plurality of initial search points to obtain a plurality of standard search points;
the data feature extraction module is used for selecting a plurality of compressed video sequences determined by the standard search points from the rectangular blocks with the preset sizes respectively and extracting human skeleton data features in the compressed video sequences;
the motion analysis module is used for constructing and training a motion analysis model by utilizing the pre-acquired big data, and analyzing the human skeleton data characteristics based on the motion analysis model to obtain a motion analysis result;
and the plan planning module is used for carrying out movement plan planning on the movement crowd corresponding to the target selected position based on the movement analysis result to obtain a movement plan planning result.
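The four modules of claim 8 compose naturally as a pipeline of callables. The sketch below is organisational only: the module names follow the claim, while the placeholder implementations in the usage example are invented for illustration.

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence

@dataclass
class ExercisePlanPlanningApparatus:
    """Illustrative arrangement of the claimed modules as four callables."""
    search_point_selection_module: Callable[[Sequence[Any]], Any]
    data_feature_extraction_module: Callable[[Any], Any]
    motion_analysis_module: Callable[[Any], Any]
    plan_planning_module: Callable[[Any], Any]

    def run(self, human_motion_video_sequence: Sequence[Any]) -> Any:
        points = self.search_point_selection_module(human_motion_video_sequence)
        features = self.data_feature_extraction_module(points)
        analysis = self.motion_analysis_module(features)
        return self.plan_planning_module(analysis)

# Placeholder module implementations, purely to show the data flow.
apparatus = ExercisePlanPlanningApparatus(
    search_point_selection_module=lambda frames: ["standard search points"],
    data_feature_extraction_module=lambda points: [0.7, 1.2],
    motion_analysis_module=lambda features: {"intensity": "moderate"},
    plan_planning_module=lambda result: {"weekly_sessions": 3, "basis": result},
)
print(apparatus.run(["frame0", "frame1"]))
```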
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the exercise plan planning method based on big data analysis according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the exercise plan planning method based on big data analysis according to any one of claims 1 to 7.
CN202210510021.2A 2022-05-11 2022-05-11 Exercise plan planning method, device, equipment and medium based on big data analysis Active CN114627559B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210510021.2A CN114627559B (en) 2022-05-11 2022-05-11 Exercise plan planning method, device, equipment and medium based on big data analysis


Publications (2)

Publication Number Publication Date
CN114627559A true CN114627559A (en) 2022-06-14
CN114627559B CN114627559B (en) 2022-08-30

Family

ID=81906285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210510021.2A Active CN114627559B (en) 2022-05-11 2022-05-11 Exercise plan planning method, device, equipment and medium based on big data analysis

Country Status (1)

Country Link
CN (1) CN114627559B (en)


Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6563874B1 (en) * 2000-06-23 2003-05-13 Hitachi America, Ltd. Fast search method for motion estimation
US20120214594A1 (en) * 2011-02-18 2012-08-23 Microsoft Corporation Motion recognition
US20130155229A1 (en) * 2011-11-14 2013-06-20 Massachusetts Institute Of Technology Assisted video surveillance of persons-of-interest
CN105303493A (en) * 2015-10-30 2016-02-03 安徽云硕科技有限公司 Analysis service system for health big data of home-based care for the aged
US20170257641A1 (en) * 2016-03-03 2017-09-07 Uurmi Systems Private Limited Systems and methods for motion estimation for coding a video sequence
US20200322624A1 (en) * 2018-03-07 2020-10-08 Tencent Technology (Shenzhen) Company Limited Video motion estimation method and apparatus, and storage medium
CN110188599A (en) * 2019-04-12 2019-08-30 哈工大机器人义乌人工智能研究院 A kind of human body attitude behavior intellectual analysis recognition methods
WO2021217927A1 (en) * 2020-04-29 2021-11-04 平安国际智慧城市科技股份有限公司 Video-based exercise evaluation method and apparatus, and computer device and storage medium
CN112258555A (en) * 2020-10-15 2021-01-22 佛山科学技术学院 Real-time attitude estimation motion analysis method, system, computer equipment and storage medium
WO2022088290A1 (en) * 2020-10-28 2022-05-05 中国科学院深圳先进技术研究院 Motion assessment method, apparatus and system, and storage medium
CN112464847A (en) * 2020-12-07 2021-03-09 北京邮电大学 Human body action segmentation method and device in video
CN113229832A (en) * 2021-03-24 2021-08-10 清华大学 System and method for acquiring human motion information
CN113255522A (en) * 2021-05-26 2021-08-13 山东大学 Personalized motion attitude estimation and analysis method and system based on time consistency
CN113521683A (en) * 2021-08-27 2021-10-22 吉林师范大学 Intelligent physical ability comprehensive training control system
CN113903082A (en) * 2021-10-14 2022-01-07 黑龙江省科学院智能制造研究所 Human body gait monitoring algorithm based on dynamic time planning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LONGTENG KONG ET AL: "A Joint Framework for Athlete Tracking and Action Recognition in Sports Videos", 《IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY》 *
ZHU HAOTIAN: "Design of a ZigBee-based intelligent monitoring system for empty-nest elderly people", 《电子制作》 *

Also Published As

Publication number Publication date
CN114627559B (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN110414499A (en) Text position localization method and system and model training method and system
CN110222611A (en) Human skeleton Activity recognition method, system, device based on figure convolutional network
CN111932547B (en) Method and device for segmenting target object in image, electronic device and storage medium
CA2913432A1 (en) System and method for identifying, analyzing, and reporting on players in a game from video
CN112446919A (en) Object pose estimation method and device, electronic equipment and computer storage medium
CN113283446B (en) Method and device for identifying object in image, electronic equipment and storage medium
CN113989944B (en) Operation action recognition method, device and storage medium
CN112507934A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN104537705A (en) Augmented reality based mobile platform three-dimensional biomolecule display system and method
CN114782901B (en) Sand table projection method, device, equipment and medium based on visual change analysis
CN114049568A (en) Object shape change detection method, device, equipment and medium based on image comparison
CN111914939A (en) Method, device and equipment for identifying blurred image and computer readable storage medium
CN113160231A (en) Sample generation method, sample generation device and electronic equipment
CN114241338A (en) Building measuring method, device, equipment and storage medium based on image recognition
CN111274937A (en) Fall detection method and device, electronic equipment and computer-readable storage medium
CN116778527A (en) Human body model construction method, device, equipment and storage medium
Yang et al. Automated semantics and topology representation of residential-building space using floor-plan raster maps
CN115457451A (en) Monitoring method and device of constant temperature and humidity test box based on Internet of things
CN114022841A (en) Personnel monitoring and identifying method and device, electronic equipment and readable storage medium
CN104252473A (en) Image recognition method
CN109816721A (en) Image position method, device, equipment and storage medium
CN112990154A (en) Data processing method, computer equipment and readable storage medium
CN114627559B (en) Exercise plan planning method, device, equipment and medium based on big data analysis
CN112651782A (en) Behavior prediction method, device, equipment and medium based on zoom dot product attention
CN114463685A (en) Behavior recognition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant