CN115661935B - Human body action accuracy determining method and device - Google Patents


Publication number
CN115661935B
Authority
CN
China
Prior art keywords
point cloud
cloud data
standard
human
correction data
Prior art date
Legal status
Active
Application number
CN202211350974.3A
Other languages
Chinese (zh)
Other versions
CN115661935A (en)
Inventor
高雪松
陈维强
辛洪录
Current Assignee
Hisense Group Holding Co Ltd
Original Assignee
Hisense Group Holding Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Group Holding Co Ltd
Priority to CN202211350974.3A
Publication of CN115661935A
Application granted
Publication of CN115661935B


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

An embodiment of the application provides a method and a device for determining the accuracy of human body actions, relating to the technical field of artificial intelligence. The method comprises the following steps: first, human body point cloud data corresponding to a preset time point are acquired from a target area, and standard point cloud data corresponding to the same preset time point are acquired from a played standard training video; then, the accuracy of the human body action is determined based on the distance similarity between the standard point cloud data and the human body point cloud data. To avoid the user-privacy leakage caused by collecting image data directly through a camera, as in traditional methods, in the embodiment of the application the human body point cloud data is collected directly through a laser radar sensor of the accuracy determining system, and the accuracy of the human body action is determined based on this point cloud data.

Description

Human body action accuracy determining method and device
Technical Field
The embodiment of the invention relates to the technical field of artificial intelligence, in particular to a method and equipment for determining accuracy of human body actions.
Background
People consciously take part in various fitness exercises to improve their physical fitness. During exercise, a coach is often required to guide and correct the user's movements to enhance the training effect, which incurs significant economic and time costs.
Currently, to ensure the accuracy of the user's actions during body-building exercise and to make exercising more convenient, users generally choose intelligent body-building products to assist their training. Specifically, the user plays a standard training video on the intelligent body-building product and exercises along with it; while playing the standard training video, the intelligent body-building product simultaneously collects the user's training actions and judges, against the played standard training video, whether those actions are up to standard.
However, intelligent body-building products generally collect the user's training actions through a camera, which can easily leak the user's privacy, so the safety of intelligent body-building products is a problem to be solved.
Disclosure of Invention
The embodiment of the application provides a human body action accuracy determining method and device, which are used for accurately determining human body actions and protecting user privacy.
In one aspect, an embodiment of the present application provides a method for determining accuracy of human motion, where the method includes:
acquiring human point cloud data corresponding to a preset time point from a target area, and acquiring standard point cloud data corresponding to the preset time point from a played standard training video;
and determining the accuracy of the human action based on the distance similarity of the standard point cloud data and the human point cloud data.
Optionally, the acquiring human body point cloud data corresponding to a preset time point from the target area includes:
acquiring first environmental point cloud data corresponding to the target area, wherein the first environmental point cloud data is determined based on the target area;
acquiring second environmental point cloud data corresponding to the preset time point from the target area; the second ambient point cloud data is determined based on the target area and a human body;
and performing difference set calculation on the second environmental point cloud data and the first environmental point cloud data to obtain the human point cloud data.
Optionally, the first environmental point cloud data comprises a plurality of first environmental data points, and the second environmental point cloud data comprises a plurality of second environmental data points;
Performing difference set calculation on the second environmental point cloud data and the first environmental point cloud data to obtain the human point cloud data, including:
for any second environmental data point in the second environmental point cloud data, respectively calculating distances between the second environmental data point and a plurality of first environmental data points in the first environmental point cloud data to obtain a plurality of environmental similar distances; if any environmental similarity distance is smaller than a first preset distance threshold, deleting the second environmental data point from the second environmental point cloud data;
and taking the second environment point cloud data after deleting the plurality of second environment data points as the human body point cloud data.
Optionally, the determining the accuracy of the human action based on the distance similarity of the standard point cloud data and the human point cloud data includes:
adopting the mass center of the standard point cloud data to adjust the standard point cloud data to obtain standard correction data;
adopting the mass center of the human body point cloud data to adjust the human body point cloud data to obtain human body correction data;
and determining the accuracy of the human action based on the distance similarity of the standard correction data and the human correction data.
Optionally, the standard correction data comprises a plurality of standard correction data points, and the human correction data comprises a plurality of human correction data points;
the determining the accuracy of the human action based on the distance similarity of the standard correction data and the human correction data includes:
for any standard correction data point in the standard correction data, respectively determining the distances between the standard correction data point and a plurality of human correction data points in the human correction data to obtain a plurality of human similarity distances; if any human body similar distance is smaller than a second preset distance threshold value, marking the standard correction data points;
the accuracy of the human motion is determined based on the number of standard correction data points after marking and the number of standard correction data points in the standard correction data.
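As an illustration only, the marking-and-counting computation described above can be sketched in Python; the function name, the NumPy representation and the brute-force pairwise distances are assumptions of this sketch, not the patented implementation:

```python
import numpy as np

def action_accuracy(standard_pts, human_pts, dist_threshold):
    """Mark each standard correction data point that has at least one human
    correction data point within dist_threshold, then return the ratio of
    marked points to all standard correction data points."""
    standard_pts = np.asarray(standard_pts, dtype=float)
    human_pts = np.asarray(human_pts, dtype=float)
    # Pairwise Euclidean distances, shape (n_standard, n_human)
    dists = np.linalg.norm(standard_pts[:, None, :] - human_pts[None, :, :], axis=2)
    marked = (dists < dist_threshold).any(axis=1)
    return marked.sum() / len(standard_pts)
```

For example, if only half of the standard correction data points have a human correction data point within the threshold, the returned accuracy is 0.5.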
Optionally, before the centroid of the human body point cloud data is adopted to adjust the human body point cloud data to obtain the human body correction data, the method further includes:
determining a standard scale of the standard point cloud data and a scale to be calibrated of the human point cloud data;
determining a scaling factor of the human point cloud data based on the standard scale and the scale to be calibrated;
And scaling the human body point cloud data by adopting the scaling coefficient.
Optionally, the method further comprises:
dividing the standard point cloud data by adopting a division model to obtain a plurality of local standard point cloud data; dividing the human body point cloud data by adopting the division model to obtain a plurality of local human body point cloud data;
and determining the accuracy of the local action according to the distance similarity of any local standard point cloud data and the corresponding local human body point cloud data.
Optionally, the determining the accuracy of the local action according to the distance similarity of any local standard point cloud data and the corresponding local human point cloud data includes:
determining local standard correction data corresponding to the local standard point cloud data based on the standard correction data;
based on the human body correction data, determining local human body correction data corresponding to the local human body point cloud data;
and determining the accuracy of the local action based on the distance similarity of the local standard correction data and the local human correction data.
Optionally, the local standard correction data comprises a plurality of local standard correction data points, the local human correction data comprising a plurality of local human correction data points;
The determining the accuracy of the local motion based on the distance similarity of the local standard correction data and the local human correction data includes:
for any local standard correction data point in the local standard correction data, respectively determining the distances between the local standard correction data point and a plurality of local human correction data points in the local human correction data to obtain a plurality of local human similarity distances; if any local human body similarity distance is smaller than a third preset distance threshold value, marking the local standard correction data points.
Determining the accuracy of the local motion based on the number of marked local standard correction data points and the number of local standard correction data points in the local standard correction data.
In one aspect, an embodiment of the present application provides a human action accuracy determining apparatus, including:
the acquisition module is used for acquiring human point cloud data corresponding to a preset time point from a target area and acquiring standard point cloud data corresponding to the preset time point from a played standard training video;
and the accuracy determining module is used for determining the accuracy of the human body action based on the distance similarity between the standard point cloud data and the human body point cloud data.
Optionally, the acquiring module is specifically configured to:
acquiring first environmental point cloud data corresponding to the target area, wherein the first environmental point cloud data is determined based on the target area;
acquiring second environmental point cloud data corresponding to the preset time point from the target area; the second ambient point cloud data is determined based on the target area and a human body;
and performing difference set calculation on the second environmental point cloud data and the first environmental point cloud data to obtain the human point cloud data.
Optionally, the first environmental point cloud data comprises a plurality of first environmental data points, and the second environmental point cloud data comprises a plurality of second environmental data points;
the acquisition module is specifically configured to:
for any second environmental data point in the second environmental point cloud data, respectively calculating distances between the second environmental data point and a plurality of first environmental data points in the first environmental point cloud data to obtain a plurality of environmental similar distances; if any environmental similarity distance is smaller than a first preset distance threshold, deleting the second environmental data point from the second environmental point cloud data;
and taking the second environment point cloud data after deleting the plurality of second environment data points as the human body point cloud data.
Optionally, the accuracy determining module is specifically configured to:
adopting the mass center of the standard point cloud data to adjust the standard point cloud data to obtain standard correction data;
adopting the mass center of the human body point cloud data to adjust the human body point cloud data to obtain human body correction data;
and determining the accuracy of the human action based on the distance similarity of the standard correction data and the human correction data.
Optionally, the standard correction data comprises a plurality of standard correction data points, and the human correction data comprises a plurality of human correction data points;
the accuracy determining module is specifically configured to:
for any standard correction data point in the standard correction data, respectively determining the distances between the standard correction data point and a plurality of human correction data points in the human correction data to obtain a plurality of human similarity distances; if any human body similar distance is smaller than a second preset distance threshold value, marking the standard correction data points;
the accuracy of the human motion is determined based on the number of standard correction data points after marking and the number of standard correction data points in the standard correction data.
Optionally, the device further comprises a scaling module, wherein the scaling module is specifically used for:
before the centroid of the human body point cloud data is adopted to adjust the human body point cloud data to obtain the human body correction data, determining the standard scale of the standard point cloud data and the scale to be calibrated of the human body point cloud data;
determining a scaling factor of the human point cloud data based on the standard scale and the scale to be calibrated;
and scaling the human body point cloud data by adopting the scaling coefficient.
Optionally, the accuracy determining module is further configured to:
dividing the standard point cloud data by adopting a division model to obtain a plurality of local standard point cloud data; dividing the human body point cloud data by adopting the division model to obtain a plurality of local human body point cloud data;
and determining the accuracy of the local action according to the distance similarity of any local standard point cloud data and the corresponding local human body point cloud data.
Optionally, the accuracy determining module is specifically configured to:
determining local standard correction data corresponding to the local standard point cloud data based on the standard correction data;
based on the human body correction data, determining local human body correction data corresponding to the local human body point cloud data;
And determining the accuracy of the local action based on the distance similarity of the local standard correction data and the local human correction data.
Optionally, the local standard correction data comprises a plurality of local standard correction data points, the local human correction data comprising a plurality of local human correction data points;
the accuracy determining module is specifically configured to:
for any local standard correction data point in the local standard correction data, respectively determining the distances between the local standard correction data point and a plurality of local human correction data points in the local human correction data to obtain a plurality of local human similarity distances; if any local human body similarity distance is smaller than a third preset distance threshold value, marking the local standard correction data points.
Determining the accuracy of the local motion based on the number of marked local standard correction data points and the number of local standard correction data points in the local standard correction data.
In one aspect, embodiments of the present application provide a computer device including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the above-described human action accuracy determination method when the program is executed.
In one aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program executable by a computer device, which when run on the computer device, causes the computer device to perform the steps of the human action accuracy determination method described above.
In the embodiment of the application, human body point cloud data corresponding to a preset time point are acquired from a target area, standard point cloud data corresponding to the same time point are acquired from a played standard training video, and the accuracy of the human body action is then determined based on the distance similarity between the standard point cloud data and the human body point cloud data. To avoid the user-privacy leakage caused by collecting image data directly through a camera, as in traditional methods, the human body point cloud data is collected directly through the laser radar sensor of the accuracy determining system, and the accuracy of the human body action is determined based on this point cloud data.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it will be apparent that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present application;
fig. 2 is a flow chart of a method for determining accuracy of human motion according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a product of an installation accuracy determining system according to an embodiment of the present application;
fig. 4 is a flowchart of a method for obtaining human point cloud data according to an embodiment of the present application;
fig. 5 is a flow chart of a human point cloud data scaling method according to an embodiment of the present application;
fig. 6 is a flowchart of a method for determining accuracy of human motion according to an embodiment of the present application;
fig. 7 is a flow chart of a method for determining accuracy of local actions according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a product of an installation accuracy determination system according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a human motion accuracy determining device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantageous effects of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, a human motion accuracy determining system architecture diagram applicable to the embodiments of the present application includes at least a terminal device 101 and an accuracy determining system 102.
The accuracy determining system 102 is a background server serving the target application. The accuracy determining system 102 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data and artificial intelligence platforms. In the embodiment of the present application, the accuracy determining system 102 is generally installed in a product such as a mobile phone, a television or a body-building mirror.
The terminal device 101 and the accuracy determining system 102 may be directly or indirectly connected through wired or wireless communication, which is not limited herein.
The terminal device 101 transmits an accuracy determination instruction to the accuracy determination system 102 in response to a determination of the accuracy operation of the human action by the user. The accuracy determining system 102 receives the accuracy determining instruction, acquires human body point cloud data corresponding to a preset time point from the target area, acquires standard point cloud data corresponding to the preset time point from the played standard training video, and determines the accuracy of human body actions based on the distance similarity between the standard point cloud data and the human body point cloud data.
Based on the system architecture diagram shown in fig. 1, the embodiment of the present application provides a flow of a method for determining accuracy of human motion, as shown in fig. 2, where the flow of the method is performed by the accuracy determining system 102 shown in fig. 1, and includes the following steps:
step S201, acquiring human point cloud data corresponding to a preset time point from a target area, and acquiring standard point cloud data corresponding to the preset time point from a played standard training video.
Specifically, after a product provided with the accuracy determining system is placed, the area that the laser radar sensor in the accuracy determining system can capture is the target area. The product provided with the accuracy determining system can be placed in a living room, a bedroom and the like, and the target area is then the living room area, the bedroom area, etc.
Because the accuracy determining system needs a certain calculation time for collecting the human body point cloud data and determining the accuracy of human body actions, the accuracy determining system can collect the human body point cloud data at a preset time point. For example, the accuracy determination system collects human point cloud data corresponding to the current point in time every 2 seconds.
Fig. 3 shows a product provided with the accuracy determining system while it plays a standard training video. When the user performs body-building exercise along with the played standard training video, the accuracy determining system acquires the human body point cloud data corresponding to a preset time point from the target area through the laser radar sensor. The human body point cloud data are the point cloud data corresponding to the user's body. The data points in the human body point cloud data are all points in a three-dimensional space coordinate system consisting of an X axis, a Y axis and a Z axis.
Meanwhile, the accuracy determining system acquires standard point cloud data corresponding to a preset time point from the played standard training video. The standard point cloud data are point cloud data corresponding to the body of the body-building coach in the standard training video. The plurality of data points in the standard point cloud data are all data points in a three-dimensional space coordinate system.
Step S202, determining the accuracy of human body actions based on the distance similarity between the standard point cloud data and the human body point cloud data.
Optionally, the standard point cloud data may be further segmented by using a segmentation model to obtain a plurality of local standard point cloud data, and the human point cloud data may be segmented by using a segmentation model to obtain a plurality of local human point cloud data. And determining the accuracy of the local action according to the distance similarity of any local standard point cloud data and the corresponding local human body point cloud data. The segmentation model is trained in advance and stored in an accuracy determining system, and can be a deep neural network model such as point_net.
The segmentation model can divide the standard point cloud data into head standard point cloud data, trunk point cloud data and limb point cloud data. The limb point cloud data include left upper-arm point cloud data, right upper-arm point cloud data, left forearm point cloud data, right forearm point cloud data, left thigh point cloud data, right thigh point cloud data, left calf point cloud data and right calf point cloud data.
The result of dividing the human point cloud data by the division model is similar to the division result of the standard point cloud data, and will not be described here again.
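The patent uses a pre-trained deep segmentation model such as point_net for this step. As a loose, hypothetical stand-in for illustration only, the sketch below partitions a cloud into head, trunk and leg groups by normalized height bands; the band boundaries and function name are invented for the sketch and do not represent the trained model:

```python
import numpy as np

def segment_by_height(points, bands=((0.82, "head"), (0.45, "trunk"), (0.0, "legs"))):
    """Label each point by the band its normalized height (z) falls into.
    bands must be sorted from highest to lowest lower bound."""
    points = np.asarray(points, dtype=float)
    z = points[:, 2]
    z_norm = (z - z.min()) / (z.max() - z.min())  # 0 at the soles, 1 at the head top
    parts = {name: [] for _, name in bands}
    for point, height in zip(points, z_norm):
        for lower_bound, name in bands:
            if height >= lower_bound:
                parts[name].append(point)
                break
    return {name: np.array(pts) for name, pts in parts.items()}
```

A learned model would of course also separate left from right limbs, which a pure height heuristic cannot do.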
In the embodiment of the application, human body point cloud data corresponding to a preset time point are acquired from a target area, standard point cloud data corresponding to the same time point are acquired from a played standard training video, and the accuracy of the human body action is then determined based on the distance similarity between the standard point cloud data and the human body point cloud data. To avoid the user-privacy leakage caused by collecting image data directly through a camera, as in traditional methods, the human body point cloud data is collected directly through the laser radar sensor of the accuracy determining system, and the accuracy of the human body action is determined based on this point cloud data.
Optionally, in the step S201, the human body point cloud data corresponding to the preset time point is obtained from the target area, which specifically includes the following steps as shown in fig. 4:
step S401, acquiring first environmental point cloud data corresponding to a target area.
Wherein the first ambient point cloud data is determined based on the target area. The first environmental point cloud data only comprises point cloud data corresponding to the target area, and does not comprise point cloud data corresponding to the body of the user.
The first environmental point cloud data need only be determined once; that is, before the standard training video is played, the accuracy determining system collects the first environmental point cloud data corresponding to the target area through the laser radar sensor.
Step S402, second environmental point cloud data corresponding to a preset time point is obtained from the target area.
Wherein the second ambient point cloud data is determined based on the target area and the human body. The second environmental point cloud data comprises point cloud data corresponding to the target area and point cloud data corresponding to the user body.
Step S403, performing difference set calculation on the second environmental point cloud data and the first environmental point cloud data to obtain human body point cloud data.
Wherein the first ambient point cloud data comprises a plurality of first ambient data points and the second ambient point cloud data comprises a plurality of second ambient data points.
Specifically, for any second environmental data point in the second environmental point cloud data, respectively calculating distances between the second environmental data point and a plurality of first environmental data points in the first environmental point cloud data to obtain a plurality of environmental similar distances; and if any environmental similarity distance is smaller than the first preset distance threshold, deleting the second environmental data point from the second environmental point cloud data.
Finally, the second environmental point cloud data remaining after the plurality of second environmental data points have been deleted is taken as the human body point cloud data.
In the embodiment of the application, the human body point cloud data is obtained by calculating the difference set of the second environment point cloud data and the first environment point cloud data, so that the accuracy of the obtained human body point cloud data is improved.
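The difference-set calculation of steps S401 to S403 can be sketched as follows. This is a minimal illustration with assumed names and a brute-force distance computation, not the patented implementation (a real system would likely use a spatial index such as a k-d tree):

```python
import numpy as np

def subtract_background(second_env_pts, first_env_pts, dist_threshold):
    """Delete every second environmental data point that lies within
    dist_threshold of some first environmental data point; the points that
    remain are taken as the human body point cloud data."""
    second_env_pts = np.asarray(second_env_pts, dtype=float)
    first_env_pts = np.asarray(first_env_pts, dtype=float)
    # Pairwise Euclidean distances, shape (n_second, n_first)
    dists = np.linalg.norm(
        second_env_pts[:, None, :] - first_env_pts[None, :, :], axis=2)
    keep = ~(dists < dist_threshold).any(axis=1)
    return second_env_pts[keep]
```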
Optionally, before determining the accuracy of the human action in step S202 based on the distance similarity between the standard point cloud data and the human point cloud data, the method further includes the following steps as shown in fig. 5:
step S501, determining standard dimensions of standard point cloud data and dimensions to be calibrated of human point cloud data.
Specifically, the maximum value and the minimum value of the standard point cloud data in the Z-axis direction of the three-dimensional space coordinate system are determined. In the standard point cloud data, the maximum value in the Z-axis direction is determined from the point cloud data corresponding to the top of the head, and the minimum value in the Z-axis direction from the point cloud data corresponding to the soles of the feet; the difference between the two is taken as the standard scale S_std of the standard point cloud data.
Likewise, the scale to be calibrated S_user of the human body point cloud data is determined in the same way as the standard scale S_std, and is not described again here.
Step S502, based on the standard scale and the scale to be calibrated, determining the scaling factor of the human point cloud data.
Specifically, the ratio of the standard scale to the scale to be calibrated is used as the scaling factor of the human body point cloud data: r = S_std / S_user. The scaling factor r may be greater than 1 or less than 1, and in rare cases equals 1.
In step S503, scaling is performed on the human point cloud data by using the scaling factor.
Specifically, for coordinate values of each data point in the human body point cloud data in a three-dimensional space coordinate system, a scaling coefficient is multiplied to obtain coordinate values of the scaled data points, so that scaling of the human body point cloud data is realized.
If the coordinate value of any data point in the human body point cloud data is (x_user, y_user, z_user), the coordinate value of the scaled data point is (x_user·r, y_user·r, z_user·r), where r = S_std / S_user.
In the embodiment of the application, because the standard point cloud data and the human point cloud data are different in scale, the human point cloud data are required to be scaled according to the standard scale of the standard point cloud data and the scale to be calibrated of the human point cloud data, so that the standard point cloud data and the human point cloud data are ensured to be identical in scale, and the accuracy of human actions calculated subsequently is ensured.
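The scale calibration of steps S501–S503 can be sketched as follows — a minimal Python example assuming the scale is the Z-axis extent (head-top maximum minus sole minimum), as described above; the function name is illustrative:

```python
def scale_to_standard(user_points, std_points):
    """Scale the user's cloud so its Z extent matches the standard cloud's,
    per r = S_std / S_user; returns the scaled cloud and the factor r."""
    zs = lambda pts: [p[2] for p in pts]
    s_std = max(zs(std_points)) - min(zs(std_points))    # standard scale
    s_user = max(zs(user_points)) - min(zs(user_points)) # scale to be calibrated
    r = s_std / s_user
    scaled = [(x * r, y * r, z * r) for (x, y, z) in user_points]
    return scaled, r

user = [(1.0, 1.0, 0.0), (1.0, 1.0, 1.0)]  # user cloud, height 1
std = [(0.0, 0.0, 0.0), (0.0, 0.0, 2.0)]   # standard cloud, height 2
scaled, r = scale_to_standard(user, std)    # r = 2.0
```

Note that all three coordinates are multiplied by the same factor r, so body proportions are preserved.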
Optionally, in the above step S202, the accuracy of the human action is determined based on the distance similarity between the standard point cloud data and the human point cloud data, specifically including the following steps as shown in fig. 6:
Step S601, the centroid of the standard point cloud data is adopted to adjust the standard point cloud data, and standard correction data is obtained.
Specifically, for the coordinate values (x_std, y_std, z_std) of each data point in the standard point cloud data: for the X-axis direction in the three-dimensional space coordinate system, the X values of all data points in the standard point cloud data are averaged to obtain the average value x̄_std; for the Y-axis direction, the Y values of all data points are averaged to obtain the average value ȳ_std; for the Z-axis direction, the Z values of all data points are averaged to obtain the average value z̄_std. The point (x̄_std, ȳ_std, z̄_std) formed by the three average values is taken as the centroid of the standard point cloud data.
For any data point (x_std, y_std, z_std) in the standard point cloud data, the centroid (x̄_std, ȳ_std, z̄_std) is subtracted from its coordinate values to obtain a standard correction data point (x_std − x̄_std, y_std − ȳ_std, z_std − z̄_std).
Finally, the obtained plurality of standard correction data points are used as standard correction data.
In the embodiment of the application, the centroid of the standard point cloud data is adopted to adjust the standard point cloud data, and the obtained standard correction data takes the centroid of the standard point cloud data as the origin of coordinates.
Step S602, the centroid of the human body point cloud data is adopted to adjust the human body point cloud data, and human body correction data is obtained.
Specifically, for the coordinate values (x_user, y_user, z_user) of each data point in the human body point cloud data: for the X-axis direction in the three-dimensional space coordinate system, the X values of all data points in the human body point cloud data are averaged to obtain the average value x̄_user; for the Y-axis direction, the Y values of all data points are averaged to obtain the average value ȳ_user; for the Z-axis direction, the Z values of all data points are averaged to obtain the average value z̄_user. The point (x̄_user, ȳ_user, z̄_user) formed by the three average values is taken as the centroid of the human body point cloud data.
For any data point (x_user, y_user, z_user) in the human body point cloud data, the centroid (x̄_user, ȳ_user, z̄_user) is subtracted from its coordinate values to obtain a human body correction data point (x_user − x̄_user, y_user − ȳ_user, z_user − z̄_user).
Finally, the obtained plurality of human body correction data points are used as human body correction data.
In the embodiment of the application, the centroid of the human body point cloud data is adopted to adjust the human body point cloud data, and the obtained human body correction data takes the centroid of the human body point cloud data as the origin of coordinates.
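Steps S601 and S602 apply the same centroid-centering operation to the two clouds; it can be sketched with a single helper in Python (illustrative name, points as (x, y, z) tuples):

```python
def center_on_centroid(points):
    """Translate a point cloud so its centroid becomes the coordinate
    origin: subtract the per-axis mean from every data point."""
    n = len(points)
    cx = sum(p[0] for p in points) / n  # average of X values
    cy = sum(p[1] for p in points) / n  # average of Y values
    cz = sum(p[2] for p in points) / n  # average of Z values
    return [(x - cx, y - cy, z - cz) for (x, y, z) in points]

std_corrected = center_on_centroid([(0.0, 0.0, 0.0), (2.0, 2.0, 2.0)])
# centroid (1, 1, 1) is now the origin
```

Applying the same helper to the standard cloud and to the human cloud puts both in a common, centroid-origin coordinate system, which is what makes the subsequent distance comparison meaningful.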
Step S603, determining accuracy of human motion based on the distance similarity of the standard correction data and the human correction data.
The standard correction data includes a plurality of standard correction data points and the body correction data includes a plurality of body correction data points.
For any standard correction data point in the standard correction data, respectively determining distances between the standard correction data point and a plurality of human correction data points in the human correction data to obtain a plurality of human similarity distances; if any human body similar distance is smaller than the second preset distance threshold value, marking the standard correction data points.
Finally, determining the accuracy of the human body action based on the number of standard correction data points after marking and the number of standard correction data points in the standard correction data.
In general, if the number of marked standard correction data points is p and the number of standard correction data points in the standard correction data is q, the ratio p/q is used as the accuracy of the human body motion, and this accuracy is displayed.
In the embodiment of the application, the centroid of the standard point cloud data is adopted to adjust the standard point cloud data, and meanwhile, the centroid of the human body point cloud data is adopted to adjust the human body point cloud data, so that the standard point cloud data and the human body point cloud data are located in the same three-dimensional space coordinate system, and the accuracy of human body action determined based on the distance similarity of the standard correction data and the human body correction data is further improved.
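The marked-point ratio p/q described above can be sketched in Python as follows (Euclidean distance assumed; the function name and threshold value are illustrative):

```python
import math

def action_accuracy(std_corrected, user_corrected, dist_threshold):
    """Mark each standard correction data point that has at least one
    human correction data point within `dist_threshold`; return the
    ratio p/q of marked points to all standard points."""
    marked = sum(
        1 for s in std_corrected
        if any(math.dist(s, u) < dist_threshold for u in user_corrected)
    )
    return marked / len(std_corrected)

std_pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
user_pts = [(0.0, 0.0, 0.05)]
acc = action_accuracy(std_pts, user_pts, 0.1)  # one of two points marked
```

Note the asymmetry: the ratio is taken over the standard points, so it measures how much of the standard pose the user's cloud covers.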
Optionally, determining the accuracy of the local action based on the distance similarity between any local standard point cloud data and the corresponding local human point cloud data specifically includes the following steps, as shown in fig. 7:
step S701, determining local standard correction data corresponding to the local standard point cloud data based on the standard correction data.
Step S702, determining local human body correction data corresponding to the local human body point cloud data based on the human body correction data.
In step S703, the accuracy of the local motion is determined based on the distance similarity between the local standard correction data and the local human correction data.
Specifically, the local standard correction data includes a plurality of local standard correction data points, and the local human correction data includes a plurality of local human correction data points.
For any local standard correction data point in the local standard correction data, respectively determining the distances between the local standard correction data point and a plurality of local human correction data points in the local human correction data to obtain a plurality of local human similarity distances; if any local human body similarity distance is smaller than a third preset distance threshold value, marking the local standard correction data points.
Finally, the accuracy of the local motion is determined based on the number of marked local standard correction data points and the number of local standard correction data points in the local standard correction data.
In general, if the number of marked local standard correction data points is m and the number of local standard correction data points in the local standard correction data is n, the ratio m/n is used as the accuracy of the local action.
If the accuracy of the local action is smaller than a preset similarity threshold, this indicates that the user's local action deviates significantly from the corresponding action in the standard training video, and correction prompt information for the human body part corresponding to the local action is output.
For example, suppose the accuracy of the human motion determined by the above method is 80%, the accuracy of the local motion corresponding to the lower leg is 20%, and the preset similarity threshold is 50%. Since 20% is smaller than the preset similarity threshold, a correction prompt for the lower leg is output, as shown in fig. 8.
In the embodiment of the application, the distance similarity between the local standard correction data and the local human body correction data is determined according to the local standard correction data corresponding to any local standard point cloud data and the local human body correction data corresponding to the local human body point cloud data, and the accuracy of local actions is determined based on the distance similarity, so that the accuracy of each local action of a human body can be determined more accurately, corresponding correction prompt information is output according to the accuracy of each local action, and a user can conveniently adjust actions at any time.
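Outputting correction prompts from per-part accuracies, as in the lower-leg example above, can be sketched as follows; the part names and dictionary layout are illustrative assumptions, not taken from the patent:

```python
def correction_prompts(local_accuracies, similarity_threshold):
    """Given each body part's local accuracy (e.g. the m/n ratio computed
    per segment of the division model), return the parts whose accuracy
    falls below the preset similarity threshold and so need a prompt."""
    return [part for part, acc in local_accuracies.items()
            if acc < similarity_threshold]

# Lower-leg accuracy 20% is below the 50% threshold, so it is prompted.
parts = correction_prompts({"lower leg": 0.2, "arm": 0.8}, 0.5)
```

In practice each key would correspond to one segment produced by the division model, so prompts map directly to body parts.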
Based on the same technical concept, the embodiment of the present application provides a human motion accuracy determining apparatus, as shown in fig. 9, the human motion accuracy determining apparatus 900 includes:
the acquisition module 901 is configured to acquire human point cloud data corresponding to a preset time point from a target area, and acquire standard point cloud data corresponding to the preset time point from a played standard training video;
an accuracy determining module 902, configured to determine accuracy of the human action based on a distance similarity between the standard point cloud data and the human point cloud data.
Optionally, the acquiring module 901 is specifically configured to:
acquiring first environmental point cloud data corresponding to the target area, wherein the first environmental point cloud data is determined based on the target area;
acquiring second environmental point cloud data corresponding to the preset time point from the target area; the second ambient point cloud data is determined based on the target area and a human body;
and performing difference set calculation on the second environmental point cloud data and the first environmental point cloud data to obtain the human point cloud data.
Optionally, the first environmental point cloud data comprises a plurality of first environmental data points, and the second environmental point cloud data comprises a plurality of second environmental data points;
The obtaining module 901 is specifically configured to:
for any second environmental data point in the second environmental point cloud data, respectively calculating distances between the second environmental data point and a plurality of first environmental data points in the first environmental point cloud data to obtain a plurality of environmental similar distances; if any environmental similarity distance is smaller than a first preset distance threshold, deleting the second environmental data point from the second environmental point cloud data;
and taking the second environment point cloud data after deleting the plurality of second environment data points as the human body point cloud data.
Optionally, the accuracy determining module 902 is specifically configured to:
adopting the mass center of the standard point cloud data to adjust the standard point cloud data to obtain standard correction data;
adopting the mass center of the human body point cloud data to adjust the human body point cloud data to obtain human body correction data;
and determining the accuracy of the human action based on the distance similarity of the standard correction data and the human correction data.
Optionally, the standard correction data comprises a plurality of standard correction data points, and the human correction data comprises a plurality of human correction data points;
The accuracy determining module 902 is specifically configured to:
for any standard correction data point in the standard correction data, respectively determining the distances between the standard correction data point and a plurality of human correction data points in the human correction data to obtain a plurality of human similarity distances; if any human body similar distance is smaller than a second preset distance threshold value, marking the standard correction data points;
the accuracy of the human motion is determined based on the number of standard correction data points after marking and the number of standard correction data points in the standard correction data.
Optionally, a scaling module 903 is further included, where the scaling module 903 is specifically configured to:
the centroid of the human body point cloud data is adopted, the human body point cloud data is adjusted, and before human body correction data are obtained, the standard scale of the standard point cloud data and the scale to be calibrated of the human body point cloud data are determined;
determining a scaling factor of the human point cloud data based on the standard scale and the scale to be calibrated;
and scaling the human body point cloud data by adopting the scaling coefficient.
Optionally, the accuracy determining module 902 is further configured to:
Dividing the standard point cloud data by adopting a division model to obtain a plurality of local standard point cloud data; dividing the human body point cloud data by adopting the division model to obtain a plurality of local human body point cloud data;
and determining the accuracy of the local action according to the distance similarity of any local standard point cloud data and the corresponding local human body point cloud data.
Optionally, the accuracy determining module 902 is specifically configured to:
determining local standard correction data corresponding to the local standard point cloud data based on the standard correction data;
based on the human body correction data, determining local human body correction data corresponding to the local human body point cloud data;
and determining the accuracy of the local action based on the distance similarity of the local standard correction data and the local human correction data.
Optionally, the local standard correction data comprises a plurality of local standard correction data points, the local human correction data comprising a plurality of local human correction data points;
the accuracy determining module 902 is specifically configured to:
for any local standard correction data point in the local standard correction data, respectively determining the distances between the local standard correction data point and a plurality of local human correction data points in the local human correction data to obtain a plurality of local human similarity distances; if any local human body similarity distance is smaller than a third preset distance threshold value, marking the local standard correction data points.
Determining the accuracy of the local motion based on the number of marked local standard correction data points and the number of local standard correction data points in the local standard correction data.
Based on the same technical concept, the embodiments of the present application provide a computer device, which may be a terminal or a server, as shown in fig. 10, including at least one processor 1001, and a memory 1002 connected to the at least one processor, where a specific connection medium between the processor 1001 and the memory 1002 is not limited in the embodiments of the present application, and in fig. 10, the connection between the processor 1001 and the memory 1002 is exemplified by a bus. The buses may be divided into address buses, data buses, control buses, etc.
In the embodiment of the present application, the memory 1002 stores instructions executable by the at least one processor 1001, and the at least one processor 1001 may perform the steps included in the above-described human action accuracy determining method by executing the instructions stored in the memory 1002.
The processor 1001 is the control center of the computer device, and may use various interfaces and lines to connect the various parts of the computer device; by running or executing the instructions stored in the memory 1002 and invoking the data stored in the memory 1002, it determines the accuracy of the human motion. Optionally, the processor 1001 may include one or more processing units, and the processor 1001 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 1001. In some embodiments, the processor 1001 and the memory 1002 may be implemented on the same chip, and in some embodiments they may be implemented separately on their own chips.
The processor 1001 may be a general purpose processor such as a Central Processing Unit (CPU), digital signal processor, application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution.
The memory 1002 is a non-volatile computer-readable storage medium that can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 1002 may include at least one type of storage medium, for example, flash memory, hard disk, multimedia card, card memory, random access memory (Random Access Memory, RAM), static random access memory (Static Random Access Memory, SRAM), programmable read-only memory (Programmable Read-Only Memory, PROM), read-only memory (Read-Only Memory, ROM), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), magnetic memory, magnetic disk, optical disk, and the like. The memory 1002 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 1002 in the embodiments of the present application may also be circuitry or any other device capable of implementing a storage function for storing program instructions and/or data.
Based on the same inventive concept, the embodiments of the present application provide a computer-readable storage medium storing a computer program executable by a computer device, which when run on the computer device, causes the computer device to perform the steps of the above-described human action accuracy determination method.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (7)

1. A method for determining accuracy of human motion, comprising:
acquiring human point cloud data corresponding to a preset time point from a target area, and acquiring standard point cloud data corresponding to the preset time point from a played standard training video;
determining the accuracy of the human action based on the distance similarity of the standard point cloud data and the human point cloud data;
dividing the standard point cloud data by adopting a division model to obtain a plurality of local standard point cloud data; dividing the human body point cloud data by adopting the division model to obtain a plurality of local human body point cloud data;
the following steps are executed for any local standard point cloud data and corresponding local human body point cloud data:
determining local standard correction data corresponding to the local standard point cloud data based on the standard correction data; the local standard correction data includes a plurality of local standard correction data points; the standard correction data are obtained by adopting the mass center of the standard point cloud data and adjusting the standard point cloud data;
based on the human body correction data, determining local human body correction data corresponding to the local human body point cloud data; the local human correction data includes a plurality of local human correction data points; the human body correction data are obtained by adjusting the human body point cloud data by adopting the mass center of the human body point cloud data;
For any local standard correction data point in the local standard correction data, respectively determining the distances between the local standard correction data point and a plurality of local human correction data points in the local human correction data to obtain a plurality of local human similarity distances; if any local human body similarity distance is smaller than a third preset distance threshold value, marking the local standard correction data points;
the accuracy of the local motion is determined based on the number of marked local standard correction data points and the number of local standard correction data points in the local standard correction data.
2. The method of claim 1, wherein the acquiring human point cloud data corresponding to a preset time point from the target area comprises:
acquiring first environmental point cloud data corresponding to the target area, wherein the first environmental point cloud data is determined based on the target area;
acquiring second environmental point cloud data corresponding to the preset time point from the target area; the second ambient point cloud data is determined based on the target area and a human body;
and performing difference set calculation on the second environmental point cloud data and the first environmental point cloud data to obtain the human point cloud data.
3. The method of claim 2, wherein the first environmental point cloud data comprises a plurality of first environmental data points and the second environmental point cloud data comprises a plurality of second environmental data points;
performing difference set calculation on the second environmental point cloud data and the first environmental point cloud data to obtain the human point cloud data, including:
for any second environmental data point in the second environmental point cloud data, respectively calculating distances between the second environmental data point and a plurality of first environmental data points in the first environmental point cloud data to obtain a plurality of environmental similar distances; if any environmental similarity distance is smaller than a first preset distance threshold, deleting the second environmental data point from the second environmental point cloud data;
and taking the second environment point cloud data after deleting the plurality of second environment data points as the human body point cloud data.
4. The method of claim 1, wherein the determining the accuracy of the human action based on the distance similarity of the standard point cloud data and the human point cloud data comprises:
adopting the mass center of the standard point cloud data to adjust the standard point cloud data to obtain standard correction data;
Adopting the mass center of the human body point cloud data to adjust the human body point cloud data to obtain human body correction data;
and determining the accuracy of the human action based on the distance similarity of the standard correction data and the human correction data.
5. The method of claim 4, wherein the standard correction data comprises a plurality of standard correction data points and the body correction data comprises a plurality of body correction data points;
the determining the accuracy of the human action based on the distance similarity of the standard correction data and the human correction data includes:
for any standard correction data point in the standard correction data, respectively determining the distances between the standard correction data point and a plurality of human correction data points in the human correction data to obtain a plurality of human similarity distances; if any human body similar distance is smaller than a second preset distance threshold value, marking the standard correction data points;
the accuracy of the human motion is determined based on the number of standard correction data points after marking and the number of standard correction data points in the standard correction data.
6. The method of claim 4, wherein the adjusting the human point cloud data using the centroid of the human point cloud data, prior to obtaining human correction data, further comprises:
Determining a standard scale of the standard point cloud data and a scale to be calibrated of the human point cloud data;
determining a scaling factor of the human point cloud data based on the standard scale and the scale to be calibrated;
and scaling the human body point cloud data by adopting the scaling coefficient.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any of claims 1-6 when the program is executed.
CN202211350974.3A 2022-10-31 2022-10-31 Human body action accuracy determining method and device Active CN115661935B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211350974.3A CN115661935B (en) 2022-10-31 2022-10-31 Human body action accuracy determining method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211350974.3A CN115661935B (en) 2022-10-31 2022-10-31 Human body action accuracy determining method and device

Publications (2)

Publication Number Publication Date
CN115661935A CN115661935A (en) 2023-01-31
CN115661935B true CN115661935B (en) 2023-07-11

Family

ID=84995410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211350974.3A Active CN115661935B (en) 2022-10-31 2022-10-31 Human body action accuracy determining method and device

Country Status (1)

Country Link
CN (1) CN115661935B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10152771B1 (en) * 2017-07-31 2018-12-11 SZ DJI Technology Co., Ltd. Correction of motion-based inaccuracy in point clouds
US11367204B1 (en) * 2021-12-16 2022-06-21 Ecotron LLC Multi-sensor spatial data auto-synchronization system and method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110231605B (en) * 2019-05-09 2021-10-29 深圳市速腾聚创科技有限公司 Human behavior recognition method and device, computer equipment and storage medium
CN111160088A (en) * 2019-11-22 2020-05-15 深圳壹账通智能科技有限公司 VR (virtual reality) somatosensory data detection method and device, computer equipment and storage medium
CN111368635B (en) * 2020-02-05 2021-05-25 北京邮电大学 Millimeter wave-based multi-person gait recognition method and device
CN114942434B (en) * 2022-04-25 2024-02-02 四川八维九章科技有限公司 Fall gesture recognition method and system based on millimeter wave radar point cloud
CN114694263B (en) * 2022-05-30 2022-09-02 深圳智华科技发展有限公司 Action recognition method, device, equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10152771B1 (en) * 2017-07-31 2018-12-11 SZ DJI Technology Co., Ltd. Correction of motion-based inaccuracy in point clouds
US11367204B1 (en) * 2021-12-16 2022-06-21 Ecotron LLC Multi-sensor spatial data auto-synchronization system and method

Also Published As

Publication number Publication date
CN115661935A (en) 2023-01-31

Similar Documents

Publication Publication Date Title
CN112464918B (en) Body-building action correcting method and device, computer equipment and storage medium
CN110827383B (en) Attitude simulation method and device of three-dimensional model, storage medium and electronic equipment
US20200089958A1 (en) Image recognition method and apparatus, electronic device, and readable storage medium
CN113239849B (en) Body-building action quality assessment method, body-building action quality assessment system, terminal equipment and storage medium
CN113724378B (en) Three-dimensional modeling method and apparatus, computer-readable storage medium, and computer device
US20220092300A1 (en) Display apparatus and method for controlling thereof
CN111368787A (en) Video processing method and device, equipment and computer readable storage medium
CN114220119A (en) Human body posture detection method, terminal device and computer readable storage medium
CN110866417A (en) Image processing method and device and electronic equipment
CN115661935B (en) Human body action accuracy determining method and device
US20230401740A1 (en) Data processing method and apparatus, and device and medium
CN113350771A (en) Athlete dynamic posture recognition method, device, system and storage medium
CN113112185A (en) Teacher expressive force evaluation method and device and electronic equipment
CN110415171B (en) Image processing method, image processing device, storage medium and electronic equipment
CN117392746A (en) Rehabilitation training evaluation assisting method, device, computer equipment and storage medium
CN111353345B (en) Method, apparatus, system, electronic device, and storage medium for providing training feedback
US20140073383A1 (en) Method and system for motion comparison
CN116704615A (en) Information processing method and device, computer equipment and computer readable storage medium
CN113842622B (en) Motion teaching method, device, system, electronic equipment and storage medium
CN116563588A (en) Image clustering method and device, electronic equipment and storage medium
CN116012417A (en) Track determination method and device of target object and electronic equipment
CN114550282A (en) Multi-person three-dimensional attitude estimation method and device and electronic equipment
CN110148202B (en) Method, apparatus, device and storage medium for generating image
CN114864043A (en) Cognitive training method, device and medium based on VR equipment
CN114821771A (en) Clipping object determining method in image, video clipping method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant