CN113902084A - Motion counting method and device, electronic equipment and computer storage medium - Google Patents
Info
- Publication number
- CN113902084A (application number CN202010642880.8A)
- Authority
- CN
- China
- Prior art keywords
- motion
- target object
- inflection point
- movement
- point information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06M—COUNTING MECHANISMS; COUNTING OF OBJECTS NOT OTHERWISE PROVIDED FOR
- G06M1/00—Design features of general application
- G06M1/27—Design features of general application for representing the result of count in the form of electric signals, e.g. by sensing markings on the counter drum
- G06M1/272—Design features of general application for representing the result of count in the form of electric signals, e.g. by sensing markings on the counter drum using photoelectric means
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the invention provide a motion counting method and device, electronic equipment, and a computer storage medium. The motion counting method comprises the following steps: acquiring video data of a target object in motion, the video data comprising multiple frames of images; obtaining motion trajectory inflection point information of key points of the target object from the multiple frames of images; and updating the number of motions of the target object according to the motion trajectory inflection point information. The motion counting method can improve counting accuracy.
Description
Technical Field
The embodiments of the invention relate to the field of computer technology, and in particular to a motion counting method and device, electronic equipment, and a computer storage medium.
Background
As people pay increasing attention to their health, more and more of them keep fit through sports such as rope skipping. Rope skipping is a form of exercise that improves an athlete's endurance, jumping power, flexibility, and coordination. It has already become a compulsory item in the physical education tests of primary and secondary schools, and the number of skips performed by an athlete needs to be counted during the test.
In the prior art, one rope skipping counting approach is manual counting, which is primitive and easily disturbed by factors such as the athlete's skipping speed, leading to counting errors. Another prior-art approach is to exercise with a skipping rope that has a built-in automatic counting function, but this requires a special skipping rope, which is expensive, short-lived, and insufficiently reliable.
In summary, exercises that require counting are usually counted manually, which demands intense concentration from the counter; when the exercise is too fast or the counter is distracted, errors easily occur. A more reliable motion counting method is therefore needed.
Disclosure of Invention
Embodiments of the present invention provide a motion counting scheme to at least partially solve the above problems.
According to a first aspect of embodiments of the present invention, there is provided a motion counting method, including: acquiring video data of a target object in motion, wherein the video data comprises a plurality of frames of images; obtaining motion trajectory inflection point information of key points of the target object according to the plurality of frames of images; and updating the number of motions of the target object according to the inflection point information of the motion trajectory.
According to a second aspect of embodiments of the present invention, there is provided a motion counting apparatus, comprising: an acquisition module, configured to acquire video data of a target object in motion, the video data comprising a plurality of frames of images; an obtaining module, configured to obtain motion trajectory inflection point information of key points of the target object according to the plurality of frames of images; and an updating module, configured to update the number of motions of the target object according to the inflection point information of the motion trajectory.
According to a third aspect of embodiments of the present invention, there is provided an electronic apparatus, including: the image acquisition equipment is used for acquiring video data of the movement of a target object, and the video data comprises a plurality of frames of images; the processor is used for obtaining motion trail inflection point information of key points of the target object according to the multi-frame images; and updating the movement times of the target object according to the inflection point information of the movement track.
According to a fourth aspect of embodiments of the present invention, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the motion counting method according to the first aspect.
According to the motion counting scheme provided by the embodiments of the invention, key point positions are obtained from the multiple frames of images of the target object, inflection point information of the motion trajectory is determined, and the number of motions is determined from that information. Counting is thus automated: the number of motions is derived from the inflection point information of the key points, and the key point positions can still be obtained accurately when the target object's body moves or rotates to different angles during motion, which improves the accuracy of motion counting; moreover, no special sports equipment needs to be purchased, which reduces counting cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some of the embodiments of the present invention, and a person skilled in the art may also obtain other drawings on the basis of these drawings.
FIG. 1a is a flowchart illustrating steps of a motion counting method according to a first embodiment of the present invention;
FIG. 1b is a schematic diagram of a skeleton key point according to a first embodiment of the present invention;
FIG. 1c is a diagram illustrating an example of a scenario in the embodiment shown in FIG. 1a;
FIG. 2a is a flowchart illustrating steps of a motion counting method according to a second embodiment of the present invention;
FIG. 2b is a diagram illustrating waveforms of a motion trajectory and waveforms formed by filtered motion cycles according to a second embodiment of the present invention;
FIG. 2c is a schematic diagram of a waveform formed by another filtered motion cycle in accordance with the second embodiment of the present invention;
FIG. 2d is a diagram illustrating an example of a scenario in the embodiment shown in FIG. 2a;
FIG. 2e is a waveform diagram illustrating a motion trajectory of an example of the scene in the embodiment shown in FIG. 2a;
fig. 3 is a block diagram of a motion counting apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the embodiments of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art on the basis of the embodiments of the present invention shall fall within the protection scope of the embodiments of the present invention.
The following further describes specific implementation of the embodiments of the present invention with reference to the drawings.
Example one
Referring to fig. 1a, a schematic flow chart of a motion counting method according to a first embodiment of the present invention is shown.
In this embodiment, the implementation of the motion counting method is described taking the case where the method is configured in a terminal device as an example. However, those skilled in the art should understand that the method may also be configured to run on any other device with computing capability (e.g., a server, which may be a conventional server and/or a cloud server), and the embodiment is not limited thereto.
Step S102: video data of the movement of the target object is collected.
The target object may be any object that performs a motion, such as a human, a robot, or an animal. In this embodiment, the target object is described as an example of a human being.
The video data may be a video within a set period of time, the video data including a plurality of frames of images. The multi-frame image may be all frame images included in the video data, or may be a part of frame images selected from all frame images, which is not limited in this embodiment.
The set time period may be any time period with any appropriate duration, which is not limited by the embodiment. For example, taking rope skipping movement as an example, the set time period may be 1 second, 5 seconds, 10 seconds, 1 minute, 2 minutes, 5 minutes, 10 minutes, or the like from the time when the target object picks up the rope skipping.
In a specific implementation, the target object may be photographed by a terminal device (such as a mobile phone, a camera, etc.) with an image capturing device, so as to obtain video data of all or part of the motion process of the target object.
Still taking rope skipping as an example: at time t1 the target object picks up the skipping rope and prepares to skip, and recording of the target object with a mobile phone starts at the same time. When recording reaches time t2, video data covering times t1 to t2 is obtained.
Although this embodiment is described taking as an example video data acquired by the terminal device through its own image capturing device, it should be understood that, in other embodiments, the video data may be existing video data, or video data acquired from other image capturing devices over a network or the like; this embodiment is not limited in this respect.
Step S104: and acquiring the motion trail inflection point information of the key point of the target object according to the multi-frame image.
The key points of the target object may be, but are not limited to, skeleton key points; they may also be contour key points such as the shoulders, head, or hips, key points of the facial features, key points of the upper or lower limbs, and so on, which this embodiment does not limit.
Some skeletal key points of the human body are shown in fig. 1b, including but not limited to: the key point at the top of the head (0 in the figure), the key point at the middle of the neck (1), the key points at the shoulders (2 and 5), the elbow joints (3 and 6), the wrist joints (4 and 7), the tops of the thighs (8 and 11), the knee joints (9 and 12), and the ends of the legs (10 and 13).
Since the positions of the key points of the target object change periodically while the target object moves, the motion trajectory of the key point positions over time (which can be represented as a waveform) can be determined by obtaining the key point positions at different moments, and the inflection point information of the motion trajectory can then be determined. The inflection point information indicates the moments at which the target object reverses direction during motion; for example, during rope skipping an inflection point may correspond to the target object jumping to its highest point or squatting to its lowest point.
The vibration cycles of the waveform formed by the target object's motion trajectory can be determined from the inflection point information, and the number of motions of the target object can thereby be determined. It should be noted that the waveform in this embodiment may simply be a time-ordered series of position data; it is described as a waveform for ease of understanding and description, and need not actually be drawn.
The inflection point information may be obtained by any suitable means, which is not limited by the present embodiment.
For example, in a specific implementation, step S104 can be implemented as: detecting skeleton key points of the multi-frame images; and determining inflection point information of the motion trail of the skeleton key points of the target object at least according to the positions of the detected skeleton key points in the corresponding images.
For skeleton key point detection, those skilled in the art can use any appropriate prior-art method of skeleton key point detection (pose estimation), for example a trained neural network model capable of detecting human skeleton key points.
In this embodiment, only one skeleton key point (such as the skeleton key point at 1 in fig. 1 b) may be detected, and at least the height coordinates (i.e., Y-axis coordinates) of the skeleton key point in each frame image may be obtained. Alternatively, a plurality of skeleton key points may be detected simultaneously, and the height coordinates of a plurality of different skeleton key points in each frame image may be obtained.
According to the height coordinates of the skeleton key points, inflection point information of the motion trail of the skeleton key points can be determined, and then the motion times are determined according to the inflection point information in the subsequent steps.
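A minimal sketch of this step is given below for illustration only; it assumes a hypothetical detect_keypoints function standing in for any trained pose-estimation model that returns named (x, y) key point coordinates per frame, and the function and field names are not part of the disclosed embodiments.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical output format of a pose-estimation model: key point name -> (x, y) in image pixels.
Keypoints = Dict[str, Tuple[float, float]]

def keypoint_heights(frames: List, timestamps: List[float],
                     detect_keypoints: Callable[[object], Keypoints],
                     keypoint_name: str = "neck") -> List[Tuple[float, float]]:
    """Collect a time-ordered series (t, y) of the chosen key point's height
    coordinate across the frames, i.e. the raw motion trajectory of step S104."""
    trajectory: List[Tuple[float, float]] = []
    for frame, t in zip(frames, timestamps):
        keypoints = detect_keypoints(frame)   # skeleton key point detection on one frame (assumed model)
        if keypoint_name in keypoints:        # the key point may be missed in some frames
            _, y = keypoints[keypoint_name]
            trajectory.append((t, y))
    return trajectory
```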
The image acquisition device that captures the video data remains static during acquisition, while the skeleton key points of the target object move relative to it as the target object moves. Representing the target object's motion by the position change of the skeleton key points (such as the change in height coordinate) therefore avoids the loss of counting accuracy that would otherwise occur when the target object shifts or rotates relative to the image acquisition device, and improves counting accuracy.
Step S106: and updating the movement times of the target object according to the inflection point information of the movement track.
In one specific implementation, if the video data is a video covering the target object's motion from its start to the current moment, the number of motion cycles of the motion trajectory may be determined according to the inflection point information and used directly as the number of motions of the target object for the update; since the original number of motions is 0, the cycle count directly becomes the new number of motions.
Alternatively, in another specific implementation, if the video data is a video from some moment after the target object started moving to the current moment, the number of motion cycles may be determined based on the multiple frames of images in the video data, and the number of motions is then updated based on the cycle count and the original number of motions. For example, if the original number of motions is 5, the cycle count is added to it and the number of motions is updated with the sum.
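As a minimal sketch of the update rule described in the two implementations above (function and variable names are illustrative, not from the disclosure):

```python
def update_motion_count(original_count: int, cycle_count: int) -> int:
    """If the video covers the whole motion, original_count is 0 and the cycle
    count becomes the motion count; otherwise the cycle count of the current
    segment is added to the count accumulated so far."""
    return original_count + cycle_count

# update_motion_count(0, 12) -> 12  (video covers the whole motion)
# update_motion_count(5, 12) -> 17  (video covers a segment of an ongoing motion)
```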
This embodiment uses skeleton key point detection (one form of key point detection; other suitable detection methods may be used in other embodiments) to obtain the position change of the target object's key points during motion, determines inflection point information from it, and counts motions according to that information. This overcomes the problem of manual counting in the prior art, which requires intense concentration from the counter and is easily affected by factors such as motion speed, leading to inaccurate counts. It also overcomes the drawback of prior-art approaches that count by analyzing the texture features of a fixed local region of the motion image: when the target object moves forward, backward, left, or right, or changes orientation during motion, the texture features in the fixed local region change and accurate counting becomes impossible.
The following describes an implementation process of the method with reference to a specific usage scenario:
As shown in fig. 1c, in this usage scenario the target object performs a rope skipping exercise. When the target object is ready to skip, video data of the target object skipping rope is shot with a mobile phone. To improve counting accuracy, the mobile phone may be supported by a tripod or similar equipment so that it stays still while the video data of the target object's rope skipping is shot. Of course, in other usage scenarios the phone may instead be held by another person.
Skeleton key point detection is performed on the multiple frames of images within a set time period (for example, from 0:00 to 1:00) of the shot video data, yielding the positions of the skeleton key points in those frames at different moments within the set time period.
Inflection point information of the motion trajectory of the target object's key point is determined according to the positions in the multiple frames of images, and the number of motions is updated according to the inflection point information. For example, as shown in fig. 1c, the number of motion cycles in the motion trajectory is determined from the inflection point information in order to determine the number of motions.
As shown in an interface 1 in fig. 1c, real-time video data and real-time movement times of a shot target object during movement can be displayed in an interface of a terminal device as required for a user to view.
According to this embodiment, key point positions are obtained from the multiple frames of images of the target object, inflection point information of the motion trajectory is determined, and the number of motions is determined from that information. Counting is thus automated: the number of motions is derived from the inflection point information of the key points, and the key point positions can still be obtained accurately when the target object's body moves or rotates to different angles during motion, which improves the accuracy of motion counting; moreover, no special sports equipment needs to be purchased, which reduces counting cost.
Example two
Referring to fig. 2a, a flow chart of a motion counting method according to a second embodiment of the present invention is shown.
In this embodiment, the method includes the aforementioned steps S102 to S106. Wherein, step S104 includes the following substeps:
substep S1041: and detecting the skeleton key points of the multi-frame images.
In a specific implementation, in order to meet individual requirements of different users, the users can select the detected skeleton key points according to their own requirements. For example, a plurality of candidate skeleton key points are displayed in a display interface of the terminal device for the user to select. The user may perform a selection operation on one or more candidate skeletal keypoints.
Based on this, the sub-step S1041 may be implemented as: and detecting the skeleton key points of the multi-frame images based on the skeleton key points indicated by the selection operation, and acquiring the positions of the selected skeleton key points in the multi-frame images.
For example, if the selection operation indicates that the key point at the middle of the neck is selected, skeleton key point detection is performed on each frame of image according to that selection, and the position of the neck key point in each frame (which may be expressed as coordinates in the image) is determined.
In this way, the user can select, for each kind of motion being counted, the skeleton key points that match that motion, which improves the accuracy of motion counting.
Substep S1042: and determining inflection point information of the motion trail of the skeleton key points of the target object at least according to the positions of the detected skeleton key points in the corresponding images.
In a specific implementation, step S1042 can be implemented as: and determining inflection point information of the motion trail of the skeleton key point of the target object according to the acquisition time of the multi-frame image and the detected position of the skeleton key point of the target object in the multi-frame image.
For example, the multi-frame images are respectively images 1-3, the positions of the skeleton key points in the images 1-3 are respectively obtained, and the motion trail of the skeleton key points is determined according to the acquisition time of the images 1-3 and the positions of the skeleton key points.
Inflection point information is then determined from the motion trajectory of the skeleton key points. An inflection point may be a peak point and/or a trough point of the motion trajectory, and the inflection point information can be obtained by performing waveform analysis on the trajectory.
For example, by comparing the position differences of the skeleton key point between adjacent moments of the motion trajectory, the positions of the peaks and troughs of the trajectory, i.e. the inflection points, can be determined.
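A minimal sketch of this waveform analysis, assuming the (t, y) trajectory format of the earlier sketch: a sign change of the position difference between adjacent samples marks a peak or a trough. Smoothing and thresholds, which a practical implementation would likely need, are omitted.

```python
from typing import List, Tuple

def find_inflection_points(trajectory: List[Tuple[float, float]]) -> List[Tuple[int, str]]:
    """Return (index, 'peak' | 'trough') for each inflection point of the
    trajectory, found by comparing each sample with its two neighbours."""
    inflections: List[Tuple[int, str]] = []
    for i in range(1, len(trajectory) - 1):
        prev_y, y, next_y = trajectory[i - 1][1], trajectory[i][1], trajectory[i + 1][1]
        if y > prev_y and y >= next_y:
            inflections.append((i, "peak"))     # local maximum of the height coordinate
        elif y < prev_y and y <= next_y:
            inflections.append((i, "trough"))   # local minimum of the height coordinate
    return inflections
```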
After the motion trajectory of the skeleton key point and its inflection point information have been determined, step S106 is executed and the number of motions is updated according to the inflection point information.
In one specific implementation, step S106 includes the following sub-steps:
substep S1061: and determining the number of the motion cycles of the target object according to an analysis result obtained by analyzing the inflection point information of the motion trail.
The sub-step S1061 may be implemented by the following processes a to B:
process A: and determining at least one motion period and the amplitude of the motion period in the motion process of the target object according to the analysis result of the inflection point information of the motion trail, and carrying out noise filtering on the motion trail.
The inflection point information of the motion trajectory represents the moments at which the key point of the target object reverses direction during motion. In rope skipping, for example, the inflection points indicated by the inflection point information correspond to the position where the target object jumps to its highest point (i.e. a peak of the trajectory waveform) and the position where it falls to its lowest point (i.e. a trough of the trajectory waveform). By analyzing the inflection point information, the motion cycles of the target object and the amplitude of each motion cycle can therefore be determined.
For example, whether each inflection point is a peak or a trough is determined through analysis, and then a waveform segment between two adjacent peaks is determined as one motion period according to a time corresponding to each inflection point, or a waveform segment between two adjacent troughs is determined as one motion period, or any other suitable manner may be adopted to determine the motion period, which is not limited in this embodiment.
Once the motion cycles have been determined, the amplitude of each cycle, i.e. the distance between its peak and trough, can be determined. In addition, the target object typically goes through a preparation stage before the motion and an ending stage after it, and in these stages the height coordinate of the key point usually varies with a smaller amplitude than in the actual motion stage. To prevent the data of the preparation and ending stages, as well as the noise within the actual motion stage, from affecting counting accuracy, this embodiment noise-filters the waveform information so as to remove the waveform segments of the preparation and ending stages and the noise of the actual motion stage.
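A minimal sketch combining the two paragraphs above, under the trough-to-trough convention mentioned earlier: cycles are cut between adjacent troughs of the labelled inflection points, and each cycle's amplitude is the peak-to-trough height distance inside it (data layout and names are illustrative).

```python
from typing import List, Tuple

def segment_cycles(trajectory: List[Tuple[float, float]],
                   inflections: List[Tuple[int, str]]) -> List[Tuple[float, float, float]]:
    """Return (start_time, end_time, amplitude) for each trough-to-trough
    motion cycle; amplitude is the peak-to-trough distance within the cycle."""
    trough_indices = [i for i, kind in inflections if kind == "trough"]
    cycles: List[Tuple[float, float, float]] = []
    for a, b in zip(trough_indices, trough_indices[1:]):  # adjacent troughs bound one cycle
        heights = [y for _, y in trajectory[a:b + 1]]
        amplitude = max(heights) - min(heights)           # peak-to-trough distance
        cycles.append((trajectory[a][0], trajectory[b][0], amplitude))
    return cycles
```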
In one particular implementation, the noise filtering of the motion cycles may be implemented as: determining, according to the analysis result, the amplitude of the waveform segment corresponding to each motion cycle, and filtering out the waveform segments that meet the filtering condition, the filtering condition comprising at least one of the following: the cycle duration does not satisfy the set cycle duration; the corresponding amplitude is less than the set amplitude threshold.
Take waveform 1 in fig. 2b as an example of the waveform formed by the motion cycles. As shown in the figure, starting from time t0, the first trough appears at time t1, the first peak at time t2, and the second trough at time t3; on this basis the interval from time t0 to time t3 can be determined as one motion cycle, and so on. The amplitude may be the peak-to-trough distance of each motion cycle.
When each motion cycle is checked, if the amplitude of the current cycle is smaller than the set amplitude threshold, the target object was probably in the preparation or ending stage during the corresponding time period and was not performing the counted motion, so the cycle needs to be filtered out to avoid affecting counting accuracy. The set amplitude threshold may be a predetermined value or a value determined from the amplitudes in the waveform information, for example N times the average amplitude, where N is greater than or equal to 1.
Likewise, if the duration of the current motion cycle does not satisfy the set cycle duration, the target object was not performing the counted motion and the cycle can be filtered out to avoid affecting counting accuracy. It should be noted that the set cycle duration may be preset to different values for different motions, such as 2 seconds or 5 seconds, or may be determined from the duration of each vibration cycle in the waveform information, for example M times the average cycle duration, where M is greater than or equal to 1.
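A minimal sketch of this filtering condition, applied to the (start_time, end_time, amplitude) cycles of the earlier sketch; the duration bounds and amplitude threshold are illustrative stand-ins for the set cycle duration and set amplitude threshold described above.

```python
from typing import List, Tuple

def filter_cycles(cycles: List[Tuple[float, float, float]],
                  min_duration: float, max_duration: float,
                  amplitude_threshold: float) -> List[Tuple[float, float, float]]:
    """Keep only cycles whose duration lies within the set cycle duration
    and whose amplitude reaches the set amplitude threshold."""
    kept: List[Tuple[float, float, float]] = []
    for start_t, end_t, amplitude in cycles:
        duration = end_t - start_t
        if not (min_duration <= duration <= max_duration):
            continue            # cycle duration does not satisfy the set cycle duration
        if amplitude < amplitude_threshold:
            continue            # preparation/ending stage or noise within the motion stage
        kept.append((start_t, end_t, amplitude))
    return kept
```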
The waveform information corresponding to the motion period remaining after filtering is shown as waveform 2 in fig. 2 b.
And a process B: and determining the number of the motion periods of the target object according to the amplitude change rule of the filtered motion periods.
Because the amplitude change rules of the waveforms formed by different motion patterns may differ, and the corresponding counting rules may also differ, the motion pattern of the target object needs to be determined to ensure counting accuracy, and counting is then performed in the manner that matches that pattern.
In a specific implementation, the process B may be implemented as: determining a corresponding target motion mode according to the amplitude change rule of the filtered motion period; and determining the number of the motion cycles in which the amplitude change rule in the filtered motion cycles is matched with the amplitude change rule indicated by the target motion mode as the number of the motion cycles.
Wherein, according to the amplitude variation rule of the filtered motion period, determining the corresponding target motion mode can be realized as: acquiring waveform characteristic information of a waveform segment corresponding to the filtered motion period; determining the amplitude change rule of the filtered motion period according to the waveform characteristic information; and determining the motion mode matched with the amplitude change rule of the filtered motion period as the target motion mode according to the amplitude change rule indicated by the preset motion mode.
Wherein the waveform feature information includes at least one of: the position difference of two adjacent wave troughs, the position difference of two adjacent wave crests and the position difference between the adjacent wave crest and the wave trough.
In general, if the height difference between two adjacent peaks and/or two adjacent troughs is greater than a set difference (the set difference may be determined as needed, but the present embodiment is not limited thereto), it means that the waveform segment between two adjacent peaks or two adjacent troughs is not a waveform segment corresponding to a complete motion cycle, but may be a part of a motion cycle.
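A minimal sketch of how this waveform feature information could be computed from the time-ordered peak and trough heights of the filtered waveform; the return layout and the pairing of peaks with troughs are illustrative assumptions.

```python
from typing import Dict, List

def waveform_features(peak_heights: List[float], trough_heights: List[float]) -> Dict[str, List[float]]:
    """Compute the three kinds of waveform feature information listed above."""
    return {
        # position difference of two adjacent troughs
        "trough_diffs": [abs(trough_heights[i + 1] - trough_heights[i])
                         for i in range(len(trough_heights) - 1)],
        # position difference of two adjacent peaks
        "peak_diffs": [abs(peak_heights[i + 1] - peak_heights[i])
                       for i in range(len(peak_heights) - 1)],
        # position difference between adjacent peak and trough (assumes alternating order)
        "amplitudes": [abs(p - t) for p, t in zip(peak_heights, trough_heights)],
    }
```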
Fig. 2c shows waveform information formed by filtered motion cycles. As shown, each trough with a larger amplitude is followed by a trough with a smaller amplitude, and the large amplitude (formed from point A to point B in fig. 2c) is greater than or equal to 2 times the small amplitude (formed from point C to point D in fig. 2c). The motion pattern of the target object in fig. 2c therefore differs from that in fig. 2b, and the same counting method cannot be used.
To this end, the waveform feature information of the waveform shown in fig. 2c is obtained, and the amplitude change rule (e.g., a large amplitude greater than or equal to 2 times the following small amplitude) is determined from it. The corresponding target motion pattern is then determined by matching this amplitude change rule against the amplitude change rules indicated by the preset motion patterns. In this embodiment, the matching determines the target motion pattern to be rebound skipping, in which the target object jumps twice within a single skip, the second jump being a small rebound, which is why a small amplitude appears in the waveform information.
Thus, the target motion pattern can be determined by matching the amplitude change rule, and the number of filtered motion cycles whose amplitude change rule matches the amplitude change rule indicated by the target motion pattern can be taken as the cycle count.
In a specific implementation, the amplitude change rule indicated by the target motion pattern specifies that one motion cycle contains two peaks and two troughs, and that the first amplitude (formed by the first peak and the first trough) is at least 2 times the second amplitude (formed by the second peak and the second trough). The number of filtered motion cycles that satisfy this rule can then be determined and used as the cycle count.
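A minimal sketch of this counting rule for the rebound-skipping pattern, operating on the time-ordered peak-to-trough amplitudes of the filtered waveform. The 2x ratio follows the rule above; the greedy pairing and the names are illustrative assumptions rather than the disclosed implementation.

```python
from typing import List

def count_rebound_cycles(amplitudes: List[float], ratio: float = 2.0) -> int:
    """Count motion cycles in which a main amplitude is followed by a rebound
    amplitude at most 1/ratio of its size; each such pair counts as one motion."""
    count, i = 0, 0
    while i + 1 < len(amplitudes):
        main, rebound = amplitudes[i], amplitudes[i + 1]
        if rebound > 0 and main >= ratio * rebound:
            count += 1          # main jump + small rebound = one motion cycle
            i += 2              # consume both amplitudes of this cycle
        else:
            i += 1              # no rebound pattern here; advance by one amplitude
    return count
```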
Substep S1062: updating the number of movements based on the number of movement cycles.
In a specific implementation, if the video data covers the complete motion from the moment the motion starts to the current moment, the cycle count is directly used as the new number of motions, and the original number of motions is updated to it.
In another specific implementation, if the video data covers the motion from time t1 to the current moment, the cycle count is added to the original number of motions, and the number of motions is updated to the sum.
In addition to counting according to the target motion mode in the aforementioned manner, a default motion mode may be directly configured and counting may be directly performed according to the default motion mode, so that a process of determining the motion mode is omitted.
Optionally, in order to enable the viewer to directly, conveniently and quickly know the number of movements, the method in this embodiment may further include step S108.
Step S108: in the live broadcasting process of the video data, a motion trail display window is created, and motion trails of skeleton key points are displayed in the motion trail display window, wherein the motion trails are determined according to positions of the skeleton key points in the multi-frame images.
The terminal device can live-broadcast the video data so that viewers can watch the motion process. During the live broadcast, a motion trajectory display window is created so that the motion trajectory of the skeleton key points can be shown in it; the current number of motions and other information can also be displayed in the window.
The following describes, with reference to a specific usage scenario, the counting process when the target object performs rebound rope skipping:
As shown in fig. 2d, in this usage scenario a video of the target object skipping rope is acquired in real time with the mobile phone placed still.
For the collected video data, the skeleton key point detection can be carried out on each frame of image in real time, or after the image is collected for a period of time, the skeleton key point detection is carried out on part or all of the images in the video data. By skeleton key point detection, the position of a skeleton key point at the neck in each detected image (i.e., height coordinates in a moving image) is obtained.
According to the acquisition time of the detected image and the height of the skeleton key point, a waveform (i.e. a motion trail) for indicating the position change of the skeleton key point on the time sequence in the rope skipping process is obtained, and the waveform is shown in fig. 2e, wherein the horizontal axis is a time axis, and the vertical axis is the height of the skeleton key point.
To prevent waveform segments outside the actual motion stage from adversely affecting counting accuracy, the waveform is analyzed to determine inflection point information. The motion cycles and their amplitudes are determined from the inflection point information, and the waveform segment of any cycle whose duration does not satisfy the set cycle duration and/or whose amplitude is smaller than the set amplitude threshold is filtered out, thereby removing the noise of the actual motion stage as well as the preparation and ending stages before and after the rope skipping.
In one case, if the default motion pattern is preset, the number of motion cycles satisfying the amplitude variation law of the default motion pattern among the filtered motion cycles is determined, and the number of motions is determined according to the number.
In another case, no default motion pattern is configured and the target object may adopt different motion patterns, each of which is counted differently; for example, running skipping (a running-like posture) and rebound skipping (jumping twice per skip) are counted differently from ordinary rope skipping. In this usage scenario the motion pattern therefore needs to be analyzed.
For example, fig. 2e shows that the height of the skeleton key point fluctuates regularly during the actual motion stage (i.e. the actual rope skipping). Based on this, the target motion pattern of the target object can be determined by obtaining the amplitude change rule of the waveform segments corresponding to the filtered motion cycles and matching it against the preset motion patterns, so that the cycle count, and in turn the number of motions, is determined according to the target motion pattern.
Specifically, the waveform feature information is obtained, the successive fluctuation rule of the trough heights and/or the peak heights (these fluctuation rules are referred to as amplitude change rules) is determined from it, and the corresponding target motion pattern (for example, a running-skipping pattern) is determined by matching this rule against the amplitude change rules indicated by the preset motion patterns.
According to the target motion pattern, the number of motion cycles among all the filtered cycles whose amplitude change rule matches the amplitude change rule indicated by the target motion pattern is determined and taken as the cycle count.
After the cycle count is obtained, if the set time period starts at time 0 when the target object begins moving, i.e. the original number of motions is 0, the cycle count is directly used as the new number of motions.
For the images of the next set time period, inflection point information of the motion trajectory is again obtained by skeleton key point detection and the cycle count is determined from it. Since the original number of motions is no longer 0 in this case, the cycle count is added to it to obtain the new number of motions and the original count is updated; this is repeated until it is detected that the target object has finished moving.
It should be noted that in fig. 2d, the step of returning to real-time shooting of the target object's motion image after updating the number of motions is only meant to show that counting is a cyclic process; the ordering of the steps is not strictly limited in time.
In this usage scenario, skeleton key point detection can be performed on the images in parallel with real-time shooting of the target object's motion, and the number of motions can likewise be obtained in parallel.
It should be noted that although in this usage scenario the target motion pattern is obtained by analysis and different target motion patterns are detected and counted separately, those skilled in the art will understand that the target motion pattern may also be obtained in other ways, for example selected by the user in advance. In addition, although this usage scenario determines the number of motions from the waveform of a single skeleton key point, in other usage scenarios the inflection point information of the motion trajectories of several different skeleton key points may be obtained and the number of motions updated according to all of them; this usage scenario is not limited in this respect.
In this usage scenario, the skeleton key points are used to reflect the motion of the target object, and the number of motions is obtained by analyzing their movement. This removes the dependence on manual counting and achieves automatic counting with basic equipment such as a smartphone: no additional smart sports equipment needs to be purchased, ordinary sports apparatus (such as an ordinary skipping rope) can be counted automatically, and the cost is reduced.
In addition, because skeleton key point information is used instead of the texture features of a fixed local region of the image, the problem that local-region texture changes caused by the user moving forward, backward, left, or right or changing orientation during rope skipping prevent accurate counting is effectively solved.
Human key point detection technology (such as skeleton key point detection) is applied to analyze the movement of the skeleton key points during rope skipping, thereby achieving rope skipping counting. Moreover, by analyzing the movement of the skeleton key points under different rope skipping styles, counting suited to a variety of skipping styles is realized, giving greater adaptability.
According to this embodiment, key point positions are obtained from the multiple frames of images of the target object, inflection point information of the motion trajectory is determined, and the number of motions is determined from that information. Counting is thus automated: the number of motions is derived from the inflection point information of the key points, and the key point positions can still be obtained accurately when the target object's body moves or rotates to different angles during motion, which improves the accuracy of motion counting; moreover, no special sports equipment needs to be purchased, which reduces counting cost.
Example three
Referring to fig. 3, a block diagram of a motion counting apparatus according to a third embodiment of the present invention is shown.
As shown in fig. 3, in the present embodiment, the motion counting apparatus includes: the acquisition module 302 is configured to acquire video data of a target object moving, where the video data includes multiple frames of images; an obtaining module 304, configured to obtain motion trajectory inflection point information of a key point of the target object according to the multi-frame image; an updating module 306, configured to update the number of times of movement of the target object according to the inflection point information of the movement trajectory.
Optionally, the obtaining module 304 includes: a skeleton detection module 3041, configured to perform skeleton key point detection on the multiple frames of images; a first determining module 3042, configured to determine, according to at least the position of the detected skeleton key point in the corresponding image, inflection point information of a motion trajectory of the skeleton key point of the target object.
Optionally, the first determining module 3042 is configured to determine, according to the acquisition time of the multiple frames of images and the detected position of the skeleton key point of the target object in the multiple frames of images, motion trajectory inflection point information of the skeleton key point of the target object.
Optionally, the skeleton detecting module 3041 is configured to perform skeleton key point detection on the multiple frames of images based on the skeleton key point indicated by the selection operation, and acquire a position of the selected skeleton key point in the multiple frames of images.
Optionally, the update module 306 includes: a second determining module 3061, configured to determine, according to an analysis result obtained by analyzing the inflection point information of the motion trajectory, a number of motion cycles of the target object; a number update module 3062 configured to update the number of movements based on the number of movement cycles.
Optionally, the second determining module 3061 is configured to determine at least one motion period and an amplitude of the motion period in the motion process of the target object according to an analysis result of inflection point information of the motion trajectory, and perform noise filtering on the motion period; and determining the number of the motion periods of the target object according to the amplitude change rule of the filtered motion periods.
Optionally, the second determining module 3061 is configured to, when the noise filtering is performed on the motion cycle, determine, according to the analysis result, an amplitude of a waveform segment corresponding to the motion cycle; filtering the waveform segments that satisfy a filtering condition, the filtering condition including at least one of: the period duration does not satisfy the set period duration and the corresponding amplitude is less than the set amplitude threshold.
Optionally, the second determining module 3061 is configured to, when the number of motion cycles of the target object is determined according to the amplitude variation rule of the filtered motion cycle, determine a corresponding target motion pattern according to the amplitude variation rule of the filtered motion cycle; and determining the number of the motion cycles in which the amplitude change rule is matched with the amplitude change rule indicated by the target motion mode in the filtered motion cycles as the number of the motion cycles.
Optionally, the second determining module 3061 is configured to, when the corresponding target motion mode is determined according to the amplitude variation rule of the filtered motion period, obtain waveform feature information of a waveform segment corresponding to the filtered motion period, where the waveform feature information includes at least one of: the position difference of two adjacent wave troughs, the position difference of two adjacent wave crests and the position difference between the adjacent wave crests and the wave troughs; determining the amplitude change rule of the filtered motion period according to the waveform characteristic information; and determining the motion mode matched with the amplitude change rule of the filtered motion period as the target motion mode according to the amplitude change rule indicated by the preset motion mode.
Optionally, the apparatus further comprises: and the display module 308 is configured to create a motion trail display window in the live broadcast process of the video data, and display a motion trail of the skeleton key point in the motion trail display window, where the motion trail is determined according to a position of the skeleton key point in the multi-frame image.
The motion counting apparatus of this embodiment is used to implement the corresponding motion counting method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again. In addition, the functional implementation of each module in the motion counting apparatus of this embodiment can refer to the description of the corresponding part in the foregoing method embodiments, and is not repeated herein.
Example four
Referring to fig. 4, a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention is shown, and the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
As shown in fig. 4, the electronic device may include: a processor 402, a communication interface 404, a memory 406, a communication bus 408, and an image capture device.
Wherein:
the processor 402, communication interface 404, and memory 406 and the image capture device communicate with each other via a communication bus 408.
A communication interface 404 for communicating with other electronic devices or servers.
The image acquisition equipment is used for acquiring video data of the movement of the target object, and the video data comprises a plurality of frames of images.
The processor 402 is configured to execute the program 410, and may specifically execute the relevant steps in the above-described motion counting method embodiment.
In particular, program 410 may include program code comprising computer operating instructions.
The processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The intelligent device comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
And a memory 406 for storing a program 410. The memory 406 may comprise high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
The program 410 may specifically be configured to cause the processor 402 to perform the following operations: acquiring video data of a target object moving, wherein the video data comprises a plurality of frames of images; obtaining motion trail inflection point information of key points of the target object according to the multi-frame image; and updating the movement times of the target object according to the inflection point information of the movement track.
In an optional implementation manner, the program 410 is further configured to enable the processor 402 to perform skeleton keypoint detection on the multi-frame image when obtaining motion trajectory inflection point information of keypoints of the target object according to the multi-frame image; and determining inflection point information of the motion trail of the skeleton key points of the target object at least according to the positions of the detected skeleton key points in the corresponding images.
In an optional implementation manner, the program 410 is further configured to, when determining inflection point information of a motion trajectory of a skeleton key point of the target object according to at least a position of the detected skeleton key point in a corresponding image, determine the inflection point information of the motion trajectory of the skeleton key point of the target object according to an acquisition time of the multi-frame image and the detected position of the skeleton key point of the target object in the multi-frame image.
In an optional implementation, the program 410 is further configured to, when performing skeleton keypoint detection on the multi-frame image, perform skeleton keypoint detection on the multi-frame image based on the skeleton keypoint indicated by the selection operation, and acquire a position of the selected skeleton keypoint in the multi-frame image.
In an optional implementation manner, the program 410 is further configured to enable the processor 402 to determine a number of movement cycles of the target object according to an analysis result obtained by analyzing the inflection point information of the movement locus when the number of movement times of the target object is updated according to the inflection point information of the movement locus; updating the number of movements based on the number of movement cycles.
In an optional implementation, the program 410 is further configured to, when determining the number of movement cycles of the target object according to an analysis result obtained by analyzing the inflection point information of the movement trajectory, determine at least one movement cycle and an amplitude of the movement cycle in a movement process of the target object according to the analysis result of the inflection point information of the movement trajectory, and perform noise filtering on the movement cycle; and determining the number of the motion periods of the target object according to the amplitude change rule of the filtered motion periods.
In an alternative embodiment, the program 410 is further configured to enable the processor 402 to determine, according to the analysis result, an amplitude of a waveform segment corresponding to the motion cycle when performing noise filtering on the motion cycle; filtering the waveform segments that satisfy a filtering condition, the filtering condition including at least one of: the period duration does not satisfy the set period duration and the corresponding amplitude is less than the set amplitude threshold.
In an optional implementation, the program 410 is further configured to cause the processor 402, when determining the number of movement cycles of the target object according to the amplitude change rule of the filtered movement cycles, to determine a corresponding target motion pattern according to the amplitude change rule of the filtered movement cycles, and to determine, as the number of movement cycles, the number of filtered movement cycles whose amplitude change rule matches the amplitude change rule indicated by the target motion pattern.
In an optional implementation, the program 410 is further configured to cause the processor 402, when determining the corresponding target motion pattern according to the amplitude change rule of the filtered movement cycles, to acquire waveform feature information of the waveform segments corresponding to the filtered movement cycles, the waveform feature information comprising at least one of: a position difference between two adjacent troughs, a position difference between two adjacent peaks, and a position difference between an adjacent peak and trough; to determine the amplitude change rule of the filtered movement cycles according to the waveform feature information; and to determine, according to amplitude change rules indicated by preset motion patterns, the motion pattern matching the amplitude change rule of the filtered movement cycles as the target motion pattern.
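Purely as an illustration of matching an observed amplitude change rule against preset motion patterns, the sketch below reduces the waveform feature information to a single number, the ratio of the smallest to the largest gap between adjacent peaks, and compares it against two hypothetical preset patterns. The pattern names, the ratio feature, and the ranges are assumptions for the example, not values taken from the embodiment.

```python
from typing import Dict, List, Tuple

# Hypothetical preset patterns: acceptable ranges for the peak-spacing ratio below.
PRESET_PATTERNS: Dict[str, Tuple[float, float]] = {
    "uniform":   (0.8, 1.0),  # evenly spaced peaks, e.g. steady repetitions
    "irregular": (0.0, 0.8),  # uneven spacing, e.g. preparation or rest movements
}

def match_motion_pattern(peaks: List[Tuple[float, float]]) -> str:
    """Pick the preset pattern whose range covers the observed peak-spacing ratio.

    The feature is the ratio of the smallest to the largest time gap between
    adjacent peaks -- a simplified stand-in for the 'position difference between
    two adjacent peaks' feature described above.
    """
    if len(peaks) < 3:
        return "irregular"
    gaps = [peaks[i + 1][0] - peaks[i][0] for i in range(len(peaks) - 1)]
    ratio = min(gaps) / max(gaps) if max(gaps) > 0 else 0.0
    for name, (low, high) in PRESET_PATTERNS.items():
        if low <= ratio <= high:
            return name
    return "irregular"
```

Under this reading, only the filtered cycles whose features fall inside the matched pattern's range would be counted, and that count would then be added to the number of movements.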
In an optional implementation, the program 410 is further configured to cause the processor 402 to create a motion trajectory display window during live broadcasting of the video data, and to display, in the motion trajectory display window, motion trajectories of the skeleton key points, the motion trajectories being determined according to the positions of the skeleton key points in the multiple frames of images.
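The display window itself could be rendered in many ways; as one hedged example, the snippet below overlays the accumulated key point positions on the current frame with OpenCV. The window name, colours, and the assumption that integer pixel coordinates are already available are illustrative choices, not details of the embodiment.

```python
import cv2
import numpy as np

def show_trajectory_window(frame: np.ndarray,
                           keypoint_history: list,
                           window_name: str = "motion-trajectory") -> None:
    """Overlay the recorded key point positions on the current frame and show them.

    keypoint_history is a list of (x, y) pixel positions of one skeleton key
    point, accumulated frame by frame while the video is being broadcast.
    """
    canvas = frame.copy()
    if len(keypoint_history) >= 2:
        # A polyline through the historical positions approximates the trajectory.
        pts = np.array(keypoint_history, dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(canvas, [pts], isClosed=False, color=(0, 255, 0), thickness=2)
    if keypoint_history:
        # Mark the latest position so the viewer can see where the key point is now.
        x, y = keypoint_history[-1]
        cv2.circle(canvas, (int(x), int(y)), 5, (0, 0, 255), -1)
    cv2.imshow(window_name, canvas)
    cv2.waitKey(1)
```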
For the specific implementation of each step in the program 410, and for the specific working processes of the devices and modules described above, reference may be made to the corresponding steps, units, and process descriptions in the foregoing motion counting method embodiments; for convenience and brevity, they are not repeated here.
It should be noted that, according to implementation requirements, each component/step described in the embodiments of the present invention may be split into more components/steps, and two or more components/steps, or parts of their operations, may be combined into new components/steps to achieve the purpose of the embodiments of the present invention.
The above-described method according to the embodiments of the present invention may be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium such as a CD-ROM, a RAM, a floppy disk, a hard disk, or a magneto-optical disk, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium, downloaded over a network, and stored in a local recording medium, so that the method described herein can be executed, from software stored on a recording medium, by a general-purpose computer, a dedicated processor, or programmable or dedicated hardware such as an ASIC or an FPGA. It will be appreciated that the computer, processor, microprocessor controller, or programmable hardware includes a storage component (for example, RAM, ROM, or flash memory) that can store or receive software or computer code which, when accessed and executed by the computer, processor, or hardware, implements the motion counting method described herein. Furthermore, when a general-purpose computer accesses code for implementing the motion counting method shown herein, execution of the code transforms the general-purpose computer into a special-purpose computer for performing that method.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The above embodiments are intended only to illustrate, not to limit, the embodiments of the present invention. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the embodiments of the present invention; accordingly, all equivalent technical solutions also fall within the scope of the embodiments of the present invention, and the patent protection scope of the embodiments of the present invention shall be defined by the claims.
Claims (14)
1. A motion counting method, comprising:
acquiring video data of a moving target object, wherein the video data comprises multiple frames of images;
obtaining motion trajectory inflection point information of key points of the target object according to the multiple frames of images; and
updating a number of movements of the target object according to the motion trajectory inflection point information.
2. The method according to claim 1, wherein the obtaining of the motion trajectory inflection point information of the key points of the target object according to the multiple frames of images comprises:
performing skeleton key point detection on the multiple frames of images; and
determining motion trajectory inflection point information of the skeleton key points of the target object at least according to positions of the detected skeleton key points in the corresponding images.
3. The method according to claim 2, wherein the determining of the motion trajectory inflection point information of the skeleton key points of the target object at least according to the positions of the detected skeleton key points in the corresponding images comprises:
determining the motion trajectory inflection point information of the skeleton key points of the target object according to acquisition times of the multiple frames of images and the detected positions of the skeleton key points of the target object in the multiple frames of images.
4. The method according to claim 2, wherein the performing of the skeleton key point detection on the multiple frames of images comprises:
performing skeleton key point detection on the multiple frames of images based on skeleton key points indicated by a selection operation, and acquiring positions of the selected skeleton key points in the multiple frames of images.
5. The method according to claim 1, wherein the updating of the number of movements of the target object according to the motion trajectory inflection point information comprises:
determining a number of movement cycles of the target object according to an analysis result obtained by analyzing the motion trajectory inflection point information; and
updating the number of movements according to the number of movement cycles.
6. The method according to claim 5, wherein the determining of the number of movement cycles of the target object according to the analysis result obtained by analyzing the motion trajectory inflection point information comprises:
determining, according to the analysis result of the motion trajectory inflection point information, at least one movement cycle in a movement process of the target object and an amplitude of the movement cycle, and performing noise filtering on the movement cycle; and
determining the number of movement cycles of the target object according to an amplitude change rule of the filtered movement cycles.
7. The method according to claim 6, wherein the performing of the noise filtering on the movement cycle comprises:
determining, according to the analysis result, an amplitude of a waveform segment corresponding to the movement cycle; and
filtering out waveform segments that satisfy a filtering condition, the filtering condition comprising at least one of: a cycle duration not satisfying a set cycle duration, and a corresponding amplitude being less than a set amplitude threshold.
8. The method according to claim 6, wherein the determining of the number of movement cycles of the target object according to the amplitude change rule of the filtered movement cycles comprises:
determining a corresponding target motion pattern according to the amplitude change rule of the filtered movement cycles; and
determining, as the number of movement cycles, the number of filtered movement cycles whose amplitude change rule matches the amplitude change rule indicated by the target motion pattern.
9. The method according to claim 8, wherein the determining of the corresponding target motion pattern according to the amplitude change rule of the filtered movement cycles comprises:
acquiring waveform feature information of waveform segments corresponding to the filtered movement cycles, wherein the waveform feature information comprises at least one of: a position difference between two adjacent troughs, a position difference between two adjacent peaks, and a position difference between an adjacent peak and trough;
determining the amplitude change rule of the filtered movement cycles according to the waveform feature information; and
determining, according to amplitude change rules indicated by preset motion patterns, the motion pattern matching the amplitude change rule of the filtered movement cycles as the target motion pattern.
10. The method according to claim 2, further comprising:
during live broadcasting of the video data, creating a motion trajectory display window, and displaying, in the motion trajectory display window, motion trajectories of the skeleton key points, wherein the motion trajectories are determined according to the positions of the skeleton key points in the multiple frames of images.
11. A motion counting apparatus, comprising:
an acquisition module, configured to acquire video data of a moving target object, wherein the video data comprises multiple frames of images;
an obtaining module, configured to obtain motion trajectory inflection point information of key points of the target object according to the multiple frames of images; and
an updating module, configured to update a number of movements of the target object according to the motion trajectory inflection point information.
12. An electronic device, comprising:
an image acquisition device, configured to acquire video data of a moving target object, wherein the video data comprises multiple frames of images; and
a processor, configured to obtain motion trajectory inflection point information of key points of the target object according to the multiple frames of images, and to update a number of movements of the target object according to the motion trajectory inflection point information.
13. The electronic device according to claim 12, further comprising a memory, wherein the memory is configured to store at least the multiple frames of images and the number of movements.
14. A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the motion counting method according to any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010642880.8A CN113902084A (en) | 2020-07-06 | 2020-07-06 | Motion counting method and device, electronic equipment and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113902084A true CN113902084A (en) | 2022-01-07 |
Family
ID=79186844
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010642880.8A Pending CN113902084A (en) | 2020-07-06 | 2020-07-06 | Motion counting method and device, electronic equipment and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113902084A (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101172199A (en) * | 2006-07-18 | 2008-05-07 | 孙学川 | Intelligent sit-up test system |
CN106650590A (en) * | 2016-09-30 | 2017-05-10 | 上海斐讯数据通信技术有限公司 | Counting method and apparatus for sit-ups and mobile terminal |
CN109876416A (en) * | 2019-03-26 | 2019-06-14 | 浙江大学 | A kind of rope skipping method of counting based on image information |
CN110210360A (en) * | 2019-05-24 | 2019-09-06 | 浙江大学 | A kind of rope skipping method of counting based on video image target identification |
CN110738192A (en) * | 2019-10-29 | 2020-01-31 | 腾讯科技(深圳)有限公司 | Human motion function auxiliary evaluation method, device, equipment, system and medium |
CN110772749A (en) * | 2019-11-28 | 2020-02-11 | 杨雯悦 | Rope skipping counting method and system |
CN111242030A (en) * | 2020-01-13 | 2020-06-05 | 平安国际智慧城市科技股份有限公司 | Video data processing method, device, equipment and computer readable storage medium |
CN111275032A (en) * | 2020-05-07 | 2020-06-12 | 西南交通大学 | Deep squatting detection method, device, equipment and medium based on human body key points |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114640807A (en) * | 2022-03-15 | 2022-06-17 | 京东科技信息技术有限公司 | Video-based object counting method and device, electronic equipment and storage medium |
CN114640807B (en) * | 2022-03-15 | 2024-01-16 | 京东科技信息技术有限公司 | Video-based object statistics method, device, electronic equipment and storage medium |
CN114926910A (en) * | 2022-07-18 | 2022-08-19 | 科大讯飞(苏州)科技有限公司 | Action matching method and related equipment thereof |
CN115813374A (en) * | 2022-11-02 | 2023-03-21 | 科大讯飞股份有限公司 | Periodic motion evaluation method and device, electronic equipment and storage medium |
CN115546291A (en) * | 2022-11-28 | 2022-12-30 | 成都怡康科技有限公司 | Rope skipping counting method and device, computer equipment and storage medium |
CN116306766A (en) * | 2023-03-23 | 2023-06-23 | 北京奥康达体育产业股份有限公司 | Wisdom horizontal bar pull-up examination training system based on skeleton recognition technology |
CN116306766B (en) * | 2023-03-23 | 2023-09-22 | 北京奥康达体育产业股份有限公司 | Wisdom horizontal bar pull-up examination training system based on skeleton recognition technology |
Similar Documents
Publication | Title |
---|---|
CN113902084A (en) | Motion counting method and device, electronic equipment and computer storage medium |
US11638854B2 (en) | Methods and systems for generating sports analytics with a mobile device | |
CN105654512B (en) | A kind of method for tracking target and device | |
US20220080260A1 (en) | Pose comparison systems and methods using mobile computing devices | |
US20190160339A1 (en) | System and apparatus for immersive and interactive machine-based strength training using virtual reality | |
CN108229294A (en) | A kind of motion capture method, apparatus, electronic equipment and storage medium | |
US20240050803A1 (en) | Video-based motion counting and analysis systems and methods for virtual fitness application | |
US10796448B2 (en) | Methods and systems for player location determination in gameplay with a mobile device | |
US10350454B1 (en) | Automated circuit training | |
CN107273857B (en) | Motion action recognition method and device and electronic equipment | |
CN113743273A (en) | Real-time rope skipping counting method, device and equipment based on video image target detection | |
CN107694046A (en) | A kind of body building training method, device and computer-readable recording medium | |
Ingwersen et al. | SportsPose-A Dynamic 3D sports pose dataset | |
CN108063915A (en) | A kind of image-pickup method and system | |
CN105850109A (en) | Information processing device, recording medium, and information processing method | |
CN113706507A (en) | Real-time rope skipping counting method, device and equipment based on human body posture detection | |
CN114245210B (en) | Video playing method, device, equipment and storage medium | |
CN114037923A (en) | Target activity hotspot graph drawing method, system, equipment and storage medium | |
CN116703968B (en) | Visual tracking method, device, system, equipment and medium for target object | |
CN114241595A (en) | Data processing method and device, electronic equipment and computer storage medium | |
CN114584680A (en) | Motion data display method and device, computer equipment and storage medium | |
CN114071211B (en) | Video playing method, device, equipment and storage medium | |
CN117058758A (en) | Intelligent sports examination method based on AI technology and related device | |
CN111353347B (en) | Action recognition error correction method, electronic device, and storage medium | |
CN111353345B (en) | Method, apparatus, system, electronic device, and storage medium for providing training feedback |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |