CN113673494B - Human body posture standard motion behavior matching method and system - Google Patents


Info

Publication number
CN113673494B
CN113673494B
Authority
CN
China
Prior art keywords
frame
human body
video
matched
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111237633.0A
Other languages
Chinese (zh)
Other versions
CN113673494A (en)
Inventor
Wang Haibin (王海滨)
Ji Wenfeng (纪文峰)
Current Assignee
Qingdao Genjian Intelligent Technology Co ltd
Original Assignee
Qingdao Genjian Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Genjian Intelligent Technology Co ltd filed Critical Qingdao Genjian Intelligent Technology Co ltd
Priority to CN202111237633.0A priority Critical patent/CN113673494B/en
Publication of CN113673494A publication Critical patent/CN113673494A/en
Application granted granted Critical
Publication of CN113673494B publication Critical patent/CN113673494B/en
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/30: ICT specially adapted for medical diagnosis, medical simulation or medical data mining for calculating health indices; for individual health risk assessment

Abstract

The invention provides a human body posture standard motion behavior matching method and system, comprising: preprocessing the video behavior data to be matched to obtain a frame sequence; extracting the human body joint points in each frame image of the sequence; asynchronously aligning the frame sequence with the standard behavior data in time, obtaining an adjusted frame rate for the video data to be matched or for the standard behavior data; obtaining the center-of-gravity coordinates of the human body in the video behavior data to be matched from the extracted joint points and the adjusted frame rate of the video data to be matched; obtaining the center-of-gravity coordinates of the standard behavior data from its adjusted frame rate; and normalizing the center-of-gravity coordinates of the human body in the video behaviors to be matched against those of the standard behavior data, and then matching. The invention aligns the input video data with the standard data using a video frame-interpolation algorithm, achieving asynchronous matching of video data and eliminating the interference of body-type differences on behavior matching.

Description

Human body posture standard motion behavior matching method and system
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a human body posture standard motion behavior matching method and system.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Deep-learning methods have achieved good results in pose estimation, so human-posture matching and evaluation techniques are now applied in many practical scenarios, including supervising the daily behavior habits of the elderly living alone and of students. A child's sitting, writing, and walking postures can be monitored and bad habits corrected in time; specifically, the child's sitting posture can be scored against a standard sitting posture and correction suggestions given, encouraging good sitting habits, preventing myopia, and helping to establish good body posture. The technique can also be used in sports training, scoring an athlete's posture and movements during training to improve training effectiveness and efficiency.
However, existing methods for matching dynamic human postures are immature: because the standard-action template for video behavior data is single, differences in frame rate, body type, and so on cannot be eliminated during matching, so behavior-matching accuracy is low.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a human body posture standard motion behavior matching method that uses a video frame-interpolation algorithm to eliminate the frame-rate difference and template center-of-gravity normalization to eliminate the body-type difference.
In order to achieve the above object, one or more embodiments of the present invention provide the following technical solutions:
in a first aspect, a human body posture standard motion behavior matching method is disclosed, which comprises the following steps:
preprocessing the behavior data of the video to be matched to obtain a frame sequence;
extracting human body joint points in each frame of image in the frame sequence;
performing asynchronous alignment of the frame sequence and the standard behavior data in time using a video frame-interpolation algorithm, to obtain an adjusted frame rate for the video data to be matched or for the standard behavior data;
extracting the human body joint points in each frame of the adjusted video to be matched, and obtaining the center-of-gravity coordinates of the human body in the video behavior data to be matched from the extracted joint points;
obtaining the center-of-gravity coordinates of the standard behavior data from the adjusted standard-behavior-data frame rate;
and normalizing the center-of-gravity coordinates of the human body in the video behaviors to be matched against the center-of-gravity coordinates of the standard behavior data, and then matching.
In a further technical scheme, the method also comprises a step of calculating behavior similarity: obtaining the similarity between the video behavior to be matched and the standard behavior data from the human posture joint-point information in the matched image pairs.
In a further technical scheme, after the human body joint points in each frame image of the frame sequence are extracted, they are combined to obtain skeleton frame data consisting of all the joint points of the human body in the video to be matched.
In a further technical scheme, the frame sequence and the standard behavior data are asynchronously aligned in time with a video frame-interpolation algorithm; if the standard behavior data has more frames than the video data to be matched, the video data to be matched is adjusted until the two frame counts are equal.
Without this adjustment, the standard video and the video to be matched have different frame counts and the behavior in the video cannot be evaluated; current behavior evaluation methods, which include action alignment and the like, assume equal frame counts. The invention therefore aligns the frame counts with a video frame-interpolation algorithm.
In a further technical scheme, if the standard behavior data has fewer frames than the video data to be matched, the standard behavior data is adjusted until the two frame counts are equal.
In a further technical scheme, the center-of-gravity coordinates of the human body in the video behavior data to be matched are obtained as follows:
dividing the human body into several body segments based on the extracted three-dimensional coordinates of the human body joint points;
calculating the center of gravity of the human body by the body-segment method: first the three-dimensional center-of-gravity coordinates of each body segment are calculated, then the segment centers of gravity are combined by weighted average, and finally the center-of-gravity coordinates of the human body in the video data to be matched are obtained.
In a further technical scheme, the extracted human body joint points are human body 3D joint point information.
In a second aspect, a human body posture standard motion behavior matching system is disclosed, comprising:
a frame sequence acquisition module configured to: preprocessing the behavior data of the video to be matched to obtain a frame sequence;
a human body joint point extraction module configured to: extracting human body joint points in each frame of image in the frame sequence;
an asynchronous alignment module configured to: perform asynchronous alignment of the frame sequence and the standard behavior data in time using a video frame-interpolation algorithm, obtaining an adjusted frame rate for the video data to be matched or for the standard behavior data;
a matching module configured to: obtain the center-of-gravity coordinates of the human body in the video behavior data to be matched from the extracted joint points and the adjusted frame rate of the video data to be matched;
obtain the center-of-gravity coordinates of the standard behavior data from the adjusted standard-behavior-data frame rate;
and normalize the center-of-gravity coordinates of the human body in the video behaviors to be matched against the center-of-gravity coordinates of the standard behavior data, and then match.
In a further technical scheme, the system also comprises a similarity calculation module configured to: obtain the similarity between the video behavior to be matched and the standard behavior data from the human posture joint-point information in the matched image pairs.
In a further technical scheme, in the frame sequence acquisition module, after the human body joint points in each frame image of the frame sequence are extracted, they are combined to obtain skeleton frame data consisting of all the joint points of the human body in the video to be matched.
In a further technical scheme, in the asynchronous alignment module, the frame sequence and the standard behavior data are asynchronously aligned in time with a video frame-interpolation algorithm; if the standard behavior data has more frames than the video data to be matched, the video data to be matched is adjusted until the two frame counts are equal.
In a further technical scheme, in the asynchronous alignment module, if the standard behavior data has fewer frames than the video data to be matched, the standard behavior data is adjusted until the two frame counts are equal.
In a further technical scheme, in the matching module, the center-of-gravity coordinates of the human body in the video behavior data to be matched are obtained as follows:
dividing the human body into several body segments based on the extracted three-dimensional coordinates of the human body joint points;
calculating the center of gravity of the human body by the body-segment method: first the three-dimensional center-of-gravity coordinates of each body segment are calculated, then the segment centers of gravity are combined by weighted average, and finally the center-of-gravity coordinates of the human body in the video data to be matched are obtained.
The above one or more technical solutions have the following beneficial effects:
the invention aligns the input video data with the standard data by using a video frame complementing algorithm to ensure that the matching of the input video behaviors eliminates the interference of action speed factors, realizes the asynchronous matching of the video data and also aims to eliminate the interference of human body form factors on behavior matching.
The invention introduces a template gravity center concept, performs gravity center normalization on the joint points of each frame in the data to be matched, then performs matching, and finally evaluates the similarity of behaviors according to the information of the posture joint points of the human body in the image pair.
The method has the characteristics of universality and more accurate matching. On one hand, the method realizes asynchronous matching of video data and can be applied to any human motion behavior matching task, and on the other hand, the method avoids matching deviation caused by single standard motion behavior template through designed motion alignment and motion mapping method. The method has good effect on the matching task of the human body movement behaviors. The effectiveness of the proposed method is verified by performing experiments on the existing attitude estimation data set.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
FIG. 1 is a simulation diagram of a standard motion behavior evaluation method based on asynchronous matching of 3D gestures according to an embodiment of the present invention;
FIG. 2 is a flowchart of a standard motion behavior evaluation method based on 3D pose asynchronous matching according to an embodiment of the present invention.
Detailed Description
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention.
The embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
Example one
Because the standard motion-behavior template is single, differences in frame rate and body shape exist between it and the input video data to be matched; the invention uses a video frame-interpolation algorithm to eliminate the frame-rate difference and template center-of-gravity normalization to eliminate the body-shape difference. Referring to FIG. 1, the invention discloses a human body posture standard motion behavior matching method, which comprises the following steps:
First, the input video behavior data to be evaluated is preprocessed into a frame sequence, and the human body joint points in each frame image are extracted with a 3D pose-estimation algorithm. The input video data is then aligned with the standard data in time using a video frame-interpolation algorithm, so that matching of the input video behaviors is free of interference from differences in action speed. The concept of a template center of gravity is introduced to eliminate the interference of body-type differences on behavior matching: the action to be evaluated is center-of-gravity-normalized and then matched, and behavior similarity is finally evaluated from the human posture joint-point information in each image pair.
In a more specific embodiment, suppose a 3D pose-estimation data set A is selected. Data are drawn from A as the video to be matched, with frame rate f1 and N = 30 frames in total, denoted V1; further data are drawn from A as the standard behavior data, with frame rate f2 and M = 60 frames in total, denoted V2.
In the invention the template is single, so its center-of-gravity position is fixed. In this technical scheme, the center of gravity of the human posture skeleton in the actual scene is estimated, and the difference between the two center-of-gravity positions is used for coordinate normalization, i.e. skeleton-coordinate mapping, yielding posture joint-point skeletons for people of different body types.
Referring to fig. 2, the present invention specifically includes the following steps:
step S0: video data preprocessing: processing the input behavior to be evaluated into videoFrame sequence
Figure 192717DEST_PATH_IMAGE005
Step S1: human body 3D joint-point information is extracted from the frame sequence with the VideoPose3D pose-estimation algorithm: the 17 human body joint points in each frame image obtained in step S0 are extracted and combined into skeleton frame data consisting of all the joint points of the human body in the video.
In this step, the 3D pose-estimation algorithm matches the accuracy of other methods while having lower complexity and fewer parameters. It is compatible with any 2D pose detector and also copes well with large, cluttered video backgrounds.
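As a sketch of step S1 under stated assumptions, the per-frame joint arrays can be stacked into the skeleton frame data described above; the `estimate_pose` callable stands in for VideoPose3D, whose real interface differs.

```python
import numpy as np

def build_skeleton_data(frames, estimate_pose):
    """Run a per-frame 3D pose estimator (e.g. VideoPose3D; the callable
    interface here is hypothetical) and stack the per-frame (17, 3) joint
    arrays into one (T, 17, 3) skeleton-sequence array."""
    return np.stack([np.asarray(estimate_pose(f)) for f in frames])

# Usage with a dummy estimator that returns 17 zeroed 3D joints per frame
skeleton = build_skeleton_data(range(30), lambda f: np.zeros((17, 3)))
```

The (T, 17, 3) layout keeps one 17-joint skeleton per frame, which is what the later center-of-gravity and alignment steps operate on.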
Step S2: action alignment: the DAIN (Depth-Aware Video Frame Interpolation) frame-interpolation algorithm is used to asynchronously align the video data to be matched V1 and the standard behavior data V2 in time. Because the standard behavior data has more frames (M = 60 > N = 30), the frame-interpolation algorithm adjusts the frame rate of the video data to be matched so that the two frame counts become the same.
In this step, "asynchronous" means that the video to be matched is aligned with the standard video only in frame count by the frame-interpolation algorithm, not action-synchronized in the strict sense. The approach is simple and easy to implement. Asynchronous alignment yields a video to be matched and a standard video with the same frame count, on which action matching and then action evaluation can be performed.
The frame-rate adjustment satisfies the condition that the number of frames in the video to be matched equals the number of frames in the standard behavior video data.
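The frame-count alignment of step S2 can be sketched as follows. DAIN interpolates in image space; as a simplified stand-in, this sketch linearly interpolates the extracted 3D joint sequences along the time axis, which likewise equalizes the frame counts.

```python
import numpy as np

def align_frame_count(seq, target_len):
    """Resample a (T, 17, 3) joint sequence to target_len frames by linear
    interpolation along time (a simplified stand-in for the image-space
    interpolation that DAIN performs in the patent)."""
    T = seq.shape[0]
    src_t = np.linspace(0.0, 1.0, T)
    dst_t = np.linspace(0.0, 1.0, target_len)
    out = np.empty((target_len,) + seq.shape[1:])
    for j in range(seq.shape[1]):          # each joint
        for c in range(seq.shape[2]):      # each coordinate axis
            out[:, j, c] = np.interp(dst_t, src_t, seq[:, j, c])
    return out

# 30-frame input aligned to the 60-frame standard, as in the example above
to_match = np.random.rand(30, 17, 3)
aligned = align_frame_count(to_match, 60)
```

Linear interpolation preserves the first and last frames exactly, so the endpoints of the action are unchanged by the resampling.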
Step S3: combining the three-dimensional coordinate data of the 17 joint points extracted in the step S1, the human body is divided into 11 human body segments: head and neck, upper torso, lower torso, left upper arm, right upper arm, left forearm, right forearm, left thigh, right thigh, left calf, right calf.
In this step, the frame-interpolation algorithm asynchronously aligns the video data to be matched with the standard behavior data in time, achieving asynchronous matching of video data while avoiding the matching deviation caused by inconsistent action speeds.
Step S4: the center of gravity of the human body is calculated by the body-segment method. First the three-dimensional center-of-gravity coordinates of each body segment from S3 are calculated with formula (1); the segment centers of gravity are then combined by the weighted average of formula (2), finally giving the center-of-gravity coordinates of the human body in the video data to be matched. Here j denotes the j-th segment, n_j the number of joints contained in the segment, x_i the coordinate value (in each direction) of a joint contained in the segment, w_j the weighting coefficient, and (x_G, y_G, z_G) the three-dimensional center-of-gravity coordinates of the human body in each frame of the video to be matched:

x_j = (1/n_j) Σ_i x_i, the sum running over the n_j joints of segment j (likewise for y_j and z_j); (1)

x_G = Σ_j w_j x_j, the sum running over the 11 segments (likewise for y_G and z_G). (2)
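A minimal sketch of the segment-method calculation in step S4. The joint-index table and weighting coefficients below are illustrative assumptions: the patent names the 11 segments but publishes neither the index assignment nor the weight values.

```python
import numpy as np

# Hypothetical joint indices for the 11 segments and illustrative weights
# roughly following relative segment masses; neither is from the patent.
SEGMENT_JOINTS = {
    "head_neck": [8, 9, 10], "upper_torso": [0, 7, 8, 11, 14],
    "lower_torso": [0, 1, 4],
    "left_upper_arm": [11, 12], "right_upper_arm": [14, 15],
    "left_forearm": [12, 13],   "right_forearm": [15, 16],
    "left_thigh": [4, 5],       "right_thigh": [1, 2],
    "left_calf": [5, 6],        "right_calf": [2, 3],
}
SEGMENT_WEIGHTS = {
    "head_neck": 0.10, "upper_torso": 0.25, "lower_torso": 0.21,
    "left_upper_arm": 0.03, "right_upper_arm": 0.03,
    "left_forearm": 0.02, "right_forearm": 0.02,
    "left_thigh": 0.11, "right_thigh": 0.11,
    "left_calf": 0.06, "right_calf": 0.06,
}

def body_center_of_gravity(joints):
    """Formula (1): each segment's COG is the mean of its joint coordinates.
    Formula (2): the body COG is the weighted average of the segment COGs."""
    cog = np.zeros(3)
    for name, idx in SEGMENT_JOINTS.items():
        seg_cog = joints[idx].mean(axis=0)       # (1) segment COG
        cog += SEGMENT_WEIGHTS[name] * seg_cog   # (2) weighted accumulation
    return cog / sum(SEGMENT_WEIGHTS.values())

joints = np.full((17, 3), 1.5)   # every joint at (1.5, 1.5, 1.5)
cog = body_center_of_gravity(joints)
```

When every joint sits at the same point, the weighted average collapses to that point, which gives a quick sanity check on the weighting.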
step S5: calculating the barycentric coordinates of the human body for each frame in the standard video data by the method of step S4
Figure 956197DEST_PATH_IMAGE015
The standard video data is the true value of the assessment of the motor behaviour, i.e. the normative behaviour video, and this data is given prior to assessment by the method, or may be filmed or otherwise obtained by the method user himself.
Step S6: action mapping: all joint-point coordinates in the video frames of the data to be matched are normalized against the center of gravity of the standard video data obtained in step S5, and then matched. Let p = (x, y, z) be the coordinates of a joint point in the original video frame and p' = (x', y', z') its normalized coordinates; then:

x' = x - x_G + x_G', y' = y - y_G + y_G', z' = z - z_G + z_G',

where (x_G, y_G, z_G) is the per-frame center of gravity of the human body in the video to be matched and (x_G', y_G', z_G') that of the standard video data.
the normalization refers to the normalization of coordinates of joint points of a human body in a video to be matched, so that the coordinates of the joint points can be transformed according to the body type of the human body instead of being single and unchanged as a template.
Step S7: behavior evaluation: behavior similarity is evaluated from the human posture joint-point information in each image pair; a high similarity between like actions in the pose-estimation data set demonstrates the effectiveness of the method.
The images in this step form image pairs: frame pairs consisting of a frame from the video to be matched and the corresponding frame from the standard video. During action evaluation, the pairs are evaluated one by one.
The pose-estimation algorithm estimates the joint points of the human body, and the joint positions are represented by coordinates: two-dimensional coordinates for two-dimensional joint points and three-dimensional coordinates for three-dimensional joint-point information.
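A sketch of the pair-by-pair evaluation in step S7. The patent does not fix a scoring function, so the distance-to-similarity mapping below is a hypothetical choice; only the pair-by-pair structure comes from the description.

```python
import numpy as np

def pair_similarity(frame_a, frame_b):
    """Similarity of one image pair from the mean joint-wise Euclidean
    distance, mapped into (0, 1]; the 1/(1+d) form is an assumption."""
    mean_dist = np.linalg.norm(frame_a - frame_b, axis=1).mean()
    return 1.0 / (1.0 + mean_dist)

def behavior_similarity(seq_a, seq_b):
    """Evaluate the frame pairs of two aligned (T, 17, 3) sequences one by
    one, as step S7 describes, and average the per-pair scores."""
    return float(np.mean([pair_similarity(a, b) for a, b in zip(seq_a, seq_b)]))

seq = np.random.rand(60, 17, 3)
perfect = behavior_similarity(seq, seq)   # identical behaviors score 1.0
```

Identical sequences score exactly 1.0, and the score decreases toward 0 as the normalized skeletons drift apart, which is the behavior a matching score needs.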
Example two
The object of this embodiment is to provide a human body posture standard motion behavior matching system, including:
a frame sequence acquisition module configured to: preprocessing the behavior data of the video to be matched to obtain a frame sequence;
a human body joint point extraction module configured to: extracting human body joint points in each frame of image in the frame sequence;
an asynchronous alignment module configured to: perform asynchronous alignment of the frame sequence and the standard behavior data in time using a video frame-interpolation algorithm, obtaining an adjusted frame rate for the video data to be matched or for the standard behavior data;
a matching module configured to: obtain the center-of-gravity coordinates of the human body in the video behavior data to be matched from the extracted joint points and the adjusted frame rate of the video data to be matched;
obtain the center-of-gravity coordinates of the standard behavior data from the adjusted standard-behavior-data frame rate;
and normalize the center-of-gravity coordinates of the human body in the video behaviors to be matched against the center-of-gravity coordinates of the standard behavior data, and then match.
Further comprising: a similarity calculation module configured to: obtain the similarity between the video behavior to be matched and the standard behavior data from the human posture joint-point information in the matched image pairs.
In the frame sequence acquisition module, after the human body joint points in each frame image of the frame sequence are extracted, they are combined to obtain skeleton frame data consisting of all the joint points of the human body in the video to be matched.
In the asynchronous alignment module, the frame sequence and the standard behavior data are asynchronously aligned in time with a video frame-interpolation algorithm; if the standard behavior data has more frames than the video data to be matched, the video data to be matched is adjusted until the two frame counts are equal.
In the asynchronous alignment module, if the standard behavior data has fewer frames than the video data to be matched, the standard behavior data is adjusted until the two frame counts are equal.
In the matching module, the center-of-gravity coordinates of the human body in the video behavior data to be matched are obtained as follows:
dividing the human body into several body segments based on the extracted three-dimensional coordinates of the human body joint points;
calculating the center of gravity of the human body by the body-segment method: first the three-dimensional center-of-gravity coordinates of each body segment are calculated, then the segment centers of gravity are combined by weighted average, and finally the center-of-gravity coordinates of the human body in the video data to be matched are obtained.
In summary, for action alignment the invention reduces the frame-count difference between the standard action template and the video to be matched using a video frame-interpolation algorithm, and for action mapping it eliminates the body-type difference between the person to be matched and the standard data using center-of-gravity normalization.
Those skilled in the art will appreciate that the modules or steps of the present invention described above can be implemented using general purpose computer means, or alternatively, they can be implemented using program code that is executable by computing means, such that they are stored in memory means for execution by the computing means, or they are separately fabricated into individual integrated circuit modules, or multiple modules or steps of them are fabricated into a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, it is not intended to limit the scope of the present invention, and it should be understood by those skilled in the art that various modifications and variations can be made without inventive efforts by those skilled in the art based on the technical solution of the present invention.

Claims (8)

1. The human body posture standard motion behavior matching method is characterized by comprising the following steps:
step S0: preprocessing the behavior data of the video to be matched to obtain a frame sequence;
step S1: extracting human body joint points in each frame of image in the frame sequence, and extracting three-dimensional coordinate data of each human body joint point;
step S2: performing asynchronous alignment of the frame sequence and the standard behavior data in time using a video frame-interpolation algorithm;
step S3: dividing the human body into a plurality of human body segments based on the extracted three-dimensional coordinate data of the human body joint points;
step S4: calculating the center of gravity of the human body by the body-segment method: first calculating the three-dimensional center-of-gravity coordinates of each body segment, then combining the segment centers of gravity by weighted average, finally obtaining the center-of-gravity coordinates of the human body in the video data to be matched;
step S5: calculating the center-of-gravity coordinates of each frame in the standard video data by the method of step S4;
step S6: normalizing all the joint-point coordinates in the video frames of the data to be matched against the center of gravity of the standard video data obtained in step S5, and then matching; specifically: let p = (x, y, z) be the coordinates of a joint point in the original video frame and p' = (x', y', z') the normalized joint-point coordinates; then:

x' = x - x_G + x_G'

y' = y - y_G + y_G'

z' = z - z_G + z_G'

wherein (x_G, y_G, z_G) represents the three-dimensional center-of-gravity coordinates of the human body in each frame of the video to be matched, and (x_G', y_G', z_G') represents the center-of-gravity coordinates of the human body in each frame of the standard video data.
2. The human body posture standard motion behavior matching method as claimed in claim 1, further comprising a step of calculating behavior similarity: obtaining the similarity between the video behavior to be matched and the standard behavior data from the human posture joint-point information in the matched image pairs.
3. The human body posture standard motion behavior matching method as claimed in claim 1, wherein after human body joint points in each frame image in the frame sequence are extracted, the human body joint points are combined to obtain skeleton frame data composed of all the joint points of the human body in the video to be matched.
4. The human body posture standard motion behavior matching method as claimed in claim 1, wherein the frame sequence and the standard behavior data are asynchronously aligned in time sequence by a video frame complementing algorithm: if the number of standard behavior data frames is greater than the number of video data frames to be matched, the video data frames to be matched are adjusted so that the two are equal in number;
and if the number of standard behavior data frames is smaller than the number of video data frames to be matched, the standard behavior data frames are adjusted so that the two are equal in number.
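The frame complementing algorithm itself is not specified in the claim; a simple stand-in that lengthens the shorter sequence by linear interpolation, following the case analysis above, might look like:

```python
import numpy as np

def complement_frames(seq: np.ndarray, target_len: int) -> np.ndarray:
    """Lengthen a (T, N, 3) joint sequence to target_len frames by linearly
    interpolating between neighbouring frames (a simple stand-in for the
    patent's unspecified frame complementing algorithm)."""
    t_old = np.linspace(0.0, 1.0, len(seq))
    t_new = np.linspace(0.0, 1.0, target_len)
    flat = seq.reshape(len(seq), -1)                     # (T, N*3)
    cols = [np.interp(t_new, t_old, flat[:, k]) for k in range(flat.shape[1])]
    return np.stack(cols, axis=1).reshape(target_len, *seq.shape[1:])

def align(standard: np.ndarray, to_match: np.ndarray):
    """Equalize the frame counts of the two sequences: whichever sequence is
    shorter is complemented up to the length of the longer one."""
    if len(standard) > len(to_match):
        to_match = complement_frames(to_match, len(standard))
    elif len(standard) < len(to_match):
        standard = complement_frames(standard, len(to_match))
    return standard, to_match
```

Interpolating the shorter sequence (rather than dropping frames from the longer one) keeps every captured pose in play, at the cost of synthesizing intermediate poses that were never observed.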
5. Human body posture standard motion action matching system, characterized by includes:
a frame sequence acquisition module configured to: preprocessing the behavior data of the video to be matched to obtain a frame sequence;
a human body joint point extraction module configured to: extracting human body joint points in each frame of image in the frame sequence, and extracting three-dimensional coordinate data of each human body joint point;
an asynchronous alignment module configured to: performing asynchronous alignment on the frame sequence and standard behavior data in time sequence by using a video frame complementing algorithm;
a matching module configured to perform the following steps:
step S3: dividing the human body into a plurality of human body segments based on the extracted three-dimensional coordinate data of the human body joint points;
step S4: calculating the human body center of gravity by the segment method: first calculating the three-dimensional spatial coordinates of the center of gravity of each human body segment, then computing a weighted average of the segment center-of-gravity coordinates, and finally obtaining the center-of-gravity position coordinates of the human body in the video data to be matched;
step S5: calculating the center-of-gravity coordinates of each frame in the standard video data using the method of step S4;
step S6: normalizing all joint point coordinates in each frame of the video to be matched according to the center of gravity of the standard video data obtained in step S5, and then performing the matching; specifically, let the coordinates of a joint point in the original video frame be (x, y, z) and the normalized joint point coordinates be (x′, y′, z′); then:

x′ = x − x₁ + x₂

y′ = y − y₁ + y₂

z′ = z − z₁ + z₂

where (x₁, y₁, z₁) denotes the three-dimensional coordinates of the human body center of gravity in each frame of the video to be matched, and (x₂, y₂, z₂) denotes the center-of-gravity coordinates of the human body in each frame of the standard video data.
6. The human body posture standard motion behavior matching system of claim 5, further comprising a similarity calculation module configured to: obtain the similarity between the behavior in the video to be matched and the standard behavior data according to the human body posture joint point information in the matched image pairs.
7. The human body posture standard motion behavior matching system according to claim 5, wherein in the frame sequence acquisition module, after the human body joint points in each frame image of the frame sequence are extracted, the human body joint points are combined to obtain skeleton frame data composed of all the joint points of the human body in the video to be matched.
8. The human body posture standard motion behavior matching system according to claim 5, wherein in the asynchronous alignment module, the frame sequence and the standard behavior data are asynchronously aligned in time sequence by a video frame complementing algorithm: if the number of standard behavior data frames is greater than the number of video data frames to be matched, the video data frames to be matched are adjusted so that the two are equal in number;
and if the number of standard behavior data frames is smaller than the number of video data frames to be matched, the standard behavior data frames are adjusted so that the two are equal in number.
CN202111237633.0A 2021-10-25 2021-10-25 Human body posture standard motion behavior matching method and system Active CN113673494B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111237633.0A CN113673494B (en) 2021-10-25 2021-10-25 Human body posture standard motion behavior matching method and system


Publications (2)

Publication Number Publication Date
CN113673494A CN113673494A (en) 2021-11-19
CN113673494B true CN113673494B (en) 2022-03-08

Family

ID=78551057


Country Status (1)

Country Link
CN (1) CN113673494B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101515371A (en) * 2009-03-26 2009-08-26 浙江大学 Human body movement data fragment extracting method
CN102074019A (en) * 2010-12-28 2011-05-25 深圳泰山在线科技有限公司 Human tracking method and system
CN103002198A (en) * 2011-09-08 2013-03-27 株式会社东芝 Monitoring device, method thereof
CN108986884A (en) * 2018-05-31 2018-12-11 杭州同绘科技有限公司 The training system and method that a kind of balanced rehabilitation and cognitive rehabilitation blend
CN109830078A (en) * 2019-03-05 2019-05-31 北京智慧眼科技股份有限公司 Intelligent behavior analysis method and intelligent behavior analytical equipment suitable for small space
CN110008857A (en) * 2019-03-21 2019-07-12 浙江工业大学 A kind of human action matching methods of marking based on artis
CN110213635A (en) * 2018-04-08 2019-09-06 腾讯科技(深圳)有限公司 Video mixed flow method, video flow mixing device and storage medium
CN110210284A (en) * 2019-04-12 2019-09-06 哈工大机器人义乌人工智能研究院 A kind of human body attitude behavior intelligent Evaluation method
CN110309732A (en) * 2019-06-13 2019-10-08 浙江大学 Activity recognition method based on skeleton video
CN110796077A (en) * 2019-10-29 2020-02-14 湖北民族大学 Attitude motion real-time detection and correction method
CN111260718A (en) * 2020-01-17 2020-06-09 杭州同绘科技有限公司 Human body gravity center estimation method based on Kinect camera
CN111833245A (en) * 2020-05-19 2020-10-27 南京邮电大学 Super-resolution reconstruction method based on multi-scene video frame supplementing algorithm

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106815855A (en) * 2015-12-02 2017-06-09 山东科技职业学院 Based on the human body motion tracking method that production and discriminate combine
JP6841097B2 (en) * 2017-03-09 2021-03-10 富士通株式会社 Movement amount calculation program, movement amount calculation method, movement amount calculation device and business support system
CN106991690B (en) * 2017-04-01 2019-08-20 电子科技大学 A kind of video sequence synchronous method based on moving target timing information
CN108509878B (en) * 2018-03-19 2019-02-12 特斯联(北京)科技有限公司 A kind of safety door system and its control method based on Human Body Gait Analysis
CN108597578B (en) * 2018-04-27 2021-11-05 广东省智能制造研究所 Human motion assessment method based on two-dimensional skeleton sequence
CN110321780B (en) * 2019-04-30 2022-05-17 苏州大学 Abnormal falling behavior detection method based on space-time motion characteristics
CN113255479A (en) * 2021-05-10 2021-08-13 北京邮电大学 Lightweight human body posture recognition model training method, action segmentation method and device


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A simple calibration for upper limb motion tracking and reconstruction; Yan Wang et al.; 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; 20141106; pp. 5868-5871 *
Drift-Free and Self-Aligned IMU-Based Human Gait Tracking System With Augmented Precision and Robustness; Yawen Chen et al.; IEEE Robotics and Automation Letters; 20200615; pp. 4671-4678 *
Behavior recognition based on two-dimensional skeleton motion feature vectors; Xiao Lixue et al.; Computer and Digital Engineering; 20200930; Vol. 48, No. 9, pp. 2201-2206 *
A survey of joint point behavior recognition based on deep learning; Liu Yun et al.; Journal of Electronics & Information Technology; 20210630; Vol. 43, No. 6, pp. 1790-1802 *


Similar Documents

Publication Publication Date Title
CN105389539B (en) A kind of three-dimension gesture Attitude estimation method and system based on depth data
CN111460875B (en) Image processing method and apparatus, image device, and storage medium
Urtasun et al. Monocular 3D tracking of the golf swing
WO2021169839A1 (en) Action restoration method and device based on skeleton key points
CN110188700B (en) Human body three-dimensional joint point prediction method based on grouping regression model
CN102622766A (en) Multi-objective optimization multi-lens human motion tracking method
WO2023071964A1 (en) Data processing method and apparatus, and electronic device and computer-readable storage medium
CN111259713A (en) Sight tracking method based on self-adaptive weighting
CN111507184B (en) Human body posture detection method based on parallel cavity convolution and body structure constraint
WO2020147791A1 (en) Image processing method and device, image apparatus, and storage medium
Matsuyama et al. Ballroom dance step type recognition by random forest using video and wearable sensor
CN103839280B (en) A kind of human body attitude tracking of view-based access control model information
CN110348370B (en) Augmented reality system and method for human body action recognition
Liu et al. Trampoline motion decomposition method based on deep learning image recognition
Ko et al. CNN and bi-LSTM based 3D golf swing analysis by frontal swing sequence images
Yu et al. 3D facial motion tracking by combining online appearance model and cylinder head model in particle filtering
CN114550292A (en) High-physical-reality human body motion capture method based on neural motion control
CN113673494B (en) Human body posture standard motion behavior matching method and system
CN113192186B (en) 3D human body posture estimation model establishing method based on single-frame image and application thereof
CN115346640A (en) Intelligent monitoring method and system for closed-loop feedback of functional rehabilitation training
CN112099330B (en) Holographic human body reconstruction method based on external camera and wearable display control equipment
JP2023536074A (en) Full skeleton 3D pose reconstruction from monocular camera
JP2022092528A (en) Three-dimensional person attitude estimation apparatus, method, and program
Cha et al. Mobile. Egocentric human body motion reconstruction using only eyeglasses-mounted cameras and a few body-worn inertial sensors
CN111914798B (en) Human body behavior identification method based on skeletal joint point data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Method and System for Matching Human Posture Standard Motion Behavior

Effective date of registration: 20230506

Granted publication date: 20220308

Pledgee: Qingdao Jiaozhou Shengyu Financing Guarantee Co.,Ltd.

Pledgor: Qingdao genjian Intelligent Technology Co.,Ltd.

Registration number: Y2023980039931