CN115937969A - Method, device, equipment and medium for determining target person in sit-up examination

Method, device, equipment and medium for determining target person in sit-up examination

Info

Publication number
CN115937969A
CN115937969A
Authority
CN
China
Prior art keywords
determining
coordinate
image
target
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211314123.3A
Other languages
Chinese (zh)
Inventor
陈俊伟
黄宏程
胡燊
王灿
刘军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunteng Technology Co ltd
Original Assignee
Kunteng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunteng Technology Co ltd filed Critical Kunteng Technology Co ltd
Priority to CN202211314123.3A
Publication of CN115937969A
Legal status: Pending

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a method, a device, equipment and a medium for determining a target person in a sit-up examination, in the field of computer vision. The method comprises the following steps: acquiring a video image in a target area, wherein the video image comprises a person image; extracting human skeletal joint points from the person image, and obtaining skeletal point coordinate values of the joint points according to their coordinate positions in a preset coordinate system; determining a human body action characteristic value of the person image according to the skeletal point coordinate values; judging whether the human body action characteristic value is within a preset action characteristic value range; if it is, determining the person image to be a target person image; and determining a target person monitoring range according to the target person image, and monitoring persons within that range. The invention can monitor the target person in a complex environment and in the presence of multiple persons.

Description

Method, device, equipment and medium for determining target person in sit-up examination
Technical Field
The invention relates to the technical field of computer vision, in particular to a method, a device, equipment and a medium for determining a target figure in sit-up examination.
Background
In existing technical methods for sit-up examination, a dedicated fitness-test room is arranged for the target person, and each test subject is examined separately in a standalone test room.
However, in increasingly complex real operating environments, existing sit-up test methods cannot adapt as the environment changes: they can only handle a single environment and a single person, and cannot monitor a target person in a complex environment with multiple people present.
Disclosure of Invention
The invention provides a method, a device, equipment and a medium for determining a target person in a sit-up examination, and aims to solve the problem that sit-up test methods in the prior art can only monitor a single environment and a single person and cannot monitor the target person in a complex environment with multiple people present.
In order to solve the problems, the invention adopts the following technical scheme:
in a first aspect, the present invention provides a method for determining a target person in a sit-up assessment, the method comprising:
acquiring a video image in a target area, wherein the video image comprises a character image;
extracting human skeleton joint points in the figure image, and obtaining skeleton point coordinate values of the human skeleton joint points according to coordinate positions of the human skeleton joint points in a preset coordinate system;
determining a human body action characteristic value of the figure image according to the coordinate value of the skeleton point;
judging whether the human body action characteristic value is within a preset action characteristic value range or not;
if the human body action characteristic value is within a preset action characteristic value range, determining the figure image as a target figure image;
and determining a target person monitoring range according to the target person image, and monitoring persons in the target person monitoring range.
The further technical scheme is that the character image comprises a color image and a depth image, and the obtaining of the skeleton point coordinate value of the human skeleton joint point according to the coordinate position of the human skeleton joint point in a preset coordinate system comprises:
determining the X-axis coordinate and the Y-axis coordinate of the human body bone joint point in the coordinate system according to the color image, and determining the Z-axis coordinate of the human body bone joint point in the coordinate system according to the depth image;
and determining the coordinate value of the bone point according to the X-axis coordinate, the Y-axis coordinate and the Z-axis coordinate.
The further technical scheme is that the determining of the human body action characteristic value of the character image according to the coordinate value of the skeleton point comprises the following steps:
determining the actual distance between the human body bone joint points according to the coordinate values of the bone points, and determining the actual angle between the human body bone joint points according to the coordinate values of the bone points;
and determining the human body action characteristic value according to the actual distance and the actual angle.
The technical scheme is that the preset action characteristic value range comprises a preset distance range and a preset angle range, and the judgment of whether the human body action characteristic value is in the preset action characteristic value range comprises the following steps:
and judging whether the actual distance is within the preset distance range and the actual angle is within the preset angle range.
The further technical scheme is that the determining of the target person monitoring range according to the target person image comprises:
acquiring coordinate values of all skeleton points in the target character image;
selecting one skeletal point coordinate value from all skeletal point coordinate values in the target character image as a target character monitoring coordinate value according to a preset first condition;
and determining the target person monitoring range according to the target person monitoring coordinate value.
The further technical scheme is that the extracting human skeleton joint points in the character image comprises:
extracting initial human skeleton joint points from the human image;
and screening out initial human body bone joint points matched with the joint point types set by the user from the initial human body bone joint points to serve as the human body bone joint points.
The further technical scheme is that the determining the actual angle between the human body bone joint points according to the coordinate values of the bone points comprises the following steps:
selecting three target bone point coordinates from the bone point coordinate values;
selecting a target skeleton point coordinate from the three target skeleton point coordinates as a reference point, and constructing an included angle by using the reference point as a center;
and calculating the angle of the included angle according to the coordinates of the three target bone points to obtain the actual angle.
In a second aspect, the invention also provides a device for determining a target person in a sit-up assessment, comprising means for performing the method according to the first aspect.
In a third aspect, the present invention further provides an electronic device, including a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
a processor for implementing the steps of the method of the first aspect when executing the program stored in the memory.
In a fourth aspect, the invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
Compared with the prior art, the technical scheme provided by the invention has the following advantages:
firstly, acquiring a video image in a target area, wherein the video image comprises a figure image; then, extracting human skeleton joint points in the figure image, and obtaining skeleton point coordinate values of the human skeleton joint points according to the coordinate positions of the human skeleton joint points in a preset coordinate system; determining a human body action characteristic value of the figure image according to the coordinate value of the skeleton point, and judging whether the human body action characteristic value is within a preset action characteristic value range or not; if the human body action characteristic value is within a preset action characteristic value range, determining the character image as a target character image, so that the target character can be rapidly determined in a multi-person scene; and finally, determining a target person monitoring range according to the target person image, so that the spatial position needing to be monitored can be quickly determined under a complex and non-fixed environment, persons in the target person monitoring range are monitored, and accurate monitoring of the target person can be completed under a multi-person scene.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a method for determining a target person in a sit-up examination according to embodiment 1 of the present invention;
fig. 2 is a block diagram illustrating a structure of a device for determining a target person in a sit-up assessment according to embodiment 2 of the present invention;
FIG. 3 is a view of the knee joint angle in the sit-up assessment provided in embodiment 1 of the present invention;
FIG. 4 is a distribution diagram of initial human skeletal joint points in a sit-up examination provided in embodiment 1 of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to embodiment 3 of the present invention.
Detailed Description
In order to more fully understand the technical content of the present invention, the technical solution of the present invention will be further described and illustrated with reference to the following specific embodiments, but not limited thereto.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without any inventive step are within the scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Example 1
Referring to fig. 1, with reference to fig. 3 and 4, fig. 1 is a schematic flowchart of a method for determining a target person in a sit-up assessment according to embodiment 1 of the present invention. Specifically, as shown in FIG. 1, the method includes the following steps S101-S106.
S101, collecting a video image in a target area.
Specifically, the method for determining the target person in the sit-up examination mainly involves the following four modules: an image acquisition module, an image recognition module, a characteristic value acquisition module and a target person determination module. The processor controls a high-definition depth camera, for example a Kinect depth camera, to acquire video images of the person under test frame by frame during the test period, each frame of video image comprising a color image and a depth image. The video image comprises a person image.
S102, extracting human skeleton joint points in the character image, and obtaining skeleton point coordinate values of the human skeleton joint points according to coordinate positions of the human skeleton joint points in a preset coordinate system.
Specifically, the image recognition module extracts human body skeleton joint points in the character image, and obtains skeleton point coordinate values of the human body skeleton joint points according to coordinate positions of the human body skeleton joint points in a preset coordinate system, wherein the preset coordinate system is a 3D coordinate system set by a user.
In one embodiment, the extracting human skeletal joint points in the human image includes:
extracting initial human skeleton joint points from the human image;
and screening out initial human body bone joint points matched with the joint point types set by the user from the initial human body bone joint points to serve as the human body bone joint points.
Specifically, an image recognition module in the processor extracts initial human skeletal joint points from the person image. The initial human skeletal joint points are the 25 skeletal joint points of a human skeleton, namely (using the usual depth-camera joint names): the spine-base point, spine-mid point, neck point, head point, left shoulder point, left elbow point, left wrist point, left hand point, right shoulder point, right elbow point, right wrist point, right hand point, left hip point, left knee point, left ankle point, left foot point, right hip point, right knee point, right ankle point, right foot point, spine-shoulder point, left hand-tip point, left thumb point, right hand-tip point and right thumb point.
Then, the initial human skeletal joint points matching the joint types set by the user are screened out from the initial joint points and used as the human skeletal joint points, which reduces the amount of computation in subsequent steps. During screening, joint data that contribute nothing to describing the sit-up action or to the human body action characteristic values are discarded, such as the data of the 8 joint points for the left and right thumbs, left and right hand tips, left and right hands, and left and right feet; only the data of the remaining 17 joint points are processed when calculating the human body action characteristic values. The skeletal joint points are generally joined by connecting lines to form an overall human skeleton frame, which is displayed on the interface in real time. This greatly reduces the data volume and simplifies calculation.
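Purely as an illustration of this screening step, the sketch below discards those 8 joints and keeps the remaining 17; the joint names follow common depth-camera naming and are assumptions rather than identifiers fixed by this description.

```python
# Joints assumed to contribute nothing to the sit-up feature values.
DISCARDED_JOINTS = {
    "thumb_left", "thumb_right",
    "hand_tip_left", "hand_tip_right",
    "hand_left", "hand_right",
    "foot_left", "foot_right",
}

def screen_joints(initial_joints: dict) -> dict:
    """Keep only the joint types configured by the user: 25 - 8 = 17 joints."""
    return {name: coord for name, coord in initial_joints.items()
            if name not in DISCARDED_JOINTS}
```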
In an embodiment, the obtaining of the bone point coordinate values of the human body bone joint points according to the coordinate positions of the human body bone joint points in a preset coordinate system includes:
determining the X-axis coordinate and the Y-axis coordinate of the human body bone joint point in the coordinate system according to the color image, and determining the Z-axis coordinate of the human body bone joint point in the coordinate system according to the depth image;
and determining the coordinate value of the bone point according to the X-axis coordinate, the Y-axis coordinate and the Z-axis coordinate.
Specifically, the X-axis and Y-axis coordinates of a human skeletal joint point in the 3D coordinate system can be determined from the color image, and its Z-axis coordinate can be determined from the depth image, so that the coordinate value of the joint point in the 3D coordinate system is obtained from the X-axis, Y-axis and Z-axis coordinates, i.e. the skeletal point coordinate value. Points in the 3D coordinate system are represented as coordinate triples in millimetres. The three-dimensional coordinates of the human skeletal joint points lie in a 3D coordinate system whose origin is the acquisition point: its x axis is the direction from the acquisition point towards the human skeleton, its y axis is the lateral direction of the human skeleton, and its z axis is the height direction of the acquisition point. The 3D coordinate system, together with all the human skeletal joint points and their skeletal point coordinate values, forms an absolute coordinate system that can completely and effectively represent the human body structure and its various movement postures, as shown in fig. 4.
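As a minimal sketch of this step, assuming the per-joint X/Y positions from the color image and the Z values from the depth image have already been looked up; the function and parameter names below are illustrative and not an actual camera SDK API.

```python
from typing import Dict, Tuple

def build_skeleton_points(color_xy: Dict[str, Tuple[float, float]],
                          depth_z: Dict[str, float]) -> Dict[str, Tuple[float, float, float]]:
    """Combine X/Y from the color image with Z from the depth image into one
    (x, y, z) coordinate triple per skeletal joint, in millimetres."""
    points = {}
    for joint, (x, y) in color_xy.items():
        z = depth_z.get(joint)
        if z is None:          # joint not visible in the depth frame
            continue
        points[joint] = (x, y, z)
    return points
```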
S103, determining a human body action characteristic value of the character image according to the coordinate value of the skeleton point.
Specifically, the feature value obtaining module determines a human body motion feature value of the character image according to the coordinate value of the skeleton point, where the human body motion feature value is a feature value of a motion posture in the character image.
In one embodiment, the determining the human body motion characteristic value of the human figure according to the coordinate value of the skeleton point comprises:
determining the actual distance between the human body bone joint points according to the coordinate values of the bone points, and determining the actual angle between the human body bone joint points according to the coordinate values of the bone points;
and determining the human body action characteristic value according to the actual distance and the actual angle.
Specifically, the human motion characteristic values include actual distances between the human bone joint points and actual angles between the human bone joint points.
In an embodiment, said determining an actual angle between said human skeletal joint points from said skeletal point coordinate values comprises:
selecting three target bone point coordinates from the bone point coordinate values;
selecting a target skeleton point coordinate from the three target skeleton point coordinates as a reference point, and constructing an included angle by using the reference point as a center;
and calculating the angle of the included angle according to the coordinates of the three target bone points to obtain the actual angle.
Specifically, the characteristic value acquisition module selects three target skeletal point coordinates from the skeletal point coordinate values according to the user's requirements; each set of three target skeletal point coordinates represents the actual angle of one action. Since judging a human posture requires several actions jointly, multiple groups of target skeletal point coordinates are selected from the skeletal point coordinate values according to the user's settings, three per group. One of the three target skeletal point coordinates is selected as a reference point, and an included angle is constructed with the reference point as the vertex. Finally, the angle of the included angle is calculated from the three target skeletal point coordinates to obtain the actual angle. Each group of target skeletal point coordinates is processed in this way, yielding multiple actual angles, and the posture of the human body is determined from these actual angles.
The actual distance between human skeletal joint points is calculated as follows. The distance between two human skeletal joint points in three-dimensional space is the Euclidean distance, given by:
d(a, b) = sqrt[(a_x - b_x)^2 + (a_y - b_y)^2 + (a_z - b_z)^2]    (1-1)
In formula (1-1), a(a_x, a_y, a_z) are the three-dimensional coordinates of one skeletal joint point of the person being measured, and b(b_x, b_y, b_z) are the three-dimensional coordinates of another skeletal joint point.
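As a small illustration of formula (1-1); the function name and tuple inputs are assumptions for illustration only.

```python
import math

def joint_distance(a: tuple, b: tuple) -> float:
    """Euclidean distance between two skeletal joints a(a_x, a_y, a_z) and
    b(b_x, b_y, b_z), per formula (1-1); coordinates are in millimetres."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
```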
The actual angle between human skeletal joint points is calculated as follows:
The actual angles between human skeletal joint points are joint angles. Joint angles are computed by a node-intersection method: the included angle is obtained from the cosine law, and a space-vector method is used to solve for it. When the angle of the included angle is within a prescribed threshold range, the action is judged to be valid. One joint point (i.e. one target skeletal point coordinate) is selected as the centre, or zero point, of the whole coordinate system and used as the reference point, and two auxiliary joint points are then selected to form the included angle. The constraint condition of a joint angle is:
L_A = {J_0, J_1, J_2, θ, τ}    (1-2)
In formula (1-2), J_0 is the reference joint point, J_1 and J_2 are the two auxiliary joint points, θ is the joint angle, and τ is the threshold. Take the knee joint angle as an example, which is solved with the space-vector method. As shown in fig. 3, the knee joint angle is obtained from the included angle of the space vectors, and is calculated as:
KF = (F_x - K_x, F_y - K_y, F_z - K_z)    (1-3)
KH = (H_x - K_x, H_y - K_y, H_z - K_z)    (1-4)
θ = arccos[(KF · KH) / (|KF| · |KH|)]    (1-5)
In formula (1-5), F(F_x, F_y, F_z) are the three-dimensional coordinates of the ankle joint, K(K_x, K_y, K_z) those of the knee joint, and H(H_x, H_y, H_z) those of the hip joint. The angle value θ lies in the range [0°, 180°], over which the cosine function is monotonically decreasing, so each angle value corresponds to a unique cosine value.
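The same knee-angle calculation, sketched under the assumption that the ankle, knee and hip coordinates are passed as (x, y, z) triples; the clamp on the cosine value is only a numerical safeguard added here, not part of the description.

```python
import math

def joint_angle(f: tuple, k: tuple, h: tuple) -> float:
    """Joint angle at the knee K formed with the ankle F and hip H, computed
    from the space vectors KF and KH as in formulas (1-3) to (1-5).
    Inputs are (x, y, z) triples in millimetres; returns degrees in [0, 180]."""
    kf = [fi - ki for fi, ki in zip(f, k)]
    kh = [hi - ki for hi, ki in zip(h, k)]
    dot = sum(a * b for a, b in zip(kf, kh))
    norm = math.sqrt(sum(a * a for a in kf)) * math.sqrt(sum(a * a for a in kh))
    cos_theta = max(-1.0, min(1.0, dot / norm))  # guard against rounding error
    return math.degrees(math.acos(cos_theta))
```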
Judging a human posture requires several actions jointly; a specific action is judged by traversing several joint angles in turn, i.e. the actual angles between the human skeletal joint points, and finally all calculated human body action characteristic values are put into a constraint set D. The constraint set D used to exclude non-target persons in the subsequent sit-up examination is:
D = {θ_1, θ_21, θ_22, θ_31, θ_32, θ_4, θ_51, θ_52, d_1, d_2}    (1-6)
In formula (1-6), θ_1 is the joint angle between the right ankle, right hip and right shoulder; θ_21 and θ_22 are the joint angles of the two knees; θ_31 and θ_32 are the joint angles of the two elbows; θ_4 is the joint angle between the chest, pelvis and right knee; θ_51 is the joint angle between the left wrist, left knee and pelvis, with the left wrist as the vertex; θ_52 is the joint angle between the right wrist, right knee and pelvis, with the right wrist as the vertex; d_1 is the distance between the left and right knee skeletal points; and d_2 is the distance between the left and right hip skeletal points. Here d_1 and d_2 are the actual distances, and θ_1, θ_21, θ_22, θ_31, θ_32, θ_4, θ_51 and θ_52 are the actual angles.
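For illustration only, the constraint set D of formula (1-6) could be assembled as a mapping such as the one below; the key names and the angle/distance lookups are assumptions, not identifiers from the description.

```python
def build_constraint_set(angles: dict, distances: dict) -> dict:
    """Collect the computed actual angles and distances into the set D."""
    return {
        "theta_1":  angles["right_ankle_hip_shoulder"],
        "theta_21": angles["left_knee"],  "theta_22": angles["right_knee"],
        "theta_31": angles["left_elbow"], "theta_32": angles["right_elbow"],
        "theta_4":  angles["chest_pelvis_right_knee"],
        "theta_51": angles["left_wrist_knee_pelvis"],
        "theta_52": angles["right_wrist_knee_pelvis"],
        "d_1": distances["knee_to_knee"],
        "d_2": distances["hip_to_hip"],
    }
```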
And S104, judging whether the human body motion characteristic value is within a preset motion characteristic value range.
Specifically, the target person determination module determines whether the human body motion characteristic value is within a preset motion characteristic value range.
In an embodiment, the determining whether the human body motion characteristic value is within a preset motion characteristic value range includes:
and judging whether the actual distance is within the preset distance range and the actual angle is within the preset angle range.
Specifically, when the target person determination module determines that the actual distance is within the preset distance range and the actual angle is within the preset angle range, it determines that the human body action characteristic value is within the preset action characteristic value range.
And S105, if the human body motion characteristic value is within a preset motion characteristic value range, determining that the character image is a target character image.
Specifically, when the human body motion characteristic value is within a preset motion characteristic value range, the target person determining module determines that the person image is a target person image.
And S106, determining a target person monitoring range according to the target person image, and monitoring persons in the target person monitoring range.
Specifically, the target person determination module determines a target person monitoring range according to the target person image, and monitors persons in the target person monitoring range, so that the subsequent discrimination calculation amount is reduced.
In one embodiment, the determining the target person monitoring range according to the target person image includes:
acquiring coordinate values of all skeleton points in the target character image;
selecting one skeletal point coordinate value from all skeletal point coordinate values in the target character image as a target character monitoring coordinate value according to a preset first condition;
and determining the target person monitoring range according to the target person monitoring coordinate value.
Specifically, after the person image is determined to be a target person image, i.e. after the target person is determined, all skeletal point coordinate values in the target person image are obtained, and one of them is selected according to a preset first condition and stored as the target person monitoring coordinate value. The first condition is set by the user and is based on the principle that the chosen skeletal point coordinate essentially does not change much during a sit-up. After the target person monitoring coordinate value is determined, the target person monitoring range is determined from it, so that only this range is monitored subsequently; this excludes interference from non-target persons and improves the accuracy of the subsequent detection, judgment and counting of the target person's examination actions.
Because the test environment often contains bystanders, non-target persons very easily appear in the information-acquisition environment. In addition, the person being examined usually needs a helper when doing sit-ups, for example someone holding down the examinee's ankles with their hands. To be applicable in various environments, it is extremely important for a sit-up examination method to exclude non-target persons; the target person determination module mainly serves to improve the accuracy of the test results and to extend the application scenarios of the system.
As for interference from bystanders: people inevitably pass through the test environment during testing, their actions are hard to predict, and it cannot be determined in advance in what posture they will appear in the monitored environment, so they must be distinguished by strict human-posture criteria. The helper is the non-target person closest to the examinee, but the helper's posture can be determined to differ from the examinee's.
For both situations, the postures of all people appearing in the scene can be detected. When a person's posture is in the sit-up preparatory state, i.e. it matches the sit-up preparatory posture, that person can generally be confirmed as the target person. The coordinate information of the target person's hip skeletal point is then stored, and non-target persons are subsequently excluded using this hip-point coordinate information, which reduces the amount of later judgment computation.
The target person determination module mainly comprises four steps. Step 1: the target person determination module traverses the characteristic values required for the sit-up preparatory action in the characteristic value acquisition module: the joint angle θ_1 between the right ankle, right hip and right shoulder; the joint angles θ_21 and θ_22 of the two knees; the joint angles θ_31 and θ_32 of the two elbows; the joint angle θ_4 between the chest, pelvis and right knee; the joint angle θ_51 between the left wrist, left knee and pelvis, with the left wrist as the vertex; the joint angle θ_52 between the right wrist, right knee and pelvis, with the right wrist as the vertex; the distance d_1 between the left and right knee skeletal points; and the distance d_2 between the left and right hip skeletal points.
Step 2: search for a target person performing the sit-up preparatory action according to the following four indexes. Read and judge whether θ_1 is greater than 150° and θ_4 is greater than 130°; if so, the person is detected as lying down. Read whether the knee angles θ_21 and θ_22 are less than 120°; if so, the person's legs are detected as bent. Read whether θ_31 and θ_32 are less than 100° and whether θ_51 and θ_52 are less than 150°; if so, the person is detected as holding the shoulders with both hands. Read d_1 and d_2; if d_1 < d_2, the person's knees are detected as closed. If a person satisfies all four of the above indexes, that person can be confirmed as the target person, performing the sit-up preparatory action, who is being searched for.
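A minimal sketch of the four-index check, assuming the constraint set D is the mapping sketched after formula (1-6); the threshold values are the ones stated in this step, while the key and function names are illustrative assumptions.

```python
def is_sit_up_ready(D: dict) -> bool:
    """Check the four indexes for the sit-up preparatory posture."""
    lying_down = D["theta_1"] > 150 and D["theta_4"] > 130
    legs_bent = D["theta_21"] < 120 and D["theta_22"] < 120
    hands_on_shoulders = (D["theta_31"] < 100 and D["theta_32"] < 100 and
                          D["theta_51"] < 150 and D["theta_52"] < 150)
    knees_closed = D["d_1"] < D["d_2"]
    return lying_down and legs_bent and hands_on_shoulders and knees_closed
```

Only a person passing all four checks would be confirmed as the target person; the hip-point check in step 4 then keeps following that person.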
Step 3: once the target person is confirmed, the three-dimensional coordinate value H(H_x, H_y, H_z) of the target person's hip skeletal point is stored immediately.
Step 4: subsequently, only the three-dimensional coordinate values of every person's hip skeletal point need to be traversed at each moment. If a person's current hip point H'(H'_x, H'_y, H'_z) satisfies H_x - 400 < H'_x < H_x + 400, H_y - 400 < H'_y < H_y + 400 and H_z - 400 < H'_z < H_z + 400, where 400 is in millimetres, that person can be determined to be the target person, and their posture information is then obtained for the subsequent sit-up examination judgment.
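The step-4 check could be sketched as below, assuming the stored hip point from step 3 and each candidate's current hip point are (x, y, z) triples in millimetres; the 400 mm margin comes from the text, while the function and parameter names are assumptions.

```python
def in_monitoring_range(hip: tuple, stored_hip: tuple, margin_mm: float = 400.0) -> bool:
    """A person is treated as the target person when every axis of their
    current hip-point coordinate lies within +/- 400 mm of the stored value
    H(H_x, H_y, H_z) recorded in step 3."""
    return all(abs(c - s) < margin_mm for c, s in zip(hip, stored_hip))
```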
Example 2
As shown in fig. 2, the present invention further provides an apparatus 400 for determining a target person in a sit-up examination, where the apparatus 400 for determining a target person in a sit-up examination includes a first collecting unit 401, a first extracting unit 402, a first determining unit 403, a first judging unit 404, a second determining unit 405, and a third determining unit 406.
A first collecting unit 401, configured to collect a video image in a target area, where the video image includes a person image;
a first extracting unit 402, configured to extract a human skeleton joint point in the person image, and obtain a skeleton point coordinate value of the human skeleton joint point according to a coordinate position of the human skeleton joint point in a preset coordinate system;
a first determining unit 403, configured to determine a human body motion feature value of the person image according to the coordinate value of the skeleton point;
a first determining unit 404, configured to determine whether the human body motion characteristic value is within a preset motion characteristic value range;
a second determining unit 405, configured to determine that the person image is a target person image if the human body motion characteristic value is within a preset motion characteristic value range;
a third determining unit 406, configured to determine a target person monitoring range according to the target person image, and monitor persons within the target person monitoring range.
In an embodiment, the obtaining of the coordinate values of the bone joint points of the human body according to the coordinate positions of the bone joint points of the human body in a preset coordinate system includes:
determining the X-axis coordinate and the Y-axis coordinate of the human body bone joint point in the coordinate system according to the color image, and determining the Z-axis coordinate of the human body bone joint point in the coordinate system according to the depth image;
and determining the coordinate value of the bone point according to the X-axis coordinate, the Y-axis coordinate and the Z-axis coordinate.
In one embodiment, the determining the human motion characteristic value of the human figure image according to the coordinate value of the bone point includes:
determining the actual distance between the human body bone joint points according to the coordinate values of the bone points, and determining the actual angle between the human body bone joint points according to the coordinate values of the bone points;
and determining the human body action characteristic value according to the actual distance and the actual angle.
In an embodiment, the determining whether the human body motion characteristic value is within a preset motion characteristic value range includes:
and judging whether the actual distance is within the preset distance range and the actual angle is within the preset angle range.
In one embodiment, the determining the target person monitoring range according to the target person image includes:
acquiring coordinate values of all skeleton points in the target character image;
selecting one skeletal point coordinate value from all skeletal point coordinate values in the target character image as a target character monitoring coordinate value according to a preset first condition;
and determining the target person monitoring range according to the target person monitoring coordinate value.
In one embodiment, the extracting human skeletal joint points in the human image includes:
extracting initial human skeleton joint points from the person image;
and screening out initial human body bone joint points matched with the joint point types set by the user from the initial human body bone joint points to serve as the human body bone joint points.
In one embodiment, said determining an actual angle between said human skeletal joint points from said skeletal point coordinate values comprises:
selecting three target bone point coordinates from the bone point coordinate values;
selecting a target skeleton point coordinate from the three target skeleton point coordinates as a reference point, and constructing an included angle by using the reference point as a center;
and calculating the angle of the included angle according to the coordinates of the three target bone points to obtain the actual angle.
Example 3
Referring to fig. 5, an embodiment of the present invention further provides an electronic device, which includes a processor 111, a communication interface 112, a memory 113, and a communication bus 114, where the processor 111, the communication interface 112, and the memory 113 complete mutual communication through the communication bus 114.
A memory 113 for storing a computer program;
a processor 111 for executing the program stored in the memory 113 to implement the method for determining the target person in the sit-up assessment as provided by any one of the above method embodiments.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, which, when being executed by the processor 111, implements the steps of the method for determining a target person in a sit-up assessment as provided in any of the method embodiments described above.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for determining a target person in a sit-up examination, the method comprising:
acquiring a video image in a target area, wherein the video image comprises a character image;
extracting human skeleton joint points in the figure image, and obtaining skeleton point coordinate values of the human skeleton joint points according to coordinate positions of the human skeleton joint points in a preset coordinate system;
determining a human body action characteristic value of the figure image according to the coordinate value of the skeleton point;
judging whether the human body action characteristic value is within a preset action characteristic value range or not;
if the human body action characteristic value is within a preset action characteristic value range, determining the figure image as a target figure image;
and determining a target person monitoring range according to the target person image, and monitoring persons in the target person monitoring range.
2. The method for determining a target person in a sit-up examination as claimed in claim 1, wherein the person image comprises a color image and a depth image, and the obtaining of the coordinate values of the skeletal joint points of the human body according to the coordinate positions of the skeletal joint points of the human body in a preset coordinate system comprises:
determining the X-axis coordinate and the Y-axis coordinate of the human body bone joint point in the coordinate system according to the color image, and determining the Z-axis coordinate of the human body bone joint point in the coordinate system according to the depth image;
and determining the coordinate value of the bone point according to the X-axis coordinate, the Y-axis coordinate and the Z-axis coordinate.
3. The method for determining a target person in a sit-up assessment according to claim 1, wherein the determining of the human body motion characteristic value of the person image based on the skeletal point coordinate value comprises:
determining the actual distance between the human body bone joint points according to the coordinate values of the bone points, and determining the actual angle between the human body bone joint points according to the coordinate values of the bone points;
and determining the human body action characteristic value according to the actual distance and the actual angle.
4. The method for determining the target person in the sit-up assessment as claimed in claim 3, wherein the preset motion characteristic value range comprises a preset distance range and a preset angle range, and the determining whether the human motion characteristic value is within the preset motion characteristic value range comprises:
and judging whether the actual distance is within the preset distance range and the actual angle is within the preset angle range.
5. The method for determining a target person in a sit-up assessment according to claim 1, wherein the determining of the monitoring range of the target person based on the target person image comprises:
acquiring coordinate values of all skeleton points in the target character image;
selecting one skeletal point coordinate value from all skeletal point coordinate values in the target character image as a target character monitoring coordinate value according to a preset first condition;
and determining the target person monitoring range according to the target person monitoring coordinate value.
6. The method for determining a target person in sit-up assessment according to claim 1, wherein the extracting of human skeletal joint points in the person image comprises:
extracting initial human skeleton joint points from the person image;
and screening out initial human body bone joint points matched with the joint point types set by the user from the initial human body bone joint points to serve as the human body bone joint points.
7. The method as claimed in claim 3, wherein said determining the actual angle between the human skeletal joint points from the skeletal point coordinate values comprises:
selecting three target bone point coordinates from the bone point coordinate values;
selecting a target skeleton point coordinate from the three target skeleton point coordinates as a reference point, and constructing an included angle by using the reference point as a center;
and calculating the angle of the included angle according to the coordinates of the three target bone points to obtain the actual angle.
8. An apparatus for determining a target person in a sit-up assessment, comprising means for performing the method of any one of claims 1-7.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the steps of the method of any one of claims 1 to 7 when executing a program stored in the memory.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202211314123.3A 2022-10-25 2022-10-25 Method, device, equipment and medium for determining target person in sit-up examination Pending CN115937969A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211314123.3A CN115937969A (en) 2022-10-25 2022-10-25 Method, device, equipment and medium for determining target person in sit-up examination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211314123.3A CN115937969A (en) 2022-10-25 2022-10-25 Method, device, equipment and medium for determining target person in sit-up examination

Publications (1)

Publication Number Publication Date
CN115937969A true CN115937969A (en) 2023-04-07

Family

ID=86553046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211314123.3A Pending CN115937969A (en) 2022-10-25 2022-10-25 Method, device, equipment and medium for determining target person in sit-up examination

Country Status (1)

Country Link
CN (1) CN115937969A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116549956A (en) * 2023-05-09 2023-08-08 北京维艾狄尔信息科技有限公司 Outdoor somatosensory interaction method, system and intelligent terminal
CN116549956B (en) * 2023-05-09 2023-11-07 北京维艾狄尔信息科技有限公司 Outdoor somatosensory interaction method, system and intelligent terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination