CN114129151A - Method for defining human body action, posture and each joint relation by visual recognition - Google Patents


Info

Publication number
CN114129151A
CN114129151A (application CN202111461081.1A)
Authority
CN
China
Prior art keywords
human body
joint
joints
spatial position
defining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111461081.1A
Other languages
Chinese (zh)
Inventor
林明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Smart Technology Shenzhen Co ltd
Original Assignee
Smart Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Smart Technology Shenzhen Co ltd filed Critical Smart Technology Shenzhen Co ltd
Priority to CN202111461081.1A priority Critical patent/CN114129151A/en
Publication of CN114129151A publication Critical patent/CN114129151A/en
Withdrawn legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1116 Determining posture transitions
    • A61B5/1118 Determining activity level
    • A61B5/1121 Determining geometric values, e.g. centre of rotation or angular range of movement
    • A61B5/1126 Measuring movement of the entire body or parts thereof using a particular sensing technique
    • A61B5/1128 Measuring movement of the entire body or parts thereof using a particular sensing technique using image analysis

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Physiology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a method for defining human body actions, postures and joint relations by visual recognition, in the technical field of visual recognition, comprising the following steps: S1, a depth camera acquires color and depth data; S2, the human body is separated from the background to obtain a human body image and depth data; S3, human skeleton information is identified from the human body image and depth data; and S4, the spatial position and spatial angle relation of each joint of the human body is defined from the skeleton information carrying depth information. By recognizing human body actions and postures, the invention solves the problem that actions and postures cannot be defined when image recognition is affected by the shooting position and angle; it allows a real-time image of the human body to be compared and computed against a standard action, so that nuances relative to the standard action can be obtained; and it allows static recognition to be applied to dynamic action and behavior judgment.

Description

Method for defining human body action, posture and each joint relation by visual recognition
Technical Field
The invention relates to the technical field of visual recognition, in particular to a method for defining human body actions, postures and joint relations by visual recognition.
Background
In visual recognition, human body information can be recognized from a static picture, but human body actions are difficult to define, such as the postures and positions of the body, hands and feet, or a posture like "feet shoulder-width apart". Because of individual differences, the angle and position at which the camera shoots affect the input image, and in turn the recognition of the human body. When the human body is separated from the background and skeleton identification data are produced, this influence means the accuracy of a person's actions and postures cannot be judged directly. For example: a person extends both hands straight upward, but the camera is not placed horizontally, so the captured image shows the arms extending upward at a slant; from the image alone it cannot be determined whether both hands are extended straight upward. Likewise, many human actions and postures cannot be recognized directly, which introduces many uncertainties into the visual recognition of specific human actions and postures.
In current visual recognition, even after the human body action, posture and information of each joint are detected, the action and posture cannot be recognized accurately. A method for defining human body actions, postures and joint relations through visual recognition is therefore provided to solve this problem.
Disclosure of Invention
In view of the above problems, the present invention provides a method for defining human body actions, postures and joint relationships by visual recognition.
The technical scheme of the invention is as follows:
the method for defining the human body action, the human body posture and the joint relation through visual recognition comprises the following steps:
s1, the depth camera acquires color and depth data;
s2, separating the human body from the background to obtain a human body image and depth data;
s3, identifying human skeleton information through the human body image and the depth data;
and S4, defining the spatial position and spatial angle relation of each joint of the human body according to the skeleton information with the depth information.
In a further aspect, fifteen definable joint points are used for the human body, including the head, neck, torso, left shoulder, left elbow, left wrist, right shoulder, right elbow, right wrist, left hip, left knee, left heel, right hip, right knee and right heel.
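For illustration only, the fifteen joint points above can be represented as a simple index mapping; the indices 0-14 follow the numbering used in the embodiments below, and the `Joint` enum and `skeleton` dictionary are illustrative names assumed here, not part of the patent.

```python
# A minimal sketch of the fifteen definable joint points, numbered as in
# the embodiments (head 0 ... right heel 14). Names are hypothetical.
from enum import IntEnum

class Joint(IntEnum):
    HEAD = 0
    NECK = 1
    TORSO = 2
    LEFT_SHOULDER = 3
    LEFT_ELBOW = 4
    LEFT_WRIST = 5
    RIGHT_SHOULDER = 6
    RIGHT_ELBOW = 7
    RIGHT_WRIST = 8
    LEFT_HIP = 9
    LEFT_KNEE = 10
    LEFT_HEEL = 11
    RIGHT_HIP = 12
    RIGHT_KNEE = 13
    RIGHT_HEEL = 14

# A recognized skeleton is then a mapping from joint index to an (x, y, z)
# coordinate carrying the depth information.
skeleton = {j: (0.0, 0.0, 0.0) for j in Joint}
assert len(skeleton) == 15
```

A skeleton frame produced by step S3 would populate this mapping with the measured coordinates before the relations of step S4 are evaluated.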
In a further technical scheme, the method for defining human body actions, postures and joint relations by visual recognition analyzes the relations between the joints of the human body, so that the influence of individual differences and of the real-world position and angle of the captured image can be ignored.
In a further technical solution, the method for defining the spatial position and spatial angular relationship of each joint of the human body includes the following steps:
s1, spatial position and angular relationship of head, neck and torso joints;
s2, the spatial position and angular relationship of the neck, the trunk, the left hip joint and the right hip joint;
s3, spatial position and angular relationship of the shoulder, elbow and wrist joints of the arm.
In a further technical solution, the method for defining the spatial position and spatial angular relationship of each joint of the human body further includes the following steps:
s4, spatial position and angular relationship of foot, hip, knee and heel joints;
s5, the spatial position and angular relationship of the torso, shoulders and elbow joints;
s6, spatial position and angular relationship of torso, hip, and knee joints.
In a further aspect, the spatial position and angular relationship of the head, neck and torso joints comprises the following identification definition steps:
s1, if the person stands upright normally, the corresponding xyz coordinates should satisfy: x0 = x1 = x2, y0 > y1 > y2, and z0 = z1 = z2;
s2, because the shooting angle and position of the camera have an influence, the formulas x0 = x1, y0 > y1 and z0 = z1 may deviate, so a spatial angular relationship needs to be introduced.
In a further technical scheme, the spatial position and angle relationship of the neck, the trunk, the left hip joint and the right hip joint comprises the following identification and definition steps:
s1, if the person stands upright normally, the x and z coordinates of the neck and the torso are substantially the same, and x should lie at the middle position of the left and right hips, so the spatial relationship should be: x1 = x2 = (x9 + x12)/2, y1 (and y2) > y9 (and y12), and z1 (and z2) = z9 (and z12);
s2, because the shooting angle and position of the camera have an influence, the above formulas may deviate, so a spatial angle relationship needs to be introduced.
In a further technical scheme, the spatial position and angle relation of the shoulder, elbow and wrist joints of the hand comprises the following identification and definition steps:
s1, since the left and right arms are symmetric, taking the right arm as an example for the spatial position and angle relationship of the right shoulder, right elbow and right wrist joints: if the person stands upright normally with both arms opened horizontally, the corresponding xyz coordinates should satisfy: x6 < x7 < x8, y6 = y7 = y8, z6 = z7 = z8;
s2, because the shooting angle and position of the camera have an influence, the above formulas may deviate, so a spatial angle relationship needs to be introduced.
In a further aspect, the spatial position and angular relationship of the hip, knee and heel joints of the leg comprises the following identification and definition steps:
s1, since the left and right legs are symmetric, taking the right leg as an example for the spatial position and angular relationship of the right hip, right knee and right heel joints: if the person stands upright normally, the corresponding xyz coordinates should satisfy: x12 = x13 = x14, y12 > y13 > y14, and z12 = z13 = z14;
s2, because the shooting angle and position of the camera have an influence, the above formulas may deviate, so a spatial angle relationship needs to be introduced.
In a further aspect, the spatial position and angular relationship of the torso, shoulders and elbow joints comprises the following identifying and defining steps:
s1, since the left and right arms are symmetric, taking the right arm as an example for the spatial position and angular relationship of the torso, right shoulder and right elbow joints: if the person stands upright normally with both arms opened horizontally, the corresponding xyz coordinates should satisfy: x2 < x6 < x7, y2 < y6 = y7, z2 = z6 = z7;
s2, because the shooting angle and position of the camera have an influence, the above formulas may deviate, so a spatial angle relationship needs to be introduced.
In a further aspect, the spatial position and angular relationship of the torso, hip and knee joints comprises the following identifying and defining steps:
s1, since the left and right legs are symmetric, taking the right leg as an example for the spatial position and angular relationship of the torso, right hip and right knee joints: if the person stands upright normally, the corresponding xyz coordinates should satisfy: x2 < x12 = x13, y2 > y12 > y13, and z2 = z12 = z13;
s2, because the shooting angle and position of the camera have an influence, the above formulas may deviate, so a spatial angle relationship needs to be introduced.
The invention has the beneficial effects that:
1. by recognizing human body actions and postures, the invention solves the problem that actions and postures cannot be defined when image recognition is affected by the shooting position and angle;
2. by using the recognition of human body actions and postures, the invention can compare and compute a real-time image of the human body against a standard action, thereby obtaining the nuances relative to the standard action;
3. by using the recognition of human body actions and postures, the invention benefits trend recognition during human body movement and applies static recognition to dynamic action and behavior judgment.
Drawings
FIG. 1 is a diagram of a human skeleton according to an embodiment of the present invention;
fig. 2 is a flowchart of human body motion recognition according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be further described with reference to the accompanying drawings.
Example 1:
as shown in fig. 1-2, the method for visually recognizing and defining human body motions, postures and joint relations comprises the following steps:
s1, the depth camera acquires color and depth data;
s2, separating the human body from the background to obtain a human body image and depth data;
s3, identifying human skeleton information through the human body image and the depth data;
and S4, defining the spatial position and spatial angle relation of each joint of the human body according to the skeleton information with the depth information.
Fifteen definable joint points are used for the human body: head 0, neck 1, torso 2, left shoulder 3, left elbow 4, left wrist 5, right shoulder 6, right elbow 7, right wrist 8, left hip 9, left knee 10, left heel 11, right hip 12, right knee 13 and right heel 14.
The method for defining human body actions, postures and joint relations through visual recognition analyzes the relations between the joints of the human body, so that the influence of individual differences and of the real-world position and angle of the captured image can be ignored.
The method for defining the spatial position and spatial angle relation of each joint of the human body comprises the following steps:
s1 spatial position and angular relationship of the head 0, neck 1 and torso 2 joints;
s2, the spatial position and angular relationship of the neck 1, torso 2, left hip 9, and right hip 12 joints;
s3, spatial position and angular relationship of the shoulder, elbow and wrist joints of the arm.
The method for defining the relationship of each joint further includes the following steps:
s4, spatial position and angular relationship of foot, hip, knee and heel joints;
s5, the spatial position and angular relationship of torso 2, shoulders and elbow joints;
s6, torso 2, hip and knee joint spatial position and angular relationship.
The spatial position and angular relationship of the head 0, neck 1 and torso 2 joints comprises the following identification definition steps:
s1, if the person stands upright normally, the corresponding xyz coordinates should satisfy: x0 = x1 = x2, y0 > y1 > y2, and z0 = z1 = z2;
s2, because the shooting angle and position of the camera have an influence, the formulas x0 = x1, y0 > y1 and z0 = z1 may deviate, so a spatial angular relationship needs to be introduced.
In the present embodiment, if the three points (x0, y0, z0), (x1, y1, z1) and (x2, y2, z2) lie on a straight line in space, the head of the human body can be defined as being in an upright standing posture. If the three points form an angle, it can be understood as the included angle between the vector from neck 1 to head 0 and the vector from neck 1 to torso 2; from the angles of these two vectors projected onto the x, y and z axes, the way the head is tilted (forward, backward, to the left or to the right) can be inferred.
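The collinearity and vector-angle test described in this embodiment can be sketched as follows; the function names, the example coordinates and the 5-degree tolerance are illustrative assumptions, not values from the patent.

```python
import math

def angle_between(v1, v2):
    """Included angle between two 3-D vectors, in degrees."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    # Clamp to [-1, 1] to avoid domain errors from floating-point rounding.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def head_is_upright(head, neck, torso, tolerance_deg=5.0):
    """Head 0, neck 1 and torso 2 lie on a straight line exactly when the
    neck-to-head and neck-to-torso vectors are (close to) 180 degrees apart."""
    to_head = tuple(h - n for h, n in zip(head, neck))
    to_torso = tuple(t - n for t, n in zip(torso, neck))
    return abs(angle_between(to_head, to_torso) - 180.0) <= tolerance_deg

# Upright: the three joints lie on a vertical line.
print(head_is_upright((0, 2, 0), (0, 1, 0), (0, 0, 0)))      # True
# Head tilted forward along z: the angle at the neck is no longer 180 degrees.
print(head_is_upright((0, 1.7, 0.7), (0, 1, 0), (0, 0, 0)))  # False
```

Projecting the two vectors onto the individual axes in the same way would distinguish forward, backward, left and right tilt.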
Example 2:
as shown in fig. 1-2, the method for visually recognizing and defining human body motions, postures and joint relations comprises the following steps:
s1, the depth camera acquires color and depth data;
s2, separating the human body from the background to obtain a human body image and depth data;
s3, identifying human skeleton information through the human body image and the depth data;
and S4, defining the spatial position and spatial angle relation of each joint of the human body according to the skeleton information with the depth information.
Fifteen definable joint points are used for the human body: head 0, neck 1, torso 2, left shoulder 3, left elbow 4, left wrist 5, right shoulder 6, right elbow 7, right wrist 8, left hip 9, left knee 10, left heel 11, right hip 12, right knee 13 and right heel 14.
The method for defining human body actions, postures and joint relations through visual recognition analyzes the relations between the joints of the human body, so that the influence of individual differences and of the real-world position and angle of the captured image can be ignored.
The method for defining the spatial position and spatial angle relation of each joint of the human body comprises the following steps:
s1 spatial position and angular relationship of the head 0, neck 1 and torso 2 joints;
s2, the spatial position and angular relationship of the neck 1, torso 2, left hip 9, and right hip 12 joints;
s3, spatial position and angular relationship of the shoulder, elbow and wrist joints of the arm.
The method for defining the relationship of each joint further includes the following steps:
s4, spatial position and angular relationship of foot, hip, knee and heel joints;
s5, the spatial position and angular relationship of torso 2, shoulders and elbow joints;
s6, torso 2, hip and knee joint spatial position and angular relationship.
The spatial position and angular relationship of the neck 1, torso 2 and joints of the left and right hips 9, 12 includes the following identification definition steps:
s1, if the person stands upright normally, the x and z coordinates of the neck 1 and the torso 2 are substantially the same, and x should lie at the middle position of the left hip 9 and the right hip 12, so the spatial relationship should be: x1 = x2 = (x9 + x12)/2, y1 (and y2) > y9 (and y12), and z1 (and z2) = z9 (and z12);
s2, because the shooting angle and position of the camera have an influence, the above formulas may deviate, so a spatial angle relationship needs to be introduced.
In this embodiment, if the three points (x1, y1, z1), (x2, y2, z2) and the midpoint of the left hip 9 and right hip 12, ((x9 + x12)/2, (y9 + y12)/2, (z9 + z12)/2), lie on a straight line in space, the upper body of the human being can be defined as being in an upright standing posture. If the three points form an angle, it can be understood as the included angle between the vector from torso 2 to neck 1 and the vector from torso 2 to the hip midpoint; from the angles of these two vectors projected onto the x, y and z axes, the lean of the torso (forward, backward, to the left or to the right) can be inferred.
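The hip-midpoint construction used here can be sketched as below; note that the midpoint is the coordinate-wise average (a sum divided by two), not a difference. The coordinates are hypothetical illustrative values.

```python
def midpoint(p, q):
    """Midpoint of two joints: the coordinate-wise average (p + q) / 2."""
    return tuple((a + b) / 2 for a, b in zip(p, q))

# Hypothetical upright pose: neck 1 and torso 2 share x and z with the
# midpoint of left hip 9 and right hip 12, and sit above the hips in y.
left_hip = (-0.2, 1.0, 0.0)
right_hip = (0.2, 1.0, 0.0)
neck = (0.0, 1.6, 0.0)
torso = (0.0, 1.3, 0.0)

hip_mid = midpoint(left_hip, right_hip)
print(hip_mid)  # (0.0, 1.0, 0.0)
assert neck[0] == torso[0] == hip_mid[0]                  # x1 = x2 = (x9 + x12)/2
assert neck[1] > left_hip[1] and neck[1] > right_hip[1]   # y1 > y9, y12
```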
Example 3:
as shown in fig. 1-2, the method for visually recognizing and defining human body motions, postures and joint relations comprises the following steps:
s1, the depth camera acquires color and depth data;
s2, separating the human body from the background to obtain a human body image and depth data;
s3, identifying human skeleton information through the human body image and the depth data;
and S4, defining the spatial position and spatial angle relation of each joint of the human body according to the skeleton information with the depth information.
Fifteen definable joint points are used for the human body: head 0, neck 1, torso 2, left shoulder 3, left elbow 4, left wrist 5, right shoulder 6, right elbow 7, right wrist 8, left hip 9, left knee 10, left heel 11, right hip 12, right knee 13 and right heel 14.
The method for defining human body actions, postures and joint relations through visual recognition analyzes the relations between the joints of the human body, so that the influence of individual differences and of the real-world position and angle of the captured image can be ignored.
The method for defining the spatial position and spatial angle relation of each joint of the human body comprises the following steps:
s1 spatial position and angular relationship of the head 0, neck 1 and torso 2 joints;
s2, the spatial position and angular relationship of the neck 1, torso 2, left hip 9, and right hip 12 joints;
s3, spatial position and angular relationship of the shoulder, elbow and wrist joints of the arm.
The method for defining the relationship of each joint further includes the following steps:
s4, spatial position and angular relationship of foot, hip, knee and heel joints;
s5, the spatial position and angular relationship of torso 2, shoulders and elbow joints;
s6, torso 2, hip and knee joint spatial position and angular relationship.
The spatial position and angular relationship of the shoulder, elbow and wrist joints of the arm comprises the following identification and definition steps:
s1, since the left and right arms are symmetric, taking the right arm as an example for the spatial position and angular relationship of the right shoulder 6, right elbow 7 and right wrist 8 joints: if the person stands upright normally with both arms opened horizontally, the corresponding xyz coordinates should satisfy: x6 < x7 < x8, y6 = y7 = y8, z6 = z7 = z8;
s2, because the shooting angle and position of the camera have an influence, the above formulas may deviate, so a spatial angle relationship needs to be introduced.
In this embodiment, if (x6, y6, z6), (x7, y7, z7) and (x8, y8, z8) lie on a straight line, the arm is straight. If the three points form an angle, the arm can be considered bent; this can be understood as the included angle between the vector from the right elbow 7 to the right shoulder 6 and the vector from the right elbow 7 to the right wrist 8, and from the angles of these two vectors projected onto the x, y and z axes, the way the arm is bent (forward, backward, to the left or to the right) can be inferred.
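An equivalent three-points-on-a-line check for the right shoulder 6, right elbow 7 and right wrist 8 can be sketched with a cross product, which is zero exactly when the points are collinear; the function name, tolerance and coordinates are illustrative assumptions.

```python
def is_straight(a, b, c, tol=1e-6):
    """Three joints are collinear when the cross product of (b - a) and
    (c - a) is (near) zero, i.e. the limb from a to c is straight."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cross = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
    return sum(x * x for x in cross) <= tol

# Straight arm: shoulder 6, elbow 7 and wrist 8 share y and z.
right_shoulder = (0.2, 1.5, 0)
right_elbow = (0.5, 1.5, 0)
right_wrist = (0.8, 1.5, 0)
print(is_straight(right_shoulder, right_elbow, right_wrist))  # True
# Forearm raised: the arm is bent at the elbow.
bent_wrist = (0.5, 1.8, 0)
print(is_straight(right_shoulder, right_elbow, bent_wrist))   # False
```

The same test applies verbatim to the right hip 12, right knee 13 and right heel 14 of the leg in the next embodiment.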
Example 4:
as shown in fig. 1-2, the method for visually recognizing and defining human body motions, postures and joint relations comprises the following steps:
s1, the depth camera acquires color and depth data;
s2, separating the human body from the background to obtain a human body image and depth data;
s3, identifying human skeleton information through the human body image and the depth data;
and S4, defining the spatial position and spatial angle relation of each joint of the human body according to the skeleton information with the depth information.
Fifteen definable joint points are used for the human body: head 0, neck 1, torso 2, left shoulder 3, left elbow 4, left wrist 5, right shoulder 6, right elbow 7, right wrist 8, left hip 9, left knee 10, left heel 11, right hip 12, right knee 13 and right heel 14.
The method for defining human body actions, postures and joint relations through visual recognition analyzes the relations between the joints of the human body, so that the influence of individual differences and of the real-world position and angle of the captured image can be ignored.
The method for defining the spatial position and spatial angle relation of each joint of the human body comprises the following steps:
s1 spatial position and angular relationship of the head 0, neck 1 and torso 2 joints;
s2, the spatial position and angular relationship of the neck 1, torso 2, left hip 9, and right hip 12 joints;
s3, spatial position and angular relationship of the shoulder, elbow and wrist joints of the arm.
The method for defining the relationship of each joint further includes the following steps:
s4, spatial position and angular relationship of foot, hip, knee and heel joints;
s5, the spatial position and angular relationship of torso 2, shoulders and elbow joints;
s6, torso 2, hip and knee joint spatial position and angular relationship.
The spatial position and angular relationship of the hip, knee and heel joints of the leg comprises the following identification and definition steps:
s1, since the left and right legs are symmetric, taking the right leg as an example for the spatial position and angular relationship of the right hip 12, right knee 13 and right heel 14 joints: if the person stands upright normally, the corresponding xyz coordinates should satisfy: x12 = x13 = x14, y12 > y13 > y14, and z12 = z13 = z14;
s2, because the shooting angle and position of the camera have an influence, the above formulas may deviate, so a spatial angle relationship needs to be introduced.
In this embodiment, if (x12, y12, z12), (x13, y13, z13) and (x14, y14, z14) lie on a straight line, the leg is considered straight. If the three points form an angle, the leg can be considered bent; this can be understood as the included angle between the vector from the right knee 13 to the right hip 12 and the vector from the right knee 13 to the right heel 14, and from the angles of these two vectors projected onto the x, y and z axes, the way the leg is bent (forward, backward, to the left or to the right) can be inferred.
Example 5:
as shown in fig. 1-2, the method for visually recognizing and defining human body motions, postures and joint relations comprises the following steps:
s1, the depth camera acquires color and depth data;
s2, separating the human body from the background to obtain a human body image and depth data;
s3, identifying human skeleton information through the human body image and the depth data;
and S4, defining the spatial position and spatial angle relation of each joint of the human body according to the skeleton information with the depth information.
Fifteen definable joint points are used for the human body: head 0, neck 1, torso 2, left shoulder 3, left elbow 4, left wrist 5, right shoulder 6, right elbow 7, right wrist 8, left hip 9, left knee 10, left heel 11, right hip 12, right knee 13 and right heel 14.
The method for defining human body actions, postures and joint relations through visual recognition analyzes the relations between the joints of the human body, so that the influence of individual differences and of the real-world position and angle of the captured image can be ignored.
The method for defining the spatial position and spatial angle relation of each joint of the human body comprises the following steps:
s1 spatial position and angular relationship of the head 0, neck 1 and torso 2 joints;
s2, the spatial position and angular relationship of the neck 1, torso 2, left hip 9, and right hip 12 joints;
s3, spatial position and angular relationship of the shoulder, elbow and wrist joints of the arm.
The method for defining the relationship of each joint further includes the following steps:
s4, spatial position and angular relationship of foot, hip, knee and heel joints;
s5, the spatial position and angular relationship of torso 2, shoulders and elbow joints;
s6, torso 2, hip and knee joint spatial position and angular relationship.
The spatial position and angular relationship of the torso 2, shoulders and elbow joints includes the following identification definition steps:
s1, since the left and right arms are symmetric, taking the right arm as an example for the spatial position and angular relationship of the torso 2, right shoulder 6 and right elbow 7 joints: if the person stands upright normally with both arms opened horizontally, the corresponding xyz coordinates should satisfy: x2 < x6 < x7, y2 < y6 = y7, z2 = z6 = z7;
s2, because the shooting angle and position of the camera have an influence, the above formulas may deviate, so a spatial angle relationship needs to be introduced.
In the present embodiment, the included angle between the vector from the right shoulder 6 to the torso 2 and the vector from the right shoulder 6 to the right elbow 7 is considered. With both arms opened horizontally defined as the standard action of the human body, the angle between the two vectors is recorded as the standard; from the angles of the two vectors projected onto the x, y and z axes, the position of the arm relative to the body (extended forward, backward, to the left or to the right) can be inferred.
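Recording a standard-action angle and comparing a measured pose against it might look like the following sketch; the 90-degree standard for horizontally opened arms and the coordinates (with the torso point placed directly below the shoulder for simplicity) are illustrative assumptions.

```python
import math

def joint_angle(center, a, b):
    """Angle at `center` between the vectors center->a and center->b, degrees."""
    u = [a[i] - center[i] for i in range(3)]
    v = [b[i] - center[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

# Standard action: arms opened horizontally -- the angle at the right
# shoulder 6 between torso 2 and right elbow 7 is recorded as the standard.
STANDARD_SHOULDER_ANGLE = 90.0

torso = (0.2, 1.0, 0.0)          # simplified: directly below the shoulder
right_shoulder = (0.2, 1.5, 0.0)
right_elbow = (0.5, 1.5, 0.0)

measured = joint_angle(right_shoulder, torso, right_elbow)
deviation = abs(measured - STANDARD_SHOULDER_ANGLE)
print(round(measured, 1), round(deviation, 1))  # 90.0 0.0
```

The same comparison against a recorded standard angle applies to the torso, hip and knee joints in the next embodiment.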
Example 6:
as shown in fig. 1-2, the method for visually recognizing and defining human body motions, postures and joint relations comprises the following steps:
s1, the depth camera acquires color and depth data;
s2, separating the human body from the background to obtain a human body image and depth data;
s3, identifying human skeleton information through the human body image and the depth data;
and S4, defining the spatial position and spatial angle relation of each joint of the human body according to the skeleton information with the depth information.
The human joints are used for fifteen definable joint points including a head 0, neck 1, torso 2, left shoulder 3, left elbow 4, left wrist 5, right shoulder 6, right elbow 7, right wrist 8, left hip 9, left knee 10, left heel 11, right hip 12, right knee 13, and right heel 14.
Defining human body actions, postures and joint relations by visual recognition requires analyzing the relations between the joints of the human body, so that the influence of individual differences and of the position and angle of the captured image in the real world can be ignored.
The method for defining the spatial position and spatial angle relation of each joint of the human body comprises the following steps:
s1, the spatial position and angular relationship of the head 0, neck 1 and torso 2 joints;
s2, the spatial position and angular relationship of the neck 1, torso 2, left hip 9 and right hip 12 joints;
s3, the spatial position and angular relationship of the shoulder, elbow and wrist joints of the arm.
The method for defining the relationship of each joint further comprises the following steps:
s4, the spatial position and angular relationship of the hip, knee and heel joints of the leg;
s5, the spatial position and angular relationship of the torso 2, shoulder and elbow joints;
s6, the spatial position and angular relationship of the torso 2, hip and knee joints.
The spatial position and angular relationship of the torso 2, hip and knee joints comprises the following identification and definition steps:
s1, since the left and right legs are symmetric, take the right leg as an example and consider the spatial position and angular relationship of the torso 2, right hip 12 and right knee 13 joints. If the person stands upright normally, the corresponding xyz coordinates should satisfy: x2 < x12 = x13, y2 > y12 > y13, z2 = z12 = z13;
s2, because the shooting angle and position of the camera affect the measurement, the above formula may deviate, so the spatial angle relationship needs to be introduced.
In the present embodiment, the angle between the vector from the right hip 12 to the trunk 2 and the vector from the right hip 12 to the right knee 13 is considered. Through the definition of the standard motion of the human body, the angle between the two vectors is recorded as the standard; from the angles of the two vectors projected onto the x, y and z axes respectively, the position of the body and leg can be inferred, namely whether the leg is extended forwards, backwards, leftwards or rightwards.
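The direction inference from the axis projections can be sketched as follows. This is an illustrative reading, not the patent's implementation; the axis convention (x to the right, y up, z toward the camera taken as "forward") and the coordinates are assumptions.

```python
import numpy as np

def limb_direction(proximal, distal):
    """Classify a limb's extension from the dominant horizontal component
    of the proximal -> distal vector (assumed axes: x right, y up, z forward)."""
    v = distal - proximal
    if abs(v[2]) >= abs(v[0]):
        return "forward" if v[2] > 0 else "backward"
    return "right" if v[0] > 0 else "left"

# Hypothetical coordinates (metres) for the right hip and two knee positions.
r_hip       = np.array([0.15, 0.90, 2.00])   # joint 12
knee_raised = np.array([0.15, 0.60, 2.35])   # knee lifted toward the camera
knee_side   = np.array([0.55, 0.65, 2.05])   # leg swung out to the right
```

A production classifier would compare the projected angles against the recorded standard-action angles rather than against zero, but the dominant-component test shows how "forwards, backwards, leftwards or rightwards" can fall out of the same vectors.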
The above examples express only specific embodiments of the present invention, and while their description is comparatively specific and detailed, it is not to be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention.

Claims (11)

1. A method for defining human body actions, postures and joint relations by visual recognition, characterized in that it comprises the following steps:
s1, the depth camera acquires color and depth data;
s2, separating the human body from the background to obtain a human body image and depth data;
s3, identifying human skeleton information through the human body image and the depth data;
s4, defining the spatial position and spatial angle relation of each joint of the human body according to the skeleton information with the depth information.
2. The method for defining human body actions, postures and joint relations by visual recognition according to claim 1, wherein: fifteen definable joint points are used for the human body, comprising a head (0), a neck (1), a trunk (2), a left shoulder (3), a left elbow (4), a left wrist (5), a right shoulder (6), a right elbow (7), a right wrist (8), a left hip (9), a left knee (10), a left heel (11), a right hip (12), a right knee (13) and a right heel (14).
3. The method for defining human body actions, postures and joint relations by visual recognition according to claim 2, wherein: the method analyzes the relations between the joints of the human body, so that the influence of individual differences and of the position and angle of the captured image in the real world can be ignored.
4. The method for defining human body actions, postures and joint relations by visual recognition according to claim 3, wherein: the method for defining the spatial position and spatial angle relation of each joint of the human body comprises the following steps:
s1, the spatial position and angular relationship of the head (0), neck (1) and trunk (2) joints;
s2, the spatial position and angular relationship of the neck (1), trunk (2), left hip (9) and right hip (12) joints;
s3, the spatial position and angular relationship of the shoulder, elbow and wrist joints of the arm.
5. The method for defining human body actions, postures and joint relations by visual recognition according to claim 4, wherein: the method for defining the spatial position and spatial angle relation of each joint of the human body further comprises the following steps:
s4, the spatial position and angular relationship of the hip, knee and heel joints of the leg;
s5, the spatial position and angular relationship of the trunk (2), shoulder and elbow joints;
s6, the spatial position and angular relationship of the trunk (2), hip and knee joints.
6. The method for defining human body actions, postures and joint relations by visual recognition according to claim 5, wherein: the spatial position and angular relationship of the head (0), neck (1) and trunk (2) joints comprises the following identification and definition steps:
s1, if the person stands upright normally, the corresponding xyz coordinates should satisfy: x0 = x1 = x2, y0 > y1 > y2, z0 = z1 = z2;
s2, because the camera shooting angle and position affect the measurement, the formulas x0 = x1, y0 > y1 and z0 = z1 may deviate, so a spatial angle relationship needs to be introduced.
7. The method for defining human body actions, postures and joint relations by visual recognition according to claim 6, wherein: the spatial position and angular relationship of the neck (1), trunk (2), left hip (9) and right hip (12) joints comprises the following identification and definition steps:
s1, if the person stands upright normally, the x and z coordinates of the neck (1) and the trunk (2) are basically consistent, and x should be located midway between the left hip (9) and the right hip (12), so the spatial position relationship should satisfy: x9 < x1 ≈ x2 < x12, y1 ≈ y2 > y9 ≈ y12, z1 ≈ z2 ≈ z9 ≈ z12;
s2, because the shooting angle and position of the camera affect the measurement, the above formula may deviate, so the spatial angle relationship needs to be introduced.
8. The method for defining human body actions, postures and joint relations by visual recognition according to claim 7, wherein: the spatial position and angular relationship of the shoulder, elbow and wrist joints of the arm comprises the following identification and definition steps:
s1, since the left and right arms are symmetric, take the right arm as an example and consider the spatial position and angular relationship of the right shoulder (6), right elbow (7) and right wrist (8) joints. If the person stands upright normally with both arms opened horizontally, the corresponding xyz coordinates should satisfy: x6 < x7 < x8, y6 = y7 = y8, z6 = z7 = z8;
s2, because the shooting angle and position of the camera affect the measurement, the above formula may deviate, so the spatial angle relationship needs to be introduced.
9. The method for defining human body actions, postures and joint relations by visual recognition according to claim 8, wherein: the spatial position and angular relationship of the hip, knee and heel joints of the leg comprises the following identification and definition steps:
s1, since the left and right legs are symmetric, take the right leg as an example and consider the spatial position and angular relationship of the right hip (12), right knee (13) and right heel (14) joints. If the person stands upright normally, the corresponding xyz coordinates should satisfy: x12 = x13 = x14, y12 > y13 > y14, z12 = z13 = z14;
s2, because the shooting angle and position of the camera affect the measurement, the above formula may deviate, so the spatial angle relationship needs to be introduced.
10. The method for defining human body actions, postures and joint relations by visual recognition according to claim 9, wherein: the spatial position and angular relationship of the trunk (2), shoulder and elbow joints comprises the following identification and definition steps:
s1, since the left and right arms are symmetric, take the right arm as an example and consider the spatial position and angular relationship of the trunk (2), right shoulder (6) and right elbow (7) joints. If the person stands upright normally with both arms opened horizontally, the corresponding xyz coordinates should satisfy: x2 < x6 < x7, y2 < y6 = y7, z2 = z6 = z7;
s2, because the shooting angle and position of the camera affect the measurement, the above formula may deviate, so the spatial angle relationship needs to be introduced.
11. The method for defining human body actions, postures and joint relations by visual recognition according to claim 10, wherein: the spatial position and angular relationship of the trunk (2), hip and knee joints comprises the following identification and definition steps:
s1, since the left and right legs are symmetric, take the right leg as an example and consider the spatial position and angular relationship of the trunk (2), right hip (12) and right knee (13) joints. If the person stands upright normally, the corresponding xyz coordinates should satisfy: x2 < x12 = x13, y2 > y12 > y13, z2 = z12 = z13;
s2, because the shooting angle and position of the camera affect the measurement, the above formula may deviate, so the spatial angle relationship needs to be introduced.
CN202111461081.1A 2021-11-30 2021-11-30 Method for defining human body action, posture and each joint relation by visual recognition Withdrawn CN114129151A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111461081.1A CN114129151A (en) 2021-11-30 2021-11-30 Method for defining human body action, posture and each joint relation by visual recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111461081.1A CN114129151A (en) 2021-11-30 2021-11-30 Method for defining human body action, posture and each joint relation by visual recognition

Publications (1)

Publication Number Publication Date
CN114129151A true CN114129151A (en) 2022-03-04

Family

ID=80387154

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111461081.1A Withdrawn CN114129151A (en) 2021-11-30 2021-11-30 Method for defining human body action, posture and each joint relation by visual recognition

Country Status (1)

Country Link
CN (1) CN114129151A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114886417A (en) * 2022-05-10 2022-08-12 南京布尔特医疗技术发展有限公司 Intelligent safety nursing monitoring system and method
CN114886417B (en) * 2022-05-10 2023-09-22 南京布尔特医疗技术发展有限公司 Intelligent safety nursing monitoring system and method

Similar Documents

Publication Publication Date Title
CN111460875B (en) Image processing method and apparatus, image device, and storage medium
Viswakumar et al. Human gait analysis using OpenPose
CN111091732B (en) Cardiopulmonary resuscitation (CPR) instructor based on AR technology and guiding method
JP6124308B2 (en) Operation evaluation apparatus and program thereof
JP6425318B2 (en) Posture evaluation system
JP7126812B2 (en) Detection device, detection system, image processing device, detection method, image processing program, image display method, and image display system
Obdržálek et al. Real-time human pose detection and tracking for tele-rehabilitation in virtual reality
CN107930048B (en) Space somatosensory recognition motion analysis system and motion analysis method
JP2015186531A (en) Action information processing device and program
CN108564643A (en) Performance based on UE engines captures system
CN108098780A (en) A kind of new robot apery kinematic system
TWI652039B (en) Simple detection method and system for sarcopenia
CN114129151A (en) Method for defining human body action, posture and each joint relation by visual recognition
US20230240594A1 (en) Posture assessment program, posture assessment apparatus, posture assessment method, and posture assessment system
WO2021240848A1 (en) Three-dimensional avatar generation device, three-dimensional avatar generation method, and three-dimensional avatar generation program
WO2020147797A1 (en) Image processing method and apparatus, image device, and storage medium
CN115890671A (en) SMPL parameter-based multi-geometry human body collision model generation method and system
KR20000074633A (en) Real-time virtual character system
Cha et al. Mobile. Egocentric human body motion reconstruction using only eyeglasses-mounted cameras and a few body-worn inertial sensors
CN112990089B (en) Method for judging human motion gesture
CN113345552A (en) Method and system for intelligently assisting in guiding dance exercises and mobile terminal
CN211180839U (en) Motion teaching equipment and motion teaching system
CN116246041A (en) AR-based mobile phone virtual fitting system and method
CN113569775A (en) Monocular RGB input-based mobile terminal real-time 3D human body motion capture method and system, electronic equipment and storage medium
Dallaire-Côté et al. Animated self-avatars for motor rehabilitation applications that are biomechanically accurate, low-latency and easy to use

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220304