CN111914790B - Real-time human body rotation angle identification method based on double cameras under different scenes


Info

Publication number: CN111914790B
Authority: CN (China)
Application number: CN202010816048.5A
Other versions: CN111914790A (Chinese-language publication)
Prior art keywords: human body, key point, camera, rotation angle, acquired
Inventors: 陈安成, 李若铖, 陈林, 张康, 王权泳, 吴哲
Assignee (current and original): University of Electronic Science and Technology of China
Legal status: Active (granted)

Application filed by University of Electronic Science and Technology of China, with priority to CN202010816048.5A (priority date 2020-08-14). Published as CN111914790A on 2020-11-10; granted as CN111914790B on 2022-08-02.

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition


Abstract

The invention discloses a method for identifying the rotation angle of a human body in real time under different scenes based on two cameras; the method is low in implementation cost, easy to implement and not easily affected by the environment. Two cameras with identical parameters are arranged; real-time images of the human body are obtained from each camera; the coordinates of the human skeleton key points in the images are then obtained with a human key point identification technique; finally, the current relative rotation angle of the human body is calculated from the difference between the skeleton key point coordinates obtained by the left and right cameras. The method is applicable to different application scenes and acquires the human body rotation angle accurately in real time.

Description

Real-time human body rotation angle identification method based on double cameras under different scenes
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a real-time human body rotation angle identification method based on double cameras under different scenes.
Background
Current gesture recognition research is mainly based on the detection of human skeleton key points; in principle, the rotation angle of a joint or of the whole body can be calculated once the three-dimensional coordinates of the skeleton key points are available. Two-dimensional human key point identification is by now a mature technology, with high recognition accuracy in both single-person and multi-person scenes. Three-dimensional skeleton key point identification from a single picture also exists, but its accuracy is hard to guarantee; the key difficulty is obtaining accurate depth information.

The conventional approaches to depth image acquisition are binocular stereo vision, structured light, and lidar. Binocular stereo vision places low demands on camera hardware and is cheap, but it is very sensitive to ambient light, unsuitable for monotonous scenes lacking texture, computationally expensive, and limited in measurement range by the camera baseline, which greatly restricts its field of application.

The Kinect device from Microsoft builds an optical coding technique on top of structured light; thanks to its low price and real-time high-resolution depth capture, Kinect spread rapidly in consumer electronics. However, its effective range is only 800 mm to 4000 mm, and accurate depth values cannot be guaranteed for objects outside that range. Depth images captured by Kinect contain missing-depth regions, represented by a depth value of zero, where no depth could be measured; they also suffer from misalignment between depth-image edges and color-image edges, and from depth noise.

Lidar, with its wide ranging span and high measurement precision, is widely used in artificial intelligence systems for outdoor three-dimensional perception. However, the three-dimensional information it captures is uneven and sparse in the color-image coordinate system: since the number of laser-scanned points per unit period is limited, projecting the captured three-dimensional points into the color-image coordinate system yields a depth image whose values appear as discrete points, with unknown depth over many regions. This means some pixels in the color image have no corresponding depth information.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a method for identifying the human body rotation angle in real time under different scenes based on two cameras, which avoids the environmental sensitivity and high computational complexity of existing approaches.
To achieve the above purpose, the invention adopts the following technical scheme. The method for identifying the human body rotation angle in real time under different scenes based on two cameras comprises the following steps:
S1, arranging on one wall two cameras with identical parameters whose connecting line is horizontal;
S2, acquiring human body images in real time through the two cameras, and obtaining human skeleton key point coordinates with a human key point identification algorithm, the skeleton key point coordinates comprising left shoulder key point coordinates and right shoulder key point coordinates;
S3, obtaining a first steering identification parameter K1 from the human skeleton key point coordinates acquired by the first camera, and a second steering identification parameter K2 from the human skeleton key point coordinates acquired by the second camera;
S4, judging whether the abscissa of the left shoulder key point acquired by the first camera is greater than the abscissa of the right shoulder key point acquired by the first camera; if so, proceeding to step S5, otherwise to step S6;
S5, judging whether the abscissa of the left shoulder key point acquired by the second camera is greater than the abscissa of the right shoulder key point acquired by the second camera; if so, proceeding to step S7, otherwise to step S8;
S6, judging whether the abscissa of the left shoulder key point acquired by the second camera is greater than the abscissa of the right shoulder key point acquired by the second camera; if so, proceeding to step S10, otherwise to step S9;
S7, judging whether the first steering identification parameter K1 is greater than the second steering identification parameter K2; if so, judging the process to be a first left-turn process and calculating the rotation angle to obtain the human body rotation angle identification result, otherwise judging it to be a first right-turn process and calculating the rotation angle to obtain the human body rotation angle identification result;
S8, judging that the current human body turning process is a second left-turn process, and calculating the rotation angle to obtain the human body rotation angle identification result;
S9, judging whether the first steering identification parameter K1 is greater than the second steering identification parameter K2; if so, judging the process to be a third right-turn process and calculating the rotation angle to obtain the human body rotation angle identification result, otherwise judging it to be a third left-turn process and calculating the rotation angle to obtain the human body rotation angle identification result;
S10, judging that the current human body turning process is a second right-turn process, and calculating the rotation angle to obtain the human body rotation angle identification result.
Further, in step S1, the included angle between each camera's lens axis and the wall is 90 - α degrees; the two lens axes intersect and are parallel to the horizontal plane; the distance between the two lenses is d meters, where α is a constant and α ∈ (0, 45).
Further, the two cameras in step S2 comprise a first camera and a second camera; the human skeleton key point coordinates acquired by the first camera comprise the left shoulder key point coordinates (x_lj1, y_lj1), right shoulder key point coordinates (x_rj1, y_rj1), left hip key point coordinates (x_lk1, y_lk1) and right hip key point coordinates (x_rk1, y_rk1); those acquired by the second camera comprise the left shoulder key point coordinates (x_lj2, y_lj2), right shoulder key point coordinates (x_rj2, y_rj2), left hip key point coordinates (x_lk2, y_lk2) and right hip key point coordinates (x_rk2, y_rk2).
Further, in step S3, the first steering identification parameter K1 obtained from the human skeleton key point coordinates acquired by the first camera is:

K1 = D1/D2 = (x_lj1 - x_rj1)/(y_lk1 - y_lj1)

wherein D1 represents the difference of the abscissas between the projections on the wall of the left shoulder key point and the right shoulder key point acquired by the first camera, D2 represents the difference of the ordinates between the projections on the wall of the left hip key point and the left shoulder key point acquired by the first camera, x_lj1 represents the abscissa of the left shoulder key point lj acquired by the first camera, x_rj1 the abscissa of the right shoulder key point rj acquired by the first camera, y_lk1 the ordinate of the left hip key point lk acquired by the first camera, and y_lj1 the ordinate of the left shoulder key point lj acquired by the first camera;

in step S3, the second steering identification parameter K2 obtained from the human skeleton key point coordinates acquired by the second camera is:

K2 = D3/D4 = (x_lj2 - x_rj2)/(y_lk2 - y_lj2)

wherein D3 represents the difference of the abscissas between the projections on the wall of the left shoulder key point and the right shoulder key point acquired by the second camera, D4 represents the difference of the ordinates between the projections on the wall of the left hip key point and the left shoulder key point acquired by the second camera, x_lj2 represents the abscissa of the left shoulder key point lj acquired by the second camera, x_rj2 the abscissa of the right shoulder key point rj acquired by the second camera, y_lk2 the ordinate of the left hip key point lk acquired by the second camera, and y_lj2 the ordinate of the left shoulder key point lj acquired by the second camera.
Further, the first left-turn process in step S7 is: the process of the human body rotating leftwards within (0, 90 - α) degrees starting from facing the wall; the first right-turn process is: the process of rotating rightwards within (0, 90 - α) degrees starting from facing the wall;
step S7 comprises the following sub-steps:
S71, judging whether the first steering identification parameter K1 is greater than the second steering identification parameter K2; if so, proceeding to step S72, otherwise to step S73;
S72, judging the current process to be the first left-turn process, and calculating the rotation angle as:
angle = arccos(K2/Kmax)/PI*180 - α
S73, judging the current process to be the first right-turn process, and calculating the rotation angle as:
angle = arccos(K1/Kmax)/PI*180 - α
wherein Kmax represents the maximum distance between the left and right shoulder key points, and PI represents the circle constant π.
Further, the second left-turn process in step S8 is: the process of turning left within (90 - α, 90 + α) degrees on the left side of the human body, taking the body facing the wall as reference;
in step S8 the rotation angle is calculated as:
angle = arccos(K1/Kmax)/PI*180 + α.
Further, the third left-turn process in step S9 is: the process of turning left within (90 + α, 180) degrees on the left side of the human body, taking the body facing the wall as reference; the third right-turn process is: the process of turning right within (90 + α, 180) degrees on the right side of the human body, taking the body facing the wall as reference;
step S9 comprises the following sub-steps:
S91, judging whether the first steering identification parameter K1 is greater than the second steering identification parameter K2; if so, proceeding to step S92, otherwise to step S93;
S92, judging the current process to be the third left-turn process, and calculating the rotation angle as:
angle = 180 - arccos(K1/Kmax)/PI*180 + α
S93, judging the current process to be the third right-turn process, and calculating the rotation angle as:
angle = 180 - arccos(K2/Kmax)/PI*180 + α.
Further, the second right-turn process in step S10 is: the process of turning right within (90 - α, 90 + α) degrees on the right side of the human body, taking the body facing the wall as reference;
in step S10 the rotation angle is calculated as:
angle = arccos(K2/Kmax)/PI*180 + α.
Further, the maximum distance Kmax between the left shoulder and right shoulder key points is initialised to 0.65. Before each calculation of the rotation angle, Kmax is updated as follows: judge whether the first steering identification parameter K1 is greater than Kmax; if so, set Kmax to the value of K1, otherwise leave Kmax unchanged, completing the update of Kmax.
The invention has the following beneficial effects:
(1) The invention provides a method for identifying the rotation angle of a human body in real time under different scenes based on two cameras that is low in implementation cost, easy to implement and not easily affected by the environment.
(2) The invention arranges two cameras with identical parameters, obtains real-time images of the human body from each, obtains the coordinates of the human skeleton key points in the images with a human key point identification technique, and finally calculates the current relative rotation angle of the human body from the difference between the skeleton key point coordinates obtained by the left and right cameras.
(3) The invention is applicable to different application scenes and acquires the human body rotation angle accurately in real time.
Drawings
Fig. 1 is a flow chart of a real-time human body rotation angle identification method based on two cameras in different scenes.
Fig. 2 is a schematic view of the installation of the camera in the present invention.
Detailed Description
The following description of embodiments of the present invention is provided to facilitate understanding by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of these embodiments; to those skilled in the art, various changes are possible without departing from the spirit and scope of the invention as defined in the appended claims, and all matter produced using the inventive concept is protected.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the method for identifying the human body rotation angle in real time under different scenes based on two cameras comprises the following steps:
S1, arranging on one wall two cameras with identical parameters whose connecting line is horizontal;
S2, acquiring human body images in real time through the two cameras, and obtaining human skeleton key point coordinates with a human key point identification algorithm, the skeleton key point coordinates comprising left shoulder key point coordinates and right shoulder key point coordinates;
S3, obtaining a first steering identification parameter K1 from the human skeleton key point coordinates acquired by the first camera, and a second steering identification parameter K2 from the human skeleton key point coordinates acquired by the second camera;
S4, judging whether the abscissa of the left shoulder key point acquired by the first camera is greater than the abscissa of the right shoulder key point acquired by the first camera; if so, proceeding to step S5, otherwise to step S6;
S5, judging whether the abscissa of the left shoulder key point acquired by the second camera is greater than the abscissa of the right shoulder key point acquired by the second camera; if so, proceeding to step S7, otherwise to step S8;
S6, judging whether the abscissa of the left shoulder key point acquired by the second camera is greater than the abscissa of the right shoulder key point acquired by the second camera; if so, proceeding to step S10, otherwise to step S9;
S7, judging whether the first steering identification parameter K1 is greater than the second steering identification parameter K2; if so, judging the process to be a first left-turn process and calculating the rotation angle to obtain the human body rotation angle identification result, otherwise judging it to be a first right-turn process and calculating the rotation angle to obtain the human body rotation angle identification result;
S8, judging that the current human body turning process is a second left-turn process, and calculating the rotation angle to obtain the human body rotation angle identification result;
S9, judging whether the first steering identification parameter K1 is greater than the second steering identification parameter K2; if so, judging the process to be a third right-turn process and calculating the rotation angle to obtain the human body rotation angle identification result, otherwise judging it to be a third left-turn process and calculating the rotation angle to obtain the human body rotation angle identification result;
S10, judging that the current human body turning process is a second right-turn process, and calculating the rotation angle to obtain the human body rotation angle identification result.
As shown in Fig. 2, in step S1 the included angle between each camera's lens axis and the wall is 90 - α degrees; the two lens axes intersect and are parallel to the horizontal plane; the distance between the two lenses is d meters, where α is a constant and α ∈ (0, 45).
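A side note on this geometry (our observation, not stated in the patent): each lens axis deviates from the wall normal by α degrees and the lenses are d meters apart, so the two axes cross at a depth of (d/2)/tan(α) in front of the wall, roughly where the tracked person is expected to stand. A minimal sketch:

```python
import math

def axes_intersection_depth(d: float, alpha_deg: float) -> float:
    """Depth in meters, measured from the wall, at which the two lens axes cross.

    Each axis makes 90 - alpha degrees with the wall (alpha degrees with the
    wall normal), and the two lenses are d meters apart.
    """
    return (d / 2.0) / math.tan(math.radians(alpha_deg))

# Example: lenses 2 m apart with alpha = 20 degrees -> axes cross about 2.75 m out.
print(round(axes_intersection_depth(2.0, 20.0), 2))
```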
In step S2, the two cameras comprise a first camera and a second camera; the human skeleton key point coordinates acquired by the first camera comprise the left shoulder key point coordinates (x_lj1, y_lj1), right shoulder key point coordinates (x_rj1, y_rj1), left hip key point coordinates (x_lk1, y_lk1) and right hip key point coordinates (x_rk1, y_rk1); those acquired by the second camera comprise the left shoulder key point coordinates (x_lj2, y_lj2), right shoulder key point coordinates (x_rj2, y_rj2), left hip key point coordinates (x_lk2, y_lk2) and right hip key point coordinates (x_rk2, y_rk2).
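The patent does not prescribe a particular key point identification algorithm. As one illustration only, a sketch that reads the four torso key points from a single camera frame with the off-the-shelf MediaPipe Pose model (an assumed choice, not the patent's) could look like this:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def torso_keypoints(frame_bgr):
    """Return ((x_lj, y_lj), (x_rj, y_rj), (x_lk, y_lk), (x_rk, y_rk)) in
    normalized image coordinates, or None if no person is detected."""
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None
    lm = results.pose_landmarks.landmark
    P = mp_pose.PoseLandmark
    return tuple((lm[p].x, lm[p].y) for p in
                 (P.LEFT_SHOULDER, P.RIGHT_SHOULDER, P.LEFT_HIP, P.RIGHT_HIP))
```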
In step S3, the first steering identification parameter K1 obtained from the human skeleton key point coordinates acquired by the first camera is:

K1 = D1/D2 = (x_lj1 - x_rj1)/(y_lk1 - y_lj1)

wherein D1 represents the difference of the abscissas between the projections on the wall of the left shoulder key point and the right shoulder key point acquired by the first camera, D2 represents the difference of the ordinates between the projections on the wall of the left hip key point and the left shoulder key point acquired by the first camera, x_lj1 represents the abscissa of the left shoulder key point lj acquired by the first camera, x_rj1 the abscissa of the right shoulder key point rj acquired by the first camera, y_lk1 the ordinate of the left hip key point lk acquired by the first camera, and y_lj1 the ordinate of the left shoulder key point lj acquired by the first camera.

In step S3, the second steering identification parameter K2 obtained from the human skeleton key point coordinates acquired by the second camera is:

K2 = D3/D4 = (x_lj2 - x_rj2)/(y_lk2 - y_lj2)

wherein D3 represents the difference of the abscissas between the projections on the wall of the left shoulder key point and the right shoulder key point acquired by the second camera, D4 represents the difference of the ordinates between the projections on the wall of the left hip key point and the left shoulder key point acquired by the second camera, x_lj2 represents the abscissa of the left shoulder key point lj acquired by the second camera, x_rj2 the abscissa of the right shoulder key point rj acquired by the second camera, y_lk2 the ordinate of the left hip key point lk acquired by the second camera, and y_lj2 the ordinate of the left shoulder key point lj acquired by the second camera.
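Under the K = D1/D2 reading of the formulas above (a reconstruction from the stated definitions of D1 through D4, since the original equation images are not reproduced here), each steering identification parameter reduces to a few lines; taking the absolute value of the shoulder difference keeps the later arccos defined and is our implementation choice:

```python
def steering_parameter(left_shoulder, right_shoulder, left_hip):
    """K = D1/D2: projected shoulder width over projected torso height.

    Arguments are (x, y) key points from one camera; dividing by the torso
    height makes K insensitive to the person's distance from the camera.
    """
    d1 = left_shoulder[0] - right_shoulder[0]  # abscissa difference of the shoulders
    d2 = left_hip[1] - left_shoulder[1]        # ordinate difference, hip vs. shoulder
    return abs(d1) / d2                        # abs() is our choice, see lead-in

# K1 = steering_parameter(lj1, rj1, lk1); K2 = steering_parameter(lj2, rj2, lk2)
```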
The first left-turn process in step S7 is: the process of the human body rotating leftwards within (0, 90 - α) degrees starting from facing the wall; the first right-turn process is: the process of rotating rightwards within (0, 90 - α) degrees starting from facing the wall.
Step S7 comprises the following sub-steps:
S71, judging whether the first steering identification parameter K1 is greater than the second steering identification parameter K2; if so, proceeding to step S72, otherwise to step S73;
S72, judging the current process to be the first left-turn process, and calculating the rotation angle as:
angle = arccos(K2/Kmax)/PI*180 - α
S73, judging the current process to be the first right-turn process, and calculating the rotation angle as:
angle = arccos(K1/Kmax)/PI*180 - α
wherein Kmax represents the maximum distance between the left and right shoulder key points, and PI represents the circle constant π.
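A quick numeric check of the first-left-turn formula, with assumed readings K2 = 0.46, Kmax = 0.65 and α = 20 degrees (all three values are ours, for illustration):

```python
import math

angle = math.acos(0.46 / 0.65) / math.pi * 180 - 20
print(round(angle, 1))  # ~25.0: about 25 degrees left of facing the wall
```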
The second left-turn process in step S8 is: the process of turning left within (90 - α, 90 + α) degrees on the left side of the human body, taking the body facing the wall as reference.
In step S8 the rotation angle is calculated as:
angle = arccos(K1/Kmax)/PI*180 + α.
the third left turn process in step S9 specifically includes: a process of turning left within the range of (90+ alpha, 180) degrees at the left side of the human body by taking the human body right opposite to the wall as reference; the third right-turn process specifically comprises: a process of turning right within the range of (90+ alpha, 180) degrees at the right side of the human body by taking the human body right opposite to the wall as reference;
the step S9 includes the following sub-steps:
s91, judging the first steering identification parameter K 1 Whether or not it is greater than the second steering identification parameter K 2 If yes, go to step S92, otherwise go to step S93;
s92, judging that the steering process is the third left-turning process, and calculating a turning angle as:
angle=180-arccos(K 1 /K max )/PI*180+α
s93, judging that the steering process is the third right steering process, and calculating a rotation angle as:
angle=180-arccos(K 2 /K max )/PI*180+α。
the second right turn process in step S10 specifically includes: a process of turning right within the range of (90-alpha, 90+ alpha) degrees at the right side of the human body by taking the human body right facing the wall as a reference;
in step S10, the calculated rotation angle is:
angle=arccos(K 2 /K max )/PI*180+α。
the maximum distance K between the key points of the left shoulder and the right shoulder max Initialisation to 0.65, before each said calculation of the rotation angle, for a maximum distance K max Updating, wherein the updating step comprises the following steps: judging a first steering identification parameter K 1 Whether or not it is greater than the maximum distance K max If yes, let the maximum distance K max Is the first steering identification parameter K 1 Count value ofOtherwise, the maximum distance K is not changed max Count value of (c), completion maximum distance K max And (4) updating.

Claims (6)

1. A real-time human body rotation angle identification method based on two cameras under different scenes is characterized by comprising the following steps:
S1, arranging on one wall two cameras with identical parameters whose connecting line is horizontal;
S2, acquiring human body images in real time through the two cameras, and obtaining human skeleton key point coordinates with a human key point identification algorithm, the skeleton key point coordinates comprising left shoulder key point coordinates and right shoulder key point coordinates;
S3, obtaining a first steering identification parameter K1 from the human skeleton key point coordinates acquired by the first camera, and a second steering identification parameter K2 from the human skeleton key point coordinates acquired by the second camera;
S4, judging whether the abscissa of the left shoulder key point acquired by the first camera is greater than the abscissa of the right shoulder key point acquired by the first camera; if so, proceeding to step S5, otherwise to step S6;
S5, judging whether the abscissa of the left shoulder key point acquired by the second camera is greater than the abscissa of the right shoulder key point acquired by the second camera; if so, proceeding to step S7, otherwise to step S8;
S6, judging whether the abscissa of the left shoulder key point acquired by the second camera is greater than the abscissa of the right shoulder key point acquired by the second camera; if so, proceeding to step S10, otherwise to step S9;
S7, judging whether the first steering identification parameter K1 is greater than the second steering identification parameter K2; if so, judging the process to be a first left-turn process and calculating the rotation angle to obtain the human body rotation angle identification result, otherwise judging it to be a first right-turn process and calculating the rotation angle to obtain the human body rotation angle identification result;
S8, judging that the current human body turning process is a second left-turn process, and calculating the rotation angle to obtain the human body rotation angle identification result;
S9, judging whether the first steering identification parameter K1 is greater than the second steering identification parameter K2; if so, judging the process to be a third right-turn process and calculating the rotation angle to obtain the human body rotation angle identification result, otherwise judging it to be a third left-turn process and calculating the rotation angle to obtain the human body rotation angle identification result;
S10, judging that the current human body turning process is a second right-turn process, and calculating the rotation angle to obtain the human body rotation angle identification result;
in step S1, the included angle between each camera's lens axis and the wall is 90 - α degrees, the two lens axes intersect and are parallel to the horizontal plane, the distance between the two lenses is d meters, α is a constant, and α ∈ (0, 45);
in step S2, the two cameras comprise a first camera and a second camera; the human skeleton key point coordinates acquired by the first camera comprise the left shoulder key point coordinates (x_lj1, y_lj1), right shoulder key point coordinates (x_rj1, y_rj1), left hip key point coordinates (x_lk1, y_lk1) and right hip key point coordinates (x_rk1, y_rk1); those acquired by the second camera comprise the left shoulder key point coordinates (x_lj2, y_lj2), right shoulder key point coordinates (x_rj2, y_rj2), left hip key point coordinates (x_lk2, y_lk2) and right hip key point coordinates (x_rk2, y_rk2);
in step S3, the first steering identification parameter K1 obtained from the human skeleton key point coordinates acquired by the first camera is:

K1 = D1/D2 = (x_lj1 - x_rj1)/(y_lk1 - y_lj1)

wherein D1 represents the difference of the abscissas between the projections on the wall of the left shoulder key point and the right shoulder key point acquired by the first camera, D2 represents the difference of the ordinates between the projections on the wall of the left hip key point and the left shoulder key point acquired by the first camera, x_lj1 represents the abscissa of the left shoulder key point lj acquired by the first camera, x_rj1 the abscissa of the right shoulder key point rj acquired by the first camera, y_lk1 the ordinate of the left hip key point lk acquired by the first camera, and y_lj1 the ordinate of the left shoulder key point lj acquired by the first camera;

in step S3, the second steering identification parameter K2 obtained from the human skeleton key point coordinates acquired by the second camera is:

K2 = D3/D4 = (x_lj2 - x_rj2)/(y_lk2 - y_lj2)

wherein D3 represents the difference of the abscissas between the projections on the wall of the left shoulder key point and the right shoulder key point acquired by the second camera, D4 represents the difference of the ordinates between the projections on the wall of the left hip key point and the left shoulder key point acquired by the second camera, x_lj2 represents the abscissa of the left shoulder key point lj acquired by the second camera, x_rj2 the abscissa of the right shoulder key point rj acquired by the second camera, y_lk2 the ordinate of the left hip key point lk acquired by the second camera, and y_lj2 the ordinate of the left shoulder key point lj acquired by the second camera.
2. The method for identifying the human body rotation angle in real time under different scenes based on two cameras according to claim 1, wherein the first left-turn process in step S7 is: the process of the human body rotating leftwards within (0, 90 - α) degrees starting from facing the wall; and the first right-turn process is: the process of rotating rightwards within (0, 90 - α) degrees starting from facing the wall;
step S7 comprises the following sub-steps:
S71, judging whether the first steering identification parameter K1 is greater than the second steering identification parameter K2; if so, proceeding to step S72, otherwise to step S73;
S72, judging the current process to be the first left-turn process, and calculating the rotation angle as:
angle = arccos(K2/Kmax)/PI*180 - α
S73, judging the current process to be the first right-turn process, and calculating the rotation angle as:
angle = arccos(K1/Kmax)/PI*180 - α
wherein Kmax represents the maximum distance between the left and right shoulder key points, and PI represents the circle constant π.
3. The method for identifying the human body rotation angle in real time under different scenes based on two cameras according to claim 2, wherein the second left-turn process in step S8 is: the process of turning left within (90 - α, 90 + α) degrees on the left side of the human body, taking the body facing the wall as reference;
in step S8 the rotation angle is calculated as:
angle = arccos(K1/Kmax)/PI*180 + α.
4. The method for identifying the human body rotation angle in real time under different scenes based on two cameras according to claim 2, wherein the third left-turn process in step S9 is: the process of turning left within (90 + α, 180) degrees on the left side of the human body, taking the body facing the wall as reference; and the third right-turn process is: the process of turning right within (90 + α, 180) degrees on the right side of the human body, taking the body facing the wall as reference;
step S9 comprises the following sub-steps:
S91, judging whether the first steering identification parameter K1 is greater than the second steering identification parameter K2; if so, proceeding to step S92, otherwise to step S93;
S92, judging the current process to be the third left-turn process, and calculating the rotation angle as:
angle = 180 - arccos(K1/Kmax)/PI*180 + α
S93, judging the current process to be the third right-turn process, and calculating the rotation angle as:
angle = 180 - arccos(K2/Kmax)/PI*180 + α.
5. The method for identifying the human body rotation angle in real time under different scenes based on two cameras according to claim 2, wherein the second right-turn process in step S10 is: the process of turning right within (90 - α, 90 + α) degrees on the right side of the human body, taking the body facing the wall as reference;
in step S10 the rotation angle is calculated as:
angle = arccos(K2/Kmax)/PI*180 + α.
6. The method for identifying the human body rotation angle under different scenes based on two cameras according to claim 2, wherein the maximum distance Kmax between the left shoulder and right shoulder key points is initialised to 0.65, and before each calculation of the rotation angle Kmax is updated as follows: judging whether the first steering identification parameter K1 is greater than the maximum distance Kmax; if so, setting Kmax to the value of K1, otherwise leaving Kmax unchanged, completing the update of Kmax.
CN202010816048.5A, filed 2020-08-14 (priority date 2020-08-14): Real-time human body rotation angle identification method based on double cameras under different scenes. Status: Active. Granted as CN111914790B (en).

Priority Applications (1)

CN202010816048.5A - priority date 2020-08-14, filing date 2020-08-14 - Real-time human body rotation angle identification method based on double cameras under different scenes

Publications (2)

CN111914790A (en) - 2020-11-10
CN111914790B (en) - 2022-08-02

Family

ID=73284641

Family Applications (1)

CN202010816048.5A (Active) - priority date 2020-08-14, filing date 2020-08-14 - Real-time human body rotation angle identification method based on double cameras under different scenes - CN111914790B (en)

Country Status (1)

CN: CN111914790B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112488005B (en) * 2020-12-04 2022-10-14 临沂市新商网络技术有限公司 On-duty monitoring method and system based on human skeleton recognition and multi-angle conversion
CN113435364B (en) * 2021-06-30 2023-09-26 平安科技(深圳)有限公司 Head rotation detection method, electronic device, and storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DK179948B1 (en) * 2017-05-16 2019-10-22 Apple Inc. Recording and sending Emoji

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004294657A (en) * 2003-03-26 2004-10-21 Canon Inc Lens device
CN102855379A (en) * 2012-05-30 2013-01-02 无锡掌游天下科技有限公司 Skeleton joint data based standardizing method
CN106296720A (en) * 2015-05-12 2017-01-04 株式会社理光 Human body based on binocular camera is towards recognition methods and system
CN110495889A (en) * 2019-07-04 2019-11-26 平安科技(深圳)有限公司 Postural assessment method, electronic device, computer equipment and storage medium
CN110969114A (en) * 2019-11-28 2020-04-07 四川省骨科医院 Human body action function detection system, detection method and detector

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
C. Weerasinghe, P. Ogunbona, Wanqing Li. 2D to pseudo-3D conversion of "head and shoulder" images using feature based parametric disparity maps. Proceedings 2001 International Conference on Image Processing, 2002. *
Behavior recognition based on 3D skeleton and MCRF model; Liu Hao, Guo Li, et al.; Journal of University of Science and Technology of China; 2014-04-15; full text *

Also Published As

Publication number Publication date
CN111914790A (en) 2020-11-10

Similar Documents

Publication Publication Date Title
CN108154550B (en) RGBD camera-based real-time three-dimensional face reconstruction method
CN107292965B (en) Virtual and real shielding processing method based on depth image data stream
CN111062873B (en) Parallax image splicing and visualization method based on multiple pairs of binocular cameras
US11521311B1 (en) Collaborative disparity decomposition
CN106780618B (en) Three-dimensional information acquisition method and device based on heterogeneous depth camera
US20200334842A1 (en) Methods, devices and computer program products for global bundle adjustment of 3d images
US20130335535A1 (en) Digital 3d camera using periodic illumination
CN101697233A (en) Structured light-based three-dimensional object surface reconstruction method
CN111914790B (en) Real-time human body rotation angle identification method based on double cameras under different scenes
CN110838164A (en) Monocular image three-dimensional reconstruction method, system and device based on object point depth
Taguchi et al. SLAM using both points and planes for hand-held 3D sensors
CN113379815A (en) Three-dimensional reconstruction method and device based on RGB camera and laser sensor and server
Wan et al. A study in 3D-reconstruction using kinect sensor
Ruchay et al. Accuracy analysis of 3D object reconstruction using RGB-D sensor
KR20050061115A (en) Apparatus and method for separating object motion from camera motion
Ringaby et al. Scan rectification for structured light range sensors with rolling shutters
CN114935316B (en) Standard depth image generation method based on optical tracking and monocular vision
Cai et al. Assembling convolution neural networks for automatic viewing transformation
Howells et al. Depth maps comparisons from monocular images by MiDaS convolutional neural networks and dense prediction transformers
Yin et al. Motion detection and tracking using the 3D-camera
WO2022253043A1 (en) Facial deformation compensation method for facial depth image, and imaging apparatus and storage medium
CN110132225B (en) Monocular oblique non-coaxial lens distance measuring device
Huang et al. AR Mapping: Accurate and Efficient Mapping for Augmented Reality
Zhu et al. Research on infrared and visible images registration algorithm based on graph
Almeida et al. Incremental reconstruction approach for telepresence or ar applications

Legal Events

Code - Description
PB01 - Publication
SE01 - Entry into force of request for substantive examination
GR01 - Patent grant