CN112132883A - Human neck flexibility measurement system and method based on depth camera - Google Patents

Human neck flexibility measurement system and method based on depth camera

Info

Publication number
CN112132883A
CN112132883A (application CN202010963092.9A)
Authority
CN
China
Prior art keywords
point cloud
depth
camera
module
color image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010963092.9A
Other languages
Chinese (zh)
Inventor
张静 (Zhang Jing)
褚智威 (Chu Zhiwei)
杨少毅 (Yang Shaoyi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Weiplastic Intelligent Technology Co ltd
Original Assignee
Xi'an Weiplastic Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Weiplastic Intelligent Technology Co ltd
Priority to CN202010963092.9A
Publication of CN112132883A
Pending legal-status Current

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/04: Context-preserving transformations, e.g. by using an importance map
    • G06T3/053: Detail-in-context presentations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/90: Dynamic range modification of images or parts thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10024: Color image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a human neck flexibility measurement system and method based on a depth camera. The system comprises a host, a depth camera, a controller, a guide rail and a turntable; the host controls, through the controller, a sliding part mounted on the guide rail to move up and down and the turntable to rotate. The point cloud algorithm module calculates the rotation angle of the neck of the tested person from the acquired data and displays the test result on the display screen. Standing on the turntable and following the guidance, the tested person can obtain neck flexibility data accurately and quickly without any additional tools.

Description

Human neck flexibility measurement system and method based on depth camera
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a human neck flexibility measuring system and method based on a depth camera.
Background
With the continuous improvement of living standards, people pay increasing attention to their own health, and many traditional fitness businesses are beginning to look for entry points involving three-dimensional vision. In particular, because of the long-term use of intelligent terminals, the flexibility of the neck has become a widespread health concern. As shown in fig. 1, neck flexibility includes left and right lateral bending of the neck, left and right rotation of the neck, and forward and backward bending of the neck. However, traditional manual measurement of human neck flexibility is very time-consuming, the measuring tools are complicated to use, and improper operation easily causes large errors. In addition, the industry has a demand for human body data. Combining existing three-dimensional data acquisition technology, it is expected that neck flexibility data can be acquired automatically by means of computer vision, so as to diagnose whether the neck has a potentially poor posture, help formulate a suitable fitness plan, and correct the poor posture.
In view of the above, the present inventors have studied a system and method for measuring human neck flexibility based on a depth camera to solve the above problems.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a human neck flexibility measurement system and method based on a depth camera.
The technical problem to be solved by the invention is realized by the following technical scheme: a human neck flexibility measurement system based on a depth camera comprises a hardware unit and a software unit, wherein the hardware unit comprises a host, the depth camera, a controller, a guide rail and a turntable;
the software unit comprises a background removing module, a guiding module, an OpenVINO face key point detecting module and a point cloud algorithm module;
the host controls, through the controller, a sliding part mounted on the guide rail to reciprocate up and down and the turntable to rotate; the turntable is for the tested person to stand on; the depth camera is fixedly connected to the sliding part of the guide rail and is used for acquiring a depth image and a color image of the human face;
the background removal module, the OpenVINO face key point detection module and the point cloud algorithm module are arranged in the host. The background removal module removes the background parts of the depth image and the color image in real time and keeps only the human body part; when the proportion of the blank part in the depth image reaches 1/2, that is, when the line of sight of the face is level with the depth camera, the host stops the guide rail movement through the controller. The guiding module guides the tested person to perform the relevant rotation actions. The OpenVINO face key point detection module obtains key point data of the face color image in the static and rotating states respectively. The point cloud algorithm module calculates the rotation angle of the neck of the tested person from the key point data, and the result is displayed on the display screen of the host.
Further, the human neck flexibility measuring method based on the depth camera comprises the following steps:
step one, collecting a face depth image and a color image of a detected person
The tested person stands still on the turntable; the host adjusts the turntable and the depth camera through the controller so that the depth camera faces the face of the tested person at a distance of 1.1 m, and a depth image and a color image are acquired at the same time;
step two, background removal
Establishing a cylindrical equation at a distance of 1.1 m, wherein the inside of the cylinder is a measured human body range, and removing the content of the depth image and the color image in the space outside the cylinder through the cylindrical equation;
step three, aligning the depth image and the color image
Corresponding the pixels of the depth image with the background removed in the step two to the pixels of the color image one by one;
step four, extracting key points of the human face and converting the key points into three-dimensional point cloud
Extracting face key points in the color image by using an OpenVINO face key point detection module, and converting two-dimensional key points in the color image into three-dimensional point cloud;
step five, constructing reference point cloud
Step six, measuring according to guidance and obtaining results
Guiding the tested person to do relevant rotation action through a guiding module, acquiring a depth image and a color image in real time, executing steps two to four on the depth image and the color image acquired in real time, registering the point cloud acquired in real time and the reference point cloud constructed in the step five, acquiring a rotation and translation matrix, and converting the rotation and translation matrix into an Euler angle to calculate a relative angle relative to the reference point cloud.
Further, the specific process of the second step is as follows:
firstly, traversing each pixel in the depth image acquired in the first step, and converting the depth image into point cloud by using the following formula:
x_w = (u - c_x) · z_c / f_x,  y_w = (v - c_y) · z_c / f_y,  z_w = z_c
where x_w, y_w, z_w are the coordinates of the three-dimensional point cloud, u and v are the pixel coordinates, z_c is the depth value in the depth image, f_x is the focal length of the camera along the x axis, f_y is the focal length of the camera along the y axis, and c_x, c_y is the optical center of the camera;
the point cloud is then background removed in space using the following formula:
(z_w - d)² + x_w² ≥ r²
where d is the distance between the depth camera and the measurement target and r is the radius of the measurement target. Each point of the cloud is traversed; points satisfying the formula are background and are deleted, so that only the turntable region, i.e. the place where the human body stands, is retained.
Further, the alignment in step three traverses each depth image pixel and generates a color image of exactly the same size as the depth image, i.e. the aligned color image, specifically:
1) Convert the depth map into a point cloud, where the coordinates of each three-dimensional point are x_w, y_w, z_w;
2) The point cloud is converted into color camera space using the following formula:
[x_c, y_c, z_c]^T = R · [x_w, y_w, z_w]^T + T
where R and T are the camera extrinsic parameters (the rotation and translation between the depth camera and the color camera), and x_w, y_w, z_w are the coordinates of the three-dimensional point;
3) converting the three-dimensional point cloud in the color camera space to a color image using the following formula:
u_c = f_xc · x_c / z_c + c_xc,  v_c = f_yc · y_c / z_c + c_yc
where c_xc, c_yc is the principal point of the color camera, and f_xc, f_yc are the focal lengths of the color camera along the x and y axes, respectively.
Further, the process from step four to step five is as follows: after the depth image and the color image are aligned, 35 key points of the various parts of the human face are extracted from the aligned color image using the OpenVINO face key point detection module; the coordinates of the 35 key points are then converted into three-dimensional coordinates, finally giving the coordinates of the 35 three-dimensional face key points, which constitute the measurement reference.
Further, the specific process of step six is as follows: the guiding module guides the tested person to perform, in sequence, left and right lateral bending of the neck, left and right rotation of the neck, and forward and backward bending of the neck. While the tested person moves, real-time point clouds are generated through steps two to four; these correspond one-to-one with the reference point cloud constructed in step five. Since the face moves as a rigid body over a short time, the problem can be converted into one of rigid registration, which is decomposed into a rotation and a translation. The specific calculation method is as follows:
1) Denote the points of the reference point cloud constructed in step five as p_i and the points of the point cloud generated in real time as q_i (i = 1, 2, 3, ..., 35), each a 3 × 1 vector;
2) Compute the centroids of the p_i and of the q_i respectively, obtaining p_0 and q_0;
3) Translate both point sets so that their centroids coincide with the origin: p_i' = p_i - p_0, q_i' = q_i - q_0;
4) Denote the 3 × 35 matrix [p_1', p_2', ..., p_35'] as X_0 and the 3 × 35 matrix [q_1', q_2', ..., q_35'] as Y_0;
5) Compute the matrix H = X_0 · Y_0^T and perform the SVD decomposition H = U · Σ · V^T;
6) Obtain the rotation matrix R = V · U^T;
7) Convert the rotation matrix into Euler angles, which are the measured angles.
The system also comprises a wireless communication module, which is connected wirelessly to the intelligent terminal of the tested person so that the measurement result can be obtained on the terminal.
Compared with the prior art, the invention has the following beneficial effects:
1. In the human neck flexibility measurement system and method based on a depth camera of the invention, no additional tools are needed for measurement. The tested person only needs to stand on the turntable; the host adjusts the turntable and the depth camera through the controller, acquires face data in the static state and establishes the measurement reference, and then the guiding module guides the tested person to perform the relevant actions. The system acquires real-time point cloud data, calculates the neck flexibility data of the tested person through the point cloud algorithm module, and finally displays the data on the screen of the host.
2. According to the human neck flexibility measurement system and method based on the depth camera, data of three degrees of freedom can be measured, wherein the data comprises front and back bending of the head, left and right rotation of the head and left and right lateral bending of the head.
3. The human neck flexibility measuring system and method based on the depth camera have the advantages of accurate measurement and high measuring speed, and current measuring data can be seen in real time.
Drawings
FIG. 1 is a schematic diagram of human neck flexibility;
FIG. 2 is a flowchart illustrating the steps of measuring human neck flexibility according to the present invention;
fig. 3 is a diagram illustrating the effect of measuring the flexibility of the human neck according to the present invention.
In the figure: 1. a host; 2. a depth camera; 3. a guide rail; 4. a turntable; 11. a display screen.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit it. The detailed description is as follows.
In this embodiment, as shown in fig. 3, a human neck flexibility measurement system based on a depth camera includes a hardware unit and a software unit, where the hardware unit includes a host 1, a depth camera 2, a controller (interface control), a guide rail 3, and a turntable 4;
the software unit comprises a background removing module, a guiding module, an OpenVINO face key point detecting module and a point cloud algorithm module, wherein the background removing module, the OpenVINO face key point detecting module and the point cloud algorithm module are arranged in the host.
The host controls, through the controller, the sliding component mounted on the guide rail to reciprocate up and down and the turntable to rotate. The tested person stands on the turntable; the depth camera is fixedly connected to the sliding component of the guide rail, and through the cooperation of the turntable and the guide rail the depth camera is adjusted to face the face of the tested person, so that it acquires the depth image and the color image of the face.
The background removal module removes the background parts of the depth image and the color image in real time and keeps only the human body part. When the proportion of the blank part in the depth image reaches 1/2, that is, when the line of sight of the face is level with the depth camera, the host stops the guide rail movement through the controller. The guiding module guides the tested person to perform the relevant rotation actions. The OpenVINO face key point detection module obtains the key point data of the face color image in the static and rotating states respectively. After obtaining the key point data, the point cloud algorithm module calculates the rotation angle of the neck of the tested person, and the result is displayed on the display screen 11.
The invention relates to a human neck flexibility measuring method based on a depth camera, which is shown in figure 2 and specifically comprises the following steps:
step one, collecting a face depth image and a color image of a detected person
The tested person stands still on the turntable and the controller is started. The controller moves the depth camera along the guide rail and adjusts the turntable, stopping when the depth camera is level with the line of sight of the human eyes. Whether the face is still can be judged by a sensor or by the OpenVINO face key point detection technology; if the face is still, the depth camera continuously acquires three depth images and three color images. The distance from the depth camera to the tested person is in the range 0.8 to 1.45 m; in this embodiment the depth camera is 1.1 m from the face.
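As an illustration of the stillness check mentioned above, the following is a minimal Python/NumPy sketch. It assumes the key point detector returns 35 (u, v) pixel coordinates per frame; the tolerance `tol_px` and the frame-difference criterion are assumptions of this sketch rather than details given in the patent.

```python
import numpy as np

def face_is_still(keypoint_frames, tol_px=2.0):
    """Treat the face as still when no key point moves more than tol_px
    pixels between consecutive frames (hypothetical criterion)."""
    frames = np.asarray(keypoint_frames, dtype=np.float64)  # (n_frames, 35, 2)
    shifts = np.linalg.norm(np.diff(frames, axis=0), axis=-1)
    return bool(np.all(shifts < tol_px))

# Example: three identical frames of 35 key points are considered still
frames = np.tile(np.random.rand(1, 35, 2) * 640, (3, 1, 1))
print(face_is_still(frames))  # True
```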
Step two, background removal
A cylinder equation is established at a distance of 1.1 m; the inside of the cylinder is the measured human body region, and the content of the depth image and the color image in the space outside the cylinder is removed using the cylinder equation. Specifically, each pixel of the depth image acquired in step one is first traversed and the depth image is converted into a point cloud using the following formula:
x_w = (u - c_x) · z_c / f_x,  y_w = (v - c_y) · z_c / f_y,  z_w = z_c
where x_w, y_w, z_w are the coordinates of the three-dimensional point cloud, u and v are the pixel coordinates, z_c is the depth value in the depth image, f_x is the focal length of the camera along the x axis, f_y is the focal length of the camera along the y axis, and c_x, c_y is the optical center of the camera;
the point cloud is then background removed in space using the following formula:
(z_w - d)² + x_w² ≥ r²
where d is the distance from the depth camera to the measurement target (1.1 m in this invention) and r is the radius of the measurement target (the turntable radius in this system is 0.38 m). Each point of the cloud is traversed; points satisfying the formula are background and are deleted, so that only the turntable region, i.e. the place where the human body stands, is retained.
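For illustration, a minimal Python/NumPy sketch of this back-projection and cylinder test is given below. The intrinsics (fx, fy, cx, cy) and the image size are placeholder values and the function names are hypothetical; only the 1.1 m distance and the 0.38 m radius come from the embodiment.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into 3D points:
    x_w = (u - c_x) * z_c / f_x, y_w = (v - c_y) * z_c / f_y, z_w = z_c."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def remove_background(points, d=1.1, r=0.38):
    """Keep only points inside the cylinder of radius r centred at depth d
    in front of the camera, i.e. points with (z - d)^2 + x^2 < r^2."""
    inside = (points[:, 2] - d) ** 2 + points[:, 0] ** 2 < r ** 2
    return points[inside]

# Illustrative usage with a synthetic flat depth map and placeholder intrinsics
depth = np.full((480, 640), 1.1, dtype=np.float64)
cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
foreground = remove_background(cloud)
```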
Step three, aligning the depth image and the color image
The pixels of the background-removed depth image from step two are matched one-to-one with the pixels of the color image. Aligning the depth image and the color image means that, for each valid pixel in the depth image, the corresponding color data lies at the same position in the color image. Using the following method, a color image of exactly the same size as the depth image can be generated by traversing each depth image pixel; this is the aligned color image.
The method comprises the following specific steps in sequence:
1) Convert the depth map into a point cloud, where the coordinates of each three-dimensional point are x_w, y_w, z_w;
2) The point cloud is converted into color camera space using the following formula:
[x_c, y_c, z_c]^T = R · [x_w, y_w, z_w]^T + T
where R and T are the camera extrinsic parameters (the rotation and translation between the depth camera and the color camera), and x_w, y_w, z_w are the coordinates of the three-dimensional point;
3) converting the three-dimensional point cloud in the color camera space to a color image using the following formula:
u_c = f_xc · x_c / z_c + c_xc,  v_c = f_yc · y_c / z_c + c_yc
where c_xc, c_yc is the principal point of the color camera, and f_xc, f_yc are the focal lengths of the color camera along the x and y axes, respectively.
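The following Python/NumPy sketch illustrates this alignment step under stated assumptions: `R` (3×3) and `T` (3-vector) are taken to be the depth-to-color extrinsics, the input points are in the depth camera frame, and the function name and intrinsic values are hypothetical.

```python
import numpy as np

def project_to_color(points_w, R, T, fxc, fyc, cxc, cyc):
    """Transform depth-camera points into the color camera frame
    ([x_c, y_c, z_c]^T = R [x_w, y_w, z_w]^T + T) and project them to
    color-image pixel coordinates with the color camera intrinsics."""
    points_c = points_w @ R.T + T             # apply the extrinsics
    x, y, z = points_c[:, 0], points_c[:, 1], points_c[:, 2]
    u = fxc * x / z + cxc                     # u_c = f_xc * x_c / z_c + c_xc
    v = fyc * y / z + cyc                     # v_c = f_yc * y_c / z_c + c_yc
    return np.stack([u, v], axis=-1)

# Illustrative usage: identity extrinsics and placeholder color intrinsics
R, T = np.eye(3), np.zeros(3)
pixels = project_to_color(np.array([[0.1, 0.0, 1.1]]), R, T,
                          fxc=610.0, fyc=610.0, cxc=320.0, cyc=240.0)
```

The aligned color image is then formed by sampling the color image at the projected (u, v) location of each depth pixel.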
Step four, extracting key points of the human face and converting the key points into three-dimensional point cloud
Extracting face key points in the color image by using an OpenVINO face key point detection module, and converting two-dimensional key points in the color image into three-dimensional point cloud;
step five, constructing reference point cloud
Specifically, the process from step four to step five is as follows: after the depth image and the color image are aligned, 35 key points of the various parts of the face are extracted from the aligned color image using the OpenVINO face key point detection module; the coordinates of the 35 key points are then converted into three-dimensional coordinates, finally giving the coordinates of the 35 three-dimensional face key points. These 35 three-dimensional key point coordinates are the measurement reference, i.e. the constructed reference point cloud.
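A minimal sketch of lifting the 35 detected 2D key points to the reference point cloud is shown below. It assumes the key points are (u, v) pixel coordinates on the aligned color image, that the aligned depth map gives the depth at those pixels, and that the depth intrinsics are available; it does not call the OpenVINO detector itself, and the names are hypothetical.

```python
import numpy as np

def keypoints_to_cloud(keypoints_uv, depth, fx, fy, cx, cy):
    """Lift 2D face key points (pixel coords on the aligned color image)
    to a 35x3 point cloud using the depth value at each key point."""
    cloud = []
    for u, v in keypoints_uv:
        z = float(depth[int(round(v)), int(round(u))])   # depth at the key point
        cloud.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
    return np.asarray(cloud)                             # shape (35, 3)

# The reference point cloud would be built once from the static frame, e.g.:
# reference = keypoints_to_cloud(static_keypoints, static_depth, fx, fy, cx, cy)
```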
Step six, measuring according to guidance and obtaining results
The tested person performs the relevant rotation actions according to the guiding module. A depth image and a color image are acquired in real time, and steps two to four are executed on them; the point cloud obtained in real time is registered with the reference point cloud constructed in step five to obtain a rotation and translation matrix, which is then converted into Euler angles to give the angle relative to the reference point cloud.
Specifically, guided by the guiding module, the tested person performs, in sequence, left and right lateral bending of the neck, left and right rotation of the neck, and forward and backward bending of the neck. While the tested person moves, real-time point clouds are generated through steps two to four; these correspond one-to-one with the reference point cloud constructed in step five. Since the face moves as a rigid body over a short time, the problem can be converted into one of rigid registration, which is decomposed into a rotation and a translation. The specific calculation method is as follows:
1) Denote the points of the reference point cloud constructed in step five as p_i and the points of the point cloud generated in real time as q_i (i = 1, 2, 3, ..., 35), each a 3 × 1 vector;
2) Compute the centroids of the p_i and of the q_i respectively, obtaining p_0 and q_0;
3) Translate both point sets so that their centroids coincide with the origin: p_i' = p_i - p_0, q_i' = q_i - q_0;
4) Denote the 3 × 35 matrix [p_1', p_2', ..., p_35'] as X_0 and the 3 × 35 matrix [q_1', q_2', ..., q_35'] as Y_0;
5) Compute the matrix H = X_0 · Y_0^T and perform the SVD decomposition H = U · Σ · V^T;
6) Obtain the rotation matrix R = V · U^T;
7) Convert the rotation matrix into Euler angles, which are the measured angles.
The whole measurement process is completed through the above steps, and the finally measured angle data are displayed on the display screen of the host. The system further comprises a wireless communication module; the tested person connects to it wirelessly through an intelligent terminal to obtain the neck flexibility data.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (7)

1. A human neck flexibility measurement system based on a depth camera is characterized by comprising a hardware unit and a software unit, wherein the hardware unit comprises a host, the depth camera, a controller, a guide rail and a turntable;
the software unit comprises a background removing module, a guiding module, an OpenVINO face key point detecting module and a point cloud algorithm module;
the host controls, through the controller, a sliding part mounted on the guide rail to reciprocate up and down and the turntable to rotate; the turntable is for the tested person to stand on; the depth camera is fixedly connected to the sliding part of the guide rail and is used for acquiring a depth image and a color image of the human face;
the background removal module, the OpenVINO face key point detection module and the point cloud algorithm module are arranged in the host; the background removal module removes the background parts of the depth image and the color image in real time and keeps only the human body part; when the proportion of the blank part in the depth image reaches 1/2, that is, when the line of sight of the face is level with the depth camera, the host stops the guide rail movement through the controller; the guiding module guides the tested person to perform the relevant rotation actions; the OpenVINO face key point detection module obtains key point data of the face color image in the static and rotating states respectively; and the point cloud algorithm module calculates the rotation angle of the neck of the tested person from the key point data and displays the result through the display screen of the host.
2. A human neck flexibility measurement method based on a depth camera is characterized by comprising the following steps:
step one, collecting a face depth image and a color image of a detected person
The tested person stands still on the turntable; the host adjusts the turntable and the depth camera through the controller so that the depth camera faces the face of the tested person at a distance of 1.1 m, and a depth image and a color image are acquired at the same time;
step two, background removal
Establishing a cylindrical equation at a distance of 1.1 m, wherein the inside of the cylinder is a measured human body range, and removing the content of the depth image and the color image in the space outside the cylinder through the cylindrical equation;
step three, aligning the depth image and the color image
Corresponding the pixels of the depth image with the background removed in the step two to the pixels of the color image one by one;
step four, extracting key points of the human face and converting the key points into three-dimensional point cloud
Extracting face key points in the color image by using an OpenVINO face key point detection module, and converting two-dimensional key points in the color image into three-dimensional point cloud;
step five, constructing reference point cloud
Step six, measuring according to guidance and obtaining results
Guiding the tested person to do relevant rotation action through a guiding module, acquiring a depth image and a color image in real time, executing steps two to four on the depth image and the color image acquired in real time, registering the point cloud acquired in real time and the reference point cloud constructed in the step five, acquiring a rotation and translation matrix, and converting the rotation and translation matrix into an Euler angle to calculate a relative angle relative to the reference point cloud.
3. The method for measuring human neck flexibility based on a depth camera according to claim 2, wherein the specific process of the second step is as follows:
firstly, traversing each pixel in the depth image acquired in the first step, and converting the depth image into point cloud by using the following formula:
x_w = (u - c_x) · z_c / f_x,  y_w = (v - c_y) · z_c / f_y,  z_w = z_c
where x_w, y_w, z_w are the coordinates of the three-dimensional point cloud, u and v are the pixel coordinates, z_c is the depth value in the depth image, f_x is the focal length of the camera along the x axis, f_y is the focal length of the camera along the y axis, and c_x, c_y is the optical center of the camera;
the point cloud is then background removed in space using the following formula:
(z_w - d)² + x_w² ≥ r²
where d is the distance between the depth camera and the measurement target and r is the radius of the measurement target; each point of the cloud is traversed, points satisfying the formula are background and are deleted, and only the turntable region, i.e. the place where the human body stands, is retained.
4. The method for measuring human neck flexibility based on a depth camera according to claim 2, wherein the alignment in step three traverses each depth image pixel and generates a color image of exactly the same size as the depth image, i.e. the aligned color image, specifically:
1) Convert the depth map into a point cloud, where the coordinates of each three-dimensional point are x_w, y_w, z_w;
2) The point cloud is converted into color camera space using the following formula:
[x_c, y_c, z_c]^T = R · [x_w, y_w, z_w]^T + T
where R and T are the camera extrinsic parameters (the rotation and translation between the depth camera and the color camera), and x_w, y_w, z_w are the coordinates of the three-dimensional point;
3) converting the three-dimensional point cloud in the color camera space to a color image using the following formula:
u_c = f_xc · x_c / z_c + c_xc,  v_c = f_yc · y_c / z_c + c_yc
where c_xc, c_yc is the principal point of the color camera, and f_xc, f_yc are the focal lengths of the color camera along the x and y axes, respectively.
5. The method for measuring human neck flexibility based on a depth camera as claimed in claim 2, wherein the process from step four to step five is: after the depth image and the color image are aligned, 35 key points of the various parts of the human face are extracted from the aligned color image using the OpenVINO face key point detection module; the coordinates of the 35 key points are then converted into three-dimensional coordinates, finally giving the coordinates of the 35 three-dimensional face key points, which constitute the measurement reference.
6. The method for measuring human neck flexibility based on a depth camera according to claim 2, wherein the specific process of step six is as follows: the guiding module guides the tested person to perform, in sequence, left and right lateral bending of the neck, left and right rotation of the neck, and forward and backward bending of the neck; while the tested person moves, real-time point clouds are generated through steps two to four and correspond one-to-one with the reference point cloud constructed in step five; since the face moves as a rigid body over a short time, the problem can be converted into one of rigid registration, which is decomposed into a rotation and a translation, and the specific calculation method is as follows:
1) Denote the points of the reference point cloud constructed in step five as p_i and the points of the point cloud generated in real time as q_i (i = 1, 2, 3, ..., 35), each a 3 × 1 vector;
2) Compute the centroids of the p_i and of the q_i respectively, obtaining p_0 and q_0;
3) Translate both point sets so that their centroids coincide with the origin: p_i' = p_i - p_0, q_i' = q_i - q_0;
4) Denote the 3 × 35 matrix [p_1', p_2', ..., p_35'] as X_0 and the 3 × 35 matrix [q_1', q_2', ..., q_35'] as Y_0;
5) Compute the matrix H = X_0 · Y_0^T and perform the SVD decomposition H = U · Σ · V^T;
6) Obtain the rotation matrix R = V · U^T;
7) Convert the rotation matrix into Euler angles, which are the measured angles.
7. The human neck flexibility measurement system based on the depth camera as claimed in claim 1, further comprising a wireless communication module, wherein the wireless communication module is wirelessly connected with the subject intelligent terminal for obtaining the measurement result.
CN202010963092.9A 2020-09-14 2020-09-14 Human neck flexibility measurement system and method based on depth camera Pending CN112132883A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010963092.9A CN112132883A (en) 2020-09-14 2020-09-14 Human neck flexibility measurement system and method based on depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010963092.9A CN112132883A (en) 2020-09-14 2020-09-14 Human neck flexibility measurement system and method based on depth camera

Publications (1)

Publication Number Publication Date
CN112132883A (en) 2020-12-25

Family

ID=73846003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010963092.9A Pending CN112132883A (en) 2020-09-14 2020-09-14 Human neck flexibility measurement system and method based on depth camera

Country Status (1)

Country Link
CN (1) CN112132883A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114334084A (en) * 2022-03-01 2022-04-12 深圳市海清视讯科技有限公司 Body-building data processing method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107928633A (en) * 2017-12-22 2018-04-20 西安蒜泥电子科技有限责任公司 A kind of light-type three-dimensional and body component tracker and body composition test method
KR20180103280A (en) * 2017-03-09 2018-09-19 석원영 An exercise guidance system for the elderly that performs posture recognition based on distance similarity between joints
CN108596948A (en) * 2018-03-16 2018-09-28 中国科学院自动化研究所 The method and device of human body head posture is identified based on depth camera
CN111414798A (en) * 2019-02-03 2020-07-14 沈阳工业大学 Head posture detection method and system based on RGB-D image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180103280A (en) * 2017-03-09 2018-09-19 석원영 An exercise guidance system for the elderly that performs posture recognition based on distance similarity between joints
CN107928633A (en) * 2017-12-22 2018-04-20 西安蒜泥电子科技有限责任公司 A kind of light-type three-dimensional and body component tracker and body composition test method
CN108596948A (en) * 2018-03-16 2018-09-28 中国科学院自动化研究所 The method and device of human body head posture is identified based on depth camera
CN111414798A (en) * 2019-02-03 2020-07-14 沈阳工业大学 Head posture detection method and system based on RGB-D image

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114334084A (en) * 2022-03-01 2022-04-12 深圳市海清视讯科技有限公司 Body-building data processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN100488451C (en) Medical image process apparatus with medical image measurement function
US8721567B2 (en) Mobile postural screening method and system
CN104132613B (en) Noncontact optical volume measurement method for complex-surface and irregular objects
US8842906B2 (en) Body measurement
CN105054936B (en) Quick height and body weight measurement based on Kinect depth images
CN104665836B (en) length measuring method and length measuring device
CN106625673A (en) Narrow space assembly system and assembly method
CN102589516B (en) Dynamic distance measuring system based on binocular line scan cameras
US20130188851A1 (en) Information processing apparatus and control method thereof
EP2506215B1 (en) Information processing apparatus, imaging system, information processing method, and program causing computer to execute information processing
CN108245788B (en) Binocular distance measuring device and method and accelerator radiotherapy system comprising same
US20160379368A1 (en) Method for determining an imaging specification and image-assisted navigation as well as device for image-assisted navigation
WO2014027229A1 (en) Method and apparatus for converting 2d images to 3d images
CN103617611A (en) Automatic threshold segmentation detection method for center and size of light spot
US10078906B2 (en) Device and method for image registration, and non-transitory recording medium
CN111179335A (en) Standing tree measuring method based on binocular vision
WO2022257794A1 (en) Method and apparatus for processing visible light image and infrared image
CN112132883A (en) Human neck flexibility measurement system and method based on depth camera
CN111311659A (en) Calibration method based on three-dimensional imaging of oblique plane mirror
CN103767734A (en) Wireless curved plane extended field-of-view ultrasound imaging method and device
KR100930594B1 (en) The system for capturing 2d facial image and extraction method of face feature points thereof
CN117392109A (en) Mammary gland focus three-dimensional reconstruction method and system
CN114067420B (en) Sight line measuring method and device based on monocular camera
CN114862960A (en) Multi-camera calibrated image ground leveling method and device, electronic equipment and medium
Su et al. An automatic calibration system for binocular stereo imaging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination