CN112509055A - Acupuncture point positioning system and method based on combination of binocular vision and coded structured light - Google Patents

Acupuncture point positioning system and method based on combination of binocular vision and coded structured light

Info

Publication number
CN112509055A
CN112509055A (application CN202011308196.2A)
Authority
CN
China
Prior art keywords
structured light
coordinate system
image
camera
points
Prior art date
Legal status
Granted
Application number
CN202011308196.2A
Other languages
Chinese (zh)
Other versions
CN112509055B (en)
Inventor
刘军 (Liu Jun)
郭剑峰 (Guo Jianfeng)
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority: CN202011308196.2A
Publication of CN112509055A
Application granted; publication of CN112509055B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85: Stereo camera calibration
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30244: Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an acupuncture point positioning system and method combining binocular vision with coded structured light. A three-dimensional reconstruction of the human back is performed: the two-dimensional coordinates captured by the cameras are restored to three-dimensional coordinates in the world coordinate system, the back is segmented from the image, and the corresponding acupuncture point positions on the back are located using traditional Chinese medicine knowledge. By reconstructing the three-dimensional information of the back with two cameras and structured light, the invention further improves positioning accuracy. Combining coded structured light with binocular vision effectively solves the difficult cases of indoor white walls and texture-less objects, strengthens resistance to environmental interference, improves reliability, and greatly improves the quality of the depth map.

Description

Acupuncture point positioning system and method based on combination of binocular vision and coded structured light
Technical Field
The invention belongs to the field of medical technology application, and particularly relates to an acupuncture point positioning method based on combination of binocular vision and coded structured light.
Background
In traditional Chinese medicine, acupuncture therapy has a long history and a distinctive curative effect. However, this treatment places high demands on the experience and technique of the practitioner, particularly on the accuracy of acupuncture point location. The body-landmark location methods given in medical texts mainly take an organ or a characteristic body part as a reference and locate the acupoint by its relative position. At present, the technology of automatic acupoint selection at home and abroad is immature: most work remains at the research stage, and clinical applications are still rare. Determining acupuncture point positions accurately requires years of clinical practice, and physicians with little clinical experience have great difficulty locating the points precisely. There is therefore an urgent need for a device that locates acupuncture points automatically.
To solve this problem, an acupuncture point positioning method combining binocular vision and structured light is developed: the human back is reconstructed in three dimensions, the two-dimensional coordinates captured by the cameras are restored to three-dimensional coordinates in the world coordinate system, the back is segmented from the images, and the corresponding acupuncture point positions are found using traditional Chinese medicine knowledge.
Disclosure of Invention
Addressing the defects of the prior art, the invention aims to provide an acupuncture point positioning system and method combining binocular vision and structured light. The human back is reconstructed in three dimensions, the two-dimensional coordinates captured by the cameras are restored to three-dimensional coordinates in the world coordinate system, the back is segmented from the images, and the corresponding acupuncture point positions are found using traditional Chinese medicine knowledge.
In order to achieve the purpose, the technical scheme of the invention is as follows:
The acupuncture point positioning system based on the combination of binocular vision and coded structured light comprises two identical cameras, a structured light generator, a support frame, a calibration board and a main controller; the two cameras and the structured light generator are located above the support frame, and the structured light generator is located in front of the two cameras;
the structured light generator is used for projecting structured light on the back of a human body; wherein the pattern of structured light is a stripe pattern encoded with gray codes;
the camera is used for capturing an image of the back of the human body projected with the structured light pattern;
the calibration plate is used for calibrating the camera;
the main controller is used for controlling the starting of the structured light generator, setting a structured light pattern by the code of the structured light generator, receiving the pattern information transmitted by the camera and transmitting the pattern information to the computer for data analysis.
An acupuncture point positioning method based on combination of binocular vision and coded structured light comprises the following steps:
step (1), camera calibration:
and calibrating by adjusting the direction of the calibration board or the camera by adopting a Zhang Zhengyou calibration method.
In the calibration process, the camera extracts angular points as characteristic points, a least square method is applied to estimate distortion parameters under the condition of actual radial distortion, and finally a maximum likelihood method is applied to optimize and improve the precision, so that rotation and translation parameters and camera distortion parameters are obtained.
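The least-squares distortion estimation mentioned above can be illustrated with a small numeric sketch (this is not the patent's code; a two-coefficient radial model and synthetic data are assumed):

```python
# Illustrative sketch of the least-squares step used to estimate radial
# distortion: with ideal radii r and observed distorted radii r_d, the model
# r_d = r * (1 + k1*r^2 + k2*r^4) is linear in k1 and k2.
import numpy as np

rng = np.random.default_rng(0)
k1_true, k2_true = -0.25, 0.08          # assumed ground-truth coefficients

r = rng.uniform(0.05, 1.0, size=200)    # ideal (undistorted) radii
r_d = r * (1 + k1_true * r**2 + k2_true * r**4)

# Linear system  A @ [k1, k2] = b  with  b = r_d/r - 1
A = np.column_stack([r**2, r**4])
b = r_d / r - 1.0
(k1, k2), *_ = np.linalg.lstsq(A, b, rcond=None)

print(round(k1, 6), round(k2, 6))       # recovers the assumed coefficients
```

In practice the maximum likelihood refinement of the full parameter set is done by a nonlinear optimizer inside a calibration library; the linear solve above only supplies the initial distortion estimate.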
Step (2), starting the structured light generator, and enabling the structured light pattern to fall on the back of the human body;
step (3), two cameras acquire a left image and a right image of the back of the human body, which are projected with structured light patterns;
step (4) image stereo correction
The left and right images are stereo-rectified using standard techniques; stereo rectification transforms the two images, which are not strictly coplanar and row-aligned, so that they become coplanar and row-aligned.
Step (5) obtaining matching points
5-1 Gray code values
Gray code pattern structured light is projected onto the back of the human body and encoded/decoded, so that every pixel captured by the two cameras receives a Gray code value.
For decoding, the coded original Gray code pattern is projected onto the back, then the inverted Gray code pattern is projected onto the back in turn, and finally the two captured patterns are decoded together.
The inverted Gray code pattern is obtained by inverting the original Gray code pattern.
In the decoding stage a simple dual-threshold segmentation is used: let I(x, y) be the gray value of the image at point (x, y), I⁺(x, y) the gray value when the original Gray code pattern is projected, and I⁻(x, y) the gray value when the inverted pattern is projected. If I⁺(x, y) < I⁻(x, y), the Gray code bit at that coordinate is 0; otherwise it is 1.
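The dual-threshold decoding can be sketched as follows; the one-dimensional scene, bit depth and intensities are synthetic assumptions, and the final loop converts the recovered Gray code back to the projector column index:

```python
# Sketch of dual-threshold Gray code decoding on a synthetic 1-D "scene".
import numpy as np

BITS, WIDTH = 4, 16                      # 4 Gray code patterns over 16 columns
cols = np.arange(WIDTH)
gray = cols ^ (cols >> 1)                # Gray code of each projector column

# I_plus[b]: image under original pattern b; I_minus[b]: under the inverted one
I_plus = np.array([(gray >> (BITS - 1 - b)) & 1 for b in range(BITS)]) * 200
I_minus = 200 - I_plus

# Dual-threshold rule: bit is 0 where I_plus < I_minus, else 1
bits = (I_plus >= I_minus).astype(int)

# Gray -> binary: recover the projector column index seen by every pixel
code = np.zeros(WIDTH, dtype=int)
for b in range(BITS):
    code = (code << 1) | (bits[b] ^ (code & 1))

print(code.tolist())                     # recovers the column indices 0..15
```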
5-2 phase value
Structured light with N patterns is projected onto the back of the human body; each pattern corresponds to one phase value, and the phase period is N. The boundaries of the black and white stripes are extracted from each phase image: if pixel (x, y) lies on an extracted boundary in the n-th phase image, the phase value of pixel (x, y) is n, where n ≤ N.
5-3 search for matching points
Pixels on the same row of the right image are traversed to find the point whose Gray code value and phase value are both identical to those of point P(x, y) in the left image; that point is the matching point.
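A minimal sketch of this row-wise search, run on synthetic per-pixel code and phase arrays (the image width and the assumed disparity of 3 pixels are illustrative):

```python
# Row-wise matching: find the right-image column whose (Gray code, phase)
# pair equals that of a given left-image pixel.
import numpy as np

W, shift = 12, 3                                  # assumed true disparity: 3 px

code_left = np.arange(W)                          # per-pixel code ids, left row
phase_left = np.arange(W) % 4
code_right = np.arange(W) + shift                 # same scene, shifted in right row
phase_right = (np.arange(W) + shift) % 4

def match(xl):
    """Return the right-image column with identical code and phase, or -1."""
    for xr in range(W):
        if code_right[xr] == code_left[xl] and phase_right[xr] == phase_left[xl]:
            return xr
    return -1

print(match(4), 4 - match(4))                     # matching column and disparity
```

For every left pixel that has a counterpart, the recovered disparity equals the assumed shift.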
Difference between structured light and plain binocular vision: for scenes with few texture features, binocular reconstruction performs poorly. Projecting structured light adds texture to the image, which strengthens resistance to environmental interference, improves reliability, and greatly improves the quality of the depth map.
Step (6), acquiring a disparity map and a depth map:
6-1. Let P_L(x_l, y_l) be a point in the left image and P_r(x_r, y_r) its matching point in the right image; the disparity of P_L is x_l - x_r. Solving the disparity of every valid pixel in the left image yields the disparity map.
Valid pixels are those that have a matching point in the right image.
6-2, eliminating invalid pixel points in the disparity map by adopting a median filtering mode.
The invalid pixel points are pixel points without matching points.
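Steps 6-1 and 6-2 can be sketched as follows; a pure-NumPy 3x3 median filter stands in for a library call, and the disparity values are synthetic:

```python
# Clean invalid disparities (pixels without a match, set to 0 here) with a
# 3x3 median filter, as in step 6-2.
import numpy as np

H, W, d_true = 8, 10, 3
disp = np.full((H, W), float(d_true))
disp[2, 4] = 0.0                     # an invalid pixel (no matching point)
disp[5, 7] = 0.0

def median3(img):
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.median(img[y-1:y+2, x-1:x+2])
    return out

clean = median3(disp)
print(float(clean[2, 4]), float(clean[5, 7]))   # invalid pixels replaced by 3.0
```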
6-3, converting the disparity map into a depth map by a triangulation formula, specifically:
a) Establish the pixel coordinate system O₀-uv on the left image.
b) Establish the image coordinate system O-XY with the intersection of the camera optical axis and the image plane as the origin.
c) Establish the camera coordinate system with the camera optical centre as the origin and the camera optical axis as the Z axis; its X and Y axes coincide in direction with the x and y axes of the image coordinate system.
d) The relation between the pixel coordinate system and the image coordinate system is constructed as:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{1}$$

where u and v denote the u axis and v axis of the pixel coordinate system; (u_0, v_0) is the principal point in pixel coordinates; (x, y) is a coordinate point in the image coordinate system; (X_c, Y_c, Z_c) is a coordinate point in the camera coordinate system; and dx, dy are the physical sizes of a pixel along the x and y axes of the image coordinate system.
e) Through the perspective projection transformation, the relation between the camera coordinate system and the image coordinate system is:

$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} \tag{2}$$

where f denotes the focal length of the left camera.
f) The relation between the camera coordinate system and the world coordinate system is described by the rotation parameter R and translation parameter T determined by the camera extrinsics:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T \tag{3}$$

where (X_w, Y_w, Z_w) is a coordinate point in the world coordinate system.
g) Chaining the four coordinate-system transformations yields the conversion between the world coordinate system and the pixel coordinate system for a point imaged by a single camera:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{4}$$

where f_x and f_y are the calibrated intrinsic focal lengths of the left camera.
h) Using the matching-point information obtained in step (5) together with formula (4), the three-dimensional coordinate values of all valid pixels of the left image in the world coordinate system are obtained.
i) The back of the human body is measured three-dimensionally by the parallax principle; the three-dimensional coordinates of a space point are:

$$X_w = \frac{B\,x_i}{x_i - x_r}, \qquad Y_w = \frac{B\,y_i}{x_i - x_r}, \qquad Z_w = \frac{B\,f}{x_i - x_r} \tag{5}$$

where B is the baseline distance of the binocular camera, f is the camera focal length, (x_i, y_i) are the image-coordinate-system coordinates of a valid pixel of the left camera, and (x_r, y_r) are the image coordinates of the corresponding right-camera matching point.
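A numeric check of formulas (4) and (5) under simplifying assumptions (identical rectified cameras, R = I, world frame at the left optical centre, right camera offset by the baseline B along x; all values are synthetic):

```python
# Project a synthetic back point into both cameras (formula (4) with R = I),
# then recover its position from the disparity (formula (5)).
import numpy as np

f, B = 800.0, 0.12                       # focal length (px) and baseline (m)
u0, v0 = 320.0, 240.0
K = np.array([[f, 0, u0], [0, f, v0], [0, 0, 1.0]])

Pw = np.array([0.05, -0.02, 0.60])       # a point on the back, in metres

def project(P, tx=0.0):
    """World -> pixel for a camera translated by tx along x (R = I)."""
    Pc = P - np.array([tx, 0.0, 0.0])
    uvw = K @ Pc
    return uvw[:2] / uvw[2]

ul, vl = project(Pw)                     # left image
ur, vr = project(Pw, tx=B)               # right image
d = ul - ur                              # disparity = f * B / Z

# Formula (5): recover depth and position from the disparity
Zw = B * f / d
Xw = (ul - u0) * Zw / f
Yw = (vl - v0) * Zw / f
print(round(Zw, 6), round(Xw, 6), round(Yw, 6))
```

The round trip returns the original point, confirming that (5) inverts the projection under these assumptions.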
Step (7) of obtaining the position information of the acupuncture points
7-1 obtaining a Back image Profile
The left image is preprocessed and edge detection is then performed with the Canny operator to obtain the back image contour map.
The preprocessing segments the left image with a watershed algorithm, which groups spatially close pixels of similar gray value into one region, so that contour segmentation and extraction can be performed.
7-2. Based on the back image contour map, two distinct feature points on the back are located and their two-dimensional pixel coordinates are obtained; the distinct feature points are the widest and narrowest points on the median ridge line.
7-3. Based on the pixel coordinates of the two feature points, their three-dimensional coordinates are obtained using formula (4).
7-4. From the three-dimensional coordinates of the feature points, the position information of the acupuncture points is obtained using the traditional Chinese medicine bone-cun proportional measurement (骨度分寸) method.
The beneficial effects of the invention are:
1) By reconstructing the three-dimensional information of the back with two cameras and structured light, the invention further improves positioning accuracy.
2) A medical instrument that automatically detects and locates specific acupuncture points of the body, such as those on the head and back, assists the clinician in accurate point location and reduces the probability of subjective misjudgment.
3) Combining coded structured light with binocular vision effectively solves the difficult cases of indoor white walls and texture-less objects, strengthens resistance to environmental interference, improves reliability, and greatly improves the quality of the depth map.
Drawings
FIG. 1 is a schematic structural view of the present invention;
The labels in the figure are: 1: left camera; 2: structured light generator; 3: right camera; 4: support frame;
FIG. 2 is a stripe pattern with Gray code encoding;
fig. 3(a) is a left image, and fig. 3(b) is a back image contour diagram.
Detailed Description
The present invention is further analyzed with reference to the following specific examples.
As shown in fig. 1, the acupuncture point positioning system based on the combination of binocular vision and coded structured light comprises a left camera 1 and a right camera 3 with identical parameters, a structured light generator 2, a support frame 4, a calibration board and a main controller; the two cameras and the structured light generator are located above the support frame, and the structured light generator is located in front of the two cameras;
the structured light generator is used for projecting structured light on the back of a human body; wherein the pattern of structured light is a stripe pattern encoded with gray codes as in fig. 2;
the camera is used for capturing an image of the back of the human body projected with the structured light pattern;
the calibration plate is used for calibrating the camera;
the main controller is used for controlling the starting of the structured light generator, setting a structured light pattern by the code of the structured light generator, receiving the pattern information transmitted by the camera and transmitting the pattern information to the computer for data analysis.
An acupuncture point positioning method based on combination of binocular vision and coded structured light comprises the following steps:
step (1), camera calibration:
and calibrating by adjusting the direction of the calibration board or the camera by adopting a Zhang Zhengyou calibration method.
In the calibration process, the camera extracts angular points as characteristic points, a least square method is applied to estimate distortion parameters under the condition of actual radial distortion, and finally a maximum likelihood method is applied to optimize and improve the precision, so that rotation and translation parameters and camera distortion parameters are obtained.
Step (2), starting the structured light generator, and enabling the structured light pattern to fall on the back of the human body;
step (3), two cameras acquire a left image and a right image of the back of the human body, which are projected with structured light patterns;
step (4) image stereo correction
The left and right images are stereo-rectified using standard techniques; stereo rectification transforms the two images, which are not strictly coplanar and row-aligned, so that they become coplanar and row-aligned.
Step (5) obtaining matching points
5-1 Gray code values
Gray code pattern structured light is projected onto the back of the human body and encoded/decoded, so that every pixel captured by the two cameras receives a Gray code value.
For decoding, the coded original Gray code pattern is first projected onto the back; the original pattern is then inverted to obtain the inverted Gray code pattern, which is projected onto the back in turn; finally the two captured patterns are decoded together.
In the decoding stage a simple dual-threshold segmentation is used: let I(x, y) be the gray value of the image at point (x, y), I⁺(x, y) the gray value when the original Gray code pattern is projected, and I⁻(x, y) the gray value when the inverted pattern is projected. If I⁺(x, y) < I⁻(x, y), the Gray code bit at that coordinate is 0; otherwise it is 1.
5-2 phase value
Structured light with N patterns is projected onto the back of the human body; each pattern corresponds to one phase value, and the phase period is N. The boundaries of the black and white stripes are extracted from each phase image: if pixel (x, y) lies on an extracted boundary in the n-th phase image, the phase value of pixel (x, y) is n, where n ≤ N.
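The boundary-based phase assignment can be sketched on a synthetic stripe row; the stripe period and the shift-by-n scheme are illustrative assumptions, not the patent's exact patterns:

```python
# Assign each pixel the index n of the first shifted stripe image in which
# it lies on a black/white boundary.
import numpy as np

N, W = 4, 16
cols = np.arange(W)
phase = np.zeros(W, dtype=int)           # 0 = no boundary assigned yet

for n in range(1, N + 1):
    # assumed stripe image n: stripes of width N, shifted by n columns
    row = (cols + n) // N % 2 * 255
    boundary = np.zeros(W, dtype=bool)
    boundary[1:] = row[1:] != row[:-1]   # black/white transition
    phase[boundary & (phase == 0)] = n   # first image whose boundary hits the pixel

print(phase.tolist())
```

With these assumed patterns every column except the leftmost receives a phase value in 1..N, cycling with the stripe shift.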
5-3 search for matching points
Pixels on the same row of the right image are traversed to find the point whose Gray code value and phase value are both identical to those of point P(x, y) in the left image; that point is the matching point.
Difference between structured light and plain binocular vision: for scenes with few texture features, binocular reconstruction performs poorly. Projecting structured light adds texture to the image, which strengthens resistance to environmental interference, improves reliability, and greatly improves the quality of the depth map.
Step (6), acquiring a disparity map and a depth map:
6-1. Let P_L(x_l, y_l) be a point in the left image and P_r(x_r, y_r) its matching point in the right image; the disparity of P_L is x_l - x_r. Solving the disparity of every valid pixel in the left image yields the disparity map.
Valid pixels are those that have a matching point in the right image.
6-2, eliminating invalid pixel points in the disparity map by adopting a median filtering mode.
The invalid pixel points are pixel points without matching points.
6-3, converting the disparity map into a depth map by a triangulation formula, specifically:
a) Establish the pixel coordinate system O₀-uv on the left image.
b) Establish the image coordinate system O-XY with the intersection of the camera optical axis and the image plane as the origin.
c) Establish the camera coordinate system with the camera optical centre as the origin and the camera optical axis as the Z axis; its X and Y axes coincide in direction with the x and y axes of the image coordinate system.
d) The relation between the pixel coordinate system and the image coordinate system is constructed as:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{1}$$

where u and v denote the u axis and v axis of the pixel coordinate system; (u_0, v_0) is the principal point in pixel coordinates; (x, y) is a coordinate point in the image coordinate system; (X_c, Y_c, Z_c) is a coordinate point in the camera coordinate system; and dx, dy are the physical sizes of a pixel along the x and y axes of the image coordinate system.
e) Through the perspective projection transformation, the relation between the camera coordinate system and the image coordinate system is:

$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} \tag{2}$$

where f denotes the focal length of the left camera.
f) The relation between the camera coordinate system and the world coordinate system is described by the rotation parameter R and translation parameter T determined by the camera extrinsics:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T \tag{3}$$

where (X_w, Y_w, Z_w) is a coordinate point in the world coordinate system.
g) Chaining the four coordinate-system transformations yields the conversion between the world coordinate system and the pixel coordinate system for a point imaged by a single camera:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{4}$$

where f_x and f_y are the calibrated intrinsic focal lengths of the left camera.
h) Using the matching-point information obtained in step (5) together with formula (4), the three-dimensional coordinate values of all valid pixels of the left image in the world coordinate system are obtained.
i) The back of the human body is measured three-dimensionally by the parallax principle; the three-dimensional coordinates of a space point are:

$$X_w = \frac{B\,x_i}{x_i - x_r}, \qquad Y_w = \frac{B\,y_i}{x_i - x_r}, \qquad Z_w = \frac{B\,f}{x_i - x_r} \tag{5}$$

where B is the baseline distance of the binocular camera, f is the camera focal length, (x_i, y_i) are the image-coordinate-system coordinates of a valid pixel of the left camera, and (x_r, y_r) are the image coordinates of the corresponding right-camera matching point.
Step (7) of obtaining the position information of the acupuncture points
7-1 obtaining a Back image Profile
The left image of fig. 3(a) is preprocessed, and edge detection is then performed with the Canny operator to obtain the back image contour map of fig. 3(b).
The preprocessing segments the left image with a watershed algorithm, which groups spatially close pixels of similar gray value into one region, so that contour segmentation and extraction can be performed.
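As a simplified stand-in for the Canny step (a plain Sobel gradient-magnitude detector with a single threshold, not the patent's actual pipeline), edge detection on a synthetic bright region looks like this:

```python
# Gradient-magnitude edge map of a synthetic "back silhouette" (a bright
# rectangle); the shape, intensities and threshold are all assumptions.
import numpy as np

img = np.zeros((12, 12))
img[3:9, 4:10] = 255.0                   # synthetic bright region

kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # Sobel x
ky = kx.T                                                          # Sobel y

def conv3(img, k):
    out = np.zeros_like(img)
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.sum(img[y-1:y+2, x-1:x+2] * k)
    return out

mag = np.hypot(conv3(img, kx), conv3(img, ky))
edges = mag > 0.5 * mag.max()            # single-threshold stand-in
print(int(edges.sum()))                  # count of pixels on the outline
```

The true Canny operator adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding on top of this gradient step.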
7-2. Based on the back image contour map, two distinct feature points on the back are located and their two-dimensional pixel coordinates are obtained; the distinct feature points are the widest and narrowest points on the median ridge line.
7-3. Based on the pixel coordinates of the two feature points, their three-dimensional coordinates are obtained using formula (4).
7-4. From the three-dimensional coordinates of the feature points, the position information of the acupuncture points is obtained using the traditional Chinese medicine bone-cun proportional measurement (骨度分寸) method.
The above embodiments do not limit the present invention; any embodiment that meets the requirements of the present invention falls within its scope.

Claims (8)

1. The acupuncture point positioning method based on combination of binocular vision and coded structured light is characterized by comprising the following steps of:
step (1), calibrating two cameras;
step (2), starting the structured light generator, and enabling the structured light pattern to fall on the back of the human body;
step (3), two cameras acquire a left image and a right image of the back of the human body, which are projected with structured light patterns;
step (4) image stereo correction
Step (5) obtaining matching points
5-1 Gray code values
Projecting Gray code pattern structured light to the back of a human body for coding and decoding, so that each pixel point acquired by the two cameras obtains a Gray code value;
5-2 phase value
Projecting structured light of N patterns onto the back of the human body, wherein each pattern corresponds to one phase value and the phase period is N; extracting the boundaries of the black and white stripes from each phase image, wherein if pixel (x, y) lies on an extracted boundary in the n-th phase image, the phase value of pixel (x, y) is n, with n ≤ N;
5-3 search for matching points
Traversing the pixels on the same row of the right image and searching for the point whose Gray code value and phase value are both identical to those of point P(x, y) in the left image, that point being the matching point;
step (6), acquiring a disparity map and a depth map:
6-1. Let P_L(x_l, y_l) be a point in the left image and P_r(x_r, y_r) its matching point in the right image, the disparity of P_L being x_l - x_r; solving the disparity of every valid pixel in the left image yields the disparity map;
6-2, eliminating invalid pixel points in the disparity map by adopting a median filtering mode;
6-3, converting the disparity map into a depth map through a triangulation formula;
step (7) of obtaining the position information of the acupuncture points
7-1 obtaining a back image contour map from the left image
7-2, searching two obvious characteristic points on the back based on the back image contour map to obtain two-dimensional pixel coordinates of the two characteristic points; the obvious characteristic points are the positions of the widest and narrowest points on the middle ridge line;
7-3, further obtaining the three-dimensional coordinates of the feature points based on the pixel coordinates of the two feature points in combination with a formula (4);
7-4, according to the three-dimensional coordinates of the characteristic points, combining the traditional Chinese medicine bone degree cunning method to obtain the position information of the acupuncture points.
2. The binocular vision and coded structured light based acupuncture point locating method of claim 1, wherein the pattern of the structured light of the step (2) is a stripe pattern coded with gray codes.
3. The binocular vision and coded structured light based acupuncture point locating method of claim 1, wherein the stereo correction of step (4) transforms the two images, which are not strictly coplanar and row-aligned, so that they become coplanar and row-aligned.
4. The binocular vision and coded structured light based acupuncture point locating method of claim 1, wherein the decoding of step (5-1) is performed by projecting the coded original Gray code pattern onto the back of the human body, then projecting the inverted Gray code pattern onto the back, and finally decoding from the two captured patterns.
5. The binocular vision and coded structured light based acupuncture point positioning method of claim 4, wherein the gray code inversion pattern is obtained by inverting an original gray code pattern.
6. The binocular vision and coded structured light based acupuncture point positioning method of claim 4, wherein a simple dual-threshold segmentation is adopted in the decoding stage: let I(x, y) be the gray value of the image at point (x, y), I⁺(x, y) the gray value when the original Gray code pattern is projected, and I⁻(x, y) the gray value when the inverted Gray code pattern is projected; if I⁺(x, y) < I⁻(x, y), the Gray code bit at that coordinate is 0, otherwise it is 1.
7. The acupuncture point positioning method based on the combination of binocular vision and coded structured light as claimed in claim 1, wherein the step (6-3) is specifically:
a) establishing the pixel coordinate system O₀-uv on the left image;
b) establishing the image coordinate system O-XY with the intersection of the camera optical axis and the image plane as the origin;
c) establishing the camera coordinate system with the camera optical centre as the origin and the camera optical axis as the Z axis, its X and Y axes coinciding in direction with the x and y axes of the image coordinate system;
d) constructing the relation between the pixel coordinate system and the image coordinate system as:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{1}$$

wherein u and v denote the u axis and v axis of the pixel coordinate system; (u_0, v_0) is the principal point in pixel coordinates; (x, y) is a coordinate point in the image coordinate system; (X_c, Y_c, Z_c) is a coordinate point in the camera coordinate system; and dx, dy are the physical sizes of a pixel along the x and y axes of the image coordinate system;
e) the relationship between the camera coordinate system and the image coordinate system is constructed through projection perspective relationship transformation as follows:
$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} \tag{2}$$
wherein f represents the focal length of the left camera;
f) the relationship between the camera coordinate system and the world coordinate system is described by the rotation matrix R and the translation vector T given by the camera extrinsic parameters, as follows:
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^{\mathsf{T}} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{3}$$
wherein (X_w, Y_w, Z_w) is a coordinate point in the world coordinate system;
g) the conversion relation between the world coordinate system and the pixel coordinate system of a certain point in the single-camera imaging can be obtained through the conversion of the four coordinate systems as follows:
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^{\mathsf{T}} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{4}$$
wherein f_x and f_y are the focal-length intrinsic parameters obtained from the left-camera calibration;
h) using the matching-point information obtained in step (5) together with formula (4), obtain the three-dimensional coordinates, in the world coordinate system, of all valid pixels of the left image;
i) three-dimensional measurement of the back of the human body is carried out according to the parallax principle, and the three-dimensional coordinates of the space points are as follows:
$$X_w = \frac{B\,x_l}{x_l - x_r}, \qquad Y_w = \frac{B\,y_l}{x_l - x_r}, \qquad Z_w = \frac{B\,f}{x_l - x_r} \tag{5}$$
wherein B is the baseline distance of the binocular camera, f is the focal length of the camera, (x_l, y_l) are the coordinates of a valid left-camera pixel in the image coordinate system, and (x_r, y_r) are the image coordinates of the right-camera matching point corresponding to (x_l, y_l).
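The coordinate chain of steps d)-i) can be sketched numerically. This is an illustrative sketch, not the patent's implementation: it assumes a rectified stereo pair with the principal point at the image origin, and models the right camera as the left camera translated by the baseline B along X; all function names are invented for the example.

```python
import numpy as np

def project_world_to_pixel(Pw, K, R, T):
    """World -> pixel via the chained transforms: s [u, v, 1]^T = K [R | T] [Xw, Yw, Zw, 1]^T."""
    Pc = R @ Pw + T            # world -> camera coordinates (rotation + translation)
    uv1 = K @ Pc               # camera -> image/pixel via intrinsic matrix
    return uv1[:2] / uv1[2]    # perspective division by depth

def triangulate_disparity(xl, yl, xr, B, f):
    """Recover (Xw, Yw, Zw) from rectified left/right image coordinates (disparity form)."""
    d = xl - xr                # disparity between the matched points
    return np.array([B * xl / d, B * yl / d, B * f / d])
```

Projecting a world point through both cameras and triangulating the resulting pair reproduces the original point, which is a quick consistency check on the two formulas.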
8. An acupuncture point locating system using the method of any one of claims 1 to 7, comprising two identical cameras, a structured light generator, a support frame, a calibration plate, and a master controller; the two cameras and the structured light generator are positioned above the support frame, and the structured light generator is positioned in front of the two cameras;
the structured light generator is used for projecting structured light on the back of a human body; wherein the pattern of structured light is a stripe pattern encoded with gray codes;
the camera is used for capturing an image of the back of the human body projected with the structured light pattern;
the calibration plate is used for calibrating the camera;
the master controller is used for controlling the start-up of the structured light generator, setting the coded structured light pattern of the structured light generator, receiving the image information transmitted by the cameras, and forwarding it to a computer for data analysis.
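The master controller's duties in claim 8 amount to a simple capture loop: show each coded pattern and its inverse, grab a synchronized stereo pair for each, and hand the frames on for decoding. The sketch below is hypothetical; `projector.show`, `*.grab`, and the object types stand in for whatever device SDK the real system uses.

```python
def capture_sequence(projector, left_cam, right_cam, patterns, inverses):
    """Project each Gray-code pattern and its inverse; grab a stereo pair for each."""
    frames = []
    # Interleave pattern/inverse pairs so each anti-code frame directly follows its pattern.
    for pattern in (p for pair in zip(patterns, inverses) for p in pair):
        projector.show(pattern)                              # set the coded pattern
        frames.append((left_cam.grab(), right_cam.grab()))   # synchronized stereo pair
    return frames                                            # forwarded to the host for decoding
```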
CN202011308196.2A 2020-11-20 2020-11-20 Acupuncture point positioning system and method based on combination of binocular vision and coded structured light Active CN112509055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011308196.2A CN112509055B (en) 2020-11-20 2020-11-20 Acupuncture point positioning system and method based on combination of binocular vision and coded structured light

Publications (2)

Publication Number Publication Date
CN112509055A true CN112509055A (en) 2021-03-16
CN112509055B CN112509055B (en) 2022-05-03

Family

ID=74959064



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016037486A1 (en) * 2014-09-10 2016-03-17 深圳大学 Three-dimensional imaging method and system for human body
CN108020175A (en) * 2017-12-06 2018-05-11 天津中医药大学 A kind of more optical grating projection binocular vision tongue body surface three dimension entirety imaging methods
CN108340371A (en) * 2018-01-29 2018-07-31 珠海市俊凯机械科技有限公司 Target follows localization method and system a little
CN109191509A (en) * 2018-07-25 2019-01-11 广东工业大学 A kind of virtual binocular three-dimensional reconstruction method based on structure light
CN111028295A (en) * 2019-10-23 2020-04-17 武汉纺织大学 3D imaging method based on coded structured light and dual purposes


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QI ZHOU: "Combing structured light measurement technology with binocular stereo vision", 2017 IEEE 2nd International Conference on Opto-Electronic Information Processing (ICOIP) *
DAI Hongfen et al.: "Acupuncture assistance system based on augmented reality and binocular vision technology", Techniques of Automation and Applications *
WANG Bing et al.: "Research on 3D measurement technology of binocular stereo vision based on Gray code and multi-step phase-shifting method", Computer Measurement & Control *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129430A (en) * 2021-04-02 2021-07-16 中国海洋大学 Underwater three-dimensional reconstruction method based on binocular structured light
CN113129430B (en) * 2021-04-02 2022-03-04 中国海洋大学 Underwater three-dimensional reconstruction method based on binocular structured light
CN112991437A (en) * 2021-04-08 2021-06-18 上海盛益精密机械有限公司 Full-automatic acupuncture point positioning method based on image expansion and contraction technology
CN112991437B (en) * 2021-04-08 2023-01-10 上海盛益精密机械有限公司 Full-automatic acupuncture point positioning method based on image expansion and contraction technology
CN113689326A (en) * 2021-08-06 2021-11-23 西南科技大学 Three-dimensional positioning method based on two-dimensional image segmentation guidance
CN113689326B (en) * 2021-08-06 2023-08-04 西南科技大学 Three-dimensional positioning method based on two-dimensional image segmentation guidance
CN114812429A (en) * 2022-03-06 2022-07-29 南京理工大学 Binocular vision metal gear three-dimensional appearance measuring device and method based on Gray code structured light
CN114812429B (en) * 2022-03-06 2022-12-13 南京理工大学 Binocular vision metal gear three-dimensional appearance measuring device and method based on Gray code structured light


Similar Documents

Publication Publication Date Title
CN112509055B (en) Acupuncture point positioning system and method based on combination of binocular vision and coded structured light
CN108564041B (en) Face detection and restoration method based on RGBD camera
Fusiello et al. Efficient stereo with multiple windowing
CN111145238A (en) Three-dimensional reconstruction method and device of monocular endoscope image and terminal equipment
CN111028295A (en) 3D imaging method based on coded structured light and dual purposes
CN104424662A (en) Stereo scanning device
CN113129430B (en) Underwater three-dimensional reconstruction method based on binocular structured light
CN108245788B (en) Binocular distance measuring device and method and accelerator radiotherapy system comprising same
CN111508068B (en) Three-dimensional reconstruction method and system applied to binocular endoscopic image
CN116309829B (en) Cuboid scanning body group decoding and pose measuring method based on multi-view vision
Wang et al. Robust motion estimation and structure recovery from endoscopic image sequences with an adaptive scale kernel consensus estimator
CN116883471B (en) Line structured light contact-point-free cloud registration method for chest and abdomen percutaneous puncture
CN114399527A (en) Method and device for unsupervised depth and motion estimation of monocular endoscope
CN113409242A (en) Intelligent monitoring method for point cloud of rail intersection bow net
CN112991517A (en) Three-dimensional reconstruction method for texture image coding and decoding automatic matching
CN115619790B (en) Hybrid perspective method, system and equipment based on binocular positioning
CN116597488A (en) Face recognition method based on Kinect database
CN115252992B (en) Trachea cannula navigation system based on structured light stereoscopic vision
CN109410272B (en) Transformer nut recognition and positioning device and method
Lacher et al. Low-cost surface reconstruction for aesthetic results assessment and prediction in breast cancer surgery
CN112288689B (en) Three-dimensional reconstruction method and system for operation area in microsurgery imaging process
CN113052898B (en) Point cloud and strong-reflection target real-time positioning method based on active binocular camera
CN115018890A (en) Three-dimensional model registration method and system
US20220335649A1 (en) Camera pose determinations with depth
CN111743628A (en) Automatic puncture mechanical arm path planning method based on computer vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant