WO2023165451A1 - Method for establishing three-dimensional model, endoscope and storage medium - Google Patents

Method for establishing three-dimensional model, endoscope and storage medium

Info

Publication number
WO2023165451A1
WO2023165451A1 PCT/CN2023/078598 CN2023078598W
Authority
WO
WIPO (PCT)
Prior art keywords
matching
preset
image
pixel
disparity map
Prior art date
Application number
PCT/CN2023/078598
Other languages
English (en)
French (fr)
Inventor
Yang Yunfei (杨云霏)
Liu Xiaoyao (刘晓瑶)
Li Zhijian (李志坚)
Original Assignee
Shanghai MicroPort MedBot (Group) Co., Ltd. (上海微创医疗机器人(集团)股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai MicroPort MedBot (Group) Co., Ltd.
Publication of WO2023165451A1 publication Critical patent/WO2023165451A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30036Dental; Teeth
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Definitions

  • the invention relates to the technical field of medical devices, in particular to a method for establishing a three-dimensional model, an endoscope and a storage medium.
  • the three-dimensional effect of endoscope-assisted minimally invasive abdominal surgery generally requires doctors to wear 3D glasses and watch a 3D display screen while operating; alternatively, a binocular endoscope captures two abdominal images that are shown on left and right monitors, whose contents enter the doctor's left and right eyes respectively to achieve a stereoscopic effect.
  • the traditional binocular stereo matching algorithm needs to match each feature point between the left and right camera views, and the long computation time of feature matching cannot meet the real-time requirements during surgery; moreover, such algorithms rely on richly textured images, while the tissues and organs in the abdominal cavity do not carry strong texture information, so the three-dimensional accuracy is not high.
  • the first aspect of the present application proposes a method for establishing a three-dimensional model for establishing a three-dimensional scene model of an area to be measured, the method comprising:
  • a precision disparity map is obtained by using a preset stereo matching algorithm
  • a plurality of original structure images obtained by projecting different structured light patterns are acquired to increase the surface texture of the area to be measured, together with the original image obtained without projection; different structured light patterns correspond to different original structure images. According to the original structure images, a precision disparity map is calculated using the preset stereo matching algorithm, and the 3D scene model is obtained from the precision disparity map and the original image using the first preset rule; this guarantees 3D stereo accuracy while accelerating stereo matching to achieve a real-time effect.
  • the second aspect of the present application proposes an endoscope, comprising:
  • the projection module is used to project different structured light patterns on the area to be tested;
  • the imaging module is used to obtain a plurality of original structure images obtained by projecting different structured light patterns on the area to be measured, and an original image obtained without projecting on the area to be measured; different structured light patterns correspond to different original structure images;
  • the image processing device is connected to both the imaging module and the projection module, and is configured to:
  • a precision disparity map is obtained by using a preset stereo matching algorithm
  • the third aspect of the present application proposes a storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the above-mentioned method are implemented.
  • FIG. 1 is a schematic flowchart of a method for establishing a three-dimensional model provided in an embodiment of the present application
  • Fig. 2 is a schematic flow chart of acquiring a structured light pattern provided in an embodiment of the present application
  • FIG. 3 is a schematic diagram of two different structured light patterns provided in an embodiment of the present application.
  • Fig. 4 is a schematic diagram of an original image and an original structure image provided in an embodiment of the present application.
  • FIG. 5 is a schematic flow chart of obtaining a precision disparity map provided in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of the principles of a de-distortion and stereo correction process provided in an embodiment of the present application.
  • FIG. 7 is a schematic flow chart of obtaining a precision disparity map provided in another embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a first matching algorithm provided in an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a second matching algorithm provided in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a precision disparity map provided in an embodiment of the present application.
  • Fig. 11 is a schematic flowchart of a part of a method for establishing a three-dimensional model provided in an embodiment of the present application
  • Fig. 12 is a schematic diagram of a three-dimensional point cloud image provided in an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a three-dimensional scene model provided in an embodiment of the present application.
  • Fig. 14 is a schematic structural diagram of an endoscope provided in an embodiment of the present application.
  • first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the application.
  • the present application proposes a method for establishing a three-dimensional model, an endoscope and a storage medium to establish a three-dimensional scene model of the area to be measured irradiated by the endoscope, and to speed up stereo matching while ensuring three-dimensional accuracy to achieve real-time effects.
  • the method for establishing a three-dimensional model is used to establish a three-dimensional scene model of an area to be measured irradiated by an endoscope, as shown in FIG. 1 , including:
  • Step S10 Obtain multiple original structural images obtained by projecting different structured light patterns on the area to be tested, and original images obtained by not projecting on the area to be tested;
  • Step S20 Calculate and obtain a precision disparity map by using a preset stereo matching algorithm according to each original structure image
  • Step S30 Obtain a 3D scene model of the region to be measured by using the first preset rule according to the precision disparity map and the original image.
  • different structured light patterns correspond to different original structured images; the structured light pattern includes a speckle light pattern, and the structured projection includes a speckle projection.
  • the area to be measured is the irradiation area of the endoscope
  • the original structure image includes a first original structure image and a second original structure image
  • the first original structure image and the second original structure image have different structured light patterns, so that they correspond to each other during execution of the first preset algorithm and the second preset algorithm, generating a high-precision three-dimensional scene model.
  • step S10 acquiring a plurality of original structural images obtained by projecting different structured light patterns on the area to be tested, and before the original image obtained by not projecting on the area to be tested, further includes:
  • Step S101 Acquire a structured light pattern, the structured light pattern includes a first structured light pattern and a second structured light pattern different from the first structured light pattern.
  • step S101: acquiring a structured light pattern includes:
  • Step S1011 Acquire the structure canvas; the structure canvas includes several grids, a speckle is set in each grid, and the grids correspond to the speckles one-to-one;
  • Step S1012 Obtain the random code of each spot
  • Step S1013 Obtain the code of the speckle in each grid according to the second preset rule according to the random code, and obtain the speckle pattern;
  • Step S1014 Obtain a structured light pattern according to the judgment result of the coding of the speckle in the speckle pattern.
  • if it is detected that the codes of the speckles in the speckle pattern are not unique, the second preset rule is reset; ensuring that the second preset rule differs on each attempt speeds up the assignment of speckle codes within each grid, thereby improving the efficiency of 3D model establishment.
  • step S1014 Acquiring the structured light pattern according to the judgment result of the encoding of the speckle in the speckle pattern, including:
  • Step S1014a judging whether the coding of the speckles in the speckle pattern is unique
  • Step S1014b if yes, determine the speckle pattern as a structured light pattern
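  • the generation loop of steps S1011-S1014 can be sketched as follows. This is an illustrative simplification, not the patent's implementation: each grid cell carries one speckle with a random integer code (the shapes, orientations, colors, and offsets of the random code are collapsed into a single integer), and the pattern is regenerated with a different random state, mirroring the reset of the second preset rule, until every code is unique. The function name and parameters are hypothetical.

```python
import random

def generate_speckle_pattern(rows, cols, code_space=4096, max_tries=100):
    """Sketch of steps S1011-S1014: one speckle per grid cell, each carrying
    a random code; regenerate with a different random state each retry
    (mirroring the reset of the 'second preset rule') until unique."""
    for attempt in range(max_tries):
        rng = random.Random(attempt)          # a different rule each retry
        pattern = [[rng.randrange(code_space) for _ in range(cols)]
                   for _ in range(rows)]
        codes = [c for row in pattern for c in row]
        if len(set(codes)) == len(codes):     # step S1014a: uniqueness check
            return pattern                    # step S1014b: accept the pattern
    raise RuntimeError("no unique speckle coding found")
```

In practice two such patterns generated from disjoint code ranges would satisfy the requirement that the first and second structured light patterns share no codes.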
  • the random code includes at least one of the shape, orientation, size, color, grayscale, and offset distance and offset direction in the corresponding grid of each spot.
  • the codes of the spots in the first structured light pattern corresponding to the first original structured image are different from the codes of the spots in the second structured light pattern corresponding to the second original structured image;
  • the codes of all spots are different and have global uniqueness;
  • the codes of all spots in the second structured light pattern are different and have global uniqueness.
  • spots in the grid can occupy multiple pixels.
  • the left picture in Figure 3 is the first structured light pattern
  • the right picture in Figure 3 is the second structured light pattern.
  • Figure 3 is only for illustration, and there are countless possible structured light patterns; it is only necessary to ensure that the codes of all spots in the first structured light pattern differ from the codes of all spots in the second structured light pattern.
  • the area to be measured in Figure 4 is the abdominal cavity
  • the left image in Figure 4 is the original image not projected to the area to be measured
  • the right image in Figure 4 is the original structured image with structured light patterns
  • structured light patterns with different codes can be Add characteristic textures for the area to be tested, such as tissues and organs in the abdominal cavity in a dark environment.
  • step S20 Calculate the precision disparity map by using a preset stereo matching algorithm according to each original structure image, including:
  • Step S21 Perform de-distortion processing and stereo correction processing on each original structural image in sequence to obtain multiple structural images
  • Step S22 According to each original structure image, a precision disparity map is obtained by using a preset stereo matching algorithm.
  • the first original structure image and the second original structure image are sequentially subjected to de-distortion and stereo correction processing; the two images, whose epipolar lines were originally not coplanar, are corrected so that they are aligned in the same plane. That is, the structure images include a first structure image and a second structure image lying in the same plane, and a preset point P is projected onto the feature point Pl of the first structure image and onto the matching point Pr of the second structure image, with Pl and Pr in the same row; therefore, in the preset stereo matching algorithm, only matching points on the same row need to be searched, which speeds up the algorithm.
  • the preset point can be any point in the space, which is not on the first original structure image and the second original structure image.
  • the preset stereo matching algorithm includes a first preset algorithm and a second preset algorithm; Step S22, calculating the precision disparity map using the preset stereo matching algorithm according to each structure image, includes the following steps:
  • Step S221 According to the first structural image and the second structural image, the first matching disparity map is obtained by using the first preset algorithm;
  • Step S222 According to the first structural image, the second structural image and the first matching disparity map, a second preset algorithm is used to calculate a precision disparity map.
  • step S221 according to the first structural image and the second structural image, the first matching disparity map is calculated by using the first preset algorithm, including:
  • Step S2211 According to the first structural image and the second structural image, a low-resolution disparity map is calculated by using the first matching algorithm;
  • Step S2212 Perform the first optimization process on the low-resolution disparity map to obtain the first matching disparity map.
  • the first matching algorithm includes, but is not limited to, a coarse matching algorithm.
  • the size of the low resolution is 1/N times that of the first structure image, where N is an integer; the size of the first structure image is the same as that of the second structure image.
  • Step S22111 Obtain several first matching grid points on the first structure image, with a first preset number of pixels between adjacent first matching grid points;
  • Step S22112 Obtain the preset first matching grid point p on the first structure image and the original parallax search range of the preset first matching grid point on the second structure image;
  • Step S22113 Calculate the matching cost value of each pixel in the original parallax search range according to the preset energy function
  • Step S22114 Compare the matching cost value of each pixel in the original parallax search range, and determine the preset first matching pixel q corresponding to the minimum matching cost value in the original parallax search range; the preset first matching pixel q and the preset Let the first matching grid point p be in the same row;
  • Step S22115 According to the comparison result of the minimum matching cost value of the preset first matching pixel point q and the preset first matching cost threshold, determine the mapped first matching pixel point p' in the first structure image.
  • the preset energy function includes a preset matching cost function.
  • the preset first matching grid point p can be randomly selected, the original disparity search range is relatively large, and in the subsequent process of finding the minimum matching cost value, the range is gradually narrowed according to the disparity value of each grid point.
  • for example, for point A on the first structure image and its corresponding point A' on the second structure image, which lie on the same row, the disparity search range is the span of 40 pixels ending at A': the search starts 40 pixels from A' and runs up to A'.
  • the matching cost value of each pixel in the original disparity search range is calculated sequentially from left to right.
  • the preset first matching pixel point q and the preset first matching grid point p may or may not be on the same column; for example, the preset first matching grid point p is located at grid point (10, 10), while the preset first matching pixel point q is located at grid point (10, 4).
  • the preset energy function is a zero-mean normalized cross-correlation function C_ZNCC(x, y, d), combining the m structure image pairs in the time domain with a (2l+1)×(2l+1) window in the space domain:
  • C_ZNCC(x, y, d) = Σₙ Σₖ Σₚ [I_Lⁿ(x+k, y+p) − Ī_L]·[I_Rⁿ(x−d+k, y+p) − Ī_R] / sqrt( Σₙ Σₖ Σₚ [I_Lⁿ(x+k, y+p) − Ī_L]² · Σₙ Σₖ Σₚ [I_Rⁿ(x−d+k, y+p) − Ī_R]² )
  • x, y are the coordinates of the pixel point; d is the disparity value; m is the number of structure image pairs; I_Lⁿ and I_Rⁿ are the n-th first and second structure images, with Ī_L and Ī_R their window means; the range of k and p is [−l, l]; the range of n is [1, m]
  • for example, within a preset time of 3 milliseconds, 3 first structure images and 3 second structure images are acquired sequentially, one pair every 1 millisecond.
  • the correlation value C is calculated by combining the pixel information in the time domain and the space domain. The larger the correlation value, the smaller the minimum matching cost value.
  • the matching cost value corresponding to each pixel within the original disparity search range in the second structure image can be calculated in parallel, thereby speeding up calculation efficiency.
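  • the energy function combining time-domain and space-domain pixel information can be sketched as a spatiotemporal ZNCC over a stack of m structure image pairs. This is an assumed formulation consistent with the variables named above (x, y, d, l, m); the function name, array layout, and the convention that the match on the second image sits at x − d are assumptions.

```python
import numpy as np

def znc_cost(left_stack, right_stack, x, y, d, l=3):
    """Spatiotemporal ZNCC C_ZNCC(x, y, d): correlate a (2l+1)x(2l+1)
    window across all m image pairs at once.  left_stack/right_stack are
    (m, H, W) arrays of first/second structure images."""
    wl = left_stack[:, y - l:y + l + 1, x - l:x + l + 1].astype(np.float64)
    wr = right_stack[:, y - l:y + l + 1, x - d - l:x - d + l + 1].astype(np.float64)
    wl = wl - wl.mean()                      # zero-mean over time and space
    wr = wr - wr.mean()
    denom = np.sqrt((wl * wl).sum() * (wr * wr).sum())
    return (wl * wr).sum() / denom if denom > 0 else 0.0
```

A larger correlation value corresponds to a smaller matching cost, matching the statement above; costs for all candidate pixels in a search range are independent, so they can be evaluated in parallel.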
  • step S22115, determining the mapped first matching pixel in the first structure image according to the comparison result of the minimum matching cost value of the preset first matching pixel point and the preset first matching cost threshold, includes:
  • Step S22115a Determine whether the minimum matching cost value of the preset first matching pixel point q is less than the preset first matching cost threshold
  • Step S22115b If not, determine the preset first matching grid point p as a parallax hole
  • Step S22115c If yes, obtain the first parallax search range of the preset first matching pixel point q on the first structure image, and calculate the matching cost value of each pixel in the first parallax search range according to the preset energy function, Comparing the matching cost value of each pixel in the first parallax search range, determining the minimum matching cost value in the first parallax search range, and determining the pixel point corresponding to the minimum matching cost value in the first parallax search range as the mapping first matching Pixel p'; mapping the first matching pixel p' to be in the same row as the preset first matching pixel q.
  • the parallax hole means that the parallax value of the grid point on the image is a preset parallax value, and the preset parallax value may be 0.
  • the original disparity search range is larger than the first disparity search range; the original disparity search range and the first disparity search range are on the same row; the first disparity search range shrinks dynamically according to the disparity values of the grid points around the preset first matching grid point p.
  • the matching cost of each pixel within the first disparity search range on the first structure image can be calculated in parallel to speed up calculation efficiency.
  • step S22116 according to the comparison result of the minimum matching cost value of the preset first matching pixel point and the preset first matching cost threshold, determine the first mapping in the first structure image. After matching pixels, it also includes:
  • Step S22116 Obtain a low-resolution disparity map according to the comparison result of the distance between the preset first matching grid point p and the mapped first matching pixel point p' and the preset first matching distance.
  • step S22116 According to the comparison result of the distance between the preset first matching grid point p and the mapped first matching pixel point p' and the preset first matching distance, obtain a low-resolution disparity map, including:
  • Step S22116a Determine whether the distance between the preset first matching grid point p and the mapped first matching pixel point p' is smaller than the preset first matching distance;
  • Step S22116b If it is greater than or equal to the preset first matching distance, then determine the preset first matching grid point as a parallax hole;
  • Step S22116c If it is less than the preset first matching distance, calculate the disparity value of the preset first matching grid point p from the horizontal coordinate Xp of p and the horizontal coordinate Xq of the preset first matching pixel point q; then reacquire each first matching grid point on the first structure image and calculate its disparity value to obtain a low-resolution disparity map.
  • the disparity values of all first matching grid points on the first structure image are calculated, and all first matching grid points are combined to form a low-resolution disparity map.
  • if the distance between the preset first matching grid point p and the mapped first matching pixel point p' is less than the preset first matching distance, it is determined that the preset first matching grid point p of the first structure image and the preset first matching pixel point q of the second structure image share the same disparity value, and point p corresponds to point q.
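  • the consistency check of steps S22116a-S22116c reduces to a small rule: accept the match only when grid point p and its mapped pixel p' are close, and take the disparity as the horizontal-coordinate difference Xp − Xq. A minimal sketch, with hypothetical names and the assumed left-to-right sign convention:

```python
def grid_disparity(x_p, x_q, x_p_mapped, max_dist):
    """Sketch of steps S22116a-c: x_p is grid point p on the first image,
    x_q its match q on the second image, x_p_mapped the pixel p' found by
    matching q back to the first image.  If p and p' are too far apart the
    match is inconsistent and p becomes a parallax hole (None here)."""
    if abs(x_p - x_p_mapped) >= max_dist:
        return None               # step S22116b: parallax hole
    return x_p - x_q              # step S22116c: disparity = Xp - Xq
```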
  • Step S2212 Perform the first optimization process on the low-resolution disparity map to obtain the first matching disparity map, including:
  • Step S22121 removing discrete grid points
  • Step S22122 filling and sampling the positions of the disparity holes and the discrete grid points according to the bilinear interpolation method to obtain the first matching disparity map.
  • the filled disparity map is sampled, and the size of the filled disparity map is restored to the size of the first structure image or the second structure image, so as to obtain the first matching disparity map.
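  • the first optimization process (hole filling plus sampling back to full resolution) might look like the sketch below. It is a simplification: holes are filled from the nearest valid neighbor rather than by full bilinear interpolation over holes, and the disparity values are scaled by the resolution factor, which the text does not state explicitly; the function name and parameters are hypothetical.

```python
import numpy as np

def fill_and_upsample(disp_low, scale):
    """Sketch of steps S22121-S22122: fill disparity holes (value 0), then
    restore the map to structure-image size by bilinear interpolation."""
    disp = disp_low.astype(np.float64).copy()
    holes = disp == 0
    if holes.any():
        ys, xs = np.nonzero(~holes)
        vals = disp[~holes]
        for y, x in zip(*np.nonzero(holes)):
            i = np.argmin((ys - y) ** 2 + (xs - x) ** 2)
            disp[y, x] = vals[i]          # nearest-valid fill (simplification)
    h, w = disp.shape
    yy = np.linspace(0, h - 1, h * scale)
    xx = np.linspace(0, w - 1, w * scale)
    y0 = np.floor(yy).astype(int); x0 = np.floor(xx).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    fy = (yy - y0)[:, None]; fx = (xx - x0)[None, :]
    up = (disp[np.ix_(y0, x0)] * (1 - fy) * (1 - fx)
          + disp[np.ix_(y1, x0)] * fy * (1 - fx)
          + disp[np.ix_(y0, x1)] * (1 - fy) * fx
          + disp[np.ix_(y1, x1)] * fy * fx)
    return up * scale   # assumption: disparities scale with resolution
```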
  • the calculation method of the preset energy function in the second preset algorithm is the same as the calculation method of the preset energy function in the first preset algorithm, and will not be described in detail below; the second preset algorithm is described in detail below:
  • step S222 According to the first structural image, the second structural image and the first matching disparity map, the second preset algorithm is used to calculate the precision disparity map, including:
  • Step S2221 According to the first structural image, the second structural image and the first matching disparity map, use the second matching algorithm to calculate and obtain the second matching disparity map;
  • Step S2222 Perform a second optimization process on the second matching disparity map to obtain a precision disparity map.
  • the second matching algorithm includes, but is not limited to, a fine matching algorithm.
  • step S2221 according to the first structural image, the second structural image and the first matching disparity map, the second matching algorithm is used to calculate and obtain the second matching disparity map, including:
  • Step S22211 Obtain several second matching grid points on the first structure image, with a second preset number of pixels between adjacent second matching grid points;
  • Step S22212 Acquire a preset second matching grid point on the first structure image, and obtain, according to the first matching disparity map, a second disparity search range for the preset second matching grid point on the second structure image;
  • Step S22213 Calculate the matching cost value of each pixel in the second parallax search range according to the preset energy function
  • Step S22214 Compare the matching cost value of each pixel in the second disparity search range, determine the minimum matching cost value in the second disparity search range, and determine the pixel corresponding to that minimum matching cost value as the preset second matching pixel; the preset second matching pixel is in the same row as the preset second matching grid point;
  • Step S22215 According to the comparison result of the minimum matching cost value of the preset second matching pixel and the preset second matching cost threshold, determine the mapped second matching pixel in the first structure image.
  • the expression of the second disparity search range is: [x_q − r, x_q + r], where x_q = x_p − disp(q); x_q is the coordinate of the preset second matching pixel q; disp(q) is the disparity value of the preset second matching grid point taken from the first matching disparity map; and the preset narrowband search radius r is set according to actual needs, which is not limited in this application.
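  • under the reading that the coarse disparity from the first matching disparity map predicts the match position on the second structure image, the narrowband range can be sketched as follows; the function name, the sign convention x_q = x_p − d, and the clamping to image bounds are assumptions:

```python
def narrowband_range(x_p, disp_p, r, width):
    """Sketch of the second disparity search range: candidate x-coordinates
    on the second structure image, centred at the position predicted by the
    coarse disparity disp_p, within the preset narrowband search radius r."""
    centre = x_p - disp_p            # predicted match position x_q
    lo = max(0, centre - r)          # clamp to image bounds
    hi = min(width - 1, centre + r)
    return list(range(lo, hi + 1))
```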
  • step S22215, determining the mapped second matching pixel in the first structure image according to the comparison result of the minimum matching cost value of the preset second matching pixel and the preset second matching cost threshold, includes:
  • Step S22215a Determine whether the minimum matching cost value of the preset second matching pixel is less than the preset second matching cost threshold;
  • Step S22215b If not, determine the preset second matching grid point as a parallax hole;
  • Step S22215c If yes, obtain the third disparity search range of the preset second matching pixel on the first structure image, calculate the matching cost value of each pixel in the third disparity search range according to the preset energy function, compare those matching cost values, determine the minimum matching cost value within the third disparity search range, and determine the pixel corresponding to that minimum as the mapped second matching pixel; the mapped second matching pixel is in the same row as the preset second matching pixel.
  • step S22215, according to the comparison result of the minimum matching cost value of the preset second matching pixel and the preset second matching cost threshold, after determining the mapped second matching pixel in the first structure image, further includes:
  • Step S22216 Obtain a second matching disparity map according to the comparison result of the distance between the preset second matching grid point and the mapped second matching pixel and the preset second matching distance.
  • step S22216, obtaining the second matching disparity map according to the comparison result of that distance and the preset second matching distance, includes:
  • Step S22216a Determine whether the distance between the preset second matching grid point and the mapped second matching pixel is less than the preset second matching distance;
  • Step S22216b If it is greater than or equal to the preset second matching distance, determine the preset second matching grid point as a parallax hole;
  • Step S22216c If it is less than the preset second matching distance, calculate the disparity value of the preset second matching grid point from the horizontal coordinates of the preset second matching grid point and the preset second matching pixel; then reacquire the second matching grid points on the first structure image and calculate the disparity value of each second matching grid point to obtain the second matching disparity map.
  • the disparity values of all second matching grid points on the first structure image are calculated, and all second matching grid points are combined to form the second matching disparity map.
  • step S2222 performing a second optimization process on the second matching disparity map to obtain a precision disparity map, including:
  • Step S22221 removing discrete grid points
  • Step S22222 filling the parallax holes and the positions of the discrete grid points according to the bilinear interpolation method
  • Step S22223 Obtain a precision disparity map according to the sub-pixel optimization technology and the local inclined plane model, as shown in FIG. 10 .
  • grid points with a disparity value of 0 in the second matching disparity map are disparity holes.
  • the sub-pixel optimization technology is used to obtain sub-pixels using the method of quadratic curve interpolation.
  • the principle is that a disparity point and its two neighboring disparity points determine a quadratic curve (a parabola) through three points; the vertex of the parabola gives the optimal sub-pixel position.
  • the local inclined plane model smooths the disparity map, and each local inclined plane is fitted by least squares; since the local areas are uncorrelated, the least-squares fits of the local inclined planes can be computed in parallel to improve efficiency.
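  • the three-point parabola refinement described above can be sketched as below; the function name and cost-array layout are hypothetical, and boundary minima are returned unrefined as a simplification:

```python
import numpy as np

def subpixel_disparity(costs, d_min):
    """Sketch of the sub-pixel optimization: fit a parabola through the
    minimum matching cost and its two neighbours; the parabola's vertex
    gives the optimal sub-pixel disparity.  costs[i] is the matching cost
    at integer disparity d_min + i."""
    costs = np.asarray(costs, dtype=np.float64)
    i = int(np.argmin(costs))
    if i == 0 or i == len(costs) - 1:
        return float(d_min + i)          # no neighbours on both sides
    c0, c1, c2 = costs[i - 1], costs[i], costs[i + 1]
    denom = c0 - 2.0 * c1 + c2
    offset = 0.0 if denom == 0 else 0.5 * (c0 - c2) / denom
    return d_min + i + offset            # vertex of the fitted parabola
```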
  • step S30 Acquire the 3D scene model of the region to be tested according to the precision disparity map and the original image using the first preset rule, including:
  • Step S31 Obtain the camera calibration parameters of the endoscope
  • Step S32 According to the camera calibration parameters, convert the precision disparity map using the reprojection formula to obtain a 3D point cloud map, as shown in Figure 12;
  • Step S33 Preprocessing the 3D point cloud image to obtain a 3D surface model
  • Step S34 Perform point cloud texture mapping processing on the 3D surface model and the original image to obtain a 3D scene model, as shown in FIG. 13 .
  • the left picture in Fig. 13 is a 3D scene model of a surgical instrument suspended above the tissue, ready to clamp or cut the lesion;
  • the right picture in Fig. 13 is a 3D top view of the in-vitro simulation scene.
  • the camera calibration parameters are the factory calibration parameters of the endoscope, and include the camera intrinsic matrix, distortion coefficient matrix, essential matrix, fundamental matrix, rotation matrix and translation matrix.
  • the steps to obtain the camera calibration parameters are as follows:
  • Step S311: obtaining the checkerboard pattern;
  • Step S312: capturing the checkerboard in multiple calibration poses;
  • Step S313: finding the corner points of the checkerboard and obtaining the camera calibration parameters by matrix computation.
  • the working distance of the endoscope is 3 cm to 12.5 cm from the lens, so a black-and-white checkerboard with 3 mm squares, 12 squares wide and 9 squares long, is designed, and calibration is performed 4 cm to 12 cm in front of the lens.
  • this range is the working distance of the endoscope, where imaging is sharpest; during calibration the board is rotated, tilted, and moved back and forth so that it appears in different poses across the camera's field of view, and about 15 images are captured.
  • combining the camera calibration parameters with the parallel binocular geometry, the precision disparity map is converted into spatial xyz coordinate information; the disparity map can be visualized as a heat map, where bluer colors indicate smaller disparity values and yellower colors larger ones.
  • the disparity is measured in pixels; the spatial coordinate corresponding to the image abscissa is denoted X, the coordinate corresponding to the ordinate is denoted Y, and the depth is denoted Z, all in mm.
  • X = (x - cx) * Z / f
  • Y = (y - cy) * Z / f
  • Z = (f * baseline) / disp
  • baseline is the distance between the optical centers of the two cameras, i.e. the baseline distance;
  • disp is the disparity value;
  • f is the normalized focal length (in pixels);
  • cx is the abscissa of the camera's optical center on the image;
  • cy is the ordinate of the camera's optical center on the image.
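The three formulas can be applied to every valid pixel at once; the sketch below is an illustrative vectorized version, and the function name is ours rather than the patent's:

```python
import numpy as np

def disparity_to_points(disp, f, cx, cy, baseline):
    """Back-project a disparity map: Z = f*baseline/disp,
    X = (x-cx)*Z/f, Y = (y-cy)*Z/f for every pixel with disp > 0.
    Returns an (N, 3) array in the same length unit as baseline."""
    ys, xs = np.nonzero(disp > 0)
    Z = f * baseline / disp[ys, xs]
    X = (xs - cx) * Z / f
    Y = (ys - cy) * Z / f
    return np.column_stack([X, Y, Z])

disp = np.zeros((2, 2))
disp[1, 1] = 4.0
pts = disparity_to_points(disp, f=8.0, cx=1.0, cy=1.0, baseline=2.0)
# the single valid pixel sits on the optical axis at depth f*baseline/disp = 4 mm
```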
  • the preprocessing includes point cloud denoising and point cloud meshing. For denoising, the point cloud is first downsampled to compress the data and improve algorithm efficiency; the neighborhood of each point is then statistically analyzed, and neighboring points that do not satisfy a given criterion are removed.
  • Point cloud meshing uses the greedy triangulation method: the points are first projected along their normals onto a 2D coordinate plane, the projected points are triangulated in the plane to obtain the topological connectivity of each point, and the connectivity of the original 3D points is then determined from that of the projected points. The resulting triangular mesh is the reconstructed 3D surface model.
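The statistical neighborhood analysis used for denoising can be sketched as below. This brute-force O(N^2) version with assumed parameter names is for illustration only; a real pipeline would use a k-d tree, and libraries such as PCL or Open3D ship equivalent statistical outlier removal filters.

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_ratio=2.0):
    """Keep points whose mean distance to their k nearest neighbours is
    within mean + std_ratio * std of that statistic over the whole cloud."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    knn = np.sort(dist, axis=1)[:, 1:k + 1]   # skip the self-distance 0
    mean_d = knn.mean(axis=1)
    keep = mean_d <= mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]

grid = np.array([[i, j, 0.0] for i in range(3) for j in range(3)])
cloud = np.vstack([grid, [[100.0, 100.0, 0.0]]])   # one far outlier
clean = statistical_outlier_removal(cloud, k=3, std_ratio=1.0)
```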
  • point cloud texture mapping stores all the color information of the original image in a texture map; at display time each mesh cell of the 3D surface model is rendered according to its texture coordinates and the texture map, finally yielding the 3D scene model.
  • an endoscope is also proposed, which implements the above 3D model building method and includes a projection module 10 , an imaging module 20 , and an image processing device 30 .
  • the projection module 10 is used to project different structured light patterns on the area to be measured;
  • the imaging module 20 is used to acquire a plurality of original structure images obtained by projecting different structured light patterns onto the region to be measured, as well as the original image obtained without projection.
  • the image processing device 30 is connected to the imaging module 20 and the projection module 10, and is configured to: calculate a precision disparity map from each original structure image using a preset stereo matching algorithm; and obtain the 3D scene model of the region to be measured from the precision disparity map and the original image using the first preset rule.
  • in the endoscope provided above, a plurality of original structure images produced by projecting different structured light patterns are acquired to enrich the surface texture of the region to be measured, together with the original image obtained without projection; different structured light patterns yield different original structure images. A precision disparity map is computed from the original structure images with the preset stereo matching algorithm, and the 3D scene model is obtained from the disparity map and the original image using the first preset rule, which accelerates stereo matching to real-time speed while preserving 3D accuracy. Moreover, the miniature projection module is embedded in the endoscope without increasing the volume of an existing endoscope; the structure is simple and the equipment convenient.
  • the endoscope further includes: an illumination module, a camera module, a cold light source 60 and a control circuit 70 .
  • the cold light source 60 is used to provide cold light source illumination to the projection module 10 via the light guide;
  • the control circuit 70 is configured with its first end connected to the projection module 10 and the imaging module 20 and its second end connected to the image processing device 30 ; it switches between the cold-light illumination mode and the structured-light projection mode and controls the projection frequency and the camera imaging frequency, allowing the compact device to realize complex control.
  • the lighting module includes a first lighting lamp 41 and a second lighting lamp 42 symmetrically distributed on the front end face of the endoscope;
  • the camera module includes a first camera 51 and a second camera 52 , which are symmetrically distributed on the front end face of the endoscope and both located between the first illuminating lamp 41 and the second illuminating lamp 42 .
  • the first lighting lamp 41 and the second lighting lamp 42 emit visible light.
  • the image processing device 30 is further configured to:
  • perform de-distortion and stereo rectification on each original structure image in turn to obtain a plurality of structure images, where the structure images include a first structure image and a second structure image on the same plane;
  • calculate the precision disparity map from the structure images using the preset stereo matching algorithm.
  • the preset stereo matching algorithm includes a first preset algorithm and a second preset algorithm; the image processing device 30 is further configured to: calculate a first matching disparity map from the first structure image and the second structure image using the first preset algorithm; and calculate the precision disparity map from the first structure image, the second structure image and the first matching disparity map using the second preset algorithm.
  • the image processing device 30 is further configured to: calculate the low-resolution disparity map from the first structure image and the second structure image using the first matching algorithm, and perform the first optimization on the low-resolution disparity map to obtain the first matching disparity map.
  • the image processing device 30 is further configured to: acquire a number of first matching grid points on the first structure image, each separated from the next by a first preset number of pixels; and determine the mapped first matching pixel in the first structure image.
  • the image processing device 30 is further configured to: acquire the low-resolution disparity map according to the comparison between the distance from the preset first matching grid point to the mapped first matching pixel and the preset first matching distance.
  • the image processing device 30 is further configured to: acquire a number of second matching grid points on the first structure image, each separated from the next by a second preset number of pixels; and determine the mapped second matching pixel in the first structure image.
  • the image processing device 30 is further configured to: acquire the second matching disparity map according to the comparison between the distance from the preset second matching grid point to the mapped second matching pixel and the preset second matching distance.
  • a storage medium is also proposed, on which a computer program is stored; when the computer program is executed by a processor, it implements the steps of the above method.
  • the execution of the steps is not strictly limited in order; the steps may be executed in other orders. Moreover, at least some of the steps may comprise multiple sub-steps or stages, which need not be completed at the same moment and may be executed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.


Abstract

The present invention discloses a three-dimensional model building method, an endoscope, and a storage medium. The three-dimensional model building method comprises: acquiring a plurality of original structure images obtained by projecting different structured light patterns onto a region to be measured, and an original image obtained without projection onto the region to be measured; calculating a precision disparity map from each original structure image using a preset stereo matching algorithm; and obtaining a three-dimensional scene model of the region to be measured from the precision disparity map and the original image according to a first preset rule.

Description

三维模型建立方法、内窥镜及存储介质
本申请要求2022年3月1日申请的申请号为202210196082.6的中国专利申请的优先权,在此将其全文引入作为参考。
技术领域
本发明涉及医疗器械技术领域,尤其涉及一种三维模型建立方法、内窥镜及存储介质。
背景技术
在内窥镜辅助下的腹腔微创手术的三维效果,一般需要医生通过佩戴3D眼睛,边观看3D显示屏边手术;或,双目内窥镜拍摄两路腹腔图像,图像分别显示在左右两个显示器中,通过左右显示器内容分别进入医生左右眼,从而实现立体效果。
传统采用双目立体匹配算法需要匹配左右相机视图中的每一个特征点,匹配特征计算时间长,达不到手术过程中的实时性要求;并且在双目立体匹配算法寻找特征点过程中,需要丰富的纹理图像,但腹腔内组织器官并不具有强烈的纹理信息,三维立体精度不高。
发明内容
本申请的第一方面提出一种三维模型建立方法,用于建立待测区域的三维场景模型,所述方法包括:
获取不同结构光图案投影于所述待测区域得到的多个原始结构图像,以及未投影于所述待测区域得到的原始图像;
根据各所述原始结构图像采用预设立体匹配算法计算得到精度视差图;
根据所述精度视差图和所述原始图像采用第一预设规则获取所述待测区域的三维场景模型。
于上述实施例中提供的三维模型建立方法中,获取不同结构光图案投影得到的多个原始结构图像,以增加待测区域的表面纹理,和未投影得到的原始图像;其中,不同结构光图案对应的原始结构图像不同;根据各原始结构图像采用预设立体匹配算法计算得到精度视差图,对精度视差图和原始图像采用第一预设规则获取三维场景模型,保证三维立体精度的情况下,加快立体匹配速度,达到实时效果。
本申请的第二方面提出一种内窥镜,包括:
投影模组,用于将不同结构光图案投影于待测区域;
成像模组,用于获取不同结构光图案投影于所述待测区域得到的多个原始结构图像,以及未投影于所述待测区域得到的原始图像;不同结构光图案对应的原始结构图像不同;
图像处理装置,与所述成像模组及所述投影模组均连接,被配置为:
根据各所述原始结构图像采用预设立体匹配算法计算得到精度视差图;
根据所述精度视差图和所述原始图像采用第一预设规则获取所述待测区域的三维场景模型。
本申请的第三方面提出一种存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现如上述的方法的步骤。
上述说明仅是本发明技术方案的概述,为了能够更清楚了解本发明的技术手段,并可依 照说明书的内容予以实施,以下以本发明的较佳实施例并配合附图详细说明如后。
附图说明
为了更清楚地说明本申请实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他实施例的附图。
图1为本申请一实施例中提供的一种三维模型建立方法的流程示意图;
图2为本申请一实施例中提供的一种获取结构光图案的流程示意图;
图3为本申请一实施例中提供的一种两种不同结构光图案的示意图;
图4为本申请一实施例中提供的一种原始图像和原始结构图像的示意图;
图5为本申请一实施例中提供的一种获取精度视差图的流程示意图;
图6为本申请一实施例中提供的一种去畸变和立体校正处理的原理示意图;
图7为本申请另一实施例中提供的一种获取精度视差图的流程示意图;
图8为本申请一实施例中提供的一种第一匹配算法的流程示意图;
图9为本申请一实施例中提供的一种第二匹配算法的流程示意图;
图10为本申请一实施例中提供的一种精度视差图的示意图;
图11为本申请一实施例中提供的一种三维模型建立方法的部分流程示意图;
图12为本申请一实施例中提供的一种三维点云图的示意图;
图13为本申请一实施例中提供的一种三维场景模型的示意图;
图14为本申请一实施例中提供的一种内窥镜的结构示意图。
具体实施方式
为了便于理解本申请,下面将参照相关附图对本申请进行更全面的描述。附图中给出了本申请的较佳的实施例。但是,本申请可以以许多不同的形式来实现,并不限于本文所描述的实施例。相反地,提供这些实施例的目的是使对本申请的公开内容的理解更加透彻全面。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中在本申请的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本申请。
在使用本文中描述的“包括”、“具有”、和“包含”的情况下,除非使用了明确的限定用语,例如“仅”、“由……组成”等,否则还可以添加另一部件。除非相反地提及,否则单数形式的术语可以包括复数形式,并不能理解为其数量为一个。
应当理解,尽管本文可以使用术语“第一”、“第二”等来描述各种元件,但是这些元件不应受这些术语的限制。这些术语仅用于将一个元件和另一个元件区分开。例如,在不脱离本申请的范围的情况下,第一元件可以被称为第二元件,并且类似地,第二元件可以被称为第一元件。
在本申请中,除非另有明确的规定和限定,术语“相连”、“连接”等术语应做广义理解,例如,可以是直接相连,也可以通过中间媒介间接相连,可以是两个元件内部的连通或两个元件的相互作用关系。对于本领域的普通技术人员而言,可以根据具体情况理解上述术语在本申请中的具体含义。
为了说明本申请上述的技术方案,下面通过具体实施例来进行说明。
使用双目立体匹配算法,如全局立体匹配算法图割(Graph Cut)、信念传播(Belief Propagation)等,需建立全局能量函数,通过最小化全局能量函数得到最优视差值,但对内存占用量大,处理速度特别慢;如局部立体匹配算法(Sum of absolute differences,SAD)等,通过能量最小化方法进行视差估计,每个像素并行计算,可以实时,但是局部范围的求解会带来匹配效果较差;半全局立体匹配算法SGM(Semi-global Matching)依旧采用全局框架,但是在计算能量函数最小化的步骤时使用高效率的一维路径聚合方法来代替全局算法中的二维最小化算法,使用一维最优来近似二维最优,可以得到较好的视差图,算法效率也有较好的提升,但是仍需要一两秒的时间来完成视差计算。
因此,本申请提出一种三维模型建立方法、内窥镜及存储介质,建立内窥镜照射的待测区域的三维场景模型,保证三维立体精度的情况下,加快立体匹配速度,达到实时效果。
在本申请的一个实施例中提供的一种三维模型建立方法中,三维模型建立方法用于建立内窥镜照射的待测区域的三维场景模型,如图1所示,包括:
步骤S10:获取不同结构光图案投影于待测区域得到的多个原始结构图像,以及未投影于待测区域得到的原始图像;
步骤S20:根据各原始结构图像采用预设立体匹配算法计算得到精度视差图;
步骤S30:根据精度视差图和原始图像采用第一预设规则获取待测区域的三维场景模型。
于上述实施例中提供的三维模型建立方法中,获取不同结构光图案投影得到的多个原始结构图像,以增加待测区域的表面纹理,和未投影得到的原始图像;其中,不同结构光图案对应的原始结构图像不同;根据各原始结构图像采用预设立体匹配算法计算得到精度视差图,对精度视差图和原始图像采用第一预设规则获取三维场景模型,保证三维立体精度的情况下,加快立体匹配速度,达到实时效果。
作为示例,不同结构光图案对应的原始结构图像不同;结构光图案包括散斑光图案,结构投影包括散斑投影。
作为示例,待测区域为内窥镜的照射区域,原始结构图像包括第一原始结构图像和第二原始结构图像,第一原始结构图像和第二原始结构图像具有不同的结构光图案,便于在进行第一预设算法和第二预设算法过程中,相互对应,生成高精度的三维场景模型。
在一个实施例中,步骤S10:获取不同结构光图案投影于待测区域得到的多个原始结构图像,以及未投影于待测区域得到的原始图像之前,还包括:
步骤S101:获取结构光图案,结构光图案包括第一结构光图案和与第一结构光图案的结构图案互不相同的第二结构光图案。
具体的,如图2所示,步骤S101:获取结构光图案,包括:
步骤S1011:获取结构画布,结构画布包括若干个网格,每一网格内均设置有斑点;网格与斑点一一对应;
步骤S1012:获取各斑点的随机编码;
步骤S1013:根据随机编码按照第二预设规则获取每一网格内斑点的编码,获取斑点图样;
步骤S1014:根据斑点图样内斑点的编码的判断结果,获取结构光图案。
作为示例,第二预设规则在每一次检测到斑点图样内斑点的编码不唯一时,预设规则重新设定,需确保每一次执行的第二预设规则均不同,以加快每一网格内斑点编码的分配进度, 从而提高三维模型建立的效率。
在一个实施例中,步骤S1014:根据斑点图样内斑点的编码的判断结果,获取结构光图案,包括:
步骤S1014a:判断斑点图样内斑点的编码是否唯一;
步骤S1014b:若是,则将斑点图样确定为结构光图案;
若否,则重新获取斑点图样内斑点的编码。
作为示例,随机编码包括每一个斑点的形状、朝向、大小、颜色、灰度,以及在对应网格内的偏移距离和偏移方向中至少一种。
作为示例,与第一原始结构图像对应的第一结构光图案中各斑点的编码和与第二原始结构图像对应的第二结构光图案中各斑点的编码互不相同;第一结构光图案中所有斑点的编码均不相同,具有全局唯一性;第二结构光图案中所有斑点的编码均不相同,具有全局唯一性。
作为示例,每一网格内具有M*M个像素,M为正整数;网格内的斑点可以占据多个像素。
为了便于理解结构光图案中的编码,图3中左图为第一结构光图案,图3中右图为第二结构光图案,需注意,图3仅作示意,结构光图案可以有无数种;只需确保第一结构光图案中所有斑点的编码与第二结构光图案中所有斑点的编码均不相同。图4中待测区域为腹腔内,图4中左图为未投影到待测区域的原始图像,图4中右图为具有结构光图案的原始结构图像;具有不同编码的结构光图案,可以为待测区域,如阴暗环境的腹腔内的组织器官,增加特征纹理。
在一个实施例中,如图5所示,步骤S20:根据各原始结构图像采用预设立体匹配算法计算得到精度视差图,包括:
步骤S21:对各原始结构图像依次进行去畸变处理和立体校正处理,获取多个结构图像;
步骤S22:根据各原始结构图像采用预设立体匹配算法计算得到精度视差图。
作为示例,如图6所示,对第一原始结构图像和第二原始结构图像均依次进行去畸变和立体校正处理,将两个非共面行对准的第一原始结构图像和第二原始结构图像,校正成共面行对准,即,结构图像包括处于同一平面的第一结构图像和第二结构图像,且预设点P投影于第一结构图像的特征点PI与预设点P投影于第二结构图像的匹配点Pr处于同一行,使得在预设立体匹配算法过程中,只需搜索同一行上的匹配点即可,加快算法的效率。其中,预设点可以是空间内任意一点,且不在第一原始结构图像和第二原始结构图像上。
在一个实施例中,如图7所示,预设立体匹配算法包括第一预设算法和第二预设算法;步骤S22:根据各结构图像采用预设立体匹配算法计算得到精度视差图,包括如下步骤:
步骤S221:根据第一结构图像和第二结构图像,采用第一预设算法计算得到第一匹配视差图;
步骤S222:根据第一结构图像、第二结构图像及第一匹配视差图,采用第二预设算法计算得到精度视差图。
下面详细阐述第一预设算法:
在一个实施例中,步骤S221:根据第一结构图像和第二结构图像,采用第一预设算法计算得到第一匹配视差图,包括:
步骤S2211:根据第一结构图像和第二结构图像,采用第一匹配算法计算得到低分辨率视差图;
步骤S2212:对低分辨率视差图进行第一次优化处理,得到第一匹配视差图。
作为示例,第一匹配算法包括但不仅限于粗匹配算法。
作为示例,低分辨率的大小为第一结构图像的1/N倍,N为整数;第一结构图像与第二结构图像的大小相同。
在一个实施例中,如图8所示的第一匹配算法的流程示意图,步骤S2211:根据第一结构图像和第二结构图像,采用第一匹配算法计算得到低分辨率视差图,包括:
步骤S22111:获取第一结构图像上若干个第一匹配网格点,各第一匹配网格点之间具有相距第一预设数量的像素点;
步骤S22112:获取第一结构图像上的预设第一匹配网格点p及预设第一匹配网格点在第二结构图像上的原始视差搜索范围;
步骤S22113:根据预设能量函数计算原始视差搜索范围内每一像素点的匹配代价值;
步骤S22114:比较原始视差搜索范围内每一像素点的匹配代价值,确定原始视差搜索范围内最小匹配代价值所对应的预设第一匹配像素点q;预设第一匹配像素点q与预设第一匹配网格点p处于同一行;
步骤S22115:根据预设第一匹配像素点q的最小匹配代价值与预设第一匹配代价阈值的比较结果,确定第一结构图像内的映射第一匹配像素点p’。
作为示例,预设能量函数包括预设匹配代价函数。
作为示例,可随机选取预设第一匹配网格点p,原始视差搜索范围较大,在后续寻找最小匹配代价值的过程中,根据每一网格点的视差值逐渐缩小范围。
作为示例,视差搜索范围是指第一结构图像上的A点,第二结构图像上的A’点,A点和A’点对应,在同一行上,以距离A’点相距40个像素,第40个像素为开始搜索点至A’点之间的范围。在第二结构图像上,在确定原始视差搜索范围内最小匹配代价值过程中,从左向右依次计算原始视差搜索范围内每一个像素点的匹配代价值。
作为示例,预设第一匹配像素点q与预设第一匹配网格点p可以处于同一列上,也可以不在同一列上;如,预设第一匹配网格点p位于10*10的网格点上,预设第一匹配像素点q位于10*4的网格点上。
在一个实施例中,预设能量函数为零均值归一化互相关函数CZNCC(x,y,d),CZNCC(x,y,d)表达式如下:
其中,x,y为像素点的坐标,d为视差值,为第n幅第一结构图像在(x+k,y+p)位置上的像素值强度,m为结构图对的数量,k与p的范围为[-l,l],n的范围为[1,m],为第一结构图像以(x,y)为中心点,在预设时间内、待测区域内的像素变化强度,为第一结构图像在预设空间内的平均像素强度,为第二结构图像以(x-d,y)为中心点,在预设时间内、待测区域内的像素变化强度,为第二结构图像在预设空间内的平均像素强度。
作为示例,保持待测区域不变,连续获取多张第一结构图像和多张第二结构图像,如预设时间为3毫秒内,每隔1毫秒依次获取3张第一结构图像和3张第二结构图像,从而结合时间域和空间域的像素信息,计算相关度值C,相关度值越大,最小匹配代价值越小。
需要说明的是,第二结构图像内的原始视差搜索范围内每一个像素点对应的匹配代价值可以并行计算,从而加快计算效率。
具体的,请继续参考图8,步骤S22115:根据预设第一匹配像素点的最小匹配代价值与预设第一匹配代价阈值的比较结果,确定第一结构图像内的映射第一匹配像素点,包括:
步骤S22115a:判断预设第一匹配像素点q的最小匹配代价值是否小于预设第一匹配代价阈值;
步骤S22115b:若否,则将预设第一匹配网格点p确定为视差空洞;
步骤S22115c:若是,则获取预设第一匹配像素点q在第一结构图像上的第一视差搜索范围,并根据预设能量函数计算第一视差搜索范围内每一像素点的匹配代价值,比较第一视差搜索范围内每一像素点的匹配代价值,确定第一视差搜索范围内最小匹配代价值,并将第一视差搜索范围内最小匹配代价值对应的像素点确定为映射第一匹配像素点p’;映射第一匹配像素点p’与预设第一匹配像素点q处于同一行。
作为示例,本申请中视差空洞是指图像上网格点的视差值为预设视差值,预设视差值可以为0。
作为示例,原始视差搜索范围大于第一视差搜索范围;原始视差搜索范围和第一视差搜索范围在同一行上;第一视差搜索范围根据预设第一匹配网格点p的周围网格点的视差值动态缩小。
同上,在第一结构图像上的第一视差搜索范围内的每一个像素点的匹配代价值,可以并行计算,加快计算效率。
在一个实施例中,请继续参考图8,步骤S22116:根据预设第一匹配像素点的最小匹配代价值与预设第一匹配代价阈值的比较结果,确定第一结构图像内的映射第一匹配像素点之后,还包括:
步骤S22116:根据预设第一匹配网格点p与映射第一匹配像素点p’的间距与预设第一匹配间距的比较结果,获取低分辨率视差图。
具体的,步骤S22116:根据预设第一匹配网格点p与映射第一匹配像素点p’的间距与预设第一匹配间距的比较结果,获取低分辨率视差图,包括:
步骤S22116a:判断预设第一匹配网格点p与映射第一匹配像素点间p’的距离是否小于预设第一匹配间距;
步骤S22116b:若大于或等于预设第一匹配间距,则将预设第一匹配网格点确定为视差空洞;
步骤S22116c:若小于预设第一匹配间距,则根据预设第一匹配网格点p的水平坐标Xp和预设第一匹配像素点的水平坐标Xq,计算预设第一匹配网格点p的视差值,并重新获取第一结构图像上的第一匹配网格点,计算得到第一结构图像上各第一匹配网格点的视差值,得到低分辨率视差图。
作为示例,第一结构图像上的所有第一匹配网格点的视差值均计算完毕,所有的第一匹配网格点组合形成低分辨率视差图。
作为示例,计算预设第一匹配网格点p的水平坐标Xp和预设第一匹配像素点的水平坐 标Xq的差值,得到预设第一匹配网格点p的视差值d=Xp-Xq。
作为示例,若预设第一匹配网格点p与映射第一匹配像素点间p’小于预设第一匹配间距,则认定第一结构图像的预设第一匹配网格点p和第二结构图像的预设第一匹配像素点q的视差值一致,p点和q点对应。
在一个实施例中,低分辨率视差图中视差值为0的网格点为视差空洞;步骤S2212:对低分辨率视差图进行第一次优化处理,获取第一匹配视差图,包括:
步骤S22121:移除离散网格点;
步骤S22122:根据双线性插值方法填充视差空洞和离散网格点所处的位置并采样,得到第一匹配视差图。
作为示例,双线性插值方法填充视差空洞后,采样填充得到的视差图,将填充后的视差图的尺寸恢复至第一结构图像或第二结构图像的尺寸,从而得到第一匹配视差图。
需注意,第二预设算法中预设能量函数的计算方式与第一预设算法中预设能量函数的计算方式相同,下面不再赘述;下面详细阐述第二预设算法:
在一个实施例中,步骤S222:根据第一结构图像、第二结构图像及第一匹配视差图,采用第二预设算法计算得到精度视差图,包括:
步骤S2221:根据第一结构图像、第二结构图像及第一匹配视差图,采用第二匹配算法计算得到第二匹配视差图;
步骤S2222:对第二匹配视差图进行第二次优化处理,得到精度视差图。
作为示例,第二匹配算法包括但不仅限于细匹配算法。
具体的,如图9所示,步骤S2221:根据第一结构图像、第二结构图像及第一匹配视差图,采用第二匹配算法计算得到第二匹配视差图,包括:
步骤S22211:获取第一结构图像上若干个第二匹配网格点,各第二匹配网格点之间具有相距第二预设数量的像素点;
步骤S22212:获取第一结构图像上的预设第二匹配网格点并根据第一匹配视差图获取预设第二匹配网格点在第二结构图像上的第二视差搜索范围;
步骤S22213:根据预设能量函数计算第二视差搜索范围内每一像素点的匹配代价值;
步骤S22214:比较第二视差搜索范围内每一像素点的匹配代价值,确定第二视差搜索范围内最小匹配代价值,并将第二视差搜索范围内最小匹配代价值对应的像素点确定为预设第二匹配像素点预设第二匹配像素点与预设第二匹配网格点处于同一行;
步骤S22215:根据预设第二匹配像素点的最小匹配代价值与预设第二匹配代价阈值的比较结果,确定第一结构图像内的映射第二匹配像素点
在一个实施例中,第二视差搜索范围的表达式为:
其中,xq为预设第二匹配像素点q的坐标,为预设窄带搜索半径,disp(q)为预设第二匹配网格点的视差值。
作为示例,预设窄带搜索半径根据实际需求设定,本申请不作限定。
在一个实施例中,步骤S22215:根据预设第二匹配像素点的最小匹配代价值与预设第二匹配代价阈值的比较结果,确定第一结构图像内的映射第二匹配像素点包括:
步骤S22215a:判断预设第二匹配像素点的最小匹配代价值是否小于预设第二匹配代 价阈值;
步骤S22215b:若否,则将预设第二匹配网格点确定为视差空洞;
步骤S22215c:若是,则获取预设第二匹配像素点在第一结构图像上的第三视差搜索范围,并根据预设能量函数计算第三视差搜索范围内每一像素点的匹配代价值,比较第三视差搜索范围内每一像素点的匹配代价值,确定第三视差搜索范围内最小匹配代价值,并将第三视差搜索范围内最小匹配代价值对应的像素点确定为映射第二匹配像素点映射第二匹配像素点与预设第二匹配像素点处于同一行。
在一个实施例中,请继续参考图9,步骤S22215:根据预设第二匹配像素点的最小匹配代价值与预设第二匹配代价阈值的比较结果,确定第一结构图像内的映射第二匹配像素点之后,还包括:
步骤S22216:根据预设第二匹配网格点与映射第二匹配像素点的间距与预设第二匹配间距的比较结果,获取第二匹配视差图。
具体的,请继续参考图9,步骤S22216:根据预设第二匹配网格点与映射第二匹配像素点的间距与预设第二匹配间距的比较结果,获取第二匹配视差图,包括:
步骤S22216a:判断预设第二匹配网格点与映射第二匹配像素点间的距离是否小于预设第二匹配间距;
步骤S22216b:若大于或等于预设第二匹配间距,则将预设网格点确定为视差空洞;
步骤S22216c:若小于预设第二匹配间距,则根据预设第二匹配网格点的水平坐标和预设第二匹配像素点的水平坐标,计算预设第二匹配网格点的视差值,并重新获取第一结构图像上的第二匹配网格点,计算得到第一结构图像上各第二匹配网格点的视差值,得到第二匹配视差图。
作为示例,第一结构图像上的所有第二匹配网格点的视差值均计算完毕,所有的第二匹配网格点组合形成第二匹配视差图。
作为示例,计算预设第二匹配网格点的水平坐标和预设第二匹配像素点的水平坐标的差值,得到预设第二匹配网格点的视差值
在一个实施例中,步骤S2222:对第二匹配视差图进行第二次优化处理,获取精度视差图,包括:
步骤S22221:移除离散网格点;
步骤S22222:根据双线性插值方法填充视差空洞和离散网格点所处的位置;
步骤S22223:根据子像素优化技术和局部倾斜平面模型,获取精度视差图,如图10所示。
作为示例,第二匹配视差图中视差值为0的网格点为视差空洞。
作为示例,采用子像素优化技术,使用二次曲线内插的方法获得子像素,原理是该视差点及其周围两个视差点按照3点确定一条二次曲线的原理,得到过这3点的抛物线,该抛物线的顶点就是最优子像素。
作为示例,局部倾斜平面模型平滑视差图,每个局部倾斜平面用最小二乘拟合,由于每个局部区域没有相关性,所以每个局部倾斜平面的最小二乘拟合可以用并行计算的方式提高效率。
在一个实施例中,如图11所示,步骤S30:根据精度视差图和原始图像采用第一预设规则获取待测区域的三维场景模型,包括:
步骤S31:获取内窥镜的相机标定参数;
步骤S32:根据相机标定参数,对精度视差图公式转换,获取三维点云图,如图12所示;
步骤S33:对三维点云图进行预处理,获取三维曲面模型;
步骤S34:对三维曲面模型和原始图像进行点云纹理贴图处理,获取三维场景模型,如图13所示。
作为示例,图13中的左图为手术器械悬浮在组织上空,准备夹持或剪切病变位置的三维场景模型,图13中的右图为体外模拟场景的三维俯视图。
具体的,相机标定参数为内窥镜出厂前的标定参数,标定参数包括相机内参数矩阵、畸变系数矩阵、本征矩阵、基础矩阵、旋转矩阵及平移矩阵。相机标定参数的获取步骤如下:
步骤S311:获取格子板图案;
步骤S312:采集格子板的多个标定姿态;
步骤S313:寻找格子板角点,采用矩阵计算,获取相机标定参数。
作为示例,内窥镜的工作距离在距离镜头3cm-12.5cm处,所以设计大小3mm的黑白方格,宽为12个方格,长为9个方格,在镜头前4cm-12cm间标定。该距离为内窥镜的工作距离且成像最为清晰;在标定过程中旋转、倾斜、前后移动标定板,使得标定板以不同姿态出现在相机各视野处,大概拍摄15张图像。
具体的,根据相机标定参数,并结合平行双目视觉几何关系,将精度视差图,转换为空间xyz坐标信息,可以以热力图的形式可视化精度视差图,颜色越蓝,精度视差图的视差值越小,颜色越黄,精度视差图的视差值越大。
作为示例,视差的单位为像素,视差值在图像上的横坐标记为X,视差值在图像上的纵坐标记为Y,深度信息记为Z,单位为毫米。X=(x-cx)*Z/f,Y=(y-cy)*Z/f,Z=(f*baseline)/disp;其中,baseline为两个相机光心之间的距离,也是基线距离;disp为视差值;f表示归一化的焦距;cx为相机光心在图像上的横坐标;cy为相机光心在图像上的纵坐标;
作为示例,预处理包括点云去噪处理及点云网格化处理;点云去噪,先对点云下采样进行数据压缩,提高算法效率,然后对每个点的邻域进行统计分析,剔除不符合一定标准的邻域点。点云网格化,使用贪心三角化法对点云进行三角化,即先将点云通过法线投影到二维坐标平面,再对投影得到的点云做平面内的三角化,得到各点的拓扑连接关系,最后根据平面内投影点的拓扑连接关系确定各原始三维点间的拓扑连接,所得三角网格即为重建得到的三维曲面模型。
作为示例,点云纹理贴图处理,将原始图像的所有颜色信息存储于纹理图上,显示时根据三维曲面模型上每个网格的纹理坐标和纹理图进行渲染,最终得到三维场景模型。
在本申请的一个实施例中,如图14所示,还提出一种内窥镜,执行如上述的三维模型建立方法,包括:投影模组10、成像模组20、图像处理装置30。投影模组10用于将不同结构光图案投影于待测区域;成像模组20用于获取不同结构光图案投影于待测区域得到的多个原始结构图像,以及未投影于待测区域得到的原始图像;图像处理装置30,与成像模组20及投影模组10均连接,被配置为:根据各原始结构图像采用预设立体匹配算法计算得到精度视差图;根据精度视差图和原始图像采用第一预设规则获取待测区域的三维场景模型。
于上述实施例中提供的内窥镜中,获取不同结构光图案投影得到的多个原始结构图像,以增加待测区域的表面纹理,和未投影得到的原始图像;其中,不同结构光图案对应的原始结构图像不同;根据各原始结构图像采用预设立体匹配算法计算得到精度视差图,对精度视 差图和原始图像采用第一预设规则获取三维场景模型,保证三维立体精度的情况下,加快立体匹配速度,达到实时效果;并且,将微小投影模组嵌入于内窥镜中,不增加现有内窥镜体积,结构简单,装备便捷。
在一个实施例中,请继续参考图14,内窥镜还包括:照明模组、摄像模组、冷光源60及控制电路70。冷光源60用于经由导光束向投影模组10提供冷光源照明;控制电路70被配置为:第一端与投影模组10及成像模组20均连接,第二端与图像处理装置30连接,用于切换冷光源照明模式和结构光图案投影模式,控制投影频率,相机成像频率,设备小巧可实现复杂控制。
作为示例,照明模组包括于内窥镜的前端面呈对称分布的第一照明灯41和第二照明灯42;摄像模组包括第一摄像头51和第二摄像头52;其中,第一摄像头51和第二摄像头52于内窥镜的前端面呈对称分布,且均位于第一照明灯41与第二照明灯42之间。
作为示例,第一照明灯41和第二照明灯42发射可见光。
在一个实施例中,图像处理装置30还被配置为:
对各原始结构图像依次进行去畸变处理和立体校正处理,获取多个结构图像;结构图像包括处于同一平面的第一结构图像和第二结构图像;
根据各结构图像采用预设立体匹配算法计算得到精度视差图。
在一个实施例中,预设立体匹配算法包括第一预设算法和第二预设算法;图像处理装置30还被配置为:根据第一结构图像和第二结构图像,采用第一预设算法计算得到第一匹配视差图;根据第一结构图像、第二结构图像及第一匹配视差图,采用第二预设算法计算得到精度视差图。
具体的,图像处理装置30还被配置为:根据第一结构图像和第二结构图像,采用第一匹配算法计算得到低分辨率视差图;对低分辨率视差图进行第一次优化处理,得到第一匹配视差图。
在一个实施例中,图像处理装置30还被配置为:获取第一结构图像上若干个第一匹配网格点,各第一匹配网格点之间具有相距第一预设数量的像素点;
获取第一结构图像上的预设第一匹配网格点,并获取预设第一匹配网格点在第二结构图像上的原始视差搜索范围;
根据预设能量函数计算原始视差搜索范围内每一像素点的匹配代价值;
比较原始视差搜索范围内每一像素点的匹配代价值,确定原始视差搜索范围内最小匹配代价值,并将原始视差搜索范围内最小匹配代价值对应的像素点确定为预设第一匹配像素点;预设第一匹配像素点与预设第一匹配网格点处于同一行;
根据预设第一匹配像素点的最小匹配代价值与预设第一匹配代价阈值的比较结果,确定第一结构图像内的映射第一匹配像素点。
在一个实施例中,图像处理装置30还被配置为:根据预设第一匹配网格点与映射第一匹配像素点的间距与预设第一匹配间距的比较结果,获取低分辨率视差图。
在一个实施例中,图像处理装置30还被配置为:获取第一结构图像上若干个第二匹配网格点,各第二匹配网格点之间具有相距第二预设数量的像素点;
获取第一结构图像上的预设第二匹配网格点,并根据第一匹配视差图获取预设第二匹配网格点在第二结构图像上的第二视差搜索范围;
根据预设能量函数计算第二视差搜索范围内每一像素点的匹配代价值;
比较第二视差搜索范围内每一像素点的匹配代价值,确定第二视差搜索范围内最小匹配代价值,并将第二视差搜索范围内最小匹配代价对应的像素点确定为预设第二匹配像素点;预设第二匹配像素点与预设第二匹配网格点处于同一行;
根据预设第二匹配像素点的最小匹配代价值与预设第二匹配代价阈值的比较结果,确定第一结构图像内的映射第二匹配像素点。
在一个实施例中,图像处理装置30还被配置为:根据预设第二匹配网格点与映射第二匹配像素点的间距与预设第二匹配间距的比较结果,获取第二匹配视差图。
在本申请的一个实施例中,还提出一种存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现如上述的方法的步骤。
关于上述实施例中的三维模型建立方法的具体限定可以参见上文中对于三维模型建立方法的限定,在此不再赘述。
应该理解的是,除非本文中有明确的说明,所述的步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,所述的步骤的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
本说明书中的各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似的部分互相参见即可。
以上所述实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本发明的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本发明构思的前提下,还可以做出若干变形和改进,这些都属于本发明的保护范围。因此,本发明专利的保护范围应以所附权利要求为准。

Claims (13)

  1. 一种三维模型建立方法,用于建立待测区域的三维场景模型,所述方法包括:
    获取不同结构光图案投影于所述待测区域得到的多个原始结构图像,以及未投影于所述待测区域得到的原始图像;
    根据各所述原始结构图像采用预设立体匹配算法计算得到精度视差图;
    根据所述精度视差图和所述原始图像采用第一预设规则获取所述待测区域的三维场景模型。
  2. 根据权利要求1所述的方法,其中,所述根据各所述原始结构图像采用预设立体匹配算法计算得到精度视差图,包括:对各所述原始结构图像依次进行去畸变处理和立体校正处理,获取多个结构图像;根据各所述结构图像采用所述预设立体匹配算法计算得到所述精度视差图;
    所述结构图像包括处于同一平面的第一结构图像和第二结构图像,且预设点投影于所述第一结构图像的特征点与所述预设点投影于所述第二结构图像的匹配点处于同一行;
    所述预设立体匹配算法包括第一预设算法和第二预设算法;
    所述根据各所述结构图像采用所述预设立体匹配算法计算得到所述精度视差图,包括:根据所述第一结构图像和所述第二结构图像,采用所述第一预设算法计算得到第一匹配视差图;根据所述第一结构图像、所述第二结构图像及所述第一匹配视差图,采用所述第二预设算法计算得到所述精度视差图。
  3. 根据权利要求2所述的方法,其中,所述根据所述第一结构图像和所述第二结构图像,采用第一预设算法计算得到第一匹配视差图,包括:
    根据所述第一结构图像和所述第二结构图像,采用所述第一匹配算法计算得到低分辨率视差图;
    对所述低分辨率视差图进行第一次优化处理,得到所述第一匹配视差图,包括:
    移除离散网格点;
    根据双线性插值方法填充所述视差空洞和所述离散网格点所处的网格点位置并采样,得到所述第一匹配视差图。
  4. 根据权利要求3所述的方法,其中,所述根据所述第一结构图像和所述第二结构图像,采用所述第一匹配算法计算得到低分辨率视差图,包括:
    获取所述第一结构图像上若干个第一匹配网格点,各所述第一匹配网格点之间具有相距第一预设数量的像素点;
    获取所述第一结构图像上的预设第一匹配网格点,及所述预设第一匹配网格点在所述第二结构图像上的原始视差搜索范围;
    根据预设能量函数计算所述原始视差搜索范围内每一像素点的匹配代价值;
    比较所述原始视差搜索范围内每一像素点的匹配代价值,确定所述原始视差搜索范围内最小匹配代价值,并将所述原始视差搜索范围内最小匹配代价值对应的像素点确定为预设第一匹配像素点;所述预设第一匹配像素点与所述预设第一匹配网格点处于同一行;
    根据所述预设第一匹配像素点的最小匹配代价值与预设第一匹配代价阈值的比较结果,确定所述第一结构图像内的映射第一匹配像素点,包括:
    判断所述预设第一匹配像素点的最小匹配代价值是否小于预设第一匹配代价阈值;
    若否,则将所述预设第一匹配网格点确定为视差空洞;
    若是,则获取所述预设第一匹配像素点在所述第一结构图像上的第一视差搜索范围,并根据所述预设能量函数计算所述第一视差搜索范围内每一像素点的匹配代价值,比较所述第一视差搜索范围内每一像素点的匹配代价值,确定所述第一视差搜索范围内最小匹配代价值,并将所述第一视差搜索范围内最小匹配代价值对应的像素点确定为映射第一匹配像素点;所述映射第一匹配像素点与所述预设第一匹配像素点处于同一行。
  5. 根据权利要求4所述的方法,其中,所述确定所述第一结构图像内的映射第一匹配像素点之后,还包括:
    根据所述预设第一匹配网格点与所述映射第一匹配像素点的间距与预设第一匹配间距的比较结果,获取所述低分辨率视差图,所述获取所述低分辨率视差图包括:
    判断所述预设第一匹配网格点与所述映射第一匹配像素点间的距离是否小于预设第一匹配间距;
    若大于或等于所述预设第一匹配间距,则将所述预设第一匹配网格点确定为视差空洞;
    若小于所述预设第一匹配间距,则根据所述预设第一匹配网格点的水平坐标和所述预设第一匹配像素点的水平坐标,计算所述预设第一匹配网格点的视差值,并重新获取所述第一结构图像上的第一匹配网格点,计算得到所述第一结构图像上各所述第一匹配网格点的视差值,得到所述低分辨率视差图。
  6. 根据权利要求2所述的方法,其中,所述根据所述第一结构图像、所述第二结构图像及所述第一匹配视差图,采用第二预设算法计算得到所述精度视差图,包括:
    根据所述第一结构图像、所述第二结构图像及所述第一匹配视差图,采用所述第二匹配算法计算得到第二匹配视差图;
    对所述第二匹配视差图进行第二次优化处理,得到所述精度视差图,包括:
    移除离散网格点;
    根据双线性插值方法填充所述视差空洞和所述离散网格点所处的网格点位置;
    根据子像素优化技术和局部倾斜平面模型,获取所述精度视差图。
  7. 根据权利要求6所述的方法,其中,所述根据所述第一结构图像、所述第二结构图像及所述第一匹配视差图,采用所述第二匹配算法计算得到第二匹配视差图,包括:
    获取所述第一结构图像上若干个第二匹配网格点,各所述第二匹配网格点之间具有相距第二预设数量的像素点;
    获取所述第一结构图像上的预设第二匹配网格点,并根据所述第一匹配视差图获取所述预设第二匹配网格点在所述第二结构图像上的第二视差搜索范围;
    根据预设能量函数计算所述第二视差搜索范围内每一像素点的匹配代价值;
    比较所述第二视差搜索范围内每一像素点的匹配代价值,确定所述第二视差搜索范围内最小匹配代价值,并将所述第二视差搜索范围内最小匹配代价对应的像素点确定为预设第二匹配像素点;所述预设第二匹配像素点与所述预设第二匹配网格点处于同一行;
    根据所述预设第二匹配像素点的最小匹配代价值与预设第二匹配代价阈值的比较结果,确定所述第一结构图像内的映射第二匹配像素点,包括:
    判断所述预设第二匹配像素点的最小匹配代价值是否小于预设第二匹配代价阈值;
    若否,则将所述预设第二匹配网格点确定为视差空洞;
    若是,则获取所述预设第二匹配像素点在所述第一结构图像上的第三视差搜索范围,并 根据所述预设能量函数计算所述第三视差搜索范围内每一像素点的匹配代价值,比较所述第三视差搜索范围内每一像素点的匹配代价值,确定所述第三视差搜索范围内最小匹配代价值,并将所述第三视差搜索范围内最小匹配代价值对应的像素点确定为映射第二匹配像素点;所述映射第二匹配像素点与所述预设第二匹配像素点处于同一行。
  8. 根据权利要求7所述的方法,其中,确定所述第一结构图像内的映射第二匹配像素点之后,还包括:
    根据所述预设第二匹配网格点与所述映射第二匹配像素点的间距与预设第二匹配间距的比较结果,获取所述第二匹配视差图,包括:
    判断所述预设第二匹配网格点与所述映射第二匹配像素点间的距离是否小于预设第二匹配间距;
    若大于或等于所述预设第二匹配间距,则将所述预设网格点确定为视差空洞;
    若小于所述预设第二匹配间距,则根据所述预设第二匹配网格点的水平坐标和所述预设第二匹配像素点的水平坐标,计算所述预设第二匹配网格点的视差值,并重新获取所述第一结构图像上的第二匹配网格点,计算得到所述第一结构图像上各所述第二匹配网格点的视差值,得到所述第二匹配视差图。
  9. 根据权利要求1或2所述的方法,其中,所述获取不同结构光图案投影于所述待测区域得到的多个原始结构图像,以及未投影于所述待测区域得到的原始图像之前,还包括:
    获取所述结构光图案,所述结构光图案包括第一结构光图案和与所述第一结构光图案的结构图案互不相同的第二结构光图案。
  10. 根据权利要求9所述的方法,其中,所述获取所述结构光图案包括:
    获取结构画布,所述结构画布包括若干个网格,每一所述网格内均设置有斑点;所述网格与所述斑点一一对应;
    获取各所述斑点的随机编码;
    根据所述随机编码按照第二预设规则获取每一网格内所述斑点的编码,获取斑点图样;
    根据所述斑点图样内所述斑点的编码的判断结果,获取所述结构光图案,包括:
    判断所述斑点图样内斑点的编码是否唯一;
    若是,则将所述斑点图样确定为所述结构光图案;
    若否,则重新获取所述斑点图样内斑点的编码。
  11. 一种内窥镜,包括:
    投影模组,用于将不同结构光图案投影于待测区域;
    成像模组,用于获取不同结构光图案投影于所述待测区域得到的多个原始结构图像,以及未投影于所述待测区域得到的原始图像;不同结构光图案对应的原始结构图像不同;
    图像处理装置,与所述成像模组及所述投影模组均连接,被配置为:
    根据各所述原始结构图像采用预设立体匹配算法计算得到精度视差图;
    根据所述精度视差图和所述原始图像采用第一预设规则获取所述待测区域的三维场景模型。
  12. 根据权利要求11所述的内窥镜,还包括:
    照明模组;
    摄像模组;
    冷光源,用于向所述投影模组提供冷光源照明;
    控制电路,被配置为:第一端与所述投影模组及所述成像模组均连接,第二端与所述图像处理装置连接,用于切换冷光源照明模式和结构光图案投影模式。
  13. 一种存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求1至10任一项所述的方法的步骤。
PCT/CN2023/078598 2022-03-01 2023-02-28 三维模型建立方法、内窥镜及存储介质 WO2023165451A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210196082.6 2022-03-01
CN202210196082.6A CN114565739A (zh) 2022-03-01 2022-03-01 三维模型建立方法、内窥镜及存储介质

Publications (1)

Publication Number Publication Date
WO2023165451A1 true WO2023165451A1 (zh) 2023-09-07

Family

ID=81714887

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/078598 WO2023165451A1 (zh) 2022-03-01 2023-02-28 三维模型建立方法、内窥镜及存储介质

Country Status (2)

Country Link
CN (1) CN114565739A (zh)
WO (1) WO2023165451A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114565739A (zh) * 2022-03-01 2022-05-31 上海微创医疗机器人(集团)股份有限公司 三维模型建立方法、内窥镜及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190387210A1 (en) * 2017-02-28 2019-12-19 Peking University Shenzhen Graduate School Method, Apparatus, and Device for Synthesizing Virtual Viewpoint Images
CN111508068A (zh) * 2020-04-20 2020-08-07 华中科技大学 一种应用于双目内窥镜图像的三维重建方法及系统
CN112741689A (zh) * 2020-12-18 2021-05-04 上海卓昕医疗科技有限公司 应用光扫描部件来实现导航的方法及系统
CN114066950A (zh) * 2021-10-27 2022-02-18 北京的卢深视科技有限公司 单目散斑结构光图像匹配方法、电子设备及存储介质
CN114565739A (zh) * 2022-03-01 2022-05-31 上海微创医疗机器人(集团)股份有限公司 三维模型建立方法、内窥镜及存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6503195B1 (en) * 1999-05-24 2003-01-07 University Of North Carolina At Chapel Hill Methods and systems for real-time structured light depth extraction and endoscope using real-time structured light depth extraction
CN106802138B (zh) * 2017-02-24 2019-09-24 先临三维科技股份有限公司 一种三维扫描系统及其扫描方法
CN108961383B (zh) * 2017-05-19 2021-12-14 杭州海康威视数字技术股份有限公司 三维重建方法及装置
CN107945268B (zh) * 2017-12-15 2019-11-29 深圳大学 一种基于二元面结构光的高精度三维重建方法及系统
CN111145238B (zh) * 2019-12-12 2023-09-22 中国科学院深圳先进技术研究院 单目内窥镜图像的三维重建方法、装置及终端设备
CN112819777B (zh) * 2021-01-28 2022-12-27 重庆西山科技股份有限公司 一种双目内窥镜辅助显示方法、系统、装置和存储介质


Also Published As

Publication number Publication date
CN114565739A (zh) 2022-05-31

Similar Documents

Publication Publication Date Title
US11612307B2 (en) Light field capture and rendering for head-mounted displays
ES2394046T3 (es) Dispositivo de dibujo y método de dibujo
CN104335005B (zh) 3d扫描以及定位系统
ES2400277B1 (es) Técnicas para reconstrucción estéreo rápida a partir de imágenes
TWI520576B (zh) 將二維影像轉換爲三維影像的方法與系統及電腦可讀媒體
US20160295194A1 (en) Stereoscopic vision system generatng stereoscopic images with a monoscopic endoscope and an external adapter lens and method using the same to generate stereoscopic images
CN109903377B (zh) 一种无需相位展开的三维人脸建模方法及系统
US20170035268A1 (en) Stereo display system and method for endoscope using shape-from-shading algorithm
TW201319722A (zh) 從單鏡頭影像呈現立體影像的方法與影像擷取系統
WO2023165451A1 (zh) 三维模型建立方法、内窥镜及存储介质
CN111508068B (zh) 一种应用于双目内窥镜图像的三维重建方法及系统
CN105496556B (zh) 一种用于手术导航的高精度光学定位系统
US8902305B2 (en) System and method for managing face data
WO2018032841A1 (zh) 绘制三维图像的方法及其设备、系统
US20230316639A1 (en) Systems and methods for enhancing medical images
CN109461206A (zh) 一种多目立体视觉的人脸三维重建装置及方法
Mahdy et al. Projector calibration using passive stereo and triangulation
US20240175677A1 (en) Measuring system providing shape from shading
EP3130273B1 (en) Stereoscopic visualization system and method for endoscope using shape-from-shading algorithm
Zhou et al. Circular generalized cylinder fitting for 3D reconstruction in endoscopic imaging based on MRF
CN112686865A (zh) 一种3d视图辅助检测方法、系统、装置及存储介质
CN208319312U (zh) 一种面向膝盖软骨移植术的术中器具位姿导航装置
TW201509360A (zh) 單鏡頭內視鏡立體視覺化系統及其方法
JP2001118074A (ja) 3次元画像作成方法、3次元画像作成装置及びプログラム記録媒体
TWI538651B (zh) Stereo visualization system and method of endoscopy using chromaticity forming method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23762850

Country of ref document: EP

Kind code of ref document: A1