WO2023109960A1 - Three-dimensional scanning processing method and apparatus, and three-dimensional scanning device - Google Patents
Three-dimensional scanning processing method and apparatus, and three-dimensional scanning device
- Publication number
- WO2023109960A1 (international application PCT/CN2022/139716, CN2022139716W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- matching
- dimensional
- point
- frame
- light plane
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T7/00—Image analysis; G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration; G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
- G06T7/70—Determining position or orientation of objects or cameras; G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T2200/00—Indexing scheme for image data processing or generation, in general; G06T2200/08—Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation
Definitions
- the present disclosure relates to the technical field of three-dimensional scanning, and in particular, to a processing method and device for three-dimensional scanning, and three-dimensional scanning equipment.
- 3D scanning technology is increasingly widely used in industry.
- The objects to be measured are often bulky industrial parts, which place high demands on scanning efficiency.
- The most common scanning solution at present is binocular laser scanning, in which the number of laser lines is typically fewer than 20; this scanning efficiency cannot satisfy scenarios that require high scanning efficiency.
- To improve scanning efficiency, the number of scanning lines needs to be increased, but increasing the number of scanning lines in a binocular stereo vision system leads to a decrease in matching accuracy.
- In the related art, the way to improve scanning efficiency is to increase the number of scanning lines, yet increasing the number of scanning lines in a binocular scanning system causes matching accuracy to drop sharply, and no effective solution to this problem has been proposed.
- The main purpose of the present disclosure is to provide a three-dimensional scanning processing method, a processing device, and three-dimensional scanning equipment to address the problem in the related art that the scanning efficiency of binocular laser scanning cannot meet scenarios requiring high scanning efficiency: the way to improve scanning efficiency is to increase the number of scanning lines, but increasing the number of scanning lines in a binocular scanning system leads to a sharp drop in matching accuracy.
- A processing method for three-dimensional scanning includes: projecting multiple lines onto the surface of the measured object through a pattern projector; collecting two-dimensional images of the surface of the measured object through three cameras to obtain three frames of two-dimensional images; determining matching point pairs between each pair of the three frames of two-dimensional images to obtain three sets of matching point pairs; verifying the matching consistency between the matching point pairs; and performing three-dimensional reconstruction on the matching point pairs with matching consistency to obtain three-dimensional points on the surface of the measured object.
- Determining matching point pairs between each pair of the three frames of two-dimensional images to obtain three sets of matching point pairs includes: acquiring multiple lines in the three frames of two-dimensional images, where each line is composed of multiple pixel points; taking a pixel point on a line in one frame as a selected point and determining multiple candidate matching points that match the selected point in another frame; performing three-dimensional reconstruction on the selected point and the multiple candidate matching points based on the triangulation principle to obtain multiple first candidate three-dimensional points; determining the first candidate three-dimensional points that satisfy a preset condition as second candidate three-dimensional points, of which there may be several; and forming the matching point pairs from the candidate matching points corresponding to the second candidate three-dimensional points and the selected point.
- Before verifying the matching consistency, the method further includes: acquiring the multiple light planes corresponding to the second candidate three-dimensional points, where each light plane is a light plane corresponding to the selected point; and determining a target light plane corresponding to the selected point from the multiple light planes.
- Determining the target light plane corresponding to the selected point from the multiple light planes includes: obtaining the light planes corresponding to multiple pixel points on the same line as the selected point; counting the number of occurrences of each light plane; and taking the light plane with the most occurrences as the target light plane.
- Verifying the matching consistency between the three sets of matching point pairs includes: taking the matching point corresponding to the target light plane as a target matching point; forming target matching point pairs from the selected point and the target matching points to obtain three sets of target matching point pairs; and verifying the consistency among the three sets of target matching point pairs.
- The three frames of two-dimensional images are respectively a first frame, a second frame, and a third frame. Verifying the matching consistency between the three sets of target matching point pairs includes: obtaining a selected point in the first frame and target light plane one, determined by matching the selected point between the first frame and the second frame; obtaining the selected point in the first frame and target light plane two, determined by matching the selected point between the first frame and the third frame; obtaining the target matching point, in the second frame, of the selected point in the first frame; obtaining target light plane three, determined by matching that target matching point between the second frame and the third frame; and judging whether target light plane one, target light plane two, and target light plane three are the same light plane. When they are the same light plane, the three sets of target matching point pairs have matching consistency.
- Determining multiple candidate matching points that match the selected point in another frame of two-dimensional image includes: obtaining the epipolar line equation mapping the selected point into the other frame; and taking the intersections of that epipolar line with the multiple lines in the other frame as the candidate matching points.
- A three-dimensional scanning device includes three cameras, which are combined in pairs to obtain three binocular systems. The three binocular systems are used to collect two-dimensional images of the surface of the measured object, obtaining three frames of two-dimensional images; matching point pairs between each pair of the three frames are determined to obtain three sets of matching point pairs; the matching consistency between the matching point pairs is verified; and three-dimensional reconstruction is performed on the matching point pairs with matching consistency to obtain three-dimensional points on the surface of the measured object.
- A processing device for three-dimensional scanning includes: a projection unit configured to project multiple lines onto the surface of the measured object through a pattern projector; a collection unit configured to collect two-dimensional images of the surface of the measured object through three cameras to obtain three frames of two-dimensional images; a first determination unit configured to determine matching point pairs between each pair of the three frames to obtain three sets of matching point pairs; a verification unit configured to verify the matching consistency between the matching point pairs; and a reconstruction unit configured to perform three-dimensional reconstruction on the matching point pairs with matching consistency to obtain three-dimensional points on the surface of the measured object.
- The determination unit includes: a first acquisition subunit configured to acquire multiple lines in the three frames of two-dimensional images, where each line is composed of multiple pixel points; a first determination subunit configured to take a pixel point on a line in one frame as a selected point and determine multiple candidate matching points that match the selected point in another frame; a reconstruction subunit configured to perform three-dimensional reconstruction on the selected point and the multiple candidate matching points based on the triangulation principle to obtain multiple first candidate three-dimensional points; a second determination subunit configured to determine the first candidate three-dimensional points that satisfy a preset condition as second candidate three-dimensional points, of which there may be several; and a first composition subunit configured to form the matching point pairs from the candidate matching points corresponding to the second candidate three-dimensional points and the selected point.
- The device further includes: an acquisition unit configured to acquire, before verifying the matching consistency among the three sets of matching point pairs, the multiple light planes corresponding to the second candidate three-dimensional points, where each light plane corresponds to the selected point; and a second determination unit configured to determine a target light plane corresponding to the selected point from the multiple light planes.
- The second determination unit includes: a second acquisition subunit configured to acquire the light planes corresponding to multiple pixel points on the same line as the selected point; and a calculation subunit configured to count the number of occurrences of each light plane and take the light plane with the most occurrences as the target light plane.
- The verification unit is configured to take the matching point corresponding to the target light plane as a target matching point, and includes: a second composition subunit configured to form target matching point pairs from the selected point and the target matching points to obtain three sets of target matching point pairs; and a verification subunit configured to verify the consistency among the three sets of target matching point pairs.
- The three frames of two-dimensional images are respectively a first frame, a second frame, and a third frame.
- The verification subunit includes: a first acquisition module configured to acquire the selected point in the first frame and target light plane one, determined by matching the selected point between the first frame and the second frame; a second acquisition module configured to acquire the selected point in the first frame and target light plane two, determined by matching the selected point between the first frame and the third frame; a third acquisition module configured to acquire the target matching point, in the second frame, of the selected point in the first frame; a fourth acquisition module configured to acquire target light plane three, determined by matching that target matching point between the second frame and the third frame; and a judging module configured to judge whether target light plane one, target light plane two, and target light plane three are the same light plane.
- The first determination subunit includes: a fifth acquisition unit configured to acquire the epipolar line equation mapping the selected point into the other frame of two-dimensional image, and to take the intersections of that epipolar line with the multiple lines in the other frame as the candidate matching points.
- A computer-readable storage medium includes a stored program, where the program executes any one of the processing methods for three-dimensional scanning described above.
- A processor is provided and is used for running a program, where the program, when running, executes any one of the processing methods for three-dimensional scanning described above.
- The following steps are adopted: projecting multiple lines onto the surface of the measured object through a pattern projector; collecting two-dimensional images of the surface through three cameras to obtain three frames of two-dimensional images; determining matching point pairs between each pair of the three frames to obtain three sets of matching point pairs; verifying the matching consistency between the matching point pairs; and performing three-dimensional reconstruction on the matching point pairs with matching consistency to obtain three-dimensional points on the surface of the measured object. This addresses the problem in the related art that the scanning efficiency of binocular laser scanning cannot meet scenarios requiring high scanning efficiency.
- In the related art, the way to improve scanning efficiency is to increase the number of scanning lines, but doing so leads to a sharp drop in matching accuracy.
- In the present disclosure, two-dimensional images of the surface of the measured object are collected by three cameras to obtain three frames of two-dimensional images, the three frames are matched pairwise to obtain three sets of matching point pairs, and three-dimensional reconstruction is performed on the matching point pairs with matching consistency to obtain the three-dimensional points on the surface of the measured object, thereby improving the accuracy of matching.
- FIG. 1 is a flowchart of a processing method for three-dimensional scanning provided according to an embodiment of the present disclosure;
- FIG. 2 is a schematic diagram of an optional set of three frames of two-dimensional images provided according to an embodiment of the present disclosure;
- FIG. 3 is a schematic diagram of an optional three-dimensional scanning device provided according to an embodiment of the present disclosure;
- FIG. 4 is a schematic diagram of a three-dimensional scanning processing device provided according to an embodiment of the present disclosure.
- FIG. 1 is a flow chart of a processing method for three-dimensional scanning provided according to an embodiment of the present disclosure. As shown in FIG. 1 , the method includes the following steps:
- Step S101: project multiple lines onto the surface of the measured object through a pattern projector.
- Specifically, the pattern projector of the 3D scanning device projects multiple scan lines (for example, laser scan lines) onto the surface of the object for which a 3D model is to be constructed.
- Step S102: collect two-dimensional images of the surface of the measured object through three cameras to obtain three corresponding frames of two-dimensional images.
- Specifically, the three cameras of the three-dimensional scanning device collect two-dimensional images of the surface of the object for which a 3D model is to be constructed, obtaining three corresponding frames of two-dimensional images (for example, the three 2D images shown in Fig. 2).
- the three frames of two-dimensional images are respectively a first frame of two-dimensional images, a second frame of two-dimensional images and a third frame of two-dimensional images.
- the three-dimensional scanning device is a handheld three-dimensional scanner, which can move and scan relative to the measured object.
- The three cameras may capture synchronously; that is, they capture at the same first moment, then at the same second moment, and so on until scanning is completed, which guarantees the correspondence between the images obtained by the three cameras.
- The three cameras may also capture asynchronously, provided that the capture time interval is extremely small and the position of the 3D scanning device relative to the measured object is almost unchanged. If the three-dimensional scanning device is used in a fixed position for each capture (fixed at a first position to capture one part of the measured object, then fixed at a second position to capture another part, and so on until the measured object has been scanned), there is no restriction on whether the captures of the three cameras are synchronized.
- Step S103: determine matching point pairs between each pair of the three frames of two-dimensional images to obtain three corresponding sets of matching point pairs.
- Pairwise matching is performed on the three frames of two-dimensional images to obtain the three corresponding sets of matching point pairs.
- For example, for a point A(1,2) in the first frame of two-dimensional image, A(1,2) is matched in the second frame and the matching point coordinates are B1(2,3); the matching point pair between the first frame and the second frame is therefore A(1,2) and B1(2,3).
- A(1,2) is matched in the third frame and the matching point coordinates are C(1,3); the matching point pair between the first frame and the third frame is therefore A(1,2) and C(1,3).
- B1(2,3) in the second frame is matched in the third frame and the matching point is C(1,3); the matching point pair between the second frame and the third frame is therefore B1(2,3) and C(1,3). These three sets of matching point pairs can be represented as shown below.
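- As a hedged illustration of the data produced by this pairwise matching (coordinates taken from the worked example above; the container layout is an assumption, not the patent's data structure):

```python
# Three sets of matching point pairs, one per camera pair (frames 1/2, 1/3, 2/3).
matches_1_2 = [((1, 2), (2, 3))]   # A(1,2) in frame 1  <->  B1(2,3) in frame 2
matches_1_3 = [((1, 2), (1, 3))]   # A(1,2) in frame 1  <->  C(1,3)  in frame 3
matches_2_3 = [((2, 3), (1, 3))]   # B1(2,3) in frame 2 <->  C(1,3)  in frame 3
```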
- Step S104: verify the matching consistency between the matching point pairs.
- Step S105: perform three-dimensional reconstruction on the matching point pairs with matching consistency to obtain three-dimensional points on the surface of the measured object.
- Three-dimensional reconstruction is performed on the matching point pairs with matching consistency to obtain the three-dimensional points of the object for which a 3D model is to be constructed, and the 3D model of the object is built from these three-dimensional points.
- In this way, two-dimensional images of the object are collected by three cameras to obtain three frames of two-dimensional images, the three frames are matched pairwise to obtain matching point pairs, and three-dimensional reconstruction is performed on the matching point pairs with matching consistency to obtain the 3D model of the object, which improves the accuracy of 3D reconstruction matching.
- Determining matching point pairs between each pair of the three frames of two-dimensional images to obtain three sets of matching point pairs includes: acquiring multiple lines in the three frames of two-dimensional images, where each line is composed of multiple pixel points; taking a pixel point on a line in one frame as a selected point and determining multiple candidate matching points that match the selected point in another frame; performing three-dimensional reconstruction on the selected point and the multiple candidate matching points based on the triangulation principle to obtain multiple first candidate three-dimensional points; determining the first candidate three-dimensional points that satisfy a preset condition as second candidate three-dimensional points, of which there may be several; and forming the matching point pairs from the candidate matching points corresponding to the second candidate three-dimensional points and the selected point.
- three frames of two-dimensional images contain multiple lines (i.e., multiple scanning lines emitted by the pattern projector), and the lines are composed of multiple pixel points.
- In Figure 2, the image of Cam L corresponds to the first frame of two-dimensional image, the image of Cam M corresponds to the second frame, and the image of Cam H corresponds to the third frame.
- For the selected point A, the candidate matching points in the second frame of two-dimensional image include B1, B2, B3, B4, B5, and B6.
- The screening method is to compute, for each first candidate three-dimensional point, the distance Distance(k) from the point to the light plane Plane(k) corresponding to each of the scan lines.
- If the distance value of a first candidate three-dimensional point is within the set distance threshold, that is, Distance(k) ≤ dist_TH, the first candidate three-dimensional point lies in light plane k (the k-th projected line); that is, the first candidate three-dimensional point satisfies the preset condition, as written out below.
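- As a hedged illustration (the patent does not spell out the plane parameterization), if light plane Plane(k) is written in normal-offset form with normal n_k and offset d_k, the screening distance can be expressed as:

```latex
% Point-to-plane distance for a first candidate 3D point X (assumed parameterization).
\mathrm{Distance}(k) \;=\; \frac{\lvert \mathbf{n}_k \cdot \mathbf{X} + d_k \rvert}{\lVert \mathbf{n}_k \rVert},
\qquad \text{keep } \mathbf{X} \text{ as a second candidate if } \exists\, k:\ \mathrm{Distance}(k) \le \mathrm{dist\_TH}.
```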
- In this example, the second candidate three-dimensional points are O1, O2, and O3.
- The candidate matching points B1, B2, and B3 corresponding to O1, O2, and O3 each form a matching point pair with point A.
- The matching point of each pixel point of the first frame in the second frame is obtained in this way, and the same method is used to obtain the matching point pairs between the first frame and the third frame, and between the second frame and the third frame. A minimal sketch of this candidate generation and screening step is given below.
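- The following Python sketch is not from the patent; it is a minimal illustration, assuming calibrated pinhole cameras with known 3x4 projection matrices and light planes given as (normal, offset) pairs, of how a selected point and its candidate matches could be triangulated and screened against the light planes (function names and the threshold are illustrative):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence x1 <-> x2 (pixel coordinates)."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                                   # homogeneous -> Euclidean 3D point

def screen_candidates(P1, P2, selected_pt, candidate_pts, light_planes, dist_th=0.2):
    """Keep candidates whose triangulated point lies close to some light plane.

    Returns (candidate, plane_index) pairs, mirroring the screening of the
    first candidate three-dimensional points described above.
    """
    kept = []
    for cand in candidate_pts:
        X = triangulate(P1, P2, selected_pt, cand)        # first candidate 3D point
        for k, (n, d) in enumerate(light_planes):         # n: np.array normal, d: offset
            if abs(n @ X + d) / np.linalg.norm(n) <= dist_th:   # Distance(k) <= dist_TH
                kept.append((cand, k))                    # second candidate + its light plane
                break
    return kept
```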
- Before verifying the matching consistency among the three sets of matching point pairs, the method further includes: acquiring the multiple light planes corresponding to the second candidate three-dimensional points, where each light plane corresponds to the selected point; and determining the target light plane corresponding to the selected point from the multiple light planes.
- The light plane of a second candidate 3D point is determined by calculating the distance Distance(k) between the point and the light plane Plane(k) corresponding to each scan line.
- If the distance value of a second candidate three-dimensional point is within the set distance threshold, that is, Distance(k) ≤ dist_TH, the point lies in the k-th light plane (the k-th projected line). In other words, each second candidate three-dimensional point corresponds to one light plane.
- The light planes corresponding to the second candidate three-dimensional points O1, O2, and O3 are K1, K2, and K3 respectively.
- The best light plane (that is, the target light plane mentioned above) is then selected from these light planes.
- In this way, the reconstructed three-dimensional points on the surface of the object can be obtained more accurately.
- Determining the target light plane corresponding to the selected point from the multiple light planes includes: acquiring the light planes corresponding to multiple pixel points on the same line as the selected point; counting the number of occurrences of each light plane; and taking the light plane with the most occurrences as the target light plane.
- The method for selecting the best light plane from the light planes K1, K2, and K3 is as follows: first, the light planes corresponding to multiple pixel points on the same line as the selected point are obtained. As shown in Fig. 2, pixel points A1, A2, A3, and A4 lie on the same line as point A.
- The light planes corresponding to A1 are K1 and K2; the light planes corresponding to A2 are K1 and K4; the light planes corresponding to A3 are K1, K2, and K3; and the light planes corresponding to A4 are K1 and K2.
- The number of occurrences of each light plane is counted, and the light plane with the most occurrences is taken as the best light plane; here, the best light plane corresponding to point A is K1.
- The lines in the three frames of two-dimensional images may be discontinuous; in that case, the selected point on a line is taken and the light planes corresponding to multiple pixels in its neighborhood are searched to determine the optimal light plane corresponding to the selected point.
- Determining the optimal light plane of the selected point from the light planes of multiple pixel points on the same line improves the accuracy of the light plane assigned to each pixel point. A minimal voting sketch follows.
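- A hedged Python sketch of this majority vote, assuming each pixel on (or near) the line already carries a list of candidate light-plane indices from the previous screening step (names and indices are illustrative):

```python
from collections import Counter

def target_light_plane(plane_lists):
    """Pick the light plane that occurs most often among the pixels of one line.

    plane_lists: candidate light-plane indices per pixel,
                 e.g. [[1, 2], [1, 4], [1, 2, 3], [1, 2]] for A1..A4 above.
    """
    votes = Counter(k for planes in plane_lists for k in planes)
    best_plane, _ = votes.most_common(1)[0]        # most frequent plane wins the vote
    return best_plane

# With the worked example from the description, plane K1 (index 1) wins:
print(target_light_plane([[1, 2], [1, 4], [1, 2, 3], [1, 2]]))   # -> 1
```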
- Verifying the matching consistency among the three sets of matching point pairs includes: taking the matching point corresponding to the target light plane as the target matching point; forming target matching point pairs from the selected point and the target matching points to obtain three sets of target matching point pairs; and verifying the consistency among the three sets of target matching point pairs.
- the matching point corresponding to the best light plane is the best matching point.
- The best matching point pair between the first frame and the second frame of two-dimensional image is A(1,2) and B1(2,3), and the corresponding best light plane is K1; the best matching point pair between the first frame and the third frame is A(1,2) and C(1,3), and the corresponding best light plane is K1; the best matching point pair between the second frame and the third frame is B1(2,3) and C(1,3), and the corresponding best light plane is K1.
- Only the 3D points reconstructed from the best matching point pairs can be 3D points on the surface of the object for which a 3D model is to be built, so the consistency between the best matching point pairs must be verified.
- The three frames of two-dimensional images are respectively the first frame, the second frame, and the third frame, and verifying the matching consistency between the three sets of target matching point pairs includes: obtaining a selected point in the first frame and target light plane one, determined by matching the selected point between the first frame and the second frame; obtaining the selected point in the first frame and target light plane two, determined by matching the selected point between the first frame and the third frame; obtaining the target matching point, in the second frame, of the selected point in the first frame; obtaining target light plane three, determined by matching that target matching point between the second frame and the third frame; and judging whether target light plane one, target light plane two, and target light plane three are the same light plane.
- For example, for a point A(1,2) in the first frame, A(1,2) is matched in the second frame and the best matching point coordinates are B1(2,3); the best matching point pair between the first frame and the second frame is therefore A(1,2) and B1(2,3), and the corresponding best light plane is K1.
- The best matching point obtained by matching A(1,2) in the third frame is C(1,3); the best matching point pair between the first frame and the third frame is therefore A(1,2) and C(1,3), and the corresponding best light plane is K1.
- The best matching point obtained by matching B1(2,3) of the second frame in the third frame is C(1,3); the best matching point pair between the second frame and the third frame is therefore B1(2,3) and C(1,3), and the corresponding best light plane is K1.
- The best light planes obtained by the three pairwise matchings are the same, namely K1, which indicates that the three sets of best matching point pairs have matching consistency. A minimal sketch of this three-way check follows.
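- A hedged sketch of the three-way check, assuming each pairwise matching has already been reduced to a best matching point pair plus a best light-plane label (all names are illustrative):

```python
def has_matching_consistency(plane_one, plane_two, plane_three):
    """True when the three pairwise matchings selected the same target light plane."""
    return plane_one == plane_two == plane_three

# Worked example from the description:
#   frame 1 <-> frame 2 : A(1,2)  <-> B1(2,3), target light plane K1
#   frame 1 <-> frame 3 : A(1,2)  <-> C(1,3),  target light plane K1
#   frame 2 <-> frame 3 : B1(2,3) <-> C(1,3),  target light plane K1
print(has_matching_consistency("K1", "K1", "K1"))   # -> True, so the match is kept
```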
- Taking a pixel point on a line in one frame of two-dimensional image as the selected point and determining multiple candidate matching points that match the selected point in another frame includes: obtaining the epipolar line equation mapping the selected point into the other frame, and taking the intersections of that epipolar line with the multiple lines in the other frame as the candidate matching points.
- The epipolar line is calculated based on the relative positions of Cam L and Cam M, as illustrated by the schematic epipolar line labeled in Figure 2.
- The intersections of the epipolar line with the multiple lines in the other frame of two-dimensional image are then used as the candidate matching points. A minimal epipolar-line sketch follows.
- In summary, the three-dimensional scanning processing method projects multiple lines onto the surface of the measured object with a pattern projector; collects two-dimensional images of the surface through three cameras to obtain three frames of two-dimensional images; determines matching point pairs between each pair of the three frames to obtain three sets of matching point pairs; verifies the matching consistency between the matching point pairs; and performs three-dimensional reconstruction on the matching point pairs with matching consistency to obtain three-dimensional points on the surface of the measured object. This solves the problem that the scanning efficiency of binocular laser scanning in the related art cannot meet some high scanning-efficiency requirements.
- In the related art, the way to improve scanning efficiency is to increase the number of scanning lines, but increasing the number of scanning lines in a binocular system leads to a sharp drop in matching accuracy.
- By collecting two-dimensional images of the surface of the measured object with three cameras to obtain three frames of two-dimensional images, matching the three frames pairwise to obtain three sets of matching point pairs, and performing three-dimensional reconstruction on the matching point pairs with matching consistency to obtain the three-dimensional points on the surface of the measured object, the method improves the accuracy of matching.
- An embodiment of the present disclosure also provides a three-dimensional scanning device.
- The three-dimensional scanning device includes three cameras, and the three cameras are combined in pairs to obtain three binocular systems. The three binocular systems are used to collect two-dimensional images of the surface of the measured object, obtaining three frames of two-dimensional images; matching point pairs between each pair of the three frames are determined to obtain three sets of matching point pairs; the matching consistency between the matching point pairs is verified; and three-dimensional reconstruction is performed on the matching point pairs with matching consistency to obtain three-dimensional points on the surface of the measured object.
- FIG. 3 is a schematic diagram of an optional three-dimensional scanning device provided according to an embodiment of the present disclosure, in which Cam L, Cam M, and Cam R are the three cameras, and the three cameras, taken two at a time, form three binocular systems.
- Projector is a pattern projector.
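- As a small hedged illustration of the pairing (the camera names follow the figure; the snippet itself is not from the patent):

```python
from itertools import combinations

cameras = ["Cam L", "Cam M", "Cam R"]
binocular_systems = list(combinations(cameras, 2))   # pairwise combinations
print(binocular_systems)
# [('Cam L', 'Cam M'), ('Cam L', 'Cam R'), ('Cam M', 'Cam R')]
```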
- An embodiment of the present disclosure also provides a three-dimensional scanning processing device. It should be noted that the three-dimensional scanning processing device of this embodiment can be used to execute the processing method for three-dimensional scanning provided by the embodiments of the present disclosure.
- The processing device for three-dimensional scanning provided by the embodiments of the present disclosure is introduced below.
- FIG. 4 is a schematic diagram of a three-dimensional scanning processing device according to an embodiment of the disclosure. As shown in FIG. 4 , the device includes: a projection unit 401 , an acquisition unit 402 , a first determination unit 403 , a verification unit 404 and a reconstruction unit 405 .
- the projection unit 401 is configured to project multiple lines to the surface of the measured object through the pattern projector.
- the acquisition unit 402 is configured to acquire two-dimensional images of the surface of the measured object through three cameras, and correspondingly obtain three frames of two-dimensional images.
- The first determination unit 403 is configured to determine matching point pairs between each pair of the three frames of two-dimensional images to obtain three corresponding sets of matching point pairs.
- the verification unit 404 is configured to verify matching consistency between matching point pairs.
- the reconstruction unit 405 is configured to perform three-dimensional reconstruction on the matched point pairs with matching consistency to obtain three-dimensional points on the surface of the measured object.
- In the three-dimensional scanning processing device, the projection unit 401 projects multiple lines onto the surface of the measured object through the pattern projector; the acquisition unit 402 collects two-dimensional images of the surface of the measured object through three cameras to obtain three frames of two-dimensional images; the first determination unit 403 determines matching point pairs between each pair of the three frames to obtain three sets of matching point pairs; the verification unit 404 verifies the matching consistency between the matching point pairs; and the reconstruction unit 405 performs three-dimensional reconstruction on the matching point pairs with matching consistency to obtain three-dimensional points on the surface of the measured object. This solves the problem that the scanning efficiency of binocular laser scanning in the related art cannot meet some high scanning-efficiency requirements.
- In the related art, the way to improve scanning efficiency is to increase the number of scanning lines, but increasing the number of scanning lines in a binocular scanning system leads to a sharp drop in matching accuracy.
- By collecting two-dimensional images of the surface of the measured object with three cameras to obtain three frames of two-dimensional images, matching the three frames pairwise to obtain three sets of matching point pairs, and performing three-dimensional reconstruction on the matching point pairs with matching consistency to obtain the three-dimensional points on the surface of the measured object, the device improves the accuracy of matching.
- The first determination unit includes: a first acquisition subunit configured to acquire, before the matching point pairs between each pair of the three frames are determined, multiple lines in the three frames of two-dimensional images, where each line is composed of multiple pixel points; a first determination subunit configured to take a pixel point on a line in one frame as a selected point and determine multiple candidate matching points that match the selected point in another frame; a reconstruction subunit configured to perform three-dimensional reconstruction on the selected point and the multiple candidate matching points based on the triangulation principle to obtain multiple first candidate three-dimensional points; a second determination subunit configured to determine the first candidate three-dimensional points that satisfy a preset condition as second candidate three-dimensional points, of which there may be several; and a first composition subunit configured to form the matching point pairs from the candidate matching points corresponding to the second candidate three-dimensional points and the selected point.
- The device further includes: an acquisition unit configured to acquire, before verifying the matching consistency among the three sets of matching point pairs, the multiple light planes corresponding to the second candidate three-dimensional points, where each light plane corresponds to the selected point; and a second determination unit configured to determine the target light plane corresponding to the selected point from the multiple light planes.
- The second determination unit includes: a second acquisition subunit configured to acquire the light planes corresponding to multiple pixel points on the same line as the selected point; and a calculation subunit configured to count the number of occurrences of each light plane and take the light plane with the most occurrences as the target light plane.
- The verification unit is configured to take the matching point corresponding to the target light plane as the target matching point, and includes: a second composition subunit configured to form target matching point pairs from the selected point and the target matching points to obtain three sets of target matching point pairs; and a verification subunit configured to verify the consistency among the three sets of target matching point pairs.
- The three frames of two-dimensional images are respectively the first frame, the second frame, and the third frame, and the verification subunit includes: a first acquisition module configured to acquire the selected point in the first frame and target light plane one, determined by matching the selected point between the first frame and the second frame; a second acquisition module configured to acquire the selected point in the first frame and target light plane two, determined by matching the selected point between the first frame and the third frame; a third acquisition module configured to acquire the target matching point, in the second frame, of the selected point in the first frame; and a fourth acquisition module configured to acquire target light plane three, determined by matching that target matching point between the second frame and the third frame of two-dimensional image.
- The first determination subunit includes: a fifth acquisition unit configured to acquire the epipolar line equation mapping the selected point into the other frame of two-dimensional image, and to take the intersections of that epipolar line with the multiple lines in the other frame as the candidate matching points.
- The processing device for three-dimensional scanning includes a processor and a memory. The projection unit, the acquisition unit, the first determination unit, the verification unit, and the reconstruction unit are all stored in the memory as program units, and the processor executes the program units stored in the memory to realize the corresponding functions.
- The processor includes a kernel, and the kernel fetches the corresponding program units from the memory.
- One or more kernels can be set, and the reconstruction of the three-dimensional points of the measured object is realized by adjusting the kernel parameters.
- The memory may include non-persistent memory in computer-readable media, in the form of random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
- An embodiment of the present invention provides a computer-readable storage medium on which a program is stored, and when the program is executed by a processor, a processing method for three-dimensional scanning is realized.
- An embodiment of the present invention provides a processor, and the processor is used to run a program, wherein a processing method for three-dimensional scanning is executed when the program is running.
- An embodiment of the present invention provides a device. The device includes a processor, a memory, and a program stored in the memory and operable on the processor. When the processor executes the program, the following steps are implemented: projecting multiple lines onto the surface of the measured object through a pattern projector; collecting two-dimensional images of the surface of the measured object through three cameras to obtain three frames of two-dimensional images; determining matching point pairs between each pair of the three frames to obtain three sets of matching point pairs; verifying the matching consistency between the matching point pairs; and performing three-dimensional reconstruction on the matching point pairs with matching consistency to obtain three-dimensional points on the surface of the measured object.
- Determining matching point pairs between each pair of the three frames of two-dimensional images includes: acquiring multiple lines in the three frames of two-dimensional images, where each line is composed of multiple pixel points; taking a pixel point on a line in one frame as the selected point and determining multiple candidate matching points that match the selected point in another frame; performing three-dimensional reconstruction on the selected point and the multiple candidate matching points based on the triangulation principle to obtain multiple first candidate three-dimensional points; determining the first candidate three-dimensional points that satisfy a preset condition as second candidate three-dimensional points, of which there may be several; and forming matching point pairs from the candidate matching points corresponding to the second candidate three-dimensional points and the selected point.
- Before verifying the matching consistency between the three sets of matching point pairs, the method further includes: acquiring the multiple light planes corresponding to the second candidate three-dimensional points, where each light plane corresponds to the selected point; and determining the target light plane corresponding to the selected point from the multiple light planes.
- Determining the target light plane corresponding to the selected point from the multiple light planes includes: acquiring the light planes corresponding to multiple pixel points on the same line as the selected point; counting the number of occurrences of each light plane; and taking the light plane with the most occurrences as the target light plane.
- Verifying the matching consistency between the three sets of matching point pairs includes: taking the matching point corresponding to the target light plane as the target matching point; forming target matching point pairs from the selected point and the target matching points to obtain three sets of target matching point pairs; and verifying the consistency between the three sets of target matching point pairs.
- The three frames of two-dimensional images are respectively the first frame, the second frame, and the third frame, and verifying the matching consistency between the three sets of target matching point pairs includes: obtaining a selected point in the first frame and target light plane one, determined by matching the selected point between the first frame and the second frame; obtaining the selected point in the first frame and target light plane two, determined by matching the selected point between the first frame and the third frame; obtaining the target matching point, in the second frame, of the selected point in the first frame; obtaining target light plane three, determined by matching that target matching point between the second frame and the third frame; and judging whether target light plane one, target light plane two, and target light plane three are the same light plane. When target light plane one, target light plane two, and target light plane three are the same light plane, the three sets of target matching point pairs have matching consistency.
- Determining multiple candidate matching points that match the selected point in another frame of two-dimensional image includes: obtaining the epipolar line equation mapping the selected point into the other frame, and taking the intersections of that epipolar line with the multiple lines in the other frame as the candidate matching points.
- The device herein may be a server, a PC, a PAD (tablet), a mobile phone, or the like.
- The present disclosure also provides a computer program product which, when executed on a data processing device, is adapted to execute a program initialized with the following method steps: projecting multiple lines onto the surface of the measured object through a pattern projector; collecting two-dimensional images of the surface of the measured object through three cameras to obtain three frames of two-dimensional images; determining matching point pairs between each pair of the three frames to obtain three sets of matching point pairs; verifying the matching consistency between the matching point pairs; and performing three-dimensional reconstruction on the matching point pairs with matching consistency to obtain three-dimensional points on the surface of the measured object.
- Determining matching point pairs between each pair of the three frames of two-dimensional images includes: acquiring multiple lines in the three frames of two-dimensional images, where each line is composed of multiple pixel points; taking a pixel point on a line in one frame as the selected point and determining multiple candidate matching points that match the selected point in another frame; performing three-dimensional reconstruction on the selected point and the multiple candidate matching points based on the triangulation principle to obtain multiple first candidate three-dimensional points; determining the first candidate three-dimensional points that satisfy a preset condition as second candidate three-dimensional points, of which there may be several; and forming matching point pairs from the candidate matching points corresponding to the second candidate three-dimensional points and the selected point.
- Before verifying the matching consistency between the three sets of matching point pairs, the method further includes: acquiring the multiple light planes corresponding to the second candidate three-dimensional points, where each light plane corresponds to the selected point; and determining the target light plane corresponding to the selected point from the multiple light planes.
- Determining the target light plane corresponding to the selected point from the multiple light planes includes: acquiring the light planes corresponding to multiple pixel points on the same line as the selected point; counting the number of occurrences of each light plane; and taking the light plane with the most occurrences as the target light plane.
- Verifying the matching consistency between the three sets of matching point pairs includes: taking the matching point corresponding to the target light plane as the target matching point; forming target matching point pairs from the selected point and the target matching points to obtain three sets of target matching point pairs; and verifying the consistency between the three sets of target matching point pairs.
- The three frames of two-dimensional images are respectively the first frame, the second frame, and the third frame, and verifying the matching consistency between the three sets of target matching point pairs includes: obtaining a selected point in the first frame and target light plane one, determined by matching the selected point between the first frame and the second frame; obtaining the selected point in the first frame and target light plane two, determined by matching the selected point between the first frame and the third frame; obtaining the target matching point, in the second frame, of the selected point in the first frame; obtaining target light plane three, determined by matching that target matching point between the second frame and the third frame; and judging whether target light plane one, target light plane two, and target light plane three are the same light plane. When target light plane one, target light plane two, and target light plane three are the same light plane, the three sets of target matching point pairs have matching consistency.
- Determining multiple candidate matching points that match the selected point in another frame of two-dimensional image includes: obtaining the epipolar line equation mapping the selected point into the other frame, and taking the intersections of that epipolar line with the multiple lines in the other frame as the candidate matching points.
- the embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
- These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means, which implement the functions specified in one or more procedures of the flowchart and/or one or more blocks of the block diagram.
- a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
- Memory may include non-permanent storage in computer readable media, in the form of random access memory (RAM) and/or nonvolatile memory such as read only memory (ROM) or flash RAM.
- Computer-readable media, including permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology.
- Information may be computer readable instructions, data structures, modules of a program, or other data.
- Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cartridge, tape magnetic disk storage or other magnetic storage device or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
- computer-readable media excludes transitory computer-readable media, such as modulated data signals and carrier waves.
- the embodiments of the present application may be provided as methods, systems or computer program products. Accordingly, the present application can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
- the three-dimensional scanning processing method, device and three-dimensional scanning equipment provided by the embodiments of the present disclosure project multiple lines to the surface of the measured object through the pattern projector; collect two-dimensional images of the surface of the measured object through three cameras, and obtain correspondingly Three frames of two-dimensional images; determining matching point pairs between two pairs of the three frames of two-dimensional images, and correspondingly obtaining three sets of matching point pairs; verifying the matching consistency between the matching point pairs; pairing the matching point pairs with matching consistency Matching point pairs for three-dimensional reconstruction to obtain three-dimensional points on the surface of the measured object solves the problem that the scanning efficiency of binocular laser scanning in the related art cannot meet some high scanning efficiency requirements.
- One way to improve scanning efficiency is to increase the number of scanning lines; however, increasing the number of scanning lines in a binocular scanning system leads to a sharp drop in matching accuracy.
- In the embodiments of the present disclosure, two-dimensional images of the surface of the measured object are collected by three cameras to obtain three frames of two-dimensional images, the three frames are matched in pairs to obtain three sets of matching point pairs, and the matching point pairs having matching consistency are three-dimensionally reconstructed to obtain the three-dimensional points on the surface of the measured object, thereby achieving the effect of improving matching accuracy.
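- For illustration only, the following Python sketch outlines the trinocular flow summarized above (pairwise matching of three frames, consistency verification, then triangulation). It is a minimal sketch under assumptions not taken from the disclosure: the function names, the DLT triangulation, the 3x4 projection-matrix convention and the placeholder matching/verification callables are all hypothetical.

```python
# Hypothetical sketch of the trinocular pipeline; names and conventions are assumptions.
import itertools
import numpy as np


def triangulate(pt_a, pt_b, proj_a, proj_b):
    """Linear (DLT) triangulation of one 2D-2D correspondence into a 3D point."""
    A = np.vstack([
        pt_a[0] * proj_a[2] - proj_a[0],
        pt_a[1] * proj_a[2] - proj_a[1],
        pt_b[0] * proj_b[2] - proj_b[0],
        pt_b[1] * proj_b[2] - proj_b[1],
    ])
    _, _, vt = np.linalg.svd(A)
    x = vt[-1]
    return x[:3] / x[3]


def reconstruct_frame(images, projections, find_matches, verify_consistency):
    """Three cameras -> three binocular pairs -> consistency-checked 3D points.

    images             : dict cam_id -> image with the projected lines extracted
    projections        : dict cam_id -> 3x4 projection matrix
    find_matches       : callable(img_a, img_b) -> list of ((u, v), (u', v')) pairs
    verify_consistency : callable(pairwise dict) -> matches from the first camera
                         pair that survive the cross-pair consistency check
    """
    cams = sorted(images)
    # Step 1: match every pair of the three frames (three sets of matching point pairs).
    pairwise = {
        (a, b): find_matches(images[a], images[b])
        for a, b in itertools.combinations(cams, 2)
    }
    # Step 2: keep only the matching point pairs that are consistent across all pairs.
    kept = verify_consistency(pairwise)
    # Step 3: triangulate the surviving correspondences from one binocular pair.
    a, b = cams[0], cams[1]
    return np.asarray([triangulate(p, q, projections[a], projections[b]) for p, q in kept])
```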
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Length Measuring Devices By Optical Means (AREA)
Claims (11)
- A processing method for three-dimensional scanning, comprising: projecting multiple lines onto the surface of a measured object through a pattern projector; collecting two-dimensional images of the surface of the measured object through three cameras to correspondingly obtain three frames of two-dimensional images; determining matching point pairs between each pair of the three frames of two-dimensional images to correspondingly obtain three sets of matching point pairs; verifying matching consistency between the matching point pairs; and performing three-dimensional reconstruction on the matching point pairs having matching consistency to obtain three-dimensional points on the surface of the measured object.
- The method according to claim 1, wherein determining the matching point pairs between each pair of the three frames of two-dimensional images to correspondingly obtain three sets of matching point pairs comprises: acquiring multiple lines in the three frames of two-dimensional images, wherein each line is composed of multiple pixel points; taking a pixel point on a line in one frame of two-dimensional image as a selected point, and determining multiple candidate matching points in another frame of two-dimensional image that match the selected point; performing three-dimensional reconstruction on the selected point and the multiple candidate matching points based on the triangulation principle to obtain multiple first candidate three-dimensional points; determining first candidate three-dimensional points that satisfy a preset condition as second candidate three-dimensional points, wherein there are multiple second candidate three-dimensional points; and forming the matching point pairs from the candidate matching points corresponding to the second candidate three-dimensional points and the selected point.
- The method according to claim 2, wherein before verifying the matching consistency between the three sets of matching point pairs, the method further comprises: acquiring multiple light planes corresponding to the second candidate three-dimensional points, wherein the light planes are light planes corresponding to the selected point; and determining, from the multiple light planes, a target light plane corresponding to the selected point.
- The method according to claim 3, wherein determining, from the multiple light planes, the target light plane corresponding to the selected point comprises: acquiring multiple light planes corresponding to multiple pixel points on the same line as the selected point; and counting the number of occurrences of each light plane, and taking the light plane with the most occurrences as the target light plane.
- The method according to claim 4, wherein verifying the matching consistency between the three sets of matching point pairs comprises: taking the matching point corresponding to the target light plane as a target matching point; forming a target matching point pair from the selected point and the target matching point to obtain three sets of target matching point pairs; and verifying the consistency between the three sets of target matching point pairs.
- The method according to claim 5, wherein the three frames of two-dimensional images are respectively a first frame of two-dimensional image, a second frame of two-dimensional image and a third frame of two-dimensional image, and verifying the matching consistency between the three sets of target matching point pairs comprises: acquiring the selected point in the first frame of two-dimensional image, and a first target light plane determined for the selected point by matching the first frame of two-dimensional image with the second frame of two-dimensional image; acquiring the selected point in the first frame of two-dimensional image, and a second target light plane determined for the selected point by matching the first frame of two-dimensional image with the third frame of two-dimensional image; acquiring the target matching point, in the second frame of two-dimensional image, of the selected point in the first frame of two-dimensional image; acquiring a third target light plane determined for the target matching point in the second frame of two-dimensional image by matching the second frame of two-dimensional image with the third frame of two-dimensional image; determining whether the first target light plane, the second target light plane and the third target light plane are the same light plane; and when the first target light plane, the second target light plane and the third target light plane are the same light plane, determining that the three sets of target matching point pairs have matching consistency.
- The method according to claim 2, wherein taking a pixel point on a line in one frame of two-dimensional image as the selected point and determining multiple candidate matching points in another frame of two-dimensional image that match the selected point comprises: acquiring an epipolar line equation of the selected point mapped into the other frame of two-dimensional image; and taking intersections of the epipolar line with multiple lines in the other frame of two-dimensional image as the multiple candidate matching points.
- A three-dimensional scanning device, wherein the three-dimensional scanning device comprises three cameras, and the three cameras are combined in pairs to obtain three binocular systems; wherein the three binocular systems are configured to collect two-dimensional images of the surface of a measured object to correspondingly obtain three frames of two-dimensional images, wherein matching point pairs between each pair of the three frames of two-dimensional images are determined to correspondingly obtain three sets of matching point pairs, matching consistency between the matching point pairs is verified, and three-dimensional reconstruction is performed on the matching point pairs having matching consistency to obtain three-dimensional points on the surface of the measured object.
- A processing apparatus for three-dimensional scanning, comprising: a projection unit configured to project multiple lines onto the surface of a measured object through a pattern projector; an acquisition unit configured to collect two-dimensional images of the surface of the measured object through three cameras to correspondingly obtain three frames of two-dimensional images; a first determination unit configured to determine matching point pairs between each pair of the three frames of two-dimensional images to correspondingly obtain three sets of matching point pairs; a verification unit configured to verify matching consistency between the matching point pairs; and a reconstruction unit configured to perform three-dimensional reconstruction on the matching point pairs having matching consistency to obtain three-dimensional points on the surface of the measured object.
- A computer-readable storage medium, wherein the storage medium comprises a stored program, wherein the program, when run, performs the processing method for three-dimensional scanning according to any one of claims 1 to 7.
- A processor, wherein the processor is configured to run a program, wherein the program, when running, performs the processing method for three-dimensional scanning according to any one of claims 1 to 7.
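- The following sketch, which is not part of the claims, illustrates one possible way to organize the candidate matching, light-plane voting and three-way consistency check described in claims 2 to 7. Every helper name, the one-pixel epipolar tolerance and the data layouts are assumptions made for readability, not the claimed implementation.

```python
# Hypothetical sketch of the per-point matching flow in claims 2-7; all names and
# thresholds are assumptions for illustration.
from collections import Counter
import numpy as np


def candidate_matches(selected_pt, fundamental_f, other_lines):
    """Claim-7 style step: intersect the selected point's epipolar line with the
    lines detected in the other image; each intersection is a candidate match."""
    u, v = selected_pt
    a, b, c = fundamental_f @ np.array([u, v, 1.0])   # epipolar line a*x + b*y + c = 0
    candidates = []
    for line_pts in other_lines:                      # each line: list of (x, y) pixels
        for x, y in line_pts:
            if abs(a * x + b * y + c) / np.hypot(a, b) < 1.0:   # within ~1 px (assumed)
                candidates.append((x, y))
    return candidates


def vote_target_light_plane(plane_ids_on_line):
    """Claim-4 style step: among the candidate light-plane ids of all pixels on the
    same line, the id that occurs most often becomes the target light plane."""
    counts = Counter(p for p in plane_ids_on_line if p is not None)
    return counts.most_common(1)[0][0] if counts else None


def has_matching_consistency(plane_12, plane_13, plane_23):
    """Claim-6 style check: the target light planes from the camera-1/2, camera-1/3
    and camera-2/3 matches must all be the same light plane."""
    return plane_12 is not None and plane_12 == plane_13 == plane_23
```

- As a usage example under these assumptions, has_matching_consistency(3, 3, 3) returns True, while has_matching_consistency(3, 3, 5) returns False and the corresponding point pair would be discarded before reconstruction.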
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22906705.3A EP4451221A1 (en) | 2021-12-17 | 2022-12-16 | Three-dimensional scanning processing method and apparatus and three-dimensional scanning device |
KR1020247020190A KR20240113513A (ko) | 2021-12-17 | 2022-12-16 | 3차원 스캔의 처리 방법, 장치 및 3차원 스캔 기기 |
CA3242368A CA3242368A1 (en) | 2021-12-17 | 2022-12-16 | Method and apparatus for processing three-dimensional scanning, and three-dimensional scanning device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111556738.2 | 2021-12-17 | ||
CN202111556738.2A CN116266379A (zh) | 2021-12-17 | 2021-12-17 | 三维扫描的处理方法、装置和三维扫描设备 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023109960A1 true WO2023109960A1 (zh) | 2023-06-22 |
Family
ID=86743932
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/139716 WO2023109960A1 (zh) | 2021-12-17 | 2022-12-16 | 三维扫描的处理方法、装置和三维扫描设备 |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP4451221A1 (zh) |
KR (1) | KR20240113513A (zh) |
CN (1) | CN116266379A (zh) |
CA (1) | CA3242368A1 (zh) |
WO (1) | WO2023109960A1 (zh) |
- 2021
  - 2021-12-17 CN CN202111556738.2A patent/CN116266379A/zh active Pending
- 2022
  - 2022-12-16 WO PCT/CN2022/139716 patent/WO2023109960A1/zh active Application Filing
  - 2022-12-16 KR KR1020247020190A patent/KR20240113513A/ko unknown
  - 2022-12-16 EP EP22906705.3A patent/EP4451221A1/en active Pending
  - 2022-12-16 CA CA3242368A patent/CA3242368A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190353477A1 (en) * | 2016-09-14 | 2019-11-21 | Hangzhou Scantech Company Limited | Three-dimensional sensor system and three-dimensional data acquisition method |
CN108288292A (zh) * | 2017-12-26 | 2018-07-17 | 中国科学院深圳先进技术研究院 | 一种三维重建方法、装置及设备 |
WO2021088481A1 (zh) * | 2019-11-08 | 2021-05-14 | 南京理工大学 | 一种基于条纹投影的高精度动态实时360度全方位点云获取方法 |
CN113390357A (zh) * | 2021-07-08 | 2021-09-14 | 南京航空航天大学 | 一种基于双目多线结构光的铆钉齐平度测量方法 |
Also Published As
Publication number | Publication date |
---|---|
EP4451221A1 (en) | 2024-10-23 |
CN116266379A (zh) | 2023-06-20 |
KR20240113513A (ko) | 2024-07-22 |
CA3242368A1 (en) | 2023-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zioulis et al. | Spherical view synthesis for self-supervised 360 depth estimation | |
KR102096806B1 (ko) | 3차원 센서 시스템 및 3차원 데이터 획득방법 | |
JP6897563B2 (ja) | 画像処理装置と画像処理方法およびプログラム | |
US20200334840A1 (en) | Three-Dimensional Scanning System and Scanning Method Thereof | |
CN106033621B (zh) | 一种三维建模的方法及装置 | |
CN109461181A (zh) | 基于散斑结构光的深度图像获取方法及系统 | |
US20210044787A1 (en) | Three-dimensional reconstruction method, three-dimensional reconstruction device, and computer | |
US20190012804A1 (en) | Methods and apparatuses for panoramic image processing | |
US20120176478A1 (en) | Forming range maps using periodic illumination patterns | |
US9025862B2 (en) | Range image pixel matching method | |
JP6580761B1 (ja) | 偏光ステレオカメラによる深度取得装置及びその方法 | |
CN105241397A (zh) | 基于结构光的实时测量拼接方法及其设备 | |
CN102903101B (zh) | 使用多台相机进行水面数据采集与重建的方法 | |
CN109974623B (zh) | 基于线激光和双目视觉的三维信息获取方法和装置 | |
WO2018120168A1 (zh) | 一种视觉检测方法及系统 | |
WO2018001252A1 (zh) | 一种投射单元及包括该单元的拍摄装置、处理器、成像设备 | |
US11803982B2 (en) | Image processing device and three-dimensional measuring system | |
JP2010276433A (ja) | 撮像装置、画像処理装置及び距離計測装置 | |
CN110260801A (zh) | 用于测量物料体积的方法和装置 | |
Zhang et al. | Synthetic aperture based on plenoptic camera for seeing through occlusions | |
JP2013101464A (ja) | 画像処理装置および画像処理方法 | |
CN110825079A (zh) | 一种地图构建方法及装置 | |
JP2016114445A (ja) | 3次元位置算出装置およびそのプログラム、ならびに、cg合成装置 | |
WO2023109960A1 (zh) | 三维扫描的处理方法、装置和三维扫描设备 | |
CN114877826A (zh) | 一种双目立体匹配三维测量方法、系统及存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22906705; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 3242368; Country of ref document: CA |
| ENP | Entry into the national phase | Ref document number: 2024535326; Country of ref document: JP; Kind code of ref document: A |
| ENP | Entry into the national phase | Ref document number: 20247020190; Country of ref document: KR; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 1020247020190; Country of ref document: KR |
| WWE | Wipo information: entry into national phase | Ref document number: 2022906705; Country of ref document: EP |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2022906705; Country of ref document: EP; Effective date: 20240717 |