WO2021203883A1 - Three-dimensional scanning method, three-dimensional scanning system, and computer-readable storage medium - Google Patents
- Publication number
- WO2021203883A1 (PCT/CN2021/079192)
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/245—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
- G01B11/254—Projection of a pattern, viewing through a pattern, e.g. moiré
Definitions
- This application relates to the field of three-dimensional scanning technology, in particular to three-dimensional scanning methods, three-dimensional scanning systems, computer equipment, and computer-readable storage media.
- An optical 3D scanner is a device that uses optical imaging to obtain 3D information about a measured object. It is currently widely used in industrial product inspection, reverse engineering, simulation, positioning, and other fields.
- Tracking 3D scanning is a new type of 3D scanning technology that mainly uses a 3D scanner and a tracker to realize 3D measurement of objects. Compared with traditional marker-based 3D scanning or photogrammetric 3D scanning, tracking 3D scanning is more convenient to use, more stable, and has a larger measurement range, making it easy for users to perform 3D measurement in workshops, outdoors, and various other complex environments.
- Trackers mainly include laser trackers, fixed dual-camera three-dimensional scanners, posture capture and tracking equipment, head-mounted three-dimensional coordinate data glasses, geometric measurement devices based on LED tag tracking for large objects such as curved ship steel plates, and the like.
- The above existing tracking three-dimensional scanning devices mainly combine a tracker and a scanner to realize the three-dimensional measurement of an object, wherein the tracker is used for stitching three-dimensional data and the scanner is used for obtaining three-dimensional data.
- the realization of the 3D scanning function depends on the function and accuracy of the scanner itself.
- The scanners in the above existing devices are mainly hand-held monochrome laser scanners or raster projection scanners, which have relatively limited functions and lack adaptability for scanning scenarios with higher requirements for color and texture.
- the existing tracking devices cannot yet achieve such functions.
- Color texture scanning devices are mainly hand-held white light scanners, which mainly include a projector, one or more black and white cameras and a color camera.
- The projector projects coded structured light, and the black-and-white cameras capture the contour information of the object during projection; the point and surface information of successive frames is stitched through feature recognition. To prevent the projected pattern from affecting the texture, the color camera acquires the surface texture information of the object in the gaps between projections, and texture mapping is performed based on the three-dimensional information from the black-and-white cameras.
- The main problem with the above device is that the color camera and the black-and-white cameras shoot alternately rather than synchronously. Although the interval is very short, because the two are not synchronized, the texture-mapped 3D model exhibits a certain color texture misalignment compared with the original scanned object.
- the embodiments of the present application provide a three-dimensional scanning method, a three-dimensional scanning system, a computer device, and a computer-readable storage medium to at least solve the problem of misalignment of the color texture of the three-dimensional model in the related art.
- an embodiment of the present application provides a three-dimensional scanning system, including a three-dimensional scanner, a tracker, and a computing unit.
- the three-dimensional scanner and the tracker are respectively electrically connected to the computing unit;
- the three-dimensional scanner is used to collect the three-dimensional point and surface information of the scanned object;
- the tracker is used to track the first pose of the three-dimensional scanner when the three-dimensional scanner collects the three-dimensional point and surface information, and the calculation unit is used to reconstruct a three-dimensional model of the scanned object according to the three-dimensional point and surface information and the first pose;
- the three-dimensional scanner is also used to collect color texture information of the surface of the scanned object; the tracker is also used to track the second pose of the three-dimensional scanner when the three-dimensional scanner collects the color texture information; the calculation unit is further configured to generate a color texture on the surface of the three-dimensional model according to the color texture information and the second pose.
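As an illustration of the overall idea above, the following Python sketch (all names, poses, and numbers are hypothetical, not taken from the application) shows why tracking a separate pose for the texture capture matters: each capture is transformed into the tracker's coordinate system with the pose recorded at its own capture instant, rather than reusing a stale pose.

```python
# Minimal sketch of the capture pipeline: geometry and texture each get
# their own tracked pose (R, t), so both land correctly in the tracker frame.

def apply_pose(points, R, t):
    """Transform scanner-frame points into the tracker frame: p' = R @ p + t."""
    return [
        tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))
        for p in points
    ]

# First pose: tracked while the geometry (point/surface) frame is captured.
R1, t1 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]], (0.0, 0.0, 0.0)
geometry = apply_pose([(0.1, 0.2, 0.5)], R1, t1)

# Second pose: tracked while the color-texture frame is captured. The scanner
# has moved 1 mm in the meantime (assumed), and the second pose accounts for it.
R2, t2 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]], (0.0, 0.001, 0.0)
texture_points = apply_pose([(0.1, 0.2, 0.5)], R2, t2)

print(geometry[0])        # (0.1, 0.2, 0.5)
print(texture_points[0])  # ≈ (0.1, 0.201, 0.5)
```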
- the three-dimensional scanner includes: a first camera and a second camera for collecting three-dimensional point and surface information of the scanned object, and a third camera for collecting the color texture information.
- the three-dimensional scanner includes: a first camera and a second camera for collecting three-dimensional point and surface information of the scanned object, wherein the first camera is also used for collecting the color texture information.
- the three-dimensional scanner includes: a first camera and a second camera for collecting three-dimensional point and surface information of the scanned object, wherein the second camera is also used for collecting the color texture information.
- the three-dimensional scanner further includes: a structured light projector for projecting a structured light pattern on the surface of the scanned object when the three-dimensional scanner collects the three-dimensional point and surface information;
- the three-dimensional scanning system further includes: a clock synchronization unit, which is electrically connected to the three-dimensional scanner and the tracker, respectively; the clock synchronization unit is used to provide a clock synchronization signal; wherein,
- the structured light projector, the first camera, the second camera, the third camera, and the tracker work synchronously according to the clock synchronization signal.
- the three-dimensional scanner further includes: a structured light projector for projecting a structured light pattern on the surface of the scanned object when the three-dimensional scanner collects the three-dimensional point and surface information;
- the three-dimensional scanning system further includes: a clock synchronization unit, which is electrically connected to the three-dimensional scanner and the tracker, respectively; the clock synchronization unit is used to provide a clock synchronization signal; wherein,
- the structured light projector, the first camera, the second camera, and the tracker work synchronously according to the clock synchronization signal.
- the three-dimensional scanner further includes: a structured light projector, used to project a structured light pattern in the invisible light band onto the surface of the scanned object when the three-dimensional scanner collects the three-dimensional point and surface information;
- the structured light projection pattern in the invisible light band can be captured by the first camera and the second camera, but cannot be captured by the third camera.
- the three-dimensional scanning system further includes: a visible light source, the visible light source being used to supplement light to the scanned object when collecting color texture information.
- an embodiment of the present application provides a three-dimensional scanning method, including:
- Collect the three-dimensional point and surface information of the scanned object, and track the first pose of the three-dimensional scanner when the three-dimensional point and surface information is collected; collect the color texture information of the surface of the scanned object, and track the second pose of the three-dimensional scanner when the color texture information is collected;
- Reconstruct a three-dimensional model of the scanned object according to the three-dimensional point and surface information and the first pose, and generate a color texture on the surface of the three-dimensional model according to the color texture information and the second pose.
- collecting three-dimensional point and surface information of the scanned object includes:
- the first camera and the second camera are used to collect image information of the scanned object on which the structured light projection pattern is projected on the surface, and generate three-dimensional point and surface information of the scanned object according to the image information.
- the three-dimensional point and surface information and the color texture information of the scanned object are collected non-simultaneously.
- collecting the three-dimensional point and surface information of the scanned object and tracking the first pose of the three-dimensional scanner when the three-dimensional point and surface information is collected, and collecting the color texture information of the surface of the scanned object and tracking the second pose of the three-dimensional scanner when the color texture information is collected, includes:
- Use the first camera and the second camera to collect the three-dimensional point and surface information of the scanned object, and track the first pose of the three-dimensional scanner when the three-dimensional point and surface information is collected; and use the second camera to collect the color texture information of the surface of the scanned object, and track the second pose of the three-dimensional scanner when the color texture information is collected.
- collecting the three-dimensional point and surface information of the scanned object and tracking the first pose of the three-dimensional scanner when the three-dimensional point and surface information is collected, and collecting the color texture information of the surface of the scanned object and tracking the second pose of the three-dimensional scanner when the color texture information is collected, includes:
- Use the first camera and the second camera to collect the three-dimensional point and surface information of the scanned object, and track the first pose of the three-dimensional scanner when the three-dimensional point and surface information is collected; and use the third camera to collect the color texture information of the surface of the scanned object, and track the second pose of the three-dimensional scanner when the color texture information is collected.
- the structured light projection pattern projected on the surface of the scanned object is a structured light projection pattern in the invisible light band; the structured light projection pattern in the invisible light band can be captured by the cameras that collect the three-dimensional point and surface information, but cannot be captured by the camera that collects the color texture information; the three-dimensional point and surface information and the color texture information of the scanned object are collected at the same time.
- the three-dimensional point and surface information and the color texture information are collected in a first coordinate system, and the three-dimensional model is reconstructed in a second coordinate system.
- generating a color texture on the surface of the three-dimensional model according to the color texture information and the second pose includes: mapping the color texture information to the three-dimensional point and surface information in the first coordinate system.
- reconstructing the three-dimensional model of the scanned object according to the three-dimensional point and surface information and the first pose, and generating the color texture on the surface of the three-dimensional model according to the color texture information and the second pose, includes: mapping the color texture information to the three-dimensional point and surface information in the first coordinate system.
- the frequency at which the third camera collects the color texture information is lower than the frequency at which the first camera and the second camera collect the three-dimensional point and surface information.
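One way to realize such a lower texture-capture rate, sketched below with an assumed 4:1 ratio (the ratio and names are illustrative, not from the application), is to trigger the color camera only on every Nth synchronization tick, while the geometry cameras and the tracker fire on every tick, so every frame of either kind has a pose taken at its own capture instant:

```python
# Illustrative trigger schedule: geometry is captured on every synchronization
# tick, color texture only on every Nth tick, and the tracker records a pose
# alongside each capture.
def schedule(ticks, color_every=4):
    events = []
    for k in range(ticks):
        events.append((k, "geometry+pose"))
        if k % color_every == 0:
            events.append((k, "texture+pose"))
    return events

for tick, event in schedule(6):
    print(tick, event)
```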
- An embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor; when the processor executes the computer program, the three-dimensional scanning method described in the above second aspect is implemented.
- an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the three-dimensional scanning method as described in the second aspect is implemented.
- The above three-dimensional scanning method, three-dimensional scanning system, computer device, and computer-readable storage medium collect the three-dimensional point and surface information of the scanned object and track the first pose of the three-dimensional scanner when the three-dimensional point and surface information is collected, and collect the color texture information of the surface of the scanned object and track the second pose of the three-dimensional scanner when the color texture information is collected; the three-dimensional model of the scanned object is reconstructed according to the three-dimensional point and surface information and the first pose, and the color texture is generated on the surface of the three-dimensional model according to the color texture information and the second pose. This solves the problem of color texture misalignment of the three-dimensional model in the related art and improves the accuracy of the color texture mapping of the three-dimensional model.
- Fig. 1a is a schematic structural diagram of a three-dimensional scanning system according to an embodiment of the present application.
- Fig. 1b is a schematic structural diagram of another three-dimensional scanning system according to an embodiment of the present application.
- Fig. 2 is a flowchart of a three-dimensional scanning method according to an embodiment of the present application
- FIG. 3 is a flowchart of the reconstruction process of a three-dimensional model without color texture according to an embodiment of the present application
- FIG. 4 is a flowchart of a method for rebuilding a three-dimensional model with color texture based on real-time color texture information mapping according to an embodiment of the present application
- Fig. 5 is a schematic structural diagram of a three-dimensional scanning system according to an optional embodiment of the present application.
- FIG. 6 is a schematic diagram of the connection structure of each component in the three-dimensional scanning system according to an optional embodiment of the present application.
- Fig. 7 is a flowchart of a three-dimensional scanning method according to an optional embodiment of the present application.
- Fig. 8 is a schematic diagram of the hardware structure of a computer device according to an embodiment of the present application.
- The term "connection" referred to in this application is not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Where there is no conflict, an electrical connection may be a wired connection or a wireless connection.
- The "plurality" referred to in this application refers to two or more.
- “And/or” describes the association relationship of the associated objects, which means that there can be three kinds of relationships. For example, “A and/or B” can mean: A alone exists, A and B exist at the same time, and B exists alone.
- the character “/” generally indicates that the associated objects before and after are in an “or” relationship.
- the terms “first”, “second”, “third”, etc. involved in this application merely distinguish similar objects, and do not represent a specific order for the objects.
- A linear structured light is used as an example to introduce the basic principles of structured light visual inspection and non-contact tracking on which this application is based.
- When performing 3D scanning, the structured light projector first projects a linear laser onto the scanned object.
- The projected linear laser forms a laser projection plane, and a bright line, the scan line, is formed on the surface of the scanned object.
- Since the scan line includes all the surface points where the laser projection plane intersects the object, the three-dimensional coordinates of the corresponding surface points of the object can be obtained from the coordinates of the scan line.
- The scan line is imaged by the camera as a two-dimensional image; from the coordinates of the points on this two-dimensional image, the three-dimensional coordinates of the corresponding object surface points can be calculated. This is the basic principle of structured light visual inspection.
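The scan-line principle above amounts to a ray-plane intersection: each scan-line pixel defines a viewing ray from the camera center, and the 3D surface point is where that ray meets the calibrated laser plane. A minimal sketch (the calibration values are assumed for illustration, not taken from the application):

```python
def triangulate(ray_origin, ray_dir, plane_normal, plane_d):
    """Intersect a camera viewing ray with the laser plane n . x = d.

    The ray is x(s) = origin + s * dir; substituting into the plane
    equation gives s = (d - n . origin) / (n . dir).
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(plane_normal, ray_dir)
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the laser plane")
    s = (plane_d - dot(plane_normal, ray_origin)) / denom
    return tuple(o + s * v for o, v in zip(ray_origin, ray_dir))

# Camera at the origin; a scan-line pixel maps to viewing direction (0.1, 0, 1)
# (hypothetical calibration); the laser plane is z = 2, i.e. n = (0,0,1), d = 2.
point = triangulate((0.0, 0.0, 0.0), (0.1, 0.0, 1.0), (0.0, 0.0, 1.0), 2.0)
print(point)  # (0.2, 0.0, 2.0)
```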
- Non-contact tracking technology uses a tracking camera to capture at least three target features on the surface of the 3D scanner; the spatial position relationship between the target features on the surface of the 3D scanner and the binocular camera (including the first camera and the second camera) of the 3D scanner is pre-calibrated.
- The calculation unit can therefore obtain the pose of the 3D scanner, and the conversion relationship between the coordinate system of the 3D scanner and the coordinate system of the tracker, from the target features captured by the tracking camera.
- Using this conversion relationship, the coordinates of the three-dimensional point and surface information collected by the 3D scanner are converted into the coordinate system of the tracker, and then stitched and fused according to these coordinates to reconstruct a complete three-dimensional model.
- Fig. 1a is a schematic structural diagram of a three-dimensional scanning system according to an embodiment of the present application. As shown in Fig. 1a, the three-dimensional scanning system includes: a three-dimensional scanner 11, a tracker 12, and a computing unit 13, wherein,
- the three-dimensional scanner 11 is electrically connected to the computing unit 13.
- The three-dimensional scanner 11 includes a structured light projector 111, a first camera 1121 and a second camera 1122 for collecting the three-dimensional point and surface information of the scanned object, and at least three target features 113.
- the above-mentioned first camera 1121 and second camera 1122 include cameras, CCD sensors, or CMOS sensors capable of capturing the visible light waveband or the invisible light waveband of the target space.
- the above-mentioned structured light projector 111 includes a projector configured to sequentially project a structured light pattern onto the surface of the scanned object, and may be, for example, a digital light processing (DLP) projector.
- the structured light projected by the structured light projector 111 may be speckle, fringe, Gray code or other coded structured light.
- The structured light projector 111, the first camera 1121, the second camera 1122, and the at least three target features 113 are installed on the mounting frame, and their spatial position relationships are all pre-calibrated. Therefore, in the triangulation calculation, the distances and angles between the target features, the first camera 1121, and the second camera 1122 are known, and the position and projection angle of the structured light projector 111 are also known.
- the at least three target features 113 of the three-dimensional scanner 11 may be self-luminous target features or reflective target features.
- the tracker 12 is electrically connected to the computing unit 13.
- The tracker 12 is used to track the first pose of the three-dimensional scanner 11, by capturing the at least three target features 113 of the three-dimensional scanner 11, when the three-dimensional scanner 11 collects the three-dimensional point and surface information.
- the tracker 12 includes at least one tracking camera, and the tracking camera is used to capture at least three target features 113 fixed on the surface of the three-dimensional scanner 11. Since the spatial position relationship between the at least three target features 113 is pre-calibrated, the pose of the three-dimensional scanner 11 can be determined based on the at least three target features 113.
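Pose recovery from at least three target features can be sketched as follows. This noise-free illustration builds an orthonormal frame from three non-collinear markers in both coordinate systems and reads the rotation off directly; a real tracker would fit the transform in a least-squares sense over many noisy detections (e.g. with an SVD-based method), so all names and marker positions here are illustrative assumptions:

```python
import math

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def _norm(a):
    n = math.sqrt(sum(x * x for x in a))
    return tuple(x / n for x in a)

def frame(p1, p2, p3):
    """Orthonormal frame (three column vectors) from three non-collinear points."""
    e1 = _norm(_sub(p2, p1))
    e3 = _norm(_cross(e1, _sub(p3, p1)))
    e2 = _cross(e3, e1)
    return (e1, e2, e3)

def pose_from_targets(ref, obs):
    """Rigid transform (R, t) with obs_i = R @ ref_i + t, for noise-free points."""
    A, B = frame(*ref), frame(*obs)
    # R = B * A^T: the columns of A must map onto the columns of B.
    R = [[sum(B[k][i] * A[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    t = tuple(obs[0][i] - sum(R[i][j] * ref[0][j] for j in range(3)) for i in range(3))
    return R, t

# Calibrated marker positions in the scanner's own frame (hypothetical):
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
# The same markers as seen by the tracker: scanner rotated 90 degrees about z
# and translated by (1, 2, 3).
obs = [(1.0, 2.0, 3.0), (1.0, 3.0, 3.0), (0.0, 2.0, 3.0)]
R, t = pose_from_targets(ref, obs)
print(t)  # (1.0, 2.0, 3.0)
```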
- the calculation unit 13 is configured to reconstruct a three-dimensional model of the scanned object according to the three-dimensional point and surface information and the first pose collected by the first camera 1121 and the second camera 1122.
- the basic principles for the calculation unit 13 to reconstruct the three-dimensional model of the scanned object are the principle of triangulation and the principle of epipolar line constraint.
- the three-dimensional scanner 11 is also used to collect color texture information on the surface of the scanned object.
- the tracker 12 is also used to track the second pose of the three-dimensional scanner 11 when the three-dimensional scanner 11 collects color texture information of the surface of the scanned object.
- the calculation unit 13 is also used to generate a color texture on the surface of the three-dimensional model based on the color texture information and the second pose.
- The computing unit 13 uses the two-dimensional image information, collected by the three-dimensional scanner 11, of the scanned object onto whose surface the structured light pattern is projected, together with the calibrated spatial position relationship of the multiple cameras, to reconstruct the three-dimensional point and surface information in the coordinate system of the cameras of the three-dimensional scanner 11.
- the calculation unit 13 converts the three-dimensional point and surface information into the coordinate system of the target feature of the three-dimensional scanner 11 according to the conversion relationship between the calibrated camera and the at least three target features fixed on the surface of the three-dimensional scanner 11.
- The tracker 12 simultaneously captures the at least three target features 113 on the surface of the three-dimensional scanner 11. Since the spatial position relationship between the at least three target features 113 is also pre-calibrated, the calculation unit 13 can obtain the conversion relationship between the coordinate system of the tracker 12 and the coordinate system of the target features of the three-dimensional scanner 11 from the captured information of the target features 113 and their known spatial position relationship.
- The calculation unit 13 obtains the coordinates of the three-dimensional point and surface information in the coordinate system of the tracker 12 according to the conversion relationship between the coordinate system of the tracker 12 and the coordinate system of the target features of the three-dimensional scanner 11; based on these coordinates, the three-dimensional point and surface information is used to reconstruct the scanned object in the coordinate system of the tracker 12, yielding the three-dimensional model.
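The chain of conversions described above (camera frame, then target-feature frame, then tracker frame) is a composition of rigid transforms. A minimal sketch with hypothetical identity-rotation transforms (the offsets are illustrative assumptions):

```python
# A point measured in the scanner-camera frame is first converted into the
# target-feature frame (via the factory calibration) and then into the tracker
# frame (via the pose tracked for the current frame).

def compose(R2, t2, R1, t1):
    """Return (R, t) of the chained transform: apply (R1, t1) first, then (R2, t2)."""
    R = [[sum(R2[i][k] * R1[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    t = tuple(sum(R2[i][k] * t1[k] for k in range(3)) + t2[i] for i in range(3))
    return R, t

I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
cam_to_target = (I, (0.1, 0.0, 0.0))      # calibrated once, fixed thereafter
target_to_tracker = (I, (0.0, 0.0, 2.0))  # re-estimated for every frame

R, t = compose(target_to_tracker[0], target_to_tracker[1],
               cam_to_target[0], cam_to_target[1])
p = (0.1, 0.2, 0.5)  # a reconstructed point in the camera frame
p_tracker = tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))
print(p_tracker)  # (0.2, 0.2, 2.5)
```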
- the color texture generated by the calculation unit 13 on the surface of the three-dimensional model is also realized based on the conversion relationship between coordinate systems.
- The point and surface information of the handheld white light scanner in the related art is stitched through feature recognition. Such a scanner cannot obtain the features used for stitching point and surface information while collecting color texture information, so it can only use the coordinates corresponding to the most recently collected point and surface information as the coordinates corresponding to the currently collected color texture information. In addition, because the handheld white light scanner uses a color camera to obtain the color texture information and black-and-white cameras to obtain the point and surface information, in non-synchronized alternating shots, there is a time interval between the collection times of the point and surface information and the color texture information.
- Any movement of the handheld white light scanner during this time interval will cause the coordinates corresponding to the last collected point and surface information to differ from the coordinates corresponding to the currently collected color texture information, which causes the color texture of the 3D model to be misaligned.
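The scale of this misalignment can be estimated as scanner speed multiplied by the capture interval; the numbers below are illustrative assumptions, not figures from the application:

```python
# Back-of-the-envelope illustration: the texture offset caused by
# unsynchronized capture is roughly scanner speed x capture interval.
speed_mm_per_s = 100.0   # typical hand motion, assumed
interval_s = 0.010       # gap between geometry and texture shots, assumed
offset_mm = speed_mm_per_s * interval_s
print(offset_mm)  # 1.0 -> about 1 mm of texture misalignment
```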
- Because the tracker 12 tracks the pose of the three-dimensional scanner 11 in a non-contact manner both when the three-dimensional point and surface information is collected and when the color texture information is collected,
- the three-dimensional scanning system provided in this embodiment can obtain the accurate coordinates, in the coordinate system of the tracker 12, of both the three-dimensional point and surface information and the color texture information collected by the three-dimensional scanner 11. This solves the problem of color texture misalignment of the three-dimensional model in the related art, and improves the accuracy of the color texture mapping of the three-dimensional model.
- a structured light projector 111 is used in this embodiment, and the structured light pattern is projected on the surface of the scanned object when the three-dimensional scanner 11 collects three-dimensional point and surface information.
- The three-dimensional scanning system with the structured light projector 111 in this embodiment uses the structured light pattern projected by the structured light projector 111 as the feature marks, which eliminates the workload of pasting feature markers on the surface of the scanned object. Moreover, because feature markers are no longer pasted on the surface of the scanned object, the reconstructed 3D model with color texture can express the original surface features of the scanned object without extra markers appearing on the surface of the 3D model, which improves the practicability of the 3D scanning system and avoids the post-processing workload of removing such markers from the 3D model.
- the three-dimensional scanner 11 in this embodiment can collect color and texture information on the surface of the scanned object.
- the 3D scanner 11 includes a first camera 1121 and a second camera 1122 for collecting 3D point and surface information of the scanned object, and a third camera 1123 for collecting color texture information, as shown in Figure 1b.
- the 3D scanner 11 includes a first camera 1121 and a second camera 1122 for collecting 3D point and surface information of the scanned object, wherein the first camera 1121 is also multiplexed to collect color texture information.
- using the first camera 1121 to collect three-dimensional point and surface information and collect color and texture information can reduce the cost of the three-dimensional scanning system and reduce the volume and weight of the three-dimensional scanner.
- the first camera 1121 and the second camera 1122 are both color cameras, and one of the color cameras is multiplexed to collect color texture information.
- the 3D scanning system further includes a clock synchronization unit 14, which is electrically connected to the 3D scanner 11 and the tracker 12, respectively.
- the clock synchronization unit 14 is used to provide a clock synchronization signal.
- the structured light projector 111, the first camera 1121, the second camera 1122, and the tracker 12 in the 3D scanner 11 work synchronously according to the clock synchronization signal; the third camera 1123 and the tracker 12 work synchronously according to the clock synchronization signal.
- the clock synchronization unit 14 in this embodiment can be an independent unit separate from the tracker 12, the 3D scanner 11, and the calculation unit 13, or it can be located in any one of the tracker 12, the 3D scanner 11, and the calculation unit 13.
- the structured light projector 111, the first camera 1121, the second camera 1122, and the tracker 12 in the three-dimensional scanner 11 work synchronously according to the clock synchronization signal.
- the structured light projector 111 is facing the surface of the scanned object.
- the first camera 1121, the second camera 1122 and the tracker 12 take pictures at the same time.
- the third camera 1123 and the tracker in the three-dimensional scanner 11 work synchronously according to the clock synchronization signal, including: the third camera 1123 and the tracker 12 take pictures at the same time.
- the structured light projector 111, the first camera 1121, and the second camera 1122 may work simultaneously with the third camera 1123, or may work non-simultaneously.
- the three-dimensional scanner includes a first camera 1121, a second camera 1122, a third camera 1123, and a structured light projector 111.
- the structured light projector 111 is used to project a structured light projection pattern in the invisible light band on the surface of the scanned object when the three-dimensional scanner collects three-dimensional point and surface information.
- the three-dimensional scanning system also includes: a clock synchronization unit 14, which is electrically connected to the three-dimensional scanner 11 and the tracker 12, respectively; the clock synchronization unit 14 is used to provide a clock synchronization signal; wherein the structured light projector 111, the first The camera 1121, the second camera 1122, the third camera 1123, and the tracker work synchronously according to a clock synchronization signal.
- the structured light projection pattern of the invisible light band projected by the structured light projector 111 can be captured by the first camera 1121 and the second camera 1122, but cannot be captured by the third camera 1123.
- the first camera 1121, the second camera 1122, and the third camera 1123 can simultaneously collect 3D point and surface information or color texture information, which simplifies the timing design of the collection process and also helps to improve the efficiency of 3D model reconstruction.
- the three-dimensional scanning system further includes a visible light source, and the visible light source is used in conjunction with the third camera 1123.
- the visible light source is used to supplement light to the scanned object when the third camera 1123 collects color texture information.
- the visible light source can be one or more flashes or light boxes. In the case of a single flash or light box, it supplements light on the surface of the scanned object currently being scanned by the three-dimensional scanner 11; in the case of multiple flashes or light boxes, they surround the scanned object to provide fill light from multiple angles.
- the visible light source may be electrically connected to the clock synchronization unit 14 through a wired connection or a wireless connection, so as to work synchronously with the third camera 1123.
- the three-dimensional scanning method provided by this embodiment will be described and illustrated below. It should be noted that although the three-dimensional scanning method described in this embodiment is preferably used in the three-dimensional scanning system provided in the embodiment of the present application, applying the three-dimensional scanning method to other three-dimensional scanning systems based on non-contact tracking is also conceivable.
- Fig. 2 is a flowchart of a three-dimensional scanning method according to an embodiment of the present application. As shown in Fig. 2, the process includes the following steps:
- Step S201 Collect the three-dimensional point and surface information of the scanned object, and track the first pose of the three-dimensional scanner when the three-dimensional point and surface information is collected.
- the three-dimensional point and surface information of the scanned object can be collected through the principle of binocular vision imaging.
- the structured light projection pattern is projected onto the surface of the scanned object by a visible-light-band or invisible-light-band structured light projector; the first camera and the second camera, whose spatial position relationship is pre-calibrated, then photograph the surface of the scanned object, and the three-dimensional point and surface information of the scanned object is reconstructed through the principle of binocular vision imaging.
- the structured light projection pattern may be a speckle pattern, a stripe pattern, a Gray code pattern or other coded structured light patterns.
- the first pose of the 3D scanner can be tracked by non-contact tracking.
- at least three target features are fixed on the surface of the three-dimensional scanner, and the spatial position relationship of the at least three target features is pre-calibrated. These at least three target features are tracked by the tracker and combined with the pre-calibrated spatial position relationship of the at least three target features, the first pose information of the three-dimensional scanner can be obtained.
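The pose computation from the tracked target features can be sketched as follows. This is a minimal illustration, not the patent's actual implementation; the function name is hypothetical. Given at least three non-collinear target points with pre-calibrated coordinates in the scanner frame and the same points as observed in the tracker frame, the rigid transform (R, T) can be recovered with the classic Kabsch/Procrustes method:

```python
import numpy as np

def estimate_pose(scanner_pts, tracker_pts):
    """Rigid transform (R, T) such that tracker_pts ≈ scanner_pts @ R.T + T.

    scanner_pts: (N, 3) pre-calibrated target coordinates in the scanner frame.
    tracker_pts: (N, 3) the same targets as observed by the tracker
    (N >= 3, non-collinear). Kabsch/Procrustes solution via SVD.
    """
    ca, cb = scanner_pts.mean(0), tracker_pts.mean(0)
    H = (scanner_pts - ca).T @ (tracker_pts - cb)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = cb - R @ ca
    return R, T
```

The returned (R, T) corresponds to the first or second pose of the scanner in the tracker's coordinate system, depending on which capture the observations come from.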
- the pose information includes position information and posture information.
- Step S202 Collect the color texture information of the surface of the scanned object, and track the second pose of the three-dimensional scanner when the color texture information is collected.
- the color texture information of the surface of the scanned object can be collected through the first camera or the third camera.
- the spatial position of the third camera is also pre-calibrated.
- the second pose of the 3D scanner can also be tracked through non-contact tracking.
- at least three target features are fixed on the surface of the three-dimensional scanner, and the spatial position relationship of the at least three target features is also pre-calibrated. These at least three target features are tracked by the tracker and combined with the pre-calibrated spatial position relationship of the at least three target features, the second pose information of the three-dimensional scanner can be obtained, and the pose information also includes position information and posture information.
- Step S203 Based on the three-dimensional point and surface information and the first pose, reconstruct a three-dimensional model of the scanned object.
- the three-dimensional model of the scanned object can be reconstructed according to the known three-dimensional model reconstruction method in the related art.
- Step S204 Generate a color texture on the surface of the three-dimensional model according to the color texture information and the second pose information.
- the coordinate system of the color texture information can be converted to the same coordinate system as the three-dimensional model (equivalent to the above-mentioned second coordinate system) by coordinate system conversion, thereby mapping the color texture information to the surface of the three-dimensional model. The coordinates of the color texture information can be converted into the coordinate system of the reconstructed three-dimensional model based on the second pose information, the pre-calibrated spatial position information of the camera used to collect color texture information, and the pre-calibrated spatial position relationship of the at least three target features.
- when the three-dimensional scanning system collects color texture information and three-dimensional point and surface information at the same time, some of the collected first poses and second poses are the same pose; for these same poses, the conversion relationship from the first coordinate system to the second coordinate system is the same for the color texture information and the three-dimensional point and surface information. The color texture information can therefore be directly mapped onto the three-dimensional point and surface information in the first coordinate system to obtain three-dimensional point and surface information with color texture; the coordinates of the color-textured three-dimensional point and surface information are then converted from the first coordinate system to the second coordinate system and the three-dimensional model is reconstructed, thereby obtaining a three-dimensional model of the scanned object with color texture.
- FIG. 3 is a flowchart of the reconstruction process of a three-dimensional model without color texture according to an embodiment of the present application. As shown in FIG. 3, the three-dimensional scanning and reconstruction process of this embodiment includes the following steps:
- Step S301 calibrate the target features on the surface of the three-dimensional scanner and the spatial position relationship between all the cameras in the three-dimensional scanner.
- Step S302 Project a structured light pattern onto the surface of the scanned object, obtain two-dimensional image information of the scanned object through multiple cameras in the three-dimensional scanner, and, using the calibrated spatial position relationship between the cameras, reconstruct the three-dimensional point and surface information in the camera coordinate system according to the principles of triangulation and the epipolar constraint.
- Step S303 According to the conversion relationship between the calibrated camera and the target feature on the surface of the 3D scanner, the coordinates of the 3D point and surface information in the camera coordinate system are converted to the coordinate system of the target feature on the 3D scanner surface.
- Step S304 When the camera of the 3D scanner is shooting, the tracker synchronously captures at least three target features on the surface of the 3D scanner. According to the known spatial position distribution relationship of the target feature on the surface of the three-dimensional scanner, the conversion relationship between the coordinate system of the tracker and the coordinate system of the target feature of the three-dimensional scanner is obtained.
- Step S305 According to the conversion relationship between the coordinate system of the tracker and the coordinate system of the target feature of the three-dimensional scanner, the coordinates of the three-dimensional point and surface information in the coordinate system of the tracker are obtained, and then according to the three-dimensional The point and surface information and its coordinates are reconstructed to obtain a three-dimensional model of the scanned object.
- steps S301 to S305 are exemplary descriptions of the reconstruction process of the three-dimensional model without color texture in the embodiment of the present application, and the actual three-dimensional reconstruction process may not be limited to this.
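The triangulation in step S302 can be illustrated with a minimal linear (DLT) two-view triangulation of one matched point pair. This is an illustrative sketch under standard pinhole assumptions, not the patent's specific algorithm; the 3x4 projection matrices encode the calibrated spatial position relationship between the two cameras:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one matched point pair.

    P1, P2: 3x4 projection matrices of the calibrated first and second cameras.
    uv1, uv2: matched pixel coordinates (e.g. found along epipolar lines).
    Returns the 3D point in the camera coordinate system.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize
```

Repeating this for every matched feature of the projected structured light pattern yields the point and surface information of step S302.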
- the color texture information may be mapped to the surface of the three-dimensional model after the reconstruction of the three-dimensional model is completed, or even after the three-dimensional model is globally optimized for the three-dimensional point and surface information.
- the color texture information may be mapped to the three-dimensional point and surface information corresponding to the three-dimensional model during or before the reconstruction of the three-dimensional model. For example, the color texture information can be mapped onto the three-dimensional point and surface information in the coordinate system of the three-dimensional scanner or the coordinate system of the tracker, and the three-dimensional point and surface information carrying the color texture information can then be spliced and fused in the coordinate system of the tracker to obtain a three-dimensional model with color texture.
- step S201 and step S202 may be executed simultaneously or non-simultaneously.
- the collection of the three-dimensional point and surface information and color texture information of the scanned object is non-simultaneous.
- the 3D scanner can use two cameras to collect 3D point and surface information, and one of the cameras can collect color texture information.
- Three-dimensional scanners can also use three cameras, of which two cameras collect three-dimensional point and surface information, and the other camera collects color texture information.
- the structured light projector does not need to project a structured light projection pattern when color texture information is collected, so the structured light projector can be any visible-light-band or invisible-light-band structured light projector, as long as the first camera and the second camera can capture the structured light projection pattern it projects.
- the visible light band is also called white light;
- the invisible light band can be but not limited to the infrared light band.
- a three-dimensional scanner including three cameras can be used, wherein two cameras collect three-dimensional point and surface information and the other camera collects color texture information; and the structured light projection pattern projected by the structured light projector of the three-dimensional scanner is a structured light projection pattern in the invisible light band.
- the structured light projection pattern in the invisible light band can be captured by the cameras that collect three-dimensional point and surface information, but cannot be captured by the camera that collects color texture information. Therefore, even if the three cameras shoot at the same time, the texture camera will not capture the structured light projection pattern when collecting color texture information, so the structured light projection pattern does not interfere with the collected color texture of the scanned object's surface.
- the third camera of the three-dimensional scanner collects color texture information on the surface of the scanned object.
- the color texture information includes the coordinates in the coordinate system of the camera of the three-dimensional scanner and the color information corresponding to each coordinate. Since the spatial position relationship between the camera of the 3D scanner and the target features on the surface of the 3D scanner is calibrated in advance, the conversion relationship between the coordinate system of the camera and the coordinate system of the target features of the 3D scanner can be obtained; according to this conversion relationship, the coordinates of the color texture information can be converted from the coordinate system of the camera to the coordinate system of the target features. While the third camera of the three-dimensional scanner is shooting, the tracker simultaneously captures at least three target features on the surface of the three-dimensional scanner. Since the spatial position relationship between the at least three target features is also pre-calibrated, the conversion relationship between the coordinate system of the tracker and the coordinate system of the target features of the 3D scanner can be obtained from the captured target feature information and their known spatial position relationship. With this conversion relationship, the coordinates of the color texture information can be converted to the coordinate system of the tracker, the mapping relationship between the color texture information and the three-dimensional model can be obtained, and finally the color texture is generated on the surface of the three-dimensional model according to the mapping relationship.
- the color texture information can also be mapped onto the three-dimensional point and surface information in real time, so that three-dimensional point and surface information with color texture is obtained.
- Fig. 4 is a flowchart of a method for rebuilding a three-dimensional model with color texture based on real-time color texture information mapping according to an embodiment of the present application. As shown in Fig. 4, the process includes the following steps:
- Step S401 According to the image information and the spatial position relationship of the multiple cameras, the three-dimensional point and surface information in the coordinate system of the camera of the three-dimensional scanner is reconstructed.
- Step S402 In the coordinate system of the camera of the three-dimensional scanner, map the color texture information collected synchronously with the image information to the three-dimensional point and surface information to obtain the three-dimensional point and surface information with the color texture.
- Step S403 According to the conversion relationship between the coordinate system of the camera of the three-dimensional scanner and the coordinate system of the target feature of the three-dimensional scanner, transform the three-dimensional point and surface information with color texture into the coordinate system of the target feature of the three-dimensional scanner.
- Step S404 Obtain the conversion relationship between the coordinate system of the tracker and the coordinate system of the target feature of the 3D scanner according to the at least three target features captured by the tracker; wherein, the spatial position relationship of the at least three target features on the 3D scanner It is pre-calibrated.
- Step S405 According to the conversion relationship between the coordinate system of the tracker and the coordinate system of the target feature of the 3D scanner, the coordinates of the three-dimensional point surface information with color texture in the coordinate system of the tracker are obtained, and according to the three-dimensional point surface with color texture The coordinates of the information in the coordinate system of the tracker are reconstructed to obtain a three-dimensional model with color texture on the surface.
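Steps S402 to S405 can be sketched in miniature as two chained rigid transforms applied to the reconstructed points while their sampled colors ride along unchanged. This is an illustrative sketch; the function names and conventions are assumptions, not the patent's implementation:

```python
import numpy as np

def apply_rt(points, R, T):
    """Apply a rigid transform to an (N, 3) point array."""
    return points @ R.T + T

def colored_points_in_tracker_frame(points_cam, colors, R1, T1, R2, T2):
    """Steps S402-S405 in miniature (illustrative only).

    points_cam: (N, 3) points reconstructed in the scanner-camera frame.
    colors:     (N, 3) per-point RGB sampled from the synchronized texture image.
    R1, T1: camera frame -> scanner target-feature frame (pre-calibrated).
    R2, T2: target-feature frame -> tracker frame (from the tracked pose).
    Colors are attached in the camera frame (S402) and are unchanged by the
    coordinate conversions; only coordinates are converted.
    """
    points_target = apply_rt(points_cam, R1, T1)       # step S403
    points_tracker = apply_rt(points_target, R2, T2)   # step S405
    return points_tracker, colors
```

Accumulating the per-frame outputs in the tracker frame yields the color-textured point and surface data from which the preview model is reconstructed.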
- the above-mentioned 3D model reconstruction with color texture based on real-time color texture information mapping is particularly suitable for the scanning prompt process in the 3D scanning process, that is, the process of generating the 3D model preview image with color texture in the scan preview image.
- Projecting color texture to the surface of the 3D model can be implemented in many ways.
- One way to project the color texture onto the surface of the 3D model is to perform color rendering on the point cloud corresponding to the 3D model based on the color texture information, that is, to assign the color information in the color texture information to the corresponding points in the point cloud.
- This method is particularly suitable for the reconstruction process of a three-dimensional model with color texture based on real-time color texture information mapping shown in step S401 to step S405.
- Another way to project the color texture onto the surface of the 3D model is to divide the surface of the 3D model into grids, determine the color texture information corresponding to each grid obtained by the division, and fill each grid with its corresponding color texture information.
- This method is especially suitable for the color texture post-processing of the three-dimensional model reconstruction process with color texture.
- the color texture post-processing refers to generating a color texture on the surface of the three-dimensional model after the scan of the three-dimensional model is completed.
- the frequency at which the third camera collects color texture information is lower than the frequency at which the first camera and the second camera collect three-dimensional point and surface information. For example, the frequency at which the first camera and the second camera collect 3D point and surface information can be several times the frequency at which the third camera collects color texture information. This reduces the number of times the third camera collects color texture information, the amount of image data transmitted, and the computing resources spent processing image data.
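The lower texture-capture frequency can be expressed as a simple scheduling policy in which the texture camera fires on every Nth geometry frame. The ratio and the policy itself are illustrative assumptions, not values prescribed by the text:

```python
def texture_schedule(n_geometry_frames, ratio):
    """Indices of geometry frames at which the texture camera also fires,
    assuming the texture camera runs at 1/ratio of the geometry frame rate
    (an illustrative policy; the actual ratio is system-dependent)."""
    return [i for i in range(n_geometry_frames) if i % ratio == 0]
```

Frames in between reuse the most recent texture capture, which is what keeps the image-transfer and processing load low.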
- Fig. 5 is a schematic structural diagram of a three-dimensional scanning system according to an optional embodiment of the present application.
- the three-dimensional scanning system includes: a non-contact tracker 12 that includes at least one tracking camera for capturing the pose of the three-dimensional scanner.
- the three-dimensional scanner 11 is used to perform three-dimensional scanning based on the principle of triangulation.
- the three-dimensional scanner includes at least one structured light projector 111, at least one binocular camera pair (equivalent to the aforementioned first camera 1121 and second camera 1122), at least one texture camera (equivalent to the aforementioned third camera 1123), and multiple target features fixed on the surface of the 3D scanner, of which at least three can be captured by the tracker 12 within the tracker's field of view; the calculation unit 13 is used to generate 3D point and surface information, calculate conversion matrices, perform coordinate conversion, and reconstruct the 3D model.
- FIG. 6 is a schematic diagram of the connection structure of each component in the three-dimensional scanning system according to an optional embodiment of the present application.
- the calculation unit 13 further includes: a clock synchronization unit 14, which is connected to the three-dimensional scanner 11 (its cameras and the structured light projector 111) and the tracker 12 to provide a clock synchronization signal; a two-dimensional image feature extractor 131, used to extract two-dimensional line sets of at least two linear patterns from the two-dimensional images of the scanned object captured by the binocular camera and the tracking camera; a three-dimensional point and surface information generator 132, used to generate a three-dimensional point and surface information set based on the two-dimensional line sets; a texture feature extractor 133, used to extract the color texture information of the scanned object captured by the third camera; a texture mapper 134, used to map the color texture information onto the three-dimensional point and surface information to perform color texture mapping; and a coordinate converter 135, used to calculate conversion (RT) matrices between different coordinate systems and perform coordinate conversion.
- Fig. 7 is a flowchart of a three-dimensional scanning method according to an optional embodiment of the present application. As shown in Fig. 7, the process includes the following steps:
- Step S701 calibrate the target feature on the three-dimensional scanner and the spatial position relationship between one or more binocular cameras and one texture camera.
- Step S702 The scanner projects structured light onto the surface of the object to be scanned, the scanner's cameras acquire two-dimensional images, matching points are found using the calibrated spatial position relationship of the scanner's cameras based on the epipolar constraint relationship between the binocular images and related algorithms, and then the three-dimensional point and surface information P in the scanner camera coordinate system Oc is reconstructed according to the principle of triangulation.
- Step S703 The texture camera acquires color texture information on the surface of the object.
- Step S705 The tracker captures the scanner's target features; since the spatial position distribution of the target features on the scanner is known, the space resection algorithm can be used to obtain the exterior orientation elements of the image, thereby obtaining the conversion matrix R2T2 from the tracker coordinate system to the scanner target-feature coordinate system.
- Step S706 Using R2T2, convert the point and surface information P1 (the point and surface information P converted to the target-feature coordinate system via the calibrated conversion R1T1: P1 = P·R1 + T1) to the tracker coordinate system to obtain P2: P2 = P1·R2 + T2 = (P·R1 + T1)·R2 + T2. That is, the coordinates, in the world coordinate system, of the point and surface information of the surface of the object being scanned.
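The chained conversion of step S706 can be checked numerically. Written in the patent's row-vector convention (P' = P·R + T), composing the two calibrated conversions gives the world coordinates directly; the helper names below are illustrative:

```python
import numpy as np

def rt(points, R, T):
    """One conversion step in the row-vector convention: P' = P·R + T,
    for an (N, 3) array of points."""
    return points @ R + T

def camera_to_tracker(P, R1, T1, R2, T2):
    """Chain the two conversions: P2 = (P·R1 + T1)·R2 + T2
    (camera frame -> target-feature frame -> tracker/world frame)."""
    return rt(rt(P, R1, T1), R2, T2)
```

Because both steps are rigid transforms, the composition is itself a rigid transform, so it can equivalently be precomputed once per frame and applied to all points.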
- Step S707 According to the conversion relationship between the tracker and the scanner target-feature coordinate system, the coordinates of the texture information are converted to the tracker coordinate system, and texture mapping is performed on the texture information in the tracker coordinate system.
- Texture mapping can be a color rendering of a point cloud, or it can be mapped to a surface by dividing a grid.
- the number of shots taken by the texture camera can be less than the number of shots taken by the binocular camera.
- the above texture mapping can be performed in real time, that is, the color texture information is mapped onto the three-dimensional point and surface data in the current coordinate system according to the spatial position conversion relationship of the scanner at the current time; it can also be performed in post-processing, that is, after the scan is completed and the point and surface information has been globally optimized, mapping is performed according to the conversion relationship of the texture images.
- the real-time texture display is used only for scanning prompts, generally by coloring the point cloud; the post-processing texture, that is, grid mapping according to the RT pose of each texture image after scanning, is used to generate the textured grid model output.
- the texture mapping step in step S707 includes the following steps:
- Step 1 Determine the effective texture image of the geometric triangle of the model:
- the triangular mesh of the 3D model can be converted to the texture camera coordinate system through the following formula to obtain the texture coordinates corresponding to the vertices of the triangular mesh; after image slicing, only the required texture images are retained: P_uv = K·(R3·P_w + T3) (in homogeneous coordinates), where P_uv represents the two-dimensional pixel coordinates in the texture camera coordinate system, K represents the texture camera intrinsic parameter matrix, P_w represents the grid vertex coordinates in the world coordinate system, and R3T3 represents the conversion matrix from the world coordinate system to the texture camera coordinate system.
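The Step 1 conversion of mesh vertices to texture coordinates can be sketched as follows, making the perspective divide explicit. This is an illustrative pinhole-model implementation; the function name and axis conventions are assumptions:

```python
import numpy as np

def project_to_texture(K, R3, T3, P_w):
    """Project world-frame mesh vertices (N, 3) into texture-image pixels.

    Implements P_uv = K·(R3·P_w + T3), followed by the usual
    perspective (homogeneous) divide.
    """
    P_cam = P_w @ R3.T + T3        # world -> texture-camera frame
    x = P_cam @ K.T                # apply intrinsics K
    return x[:, :2] / x[:, 2:3]    # homogeneous divide -> pixel coordinates
```

Vertices that project outside the image bounds (or behind the camera, P_cam z <= 0) would be rejected when selecting the effective texture image for each triangle.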
- Step 2 Sample the geometric triangles, using bilinear interpolation to determine the color value of each sampling point in the effective texture image, thereby determining the color of the given triangle in the effective texture image.
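The bilinear interpolation used in Step 2 can be sketched as follows. The (u, v) axis convention (u along columns, v along rows) is an assumption for illustration:

```python
import numpy as np

def bilinear_sample(image, u, v):
    """Bilinearly interpolate an HxWxC image at a non-integer pixel (u, v).

    Blends the four surrounding pixels by their fractional distances;
    (u, v) must lie inside the image so that all four neighbors exist.
    """
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    p00 = image[v0, u0].astype(float)          # top-left
    p01 = image[v0, u0 + 1].astype(float)      # top-right
    p10 = image[v0 + 1, u0].astype(float)      # bottom-left
    p11 = image[v0 + 1, u0 + 1].astype(float)  # bottom-right
    top = p00 * (1 - du) + p01 * du
    bot = p10 * (1 - du) + p11 * du
    return top * (1 - dv) + bot * dv
```

Sampling at the projected texture coordinates of each triangle's sample points yields the triangle's color values from the effective texture image.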
- Step 3 According to the position relationship between the geometric model and the texture camera, the weight of the texture image is defined, and the composite weight is constructed to merge the texture.
- the defined function weights include normal vector weights, edge weights and geometric weights.
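The text names normal-vector, edge, and geometric weights for Step 3 without giving their formulas, so the terms below are plausible stand-ins, not the patent's definitions: a cosine term between the surface normal and the view direction, a falloff near the texture-image border, and a penalty on distance from the texture camera:

```python
import numpy as np

def composite_weight(normal, view_dir, dist_to_edge, depth,
                     w_n=1.0, w_e=1.0, w_g=1.0):
    """Illustrative composite weight for blending candidate texture images.

    normal:       unit surface normal of the triangle.
    view_dir:     unit vector from the triangle toward the texture camera.
    dist_to_edge: distance (pixels) from the projected point to the image border.
    depth:        distance from the triangle to the texture camera.
    All three component weights and the linear combination are assumptions.
    """
    normal_w = max(0.0, float(np.dot(normal, view_dir)))  # back-facing -> 0
    edge_w = min(1.0, dist_to_edge / 50.0)                # 50 px margin, arbitrary
    geom_w = 1.0 / (1.0 + depth)                          # nearer camera preferred
    return w_n * normal_w + w_e * edge_w + w_g * geom_w
```

Each candidate texture image's contribution to a triangle would then be proportional to its composite weight, which is one common way to merge textures across overlapping views.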
- Step 4 Save the geometric model and texture information, record the corresponding relationship between the model and the texture image, and display the three-dimensional model with color texture.
- FIG. 8 is a schematic diagram of the hardware structure of a computer device according to an embodiment of the present application.
- the computer device may include a processor 81 and a memory 82 storing computer program instructions.
- the aforementioned processor 81 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
- the memory 82 may include a large-capacity memory for data or instructions.
- the storage 82 may include a hard disk drive (HDD), a floppy disk drive, a solid state drive (SSD), flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these.
- the storage 82 may include removable or non-removable (or fixed) media.
- the memory 82 may be internal or external to the data processing device.
- the memory 82 is a non-volatile (Non-Volatile) memory.
- the memory 82 includes a read-only memory (Read-Only Memory, ROM for short).
- the ROM can be a mask-programmed ROM, a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), an electrically alterable ROM (EAROM), or flash memory (FLASH), or a combination of two or more of these.
- the processor 81 reads and executes computer program instructions stored in the memory 82 to implement any one of the three-dimensional scanning methods in the foregoing embodiments.
- the computer device may further include a communication interface 83 and a bus 80.
- the processor 81, the memory 82, and the communication interface 83 are connected through the bus 80 and complete mutual communication.
- the communication interface 83 is used to implement communication between various modules, devices, units, and/or devices in the embodiments of the present application.
- the communication interface 83 can also implement data communication with other components such as external devices, image acquisition devices, databases, external storage, and image processing workstations.
- the bus 80 includes hardware, software, or both, and couples the components of the computer device to each other.
- the bus 80 includes but is not limited to at least one of the following: a data bus (Data Bus), an address bus (Address Bus), a control bus (Control Bus), an expansion bus (Expansion Bus), and a local bus (Local Bus).
- the bus 80 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Extended (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or a combination of two or more of these.
- the bus 80 may include one or more buses.
- an embodiment of the present application further provides a computer-readable storage medium.
- the computer-readable storage medium stores computer program instructions; when the computer program instructions are executed by the processor, any one of the three-dimensional scanning methods in the foregoing embodiments is implemented.
- the three-dimensional point and surface information with color texture of the scanned object is obtained through the non-contact tracking scanning method, and the three-dimensional model is reconstructed from this information; alternatively, after the three-dimensional model is reconstructed, the color texture information of the scanned object obtained through the non-contact tracking scanning method is mapped onto the surface of the three-dimensional model.
- the tracker captures the real-time pose of the three-dimensional scanner, ensuring that an accurate conversion relationship is obtained for each texture frame.
- the embodiments of the present application can therefore flexibly and conveniently perform color texture scanning of the surfaces of large objects in complex environments and accurately reconstruct a three-dimensional model with color texture, which is especially suitable for objects with rich color texture.
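The description above hinges on one operation: using the pose captured by the tracker to convert data collected in the scanner's local coordinate system into the global (tracker) coordinate system. The patent gives no implementation; the following is a minimal sketch of that conversion, assuming the pose is expressed as a 4x4 homogeneous matrix, with hypothetical function names (`make_pose`, `scanner_to_tracker`):

```python
import numpy as np

def make_pose(R, t):
    """Assemble a 4x4 homogeneous pose from a 3x3 rotation R and a translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def scanner_to_tracker(points_scanner, pose_scanner_in_tracker):
    """Map 3D points from the scanner's local frame into the tracker's
    global frame using the tracked pose."""
    pts_h = np.hstack([points_scanner, np.ones((len(points_scanner), 1))])
    return (pose_scanner_in_tracker @ pts_h.T).T[:, :3]

# With the identity rotation and a translation of one unit along x,
# every point simply shifts by that translation.
pose = make_pose(np.eye(3), np.array([1.0, 0.0, 0.0]))
pts = np.array([[0.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
print(scanner_to_tracker(pts, pose))
```

When the texture frame and a geometry frame are captured at different poses (the claimed first and second poses), each frame's data would be transformed with its own matrix before being merged in the common frame.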
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
Description
Claims (22)
- A three-dimensional scanning system, comprising a three-dimensional scanner, a tracker, and a calculation unit, the three-dimensional scanner and the tracker each being electrically connected to the calculation unit; the three-dimensional scanner is configured to collect three-dimensional point and surface information of a scanned object, the tracker is configured to track a first pose of the three-dimensional scanner while the three-dimensional scanner collects the three-dimensional point and surface information, and the calculation unit is configured to reconstruct a three-dimensional model of the scanned object according to the three-dimensional point and surface information and the first pose; characterized in that: the three-dimensional scanner is further configured to collect color texture information of the surface of the scanned object; the tracker is further configured to track a second pose of the three-dimensional scanner while the three-dimensional scanner collects the color texture information of the surface of the scanned object; and the calculation unit is further configured to generate a color texture on the surface of the three-dimensional model according to the color texture information and the second pose.
- The three-dimensional scanning system according to claim 1, wherein the three-dimensional scanner comprises: a first camera and a second camera configured to collect the three-dimensional point and surface information of the scanned object, and a third camera configured to collect the color texture information.
- The three-dimensional scanning system according to claim 1, wherein the three-dimensional scanner comprises: a first camera and a second camera configured to collect the three-dimensional point and surface information of the scanned object, wherein the first camera is further configured to collect the color texture information.
- The three-dimensional scanning system according to claim 1, wherein the three-dimensional scanner comprises: a first camera and a second camera configured to collect the three-dimensional point and surface information of the scanned object, wherein the second camera is further configured to collect the color texture information.
- The three-dimensional scanning system according to claim 2, wherein the three-dimensional scanner further comprises: a structured light projector configured to project a structured light pattern onto the surface of the scanned object while the three-dimensional scanner collects the three-dimensional point and surface information; the three-dimensional scanning system further comprises: a clock synchronization unit electrically connected to the three-dimensional scanner and the tracker respectively, the clock synchronization unit being configured to provide a clock synchronization signal; wherein the structured light projector, the first camera, the second camera, the third camera, and the tracker work synchronously according to the clock synchronization signal.
- The three-dimensional scanning system according to claim 3 or 4, wherein the three-dimensional scanner further comprises: a structured light projector configured to project a structured light pattern onto the surface of the scanned object while the three-dimensional scanner collects the three-dimensional point and surface information; the three-dimensional scanning system further comprises: a clock synchronization unit electrically connected to the three-dimensional scanner and the tracker respectively, the clock synchronization unit being configured to provide a clock synchronization signal; wherein the structured light projector, the first camera, the second camera, and the tracker work synchronously according to the clock synchronization signal.
- The three-dimensional scanning system according to claim 5, wherein the structured light projector is configured to project a structured light projection pattern in an invisible light band onto the surface of the scanned object while the three-dimensional scanner collects the three-dimensional point and surface information; the structured light projection pattern in the invisible light band can be captured by the first camera and the second camera, and cannot be captured by the third camera.
- The three-dimensional scanning system according to claim 1, wherein the three-dimensional scanning system further comprises: a visible light source configured to supplement light on the scanned object while the color texture information is collected.
- A three-dimensional scanning method, characterized in that the method comprises: collecting three-dimensional point and surface information of a scanned object, and tracking a first pose of a three-dimensional scanner while collecting the three-dimensional point and surface information; collecting color texture information of the surface of the scanned object, and tracking a second pose of the three-dimensional scanner while collecting the color texture information; reconstructing a three-dimensional model of the scanned object according to the three-dimensional point and surface information and the first pose; and generating a color texture on the surface of the three-dimensional model according to the color texture information and the second pose.
- The three-dimensional scanning method according to claim 9, wherein said collecting three-dimensional point and surface information of the scanned object comprises: projecting a structured light projection pattern onto the surface of the scanned object; and collecting, with a first camera and a second camera, image information of the scanned object whose surface carries the projected structured light projection pattern, and generating the three-dimensional point and surface information of the scanned object according to the image information.
- The three-dimensional scanning method according to claim 10, wherein the collection of the three-dimensional point and surface information and the collection of the color texture information of the scanned object are non-simultaneous.
- The three-dimensional scanning method according to claim 10, wherein said collecting three-dimensional point and surface information of the scanned object and tracking the first pose of the three-dimensional scanner while collecting the three-dimensional point and surface information, and said collecting color texture information of the surface of the scanned object and tracking the second pose of the three-dimensional scanner while collecting the color texture information, comprise: collecting the three-dimensional point and surface information of the scanned object with a first camera and a second camera, and tracking the first pose of the three-dimensional scanner while collecting the three-dimensional point and surface information; and collecting the color texture information of the surface of the scanned object with the first camera, and tracking the second pose of the three-dimensional scanner while collecting the color texture information.
- The three-dimensional scanning method according to claim 10, wherein said collecting three-dimensional point and surface information of the scanned object and tracking the first pose of the three-dimensional scanner while collecting the three-dimensional point and surface information, and said collecting color texture information of the surface of the scanned object and tracking the second pose of the three-dimensional scanner while collecting the color texture information, comprise: collecting the three-dimensional point and surface information of the scanned object with a first camera and a second camera, and tracking the first pose of the three-dimensional scanner while collecting the three-dimensional point and surface information; and collecting the color texture information of the surface of the scanned object with the second camera, and tracking the second pose of the three-dimensional scanner while collecting the color texture information.
- The three-dimensional scanning method according to claim 10, wherein said collecting three-dimensional point and surface information of the scanned object and tracking the first pose of the three-dimensional scanner while collecting the three-dimensional point and surface information, and said collecting color texture information of the surface of the scanned object and tracking the second pose of the three-dimensional scanner while collecting the color texture information, comprise: collecting the three-dimensional point and surface information of the scanned object with a first camera and a second camera, and tracking the first pose of the three-dimensional scanner while collecting the three-dimensional point and surface information; and collecting the color texture information of the surface of the scanned object with a third camera, and tracking the second pose of the three-dimensional scanner while collecting the color texture information.
- The three-dimensional scanning method according to claim 14, wherein the structured light projection pattern projected onto the surface of the scanned object is a structured light projection pattern in an invisible light band; the structured light projection pattern in the invisible light band can be captured by the cameras collecting the three-dimensional point and surface information, but cannot be captured by the camera collecting the color texture information; and the collection of the three-dimensional point and surface information and the collection of the color texture information of the scanned object are simultaneous.
- The three-dimensional scanning method according to any one of claims 9 to 15, wherein said generating a color texture on the surface of the three-dimensional model according to the color texture information and the second pose comprises: determining, according to the second pose, coordinates in a second coordinate system of the color texture information collected in a first coordinate system; and mapping, according to the coordinates, the color texture information onto the surface of the three-dimensional model in the second coordinate system; wherein the three-dimensional model is reconstructed in the second coordinate system.
- The three-dimensional scanning method according to claim 15, wherein said reconstructing a three-dimensional model of the scanned object according to the three-dimensional point and surface information and the first pose, and said generating a color texture on the surface of the three-dimensional model according to the color texture information and the second pose, comprise: in the case where the first pose and the second pose are identical, mapping the color texture information into the three-dimensional point and surface information in a first coordinate system; and reconstructing, in a second coordinate system, the three-dimensional model of the scanned object with color texture according to the three-dimensional point and surface information mapped with the color texture information; wherein the three-dimensional point and surface information and the color texture information are collected in the first coordinate system, and the three-dimensional model is reconstructed in the second coordinate system.
- The three-dimensional scanning method according to any one of claims 9 to 15, wherein said generating a color texture on the surface of the three-dimensional model according to the color texture information and the second pose comprises: determining, according to the second pose, a point cloud corresponding to the color texture information; and performing color rendering on the point cloud according to the color texture information.
- The three-dimensional scanning method according to any one of claims 9 to 15, wherein said generating a color texture on the surface of the three-dimensional model according to the color texture information and the second pose comprises: performing mesh segmentation on the surface of the three-dimensional model, and determining, according to the second pose, the color texture information corresponding to each mesh obtained by the segmentation; and filling each mesh obtained by the segmentation with the corresponding color texture information.
- The three-dimensional scanning method according to claim 14, wherein the frequency at which the third camera collects the color texture information is lower than the frequency at which the first camera and the second camera collect the three-dimensional point and surface information.
- A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the three-dimensional scanning method according to any one of claims 9 to 20.
- A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the three-dimensional scanning method according to any one of claims 9 to 20.
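Claims 16 to 19 describe mapping the collected color texture onto the model once the second pose ties the texture camera to the reconstruction coordinate system. The patent does not prescribe an implementation; the sketch below illustrates one such step in the spirit of the point-cloud color rendering of claim 18, assuming a pinhole camera with intrinsic matrix `K` and a 4x4 world-to-camera extrinsic matrix; the function names are hypothetical:

```python
import numpy as np

def project_points(K, T_cam_from_world, verts_world):
    """Project world-frame vertices into pixel coordinates using the
    intrinsics K (3x3) and the world-to-camera extrinsics T (4x4)."""
    vh = np.hstack([verts_world, np.ones((len(verts_world), 1))])
    cam = (T_cam_from_world @ vh.T).T[:, :3]          # camera-frame points
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                     # perspective divide

def color_vertices(image, K, T_cam_from_world, verts_world):
    """Assign each model vertex the color of the texture pixel it projects to."""
    uv = np.round(project_points(K, T_cam_from_world, verts_world)).astype(int)
    h, w = image.shape[:2]
    u = np.clip(uv[:, 0], 0, w - 1)                   # column index
    v = np.clip(uv[:, 1], 0, h - 1)                   # row index
    return image[v, u]

# demo: a 2x2 texture image; with identity intrinsics and extrinsics the
# vertices (0,0,1) and (1,1,1) project onto pixels (0,0) and (1,1)
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 0, 0)
img[1, 1] = (0, 255, 0)
verts = np.array([[0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
print(color_vertices(img, np.eye(3), np.eye(4), verts))
```

In a full pipeline the extrinsics for each texture frame would be derived from the tracked second pose, and colors from multiple frames blended per vertex or per mesh cell as in claim 19.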
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010278835.9A CN113514008B (zh) | 2020-04-10 | 2020-04-10 | Three-dimensional scanning method, three-dimensional scanning system and computer-readable storage medium |
CN202010278835.9 | 2020-04-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021203883A1 true WO2021203883A1 (zh) | 2021-10-14 |
Family
ID=78022859
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/079192 WO2021203883A1 (zh) | 2021-03-05 | Three-dimensional scanning method, three-dimensional scanning system and computer-readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113514008B (zh) |
WO (1) | WO2021203883A1 (zh) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114189594A (zh) * | 2022-02-17 | 2022-03-15 | 杭州思看科技有限公司 | Three-dimensional scanning device and method, computer equipment, and storage medium |
CN115065761A (zh) * | 2022-06-13 | 2022-09-16 | 中亿启航数码科技(北京)有限公司 | Multi-lens scanning device and scanning method thereof |
CN115252992A (zh) * | 2022-07-28 | 2022-11-01 | 北京大学第三医院(北京大学第三临床医学院) | Tracheal intubation navigation system based on structured-light stereo vision |
CN115661369A (zh) * | 2022-12-14 | 2023-01-31 | 思看科技(杭州)股份有限公司 | Three-dimensional scanning method, control method for three-dimensional scanning, system, and electronic device |
CN116418967A (zh) * | 2023-04-13 | 2023-07-11 | 青岛图海纬度科技有限公司 | Color restoration method and device for laser scanning in underwater dynamic environments |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114485479B (zh) * | 2022-01-17 | 2022-12-30 | 吉林大学 | Structured-light scanning measurement method and system based on a binocular camera and inertial navigation |
CN114554025B (zh) * | 2022-04-27 | 2022-07-22 | 杭州思看科技有限公司 | Three-dimensional scanning method, system, electronic device, and storage medium |
CN115187663A (zh) * | 2022-06-30 | 2022-10-14 | 先临三维科技股份有限公司 | Scanner pose positioning method, apparatus, equipment, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104976968A (zh) * | 2015-06-16 | 2015-10-14 | 江苏科技大学 | Three-dimensional geometric measurement method and system based on LED tag tracking |
CN105157566A (zh) * | 2015-05-08 | 2015-12-16 | 深圳市速腾聚创科技有限公司 | Color three-dimensional laser scanner and method for scanning three-dimensional color point clouds |
CN106898022A (zh) * | 2017-01-17 | 2017-06-27 | 徐渊 | Handheld rapid three-dimensional scanning system and method |
US20170337726A1 (en) * | 2016-05-17 | 2017-11-23 | Vangogh Imaging, Inc. | 3d photogrammetry |
CN108805976A (zh) * | 2018-05-31 | 2018-11-13 | 武汉中观自动化科技有限公司 | Three-dimensional scanning system and method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8855366B2 (en) * | 2011-11-29 | 2014-10-07 | Qualcomm Incorporated | Tracking three-dimensional objects |
JP6355710B2 (ja) * | 2013-03-15 | 2018-07-11 | ファロ テクノロジーズ インコーポレーテッド | Non-contact optical three-dimensional measuring device |
CN109000582B (zh) * | 2018-03-15 | 2021-07-02 | 杭州思看科技有限公司 | Scanning method and system for a tracking-type three-dimensional scanning device, storage medium, and equipment |
CN109211118A (zh) * | 2018-08-13 | 2019-01-15 | 宣城徽目智能科技有限公司 | Spatial pose tracking system for a three-dimensional scanning probe |
- 2020-04-10 CN CN202010278835.9A patent/CN113514008B/zh active Active
- 2021-03-05 WO PCT/CN2021/079192 patent/WO2021203883A1/zh active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105157566A (zh) * | 2015-05-08 | 2015-12-16 | 深圳市速腾聚创科技有限公司 | Color three-dimensional laser scanner and method for scanning three-dimensional color point clouds |
CN104976968A (zh) * | 2015-06-16 | 2015-10-14 | 江苏科技大学 | Three-dimensional geometric measurement method and system based on LED tag tracking |
US20170337726A1 (en) * | 2016-05-17 | 2017-11-23 | Vangogh Imaging, Inc. | 3d photogrammetry |
CN106898022A (zh) * | 2017-01-17 | 2017-06-27 | 徐渊 | Handheld rapid three-dimensional scanning system and method |
CN108805976A (zh) * | 2018-05-31 | 2018-11-13 | 武汉中观自动化科技有限公司 | Three-dimensional scanning system and method |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114189594A (zh) * | 2022-02-17 | 2022-03-15 | 杭州思看科技有限公司 | Three-dimensional scanning device and method, computer equipment, and storage medium |
CN115065761A (zh) * | 2022-06-13 | 2022-09-16 | 中亿启航数码科技(北京)有限公司 | Multi-lens scanning device and scanning method thereof |
CN115065761B (zh) * | 2022-06-13 | 2023-09-12 | 中亿启航数码科技(北京)有限公司 | Multi-lens scanning device and scanning method thereof |
CN115252992A (zh) * | 2022-07-28 | 2022-11-01 | 北京大学第三医院(北京大学第三临床医学院) | Tracheal intubation navigation system based on structured-light stereo vision |
CN115661369A (zh) * | 2022-12-14 | 2023-01-31 | 思看科技(杭州)股份有限公司 | Three-dimensional scanning method, control method for three-dimensional scanning, system, and electronic device |
CN116418967A (zh) * | 2023-04-13 | 2023-07-11 | 青岛图海纬度科技有限公司 | Color restoration method and device for laser scanning in underwater dynamic environments |
CN116418967B (zh) * | 2023-04-13 | 2023-10-13 | 青岛图海纬度科技有限公司 | Color restoration method and device for laser scanning in underwater dynamic environments |
Also Published As
Publication number | Publication date |
---|---|
CN113514008A (zh) | 2021-10-19 |
CN113514008B (zh) | 2022-08-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021203883A1 (zh) | Three-dimensional scanning method, three-dimensional scanning system and computer-readable storage medium | |
US11003897B2 (en) | Three-dimensional real face modeling method and three-dimensional real face camera system | |
CN108876926B (zh) | 一种全景场景中的导航方法及系统、ar/vr客户端设备 | |
CN106228507B (zh) | 一种基于光场的深度图像处理方法 | |
CN104335005B (zh) | 3d扫描以及定位系统 | |
CN110728671B (zh) | 基于视觉的无纹理场景的稠密重建方法 | |
US11816829B1 (en) | Collaborative disparity decomposition | |
CN109801374B (zh) | 一种通过多角度图像集重构三维模型的方法、介质及系统 | |
US20130095920A1 (en) | Generating free viewpoint video using stereo imaging | |
CN107917701A (zh) | 基于主动式双目立体视觉的测量方法及rgbd相机系统 | |
CN103971404A (zh) | 一种高性价比的3d实景复制装置 | |
Dias et al. | Registration and fusion of intensity and range data for 3D modelling of real world scenes | |
KR100834157B1 (ko) | 영상 합성을 위한 조명환경 재구성 방법 및 프로그램이기록된 기록매체 | |
Serna et al. | Data fusion of objects using techniques such as laser scanning, structured light and photogrammetry for cultural heritage applications | |
WO2022078442A1 (zh) | 一种基于光扫描和智能视觉融合的3d信息采集方法 | |
US20220398760A1 (en) | Image processing device and three-dimensional measuring system | |
CN108629828B (zh) | 三维大场景的移动过程中的场景渲染过渡方法 | |
US20230062973A1 (en) | Image processing apparatus, image processing method, and storage medium | |
JP4354708B2 (ja) | 多視点カメラシステム | |
Lanman et al. | Surround structured lighting for full object scanning | |
Liu et al. | The applications and summary of three dimensional reconstruction based on stereo vision | |
Harvent et al. | Multi-view dense 3D modelling of untextured objects from a moving projector-cameras system | |
CN104034729A (zh) | 用于电路板分选的五维成像系统及其成像方法 | |
CN116205961A (zh) | 多镜头组合影像和激光雷达点云的自动配准方法及其系统 | |
Wong et al. | 3D object model reconstruction from image sequence based on photometric consistency in volume space |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21784733 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21784733 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/04/2023) |
|