WO2021203883A1 - Three-dimensional scanning method, three-dimensional scanning system and computer-readable storage medium - Google Patents


Info

Publication number
WO2021203883A1
Authority
WIPO (PCT)
Prior art keywords
dimensional
information
camera
color texture
pose
Application number
PCT/CN2021/079192
Other languages
English (en)
French (fr)
Inventor
王江峰
陈尚俭
Original Assignee
杭州思看科技有限公司
Application filed by 杭州思看科技有限公司
Publication of WO2021203883A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/245 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254 Projection of a pattern, viewing through a pattern, e.g. moiré

Definitions

  • This application relates to the field of three-dimensional scanning technology, and in particular to a three-dimensional scanning method, a three-dimensional scanning system, computer equipment, and a computer-readable storage medium.
  • An optical 3D scanner is a device that uses optical imaging to obtain 3D information about a measured object. It is currently widely used in industrial product inspection, reverse engineering, simulation, positioning, and other fields.
  • Tracking 3D scanning is a new type of 3D scanning technology that mainly uses a 3D scanner together with a tracker to perform 3D measurement of objects. Compared with traditional marker-point 3D scanning or photogrammetric 3D scanning, tracking 3D scanning is more convenient to use, more stable, and has a larger measurement range, making it easy for users to perform 3D measurement in workshops, outdoors, and various other complex environments.
  • Trackers mainly include laser trackers, fixed dual-camera three-dimensional scanners, posture capture and tracking equipment, head-mounted three-dimensional coordinate data glasses, geometric measurement devices based on LED-tag tracking for large-scale objects such as curved ship steel plates, and the like.
  • the above-mentioned existing tracking three-dimensional scanning device mainly adopts a combination of a tracker and a scanner to realize the three-dimensional measurement of an object, wherein the tracker is used for stitching three-dimensional data, and the scanner is used for obtaining three-dimensional data.
  • the realization of the 3D scanning function depends on the function and accuracy of the scanner itself.
  • The scanners in the above-mentioned existing devices are mainly hand-held monochrome laser scanners or raster-projection scanners, whose functions are relatively limited; they lack sufficient adaptability for scanning scenes with higher requirements for color and texture.
  • the existing tracking devices cannot yet achieve such functions.
  • Color texture scanning devices are mainly hand-held white-light scanners, which mainly include a projector, one or more black-and-white cameras, and a color camera.
  • The projector projects coded structured light, and the black-and-white camera captures the contour information of the object during projection; the point and surface information of successive frames is stitched together through feature recognition. To prevent the projected pattern from affecting the texture, the color camera acquires the surface texture information of the object in the gaps between projections, and texture mapping is performed based on the three-dimensional information from the black-and-white camera.
  • The main problem with the above device is that the color camera and the black-and-white camera shoot alternately rather than synchronously. Although the interval is very short, because the two are not synchronized, the texture-mapped 3D model exhibits a certain color texture misalignment compared with the original scanned object.
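The size of this misalignment follows directly from the scanner's motion during the capture gap. A small illustrative calculation (the speed and interval values below are hypothetical, not figures from this document):

```python
def texture_misalignment_mm(speed_mm_per_s, interval_ms):
    """Worst-case colour texture offset caused by scanner motion during the
    gap between a geometry frame and the next, non-synchronised colour frame.

    speed_mm_per_s : how fast the hand-held scanner is moving
    interval_ms    : time between the geometry capture and the colour capture
    Returns the offset in millimetres.
    """
    return speed_mm_per_s * interval_ms / 1000.0

# e.g. moving at 100 mm/s with a 10 ms capture gap yields a 1 mm texture offset
offset = texture_misalignment_mm(100.0, 10.0)
```

Even a gap of a few milliseconds at normal hand speeds therefore produces a visible sub-millimetre to millimetre texture shift, which is the defect the synchronized tracking scheme below addresses.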
  • the embodiments of the present application provide a three-dimensional scanning method, a three-dimensional scanning system, a computer device, and a computer-readable storage medium to at least solve the problem of misalignment of the color texture of the three-dimensional model in the related art.
  • an embodiment of the present application provides a three-dimensional scanning system, including a three-dimensional scanner, a tracker, and a computing unit.
  • The three-dimensional scanner and the tracker are each electrically connected to the computing unit.
  • The three-dimensional scanner is used to collect the three-dimensional point and surface information of the scanned object; the tracker is used to track the first pose of the three-dimensional scanner while the three-dimensional scanner collects the three-dimensional point and surface information; and the computing unit is used to reconstruct a three-dimensional model of the scanned object according to the three-dimensional point and surface information and the first pose.
  • The three-dimensional scanner is also used to collect color texture information of the surface of the scanned object; the tracker is also used to track the second pose of the three-dimensional scanner while the three-dimensional scanner collects the color texture information; and the computing unit is further configured to generate a color texture on the surface of the three-dimensional model according to the color texture information and the second pose.
  • the three-dimensional scanner includes: a first camera and a second camera for collecting three-dimensional point and surface information of the scanned object, and a third camera for collecting the color texture information.
  • the three-dimensional scanner includes: a first camera and a second camera for collecting three-dimensional point and surface information of the scanned object, wherein the first camera is also used for collecting the color texture information.
  • the three-dimensional scanner includes: a first camera and a second camera for collecting three-dimensional point and surface information of the scanned object, wherein the second camera is also used for collecting the color texture information.
  • the three-dimensional scanner further includes: a structured light projector for projecting a structured light pattern on the surface of the scanned object when the three-dimensional scanner collects the three-dimensional point and surface information;
  • the three-dimensional scanning system further includes: a clock synchronization unit, which is electrically connected to the three-dimensional scanner and the tracker, respectively; the clock synchronization unit is used to provide a clock synchronization signal; wherein,
  • the structured light projector, the first camera, the second camera, the third camera, and the tracker work synchronously according to the clock synchronization signal.
  • the three-dimensional scanner further includes: a structured light projector for projecting a structured light pattern on the surface of the scanned object when the three-dimensional scanner collects the three-dimensional point and surface information;
  • the three-dimensional scanning system further includes: a clock synchronization unit, which is electrically connected to the three-dimensional scanner and the tracker, respectively; the clock synchronization unit is used to provide a clock synchronization signal; wherein,
  • the structured light projector, the first camera, the second camera, and the tracker work synchronously according to the clock synchronization signal.
  • the three-dimensional scanner further includes: a structured light projector for projecting a structured light pattern in the invisible light band onto the surface of the scanned object while the three-dimensional scanner collects the three-dimensional point and surface information;
  • the structured light projection pattern in the invisible light band can be captured by the first camera and the second camera, but cannot be captured by the third camera.
  • the three-dimensional scanning system further includes: a visible light source, the visible light source being used to supplement light to the scanned object when collecting color texture information.
  • an embodiment of the present application provides a three-dimensional scanning method, including:
  • collecting the three-dimensional point and surface information of the scanned object, and tracking the first pose of the three-dimensional scanner while the three-dimensional point and surface information is collected; and collecting the color texture information of the surface of the scanned object, and tracking the second pose of the three-dimensional scanner while the color texture information is collected;
  • reconstructing a three-dimensional model of the scanned object according to the three-dimensional point and surface information and the first pose, and generating a color texture on the surface of the three-dimensional model according to the color texture information and the second pose.
  • collecting three-dimensional point and surface information of the scanned object includes:
  • the first camera and the second camera are used to collect image information of the scanned object on which the structured light projection pattern is projected on the surface, and generate three-dimensional point and surface information of the scanned object according to the image information.
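Recovering a 3D point from a matched pixel pair seen by two calibrated cameras is typically done with linear (DLT) triangulation. The following numpy sketch is illustrative only, not the implementation in this application; the projection matrices and pixel coordinates are assumed inputs from a prior calibration and matching step:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched pixel pair.

    P1, P2 : 3x4 projection matrices of the first and second camera
             (intrinsics and extrinsics assumed pre-calibrated).
    x1, x2 : (u, v) pixel coordinates of the same surface point in each image.
    Returns the 3D point in the scanner camera coordinate system.
    """
    # Each pixel observation contributes two linear constraints on the
    # homogeneous 3D point X: u * (P row 3) - (P row 1) etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Running this for every matched point on the projected pattern yields the 3D point cloud that the text calls the three-dimensional point and surface information.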
  • the three-dimensional point and surface information and the color texture information of the scanned object are collected non-simultaneously.
  • collecting the three-dimensional point and surface information of the scanned object, and tracking the first pose of the three-dimensional scanner while the three-dimensional point and surface information is collected, and collecting the color texture information of the surface of the scanned object, and tracking the second pose of the three-dimensional scanner while the color texture information is collected, includes:
  • using the first camera and the second camera to collect the three-dimensional point and surface information of the scanned object, and tracking the first pose of the three-dimensional scanner while the three-dimensional point and surface information is collected; and using the second camera to collect the color texture information of the surface of the scanned object, and tracking the second pose of the three-dimensional scanner while the color texture information is collected.
  • collecting the three-dimensional point and surface information of the scanned object, and tracking the first pose of the three-dimensional scanner while the three-dimensional point and surface information is collected, and collecting the color texture information of the surface of the scanned object, and tracking the second pose of the three-dimensional scanner while the color texture information is collected, includes:
  • using the first camera and the second camera to collect the three-dimensional point and surface information of the scanned object, and tracking the first pose of the three-dimensional scanner while the three-dimensional point and surface information is collected; and using the third camera to collect the color texture information of the surface of the scanned object, and tracking the second pose of the three-dimensional scanner while the color texture information is collected.
  • the structured light projection pattern projected on the surface of the scanned object is a structured light projection pattern in the invisible light band; the structured light projection pattern in the invisible light band can be captured by the cameras that collect the three-dimensional point and surface information, but cannot be captured by the camera that collects the color texture information; the three-dimensional point and surface information of the scanned object and the color texture information are collected at the same time.
  • generating a color texture on the surface of the three-dimensional model according to the color texture information and the second pose includes: mapping the color texture information to the three-dimensional point and surface information in the first coordinate system; the three-dimensional point and surface information and the color texture information are collected in the first coordinate system, and the three-dimensional model is reconstructed in the second coordinate system.
  • reconstructing the three-dimensional model of the scanned object according to the three-dimensional point and surface information and the first pose, and generating the color texture on the surface of the three-dimensional model according to the color texture information and the second pose, includes: mapping the color texture information to the three-dimensional point and surface information in the first coordinate system, where the three-dimensional point and surface information and the color texture information are collected in the first coordinate system and the three-dimensional model is reconstructed in the second coordinate system.
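The texture-generation step amounts to projecting the reconstructed model points into the colour frame captured at the second pose and sampling pixel colours. A minimal numpy sketch under assumed conventions (the second pose is taken here as the colour-camera-to-tracker rigid transform, and nearest-pixel sampling is used for brevity; none of these choices are prescribed by the text):

```python
import numpy as np

def sample_texture(points_tracker, R2, t2, K, image):
    """Project model points into the colour image taken at the second pose.

    points_tracker : (N, 3) reconstructed points in the tracker coordinate system
    R2, t2         : second pose, i.e. colour-camera-to-tracker rotation/translation
    K              : 3x3 colour camera intrinsic matrix
    image          : (H, W, 3) colour frame captured at that pose
    Returns per-point RGB samples (nearest-pixel lookup).
    """
    # Invert X_tracker = R2 @ X_cam + t2 to express points in the camera frame
    pts_cam = (points_tracker - t2) @ R2
    # Pinhole projection to pixel coordinates
    uvw = pts_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, image.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, image.shape[0] - 1)
    return image[v, u]
```

Because both poses are tracked in the same tracker coordinate system, the colours sampled this way land on the correct model points even though geometry and texture were captured at different instants.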
  • generating a color texture on the surface of the three-dimensional model according to the color texture information and the second pose includes:
  • the frequency at which the third camera collects the color texture information is lower than the frequency at which the first camera and the second camera collect the three-dimensional point and surface information.
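One simple way to realise a lower texture-capture frequency while keeping every device on the shared clock is to trigger the colour camera on every Nth synchronisation pulse. The sketch below is a hypothetical illustration; the device names and the divisor are assumptions, not details from this application:

```python
def trigger_schedule(n_pulses, texture_divisor=4):
    """Which devices fire on each pulse of the shared clock-synchronisation signal.

    The geometry pair (first/second camera) and the tracker fire on every pulse;
    the third (colour) camera fires on every `texture_divisor`-th pulse, so its
    frame rate is 1/texture_divisor of the geometry rate while staying in sync.
    """
    schedule = []
    for k in range(n_pulses):
        devices = ["cam1", "cam2", "tracker"]
        if k % texture_divisor == 0:
            devices.append("cam3")  # colour camera at the reduced rate
        schedule.append(devices)
    return schedule
```

Every colour frame still coincides exactly with a geometry frame and a tracker frame, so each texture capture has a pose tracked at the same instant.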
  • an embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the three-dimensional scanning method described in the second aspect above is implemented.
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the three-dimensional scanning method as described in the second aspect is implemented.
  • The above three-dimensional scanning method, three-dimensional scanning system, computer equipment, and computer-readable storage medium collect the three-dimensional point and surface information of the scanned object and track the first pose of the three-dimensional scanner while the three-dimensional point and surface information is collected; collect the color texture information of the surface of the scanned object and track the second pose of the three-dimensional scanner while the color texture information is collected; reconstruct the three-dimensional model of the scanned object according to the three-dimensional point and surface information and the first pose; and generate the color texture on the surface of the three-dimensional model according to the color texture information and the second pose. This solves the problem of color texture misalignment of the three-dimensional model in the related art and improves the accuracy of the color texture mapping of the three-dimensional model.
  • Fig. 1a is a schematic structural diagram of a three-dimensional scanning system according to an embodiment of the present application;
  • Fig. 1b is a schematic structural diagram of another three-dimensional scanning system according to an embodiment of the present application;
  • Fig. 2 is a flowchart of a three-dimensional scanning method according to an embodiment of the present application;
  • Fig. 3 is a flowchart of the reconstruction process of a three-dimensional model without color texture according to an embodiment of the present application;
  • Fig. 4 is a flowchart of a method for rebuilding a three-dimensional model with color texture based on real-time color texture information mapping according to an embodiment of the present application;
  • Fig. 5 is a schematic structural diagram of a three-dimensional scanning system according to an optional embodiment of the present application;
  • Fig. 6 is a schematic diagram of the connection structure of each component in the three-dimensional scanning system according to an optional embodiment of the present application;
  • Fig. 7 is a flowchart of a three-dimensional scanning method according to an optional embodiment of the present application;
  • Fig. 8 is a schematic diagram of the hardware structure of a computer device according to an embodiment of the present application.
  • The term "connection" in this application is not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Where there is no conflict, an electrical connection may be a wired connection or a wireless connection.
  • the "plurality” referred to in this application refers to two or more.
  • “And/or” describes the association relationship of the associated objects, which means that there can be three kinds of relationships. For example, “A and/or B” can mean: A alone exists, A and B exist at the same time, and B exists alone.
  • the character “/” generally indicates that the associated objects before and after are in an “or” relationship.
  • the terms “first”, “second”, “third”, etc. involved in this application merely distinguish similar objects, and do not represent a specific order for the objects.
  • In the following, linear structured light is used as an example to introduce the basic principles of the structured-light visual inspection and non-contact tracking on which this application is based.
  • When performing 3D scanning, the structured light projector first projects a linear laser onto the scanned object.
  • The projected linear laser forms a laser projection plane.
  • Where this plane intersects the surface of the scanned object, a bright line, the scan line, is formed.
  • Since the scan line consists of all the surface points where the laser projection plane intersects the object, the three-dimensional coordinates of the corresponding surface points can be obtained from the coordinates of the scan line.
  • The camera images the scan line to obtain a two-dimensional image of it; from the coordinates of the points in this two-dimensional image and the calibrated laser projection plane, the three-dimensional coordinates of the corresponding object surface points can be calculated. This is the basic principle of structured-light visual inspection.
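The plane-ray intersection behind this principle fits in a few lines. An illustrative numpy sketch, assuming a pre-calibrated intrinsic matrix and a laser plane expressed in the camera frame (the parameter names are assumptions for illustration):

```python
import numpy as np

def laser_point(pixel, K, plane_n, plane_d):
    """Intersect the viewing ray of one scan-line pixel with the laser plane.

    pixel   : (u, v) coordinates of a point on the bright scan line
    K       : 3x3 camera intrinsic matrix (pre-calibrated)
    plane_n : unit normal of the laser projection plane, camera frame
    plane_d : plane offset, so that plane_n . X = plane_d on the plane
    Returns the 3D surface point where the ray meets the plane.
    """
    # Back-project the pixel into a viewing ray through the camera centre
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    # Scale the ray so that the point satisfies the plane equation
    s = plane_d / (plane_n @ ray)
    return s * ray
```

Applying this to every pixel of the imaged scan line recovers the 3D profile of the object along the laser plane, which is exactly the "scan line to surface points" step described above.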
  • Non-contact tracking technology uses a tracking camera to capture at least three target features on the surface of the 3D scanner; the spatial position relationship between these target features and the binocular cameras (the first camera and the second camera) of the 3D scanner is pre-calibrated.
  • The calculation unit can therefore obtain the pose of the 3D scanner, and the conversion relationship between the coordinate system of the 3D scanner and the coordinate system of the tracker, from the target features captured by the tracking camera.
  • Using this conversion relationship, the coordinates of the three-dimensional point and surface information collected by the 3D scanner are converted into the coordinate system of the tracker, and stitching and fusion are then performed according to those coordinates to reconstruct a complete three-dimensional model.
  • Fig. 1a is a schematic structural diagram of a three-dimensional scanning system according to an embodiment of the present application. As shown in Fig. 1a, the three-dimensional scanning system includes: a three-dimensional scanner 11, a tracker 12, and a computing unit 13, wherein,
  • the three-dimensional scanner 11 is electrically connected to the computing unit 13.
  • The three-dimensional scanner 11 includes a structured light projector 111, a first camera 1121 and a second camera 1122 for collecting the three-dimensional point and surface information of the scanned object, and at least three target features 113.
  • the above-mentioned first camera 1121 and second camera 1122 include cameras, CCD sensors, or CMOS sensors capable of capturing the visible light waveband or the invisible light waveband of the target space.
  • the above-mentioned structured light projector 111 includes a projector configured to sequentially project a structured light pattern onto the surface of the scanned object, and may be, for example, a digital light processing (DLP) projector.
  • the structured light projected by the structured light projector 111 may be speckle, fringe, Gray code or other coded structured light.
  • The structured light projector 111, the first camera 1121, the second camera 1122, and the at least three target features 113 are installed on a mounting frame, and their spatial position relationships are all pre-calibrated. Therefore, in the triangulation calculation, the distances and angles between the target features, the first camera 1121, and the second camera 1122 are known, as are the position and projection angle of the structured light projector 111.
  • the at least three target features 113 of the three-dimensional scanner 11 may be self-luminous target features or reflective target features.
  • the tracker 12 is electrically connected to the computing unit 13.
  • The tracker 12 is used to track the first pose of the three-dimensional scanner 11, by capturing the at least three target features 113 of the three-dimensional scanner 11, while the three-dimensional scanner 11 collects the three-dimensional point and surface information.
  • the tracker 12 includes at least one tracking camera, and the tracking camera is used to capture at least three target features 113 fixed on the surface of the three-dimensional scanner 11. Since the spatial position relationship between the at least three target features 113 is pre-calibrated, the pose of the three-dimensional scanner 11 can be determined based on the at least three target features 113.
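Determining the scanner pose from three or more pre-calibrated target features is a classic absolute-orientation problem; one common solution is the Kabsch/SVD method. The following numpy sketch shows that approach as one possibility, not necessarily the algorithm used in this application:

```python
import numpy as np

def scanner_pose(model_pts, observed_pts):
    """Estimate the scanner pose from >= 3 tracked target features (Kabsch).

    model_pts    : (N, 3) pre-calibrated feature positions in the scanner frame
    observed_pts : (N, 3) the same features reconstructed in the tracker frame
    Returns (R, t) with observed_i = R @ model_i + t, i.e. scanner-to-tracker.
    """
    cm, co = model_pts.mean(0), observed_pts.mean(0)
    # Cross-covariance of the centred point sets
    H = (model_pts - cm).T @ (observed_pts - co)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t
```

With at least three non-collinear features the rotation and translation are fully determined, which is why the text requires a minimum of three target features on the scanner surface.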
  • the calculation unit 13 is configured to reconstruct a three-dimensional model of the scanned object according to the three-dimensional point and surface information and the first pose collected by the first camera 1121 and the second camera 1122.
  • the basic principles for the calculation unit 13 to reconstruct the three-dimensional model of the scanned object are the principle of triangulation and the principle of epipolar line constraint.
  • the three-dimensional scanner 11 is also used to collect color texture information on the surface of the scanned object.
  • the tracker 12 is also used to track the second pose of the three-dimensional scanner 11 when the three-dimensional scanner 11 collects color texture information of the surface of the scanned object.
  • the calculation unit 13 is also used to generate a color texture on the surface of the three-dimensional model based on the color texture information and the second pose.
  • The computing unit 13 uses the two-dimensional image information, collected by the three-dimensional scanner 11, of the scanned object onto which the structured light projection pattern is projected, together with the calibrated spatial position relationship of the multiple cameras, to reconstruct the three-dimensional point and surface information in the coordinate system of the cameras of the three-dimensional scanner 11.
  • the calculation unit 13 converts the three-dimensional point and surface information into the coordinate system of the target feature of the three-dimensional scanner 11 according to the conversion relationship between the calibrated camera and the at least three target features fixed on the surface of the three-dimensional scanner 11.
  • The tracker 12 simultaneously captures the at least three target features 113 on the surface of the three-dimensional scanner 11. Since the spatial position relationship between the at least three target features 113 is also pre-calibrated, the calculation unit 13 can obtain the conversion relationship between the coordinate system of the tracker 12 and the target-feature coordinate system of the three-dimensional scanner 11 from the captured information of the at least three target features 113 and their known spatial position relationship.
  • The calculation unit 13 then obtains the coordinates of the three-dimensional point and surface information in the coordinate system of the tracker 12 according to this conversion relationship, and performs three-dimensional reconstruction of the scanned object in the coordinate system of the tracker 12 according to those coordinates to obtain a three-dimensional model.
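The chain of conversions described here, camera frame to target-feature frame to tracker frame, is a composition of rigid transforms. A small illustrative sketch with assumed 4x4 homogeneous matrices:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def camera_point_to_tracker(X_cam, T_target_from_cam, T_tracker_from_target):
    """Chain the two transforms described in the text: camera frame ->
    target-feature frame (fixed, pre-calibrated), then target-feature frame ->
    tracker frame (re-estimated on every tracked frame)."""
    T = T_tracker_from_target @ T_target_from_cam
    return (T @ np.append(X_cam, 1.0))[:3]
```

Because `T_target_from_cam` is fixed by calibration, only the tracked `T_tracker_from_target` changes from frame to frame; every triangulated point can therefore be dropped into the common tracker frame for stitching and fusion.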
  • the color texture generated by the calculation unit 13 on the surface of the three-dimensional model is also realized based on the conversion relationship between coordinate systems.
  • In the related art, the point and surface information of a hand-held white-light scanner is stitched through feature recognition. While collecting color texture information, the scanner cannot obtain the features used for stitching point and surface information, so it can only use the coordinates corresponding to the most recently collected point and surface information as the coordinates corresponding to the currently collected color texture information. Moreover, because the hand-held white-light scanner uses a color camera to obtain the color texture information and a black-and-white camera to obtain the point and surface information, shooting alternately rather than synchronously, there is a time interval between the collection times of the point and surface information and the color texture information.
  • Any movement of the hand-held white-light scanner during this time interval causes the coordinates corresponding to the last collected point and surface information to differ from the coordinates corresponding to the currently collected color texture information, which misaligns the color texture of the 3D model.
  • In contrast, the tracker 12 in this embodiment adopts a non-contact method to track the pose of the three-dimensional scanner 11 both when the three-dimensional point and surface information is collected and when the color texture information is collected.
  • The three-dimensional scanning system provided in this embodiment can therefore obtain the accurate coordinates, in the coordinate system of the tracker 12, of both the three-dimensional point and surface information and the color texture information collected by the three-dimensional scanner 11, which solves the problem of color texture misalignment of the three-dimensional model in the related art and improves the accuracy of the color texture mapping of the three-dimensional model.
  • a structured light projector 111 is used in this embodiment, and the structured light pattern is projected on the surface of the scanned object when the three-dimensional scanner 11 collects three-dimensional point and surface information.
  • The three-dimensional scanning system with the structured light projector 111 in this embodiment uses the structured light pattern projected by the structured light projector 111 as the feature mark, which eliminates the workload of attaching feature markers to the surface of the scanned object. Furthermore, because feature markers are no longer attached to the surface of the scanned object, the reconstructed 3D model with color texture can express the original surface features of the scanned object without extraneous markers appearing on the surface of the 3D model; this improves the practicability of the 3D scanning system and avoids the post-processing workload of removing such markers from the 3D model.
  • the three-dimensional scanner 11 in this embodiment can collect color and texture information on the surface of the scanned object.
  • The 3D scanner 11 includes a first camera 1121 and a second camera 1122 for collecting the 3D point and surface information of the scanned object, and a third camera 1123 for collecting the color texture information, as shown in Fig. 1b.
  • Alternatively, the 3D scanner 11 includes a first camera 1121 and a second camera 1122 for collecting the 3D point and surface information of the scanned object, wherein the first camera 1121 is also multiplexed to collect the color texture information.
  • Multiplexing the first camera 1121 to collect both the three-dimensional point and surface information and the color texture information can reduce the cost of the three-dimensional scanning system and reduce the volume and weight of the three-dimensional scanner.
  • the first camera 1121 and the second camera 1122 are both color cameras, and one of the color cameras is multiplexed to collect color texture information.
  • the first camera 1121 and the second camera 1122 are both color cameras.
  • the 3D scanning system further includes a clock synchronization unit 14, which is electrically connected to the 3D scanner 11 and the tracker 12, respectively.
  • the clock synchronization unit 14 is used to provide a clock synchronization signal.
  • the structured light projector 111, the first camera 1121, the second camera 1122, and the tracker 12 in the 3D scanner 11 work synchronously according to the clock synchronization signal; the third camera 1123 and the tracker 12 work synchronously according to the clock synchronization signal.
  • the clock synchronization unit 14 in this embodiment can be an independent unit separate from the tracker 12, the 3D scanner 11, and the calculation unit 13, or it can be located in any one of the tracker 12, the 3D scanner 11, and the calculation unit 13.
  • the structured light projector 111, the first camera 1121, the second camera 1122, and the tracker 12 in the three-dimensional scanner 11 work synchronously according to the clock synchronization signal.
  • the structured light projector 111 is facing the surface of the scanned object.
  • the first camera 1121, the second camera 1122 and the tracker 12 take pictures at the same time.
  • the third camera 1123 and the tracker in the three-dimensional scanner 11 work synchronously according to the clock synchronization signal, including: the third camera 1123 and the tracker 12 take pictures at the same time.
  • the structured light projector 111, the first camera 1121, and the second camera 1122 may work simultaneously with the third camera 1123, or may work non-simultaneously.
  • the three-dimensional scanner includes a first camera 1121, a second camera 1122, a third camera 1123, and a structured light projector 111.
  • the structured light projector 111 is used to project a structured light projection pattern in the invisible light band on the surface of the scanned object when the three-dimensional scanner collects three-dimensional point and surface information.
  • the three-dimensional scanning system also includes: a clock synchronization unit 14, which is electrically connected to the three-dimensional scanner 11 and the tracker 12, respectively; the clock synchronization unit 14 is used to provide a clock synchronization signal; wherein the structured light projector 111, the first The camera 1121, the second camera 1122, the third camera 1123, and the tracker work synchronously according to a clock synchronization signal.
  • the structured light projection pattern of the invisible light band projected by the structured light projector 111 can be captured by the first camera 1121 and the second camera 1122, but cannot be captured by the third camera 1123.
  • the first camera 1121, the second camera 1122, and the third camera 1123 can simultaneously collect 3D point and surface information or color texture information, which simplifies the timing design of the collection process and also helps to improve the efficiency of 3D model reconstruction.
  • the three-dimensional scanning system further includes a visible light source, and the visible light source is used in conjunction with the third camera 1123.
  • the visible light source is used to supplement light to the scanned object when the third camera 1123 collects color texture information.
  • the visible light source can be one or more flashes or light boxes. In the case that the visible light source is a single flash or light box, it supplements light to the area of the scanned object currently being scanned by the three-dimensional scanner 11; in the case that the visible light source comprises multiple flashes or light boxes, they surround the scanned object to provide multi-angle fill light for the scanned object.
  • the visible light source may be electrically connected to the clock synchronization unit 14 through a wired connection or a wireless connection, so as to work synchronously with the third camera 1123.
  • the three-dimensional scanning method provided by this embodiment will be described and illustrated below. It should be noted that although the three-dimensional scanning method described in this embodiment is preferably used in the three-dimensional scanning system provided in the embodiments of the present application, applying the three-dimensional scanning method to other non-contact tracking-based three-dimensional scanning systems is also conceivable.
  • Fig. 2 is a flowchart of a three-dimensional scanning method according to an embodiment of the present application. As shown in Fig. 2, the process includes the following steps:
  • Step S201 Collect the three-dimensional point and surface information of the scanned object, and track the first pose of the three-dimensional scanner when the three-dimensional point and surface information is collected.
  • the three-dimensional point and surface information of the scanned object can be collected through the principle of binocular vision imaging.
  • the structured light projection pattern is projected onto the surface of the scanned object by a visible-band or invisible-band structured light projector; then the first camera and the second camera, whose spatial position relationship is pre-calibrated, photograph the surface of the scanned object, and the three-dimensional point and surface information of the scanned object is reconstructed through the principle of binocular vision imaging.
  • the structured light projection pattern may be a speckle pattern, a stripe pattern, a Gray code pattern or other coded structured light patterns.
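As an illustrative aside (not part of the original disclosure), the binocular triangulation underlying this reconstruction can be sketched for the simplified case of a rectified stereo pair, where depth follows from the disparity between matched pixels. The focal length, baseline, and principal point below are assumed example values, and `triangulate_rectified` is a hypothetical helper name:

```python
# Minimal sketch of binocular triangulation for a rectified stereo pair.
# Assumes matched pixels (u_left, v) and (u_right, v) on the same scanline;
# f, baseline, cx, cy are illustrative calibration values, not values from
# the patent.

def triangulate_rectified(u_left, u_right, v, f=1200.0, baseline=0.1,
                          cx=640.0, cy=480.0):
    """Return the 3D point (X, Y, Z) in the left-camera frame (metres)."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("match must have positive disparity")
    z = f * baseline / disparity          # depth from similar triangles
    x = (u_left - cx) * z / f
    y = (v - cy) * z / f
    return (x, y, z)

# A correspondence with 12 px disparity lies 10 m away with these numbers:
print(triangulate_rectified(652.0, 640.0, 480.0))   # -> (0.1, 0.0, 10.0)
```

A real system would instead use the full calibrated camera matrices and epipolar search described in the steps above; this rectified case only illustrates the triangulation principle.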
  • the first pose of the 3D scanner can be tracked by non-contact tracking.
  • at least three target features are fixed on the surface of the three-dimensional scanner, and the spatial position relationship of these at least three target features is pre-calibrated. By tracking the at least three target features with the tracker and combining the result with their pre-calibrated spatial position relationship, the first pose information of the three-dimensional scanner can be obtained.
  • the pose information includes position information and posture information.
  • Step S202 Collect the color texture information of the surface of the scanned object, and track the second pose of the three-dimensional scanner when the color texture information is collected.
  • the color texture information of the surface of the scanned object can be collected through the first camera or the third camera.
  • the spatial position of the third camera is also pre-calibrated.
  • the second pose of the 3D scanner can also be tracked through non-contact tracking.
  • at least three target features are fixed on the surface of the three-dimensional scanner, and the spatial position relationship of these at least three target features is also pre-calibrated. By tracking the at least three target features with the tracker and combining the result with their pre-calibrated spatial position relationship, the second pose information of the three-dimensional scanner can be obtained; the pose information also includes position information and posture information.
  • Step S203 Based on the three-dimensional point and surface information and the first pose, reconstruct a three-dimensional model of the scanned object.
  • the three-dimensional model of the scanned object can be reconstructed according to the known three-dimensional model reconstruction method in the related art.
  • Step S204 Generate a color texture on the surface of the three-dimensional model according to the color texture information and the second pose information.
  • the coordinate system of the color texture information can be converted to the same coordinate system as the three-dimensional model (equivalent to the above-mentioned second coordinate system) by means of coordinate system conversion, thereby mapping the color texture information onto the surface of the three-dimensional model. The coordinates of the color texture information can be converted into the coordinate system of the reconstructed three-dimensional model based on the second pose information, the pre-calibrated spatial position of the camera used to collect the color texture information, and the pre-calibrated spatial position relationship of the at least three target features.
  • when the three-dimensional scanning system collects color texture information and three-dimensional point and surface information at the same time, some of the collected first poses and second poses are identical; for these identical poses, the conversion relationship from the first coordinate system to the second coordinate system is the same for the color texture information and the three-dimensional point and surface information.
  • in this case, the color texture information can be directly mapped onto the three-dimensional point and surface information in the first coordinate system to obtain three-dimensional point and surface information with color texture; the coordinates of this colored three-dimensional point and surface information are then converted from the first coordinate system to the second coordinate system, and the three-dimensional model is reconstructed, thereby obtaining a three-dimensional model of the scanned object with color texture.
  • FIG. 3 is a flowchart of the reconstruction process of a three-dimensional model without color texture according to an embodiment of the present application. As shown in FIG. 3, the three-dimensional scanning and reconstruction process of this embodiment includes the following steps:
  • Step S301 calibrate the target features on the surface of the three-dimensional scanner and the spatial position relationship between all the cameras in the three-dimensional scanner.
  • Step S302: Project a structured light pattern onto the surface of the scanned object, obtain two-dimensional image information of the scanned object through multiple cameras in the three-dimensional scanner, and, using the calibrated spatial position relationship between the cameras, reconstruct the three-dimensional point and surface information in the camera coordinate system according to the triangulation principle and the epipolar constraint principle.
  • Step S303 According to the conversion relationship between the calibrated camera and the target feature on the surface of the 3D scanner, the coordinates of the 3D point and surface information in the camera coordinate system are converted to the coordinate system of the target feature on the 3D scanner surface.
  • Step S304 When the camera of the 3D scanner is shooting, the tracker synchronously captures at least three target features on the surface of the 3D scanner. According to the known spatial position distribution relationship of the target feature on the surface of the three-dimensional scanner, the conversion relationship between the coordinate system of the tracker and the coordinate system of the target feature of the three-dimensional scanner is obtained.
  • Step S305: According to the conversion relationship between the coordinate system of the tracker and the coordinate system of the target features of the three-dimensional scanner, the coordinates of the three-dimensional point and surface information in the coordinate system of the tracker are obtained, and the three-dimensional model of the scanned object is then reconstructed according to the three-dimensional point and surface information and its coordinates.
  • steps S301 to S305 are exemplary descriptions of the reconstruction process of the three-dimensional model without color texture in the embodiment of the present application, and the actual three-dimensional reconstruction process may not be limited to this.
  • the color texture information may be mapped to the surface of the three-dimensional model after the reconstruction of the three-dimensional model is completed, or even after global optimization of the three-dimensional point and surface information of the three-dimensional model.
  • the color texture information may be mapped to the three-dimensional point and surface information corresponding to the three-dimensional model during or before the reconstruction of the three-dimensional model. For example, the color texture information can be mapped onto the three-dimensional point and surface information in the coordinate system of the three-dimensional scanner or the coordinate system of the tracker, and the three-dimensional point and surface information carrying the color texture information can then be spliced and fused in the coordinate system of the tracker to obtain a three-dimensional model with color texture.
  • step S201 and step S202 may be executed simultaneously or non-simultaneously.
  • the collection of the three-dimensional point and surface information and color texture information of the scanned object is non-simultaneous.
  • the 3D scanner can use two cameras to collect 3D point and surface information, and one of the cameras can collect color texture information.
  • Three-dimensional scanners can also use three cameras, of which two cameras collect three-dimensional point and surface information, and the other camera collects color texture information.
  • in the case of non-simultaneous collection, the structured light projector does not need to project a structured light projection pattern while color texture information is being collected, so the structured light projector can be any visible-band or invisible-band structured light projector, as long as the first camera and the second camera can capture the structured light projection pattern it projects.
  • the visible light band is also called white light;
  • the invisible light band can be but not limited to the infrared light band.
  • in the case of simultaneous collection, a three-dimensional scanner including three cameras can be used, wherein two cameras collect three-dimensional point and surface information and the other camera collects color texture information; the structured light projection pattern projected by the structured light projector of the three-dimensional scanner is a structured light projection pattern in the invisible light band.
  • the structured light projection pattern in the invisible light band can be captured by the cameras that collect three-dimensional point and surface information, but cannot be captured by the camera that collects color texture information. Therefore, even if the three cameras shoot at the same time, the texture camera will not capture the structured light projection pattern when collecting color texture information, which avoids the influence of the structured light projection pattern on the color texture of the surface of the scanned object.
  • the third camera of the three-dimensional scanner collects color texture information on the surface of the scanned object.
  • the color texture information includes the coordinates in the coordinate system of the camera of the three-dimensional scanner and the color information corresponding to each coordinate. Since the spatial position relationship between the camera of the 3D scanner and the target features on its surface is calibrated in advance, the conversion relationship between the coordinate system of the camera and the coordinate system of the target features of the 3D scanner can be obtained; according to this conversion relationship, the coordinates of the color texture information in the camera coordinate system can be converted into the coordinate system of the target features of the three-dimensional scanner. While the third camera of the three-dimensional scanner is shooting, the tracker simultaneously captures at least three target features on the surface of the three-dimensional scanner.
  • since the spatial position relationship between the at least three target features is also pre-calibrated, the conversion relationship between the coordinate system of the tracker and the coordinate system of the target features of the three-dimensional scanner can be obtained from the captured information of the at least three target features on the surface of the 3D scanner and their known spatial position relationship. The coordinates of the color texture information can then be converted into the coordinate system of the tracker, yielding the mapping relationship between the color texture information and the tracker coordinate system; finally, the color texture is generated on the surface of the three-dimensional model according to this mapping relationship.
  • the color texture information is mapped to the three-dimensional point and surface information in real time, and three-dimensional point and surface information with color texture is obtained.
  • Fig. 4 is a flowchart of a method for rebuilding a three-dimensional model with color texture based on real-time color texture information mapping according to an embodiment of the present application. As shown in Fig. 4, the process includes the following steps:
  • Step S401 According to the image information and the spatial position relationship of the multiple cameras, the three-dimensional point and surface information in the coordinate system of the camera of the three-dimensional scanner is reconstructed.
  • Step S402 In the coordinate system of the camera of the three-dimensional scanner, map the color texture information collected synchronously with the image information to the three-dimensional point and surface information to obtain the three-dimensional point and surface information with the color texture.
  • Step S403 According to the conversion relationship between the coordinate system of the camera of the three-dimensional scanner and the coordinate system of the target feature of the three-dimensional scanner, transform the three-dimensional point and surface information with color texture into the coordinate system of the target feature of the three-dimensional scanner.
  • Step S404: Obtain the conversion relationship between the coordinate system of the tracker and the coordinate system of the target features of the 3D scanner according to the at least three target features captured by the tracker, wherein the spatial position relationship of the at least three target features on the 3D scanner is pre-calibrated.
  • Step S405: According to the conversion relationship between the coordinate system of the tracker and the coordinate system of the target features of the 3D scanner, the coordinates of the three-dimensional point and surface information with color texture in the coordinate system of the tracker are obtained, and a three-dimensional model with color texture on its surface is reconstructed according to these coordinates.
  • the above-mentioned 3D model reconstruction with color texture based on real-time color texture information mapping is particularly suitable for the scanning prompt process in the 3D scanning process, that is, the process of generating the 3D model preview image with color texture in the scan preview image.
  • Projecting color texture to the surface of the 3D model can be implemented in many ways.
  • One way to project the color texture onto the surface of the 3D model is to perform color rendering on the point cloud corresponding to the 3D model based on the color texture information, that is, the color information in the color texture information is assigned to the corresponding points in the point cloud.
  • This method is particularly suitable for the reconstruction process of a three-dimensional model with color texture based on real-time color texture information mapping shown in step S401 to step S405.
  • Another way to project the color texture onto the surface of the 3D model is to divide the surface of the 3D model into grids, determine the color texture information corresponding to each grid obtained by the division, and fill each grid with its corresponding color texture information.
  • This method is especially suitable for color texture post-processing in the reconstruction of a three-dimensional model with color texture.
  • the color texture post-processing refers to generating a color texture on the surface of the three-dimensional model after the three-dimensional scanned model is scanned.
  • the frequency at which the third camera collects color texture information is lower than the frequency at which the first camera and the second camera collect three-dimensional point and surface information.
  • the frequency at which the first camera and the second camera collect 3D point and surface information can be several times the frequency at which the third camera collects color texture information, which reduces the number of times the third camera collects color texture information, and thereby reduces the amount of image data transmitted and the computing resources consumed in image processing.
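The lower texture-capture frequency described above can be sketched as a simple schedule driven by the clock synchronization signal. This is an illustration only; the ratio of 4 is a hypothetical configuration value, not one given in the disclosure:

```python
# Illustrative capture schedule: the stereo pair fires on every clock
# synchronization tick, while the texture camera fires only on every N-th
# tick. N (texture_ratio) is an assumed example value.

def capture_schedule(num_ticks, texture_ratio=4):
    """Yield (tick, fire_stereo, fire_texture) for each sync tick."""
    for tick in range(num_ticks):
        yield tick, True, tick % texture_ratio == 0

stereo_shots = sum(1 for _, s, _ in capture_schedule(60) if s)
texture_shots = sum(1 for _, _, t in capture_schedule(60) if t)
print(stereo_shots, texture_shots)   # texture camera fires on 15 of 60 ticks
```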
  • Fig. 5 is a schematic structural diagram of a three-dimensional scanning system according to an optional embodiment of the present application.
  • the three-dimensional scanning system includes: a non-contact tracker 12, which includes at least one tracking camera for capturing the pose of the three-dimensional scanner.
  • the three-dimensional scanner 11 is used to perform three-dimensional scanning based on the principle of triangulation.
  • the three-dimensional scanner includes at least one structured light projector 111, at least one binocular camera pair (equivalent to the aforementioned first camera 1121 and second camera 1122), at least one texture camera (equivalent to the aforementioned third camera 1123), and multiple target features fixed on its surface, of which at least three can be captured by the tracker 12 within the tracker's field of view; the calculation unit 13 is used to generate three-dimensional point and surface information, calculate conversion matrices, perform coordinate conversion, and reconstruct the three-dimensional model.
  • FIG. 6 is a schematic diagram of the connection structure of each component in the three-dimensional scanning system according to an optional embodiment of the present application.
  • the calculation unit 13 further includes: a clock synchronization unit 14, connected to the three-dimensional scanner 11 and the tracker 12, and to the cameras and the structured light projector 111, to provide a clock synchronization signal; a two-dimensional image feature extractor 131, used to extract a set of two-dimensional lines of at least two linear patterns from the two-dimensional images of the scanned object captured by the binocular camera and the tracking camera; a three-dimensional point and surface information generator 132, used to generate a three-dimensional point and surface information set based on the two-dimensional line set; a texture feature extractor 133, used to extract the color texture information of the scanned object captured by the third camera; a texture mapper 134, used to map the color texture information onto the three-dimensional point and surface information to perform color texture mapping; and a coordinate converter 135, used to calculate conversion (RT) matrices between different coordinate systems and perform coordinate (RT) conversion.
  • Fig. 7 is a flowchart of a three-dimensional scanning method according to an optional embodiment of the present application. As shown in Fig. 7, the process includes the following steps:
  • Step S701: Calibrate the target features on the three-dimensional scanner and the spatial position relationships between the one or more binocular cameras and the texture camera.
  • Step S702: The scanner projects structured light onto the surface of the object to be scanned, the scanner cameras acquire two-dimensional images, and matches are found using the calibrated spatial position relationship of the scanner cameras, the epipolar constraint relationship between the binocular images, and related algorithms; then, according to the triangulation principle, the three-dimensional point and surface information P in the scanner camera coordinate system Oc is reconstructed.
  • Step S703 The texture camera acquires color texture information on the surface of the object.
  • Step S704: According to the pre-calibrated conversion relationship R1T1 between the scanner camera coordinate system and the scanner target feature coordinate system, convert the point and surface information P to its coordinates P1 in the scanner target feature coordinate system: P1 = P*R1 + T1.
  • Step S705: The tracker captures the target features on the scanner, whose spatial position distribution on the scanner is known.
  • the rear intersection algorithm can be used to obtain the exterior orientation elements of the image, thereby obtaining the conversion matrix R2T2 between the tracker coordinate system and the scanner target feature coordinate system.
  • Step S706: Using R2T2, convert the coordinates of the point and surface information from P1 into the tracker coordinate system, obtaining P2: P2 = P1*R2 + T2, that is, P2 = (P*R1 + T1)*R2 + T2. These are the coordinates, in the world coordinate system, of the point and surface information on the surface of the object to be scanned obtained by the scanner.
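The chained conversion of step S706, P2 = (P*R1 + T1)*R2 + T2, can be sketched as two successive rigid transforms. This is an illustration only (here R is applied to column vectors, and all matrices and points are example values, not calibrated ones):

```python
# Sketch of the two-stage coordinate chain: a point P in the scanner-camera
# frame goes through R1,T1 (camera -> scanner target feature frame) and then
# R2,T2 (scanner target feature frame -> tracker/world frame).

def apply_rt(R, t, p):
    """Apply a rigid transform (3x3 rotation R, translation t) to point p."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                 for i in range(3))

def camera_to_tracker(p, R1, t1, R2, t2):
    p1 = apply_rt(R1, t1, p)       # camera frame -> scanner target frame
    return apply_rt(R2, t2, p1)    # scanner target frame -> tracker frame

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# With identity rotations the chain reduces to adding both translations:
print(camera_to_tracker((1.0, 2.0, 3.0), I3, (0.1, 0.0, 0.0),
                        I3, (0.0, 0.2, 0.0)))   # -> (1.1, 2.2, 3.0)
```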
  • Step S707: According to the conversion relationship between the tracker and the scanner target feature coordinate system, the coordinates of the texture information in the tracker coordinate system are obtained, and texture mapping is performed on the texture information in the tracker coordinate system.
  • Texture mapping can be a color rendering of a point cloud, or it can be mapped to a surface by dividing a grid.
  • the number of shots taken by the texture camera can be less than the number of shots taken by the binocular camera.
  • the above texture mapping can be performed in real time, that is, the color texture information is mapped to the three-dimensional point and surface data in the current coordinate system according to the spatial position conversion relationship of the scanner at the current moment; it can also be performed in post-processing, that is, after scanning is completed and the point and surface information has been globally optimized, mapping is performed according to the conversion relationship of each texture image.
  • the real-time texture display is only used for scanning prompts, generally by coloring the point cloud; post-processing texture, that is, performing grid mapping according to the RT pose of each texture image after scanning is completed, is used to generate the textured mesh model as the output result.
  • the texture mapping step in step S707 includes the following steps:
  • Step 1 Determine the effective texture image of the geometric triangle of the model:
  • the triangular mesh of the 3D model can be converted into the texture camera coordinate system through the following formula to obtain the texture coordinates corresponding to the vertices of the triangular mesh; after the image is cropped, only the required texture image is retained: P_uv = K * (R3 * P_w + T3), where P_uv represents the two-dimensional pixel coordinates in the texture camera coordinate system, K represents the texture camera intrinsic parameter matrix, P_w represents the mesh vertex coordinates in the world coordinate system, and R3T3 represents the conversion matrix from the world coordinate system to the texture camera coordinate system.
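The projection P_uv = K(R3*P_w + T3) can be sketched as follows for a pinhole model with perspective division. The intrinsic parameters (fx, fy, cx, cy) are assumed example values, not calibrated ones, and `project_vertex` is a hypothetical helper name:

```python
# Hedged sketch of the vertex-to-texture projection: world-frame mesh
# vertex -> texture camera frame (via R3, T3) -> pixel coordinates (via the
# intrinsic parameters), followed by perspective division.

def project_vertex(p_w, R3, t3, fx=1000.0, fy=1000.0, cx=640.0, cy=480.0):
    """Map a world-frame mesh vertex to pixel coordinates (u, v)."""
    # world frame -> texture camera frame
    x, y, z = (sum(R3[i][j] * p_w[j] for j in range(3)) + t3[i]
               for i in range(3))
    if z <= 0:
        return None                     # vertex behind the camera
    return (fx * x / z + cx, fy * y / z + cy)

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(project_vertex((0.2, 0.1, 2.0), I3, (0.0, 0.0, 0.0)))  # -> (740.0, 530.0)
```

Cropping the texture image to the projected triangle footprint, as the step above describes, would then keep only the pixels these projected vertices bound.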
  • Step 2: Sample the geometric triangles, using bilinear interpolation to determine the color value of each sampling point in the effective texture image, thereby determining the color of the geometric triangle in the effective texture image.
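The bilinear interpolation used in this sampling step can be sketched on a single-channel image as follows (a minimal illustration, not the patent's implementation; real texture images would carry three color channels):

```python
# Minimal sketch of bilinear interpolation: sample a color value at a
# non-integer texture coordinate (u, v) from the four surrounding pixels.
# image[row][col] holds one scalar channel.

def bilinear_sample(image, u, v):
    x0, y0 = int(u), int(v)
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = u - x0, v - y0
    top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
    bottom = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

img = [[0.0, 10.0],
       [20.0, 30.0]]
print(bilinear_sample(img, 0.5, 0.5))   # centre of the 2x2 patch -> 15.0
```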
  • Step 3: According to the positional relationship between the geometric model and the texture camera, weights for the texture images are defined, and a composite weight is constructed to fuse the textures.
  • the defined function weights include normal vector weights, edge weights and geometric weights.
  • Step 4 Save the geometric model and texture information, record the corresponding relationship between the model and the texture image, and display the three-dimensional model with color texture.
  • FIG. 8 is a schematic diagram of the hardware structure of a computer device according to an embodiment of the present application.
  • the computer device may include a processor 81 and a memory 82 storing computer program instructions.
  • the aforementioned processor 81 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
  • the memory 82 may include a large-capacity memory for data or instructions.
  • the memory 82 may include a hard disk drive (HDD), a floppy disk drive, a solid state drive (SSD), flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these.
  • the storage 82 may include removable or non-removable (or fixed) media.
  • the memory 82 may be internal or external to the data processing device.
  • the memory 82 is a non-volatile (Non-Volatile) memory.
  • the memory 82 includes a read-only memory (Read-Only Memory, ROM for short).
  • the ROM can be a mask-programmed ROM, a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), an electrically alterable ROM (EAROM), or flash memory (FLASH), or a combination of two or more of these.
  • the processor 81 reads and executes computer program instructions stored in the memory 82 to implement any one of the three-dimensional scanning methods in the foregoing embodiments.
  • the computer device may further include a communication interface 83 and a bus 80.
  • a communication interface 83 and a bus 80.
  • the processor 81, the memory 82, and the communication interface 83 are connected through the bus 80 and complete mutual communication.
  • the communication interface 83 is used to implement communication between various modules, devices, units, and/or devices in the embodiments of the present application.
  • the communication interface 83 can also implement data communication with other components such as external devices, image acquisition devices, databases, external storage, and image processing workstations.
  • the bus 80 includes hardware, software, or both, and couples the components of the computer device to each other.
  • the bus 80 includes but is not limited to at least one of the following: a data bus (Data Bus), an address bus (Address Bus), a control bus (Control Bus), an expansion bus (Expansion Bus), and a local bus (Local Bus).
  • the bus 80 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, and a front side bus (FSB).
  • the bus 80 may include one or more buses.
  • the embodiment of the present application may provide a computer-readable storage medium for implementation.
  • the computer-readable storage medium stores computer program instructions; when the computer program instructions are executed by the processor, any one of the three-dimensional scanning methods in the foregoing embodiments is implemented.
  • the three-dimensional point and surface information of the scanned object, together with its color texture, is obtained through the non-contact tracking scanning method and used to reconstruct a three-dimensional model with color texture; alternatively, after the three-dimensional model is reconstructed, the color texture information of the scanned object obtained by the non-contact tracking scanning method is mapped onto the surface of the three-dimensional model.
  • when texture mapping is performed, the tracker captures the pose of the three-dimensional scanner in real time, ensuring that an accurate transformation is obtained for every texture frame.
  • the embodiments of the present application can flexibly and conveniently realize color texture scanning of the surface of large objects in complex environments, and accurately reconstruct a three-dimensional model with color texture, which is especially suitable for the digital scanning and reconstruction of objects with color texture and the color three-dimensional display of online shopping goods.


Abstract

A three-dimensional scanning method, a three-dimensional scanning system, a computer device, and a computer-readable storage medium. The three-dimensional scanning method comprises: collecting three-dimensional point and surface information of a scanned object, and tracking a first pose of a three-dimensional scanner (11) while the three-dimensional point and surface information is collected; collecting color texture information of the surface of the scanned object, and tracking a second pose of the three-dimensional scanner (11) while the color texture information is collected; reconstructing a three-dimensional model of the scanned object according to the three-dimensional point and surface information and the first pose; and generating a color texture on the surface of the three-dimensional model according to the color texture information and the second pose.

Description

三维扫描方法、三维扫描系统和计算机可读存储介质
相关申请
本申请要求2020年4月10日申请的,申请号为202010278835.9,发明名称为“三维扫描方法、三维扫描系统和计算机可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及三维扫描技术领域,特别是涉及三维扫描方法、三维扫描系统、计算机设备和计算机可读存储介质。
背景技术
光学三维扫描仪是利用光学成像获取被测量物体三维信息的一种设备,目前广泛应用于工业产品检测、逆向设置、仿真、定位等领域。跟踪式三维扫描是其中一种新型的三维扫描技术,该技术主要利用三维扫描仪和跟踪仪等设备共同实现物体的三维测量。相对于传统的贴点式三维扫描或摄影式三维扫描,跟踪式三维扫描技术使用更加方便、稳定性更好、测量范围更大,方便用户在车间、室外和各种复杂环境下轻松便捷地实现三维测量。
相关的跟踪式三维扫描装置主要包括激光跟踪仪、固定式双摄像头三维扫描仪姿态捕捉跟踪设备、头戴式三维坐标数据眼镜、针对大尺度物体如船舶曲面钢板的基于LED标签跟踪的几何测量装置等。上述的现有的跟踪式三维扫描装置主要采取跟踪器和扫描仪的结合共同实现物体的三维测量,其中跟踪器用于拼接三维数据,扫描仪用于获得三维数据。也就是说,三维扫描功能的实现取决于扫描仪自身的功能和精度。上述现有装置中的扫描仪主要采用手持单色激光扫描仪或光栅投影式扫描仪,功能较为单一,对于色彩和纹理有更高要求的扫描场景,缺乏足够的适应性。例如,对于需要获得物体表面色彩特征的扫描场景,例如文物、家居的数字化扫描重建,以及对于网购商品的三维展示等,现有跟踪式装置尚不能实现这样的功能。
相关的色彩纹理扫描装置主要是手持式白光扫描仪,主要包括一个投影器,一个或多个黑白摄像头和一个彩色摄像头,投影器采用编码结构光的形式进行投影,黑白摄像头在投影的同时获取物体轮廓信息,并通过特征识别进行前后帧的点面信息拼接;为了避免拍摄到投影图案影响贴图效果,彩色摄像头在投影的间隙获取物体表面纹理信息,并基于黑白摄像头的三维信息进行纹理贴图。上述装置的主要问题是,彩色摄像头和黑白摄像头非同步交叉拍摄,虽然间隔时间很短,但是因为二者的不同步,所以纹理贴图后的三维模型与原始被扫描对象相比,色彩纹理存在一定的错位。
发明内容
本申请实施例提供了一种三维扫描方法、三维扫描系统、计算机设备和计算机可读存储介质,以至少解决相关技术中三维模型的色彩纹理存在错位的问题。
第一方面,本申请实施例提供了一种三维扫描系统,包括三维扫描仪、跟踪器和计算单元,所述三维扫描仪和所述跟踪器分别与所述计算单元电性连接;所述三维扫描仪用于采集被扫描对象的三维点面信息,所述跟踪器用于在所述三维扫描仪采集所述三维点面信息时跟踪所述三维扫描仪的第一位姿,所述计算单元用于根据所述三维点面信息和所述第一位姿,重建所述被扫描对象的三维模型;
其中,所述三维扫描仪,还用于采集所述被扫描对象表面的色彩纹理信息;所述跟踪器,还用于在所述三维扫描仪采集所述被扫描对象表面的色彩纹理信息时跟踪所述三维扫描仪的第二位姿;所述计算单元,还用于根据所述色彩纹理信息和所述第二位姿,在所述三维模型的表面生成色彩纹理。
在其中一些实施例中，所述三维扫描仪包括：用于采集所述被扫描对象的三维点面信息的第一摄像头和第二摄像头，以及用于采集所述色彩纹理信息的第三摄像头。
在其中一些实施例中,所述三维扫描仪包括:用于采集所述被扫描对象的三维点面信息的第一摄像头和第二摄像头,其中,所述第一摄像头还用于采集所述色彩纹理信息。
在其中一些实施例中,所述三维扫描仪包括:用于采集所述被扫描对象的三维点面信息的第一摄像头和第二摄像头,其中,所述第二摄像头还用于采集所述色彩纹理信息。
在其中一些实施例中,所述三维扫描仪还包括:结构光投影器,用于在所述三维扫描仪采集所述三维点面信息时在所述被扫描对象的表面投射结构光图案;
所述三维扫描系统还包括:时钟同步单元,所述时钟同步单元分别与所述三维扫描仪和所述跟踪器电性连接;所述时钟同步单元用于提供时钟同步信号;其中,
所述结构光投影器、所述第一摄像头、所述第二摄像头、所述第三摄像头以及所述跟踪器根据所述时钟同步信号同步工作。
在其中一些实施例中,所述三维扫描仪还包括:结构光投影器,用于在所述三维扫描仪采集所述三维点面信息时在所述被扫描对象的表面投射结构光图案;
所述三维扫描系统还包括:时钟同步单元,所述时钟同步单元分别与所述三维扫描仪和所述跟踪器电性连接;所述时钟同步单元用于提供时钟同步信号;其中,
所述结构光投影器、所述第一摄像头、所述第二摄像头以及所述跟踪器根据所述时钟同步信号同步工作。
在其中一些实施例中,所述三维扫描仪还包括:结构光投影器,用于在所述三维扫描仪采集所述三维点面信息时在所述被扫描对象的表面投影不可见光波段的结构光投影图案;
所述不可见光波段的结构光投影图案能够被所述第一摄像头和所述第二摄像头捕获,所述不可见光波段的结构光投影图案不能够被所述第三摄像头捕获。
在其中一些实施例中,所述三维扫描系统还包括:可见光源,所述可见光源用于在采集色彩纹理信息时对所述被扫描对象补光。
第二方面,本申请实施例提供了一种三维扫描方法,包括:
采集被扫描对象的三维点面信息,并在采集所述三维点面信息时跟踪三维扫描仪的第一位姿;以及采集所述被扫描对象表面的色彩纹理信息,并在采集所述色彩纹理信息时跟踪所述三维扫描仪的第二位姿;
根据所述三维点面信息和所述第一位姿,重建所述被扫描对象的三维模型;
根据所述色彩纹理信息和所述第二位姿,在所述三维模型的表面生成色彩纹理。
在其中一些实施例中,采集被扫描对象的三维点面信息包括:
在所述被扫描对象的表面投射结构光投影图案;
使用第一摄像头和第二摄像头采集表面投射有所述结构光投影图案的被扫描对象的图像信息,并根据所述图像信息生成所述被扫描对象的三维点面信息。
在其中一些实施例中,对所述被扫描对象的所述三维点面信息和所述色彩纹理信息的采集是非同时的。
在其中一些实施例中,采集被扫描对象的三维点面信息,并在采集所述三维点面信息时跟踪所述三维扫描仪的第一位姿;以及采集所述被扫描对象表面的色彩纹理信息,并在采集所述色彩纹理信息时跟踪所述三维扫描仪的第二位姿包括:
使用第一摄像头和第二摄像头采集被扫描对象的三维点面信息,并在采集所述三维点面信息时跟踪所述三维扫描仪的第一位姿;以及使用所述第一摄像头采集所述被扫描对象表面的色彩纹理信息,并在采集所述色彩纹理信息时跟踪所述三维扫描仪的第二位姿。
在其中一些实施例中,采集被扫描对象的三维点面信息,并在采集所述三维点面信息时跟踪所述三维扫描仪的第一位姿;以及采集所述被扫描对象表面的色彩纹理信息,并在采集所述色彩纹理信息时跟踪所述三维扫描仪的第二位姿包括:
使用第一摄像头和第二摄像头采集被扫描对象的三维点面信息,并在采集所述三维点面信息时跟踪所述三维扫描仪的第一位姿;以及使用所述第二摄像头采集所述被扫描对象表面的色彩纹理信息,并在采集所述色彩纹理信息时跟踪所述三维扫描仪的第二位姿。
在其中一些实施例中,采集被扫描对象的三维点面信息,并在采集所述三维点面信息时跟踪所述三维扫描仪的第一位姿;以及采集所述被扫描对象表面的色彩纹理信息,并在采集所述色彩纹理信息时跟踪所述三维扫描仪的第二位姿包括:
使用第一摄像头和第二摄像头采集被扫描对象的三维点面信息,并在采集所述三维点面信息时跟踪所述三维扫描仪的第一位姿;以及使用第三摄像头采集所述被扫描对象表面的色彩纹理信息,并在采集所述色彩纹理信息时跟踪所述三维扫描仪的第二位姿。
在其中一些实施例中,在所述被扫描对象的表面投射的结构光投影图案为不可见光波段的结构光投影图案;所述不可见光波段的结构光投影图案能够被采集所述三维点面信息的摄像头捕获,而不能够被采集所述色彩纹理信息的摄像头捕获;对所述被扫描对象的所述三维点面信息和所述色彩纹理信息的采集是同时的。
在其中一些实施例中,根据所述色彩纹理信息和所述第二位姿,在所述三维模型的表面生成色彩纹理包括:
根据所述第二位姿,确定在第一坐标系中采集到的所述色彩纹理信息在第二坐标系中的坐标;
根据所述坐标,在所述第二坐标系中将所述色彩纹理信息映射到所述三维模型的表面;
其中,所述三维模型是在所述第二坐标系中重建的。
在其中一些实施例中,根据所述三维点面信息和所述第一位姿,重建所述被扫描对象的三维模型;根据所述色彩纹理信息和所述第二位姿,在所述三维模型的表面生成色彩纹理包括:
在所述第一位姿和所述第二位姿相同的情况下,在第一坐标系中将所述色彩纹理信息映射到所述三维点面信息中;
在第二坐标系中,根据映射所述色彩纹理信息后的三维点面信息,重建得到具有色彩纹理的所述被扫描对象的三维模型;
其中,所述三维点面信息和所述色彩纹理信息是在所述第一坐标系中采集的,所述三维模型是在所述第二坐标系中重建的。
在其中一些实施例中,根据所述色彩纹理信息和所述第二位姿,在所述三维模型的表面生成色彩纹理包括:
根据所述第二位姿,确定与所述色彩纹理信息对应的点云;
根据所述色彩纹理信息,对所述点云进行色彩渲染。
在其中一些实施例中,根据所述色彩纹理信息和所述第二位姿,在所述三维模型的表面生成色彩纹理包括:
将所述三维模型的表面进行网格分割,并根据所述第二位姿,确定分割得到的每个网格对应的色彩纹理信息;
在分割得到的每个网格中填充对应的色彩纹理信息。
在其中一些实施例中,所述第三摄像头采集所述色彩纹理信息的频率低于所述第一摄像头和所述第二摄像头采集所述三维点面信息的频率。
第三方面,本申请实施例提供了一种计算机设备,包括存储器、处理器以及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现如上述第二方面所述的三维扫描方法。
第四方面,本申请实施例提供了一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现如上述第二方面所述的三维扫描方法。
相比于相关技术,本申请实施例提供的三维扫描方法、三维扫描系统、计算机设备和计算机可读存储介质,通过采集被扫描对象的三维点面信息,并在采集三维点面信息时跟踪三维扫描仪的第一位姿; 以及采集被扫描对象表面的色彩纹理信息,并在采集色彩纹理信息时跟踪三维扫描仪的第二位姿;根据三维点面信息和第一位姿,重建被扫描对象的三维模型;根据色彩纹理信息和第二位姿,在三维模型的表面生成色彩纹理的方式,解决了相关技术中三维模型的色彩纹理存在错位的问题,提高了三维模型的色彩纹理贴图的准确度。
本申请的一个或多个实施例的细节在以下附图和描述中提出,以使本申请的其他特征、目的和优点更加简明易懂。
附图说明
为了更清楚地说明本申请实施例或相关技术中的技术方案,下面将对实施例描述中所需要使用的附图作简要介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1a是根据本申请实施例的一种三维扫描系统的结构示意图;
图1b是根据本申请实施例的另一种三维扫描系统的结构示意图;
图2是根据本申请实施例的三维扫描方法的流程图;
图3是根据本申请实施例的不具有色彩纹理的三维模型的重建过程的流程图;
图4是根据本申请实施例的基于实时色彩纹理信息映射的具有色彩纹理的三维模型重建方法的流程图;
图5是根据本申请可选实施例的三维扫描系统的结构示意图;
图6是根据本申请可选实施例的三维扫描系统中各组成部分的连接结构示意图;
图7是根据本申请可选实施例的三维扫描方法的流程图;
图8是根据本申请实施例的计算机设备的硬件结构示意图。
附图标记:三维扫描仪11;结构光投影器111;第一摄像头1121;第二摄像头1122;第三摄像头1123;目标特征113;跟踪器12;计算单元13;二维图像特征提取器131;三维点面信息生成器132;纹理特征提取器133;纹理映射器134;坐标转换器135;时钟同步单元14;总线80;处理器81;存储器82;通信接口83。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行描述和说明。应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。基于本申请提供的实施例,本领域普通技术人员在没有作出创造性劳动的前提下所获得的所有其他实施例,都属于本申请保护的范围。
显而易见地,下面描述中的附图仅仅是本申请的一些示例或实施例,对于本领域的普通技术人员而言,在不付出创造性劳动的前提下,还可以根据这些附图将本申请应用于其他类似情景。此外,还可以理解的是,虽然这种开发过程中所作出的努力可能是复杂并且冗长的,然而对于与本申请公开的内容相关的本领域的普通技术人员而言,在本申请揭露的技术内容的基础上进行的一些设计,制造或者生产等变更只是常规的技术手段,不应当理解为本申请公开的内容不充分。
在本申请中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域普通技术人员显式地和隐式地理解的是,本申请所描述的实施例在不冲突的情况下,可以与其它实施例相结合。
除非另作定义,本申请所涉及的技术术语或者科学术语应当为本申请所属技术领域内具有一般技能的人士所理解的通常意义。本申请所涉及的“一”、“一个”、“一种”、“该”等类似词语并不表示数量限制,可表示单数或复数。本申请所涉及的术语“包括”、“包含”、“具有”以及它们任何变形,意图在于覆盖不排他的包含;例如包含了一系列步骤或模块(单元)的过程、方法、系统、产品或设备没有限定 于已列出的步骤或单元,而是可以还包括没有列出的步骤或单元,或可以还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。本申请所涉及的“连接”、“相连”、“耦接”等类似的词语并非限定于物理的或者机械的连接,而是可以包括电性连接,不管是直接的还是间接的。其中,在不冲突的情况下,电性连接可以是有线连接也可以是无线连接。本申请所涉及的“多个”是指两个或两个以上。“和/或”描述关联对象的关联关系,表示可以存在三种关系,例如,“A和/或B”可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。本申请所涉及的术语“第一”、“第二”、“第三”等仅仅是区别类似的对象,不代表针对对象的特定排序。
为了便于理解,首先以线状结构光为例介绍本申请所基于的结构光视觉检测,以及非接触式跟踪的基本原理。
进行三维扫描时,首先由结构光投影器向被扫描对象投射线状激光,投射的线状激光形成一个激光投射平面,激光投射平面与被扫描对象相交时,会在被扫描对象表面形成一条亮的扫描线。由于扫描线包含了激光投射平面与物体相交的所有的表面点,因此根据扫描线的坐标可以得到物体的相应的表面点的三维坐标。该三维坐标映射到激光投射平面上,则得到扫描线的二维图像。根据扫描线的二维图像上的点的坐标即可以计算出其对应的物体表面点的三维坐标,这就是结构光视觉检测的基本原理。
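The line-laser principle above — intersecting the viewing ray of each stripe pixel with the calibrated laser plane — can be sketched as follows. This is only an illustration of the geometry; the intrinsic matrix `K`, the plane parameters, and the function name are hypothetical values chosen for the example, not taken from the patent.

```python
import numpy as np

def ray_plane_point(K, pixel, plane_n, plane_d):
    """Intersect the viewing ray of one laser-stripe pixel with the
    calibrated laser plane n . X = d (both in the camera frame)."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction
    t = plane_d / (plane_n @ ray)                   # intersection depth
    return t * ray                                  # 3D surface point

# Toy values: f = 800 px, principal point (320, 240), plane z = 500 mm
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P = ray_plane_point(K, (320, 240), np.array([0.0, 0, 1]), 500.0)
```

Each bright pixel of the scan line yields one such surface point; sweeping the line over the object accumulates the point cloud.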
非接触式跟踪技术，采用跟踪相机捕获三维扫描仪表面的至少三个目标特征；由于三维扫描仪表面的目标特征、三维扫描仪的双目相机（包括第一摄像头和第二摄像头）的空间位置关系是预先标定好的，因此，计算单元根据跟踪相机捕获的至少三个目标特征能够获得三维扫描仪的位姿以及三维扫描仪的坐标系与跟踪器的坐标系之间的转换关系；根据该转换关系将三维扫描仪采集到的三维点面信息的坐标转换到跟踪器的坐标系中，然后根据三维点面信息的坐标进行拼接融合，重建得到完整的三维模型。
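Recovering the scanner pose from at least three pre-calibrated target features amounts to fitting a rigid transform between the calibrated marker layout and the 3D positions the tracker observes. A common least-squares sketch is the Kabsch algorithm below; it is a standard technique chosen for illustration, not necessarily the exact method used in the patent, and all numeric values are made up.

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch): dst_i ~= R @ src_i + T.
    src, dst are (N, 3) arrays of matched 3D points, N >= 3, not collinear."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # reject a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = cd - R @ cs
    return R, T

# Calibrated target-feature layout and the positions the tracker sees
markers = np.array([[0.0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]])
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
T_true = np.array([10.0, -5, 250])
seen = markers @ R_true.T + T_true      # simulated tracker observations
R, T = fit_rigid_transform(markers, seen)
```

With noiseless, non-degenerate markers the pose is recovered exactly; in practice more than three features and a least-squares fit improve robustness.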
本实施例提供了一种三维扫描系统。图1a是根据本申请实施例的三维扫描系统的结构示意图,如图1a所示,该三维扫描系统包括:三维扫描仪11、跟踪器12和计算单元13,其中,
如图1a所示,三维扫描仪11与计算单元13电性连接。在本实施例中,三维扫描仪11包括结构光投影器111、用于采集被扫描对象的三维点面信息的第一摄像头1121和第二摄像头1122、至少三个目标特征113。
其中,上述的第一摄像头1121和第二摄像头1122包括能够捕获目标空间的可见光波段或者不可见光波段的相机、CCD传感器或者CMOS传感器。上述的结构光投影器111包括被设置为顺序地投射结构光图案到被扫描对象的表面的投影仪,例如可以是数字光处理(DLP)投影仪。结构光投影器111投射的结构光可以是散斑、条纹、格雷码或者其他的编码结构光。
在本实施例中,结构光投影器111、第一摄像头1121、第二摄像头1122,以及至少三个目标特征113安装在安装架上,且它们的空间位置关系均被预先标定。因此,在三角测量法计算中,目标特征之间、第一摄像头1121和第二摄像头1122之间的距离、角度等信息是已知的,结构光投影器111的位置及投射角度等信息是已知的。
在本实施例中,三维扫描仪11的至少三个目标特征113可以是自发光的目标特征或者反光的目标特征。
跟踪器12与计算单元13电性连接,跟踪器12用于在三维扫描仪11采集三维点面信息时,通过捕捉三维扫描仪11的至少三个目标特征113来跟踪三维扫描仪11的第一位姿。
在本实施例中,跟踪器12至少包括一个跟踪摄像头,该跟踪摄像头用于捕捉三维扫描仪11表面固定的至少三个目标特征113。由于这至少三个目标特征113之间的空间位置关系被预先标定,因此,根据这至少三个目标特征113能够确定三维扫描仪11的位姿。
计算单元13,用于根据第一摄像头1121和第二摄像头1122采集到的三维点面信息和第一位姿,重建被扫描对象的三维模型。其中,计算单元13重建被扫描对象的三维模型的基本原理是三角法原理和极线约束原理。
在本实施例中，三维扫描仪11还用于采集被扫描对象表面的色彩纹理信息。跟踪器12还用于在三维扫描仪11采集被扫描对象表面的色彩纹理信息时跟踪三维扫描仪11的第二位姿。计算单元13还用于根据色彩纹理信息和第二位姿，在三维模型的表面生成色彩纹理。
采用本实施例提供的三维扫描系统,首先,计算单元13通过三维扫描仪11采集到的表面投射有结构光投影图案的被扫描对象的二维图像信息,通过已标定的采集三维点面信息的多个摄像头的空间位置关系,重建出在三维扫描仪11的摄像头的坐标系下的三维点面信息。然后,计算单元13根据已标定的摄像头与固定在三维扫描仪11表面的至少三个目标特征之间的转换关系,将三维点面信息转换到三维扫描仪11的目标特征的坐标系中。
其中,三维扫描仪11的第一摄像头1121和第二摄像头1122在进行拍摄的同时,跟踪器12同步捕捉三维扫描仪11表面的至少三个目标特征113。由于这至少三个目标特征113之间的空间位置关系也被预先标定,因此,计算单元13根据捕捉到的三维扫描仪11表面的至少三个目标特征113的信息和已知的这至少三个目标特征113之间的空间位置关系,就能够得到跟踪器12的坐标系和三维扫描仪11的目标特征的坐标系的转换关系。最后,计算单元13根据跟踪器12的坐标系和三维扫描仪11的目标特征的坐标系的转换关系,得到三维点面信息在跟踪器12的坐标系下的坐标,根据该坐标即可根据三维点面信息在跟踪器12的坐标系下进行被扫描对象的三维重建,得到三维模型。
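The chain of coordinate conversions described above (scanner-camera frame → target-feature frame → tracker frame) composes naturally as homogeneous 4x4 matrices. The sketch below only illustrates the composition; the calibration values, function names, and identity rotations are hypothetical.

```python
import numpy as np

def rt_to_mat(R, T):
    """Pack rotation R (3x3) and translation T (3,) into a 4x4 matrix."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = T
    return M

def transform(M, pts):
    """Apply a 4x4 rigid transform to an (N, 3) point array."""
    return pts @ M[:3, :3].T + M[:3, 3]

# Pre-calibrated camera -> target-feature transform, and the tracked
# target-feature -> tracker transform (values are made up):
M_cam_to_tgt = rt_to_mat(np.eye(3), np.array([0.0, 0, 120]))
M_tgt_to_trk = rt_to_mat(np.eye(3), np.array([500.0, 0, 0]))
M_cam_to_trk = M_tgt_to_trk @ M_cam_to_tgt  # compose once per frame

pts_cam = np.array([[0.0, 0, 300]])         # point in the camera frame
pts_trk = transform(M_cam_to_trk, pts_cam)  # same point, tracker frame
```

Because the tracker re-estimates the second matrix for every frame, each frame's points land in a single common (tracker) coordinate system where they can be fused.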
与之类似地,计算单元13在三维模型的表面生成色彩纹理也是基于坐标系之间的转换关系实现的。
一方面,相关技术中的手持式白光扫描仪的点面信息是通过特征识别进行拼接的,其在采集色彩纹理信息时无法获得用于点面信息拼接的特征,因此其只能够将上一次采集的点面信息对应的坐标作为当前采集的色彩纹理信息对应的坐标;又由于手持式白光扫描仪用于获取色彩纹理信息的彩色摄像头和用于获取点面信息的黑白摄像头是非同步交叉拍摄的,因此其点面信息和色彩纹理信息的采集时间存在时间间隔,在该时间间隔内手持式白光扫描仪的任何移动都将导致上一次采集的点面信息对应的坐标与当前采集的色彩纹理信息对应的坐标不同,从而导致了三维模型的色彩纹理存在错位。与相关技术中彩色摄像头和黑白摄像头非同步交叉拍摄的方式不同之处在于,在上述实施例中,通过跟踪器12采用非接触式方式在采集三维点面信息时和采集色彩纹理信息时跟踪三维扫描仪11的位姿。通过这种方式,无论三维点面信息和色彩纹理信息是同步采集还是非同步交叉采集的,通过本实施例提供的三维扫描系统都能够获得三维扫描仪11采集到的三维点面信息和色彩纹理信息在跟踪器12的坐标系中的准确坐标,从而解决了相关技术中三维模型的色彩纹理存在错位的问题,提高了三维模型的色彩纹理贴图的准确度。
另一方面，在本实施例中采用了结构光投影器111，在三维扫描仪11采集三维点面信息时在被扫描对象的表面投射结构光图案。相比于相关技术中在被扫描对象的表面张贴特征标记的方式而言，采用本实施例中具有结构光投影器111的三维扫描系统采用结构光投影器111投射的结构光图案作为特征标记，免去了在被扫描对象的表面张贴特征标记的工作量。不仅如此，由于在被扫描对象的表面不再张贴特征标记，因此，重建得到的具有色彩纹理的三维模型能够表现原始的被扫描对象的表面特征而不会导致在三维模型的表面出现额外的特征标记，提高了三维扫描系统的实用性，避免了后期处理三维模型的额外特征标记导致的工作量。
本实施例中的三维扫描仪11能够采集被扫描对象表面的色彩纹理信息。
如图1b所示,在其中一些实施例中,三维扫描仪11包括用于采集被扫描对象的三维点面信息的第一摄像头1121和第二摄像头1122,还包括用于采集色彩纹理信息的第三摄像头1123。
如图1a所示,在另一些实施例中,三维扫描仪11包括用于采集被扫描对象的三维点面信息的第一摄像头1121和第二摄像头1122,其中的第一摄像头1121还被复用于采集色彩纹理信息。在本实施例中,将第一摄像头1121复用于采集三维点面信息和采集色彩纹理信息,能够降低三维扫描系统的成本,并减小三维扫描仪的体积和重量。
在其中一些实施例中,第一摄像头1121和第二摄像头1122都是彩色摄像头,其中一个彩色摄像头被复用于采集色彩纹理信息。第一摄像头1121和第二摄像头1122都是彩色摄像头的优势在于能够降低这两个摄像头之间参数的差异,提高三维点面信息采集的效率和精度。
在其中一些实施例中，为了实现三维扫描仪11和跟踪器12的同步工作，三维扫描系统还包括时钟同步单元14，时钟同步单元14分别与三维扫描仪11和跟踪器12电性连接。时钟同步单元14用于提供时钟同步信号。其中，三维扫描仪11中的结构光投影器111、第一摄像头1121、第二摄像头1122以及跟踪器12根据时钟同步信号同步工作；第三摄像头1123和跟踪器12根据时钟同步信号同步工作。需要说明的是，本实施例中时钟同步单元14可以是独立于跟踪器12、三维扫描仪11及计算单元13的独立单元，也可以位于跟踪器12、三维扫描仪11及计算单元13中的任一单元或者设备中。
在本实施例中三维扫描仪11中的结构光投影器111、第一摄像头1121、第二摄像头1122以及跟踪器12根据时钟同步信号同步工作包括:结构光投影器111在向被扫描对象的表面投射结构光图案期间,第一摄像头1121和第二摄像头1122以及跟踪器12同时拍摄。
在本实施例中三维扫描仪11中的第三摄像头1123和跟踪器根据时钟同步信号同步工作包括:第三摄像头1123和跟踪器12同时拍摄。
在上述实施例中,结构光投影器111、第一摄像头1121、第二摄像头1122,它们与第三摄像头1123可以是同时工作的,也可以是非同时工作的。
例如,在另一些实施例中,三维扫描仪包括第一摄像头1121、第二摄像头1122、第三摄像头1123以及结构光投影器111。其中,结构光投影器111用于在三维扫描仪采集三维点面信息时在被扫描对象的表面投影不可见光波段的结构光投影图案。三维扫描系统还包括:时钟同步单元14,时钟同步单元14分别与三维扫描仪11和跟踪器12电性连接;时钟同步单元14用于提供时钟同步信号;其中,结构光投影器111、第一摄像头1121、第二摄像头1122、第三摄像头1123以及跟踪器根据时钟同步信号同步工作。并且,结构光投影器111投射的不可见光波段的结构光投影图案能够被第一摄像头1121和第二摄像头1122捕获,但不能够被第三摄像头1123捕获。
通过上述的实施例,第一摄像头1121、第二摄像头1122和第三摄像头1123能够同时采集三维点面信息或色彩纹理信息,简化了采集过程的时序设计,也有助于提高三维模型重建的效率。
在一些实施例中,三维扫描系统还包括可见光源,可见光源配合第三摄像头1123使用。可见光源用于在该第三摄像头1123采集色彩纹理信息时对被扫描对象补光。可见光源可以为一个或者多个闪光灯或者灯箱。在可见光源为一个闪光灯或灯箱的情况下,这一个闪光灯或灯箱对三维扫描仪11当前扫描的被扫描对象的平面进行补光;在可见光源为多个闪光灯或灯箱的情况下,这多个闪光灯或灯箱围绕在被扫描对象周围实现对被扫描对象的多角度补光。通过可见光源对被扫描对象的补光,可以增强第三摄像头1123采集到的色彩纹理信息的亮度,去除因单点光源导致的阴影,使得扫描得到的色彩纹理图片更为真实。
其中,可见光源可以通过有线连接或者无线连接的方式与时钟同步单元14电性连接,从而与第三摄像头1123同步工作。
下面将对本实施例提供的三维扫描方法进行描述和说明。需要说明的是,在实施例描述的三维扫描方法虽然较优地用于本申请实施例提供的三维扫描系统中,但是将该三维扫描方法应用于其他的基于非接触式跟踪的三维扫描系统中也是可以被构想的。
图2是根据本申请实施例的三维扫描方法的流程图,如图2所示,该流程包括如下步骤:
步骤S201:采集被扫描对象的三维点面信息,并在采集三维点面信息时跟踪三维扫描仪的第一位姿。
在本步骤中,可以通过双目视觉成像原理,采集到被扫描对象的三维点面信息。例如,通过可见光波段结构光投影器或者不可见光波段结构光投影器在被扫描对象表面投射结构光投影图案,然后采用空间位置关系被预先标定的第一摄像头和第二摄像头对被扫描对象的表面进行拍摄,通过双目视觉成像原理,重建得到被扫描对象的三维点面信息。其中,该结构光投影图案可以是散斑图案、条纹图案、格雷码图案或者其他的编码结构光图案。
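The binocular reconstruction in this step — matching stripe points across the two calibrated cameras and triangulating them — can be sketched with a linear (DLT) triangulation of one matched pixel pair. The projection matrices, baseline, and values below are illustrative only; real systems refine this linear estimate.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched pixel pair seen by two
    calibrated cameras with 3x4 projection matrices P1 and P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)         # null space of A is the solution
    X = Vt[-1]
    return X[:3] / X[3]                 # de-homogenize

# Two normalized cameras with a 100 mm baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-100.0], [0], [0]])])
X_true = np.array([50.0, 20, 400])
x1 = X_true[:2] / X_true[2]                      # pixel in camera 1
x2 = (X_true - [100, 0, 0])[:2] / X_true[2]      # pixel in camera 2
X = triangulate(P1, P2, x1, x2)
```

The epipolar constraint mentioned in the text restricts where the match `x2` can lie, which makes the correspondence search a 1D problem along the epipolar line.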
在本步骤中，三维扫描仪的第一位姿可以通过非接触式跟踪方式来跟踪。例如，在三维扫描仪的表面固定有至少三个目标特征，且这至少三个目标特征的空间位置关系是预先标定的。由跟踪器跟踪这至少三个目标特征，结合预先标定的至少三个目标特征的空间位置关系，就能够得到三维扫描仪的第一位姿信息，该位姿信息包括位置信息和姿态信息。
步骤S202:采集被扫描对象表面的色彩纹理信息,并在采集色彩纹理信息时跟踪三维扫描仪的第二位姿。
在本步骤中,可以通过第一摄像头或者第三摄像头采集被扫描对象表面的色彩纹理信息。该第三摄像头的空间位置也是被预先标定的。三维扫描仪的第二位姿同样地可以通过非接触式跟踪方式来跟踪。例如,在三维扫描仪的表面固定有至少三个目标特征,且这至少三个目标特征的空间位置关系也是预先标定的。由跟踪器跟踪这至少三个目标特征,结合预先标定的至少三个目标特征的空间位置关系,就能够得到三维扫描仪的第二位姿信息,该位姿信息也包括位置信息和姿态信息。
步骤S203:根据三维点面信息和第一位姿,重建被扫描对象的三维模型。
在本步骤中,在获得三维点面信息和三维扫描仪的第一位姿后,根据相关技术中已知的三维模型重建方法就能够重建得到被扫描对象的三维模型。
步骤S204:根据色彩纹理信息和第二位姿信息,在三维模型的表面生成色彩纹理。
在一些实施例中,可以通过坐标系转换的方法,将色彩纹理信息的坐标系转换到与三维模型相同的坐标系(相当于上述的第二坐标系)中,从而将色彩纹理信息映射到三维模型表面。其中,可以基于第二位姿信息以及预先标定的用于采集色彩纹理信息的摄像头的空间位置信息、预先标定的至少三个目标特征的空间位置关系,将色彩纹理信息的坐标转换到重建三维模型的坐标系中。
在另一些实施例中，例如，在采用第一摄像头、第二摄像头和第三摄像头，并结合不可见光波段的结构光投影器的情况下，三维扫描系统同时采集色彩纹理信息和三维点面信息，此时跟踪器采集到的一些第一位姿与第二位姿是相同的位姿；对于这些相同位姿下在第一坐标系中采集到的色彩纹理信息和三维点面信息，其转换到第二坐标系中的转换关系是相同的，因此，在这种情况下可以直接在第一坐标系中将色彩纹理信息映射到三维点面信息中，得到具有色彩纹理的三维点面信息，然后再将具有色彩纹理的三维点面信息的坐标由第一坐标系转换到第二坐标系中，进行三维模型的重建，从而得到具有色彩纹理的被扫描对象的三维模型。
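When a geometry frame and a texture frame share the same pose, the color image can be sampled directly in the scanner's own camera frame, as described above, before any coordinate conversion. A simplified sketch (pinhole projection with nearest-pixel lookup; `K`, the image, and the function name are illustrative, not from the patent):

```python
import numpy as np

def colorize_points(pts_cam, K, image):
    """Attach RGB to 3D points by projecting them into the color frame
    captured at the same pose (nearest-pixel lookup for brevity)."""
    proj = pts_cam @ K.T
    uv = np.rint(proj[:, :2] / proj[:, 2:3]).astype(int)
    colors = image[uv[:, 1], uv[:, 0]].astype(float)
    return np.hstack([pts_cam, colors])     # columns: x, y, z, r, g, b

K = np.array([[2.0, 0, 2], [0, 2.0, 2], [0, 0, 1]])
img = np.zeros((5, 5, 3), dtype=np.uint8)
img[2, 2] = (255, 0, 0)                     # a red pixel at the center
pts = np.array([[0.0, 0, 10]])              # projects onto that pixel
cloud = colorize_points(pts, K, img)
```

The colored points can then be transformed to the tracker frame as a unit, so geometry and texture can never drift apart.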
图3是根据本申请实施例的不具有色彩纹理的三维模型的重建过程的流程图,如图3所示,本实施例的三维扫描及重建过程包括如下步骤:
步骤S301:标定三维扫描仪表面的目标特征,以及三维扫描仪中所有的摄像头之间的空间位置关系。
步骤S302:在被扫描对象表面投射结构光图案,通过三维扫描仪中的多个摄像头获取被扫描对象的二维图像信息,并通过已标定的摄像头之间的空间位置关系,根据三角法原理和极线约束原理重建出摄像头坐标系下的三维点面信息。
步骤S303:根据已标定的摄像头和三维扫描仪表面的目标特征之间的转换关系,将摄像头坐标系下的三维点面信息的坐标转换到三维扫描仪表面的目标特征的坐标系中。
步骤S304：三维扫描仪的摄像头在进行拍摄时，跟踪器同步捕捉三维扫描仪表面的至少三个目标特征。再根据已知的目标特征在三维扫描仪表面的空间位置分布关系，得到跟踪器的坐标系到三维扫描仪的目标特征的坐标系之间的转换关系。
步骤S305:根据跟踪器的坐标系到三维扫描仪的目标特征的坐标系之间的转换关系,得到三维点面信息在跟踪器的坐标系中的坐标,进而在跟踪器的坐标系下根据三维点面信息及其坐标重建得到被扫描对象的三维模型。
需要说明的是,上述步骤S301~步骤S305是对本申请实施例的不具有色彩纹理的三维模型的重建过程的示例性描述,实际的三维重建过程可以不限于此。
例如,在一些实施例中,可以是在三维模型重建完成之后,甚至是三维模型进行三维点面信息的全局优化之后,再将色彩纹理信息映射到三维模型的表面的。
在另一些实施例中，可以是在三维模型重建过程中，或者三维模型重建之前将色彩纹理信息映射到三维模型对应的三维点面信息中的。例如，可以在三维扫描仪的坐标系或者跟踪器的坐标系中，将色彩纹理信息映射到三维点面信息的表面，然后再将具有色彩纹理信息的三维点面信息在跟踪器的坐标系中进行拼接融合，得到具有色彩纹理的三维模型。
在本实施例中,步骤S201和步骤S202可以是同时执行的,也可以是非同时执行的。
例如,在步骤S201和步骤S202是非同时执行的情况下,对被扫描对象的三维点面信息和色彩纹理信息的采集是非同时的。在此情况下,三维扫描仪可以采用两个摄像头采集三维点面信息,并且其中一个摄像头能够采集色彩纹理信息。三维扫描仪也可以采用三个摄像头,其中两个摄像头采集三维点面信息,另一个摄像头采集色彩纹理信息。
由于非同时采集三维点面信息和色彩纹理信息,结构光投影器在采集色彩纹理信息时可以不投射结构光投影图案,因此结构光投影器可以选择任何可见光波段的结构光投影器或者任何非可见光波段的结构光投影器,只要第一摄像头和第二摄像头能够捕获该结构光投影器投射的结构光投影图案即可。
其中,可见光波段又称为白光;非可见光波段可以但不限于红外光波段。
在步骤S201和步骤S202同时执行的情况下,则可以采用包括三个摄像头的三维扫描仪,其中,两个摄像头采集三维点面信息,另一个摄像头采集色彩纹理信息;并且该三维扫描仪的结构光投影器投射的结构光投影图案为不可见光波段的结构光投影图案,该不可见光波段的结构光投影图案能够被采集三维点面信息的摄像头捕获,而不能够被采集色彩纹理信息的摄像头捕获。因此,即使三个摄像头同时拍摄,其中的另一个摄像头在采集色彩纹理信息时也不会捕获结构光投影图案,避免结构光投影图案对被扫描对象表面的影响。
在本实施例中,三维扫描仪的第三摄像头采集被扫描对象表面的色彩纹理信息。色彩纹理信息包括在三维扫描仪的摄像头的坐标系下的坐标以及各坐标对应的颜色信息。由于三维扫描仪的摄像头与三维扫描仪表面的目标特征之间的空间位置关系被预先标定,因此,可以得到三维扫描仪的摄像头的坐标系与三维扫描仪的目标特征的坐标系之间的转换关系,并根据该转换关系可以将色彩纹理信息在摄像头的坐标系下的坐标转换到三维扫描仪的目标特征的坐标系中。三维扫描仪的第三摄像头在进行拍摄的同时,跟踪器同步捕捉三维扫描仪表面的至少三个目标特征。由于这至少三个目标特征之间的空间位置关系也被预先标定,因此,根据捕捉到的三维扫描仪表面的至少三个目标特征的信息和已知的这至少三个目标特征之间的空间位置关系,就能够得到跟踪器的坐标系和三维扫描仪目标特征的坐标系的转换关系。根据跟踪器的坐标系和三维扫描仪目标特征的坐标系的转换关系,就能够将色彩纹理信息的坐标转换到跟踪器的坐标系中,从而得到色彩纹理信息与跟踪器的坐标系之间的映射关系,最后根据映射关系在三维模型的表面生成色彩纹理。
在另一些实施例中,在生成摄像头坐标系下的三维点面信息的同时,或者在生成三维扫描仪的目标特征坐标系下的三维点面信息的同时,实时将色彩纹理信息映射到三维点面信息中,得到具有色彩纹理的三维点面信息。
图4是根据本申请实施例的基于实时色彩纹理信息映射的具有色彩纹理的三维模型重建方法的流程图,如图4所示,该流程包括如下步骤:
步骤S401:根据图像信息、多个摄像头的空间位置关系,重建得到三维扫描仪的摄像头的坐标系下的三维点面信息。
步骤S402:在三维扫描仪的摄像头的坐标系下将与图像信息同步采集的色彩纹理信息映射到三维点面信息中,得到具有色彩纹理的三维点面信息。
步骤S403:根据三维扫描仪的摄像头的坐标系与三维扫描仪的目标特征的坐标系的转换关系,将具有色彩纹理的三维点面信息转换到三维扫描仪的目标特征的坐标系中。
步骤S404:根据跟踪器捕捉到的至少三个目标特征,得到跟踪器的坐标系和三维扫描仪目标特征的坐标系的转换关系;其中,至少三个目标特征在三维扫描仪上的空间位置关系是被预先标定的。
步骤S405:根据跟踪器的坐标系和三维扫描仪目标特征的坐标系的转换关系,得到具有色彩纹理的三维点面信息在跟踪器的坐标系下的坐标,并根据具有色彩纹理的三维点面信息在跟踪器的坐标系下的坐标重建得到表面具有色彩纹理的三维模型。
通过上述步骤S401至步骤S405能够快速得到表面具有色彩纹理的三维模型。上述基于实时色彩纹理信息映射的具有色彩纹理的三维模型重建尤其适用于在三维扫描过程中的扫描提示过程,即生成扫描预览图中的具有色彩纹理的三维模型预览图的过程中。
将色彩纹理投影到三维模型的表面可以采取多种实现方式,将色彩纹理投影到三维模型的表面的方式之一是根据色彩纹理信息,对三维模型对应的点云进行色彩渲染,即将色彩纹理信息中的颜色信息赋值给点云中对应的点。该方式尤其适用于步骤S401至步骤S405所示的基于实时色彩纹理信息映射的具有色彩纹理的三维模型重建过程。
将色彩纹理投影到三维模型的表面的另一种方式是将三维模型的表面进行网格分割,并确定分割得到的每个网格对应的色彩纹理信息;在分割得到的每个网格中填充与该网格对应的色彩纹理信息。该方式尤其适用于色彩纹理后处理的具有色彩纹理的三维模型重建过程。该色彩纹理后处理是指在扫描得到的三维扫描模型之后,再在三维模型的表面生成色彩纹理。
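The mesh-based mapping described above can be illustrated by assigning per-vertex texture coordinates for each triangle via a pinhole projection into the texture image. This is only a sketch: the calibration values, the row-vector convention, and all names are hypothetical, and a real pipeline must also handle occlusion and image selection.

```python
import numpy as np

def assign_triangle_uvs(vertices, faces, K, R3, T3, img_w, img_h):
    """Per-vertex texture coordinates for each mesh triangle, obtained by
    projecting world-frame vertices into the texture camera and
    normalizing to [0, 1] texture space."""
    cam = vertices @ R3 + T3            # world -> texture-camera frame
    proj = cam @ K.T                    # pinhole projection
    uv = proj[:, :2] / proj[:, 2:3]     # pixel coordinates
    uv = uv / np.array([img_w, img_h])  # normalize to texture space
    return uv[faces]                    # (num_faces, 3, 2)

# One triangle, identity rotation, a 10-unit depth offset, a 4x4 texture
vertices = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
faces = np.array([[0, 1, 2]])
K = np.array([[2.0, 0, 2], [0, 2.0, 2], [0, 0, 1]])
uvs = assign_triangle_uvs(vertices, faces, K,
                          np.eye(3), np.array([0.0, 0, 10]), 4, 4)
```

Each mesh triangle then carries the patch of the texture image enclosed by its three UV coordinates.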
在其中一些实施例中，在三维扫描仪包括用于采集三维点面信息的第一摄像头和第二摄像头，以及用于采集色彩纹理信息的第三摄像头的情况下，第三摄像头采集色彩纹理信息的频率低于第一摄像头和第二摄像头采集三维点面信息的频率。例如，第一摄像头和第二摄像头采集三维点面信息的频率可以为第三摄像头采集色彩纹理信息的频率的数倍，这样可以减少第三摄像头采集色彩纹理信息的次数，降低图像数据传输量和处理图像数据所需的计算机资源。
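As a toy illustration of the lower texture-capture rate, one can enumerate which synchronized trigger pulses also fire the texture camera, assuming a fixed integer ratio. The ratio value and function name are hypothetical; the text only requires the texture rate to be lower than the geometry rate.

```python
def texture_triggers(n_geometry_frames, ratio=4):
    """Indices of the synchronized trigger pulses on which the texture
    camera also fires, when the geometry cameras run `ratio` times
    faster (the exact ratio is a hypothetical choice)."""
    return [i for i in range(n_geometry_frames) if i % ratio == 0]

schedule = texture_triggers(10, ratio=4)   # geometry frames 0..9
```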
下面通过可选实施例对本申请进行描述和说明。
图5是根据本申请可选实施例的三维扫描系统的结构示意图,如图5所示,该三维扫描系统包括:非接触式的跟踪器12,包括至少一个跟踪摄像头,用于捕捉三维扫描仪的位姿。三维扫描仪11,用于通过三角测量法原理进行三维扫描,三维扫描仪包含至少一个结构光投影器111,至少一个双目相机(相当于上述的第一摄像头1121和第二摄像头1122)和至少一个纹理摄像头(相当于上述的第三摄像头1123),以及多个固定在三维扫描仪表面的目标特征,其中至少三个目标特征可以在跟踪器12的视野中被跟踪器12捕捉;计算单元13,用于生成三维点面信息、计算转换矩阵、进行坐标转换、以及重建三维模型。
图6是根据本申请可选实施例的三维扫描系统中各组成部分的连接结构示意图,参考图6,计算单元13还包括:时钟同步单元14,与三维扫描仪11和跟踪器12上的所有摄像头和结构光投影器111连接,用于提供时钟同步信号;二维图像特征提取器131,用于提取被扫描对象被双目相机和跟踪摄像头拍摄的二维图像上的至少两条线状图案的二维线条集合;三维点面信息生成器132,用于根据二维线条集合生成三维点面信息集合;纹理特征提取器133,用于提取被扫描对象被第三摄像头拍摄的色彩纹理信息;纹理映射器134,用于将色彩纹理信息映射到三维点面信息中,进行色彩纹理贴图;坐标转换器135,用于计算不同坐标系之间的转换(RT)矩阵,进行坐标转换。
图7是根据本申请可选实施例的三维扫描方法的流程图,如图7所示,该流程包括如下步骤:
步骤S701:标定好三维扫描仪上的目标特征以及一个及以上的双目相机和一个纹理摄像头的空间位置关系。
步骤S702:扫描仪投射结构光在待扫描物体表面,扫描仪摄像头获取二维图像,通过已标定的扫描仪摄像头的空间位置关系,根据双目图像之间的极线约束关系及相关算法寻找匹配点,继而根据三角法原理重建出扫描仪摄像头坐标系Oc下的三维点面信息P。
步骤S703:纹理摄像头获取物体表面的色彩纹理信息。
步骤S704：根据已标定的扫描仪摄像头和扫描仪目标特征之间的转换矩阵R1、T1，把点面信息P转换到目标特征坐标系下，得到P1：P1 = P*R1 + T1。
步骤S705：跟踪器获取扫描仪目标特征，同时该目标特征在扫描仪上的空间位置分布关系已知。通过扫描仪目标特征在二维图像的坐标信息，以及重建得到的三维点信息，可以利用后方交会算法得到图像的外方位元素，从而获得跟踪器到扫描仪目标特征坐标系之间的转换矩阵R2、T2。
步骤S706：利用R2、T2得到点P1在跟踪器坐标系下的点面信息坐标P2：P2 = P1*R2 + T2；从而得到点面信息P在跟踪器坐标系下的坐标：P2 = (P*R1 + T1)*R2 + T2，即扫描仪得到的待扫描物体表面点面信息在世界坐标系下的坐标。
步骤S707:根据跟踪器到扫描仪目标特征坐标系之间的转换关系,得到纹理信息到跟踪器坐标系下的坐标,纹理信息在跟踪器坐标系下进行纹理贴图。纹理贴图可以是对点云进行色彩渲染,也可以是通过分割网格的方式映射到面。
在实际扫描过程中,纹理摄像头的拍摄次数可以少于双目相机的拍摄次数。
上述纹理贴图可以是实时进行的,即根据当前时刻扫描仪的空间位置转换关系将色彩纹理信息映射到当前坐标系下的三维点面数据中;也可以是在后处理中进行的,即扫描完成并进行对点面信息进行全局优化后,再根据纹理图片的转换关系进行贴图。
在一个实施例中,实时贴图显示只是用于扫描提示,一般是在点云上着色;后处理贴图,即扫描后根据纹理图片的RT位置进行网格贴图是用于生成的带纹理的网格模型输出结果。
在一个实施例中,步骤S707中的纹理贴图步骤包括如下步骤:
步骤1:确定模型几何三角形的有效纹理图像:
三维模型的三角网格可以通过以下公式转换到纹理摄像头坐标系下,得到三角网格顶点对应的纹理坐标,将图像切片后只保留需要的纹理图像。
Puv = K*(Pw*R3 + T3)
其中，Puv表示纹理摄像头坐标系下的二维像素坐标，K表示纹理摄像头内参矩阵，Pw表示世界坐标系下的网格顶点坐标，R3、T3表示世界坐标系到纹理摄像头坐标系的转换矩阵。
步骤2：对几何三角形进行采样，用双线性插值确定采样点在有效纹理图像中的颜色值，从而确定几何三角形在有效纹理图像中的颜色。
步骤3:根据几何模型与纹理摄像头的位置关系,定义纹理图像的权重,并构造复合权重对纹理进行融合处理。其中,定义的函数权重有法向量权重,边缘权重和几何权重。
步骤4:保存几何模型和纹理信息并记录模型与纹理图像的对应关系,并显示具有色彩纹理的三维模型。
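Step 2's bilinear interpolation can be sketched as a single-pixel sampler that blends the four surrounding texels. The array layout and names are illustrative; a real implementation would also clamp coordinates at the image border.

```python
import numpy as np

def bilinear_sample(image, u, v):
    """Color at sub-pixel (u, v): u is the column, v the row."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    img_f = image.astype(float)
    return ((1 - du) * (1 - dv) * img_f[v0, u0]
            + du * (1 - dv) * img_f[v0, u0 + 1]
            + (1 - du) * dv * img_f[v0 + 1, u0]
            + du * dv * img_f[v0 + 1, u0 + 1])

img = np.array([[[0, 0, 0], [100, 100, 100]],
                [[200, 200, 200], [255, 255, 255]]], dtype=np.uint8)
c = bilinear_sample(img, 0.5, 0.5)    # blend of the four texels
```

Sampling at triangle-interior points with this interpolator yields smooth colors even when the mesh is denser than the texture image.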
需要说明的是,在上述流程中或者附图的流程图中示出的步骤可以在诸如一组计算机可执行指令的计算机系统中执行,并且,虽然在流程图中示出了逻辑顺序,但是在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤。
另外,结合图2描述的本申请实施例三维扫描方法可以由计算机设备来实现。图8为根据本申请实施例的计算机设备的硬件结构示意图。
计算机设备可以包括处理器81以及存储有计算机程序指令的存储器82。
具体地,上述处理器81可以包括中央处理器(CPU),或者特定集成电路(Application Specific Integrated Circuit,简称为ASIC),或者可以被配置成实施本申请实施例的一个或多个集成电路。
其中，存储器82可以包括用于数据或指令的大容量存储器。举例来说而非限制，存储器82可包括硬盘驱动器(Hard Disk Drive，简称为HDD)、软盘驱动器、固态驱动器(Solid State Drive，简称为SSD)、闪存、光盘、磁光盘、磁带或通用串行总线(Universal Serial Bus，简称为USB)驱动器或者两个或更多个以上这些的组合。在合适的情况下，存储器82可包括可移除或不可移除(或固定)的介质。在合适的情况下，存储器82可在数据处理装置的内部或外部。在特定实施例中，存储器82是非易失性(Non-Volatile)存储器。在特定实施例中，存储器82包括只读存储器(Read-Only Memory，简称为ROM)。在合适的情况下，该ROM可以是掩模编程的ROM、可编程ROM(Programmable Read-Only Memory，简称为PROM)、可擦除PROM(Erasable Programmable Read-Only Memory，简称为EPROM)、电可擦除PROM(Electrically Erasable Programmable Read-Only Memory，简称为EEPROM)、电可改写ROM(Electrically Alterable Read-Only Memory，简称为EAROM)或闪存(FLASH)或者两个或更多个以上这些的组合。存储器82可以用来存储或者缓存需要处理和/或通信使用的各种数据文件，以及处理器81所执行的可能的程序指令。
处理器81通过读取并执行存储器82中存储的计算机程序指令,以实现上述实施例中的任意一种三维扫描方法。
在其中一些实施例中,计算机设备还可包括通信接口83和总线80。其中,如图8所示,处理器81、存储器82、通信接口83通过总线80连接并完成相互间的通信。
通信接口83用于实现本申请实施例中各模块、装置、单元和/或设备之间的通信。通信接口83还可以实现与其他部件例如:外接设备、图像采集设备、数据库、外部存储以及图像处理工作站等之间进行数据通信。
总线80包括硬件、软件或两者,将计算机设备的部件彼此耦接在一起。总线80包括但不限于以下至少之一:数据总线(Data Bus)、地址总线(Address Bus)、控制总线(Control Bus)、扩展总线(Expansion Bus)、局部总线(Local Bus)。举例来说而非限制,总线80可包括图形加速接口(Accelerated Graphics Port,简称为AGP)或其他图形总线、增强工业标准架构(Extended Industry Standard Architecture,简称为EISA)总线、前端总线(Front Side Bus,简称为FSB)、超传输(Hyper Transport,简称为HT)互连、工业标准架构(Industry Standard Architecture,简称为ISA)总线、无线带宽(InfiniBand)互连、低引脚数(Low Pin Count,简称为LPC)总线、存储器总线、微信道架构(Micro Channel Architecture,简称为MCA)总线、外围组件互连(Peripheral Component Interconnect,简称为PCI)总线、PCI-Express(PCI-X)总线、串行高级技术附件(Serial Advanced Technology Attachment,简称为SATA)总线、视频电子标准协会局部(Video Electronics Standards Association Local Bus,简称为VLB)总线或其他合适的总线或者两个或更多个以上这些的组合。在合适的情况下,总线80可包括一个或多个总线。尽管本申请实施例描述和示出了特定的总线,但本申请考虑任何合适的总线或互连。
另外,结合上述实施例中的三维扫描方法,本申请实施例可提供一种计算机可读存储介质来实现。该计算机可读存储介质上存储有计算机程序指令;该计算机程序指令被处理器执行时实现上述实施例中的任意一种三维扫描方法。
综上所述,通过本申请提供的上述实施例或者可选实施方式,通过非接触的跟踪式扫描的方法获得被扫描对象的具有色彩纹理的三维点面信息,并重建得到具有色彩纹理的三维模型;或者在重建得到三维模型之后,将非接触的跟踪式扫描的方法获得的被扫描对象的色彩纹理信息映射到三维模型的表面。与现有的手持式白光扫描仪相比,本申请实施例进行纹理贴图时,由跟踪器对三维扫描仪进行实时位姿捕捉,确保每一帧贴图都获得准确的转换关系。相比于现有技术,本申请实施例能够灵活、方便地在复杂环境下实现对大型物体表面的色彩纹理扫描,并精确地进行具有色彩纹理的三维模型重建,特别适用于具有色彩纹理的物体的数字化扫描重建和网购商品的彩色三维展示等。
以上所述实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (22)

  1. 一种三维扫描系统,包括三维扫描仪、跟踪器和计算单元,所述三维扫描仪和所述跟踪器分别与所述计算单元电性连接;所述三维扫描仪用于采集被扫描对象的三维点面信息,所述跟踪器用于在所述三维扫描仪采集所述三维点面信息时跟踪所述三维扫描仪的第一位姿,所述计算单元用于根据所述三维点面信息和所述第一位姿,重建所述被扫描对象的三维模型;其特征在于,
    所述三维扫描仪,还用于采集所述被扫描对象表面的色彩纹理信息;
    所述跟踪器,还用于在所述三维扫描仪采集所述被扫描对象表面的色彩纹理信息时跟踪所述三维扫描仪的第二位姿;
    所述计算单元,还用于根据所述色彩纹理信息和所述第二位姿,在所述三维模型的表面生成色彩纹理。
  2. 根据权利要求1所述的三维扫描系统,其中,所述三维扫描仪包括:用于采集所述被扫描对象的三维点面信息的第一摄像头和第二摄像头,以及用于采集所述色彩纹理信息的第三摄像头。
  3. 根据权利要求1所述的三维扫描系统,其中,所述三维扫描仪包括:用于采集所述被扫描对象的三维点面信息的第一摄像头和第二摄像头,其中,所述第一摄像头还用于采集所述色彩纹理信息。
  4. 根据权利要求1所述的三维扫描系统,其中,所述三维扫描仪包括:用于采集所述被扫描对象的三维点面信息的第一摄像头和第二摄像头,其中,所述第二摄像头还用于采集所述色彩纹理信息。
  5. 根据权利要求2所述的三维扫描系统,其中,所述三维扫描仪还包括:结构光投影器,用于在所述三维扫描仪采集所述三维点面信息时在所述被扫描对象的表面投射结构光图案;
    所述三维扫描系统还包括:时钟同步单元,所述时钟同步单元分别与所述三维扫描仪和所述跟踪器电性连接;所述时钟同步单元用于提供时钟同步信号;其中,
    所述结构光投影器、所述第一摄像头、所述第二摄像头、所述第三摄像头以及所述跟踪器根据所述时钟同步信号同步工作。
  6. 根据权利要求3或4所述的三维扫描系统,其中,所述三维扫描仪还包括:结构光投影器,用于在所述三维扫描仪采集所述三维点面信息时在所述被扫描对象的表面投射结构光图案;
    所述三维扫描系统还包括:时钟同步单元,所述时钟同步单元分别与所述三维扫描仪和所述跟踪器电性连接;所述时钟同步单元用于提供时钟同步信号;其中,
    所述结构光投影器、所述第一摄像头、所述第二摄像头以及所述跟踪器根据所述时钟同步信号同步工作。
  7. 根据权利要求5所述的三维扫描系统,其中,所述结构光投影器,用于在所述三维扫描仪采集所述三维点面信息时在所述被扫描对象的表面投影不可见光波段的结构光投影图案;
    所述不可见光波段的结构光投影图案能够被所述第一摄像头和所述第二摄像头捕获,所述不可见光波段的结构光投影图案不能够被所述第三摄像头捕获。
  8. 根据权利要求1所述的三维扫描系统,其中,所述三维扫描系统还包括:可见光源,所述可见光源用于在采集色彩纹理信息时对所述被扫描对象补光。
  9. 一种三维扫描方法,其特征在于,所述方法包括:
    采集被扫描对象的三维点面信息,并在采集所述三维点面信息时跟踪三维扫描仪的第一位姿;以及采集所述被扫描对象表面的色彩纹理信息,并在采集所述色彩纹理信息时跟踪所述三维扫描仪的第二位姿;
    根据所述三维点面信息和所述第一位姿,重建所述被扫描对象的三维模型;
    根据所述色彩纹理信息和所述第二位姿,在所述三维模型的表面生成色彩纹理。
  10. 根据权利要求9所述的三维扫描方法,其中,所述采集被扫描对象的三维点面信息包括:
    在所述被扫描对象的表面投射结构光投影图案;
    使用第一摄像头和第二摄像头采集表面投射有所述结构光投影图案的被扫描对象的图像信息,并根据所述图像信息生成所述被扫描对象的三维点面信息。
  11. 根据权利要求10所述的三维扫描方法,其中,对所述被扫描对象的所述三维点面信息和所述色彩纹理信息的采集是非同时的。
  12. 根据权利要求10所述的三维扫描方法,其中,所述采集被扫描对象的三维点面信息,并在采集所述三维点面信息时跟踪所述三维扫描仪的第一位姿;以及采集所述被扫描对象表面的色彩纹理信息,并在采集所述色彩纹理信息时跟踪所述三维扫描仪的第二位姿包括:
    使用第一摄像头和第二摄像头采集被扫描对象的三维点面信息,并在采集所述三维点面信息时跟踪所述三维扫描仪的第一位姿;以及使用所述第一摄像头采集所述被扫描对象表面的色彩纹理信息,并在采集所述色彩纹理信息时跟踪所述三维扫描仪的第二位姿。
  13. 根据权利要求10所述的三维扫描方法,其中,所述采集被扫描对象的三维点面信息,并在采集所述三维点面信息时跟踪所述三维扫描仪的第一位姿;以及采集所述被扫描对象表面的色彩纹理信息,并在采集所述色彩纹理信息时跟踪所述三维扫描仪的第二位姿包括:
    使用第一摄像头和第二摄像头采集被扫描对象的三维点面信息,并在采集所述三维点面信息时跟踪所述三维扫描仪的第一位姿;以及使用所述第二摄像头采集所述被扫描对象表面的色彩纹理信息,并在采集所述色彩纹理信息时跟踪所述三维扫描仪的第二位姿。
  14. 根据权利要求10所述的三维扫描方法,其中,所述采集被扫描对象的三维点面信息,并在采集所述三维点面信息时跟踪所述三维扫描仪的第一位姿;以及采集所述被扫描对象表面的色彩纹理信息,并在采集所述色彩纹理信息时跟踪所述三维扫描仪的第二位姿包括:
    使用第一摄像头和第二摄像头采集被扫描对象的三维点面信息,并在采集所述三维点面信息时跟踪所述三维扫描仪的第一位姿;以及使用第三摄像头采集所述被扫描对象表面的色彩纹理信息,并在采集所述色彩纹理信息时跟踪所述三维扫描仪的第二位姿。
  15. 根据权利要求14所述的三维扫描方法,其中,在所述被扫描对象的表面投射的结构光投影图案为不可见光波段的结构光投影图案;所述不可见光波段的结构光投影图案能够被采集所述三维点面信息的摄像头捕获,而不能够被采集所述色彩纹理信息的摄像头捕获;对所述被扫描对象的所述三维点面信息和所述色彩纹理信息的采集是同时的。
  16. 根据权利要求9至15中任一项所述的三维扫描方法,其中,所述根据所述色彩纹理信息和所述第二位姿,在所述三维模型的表面生成色彩纹理包括:
    根据所述第二位姿,确定在第一坐标系中采集到的所述色彩纹理信息在第二坐标系中的坐标;
    根据所述坐标,在所述第二坐标系中将所述色彩纹理信息映射到所述三维模型的表面;
    其中,所述三维模型是在所述第二坐标系中重建的。
  17. 根据权利要求15所述的三维扫描方法,其中,所述根据所述三维点面信息和所述第一位姿,重建所述被扫描对象的三维模型;根据所述色彩纹理信息和所述第二位姿,在所述三维模型的表面生成色彩纹理包括:
    在所述第一位姿和所述第二位姿相同的情况下,在第一坐标系中将所述色彩纹理信息映射到所述三维点面信息中;
    在第二坐标系中,根据映射所述色彩纹理信息后的三维点面信息,重建得到具有色彩纹理的所述被扫描对象的三维模型;
    其中,所述三维点面信息和所述色彩纹理信息是在所述第一坐标系中采集的,所述三维模型是在所述第二坐标系中重建的。
  18. 根据权利要求9至15中任一项所述的三维扫描方法,其中,所述根据所述色彩纹理信息和所述第二位姿,在所述三维模型的表面生成色彩纹理包括:
    根据所述第二位姿,确定与所述色彩纹理信息对应的点云;
    根据所述色彩纹理信息,对所述点云进行色彩渲染。
  19. 根据权利要求9至15中任一项所述的三维扫描方法,其中,所述根据所述色彩纹理信息和所述第二位姿,在所述三维模型的表面生成色彩纹理包括:
    将所述三维模型的表面进行网格分割,并根据所述第二位姿,确定分割得到的每个网格对应的色彩纹理信息;
    在分割得到的每个网格中填充对应的色彩纹理信息。
  20. 根据权利要求14所述的三维扫描方法,其中,所述第三摄像头采集所述色彩纹理信息的频率低于所述第一摄像头和所述第二摄像头采集所述三维点面信息的频率。
  21. 一种计算机设备,包括存储器、处理器以及存储在所述存储器上并可在所述处理器上运行的计算机程序,其特征在于,所述处理器执行所述计算机程序时实现如权利要求9至20中任一项所述的三维扫描方法。
  22. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现如权利要求9至20中任一项所述的三维扫描方法。
PCT/CN2021/079192 2020-04-10 2021-03-05 三维扫描方法、三维扫描系统和计算机可读存储介质 WO2021203883A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010278835.9A CN113514008B (zh) 2020-04-10 2020-04-10 三维扫描方法、三维扫描系统和计算机可读存储介质
CN202010278835.9 2020-04-10

Publications (1)

Publication Number Publication Date
WO2021203883A1 true WO2021203883A1 (zh) 2021-10-14

Family

ID=78022859

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/079192 WO2021203883A1 (zh) 2020-04-10 2021-03-05 三维扫描方法、三维扫描系统和计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN113514008B (zh)
WO (1) WO2021203883A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114189594A (zh) * 2022-02-17 2022-03-15 杭州思看科技有限公司 三维扫描装置、方法、计算机设备及存储介质
CN115065761A (zh) * 2022-06-13 2022-09-16 中亿启航数码科技(北京)有限公司 一种多镜头扫描装置及其扫描方法
CN115252992A (zh) * 2022-07-28 2022-11-01 北京大学第三医院(北京大学第三临床医学院) 基于结构光立体视觉的气管插管导航系统
CN115661369A (zh) * 2022-12-14 2023-01-31 思看科技(杭州)股份有限公司 三维扫描方法、三维扫描的控制方法、系统和电子装置
CN116418967A (zh) * 2023-04-13 2023-07-11 青岛图海纬度科技有限公司 水下动态环境激光扫描的色彩还原方法和设备

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114485479B (zh) * 2022-01-17 2022-12-30 吉林大学 基于双目相机和惯性导航的结构光扫描测量方法及系统
CN114554025B (zh) * 2022-04-27 2022-07-22 杭州思看科技有限公司 三维扫描方法、系统、电子装置和存储介质
CN115187663A (zh) * 2022-06-30 2022-10-14 先临三维科技股份有限公司 扫描仪姿态定位方法、装置、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104976968A (zh) * 2015-06-16 2015-10-14 江苏科技大学 一种基于led标签跟踪的三维几何测量方法及系统
CN105157566A (zh) * 2015-05-08 2015-12-16 深圳市速腾聚创科技有限公司 彩色三维激光扫描仪及三维立体彩色点云扫描的方法
CN106898022A (zh) * 2017-01-17 2017-06-27 徐渊 一种手持式快速三维扫描系统及方法
US20170337726A1 (en) * 2016-05-17 2017-11-23 Vangogh Imaging, Inc. 3d photogrammetry
CN108805976A (zh) * 2018-05-31 2018-11-13 武汉中观自动化科技有限公司 一种三维扫描系统及方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8855366B2 (en) * 2011-11-29 2014-10-07 Qualcomm Incorporated Tracking three-dimensional objects
JP6355710B2 (ja) * 2013-03-15 2018-07-11 ファロ テクノロジーズ インコーポレーテッド 非接触型光学三次元測定装置
CN109000582B (zh) * 2018-03-15 2021-07-02 杭州思看科技有限公司 跟踪式三维扫描装置的扫描方法及系统、存储介质、设备
CN109211118A (zh) * 2018-08-13 2019-01-15 宣城徽目智能科技有限公司 一种三维扫描测头空间位姿跟踪系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105157566A (zh) * 2015-05-08 2015-12-16 深圳市速腾聚创科技有限公司 彩色三维激光扫描仪及三维立体彩色点云扫描的方法
CN104976968A (zh) * 2015-06-16 2015-10-14 江苏科技大学 一种基于led标签跟踪的三维几何测量方法及系统
US20170337726A1 (en) * 2016-05-17 2017-11-23 Vangogh Imaging, Inc. 3d photogrammetry
CN106898022A (zh) * 2017-01-17 2017-06-27 徐渊 一种手持式快速三维扫描系统及方法
CN108805976A (zh) * 2018-05-31 2018-11-13 武汉中观自动化科技有限公司 一种三维扫描系统及方法

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114189594A (zh) * 2022-02-17 2022-03-15 杭州思看科技有限公司 三维扫描装置、方法、计算机设备及存储介质
CN115065761A (zh) * 2022-06-13 2022-09-16 中亿启航数码科技(北京)有限公司 一种多镜头扫描装置及其扫描方法
CN115065761B (zh) * 2022-06-13 2023-09-12 中亿启航数码科技(北京)有限公司 一种多镜头扫描装置及其扫描方法
CN115252992A (zh) * 2022-07-28 2022-11-01 北京大学第三医院(北京大学第三临床医学院) 基于结构光立体视觉的气管插管导航系统
CN115661369A (zh) * 2022-12-14 2023-01-31 思看科技(杭州)股份有限公司 三维扫描方法、三维扫描的控制方法、系统和电子装置
CN116418967A (zh) * 2023-04-13 2023-07-11 青岛图海纬度科技有限公司 水下动态环境激光扫描的色彩还原方法和设备
CN116418967B (zh) * 2023-04-13 2023-10-13 青岛图海纬度科技有限公司 水下动态环境激光扫描的色彩还原方法和设备

Also Published As

Publication number Publication date
CN113514008A (zh) 2021-10-19
CN113514008B (zh) 2022-08-23

Similar Documents

Publication Publication Date Title
WO2021203883A1 (zh) 三维扫描方法、三维扫描系统和计算机可读存储介质
US11003897B2 (en) Three-dimensional real face modeling method and three-dimensional real face camera system
CN108876926B (zh) 一种全景场景中的导航方法及系统、ar/vr客户端设备
CN106228507B (zh) 一种基于光场的深度图像处理方法
CN104335005B (zh) 3d扫描以及定位系统
CN110728671B (zh) 基于视觉的无纹理场景的稠密重建方法
US11816829B1 (en) Collaborative disparity decomposition
CN109801374B (zh) 一种通过多角度图像集重构三维模型的方法、介质及系统
US20130095920A1 (en) Generating free viewpoint video using stereo imaging
CN107917701A (zh) 基于主动式双目立体视觉的测量方法及rgbd相机系统
CN103971404A (zh) 一种高性价比的3d实景复制装置
Dias et al. Registration and fusion of intensity and range data for 3D modelling of real world scenes
KR100834157B1 (ko) 영상 합성을 위한 조명환경 재구성 방법 및 프로그램이기록된 기록매체
Serna et al. Data fusion of objects using techniques such as laser scanning, structured light and photogrammetry for cultural heritage applications
WO2022078442A1 (zh) 一种基于光扫描和智能视觉融合的3d信息采集方法
US20220398760A1 (en) Image processing device and three-dimensional measuring system
CN108629828B (zh) 三维大场景的移动过程中的场景渲染过渡方法
US20230062973A1 (en) Image processing apparatus, image processing method, and storage medium
JP4354708B2 (ja) 多視点カメラシステム
Lanman et al. Surround structured lighting for full object scanning
Liu et al. The applications and summary of three dimensional reconstruction based on stereo vision
Harvent et al. Multi-view dense 3D modelling of untextured objects from a moving projector-cameras system
CN104034729A (zh) 用于电路板分选的五维成像系统及其成像方法
CN116205961A (zh) 多镜头组合影像和激光雷达点云的自动配准方法及其系统
Wong et al. 3D object model reconstruction from image sequence based on photometric consistency in volume space

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21784733

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21784733

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03/04/2023)
