WO2019015154A1 - Three-dimensional reconstruction method and device based on a monocular three-dimensional scanning system - Google Patents

Three-dimensional reconstruction method and device based on a monocular three-dimensional scanning system

Info

Publication number
WO2019015154A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
target
stripe
plane equation
line
Prior art date
Application number
PCT/CN2017/107506
Other languages
English (en)
French (fr)
Inventor
刘增艺
王文斌
赵晓波
Original Assignee
先临三维科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 先临三维科技股份有限公司 filed Critical 先临三维科技股份有限公司
Priority to EP17899221.0A priority Critical patent/EP3457078B1/en
Priority to US16/081,958 priority patent/US10783651B2/en
Priority to JP2018560102A priority patent/JP6564537B1/ja
Priority to CA3022442A priority patent/CA3022442C/en
Publication of WO2019015154A1 publication Critical patent/WO2019015154A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2504Calibration devices
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2513Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object with several lines being projected in more than one direction, e.g. grids, patterns
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B21/00Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant
    • G01B21/02Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring length, width, or thickness
    • G01B21/04Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring length, width, or thickness by measuring coordinates of points
    • G01B21/042Calibration or calibration artifacts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/586Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Definitions

  • the present invention relates to the field of three-dimensional scanning, and in particular to a three-dimensional reconstruction method and apparatus based on a monocular three-dimensional scanning system.
  • Three-dimensional digital technology is an emerging interdisciplinary field active in international research in recent years, and is widely used in many fields such as reverse engineering, cultural relics protection, industrial inspection and virtual reality.
  • Handheld portable 3D scanners are widely used in 3D scanning for their convenience and flexibility.
  • The principle of existing handheld 3D scanners is mainly active stereo vision based on structured light.
  • There are various structured light modes, such as infrared laser speckle, DLP-projected speckle, DLP-projected simulated laser stripes, and laser stripes.
  • Among these, handheld 3D scanners that use DLP-projected simulated laser stripes or laser stripes as the structured light have the highest precision and the best scanning detail.
  • Taking DLP-projected simulated laser stripes or laser stripes as the structured light, the basic workflow is: (1) fit planes to the projected stripes; (2) extract marker points and stripe centers from the captured stripe images; (3) segment the stripe centers into connected domains, and match corresponding stripes on the left and right camera images according to the plane equations; (4) find the corresponding marker point centers on the left and right camera images using the epipolar constraint between the two cameras; (5) according to the calibration parameters of the scanning system, perform three-dimensional reconstruction of the matched corresponding stripes and corresponding marker point centers with a three-dimensional reconstruction algorithm; (6) stitch via the marker points and apply rotation-translation to the stripe three-dimensional points to realize handheld three-dimensional scanning.
  • In the above process, the matching of corresponding stripes on the left and right camera images is mainly guided by the stripe light plane equations.
  • When the number of stripes is greater than 15, the matching error rate of corresponding stripes on the camera images rises significantly, and the added noise reduces the accuracy of the scan data.
  • When the number of stripes is less than 15, the scanning efficiency cannot be effectively improved. Therefore, an effective way to improve scanning efficiency under the inherent scanning frame rate limit is to increase the number of stripes while improving the accuracy of stripe matching.
  • However, with the existing handheld multi-stripe binocular three-dimensional scanning technology, the corresponding-point matching error rate grows as the number of stripes increases during scanning, which adds noise to the scan data; the light planes must be calibrated before scanning, which places stricter requirements on equipment installation accuracy and stability; as the number of stripes increases, the complexity of searching for corresponding stripes in the left and right images rises rapidly; the number of stripes is limited, so the full range of the camera's field of view cannot be exploited and the scanning efficiency is not improved; because of binocular occlusion, occluded parts of some measured objects cannot be three-dimensionally reconstructed; and, since binocular stereo vision is used, when the surface of the measured object is stepped, the parallax is discontinuous and mismatches occur.
  • At least some embodiments of the present invention provide a three-dimensional reconstruction method and apparatus based on a monocular three-dimensional scanning system to at least solve the technical problem that binocular stereoscopic three-dimensional reconstruction may have occlusion.
  • According to one embodiment, a three-dimensional reconstruction method based on a monocular three-dimensional scanning system is provided. The monocular three-dimensional scanning system includes an invisible structured light scanning module, a camera and a projection device, and the method includes: collecting a depth map of the measured object using the invisible structured light scanning module, and converting the depth map into a three-dimensional data point set, wherein the three-dimensional data point set includes a plurality of three-dimensional points; determining a target light plane equation corresponding to a target three-dimensional point among the plurality of three-dimensional points; projecting the target three-dimensional point onto a modulated multi-line stripe image, and determining a target stripe in the modulated multi-line stripe image corresponding to the target light plane equation, wherein the modulated multi-line stripe image is an image captured by the camera after the multi-line stripe image is projected onto the measured object by the projection device; and acquiring, according to the target light plane equation and the center coordinates of the target stripe, a three-dimensional point reconstructed by the target stripe in the camera coordinate system.
  • Optionally, before the depth map of the measured object is collected and converted into the three-dimensional data point set, the method further includes: calibrating the monocular three-dimensional scanning system to obtain structural parameters of the monocular three-dimensional scanning system.
  • Optionally, calibrating the monocular three-dimensional scanning system and obtaining its structural parameters includes: calibrating the camera to obtain internal and external parameters of the camera; obtaining a rotation-translation matrix corresponding to the relative positional relationship between the invisible structured light scanning module and the camera; and calibrating the light plane equation corresponding to each stripe in the multi-line stripe image to obtain a plurality of calibrated light plane equations.
  • Optionally, determining the target light plane equation corresponding to the target three-dimensional point among the plurality of three-dimensional points includes: obtaining the Euclidean distances from the target three-dimensional point to the plurality of calibrated light plane equations, and determining, among the plurality of calibrated light plane equations, the light plane equation with the shortest Euclidean distance; and, when the Euclidean distance between the target three-dimensional point and that light plane equation is below a predetermined distance, determining the light plane equation with the shortest Euclidean distance as the target light plane equation.
  • Optionally, projecting the target three-dimensional point onto the modulated multi-line stripe image and determining the target stripe corresponding to the target light plane equation in the modulated multi-line stripe image includes: judging whether a stripe line segment exists within a preset range of the projection point of the target three-dimensional point in the modulated multi-line stripe image, wherein the stripe line segment is a line segment formed by segmenting the centerline connected domains after centerline extraction is performed on the modulated multi-line stripe image; and, when a stripe line segment exists within the preset range of the projection point, determining that stripe line segment as the target stripe corresponding to the target light plane equation.
  • a storage medium comprising a stored program, wherein the program is executed to perform the method of any of the above.
  • a processor configured to execute a program, wherein the program is executed to perform the method of any of the above.
  • a three-dimensional reconstruction apparatus based on a monocular three-dimensional scanning system.
  • The monocular three-dimensional scanning system includes an invisible structured light scanning module, a camera and a projection device, and the device includes: a collecting unit configured to collect a depth map of the measured object using the invisible structured light scanning module and convert the depth map into a three-dimensional data point set, wherein the three-dimensional data point set includes a plurality of three-dimensional points; a determining unit configured to determine a target light plane equation corresponding to a target three-dimensional point among the plurality of three-dimensional points; a projection unit configured to project the target three-dimensional point onto a modulated multi-line stripe image and determine a target stripe in the modulated multi-line stripe image corresponding to the target light plane equation, wherein the modulated multi-line stripe image is an image captured by the camera after the multi-line stripe image is projected onto the measured object by the projection device; and an obtaining unit configured to acquire, according to the target light plane equation and the center coordinates of the target stripe, a three-dimensional point reconstructed by the target stripe in the camera coordinate system.
  • Optionally, the device further includes: a calibration module configured to calibrate the monocular three-dimensional scanning system and obtain structural parameters of the monocular three-dimensional scanning system before the depth map of the measured object is collected using the invisible structured light scanning module and converted into a three-dimensional data point set.
  • In at least some embodiments, the target light plane equation corresponding to a target three-dimensional point in the three-dimensional data point set converted from the depth map can be determined from the depth map of the measured object collected by the invisible structured light scanning module; the target stripe corresponding to the target light plane equation is then determined in the modulated multi-line stripe image captured by the single camera; and the three-dimensional point reconstructed by the target stripe in the camera coordinate system is then obtained from the target light plane equation and the center coordinates of the target stripe. This achieves accurate reconstruction of three-dimensional points using a monocular three-dimensional scanning system and completes three-dimensional scanning, avoiding the visual discontinuity caused by binocular stereo vision in a binocular three-dimensional scanning system when the surface of the measured object is stepped, as well as the occlusion of part of the measured object that prevents the two cameras of a binocular scanning system from capturing the occluded part and reconstructing it in three dimensions, thereby solving the technical problem that occlusion exists in binocular stereo vision three-dimensional reconstruction.
  • FIG. 1 is a flow chart of an optional three-dimensional reconstruction method based on a monocular three-dimensional scanning system according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of an optional multi-line stripe demodulation pattern in accordance with an embodiment of the present invention
  • FIG. 3 is a schematic diagram of optional stripe line segment segmentation and back-projection of the three-dimensional module point cloud according to an embodiment of the invention;
  • FIG. 4 is a schematic structural diagram of an optional handheld three-dimensional scanning system combining an infrared structured light three-dimensional module with monocular multi-line stripes according to an embodiment of the invention;
  • FIG. 5 is a schematic diagram of an alternative three-dimensional reconstruction apparatus based on a monocular three-dimensional scanning system, in accordance with an embodiment of the present invention.
  • an embodiment of a three-dimensional reconstruction method based on a monocular three-dimensional scanning system is provided.
  • The steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that described herein.
  • The monocular three-dimensional scanning system in the three-dimensional reconstruction method based on the monocular three-dimensional scanning system of the embodiment of the present invention may include an invisible structured light scanning module, a camera and a projection device. FIG. 1 is a flowchart of an optional three-dimensional reconstruction method based on a monocular three-dimensional scanning system according to an embodiment of the present invention; as shown in FIG. 1, the method includes the following steps:
  • Step S102: collecting a depth map of the measured object using the invisible structured light scanning module, and converting the depth map into a three-dimensional data point set, wherein the three-dimensional data point set includes a plurality of three-dimensional points;
  • Step S104: determining a target light plane equation corresponding to a target three-dimensional point among the plurality of three-dimensional points;
  • Step S106: projecting the target three-dimensional point onto the modulated multi-line stripe image, and determining a target stripe corresponding to the target light plane equation in the modulated multi-line stripe image, wherein the modulated multi-line stripe image is an image captured by the camera after the multi-line stripe image is projected onto the measured object by the projection device;
  • Step S108: acquiring a three-dimensional point reconstructed by the target stripe in the camera coordinate system according to the target light plane equation and the center coordinates of the target stripe.
  • Through the above steps, the target light plane equation corresponding to a target three-dimensional point in the three-dimensional data point set converted from the depth map is determined from the depth map of the measured object collected by the invisible structured light scanning module; the target stripe corresponding to the target light plane equation is then determined in the modulated multi-line stripe image captured by the single camera; and the three-dimensional point reconstructed by the target stripe in the camera coordinate system is then obtained from the target light plane equation and the center coordinates of the target stripe. This achieves accurate reconstruction of three-dimensional points using the monocular three-dimensional scanning system and completes the three-dimensional scan, avoiding the visual discontinuity caused by binocular stereo vision when the surface of the measured object is stepped, and the occlusion of part of the measured object that prevents the two cameras of a binocular scanning system from capturing the occluded part and reconstructing it in three dimensions, thereby solving the technical problem that occlusion exists in binocular stereo vision three-dimensional reconstruction.
  • the projection device may be a digital projector, and the corresponding projected multi-line stripe image may be a digital analog laser multi-line stripe image, wherein the digital simulated laser multi-line stripe image may be performed by a monocular three-dimensional scanning system The computer is generated and projected by the digital projector onto the object being measured.
  • the projection device may also be a laser projection device, and the corresponding multi-line stripe image may be a laser multi-line stripe image, which may be directly projected onto the object to be measured by the laser projection device.
  • It should be noted that the embodiments of the present invention are described by taking the case where the projection device is a digital projector and the projected multi-line stripe image is a digital multi-line stripe image as an example; however, the projection device is not limited to a digital projector, nor is the projected multi-line stripe image limited to a digital multi-line stripe image.
  • As an optional embodiment, before the depth map of the measured object is collected using the invisible structured light scanning module and converted into a three-dimensional data point set, the implementation may further include: calibrating the monocular three-dimensional scanning system to obtain structural parameters of the monocular three-dimensional scanning system.
  • the invisible structured light scanning module may be an infrared structured light scanning module.
  • With the above embodiment of the present invention, the monocular three-dimensional scanning system can first be calibrated to obtain its structural parameters, so that accurate structural parameters are available from the calibration and three-dimensional points can be accurately reconstructed.
  • the monocular three-dimensional scanning system is calibrated, and obtaining the structural parameters of the monocular three-dimensional scanning system may include: calibrating the camera, acquiring internal and external parameters of the camera; and acquiring the invisible structured optical scanning module and The rotational translation matrix corresponding to the relative positional relationship between the cameras; the light plane equation corresponding to each stripe in the multi-line stripe image is calibrated to obtain a plurality of calibrated light plane equations.
  • With the above embodiment of the present invention, in the process of calibrating the monocular three-dimensional scanning system, the internal and external parameters of the camera can be obtained by calibrating the camera; the rotation-translation matrix corresponding to the relative positional relationship between the invisible structured light scanning module and the camera can be obtained by calibrating that relative positional relationship; and the light plane equation corresponding to each stripe in the multi-line stripe image can be calibrated to obtain a plurality of calibrated light plane equations, so that three-dimensional points can be accurately reconstructed from the camera's internal and external parameters, the rotation-translation matrix and the light plane equations.
  • As an optional embodiment, determining the target light plane equation corresponding to the target three-dimensional point among the plurality of three-dimensional points may include: obtaining the Euclidean distances from the target three-dimensional point to the plurality of calibrated light plane equations, and determining the light plane equation with the shortest Euclidean distance among them; and, when the Euclidean distance between the target three-dimensional point and that light plane equation is below a predetermined distance, determining the light plane equation with the shortest Euclidean distance as the target light plane equation.
  • With the above embodiment of the present invention, the light plane equation with the shortest Euclidean distance can thus be determined as the target light plane equation, so that three-dimensional points can be accurately reconstructed from the target light plane equation.
  • As an optional embodiment, projecting the target three-dimensional point onto the modulated multi-line stripe image and determining the target stripe corresponding to the target light plane equation in the modulated multi-line stripe image may include: judging whether a stripe line segment exists within a preset range of the projection point of the target three-dimensional point in the modulated multi-line stripe image, wherein the stripe line segment is a line segment formed by segmenting the centerline connected domains after centerline extraction is performed on the modulated multi-line stripe image; and, when a stripe line segment exists within the preset range of the projection point, determining that stripe line segment as the target stripe corresponding to the target light plane equation.
  • With the above embodiment of the present invention, it is judged whether such a stripe line segment exists within the preset range of the projection point of the target three-dimensional point, and, when it does, the stripe line segment is determined as the target stripe corresponding to the target light plane equation. The target stripe corresponding to the target light plane equation can thus be identified among the stripe line segments, the corresponding target stripe can be computed using the target light plane equation, and the three-dimensional point can be accurately reconstructed.
  • As an optional embodiment, acquiring the three-dimensional point reconstructed by the target stripe in the camera coordinate system according to the target light plane equation and the center coordinates of the target stripe may include calculating the coordinates of the three-dimensional point from the following equations: AXi+BYi+CZi+D=0; (u-cx)/fx=Xi/Zi; (v-cy)/fy=Yi/Zi; where (Xi, Yi, Zi) are the coordinates of the three-dimensional point, A, B, C, D are the coefficients of the target light plane equation, (u, v) are the center coordinates of the target stripe, (cx, cy) are the principal point coordinates of the camera, and fx, fy are the equivalent focal lengths of the camera.
  • With these known quantities, the coordinates (Xi, Yi, Zi) of the three-dimensional point can be accurately obtained, and the three-dimensional point can be accurately constructed.
  • The present invention also provides a preferred embodiment, which provides a monocular multi-line three-dimensional scanning method combining structured light of different wavelength bands.
  • The invention mainly takes, as an example, the technical improvement obtained by combining a three-dimensional module in the invisible band (infrared structured light) with monocular visible-light multi-line stripes.
  • The purpose of the invention is to use the three-dimensional data reconstructed by the infrared structured light three-dimensional module to guide the three-dimensional reconstruction of the monocular multi-line stripes.
  • The key is that the three-dimensional reconstruction data of the infrared structured light three-dimensional module guides the accurate matching between the monocular multi-line stripes and the light plane equations, which improves the matching accuracy of the multiple stripes and increases the number of matched stripes, thereby improving the scanning efficiency of the handheld 3D scanning system. For a camera with a resolution of 1.3 megapixels, 100 stripes can be achieved; at the same frame rate and camera resolution, the scanning efficiency is more than 10 times higher than in the prior art.
  • At the same time, multi-stripe scanning can be stitched in real time according to features without using marker points.
  • The technical solution provided by the invention comprises the following parts: device construction, system calibration, digital projection and image acquisition, determining the sequence numbers of the light plane equations associated with the point set PtS, and guiding the matching of corresponding stripes in the multi-line stripe image and the three-dimensional reconstruction.
  • a three-dimensional digital imaging sensor composed of an infrared structured light three-dimensional scanning module and a camera and a digital projector may be constructed, and a relative position between the device components is fixed, and the measured object is placed within the measurement range.
  • The system calibration part includes: calibrating the camera to obtain the camera's internal and external parameters, the intrinsic parameters A and the extrinsic parameters R and T, and at the same time calibrating the rotation-translation matrix Ms corresponding to the relative positional relationship between the infrared structured light three-dimensional scanning module and the camera; and performing light plane calibration of the multi-line stripes, i.e., calibrating the light plane equation plane(i)={AXi+BYi+CZi+D=0} corresponding to each stripe, where (Xi, Yi, Zi) are the three-dimensional points of the stripe line segment reconstructed in the camera coordinate system.
  • FIG. 2 is a schematic diagram of an optional multi-line stripe demodulation pattern according to an embodiment of the present invention.
  • As shown in FIG. 2, a digital multi-line stripe pattern with more than 15 stripes (the number of stripes can reach 100 or more) is generated by a computer and projected onto the measured object by the digital projector; the digital laser pattern is deformed by the height modulation of the object, producing a modulated digital multi-line stripe pattern, and the camera synchronously captures the modulated multi-line stripe pattern.
  • Optionally, the sequence numbers of the light plane equations associated with the point set PtS may be determined. After the three-dimensional data PtS of the infrared structured light three-dimensional scanning module is acquired, the Euclidean distance from each three-dimensional point pt(i) of the point set PtS (the target three-dimensional point) to each light plane equation is calculated in turn, with the distance threshold vTH set to 0.5 mm.
  • Assuming that the distance from pt(i) to the n-th light plane equation is the shortest, and within the vTH threshold range, the n-th light plane equation corresponding to the three-dimensional point pt(i) is retained and recorded; if pt(i) cannot be associated with any light plane equation, the point is removed.
  • Each three-dimensional point of the resulting point set PtS then corresponds to a light plane equation.
  • FIG. 3 is a schematic diagram of optional stripe line segment segmentation and back-projection of the three-dimensional module point cloud according to an embodiment of the present invention. As shown in FIG. 3, centerline extraction is performed on the modulated multi-line stripe pattern, and the connected domain of each centerline is then segmented to form a plurality of independent line segments.
  • Each pt(i) in the three-dimensional data PtS of the infrared structured light three-dimensional module is then projected in turn onto the demodulated multi-line stripe image according to the camera's calibrated intrinsic parameters; if an independent multi-line stripe line segment exists within the eight-neighborhood of the projection point of pt(i), that independent line segment (the target stripe) is determined to correspond to the n-th light plane equation (the target light plane equation).
  • FIG. 4 is a schematic structural diagram of an optional handheld three-dimensional scanning system combining an infrared structured light three-dimensional module with monocular multi-line stripes according to an embodiment of the present invention.
  • As shown in FIG. 4, the system includes: a digital projector 101, a camera 102, an infrared structured light three-dimensional module 103, a computer 104, and a measured sample 105.
  • the internal parameters of the camera are:
  • the camera external parameters are:
  • T = [-1.77 -5.5 450].
  • the internal parameters of the infrared structured light three-dimensional module are:
  • system structure parameters between the infrared structured light 3D module and the camera are:
  • With the light plane equations of the multi-line stripes calibrated in advance, the DLP projects visible-band multi-line stripes onto the measured sample, synchronously triggering the infrared structured light three-dimensional module to collect the three-dimensional data of the measured object and the camera to capture the multi-line stripes; centerline extraction and connected-domain segmentation are performed on the captured demodulated multi-line stripe image. The distances from the three-dimensional data of the infrared three-dimensional module to the light plane equations are calculated, and the sequence numbers of the light planes within the distance threshold are retained and recorded.
  • the three-dimensional data is back-projected onto the camera image plane, and if there is an intersection with the multi-line stripe line segment, the light plane equation corresponding to the line segment is determined.
  • the three-dimensional data of the multi-line stripe is calculated from the multi-line stripe image coordinates and the corresponding light plane equation according to the calibrated camera parameters.
  • The technical solution provided by the invention can use the three-dimensional reconstruction data of the invisible structured light band to guide the monocular three-dimensional reconstruction of the visible-band structured light; it can achieve accurate matching between the visible-light monocular multi-line stripes and the corresponding light plane equations; it can determine the light plane equations using the three-dimensional reconstruction data of the invisible-light three-dimensional module; and it can realize monocular multi-stripe scanning with real-time stitching according to features, without using marker points.
  • the technical solution provided by the invention can simplify the difficulty of matching the monocular multi-stripes with the corresponding optical plane equation and improve the matching accuracy.
  • the limitation of the number of projection stripes in the prior art is removed, and the scanning rate can be increased by more than ten times under the same conditions.
  • The problem of occlusion in binocular stereo vision is solved by the monocular three-dimensional reconstruction method, and collaborative scanning with structured light of different wavelength bands is achieved.
  • According to another aspect, an embodiment of the present invention further provides a storage medium. The storage medium includes a stored program, wherein, when the program runs, the device where the storage medium is located is controlled to perform the above three-dimensional reconstruction method based on a monocular three-dimensional scanning system.
  • an embodiment of the present invention further provides a processor configured to execute a program, wherein the program is executed to perform the above-described three-dimensional reconstruction method based on a monocular three-dimensional scanning system.
  • According to an embodiment of the present invention, an embodiment of a three-dimensional reconstruction device based on a monocular three-dimensional scanning system is also provided. It should be noted that the three-dimensional reconstruction device based on the monocular three-dimensional scanning system is configured to perform the three-dimensional reconstruction method based on the monocular three-dimensional scanning system in the embodiments of the present invention, and that method can be executed in this device.
  • The monocular three-dimensional scanning system in the three-dimensional reconstruction device based on the monocular three-dimensional scanning system of the embodiment of the present invention may include an invisible structured light scanning module, a camera and a projection device. FIG. 5 is a schematic diagram of an optional three-dimensional reconstruction device based on a monocular three-dimensional scanning system according to an embodiment of the present invention; as shown in FIG. 5, the device may include:
  • a collecting unit 61 configured to collect the depth map of the measured object using the invisible structured light scanning module and convert the depth map into a three-dimensional data point set, wherein the three-dimensional data point set includes a plurality of three-dimensional points; a determining unit 63 configured to determine a target light plane equation corresponding to a target three-dimensional point among the plurality of three-dimensional points; a projection unit 65 configured to project the target three-dimensional point onto the modulated multi-line stripe image and determine a target stripe corresponding to the target light plane equation in the modulated multi-line stripe image, wherein the modulated multi-line stripe image is an image captured by the camera after the multi-line stripe image is projected onto the measured object by the projection device; and an obtaining unit 67 configured to acquire, according to the target light plane equation and the center coordinates of the target stripe, the three-dimensional point reconstructed by the target stripe in the camera coordinate system.
  • the collecting unit 61 in this embodiment is configured to perform step S102 in the embodiment of the present application.
  • the determining unit 63 in this embodiment is configured to perform step S104 in the embodiment of the present application.
  • the projection unit 65 in this embodiment is configured to perform step S106 in the embodiment of the present application, and the obtaining unit 67 in this embodiment is configured to perform step S108 in the embodiment of the present application.
  • the above modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the contents disclosed in the above embodiments.
  • With the above embodiment of the present invention, the target light plane equation corresponding to a target three-dimensional point in the three-dimensional data point set converted from the depth map can be determined from the depth map of the measured object collected by the invisible structured light scanning module; the target stripe corresponding to the target light plane equation is then determined in the modulated multi-line stripe image captured by the single camera; and the three-dimensional point reconstructed by the target stripe in the camera coordinate system is then obtained from the target light plane equation and the center coordinates of the target stripe. This achieves accurate reconstruction of three-dimensional points using the monocular three-dimensional scanning system and completes the three-dimensional scan, avoiding the visual discontinuity caused by binocular stereo vision when the surface of the measured object is stepped, and the occlusion of part of the measured object that prevents the two cameras of a binocular scanning system from capturing the occluded part and reconstructing it in three dimensions, thereby solving the technical problem that occlusion exists in binocular stereo vision three-dimensional reconstruction.
  • As an optional embodiment, the apparatus further includes: a calibration module configured to calibrate the monocular three-dimensional scanning system and obtain structural parameters of the monocular three-dimensional scanning system before the depth map of the measured object is collected using the invisible structured light scanning module and converted into a three-dimensional data point set.
  • As an optional embodiment, the calibration module includes: a first calibration submodule configured to calibrate the camera and obtain the internal and external parameters of the camera; a first obtaining module configured to obtain a rotation-translation matrix corresponding to the relative positional relationship between the invisible structured light scanning module and the camera; and a second calibration submodule configured to calibrate the light plane equation corresponding to each stripe in the multi-line stripe image and obtain a plurality of calibrated light plane equations.
  • As an optional embodiment, the determining unit includes: a second obtaining module configured to obtain the Euclidean distances from the target three-dimensional point to the plurality of calibrated light plane equations and determine, among the plurality of calibrated light plane equations, the light plane equation with the shortest Euclidean distance; and a first determining module configured to determine the light plane equation with the shortest Euclidean distance as the target light plane equation when the Euclidean distance between the target three-dimensional point and that light plane equation is below a predetermined distance.
  • As an optional embodiment, the projection unit includes: a judging module configured to judge whether a stripe line segment exists within a preset range of the projection point of the target three-dimensional point in the modulated multi-line stripe image, wherein the stripe line segment is a line segment formed by segmenting the centerline connected domains after centerline extraction is performed on the modulated multi-line stripe image; and a second determining module configured to determine the stripe line segment as the target stripe corresponding to the target light plane equation when a stripe line segment exists within the preset range of the projection point of the target three-dimensional point in the modulated multi-line stripe image.
  • the disclosed technical contents may be implemented in other manners.
  • the device embodiments described above are only schematic.
  • The division of the units may be a division by logical function, and there may be other ways of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • The technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • As described above, the three-dimensional reconstruction method and apparatus based on a monocular three-dimensional scanning system provided by the embodiments of the present invention have the following beneficial effects: three-dimensional points are accurately reconstructed using the monocular three-dimensional scanning system to complete three-dimensional scanning, thereby avoiding the occlusion that exists in binocular stereo vision three-dimensional reconstruction.

Abstract

The present invention discloses a three-dimensional reconstruction method and device based on a monocular three-dimensional scanning system. The monocular three-dimensional scanning system includes an invisible structured light scanning module, a camera and a projection device. The method includes: collecting a depth map of a measured object using the invisible structured light scanning module, and converting the depth map into a three-dimensional data point set, the three-dimensional data point set including a plurality of three-dimensional points; determining a target light plane equation corresponding to a target three-dimensional point among the plurality of three-dimensional points; projecting the target three-dimensional point onto a modulated multi-line stripe image, and determining a target stripe in the modulated multi-line stripe image corresponding to the target light plane equation; and acquiring, according to the target light plane equation and the center coordinates of the target stripe, a three-dimensional point reconstructed by the target stripe in the camera coordinate system. The present invention solves the technical problem that occlusion exists in binocular stereo vision three-dimensional reconstruction.

Description

Three-dimensional reconstruction method and device based on a monocular three-dimensional scanning system
Technical Field
The present invention relates to the field of three-dimensional scanning, and in particular to a three-dimensional reconstruction method and device based on a monocular three-dimensional scanning system.
Background Art
Three-dimensional digitization is an emerging interdisciplinary field that has been actively researched internationally in recent years, and it is widely applied in many areas such as reverse engineering, cultural relics protection, industrial inspection and virtual reality. Handheld portable three-dimensional scanners are widely used in three-dimensional scanning for their convenience and flexibility. The principle of existing handheld three-dimensional scanners is mainly active stereo vision based on structured light, and there are various structured light modes, such as infrared laser speckle, DLP-projected speckle, DLP-projected simulated laser stripes, and laser stripes. Among these modes, handheld three-dimensional scanners using DLP-projected simulated laser stripes or laser stripes as the structured light have the highest precision and the best scanning detail. Taking DLP-projected simulated laser stripes or laser stripes as the structured light, the basic workflow is:
(1) Fit planes to the projected stripes;
(2) Extract marker points and stripe centers from the captured stripe images;
(3) Segment the stripe centers into connected domains, and match corresponding stripes on the left and right camera images according to the plane equations;
(4) Use the epipolar constraint between the two cameras to find the corresponding marker point centers on the left and right camera images;
(5) According to the calibration parameters of the scanning system, perform three-dimensional reconstruction of the matched corresponding stripes and corresponding marker point centers with a three-dimensional reconstruction algorithm;
(6) Stitch via the marker points and apply rotation-translation to the stripe three-dimensional points to realize handheld three-dimensional scanning.
In the above process, the matching of corresponding stripes on the left and right camera images is mainly guided by the stripe plane equations. When the number of stripes is greater than 15, the matching error rate of corresponding stripes on the left and right camera images rises significantly, and the resulting noise reduces the accuracy of the scan data. When the number of stripes is less than 15, the scanning efficiency cannot be effectively improved. Therefore, under the inherent limit on the scanning frame rate, an effective way to improve scanning efficiency is to increase the number of stripes while improving the accuracy of stripe matching.
However, with the existing handheld multi-stripe binocular three-dimensional scanning technology, the corresponding-point matching error rate grows as the number of stripes increases during scanning, which adds noise to the scan data; the light planes must be calibrated before scanning, which places stricter requirements on the installation accuracy and stability of the equipment; in addition, as the number of stripes increases, the complexity of searching for corresponding stripes in the left and right images rises rapidly; the number of stripes is limited, so the full range of the camera's field of view cannot be exploited and the scanning efficiency is not improved; occluded parts of some measured objects cannot be three-dimensionally reconstructed because of binocular occlusion; and, since binocular stereo vision is used, when the surface of the measured object is stepped, the parallax is discontinuous and mismatches occur.
No effective solution has yet been proposed for the problem that occlusion exists in binocular stereo vision three-dimensional reconstruction.
Summary of the Invention
At least some embodiments of the present invention provide a three-dimensional reconstruction method and device based on a monocular three-dimensional scanning system, so as at least to solve the technical problem that occlusion exists in binocular stereo vision three-dimensional reconstruction.
According to one embodiment of the present invention, a three-dimensional reconstruction method based on a monocular three-dimensional scanning system is provided. The monocular three-dimensional scanning system includes an invisible structured light scanning module, a camera and a projection device, and the method includes: collecting a depth map of a measured object using the invisible structured light scanning module, and converting the depth map into a three-dimensional data point set, wherein the three-dimensional data point set includes a plurality of three-dimensional points; determining a target light plane equation corresponding to a target three-dimensional point among the plurality of three-dimensional points; projecting the target three-dimensional point onto a modulated multi-line stripe image, and determining a target stripe in the modulated multi-line stripe image corresponding to the target light plane equation, wherein the modulated multi-line stripe image is an image captured by the camera after a multi-line stripe image is projected onto the measured object by the projection device; and acquiring, according to the target light plane equation and the center coordinates of the target stripe, a three-dimensional point reconstructed by the target stripe in the camera coordinate system.
Optionally, before the depth map of the measured object is collected using the invisible structured light scanning module and converted into the three-dimensional data point set, the method further includes: calibrating the monocular three-dimensional scanning system to obtain structural parameters of the monocular three-dimensional scanning system.
Optionally, calibrating the monocular three-dimensional scanning system to obtain the structural parameters of the monocular three-dimensional scanning system includes: calibrating the camera to obtain internal and external parameters of the camera; obtaining a rotation-translation matrix corresponding to the relative positional relationship between the invisible structured light scanning module and the camera; and calibrating the light plane equation corresponding to each stripe in the multi-line stripe image to obtain a plurality of calibrated light plane equations.
Optionally, determining the target light plane equation corresponding to the target three-dimensional point among the plurality of three-dimensional points includes: obtaining the Euclidean distances from the target three-dimensional point to the plurality of calibrated light plane equations, and determining, among the plurality of calibrated light plane equations, the light plane equation with the shortest Euclidean distance; and, when the Euclidean distance between the target three-dimensional point and the light plane equation with the shortest Euclidean distance is below a predetermined distance, determining the light plane equation with the shortest Euclidean distance as the target light plane equation.
Optionally, projecting the target three-dimensional point onto the modulated multi-line stripe image and determining the target stripe in the modulated multi-line stripe image corresponding to the target light plane equation includes: judging whether a stripe line segment exists within a preset range of the projection point of the target three-dimensional point in the modulated multi-line stripe image, wherein the stripe line segment is a line segment formed by segmenting the centerline connected domains after centerline extraction is performed on the modulated multi-line stripe image; and, when a stripe line segment exists within the preset range of the projection point of the target three-dimensional point in the modulated multi-line stripe image, determining the stripe line segment as the target stripe corresponding to the target light plane equation.
Optionally, acquiring, according to the target light plane equation and the center coordinates of the target stripe, the three-dimensional point reconstructed by the target stripe in the camera coordinate system includes: calculating the coordinates of the three-dimensional point according to the following equations: AXi+BYi+CZi+D=0; (u-cx)/fx=Xi/Zi; (v-cy)/fy=Yi/Zi; where (Xi, Yi, Zi) are the coordinates of the three-dimensional point, A, B, C, D are the coefficients of the target light plane equation, (u, v) are the center coordinates of the target stripe, (cx, cy) are the coordinates of the camera's principal point, and fx, fy are the equivalent focal lengths of the camera.
According to one embodiment of the present invention, a storage medium is further provided. The storage medium includes a stored program, wherein the method of any one of the above is performed when the program runs.
According to one embodiment of the present invention, a processor is further provided. The processor is configured to run a program, wherein the method of any one of the above is performed when the program runs.
According to one embodiment of the present invention, a three-dimensional reconstruction device based on a monocular three-dimensional scanning system is further provided. The monocular three-dimensional scanning system includes an invisible structured light scanning module, a camera and a projection device, and the device includes: a collecting unit configured to collect a depth map of a measured object using the invisible structured light scanning module and convert the depth map into a three-dimensional data point set, wherein the three-dimensional data point set includes a plurality of three-dimensional points; a determining unit configured to determine a target light plane equation corresponding to a target three-dimensional point among the plurality of three-dimensional points; a projection unit configured to project the target three-dimensional point onto a modulated multi-line stripe image and determine a target stripe in the modulated multi-line stripe image corresponding to the target light plane equation, wherein the modulated multi-line stripe image is an image captured by the camera after a multi-line stripe image is projected onto the measured object by the projection device; and an obtaining unit configured to acquire, according to the target light plane equation and the center coordinates of the target stripe, a three-dimensional point reconstructed by the target stripe in the camera coordinate system.
Optionally, the device further includes: a calibration module configured to calibrate the monocular three-dimensional scanning system and obtain structural parameters of the monocular three-dimensional scanning system before the depth map of the measured object is collected using the invisible structured light scanning module and converted into the three-dimensional data point set.
In at least some embodiments of the present invention, the target light plane equation corresponding to a target three-dimensional point in the three-dimensional data point set converted from the depth map can be determined according to the depth map of the measured object collected by the invisible structured light scanning module; the target stripe corresponding to the target light plane equation is then determined in the modulated multi-line stripe image captured by the single camera; and the three-dimensional point reconstructed by the target stripe in the camera coordinate system is then obtained according to the target light plane equation and the center coordinates of the target stripe. This achieves accurate reconstruction of three-dimensional points and completes three-dimensional scanning using a monocular three-dimensional scanning system, avoiding the visual discontinuity caused by binocular stereo vision in a binocular three-dimensional scanning system when the surface of the measured object is stepped, as well as the situation where part of the measured object is occluded so that the two cameras of a binocular scanning system cannot capture images of the occluded part and the occluded part cannot be three-dimensionally reconstructed, thereby solving the technical problem that occlusion exists in binocular stereo vision three-dimensional reconstruction.
Brief Description of the Drawings
The drawings described here are intended to provide a further understanding of the present invention and form a part of the present application. The schematic embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
FIG. 1 is a flowchart of an optional three-dimensional reconstruction method based on a monocular three-dimensional scanning system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an optional demodulated multi-line stripe pattern according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of optional stripe line segment segmentation and back-projection of the three-dimensional module point cloud according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an optional handheld three-dimensional scanning system combining an infrared structured light three-dimensional module with monocular multi-line stripes according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an optional three-dimensional reconstruction device based on a monocular three-dimensional scanning system according to an embodiment of the present invention.
Detailed Description of the Embodiments
In order to enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", etc. in the description and claims of the present invention and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present invention described here can be implemented in an order other than those illustrated or described here. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that includes a series of steps or units is not necessarily limited to those steps or units clearly listed, but may include other steps or units not clearly listed or inherent to such process, method, product or device.
According to an embodiment of the present invention, an embodiment of a three-dimensional reconstruction method based on a monocular three-dimensional scanning system is provided. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that described here.
The monocular three-dimensional scanning system in the three-dimensional reconstruction method based on the monocular three-dimensional scanning system of the embodiment of the present invention may include an invisible structured light scanning module, a camera and a projection device. FIG. 1 is a flowchart of an optional three-dimensional reconstruction method based on a monocular three-dimensional scanning system according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
Step S102: collect a depth map of the measured object using the invisible structured light scanning module, and convert the depth map into a three-dimensional data point set, wherein the three-dimensional data point set includes a plurality of three-dimensional points;
Step S104: determine a target light plane equation corresponding to a target three-dimensional point among the plurality of three-dimensional points;
Step S106: project the target three-dimensional point onto a modulated multi-line stripe image, and determine a target stripe in the modulated multi-line stripe image corresponding to the target light plane equation, wherein the modulated multi-line stripe image is an image captured by the camera after a multi-line stripe image is projected onto the measured object by the projection device;
Step S108: acquire, according to the target light plane equation and the center coordinates of the target stripe, a three-dimensional point reconstructed by the target stripe in the camera coordinate system.
Through the above steps, the target light plane equation corresponding to a target three-dimensional point in the three-dimensional data point set converted from the depth map can be determined according to the depth map of the measured object collected by the invisible structured light scanning module; the target stripe corresponding to the target light plane equation is then determined in the modulated multi-line stripe image captured by the single camera; and the three-dimensional point reconstructed by the target stripe in the camera coordinate system is then obtained according to the target light plane equation and the center coordinates of the target stripe. This achieves accurate reconstruction of three-dimensional points and completes three-dimensional scanning using a monocular three-dimensional scanning system, avoiding the visual discontinuity caused by binocular stereo vision in a binocular three-dimensional scanning system when the surface of the measured object is stepped, as well as the occlusion of part of the measured object that prevents the two cameras of a binocular scanning system from capturing images of the occluded part, so that the occluded part cannot be three-dimensionally reconstructed; this solves the technical problem that occlusion exists in binocular stereo vision three-dimensional reconstruction.
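To make step S102 concrete, the following is a minimal sketch (not part of the patent text) of converting a depth map into a three-dimensional point set under a pinhole model and then mapping it into the visible camera's frame with a rotation-translation; the function name, the intrinsics fx, fy, cx, cy and the placeholder Rs, Ts values are illustrative assumptions only.

```python
import numpy as np

def depth_map_to_points(depth, fx, fy, cx, cy):
    """Convert a depth map (one depth value per pixel) into an N x 3 point set
    in the depth sensor's coordinate frame using a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                          # keep only pixels with a depth reading
    z = depth[valid]
    x = (u[valid] - cx) / fx * z
    y = (v[valid] - cy) / fy * z
    return np.stack([x, y, z], axis=1)

# Example: a synthetic 4 x 4 depth map of a flat surface 450 mm away
depth = np.full((4, 4), 450.0)
pts = depth_map_to_points(depth, fx=580.0, fy=580.0, cx=2.0, cy=2.0)

# Map the point set into the visible camera's frame with the calibrated
# rotation-translation Ms = [Rs | Ts]; identity values here are placeholders only.
Rs, Ts = np.eye(3), np.zeros(3)
pts_cam = pts @ Rs.T + Ts
print(pts_cam.shape)                           # (16, 3)
```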
Optionally, the projection device may be a digital projector, and the corresponding projected multi-line stripe image may be a digital simulated laser multi-line stripe image, which may be generated by a computer in the monocular three-dimensional scanning system and projected onto the measured object by the digital projector. Optionally, the projection device may also be a laser projection device, and the corresponding projected multi-line stripe image may be a laser multi-line stripe image, which may be projected directly onto the measured object by the laser projection device. It should be noted that the embodiments of the present invention are described by taking the case where the projection device is a digital projector and the projected multi-line stripe image is a digital multi-line stripe image as an example; however, the projection device is not limited to a digital projector, nor is the projected multi-line stripe image limited to a digital multi-line stripe image.
As an optional embodiment, before the depth map of the measured object is collected using the invisible structured light scanning module and converted into a three-dimensional data point set, the implementation may further include: calibrating the monocular three-dimensional scanning system to obtain structural parameters of the monocular three-dimensional scanning system.
Optionally, the invisible structured light scanning module may be an infrared structured light scanning module.
With the above embodiment of the present invention, the monocular three-dimensional scanning system can first be calibrated to obtain its structural parameters, so that accurate structural parameters are available from the calibration and three-dimensional points can be accurately reconstructed.
As an optional embodiment, calibrating the monocular three-dimensional scanning system to obtain its structural parameters may include: calibrating the camera to obtain the internal and external parameters of the camera; obtaining a rotation-translation matrix corresponding to the relative positional relationship between the invisible structured light scanning module and the camera; and calibrating the light plane equation corresponding to each stripe in the multi-line stripe image to obtain a plurality of calibrated light plane equations.
With the above embodiment of the present invention, in the process of calibrating the monocular three-dimensional scanning system, the internal and external parameters of the camera can be obtained by calibrating the camera; the rotation-translation matrix corresponding to the relative positional relationship between the invisible structured light scanning module and the camera can be obtained by calibrating that relative positional relationship; and a plurality of calibrated light plane equations can be obtained by calibrating the light plane equation corresponding to each stripe in the multi-line stripe image, so that three-dimensional points can be accurately reconstructed from the camera's internal and external parameters, the rotation-translation matrix and the light plane equations.
As an optional embodiment, determining the target light plane equation corresponding to the target three-dimensional point among the plurality of three-dimensional points may include: obtaining the Euclidean distances from the target three-dimensional point to the plurality of calibrated light plane equations, and determining the light plane equation with the shortest Euclidean distance among the plurality of calibrated light plane equations; and, when the Euclidean distance between the target three-dimensional point and the light plane equation with the shortest Euclidean distance is below a predetermined distance, determining the light plane equation with the shortest Euclidean distance as the target light plane equation.
With the above embodiment of the present invention, by obtaining the Euclidean distances from the target three-dimensional point to the plurality of calibrated light plane equations and determining the light plane equation with the shortest Euclidean distance among them, the light plane equation with the shortest Euclidean distance can be determined as the target light plane equation when that shortest distance is below a predetermined distance, so that three-dimensional points can be accurately reconstructed from the target light plane equation.
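As an illustration only, the selection of the target light plane described above can be sketched as a nearest point-to-plane distance test with a threshold; the plane coefficients and the 0.5 mm threshold below are example values, not a prescribed implementation.

```python
import numpy as np

def associate_light_plane(pt, planes, v_th=0.5):
    """Return the index of the calibrated light plane AX+BY+CZ+D=0 closest to the
    3D point pt, or None if the shortest point-to-plane distance exceeds v_th (mm)."""
    normals = planes[:, :3]
    dist = np.abs(normals @ pt + planes[:, 3]) / np.linalg.norm(normals, axis=1)
    n = int(np.argmin(dist))
    return n if dist[n] < v_th else None

# Example: two hypothetical calibrated light planes; the point lies 0.2 mm from plane 0
planes = np.array([[0.0, 0.0, 1.0, -450.0],
                   [0.0, 1.0, 0.0, -10.0]])
print(associate_light_plane(np.array([5.0, 40.0, 450.2]), planes))   # -> 0
```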
As an optional embodiment, projecting the target three-dimensional point onto the modulated multi-line stripe image and determining the target stripe in the modulated multi-line stripe image corresponding to the target light plane equation may include: judging whether a stripe line segment exists within a preset range of the projection point of the target three-dimensional point in the modulated multi-line stripe image, wherein the stripe line segment is a line segment formed by segmenting the centerline connected domains after centerline extraction is performed on the modulated multi-line stripe image; and, when a stripe line segment exists within the preset range of the projection point of the target three-dimensional point in the modulated multi-line stripe image, determining the stripe line segment as the target stripe corresponding to the target light plane equation.
With the above embodiment of the present invention, it is judged whether a stripe line segment, formed by segmenting the centerline connected domains after centerline extraction of the modulated multi-line stripe image, exists within the preset range of the projection point of the target three-dimensional point, and, when such a stripe line segment exists, it is determined as the target stripe corresponding to the target light plane equation. The target stripe corresponding to the target light plane equation can thus be identified among the stripe line segments, and the corresponding target stripe can be computed using the target light plane equation to accurately reconstruct the three-dimensional point.
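The following sketch shows one way to carry out the projection test described above: the target three-dimensional point is back-projected with the camera intrinsics, and an 8-neighborhood of the projection point is searched in a label image whose non-zero pixels mark the segmented stripe line segments. The label-image representation and all names are assumptions for illustration, not the patent's specified data structures.

```python
import numpy as np

def project_point(pt, fx, fy, cx, cy):
    """Project a 3D point in the camera frame to pixel coordinates (u, v)."""
    x, y, z = pt
    return fx * x / z + cx, fy * y / z + cy

def find_target_stripe(pt, label_img, fx, fy, cx, cy):
    """Return the id of a stripe line segment found in the 8-neighborhood of the
    back-projected point, or None. label_img holds 0 for background and a positive
    segment id on every centerline pixel."""
    u, v = (int(round(c)) for c in project_point(pt, fx, fy, cx, cy))
    h, w = label_img.shape
    for dv in (-1, 0, 1):
        for du in (-1, 0, 1):
            vv, uu = v + dv, u + du
            if 0 <= vv < h and 0 <= uu < w and label_img[vv, uu] > 0:
                return int(label_img[vv, uu])
    return None

# Example: a single labelled centerline column at u = 120
labels = np.zeros((240, 320), dtype=int)
labels[:, 120] = 7
print(find_target_stripe(np.array([0.0, 0.0, 500.0]), labels,
                         1000.0, 1000.0, 119.0, 120.0))   # -> 7
```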
As an optional embodiment, acquiring, according to the target light plane equation and the center coordinates of the target stripe, the three-dimensional point reconstructed by the target stripe in the camera coordinate system may include calculating the coordinates of the three-dimensional point according to the following equations: AXi+BYi+CZi+D=0; (u-cx)/fx=Xi/Zi; (v-cy)/fy=Yi/Zi; where (Xi, Yi, Zi) are the coordinates of the three-dimensional point, A, B, C, D are the coefficients of the target light plane equation, (u, v) are the center coordinates of the target stripe, (cx, cy) are the coordinates of the camera's principal point, and fx, fy are the equivalent focal lengths of the camera.
With the above embodiment of the present invention, from the coefficients of the target light plane equation, the center coordinates (u, v) of the target stripe, the principal point coordinates (cx, cy) of the camera and the equivalent focal lengths fx, fy of the camera, the coordinates (Xi, Yi, Zi) of the three-dimensional point can be obtained accurately, and the three-dimensional point can be accurately constructed.
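The three equations above determine the point in closed form: substituting Xi = Zi(u-cx)/fx and Yi = Zi(v-cy)/fy into the plane equation gives Zi = -D / (A(u-cx)/fx + B(v-cy)/fy + C). The sketch below is a direct, illustrative implementation of that substitution.

```python
import numpy as np

def reconstruct_point(u, v, plane, fx, fy, cx, cy):
    """Intersect the camera ray through stripe center (u, v) with the light plane
    AX+BY+CZ+D=0 and return the reconstructed 3D point (X, Y, Z)."""
    A, B, C, D = plane
    rx = (u - cx) / fx
    ry = (v - cy) / fy
    Z = -D / (A * rx + B * ry + C)
    return np.array([rx * Z, ry * Z, Z])

# Example: a stripe center at the principal point and a light plane Z = 450
print(reconstruct_point(640.0, 480.0, (0.0, 0.0, 1.0, -450.0),
                        1200.0, 1200.0, 640.0, 480.0))    # -> [0. 0. 450.]
```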
The present invention also provides a preferred embodiment, which provides a monocular multi-line three-dimensional scanning method combining structured light of different wavelength bands.
The invention mainly takes, as an example, the technical improvement obtained by combining a three-dimensional module in the invisible band (infrared structured light) with monocular visible-light multi-line stripes. The purpose of the invention is to use the three-dimensional data reconstructed by the infrared structured light three-dimensional module to guide the three-dimensional reconstruction of the monocular multi-line stripes; the key is that the three-dimensional reconstruction data of the infrared structured light three-dimensional module guides the accurate matching between the monocular multi-line stripes and the light plane equations, which improves the matching accuracy of the multiple stripes and increases the number of matched stripes, thereby improving the scanning efficiency of the handheld three-dimensional scanning system. For a camera with a resolution of 1.3 megapixels, 100 stripes can be achieved; at the same frame rate and camera resolution, the scanning efficiency is more than 10 times higher than in the prior art. At the same time, multi-stripe scanning can be stitched in real time according to features without using marker points.
The technical solution provided by the invention includes the following parts: device construction, system calibration, digital projection and image acquisition, determining the sequence numbers of the light plane equations associated with the point set PtS, and guiding the matching of corresponding stripes in the multi-line stripe image and the three-dimensional reconstruction.
Optionally, a three-dimensional digital imaging sensor composed of an infrared structured light three-dimensional scanning module, one camera and a digital projector may be constructed, with the relative positions between the device components fixed and the measured object placed within the measurement range.
Optionally, the system calibration part includes: calibrating the camera to obtain the camera's internal and external parameters, the intrinsic parameters A and the extrinsic parameters R and T, and at the same time calibrating the rotation-translation matrix Ms corresponding to the relative positional relationship between the infrared structured light three-dimensional scanning module and the camera.
Optionally, the system calibration part further includes: performing light plane calibration of the multi-line stripes, calibrating the light plane equation plane(i)={AXi+BYi+CZi+D=0} corresponding to each stripe, where (Xi, Yi, Zi) are the three-dimensional points of the stripe line segment reconstructed in the camera coordinate system.
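The patent does not spell out how each plane(i) is fitted; one common choice, shown below only as an assumed sketch, is a least-squares plane fit (via SVD of the centered coordinates) to the three-dimensional points reconstructed for stripe i during calibration.

```python
import numpy as np

def fit_light_plane(points):
    """Least-squares fit of a plane AX+BY+CZ+D=0 to an (N, 3) array of 3D points
    (N >= 3): the normal is the direction of smallest variance of the centered data."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return np.append(normal, -normal @ centroid)   # [A, B, C, D]

# Example: noisy samples of the plane Z = 450 (units of mm)
rng = np.random.default_rng(0)
samples = np.column_stack([rng.uniform(-50, 50, 200),
                           rng.uniform(-50, 50, 200),
                           450.0 + rng.normal(0.0, 0.05, 200)])
print(np.round(fit_light_plane(samples), 3))       # close to [0, 0, 1, -450] up to sign
```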
FIG. 2 is a schematic diagram of an optional demodulated multi-line stripe pattern according to an embodiment of the present invention. As shown in FIG. 2, a digital multi-line stripe pattern with more than 15 stripes (the number of stripes can reach 100 or more) is generated by a computer and projected onto the measured object by the digital projector; the digital laser pattern is deformed by the height modulation of the object, producing a modulated digital multi-line stripe pattern, and the camera synchronously captures the modulated multi-line stripe pattern.
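A digital multi-line stripe pattern of this kind can be generated with a few lines of code; the sketch below produces evenly spaced vertical stripes and is only an illustration, since the patent does not fix the stripe spacing, width or projector resolution.

```python
import numpy as np

def make_multiline_pattern(width=1280, height=800, n_stripes=100, half_width=1):
    """Generate a binary multi-line stripe pattern: n_stripes evenly spaced vertical
    bright lines, each 2*half_width + 1 pixels wide, on a black background."""
    img = np.zeros((height, width), dtype=np.uint8)
    centers = np.linspace(half_width, width - 1 - half_width, n_stripes).astype(int)
    for c in centers:
        img[:, c - half_width:c + half_width + 1] = 255
    return img

pattern = make_multiline_pattern()
print(pattern.shape, int((pattern[0] > 0).sum()))   # pattern size and lit pixels per row
```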
Optionally, the sequence numbers of the light plane equations associated with the point set PtS may be determined. After the three-dimensional data PtS of the infrared structured light three-dimensional scanning module is obtained, the Euclidean distance from each three-dimensional point pt(i) of the point set PtS (the target three-dimensional point) to each light plane equation is calculated in turn, with the distance threshold vTH set to 0.5 mm. Assuming that the distance from pt(i) to the n-th light plane equation is the shortest and within the vTH threshold, the n-th light plane equation corresponding to the three-dimensional point pt(i) is retained and recorded. If pt(i) cannot be associated with any light plane equation, the point is deleted. Each three-dimensional point of the resulting point set PtS then corresponds to a light plane equation.
FIG. 3 is a schematic diagram of optional stripe line segment segmentation and back-projection of the three-dimensional module point cloud according to an embodiment of the present invention. As shown in FIG. 3, centerline extraction is performed on the modulated multi-line stripe pattern, and the connected domain of each centerline is then segmented to form a plurality of independent line segments. Then each pt(i) in the three-dimensional data PtS of the infrared structured light three-dimensional module is projected in turn onto the demodulated multi-line stripe image according to the camera's calibrated intrinsic parameters; if an independent multi-line stripe line segment exists within the eight-neighborhood of the projection point of pt(i), that independent line segment (the target stripe) is determined to correspond to the n-th light plane equation (the target light plane equation).
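For illustration, the centerline extraction and connected-domain segmentation can be sketched as a per-row grey-centroid extraction for near-vertical stripes followed by a simple row-to-row linking of centerline pixels; the threshold and the linking rule are assumptions, not the patent's prescribed algorithm.

```python
import numpy as np

def extract_centerline(img, thresh=128):
    """Per-row grey-centroid centerline extraction for near-vertical stripes.
    Returns a list of (row, column) centerline pixels with sub-pixel columns."""
    centers = []
    for r, row in enumerate(img):
        cols = np.flatnonzero(row >= thresh)
        if cols.size == 0:
            continue
        # split the bright pixels of this row into runs, one run per stripe
        for run in np.split(cols, np.flatnonzero(np.diff(cols) > 1) + 1):
            w = row[run].astype(float)
            centers.append((r, float((run * w).sum() / w.sum())))
    return centers

def segment_centerline(centers, max_gap=1.5):
    """Group centerline pixels into independent line segments: a pixel joins a segment
    whose last pixel lies on the previous row within max_gap columns."""
    segments = []
    for r, c in centers:
        for seg in segments:
            last_r, last_c = seg[-1]
            if r - last_r == 1 and abs(c - last_c) <= max_gap:
                seg.append((r, c))
                break
        else:
            segments.append([(r, c)])
    return segments

# Example: two short vertical stripes in a synthetic image
img = np.zeros((5, 20), dtype=np.uint8)
img[:, 4] = img[:, 5] = 200
img[:, 12] = 200
segments = segment_centerline(extract_centerline(img))
print(len(segments), [len(s) for s in segments])    # 2 segments, 5 centerline pixels each
```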
Optionally, the three-dimensional point (Xi, Yi, Zi) of the multi-line stripe (the three-dimensional point reconstructed in the camera coordinate system) can be obtained by jointly solving the following three equations, in which the known quantities are: (A, B, C, D), the coefficients of the light plane equation; (u, v), the stripe center coordinates; (cx, cy), the camera principal point coordinates; and fx, fy, the equivalent focal lengths: AXi+BYi+CZi+D=0; (u-cx)/fx=Xi/Zi; (v-cy)/fy=Yi/Zi.
FIG. 4 is a schematic structural diagram of an optional handheld three-dimensional scanning system combining an infrared structured light three-dimensional module with monocular multi-line stripes according to an embodiment of the present invention. As shown in FIG. 4, the system includes: a digital projector 101, a camera 102, an infrared structured light three-dimensional module 103, a computer 104, and a measured sample 105.
Optionally, the internal parameters of the camera are:
Figure PCTCN2017107506-appb-000001
Optionally, the external parameters of the camera are:
Figure PCTCN2017107506-appb-000002
T=[-1.77 -5.5 450].
Optionally, the internal parameters of the infrared structured light three-dimensional module are:
Figure PCTCN2017107506-appb-000003
Optionally, the system structural parameters between the infrared structured light three-dimensional module and the camera are:
Figure PCTCN2017107506-appb-000004
Ts=[91.3387 28.1183 1.7905].
According to the above parts, with the light plane equations plane(i)={AXi+BYi+CZi+D=0} of the multi-line stripes calibrated in advance, the DLP projects visible-band multi-line stripes onto the measured sample, synchronously triggering the infrared structured light three-dimensional module to collect the three-dimensional data of the measured object and the camera to capture the multi-line stripes; centerline extraction and connected-domain segmentation are performed on the captured demodulated multi-line stripe image. The distances from the three-dimensional data of the infrared three-dimensional module to the light plane equations are calculated, and the sequence numbers of the light planes within the distance threshold are retained and recorded. The three-dimensional data are back-projected onto the camera image plane, and if a back-projected point intersects a multi-line stripe line segment, the light plane equation corresponding to that line segment is determined. The three-dimensional data of the multi-line stripes are then solved from the multi-line stripe image coordinates and the corresponding light plane equations according to the calibrated camera parameters.
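Putting the pieces together, one frame of the flow described above could be organized as in the sketch below, which reuses the illustrative helpers sketched earlier in this description (depth_map_to_points, associate_light_plane, find_target_stripe, reconstruct_point, extract_centerline, segment_centerline); the data layout and all names are assumptions rather than the patent's specified interfaces.

```python
import numpy as np

def reconstruct_frame(depth, stripe_img, planes, ir_cam, cam, Rs, Ts, v_th=0.5):
    """One scan frame: IR depth map + visible multi-line stripe image -> stripe 3D points.
    ir_cam and cam are (fx, fy, cx, cy) of the IR module and of the visible camera;
    Rs, Ts map module points into the camera frame; planes is (N, 4) of [A, B, C, D]."""
    # 1. Depth map -> guiding point set PtS in the visible camera's frame.
    guide = depth_map_to_points(depth, *ir_cam) @ Rs.T + Ts
    # 2. Centerline extraction and connected-domain segmentation of the stripe image.
    segments = segment_centerline(extract_centerline(stripe_img))
    label_img = np.zeros(stripe_img.shape, dtype=int)
    for i, seg in enumerate(segments, start=1):
        for r, c in seg:
            label_img[r, int(round(c))] = i
    # 3. Associate each guiding point with its nearest light plane, back-project it,
    #    and record which stripe segment that light plane explains.
    fx, fy, cx, cy = cam
    segment_plane = {}
    for pt in guide:
        n = associate_light_plane(pt, planes, v_th)
        if n is None:
            continue
        label = find_target_stripe(pt, label_img, fx, fy, cx, cy)
        if label is not None:
            segment_plane[label] = n
    # 4. Reconstruct every centerline pixel of every matched segment.
    points = [reconstruct_point(c, r, planes[segment_plane[i]], fx, fy, cx, cy)
              for i, seg in enumerate(segments, start=1) if i in segment_plane
              for r, c in seg]
    return np.array(points)
```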
With the technical solution provided by the invention, the three-dimensional reconstruction data of the invisible structured light band can be used to guide the monocular three-dimensional reconstruction of the visible-band structured light; accurate matching between the visible-light monocular multi-line stripes and the corresponding light plane equations can be achieved; the light plane equations can be determined using the three-dimensional reconstruction data of the invisible-light three-dimensional module; and monocular multi-stripe scanning can be stitched in real time according to features without using marker points.
The technical solution provided by the invention can simplify the difficulty of matching the monocular multi-stripes with the corresponding light plane equations and improve the matching accuracy. At the same time, the limitation on the number of projected stripes in the prior art is removed, and the scanning rate can be increased more than tenfold under the same conditions. The monocular three-dimensional reconstruction approach solves the occlusion problem of binocular stereo vision, and collaborative scanning with structured light of different wavelength bands is achieved.
According to another aspect of the present invention, an embodiment of the present invention further provides a storage medium. The storage medium includes a stored program, wherein, when the program runs, a device where the storage medium is located is controlled to perform the above three-dimensional reconstruction method based on a monocular three-dimensional scanning system.
According to another aspect of the present invention, an embodiment of the present invention further provides a processor. The processor is configured to run a program, wherein the above three-dimensional reconstruction method based on a monocular three-dimensional scanning system is performed when the program runs.
According to an embodiment of the present invention, an embodiment of a three-dimensional reconstruction device based on a monocular three-dimensional scanning system is further provided. It should be noted that this three-dimensional reconstruction device based on a monocular three-dimensional scanning system is configured to perform the three-dimensional reconstruction method based on a monocular three-dimensional scanning system in the embodiments of the present invention, and the three-dimensional reconstruction method based on a monocular three-dimensional scanning system in the embodiments of the present invention can be executed in this device.
The monocular three-dimensional scanning system in the three-dimensional reconstruction device based on the monocular three-dimensional scanning system of the embodiment of the present invention may include an invisible structured light scanning module, a camera and a projection device. FIG. 5 is a schematic diagram of an optional three-dimensional reconstruction device based on a monocular three-dimensional scanning system according to an embodiment of the present invention. As shown in FIG. 5, the device may include:
a collecting unit 61 configured to collect a depth map of the measured object using the invisible structured light scanning module and convert the depth map into a three-dimensional data point set, wherein the three-dimensional data point set includes a plurality of three-dimensional points; a determining unit 63 configured to determine a target light plane equation corresponding to a target three-dimensional point among the plurality of three-dimensional points; a projection unit 65 configured to project the target three-dimensional point onto a modulated multi-line stripe image and determine a target stripe in the modulated multi-line stripe image corresponding to the target light plane equation, wherein the modulated multi-line stripe image is an image captured by the camera after a multi-line stripe image is projected onto the measured object by the projection device; and an obtaining unit 67 configured to acquire, according to the target light plane equation and the center coordinates of the target stripe, the three-dimensional point reconstructed by the target stripe in the camera coordinate system.
It should be noted that the collecting unit 61 in this embodiment is configured to perform step S102 in the embodiments of the present application, the determining unit 63 is configured to perform step S104, the projection unit 65 is configured to perform step S106, and the obtaining unit 67 is configured to perform step S108. The above modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the contents disclosed in the above embodiments.
With the above embodiment of the present invention, the target light plane equation corresponding to a target three-dimensional point in the three-dimensional data point set converted from the depth map can be determined according to the depth map of the measured object collected by the invisible structured light scanning module; the target stripe corresponding to the target light plane equation is then determined in the modulated multi-line stripe image captured by the single camera; and the three-dimensional point reconstructed by the target stripe in the camera coordinate system is then obtained according to the target light plane equation and the center coordinates of the target stripe. This achieves accurate reconstruction of three-dimensional points and completion of three-dimensional scanning using a monocular three-dimensional scanning system, avoiding the visual discontinuity caused by binocular stereo vision in a binocular three-dimensional scanning system when the surface of the measured object is stepped, as well as the occlusion of part of the measured object that prevents the two cameras of a binocular scanning system from capturing images of the occluded part and reconstructing it in three dimensions, and solves the technical problem that occlusion exists in binocular stereo vision three-dimensional reconstruction.
As an optional embodiment, the device further includes: a calibration module configured to calibrate the monocular three-dimensional scanning system and obtain structural parameters of the monocular three-dimensional scanning system before the depth map of the measured object is collected using the invisible structured light scanning module and converted into a three-dimensional data point set.
As an optional embodiment, the calibration module includes: a first calibration submodule configured to calibrate the camera and obtain the internal and external parameters of the camera; a first obtaining module configured to obtain a rotation-translation matrix corresponding to the relative positional relationship between the invisible structured light scanning module and the camera; and a second calibration submodule configured to calibrate the light plane equation corresponding to each stripe in the multi-line stripe image and obtain a plurality of calibrated light plane equations.
As an optional embodiment, the determining unit includes: a second obtaining module configured to obtain the Euclidean distances from the target three-dimensional point to the plurality of calibrated light plane equations and determine the light plane equation with the shortest Euclidean distance among the plurality of calibrated light plane equations; and a first determining module configured to determine the light plane equation with the shortest Euclidean distance as the target light plane equation when the Euclidean distance between the target three-dimensional point and that light plane equation is below a predetermined distance.
As an optional embodiment, the projection unit includes: a judging module configured to judge whether a stripe line segment exists within a preset range of the projection point of the target three-dimensional point in the modulated multi-line stripe image, wherein the stripe line segment is a line segment formed by segmenting the centerline connected domains after centerline extraction is performed on the modulated multi-line stripe image; and a second determining module configured to determine the stripe line segment as the target stripe corresponding to the target light plane equation when a stripe line segment exists within the preset range of the projection point of the target three-dimensional point in the modulated multi-line stripe image.
As an optional embodiment, the obtaining unit includes: a calculating module configured to calculate the coordinates of the three-dimensional point according to the following equations: AXi+BYi+CZi+D=0; (u-cx)/fx=Xi/Zi; (v-cy)/fy=Yi/Zi; where (Xi, Yi, Zi) are the coordinates of the three-dimensional point, A, B, C, D are the coefficients of the target light plane equation, (u, v) are the center coordinates of the target stripe, (cx, cy) are the principal point coordinates of the camera, and fx, fy are the equivalent focal lengths of the camera.
The serial numbers of the above embodiments of the present invention are merely for description and do not represent the superiority or inferiority of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical contents may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units may be a division by logical function, and there may be other ways of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disc.
The above are only preferred implementations of the present invention. It should be pointed out that those of ordinary skill in the art can also make several improvements and refinements without departing from the principle of the present invention, and these improvements and refinements should also be regarded as falling within the protection scope of the present invention.
Industrial Applicability
As described above, the three-dimensional reconstruction method and device based on a monocular three-dimensional scanning system provided by the embodiments of the present invention have the following beneficial effects: three-dimensional points are accurately reconstructed using a monocular three-dimensional scanning system to complete three-dimensional scanning, thereby avoiding the occlusion that exists in binocular stereo vision three-dimensional reconstruction.

Claims (10)

  1. A three-dimensional reconstruction method based on a monocular three-dimensional scanning system, the monocular three-dimensional scanning system comprising an invisible structured light scanning module, a camera and a projection device, wherein the method comprises:
    collecting a depth map of a measured object using the invisible structured light scanning module, and converting the depth map into a three-dimensional data point set, wherein the three-dimensional data point set comprises a plurality of three-dimensional points;
    determining a target light plane equation corresponding to a target three-dimensional point among the plurality of three-dimensional points;
    projecting the target three-dimensional point onto a modulated multi-line stripe image, and determining a target stripe in the modulated multi-line stripe image corresponding to the target light plane equation, wherein the modulated multi-line stripe image is an image captured by the camera after a multi-line stripe image is projected onto the measured object by the projection device;
    acquiring, according to the target light plane equation and the center coordinates of the target stripe, a three-dimensional point reconstructed by the target stripe in the camera coordinate system.
  2. The method according to claim 1, wherein, before the depth map of the measured object is collected using the invisible structured light scanning module and converted into the three-dimensional data point set, the method further comprises:
    calibrating the monocular three-dimensional scanning system to obtain structural parameters of the monocular three-dimensional scanning system.
  3. The method according to claim 2, wherein calibrating the monocular three-dimensional scanning system to obtain the structural parameters of the monocular three-dimensional scanning system comprises:
    calibrating the camera to obtain internal and external parameters of the camera;
    obtaining a rotation-translation matrix corresponding to a relative positional relationship between the invisible structured light scanning module and the camera;
    calibrating a light plane equation corresponding to each stripe in the multi-line stripe image to obtain a plurality of calibrated light plane equations.
  4. The method according to claim 3, wherein determining the target light plane equation corresponding to the target three-dimensional point among the plurality of three-dimensional points comprises:
    obtaining Euclidean distances from the target three-dimensional point to the plurality of calibrated light plane equations, and determining, among the plurality of calibrated light plane equations, the light plane equation with the shortest Euclidean distance;
    when the Euclidean distance between the target three-dimensional point and the light plane equation with the shortest Euclidean distance is below a predetermined distance, determining the light plane equation with the shortest Euclidean distance as the target light plane equation.
  5. The method according to claim 1, wherein projecting the target three-dimensional point onto the modulated multi-line stripe image and determining the target stripe in the modulated multi-line stripe image corresponding to the target light plane equation comprises:
    judging whether a stripe line segment exists within a preset range of a projection point of the target three-dimensional point in the modulated multi-line stripe image, wherein the stripe line segment is a line segment formed by segmenting centerline connected domains after centerline extraction is performed on the modulated multi-line stripe image;
    when a stripe line segment exists within the preset range of the projection point of the target three-dimensional point in the modulated multi-line stripe image, determining the stripe line segment as the target stripe corresponding to the target light plane equation.
  6. The method according to claim 1, wherein acquiring, according to the target light plane equation and the center coordinates of the target stripe, the three-dimensional point reconstructed by the target stripe in the camera coordinate system comprises:
    calculating the coordinates of the three-dimensional point according to the following equations:
    AXi+BYi+CZi+D=0
    (u-cx)/fx=Xi/Zi
    (v-cy)/fy=Yi/Zi
    wherein (Xi, Yi, Zi) are the coordinates of the three-dimensional point, A, B, C, D are coefficients of the target light plane equation, (u, v) are the center coordinates of the target stripe, (cx, cy) are principal point coordinates of the camera, and fx, fy are equivalent focal lengths of the camera.
  7. A three-dimensional reconstruction device based on a monocular three-dimensional scanning system, the monocular three-dimensional scanning system comprising an invisible structured light scanning module, a camera and a projection device, wherein the device comprises:
    a collecting unit configured to collect a depth map of a measured object using the invisible structured light scanning module, and convert the depth map into a three-dimensional data point set, wherein the three-dimensional data point set comprises a plurality of three-dimensional points;
    a determining unit configured to determine a target light plane equation corresponding to a target three-dimensional point among the plurality of three-dimensional points;
    a projection unit configured to project the target three-dimensional point onto a modulated multi-line stripe image, and determine a target stripe in the modulated multi-line stripe image corresponding to the target light plane equation, wherein the modulated multi-line stripe image is an image captured by the camera after a multi-line stripe image is projected onto the measured object by the projection device;
    an obtaining unit configured to acquire, according to the target light plane equation and the center coordinates of the target stripe, a three-dimensional point reconstructed by the target stripe in the camera coordinate system.
  8. The device according to claim 7, wherein the device further comprises:
    a calibration module configured to calibrate the monocular three-dimensional scanning system and obtain structural parameters of the monocular three-dimensional scanning system before the depth map of the measured object is collected using the invisible structured light scanning module and converted into the three-dimensional data point set.
  9. A storage medium, comprising a stored program, wherein, when the program runs, a device where the storage medium is located is controlled to perform the method according to any one of claims 1 to 6.
  10. A processor, configured to run a program, wherein the method according to any one of claims 1 to 6 is performed when the program runs.
PCT/CN2017/107506 2017-07-17 2017-10-24 Three-dimensional reconstruction method and device based on a monocular three-dimensional scanning system WO2019015154A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP17899221.0A EP3457078B1 (en) 2017-07-17 2017-10-24 Monocular three-dimensional scanning system based three-dimensional reconstruction method and apparatus
US16/081,958 US10783651B2 (en) 2017-07-17 2017-10-24 Three-dimensional reconstruction method and device based on monocular three-dimensional scanning system
JP2018560102A JP6564537B1 (ja) 2017-07-17 2017-10-24 単眼3次元走査システムによる3次元再構成法および装置
CA3022442A CA3022442C (en) 2017-10-24 2017-10-24 Three-dimensional reconstruction method and device based on monocular three-dimensional scanning system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710581213.1A CN108269279B (zh) 2017-07-17 2017-07-17 基于单目三维扫描系统的三维重构方法和装置
CN201710581213.1 2017-07-17

Publications (1)

Publication Number Publication Date
WO2019015154A1 (zh)

Family

ID=62770883

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/107506 WO2019015154A1 (zh) 2017-07-17 2017-10-24 基于单目三维扫描系统的三维重构方法和装置

Country Status (5)

Country Link
US (1) US10783651B2 (zh)
EP (1) EP3457078B1 (zh)
JP (1) JP6564537B1 (zh)
CN (1) CN108269279B (zh)
WO (1) WO2019015154A1 (zh)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599569A (zh) * 2019-09-16 2019-12-20 上海市刑事科学技术研究院 一种建筑物内部二维平面图的生成方法、存储设备及终端
CN110764841A (zh) * 2019-10-10 2020-02-07 珠海格力智能装备有限公司 3d视觉应用开发平台和开发方法
CN113066117A (zh) * 2019-12-13 2021-07-02 顺丰科技有限公司 箱体体积测量方法、装置、计算机设备和存储介质
CN113706692A (zh) * 2021-08-25 2021-11-26 北京百度网讯科技有限公司 三维图像重构方法、装置、电子设备以及存储介质
CN113776785A (zh) * 2021-09-14 2021-12-10 中国石油大学(华东) 一种单目立体视觉系统三维光路分析方法
CN113983933A (zh) * 2021-11-11 2022-01-28 易思维(杭州)科技有限公司 一种多线激光传感器的标定方法
CN114565714A (zh) * 2022-02-11 2022-05-31 山西支点科技有限公司 一种单目视觉传感器混合式高精度三维结构恢复方法
CN114719775A (zh) * 2022-04-06 2022-07-08 新拓三维技术(深圳)有限公司 一种运载火箭舱段自动化形貌重建方法及系统
CN115082815A (zh) * 2022-07-22 2022-09-20 山东大学 基于机器视觉的茶芽采摘点定位方法、装置及采摘系统
US20220364853A1 (en) * 2019-10-24 2022-11-17 Shining 3D Tech Co., Ltd. Three-Dimensional Scanner and Three-Dimensional Scanning Method

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109186492B (zh) * 2018-08-14 2020-04-24 博众精工科技股份有限公司 基于单相机的三维重建方法、装置及系统
CN110443888B (zh) * 2019-08-15 2021-03-26 华南理工大学 一种形成多次反射成像的结构光三维重建装置及方法
CN110634180B (zh) * 2019-08-16 2024-02-02 河南三维泰科电子科技有限公司 一种基于相移轮廓术的多运动物体三维重构方法
TWI722703B (zh) * 2019-12-09 2021-03-21 財團法人工業技術研究院 投影設備與投影校正方法
CN112147625B (zh) * 2020-09-22 2024-03-01 深圳市道通科技股份有限公司 一种标定方法、装置、单目激光测量设备及标定系统
CN112179292B (zh) * 2020-11-20 2022-07-08 苏州睿牛机器人技术有限公司 一种基于投影仪的线结构光视觉传感器标定方法
CN112634377A (zh) * 2020-12-28 2021-04-09 深圳市杉川机器人有限公司 扫地机器人的相机标定方法、终端和计算机可读存储介质
CN114681089B (zh) * 2020-12-31 2023-06-06 先临三维科技股份有限公司 三维扫描装置和方法
CN112833816A (zh) * 2020-12-31 2021-05-25 武汉中观自动化科技有限公司 一种标志点定位与智能反向定位混合的定位方法和系统
CN113077503B (zh) * 2021-03-24 2023-02-07 浙江合众新能源汽车有限公司 盲区视频数据生成方法、系统、设备和计算机可读介质
CN113034676A (zh) * 2021-03-29 2021-06-25 黑芝麻智能科技(上海)有限公司 三维点云图的生成方法、装置、计算机设备和存储介质
CN113012236B (zh) * 2021-03-31 2022-06-07 武汉理工大学 一种基于交叉式双目视觉引导的机器人智能打磨方法
CN112967348A (zh) * 2021-04-01 2021-06-15 深圳大学 基于一维扫描结构光系统的三维重建方法及其相关组件
CN113129357B (zh) * 2021-05-10 2022-09-30 合肥工业大学 一种复杂背景下三维扫描测量光条中心提取方法
CN114264253B (zh) * 2021-12-09 2023-08-11 北京科技大学 高温物体三维轮廓非接触测量装置及其测量方法
CN114777671A (zh) * 2022-04-25 2022-07-22 武汉中观自动化科技有限公司 工件模型处理方法、服务器、前端设备及三维扫描系统
CN114798360A (zh) * 2022-06-29 2022-07-29 深圳市欧米加智能科技有限公司 Pcb板点胶的实时检测方法及相关装置
CN115345994A (zh) * 2022-08-10 2022-11-15 先临三维科技股份有限公司 三维重建方法及装置、系统
CN115127493B (zh) * 2022-09-01 2023-02-10 广东三姆森科技股份有限公司 一种用于产品测量的坐标标定方法及装置
CN115289974B (zh) * 2022-10-09 2023-01-31 思看科技(杭州)股份有限公司 孔位测量方法、装置、计算机设备和存储介质
CN115657061B (zh) * 2022-12-13 2023-04-07 成都量芯集成科技有限公司 一种室内墙面三维扫描装置及方法
CN115984512B (zh) * 2023-03-22 2023-06-13 成都量芯集成科技有限公司 一种平面场景三维重建装置及方法
CN115994954B (zh) * 2023-03-22 2023-06-27 浙江伽奈维医疗科技有限公司 一种高精度大视野近红外光学相机标定装置及标定方法
CN116664408B (zh) * 2023-07-31 2023-10-13 北京朗视仪器股份有限公司 一种彩色结构光的点云上采样方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030067461A1 (en) * 2001-09-24 2003-04-10 Fletcher G. Yates Methods, apparatus and computer program products that reconstruct surfaces from data point sets
CN102999939A (zh) * 2012-09-21 2013-03-27 魏益群 坐标获取装置、实时三维重建系统和方法、立体交互设备
CN106568394A (zh) * 2015-10-09 2017-04-19 西安知象光电科技有限公司 一种手持式三维实时扫描方法
CN106802138A (zh) * 2017-02-24 2017-06-06 杭州先临三维科技股份有限公司 一种三维扫描系统及其扫描方法

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7440590B1 (en) * 2002-05-21 2008-10-21 University Of Kentucky Research Foundation System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns
US7876455B2 (en) 2004-08-03 2011-01-25 TechnoDream21 Co., Ltd. Three-dimensional shape measuring method and apparatus for the same
US7864309B2 (en) * 2007-05-04 2011-01-04 Burke E. Porter Machinery Company Non contact wheel alignment sensor and method
EP2249286A1 (en) * 2009-05-08 2010-11-10 Honda Research Institute Europe GmbH Robot with vision-based 3D shape recognition
US8243289B2 (en) * 2009-05-29 2012-08-14 Perceptron, Inc. System and method for dynamic windowing
CN101697233B (zh) * 2009-10-16 2012-06-06 长春理工大学 一种基于结构光的三维物体表面重建方法
US9251590B2 (en) * 2013-01-24 2016-02-02 Microsoft Technology Licensing, Llc Camera pose estimation for 3D reconstruction
US9083960B2 (en) * 2013-01-30 2015-07-14 Qualcomm Incorporated Real-time 3D reconstruction with power efficient depth sensor usage
WO2015118467A1 (en) * 2014-02-05 2015-08-13 Creaform Inc. Structured light matching of a set of curves from two cameras
KR20170058365A (ko) * 2014-09-16 2017-05-26 케어스트림 헬스 인코포레이티드 레이저 투사를 사용한 치아 표면 이미징 장치
CN106091984B (zh) * 2016-06-06 2019-01-25 中国人民解放军信息工程大学 一种基于线激光的三维点云数据获取方法
EP3258211B1 (en) * 2016-06-17 2023-12-13 Hexagon Technology Center GmbH Determining object reflection properties with respect to particular optical measurement
JP6691837B2 (ja) * 2016-06-27 2020-05-13 株式会社キーエンス 測定装置
TWI610059B (zh) * 2016-08-04 2018-01-01 緯創資通股份有限公司 三維量測方法及應用其之三維量測裝置
CN106524917B (zh) 2016-12-09 2019-01-11 北京科技大学 一种运输带上物体体积测量方法

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599569A (zh) * 2019-09-16 2019-12-20 上海市刑事科学技术研究院 一种建筑物内部二维平面图的生成方法、存储设备及终端
CN110764841A (zh) * 2019-10-10 2020-02-07 珠海格力智能装备有限公司 3d视觉应用开发平台和开发方法
CN110764841B (zh) * 2019-10-10 2024-01-19 珠海格力智能装备有限公司 3d视觉应用开发平台和开发方法
US20220364853A1 (en) * 2019-10-24 2022-11-17 Shining 3D Tech Co., Ltd. Three-Dimensional Scanner and Three-Dimensional Scanning Method
CN113066117A (zh) * 2019-12-13 2021-07-02 顺丰科技有限公司 箱体体积测量方法、装置、计算机设备和存储介质
CN113706692A (zh) * 2021-08-25 2021-11-26 北京百度网讯科技有限公司 三维图像重构方法、装置、电子设备以及存储介质
CN113706692B (zh) * 2021-08-25 2023-10-24 北京百度网讯科技有限公司 三维图像重构方法、装置、电子设备以及存储介质
CN113776785A (zh) * 2021-09-14 2021-12-10 中国石油大学(华东) 一种单目立体视觉系统三维光路分析方法
CN113776785B (zh) * 2021-09-14 2024-01-30 中国石油大学(华东) 一种单目立体视觉系统三维光路分析方法
CN113983933B (zh) * 2021-11-11 2022-04-19 易思维(杭州)科技有限公司 一种多线激光传感器的标定方法
CN113983933A (zh) * 2021-11-11 2022-01-28 易思维(杭州)科技有限公司 一种多线激光传感器的标定方法
CN114565714A (zh) * 2022-02-11 2022-05-31 山西支点科技有限公司 一种单目视觉传感器混合式高精度三维结构恢复方法
CN114719775B (zh) * 2022-04-06 2023-08-29 新拓三维技术(深圳)有限公司 一种运载火箭舱段自动化形貌重建方法及系统
CN114719775A (zh) * 2022-04-06 2022-07-08 新拓三维技术(深圳)有限公司 一种运载火箭舱段自动化形貌重建方法及系统
CN115082815A (zh) * 2022-07-22 2022-09-20 山东大学 基于机器视觉的茶芽采摘点定位方法、装置及采摘系统

Also Published As

Publication number Publication date
JP6564537B1 (ja) 2019-08-21
EP3457078B1 (en) 2020-06-17
EP3457078A1 (en) 2019-03-20
EP3457078A4 (en) 2019-05-22
US20190392598A1 (en) 2019-12-26
JP2019526033A (ja) 2019-09-12
CN108269279B (zh) 2019-11-08
US10783651B2 (en) 2020-09-22
CN108269279A (zh) 2018-07-10

Similar Documents

Publication Publication Date Title
WO2019015154A1 (zh) 基于单目三维扫描系统的三维重构方法和装置
CA3022442C (en) Three-dimensional reconstruction method and device based on monocular three-dimensional scanning system
EP3444560B1 (en) Three-dimensional scanning system and scanning method thereof
CN108151671B (zh) 一种三维数字成像传感器、三维扫描系统及其扫描方法
JP5583761B2 (ja) 動的基準フレームを用いた3次元表面検出方法及び装置
EP2870428B1 (en) System and method for 3d measurement of the surface geometry of an object
US7978892B2 (en) 3D photogrammetry using projected patterns
CN108267097B (zh) 基于双目三维扫描系统的三维重构方法和装置
JP3624353B2 (ja) 3次元形状計測方法およびその装置
KR101706093B1 (ko) 3차원 좌표 추출 시스템 및 그 방법
KR102424135B1 (ko) 2개의 카메라로부터의 곡선의 세트의 구조형 광 매칭
TW201922163A (zh) 用於分析皮膚狀況的系統和方法
CN103959012A (zh) 6自由度位置和取向确定
CA2577840A1 (en) A method for automated 3d imaging
WO2020063987A1 (zh) 三维扫描方法、装置、存储介质和处理器
Reichinger et al. Evaluation of methods for optical 3-D scanning of human pinnas
JP6580761B1 (ja) 偏光ステレオカメラによる深度取得装置及びその方法
CN111047678B (zh) 一种三维人脸采集装置和方法
Win Curve and Circle Fitting of 3D Data Acquired by RGB-D Sensor

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2017899221

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2017899221

Country of ref document: EP

Effective date: 20180910

ENP Entry into the national phase

Ref document number: 2018560102

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2017899221

Country of ref document: EP

Effective date: 20180912

NENP Non-entry into the national phase

Ref country code: DE