WO2021208442A1 - Three-dimensional scene reconstruction system, method, device, and storage medium - Google Patents

Three-dimensional scene reconstruction system, method, device, and storage medium

Info

Publication number
WO2021208442A1
WO2021208442A1 (PCT/CN2020/131095)
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cloud data
target
dimensional
acquisition device
Prior art date
Application number
PCT/CN2020/131095
Other languages
English (en)
French (fr)
Inventor
欧清扬
Original Assignee
广东博智林机器人有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东博智林机器人有限公司 filed Critical 广东博智林机器人有限公司
Publication of WO2021208442A1 publication Critical patent/WO2021208442A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering

Definitions

  • the embodiments of the present invention relate to the technical field of surveying, mapping and measurement, and in particular to a system, method, device, and storage medium for reconstructing a three-dimensional scene.
  • the existing indoor three-dimensional reconstruction methods mainly include two kinds: the first is to use laser, radar, and other ranging sensors to obtain structural information of object surfaces so as to realize three-dimensional reconstruction.
  • the second is to collect indoor point cloud data through a depth camera, and to splice point clouds through feature recognition to achieve 3D reconstruction.
  • the embodiment of the invention discloses a three-dimensional scene reconstruction system, method, equipment and storage medium, which realizes the high-precision three-dimensional reconstruction of the three-dimensional scene.
  • an embodiment of the present invention provides a system for reconstructing a three-dimensional scene, the system including:
  • a target, set at a set position on the three-dimensional target to be measured;
  • a point cloud data acquisition device configured to acquire point cloud data of a set frame of the three-dimensional target to be measured on which the target is set;
  • a space coordinate acquisition device which is used to acquire the first space coordinates of the preset points of each of the targets;
  • a scene reconstruction device, configured to receive the point cloud data of the set frames and each of the first spatial coordinates, and to perform three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates.
  • an embodiment of the present invention also provides a method for reconstructing a three-dimensional scene.
  • the method includes: acquiring, based on a point cloud data acquisition device, the point cloud data of the set frames of the three-dimensional target to be measured on which targets are set; acquiring, based on a spatial coordinate device, the first spatial coordinates of the preset point of each target; and performing three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates.
  • an embodiment of the present invention also provides a device for reconstructing a three-dimensional scene, the device including:
  • one or more processors;
  • a memory, used to store one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the method for reconstructing a three-dimensional scene provided by any embodiment of the present invention.
  • embodiments of the present invention also provide a storage medium containing computer-executable instructions, which are used to execute the method for reconstructing a three-dimensional scene provided by any embodiment of the present invention when the computer-executable instructions are executed by a computer processor.
  • FIG. 1A is a schematic structural diagram of a system for reconstructing a three-dimensional scene in Embodiment 1 of the present invention
  • Fig. 1B is a schematic structural diagram of a target in the first embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of a system for reconstructing a three-dimensional scene in Embodiment 2 of the present invention
  • FIG. 3 is a flowchart of a method for reconstructing a three-dimensional scene in Embodiment 3 of the present invention;
  • FIG. 4 is a schematic structural diagram of a device for reconstructing a three-dimensional scene in Embodiment 4 of the present invention.
  • Fig. 5 is a schematic structural diagram of a device for reconstructing a three-dimensional scene in Embodiment 5 of the present invention.
  • FIG. 1A is a schematic structural diagram of a system for reconstructing a three-dimensional scene according to Embodiment 1 of the present invention. As shown in FIG. 1A, the system includes: a target 110, a point cloud data acquisition device 120, a spatial coordinate acquisition device 130, and a scene reconstruction device 140.
  • the three-dimensional scene may be an indoor scene of a building, a robot scene, a car scene, or other scenes that require three-dimensional reconstruction.
  • the embodiment of the present invention takes an indoor scene of a building as an example for description.
  • the target 110 is set at the set position of the three-dimensional target to be measured;
  • the point cloud data acquisition device 120 is used to acquire the point cloud data of the set frames of the three-dimensional target to be measured on which the targets 110 are set;
  • the spatial coordinate acquisition device 130 is configured to acquire the first spatial coordinates of the preset points of each of the targets 110;
  • the scene reconstruction device 140 is configured to receive the point cloud data of the set frames and each of the first spatial coordinates, and to perform three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates.
  • the point cloud data acquisition device 120 includes at least one of a three-dimensional camera and a lidar.
  • the number of targets 110 may be 3, 6, 9, 12, 15, 18, or other values, which can be determined according to the reconstruction target.
  • in general, three targets need to be set on a plane to facilitate feature recognition and to determine a reconstruction plane with high accuracy.
  • a complete indoor reconstruction of a building therefore requires at least 18 (3 × 6) targets, that is, three targets on each of the building's six planes.
  • the three-dimensional target to be tested can be a building to be tested, a car to be tested, a robot to be tested, or other three-dimensional objects.
  • the building to be tested can be any existing building, such as a residential building, an ancient building, etc., or a building under construction.
  • the preset point can be at the center of the target or at other preset positions.
  • the color of the target 110 may include only black and white, or may be colored.
  • the use of black and white targets can reduce the amount of data and facilitate feature extraction.
  • the shape of the target 110 can be square, round or other shapes, and can also be a regular shape or an irregular shape.
  • the size of the target 110 may be determined according to the size of the building to be measured and the performance of the point cloud data acquisition device 120 and the space coordinate acquisition device 130.
  • the material of the target 110 may be a PVC sticker or other materials. It should be understood that the size and material of the target 110 need to ensure that the point cloud data acquisition device 120 and the spatial coordinate acquisition device 130 can effectively and accurately collect various features of the target 110 within the range of their resolution.
  • the target 110 includes a target identification code, and the target identification code is set at the center of the target and is used to identify the target 110.
  • the target identification code includes at least one of a two-dimensional code and a barcode.
  • other identification codes can also be used for target identification, such as the target serial number.
  • the identification code of the target 110 can effectively identify the identity of the target 110.
  • in general, reconstructing an indoor model of a building requires multiple targets 110; to distinguish them, each target 110 is assigned a corresponding two-dimensional code, barcode, or serial number.
  • the target 110 includes, in order from the outside to the inside, a ring, a two-dimensional code, and a central cross.
  • FIG. 1B is a schematic structural diagram of a target 110 provided in the first embodiment of the present invention.
  • the central cross mark 113 helps the spatial coordinate acquisition device 130 aim at and align with the center of the target 110.
  • the ring 111 is placed on the outermost side of the target 110 and can be inscribed in the square contour of the target 110.
  • the center of the ring 111 is the center (bullseye) of the target 110 and also the center of the central cross mark; the two-dimensional code 112 is used to identify the target.
  • before data acquisition, the number and placement positions of the targets 110 need to be determined; specifically, a correspondence between each target's position and its two-dimensional code can be established, so that the position of a target 110 can be determined from its two-dimensional code and the correspondence.
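As an illustration of how a target's identity could be recovered from a camera frame, the following sketch uses OpenCV's QR-code detector to decode a target's two-dimensional code and look it up in a pre-surveyed position table. The table `TARGET_POSITIONS`, the payload format, and the function name are assumptions for illustration; the patent does not specify a concrete encoding.

```python
# A minimal sketch, assuming each target's QR code encodes a target ID such
# as "T07". TARGET_POSITIONS is a hypothetical pre-surveyed table mapping
# target IDs to their designed placement positions (metres).
import cv2

TARGET_POSITIONS = {"T07": (3.20, 0.00, 1.50), "T08": (3.20, 2.40, 1.50)}

def identify_target(image_bgr):
    """Decode the target's QR code and return (target_id, planned_position)."""
    detector = cv2.QRCodeDetector()
    payload, corners, _ = detector.detectAndDecode(image_bgr)
    if not payload:
        return None  # no QR code found in this view
    return payload, TARGET_POSITIONS.get(payload)
```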
  • the point cloud data acquisition device 120 may be a three-dimensional camera, a lidar, or other devices.
  • a 3D camera, also called a depth camera, can detect depth-of-field distances in the shooting space; it can be a structured light depth camera, a time-of-flight (TOF) depth camera, or a depth camera based on binocular stereo vision (also referred to as a binocular camera), or a depth camera based on another algorithm.
  • the embodiment of the present invention does not limit the specific type of the point cloud data acquisition device 120.
  • the set frames can be determined according to the reconstruction target and the field of view of the three-dimensional camera.
  • for example, with a 120° single-frame field of view, the number of set frames can be 5: 3 frames of azimuth rotation plus 2 pitched frames to add the ceiling and the floor; it can also be 6, one frame for each plane.
  • the minimum number of set frames must cover the range of the reconstruction target, and two adjacent frames need not overlap; if a set area within the reconstruction target does not need to be rebuilt, that area can be left out.
  • the related actions of the three-dimensional camera such as rotation, pitching, etc., can be realized through a motion device.
  • the reconstruction system of the three-dimensional scene further includes:
  • the reconstruction planning module is used to determine the scanning scheme of the point cloud data acquisition device 120 according to the reconstruction target and the field angle of a single frame of the point cloud data acquisition device 120.
  • the reconstruction target includes the area of the building to be tested that needs to be reconstructed, and may be the reconstruction range.
  • the scanning scheme includes the number of frames that the point cloud data acquisition device 120 needs to shoot, that is, the set frame, and may also include the angle at which the point cloud data acquisition device 120 shoots each frame.
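To make the planning step concrete, here is a minimal sketch of how a scanning scheme might be derived from the camera's single-frame field of view; the frame-count rule (ceiling of the azimuth sweep divided by the field of view, plus pitched frames for ceiling and floor) follows the 5-frame example above, and everything else is an assumption.

```python
import math

def plan_scan(azimuth_sweep_deg=360.0, fov_deg=120.0,
              include_ceiling=True, include_floor=True):
    """Return a list of (azimuth, pitch) shooting angles covering the sweep.

    With fov_deg=120 this yields 3 azimuth frames plus 2 pitched frames,
    matching the 5-frame example in the text.
    """
    n_azimuth = math.ceil(azimuth_sweep_deg / fov_deg)
    angles = [(i * azimuth_sweep_deg / n_azimuth, 0.0) for i in range(n_azimuth)]
    if include_ceiling:
        angles.append((0.0, 90.0))   # pitch up for the ceiling
    if include_floor:
        angles.append((0.0, -90.0))  # pitch down for the floor
    return angles
```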
  • the spatial coordinate acquisition device 130 may be any device that can acquire spatial coordinates.
  • the spatial coordinate acquisition device 130 includes at least one of a total station, a laser tracker, a lidar, and a coordinate measuring machine.
  • a total station, also known as an electronic total station (ETS), automatically displays the measured three-dimensional coordinates, which is convenient and fast.
  • the spatial coordinate acquisition device 130 has higher accuracy than the point cloud data acquisition device 120, so as to improve the accuracy of the point cloud data.
  • the scene reconstruction device 140 is used to reconstruct the scene from the collected data, namely the first spatial coordinates and each frame of point cloud data.
  • each frame of point cloud data is aligned and spliced mainly according to the first spatial coordinates, so as to generate the reconstructed three-dimensional scene of the three-dimensional target to be measured.
  • in the technical solution of the embodiment of the present invention, setting targets on the three-dimensional target to be measured adds feature points to the three-dimensional scene, which facilitates feature extraction and subsequent coordinate determination; indoor data is collected by two different devices, the point cloud data acquisition device and the spatial coordinate device, which improves the accuracy and robustness of the data.
  • the spatial coordinate device can determine the spatial coordinates of the targets' preset points with higher accuracy, thereby improving the accuracy of point cloud data splicing.
  • the spatial coordinates are determined directly by the device, which improves the speed and efficiency of reconstruction.
  • the spatial coordinate acquisition device directly acquires the spatial coordinates of each target's preset point, and the point cloud data acquisition device collects the point cloud data, so that the point cloud data is spliced according to those spatial coordinates.
  • Fig. 2 is a schematic structural diagram of a system for reconstructing a three-dimensional scene provided by the second embodiment of the present invention.
  • this embodiment is a refinement and supplement of the previous embodiment.
  • the reconstruction system provided by this embodiment further includes a target planning module, which is used to determine the set positions of the targets according to the field of view of the point cloud data acquisition device and the shooting target.
  • the reconstruction system of the three-dimensional scene includes: a target planning module 210, a target 220, a point cloud data acquisition device 230, a spatial coordinate acquisition device 240, a data receiving module 250, a coordinate extraction module 260, and a transformation matrix determination Module 270 and scene reconstruction module 280.
  • the target planning module 210 is configured to determine the set position of the target according to the field of view of the point cloud data acquisition device and the shooting target; the target 220 is set at the set position of the three-dimensional target to be measured;
  • the point cloud data acquisition device 230 is used to acquire the point cloud data of the set frame of the three-dimensional target to be measured on which the target is set;
  • the spatial coordinate acquisition device 240 is used to acquire the first spatial coordinates of the preset point of each target;
  • the data receiving module 250 is used to receive the point cloud data of the set frames and each of the first spatial coordinates;
  • the coordinate extraction module 260 is used to extract, from the point cloud data, the second spatial coordinates of the targets' preset points;
  • the transformation matrix determination module 270 is configured to determine, based on a preset algorithm, the transformation matrix of each frame of point cloud data of the point cloud data acquisition device according to the first and second spatial coordinates of each target's preset point;
  • the scene reconstruction module 280 is used to transform each frame of point cloud data according to the corresponding transformation matrix and to reconstruct the three-dimensional scene of the three-dimensional target to be measured from the transformed frames.
  • the three-dimensional target to be tested is the building to be tested, and the three-dimensional scene is the indoor scene of the building to be tested.
  • the shooting target is the reconstruction target, which may include the area to be reconstructed of the three-dimensional target to be measured, and may be the reconstruction range.
  • the field of view of the point cloud data acquisition device 230 refers to the range captured in a single frame of point cloud data. From the point cloud data collected by the point cloud data acquisition device 230 and the device's intrinsic parameters, the spatial coordinates of each point, that is, the aforementioned second spatial coordinates, can be determined.
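The text does not spell out how those per-point coordinates follow from a frame and the intrinsics; for a pinhole-model depth camera the standard back-projection would look like the sketch below (fx, fy, cx, cy are the camera's intrinsic parameters; the function name is illustrative).

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Convert a depth map (H x W, metres) into an (N, 3) point cloud in the
    camera frame using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth[v, u]
    """
    v, u = np.indices(depth.shape)  # pixel row (v) and column (u) grids
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels
```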
  • the spatial coordinate acquisition device 240 has higher measurement accuracy than the point cloud data acquisition device 230.
  • the transformation matrix determination module 270 is mainly used to perform coordinate transformation on the point cloud data based on the first spatial coordinates; since the spatial coordinate acquisition device 240 that produces the first spatial coordinates has higher accuracy than the point cloud data acquisition device 230, this improves the accuracy of the point cloud data.
  • the transformation matrix includes a rotation matrix and a translation matrix
  • the preset algorithm includes at least one of a quaternion array method, a singular value decomposition method, and an iterative nearest point method.
  • the scene reconstruction module 280 is specifically configured to: transform each frame of point cloud data according to the corresponding transformation matrix; determine the positional relationships of the frames of point cloud data according to the first spatial coordinates of the targets' preset points; and reconstruct the three-dimensional scene of the three-dimensional target to be measured according to the positional relationships and the transformed frames.
  • the 3D scene reconstruction system further includes:
  • the depth calibration module is used to: obtain a preset depth correction relation; determine a first bullseye distance according to the first spatial coordinates of the preset points of two adjacent targets; determine a second bullseye distance according to the second spatial coordinates of the preset points of two adjacent targets; and determine the depth correction parameters of each frame of point cloud data of the point cloud data acquisition device according to each first bullseye distance, each second bullseye distance, and the preset depth correction relation;
  • the depth correction parameters are used to perform depth correction on the point cloud data, so that the second spatial coordinates of the targets' preset points are extracted from the depth-corrected point cloud data.
  • the expression of the preset depth correction relation is:
  • D_Q = A·D_P² + B·D_P + C
  • where D_P is the depth value of the point cloud data, D_Q is the depth value of the corrected point cloud data, and A, B, and C are all depth correction parameters.
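As a minimal sketch, the relation can be applied element-wise to a frame's depth values (numpy shown; the function name is illustrative):

```python
import numpy as np

def correct_depth(depth_p, a, b, c):
    """Apply the preset depth correction relation D_Q = A*D_P**2 + B*D_P + C
    element-wise to a depth map or to the depth column of a point cloud."""
    depth_p = np.asarray(depth_p, dtype=np.float64)
    return a * depth_p**2 + b * depth_p + c
```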
  • the value ranges, initial values, and step sizes of the depth correction parameters A, B, and C can be preset.
  • exemplarily, the value range of A can be (5.5×10⁻⁷, 7.5×10⁻⁷), the value range of B can be (−0.9, 0.9), and the value range of C can be (−10, 10); other ranges are also possible and need to be determined according to the error between the depth captured by the point cloud data acquisition device and the true depth, which mainly depends on the performance parameters of the point cloud data acquisition device (such as a three-dimensional camera).
  • the initial value can be the lowest value in the value range of each depth correction parameter.
  • the step size can be customized by the user or set by default; for example, the step size of parameter A can be 0.1×10⁻⁷, the step size of parameter B can be 0.1, and the step size of parameter C can be 0.05; other step values are also possible.
  • for a single frame of point cloud data, a depth error function and a preset depth error threshold can be constructed in advance.
  • the values of the parameters A, B, and C are iterated from their initial values according to their step sizes: at each iteration, the current values of A, B, and C are substituted into the above preset depth correction relation to determine the corrected depth of each target, that is, the corrected coordinates of each target's preset point (bullseye) are obtained.
  • the corrected second bullseye distance of two adjacent targets is determined from the corrected coordinates, and the first bullseye distances and the corrected second bullseye distances are substituted into the depth error function; when the depth error function remains less than or equal to the preset depth error threshold over two consecutive iterations, the parameters A, B, and C at that point are determined to be the required depth correction parameters.
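A minimal sketch of this parameter search, assuming (i) the depth error is the mean squared difference between the first bullseye distances and the corrected second bullseye distances (the published text gives the exact error function only as an image) and (ii) stopping at the first parameter set under the threshold rather than requiring two consecutive passing iterations; the data layout and names are illustrative.

```python
import itertools
import numpy as np

def bullseye_distance(p, q):
    """Euclidean distance between two bullseyes given as (X, Y, D) triples."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def fit_depth_params(bullseyes_pq, pairs, l_first, threshold=1e-3):
    """Sweep A, B, C over the example ranges/steps from the text.

    bullseyes_pq: {id: (X, Y, D_P)} bullseye coordinates from the point cloud;
    pairs: list of adjacent bullseye id pairs (i, j);
    l_first: first bullseye distances from the spatial coordinate device,
             aligned with pairs.
    """
    grid = itertools.product(np.arange(5.5e-7, 7.5e-7, 0.1e-7),
                             np.arange(-0.9, 0.9, 0.1),
                             np.arange(-10.0, 10.0, 0.05))
    for a, b, c in grid:
        corrected = {k: (x, y, a * d**2 + b * d + c)
                     for k, (x, y, d) in bullseyes_pq.items()}
        l_second = [bullseye_distance(corrected[i], corrected[j]) for i, j in pairs]
        err = np.mean((np.asarray(l_first) - np.asarray(l_second)) ** 2)
        if err <= threshold:
            return a, b, c
    return None  # no parameters met the threshold over the swept grid
```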
  • based on the depth correction parameters and the preset depth correction relation, the depth of the frame of point cloud data captured by the point cloud data acquisition device 230 is corrected to obtain the corrected point cloud data.
  • proceeding in the same way, corrected point cloud data is obtained for each frame of point cloud data.
  • the corrected point cloud data is substituted for the point cloud data collected by the point cloud data acquisition device 230 in the subsequent steps of scene reconstruction: the data receiving module receives the corrected point cloud data of the set frames and the first spatial coordinates, and the coordinate extraction module extracts the second spatial coordinates of the targets' preset points from the corrected point cloud data.
  • the depth error function can, for example, be expressed as:
  • f = (1/n)·Σ(L_M − L_Q)²
  • where D_Q is the depth value of the corrected point cloud data, n is the number of bullseye distances (first or second bullseye distances), L_M is a first bullseye distance, and L_Q is the second bullseye distance corresponding to the corrected point cloud data.
  • for two adjacent bullseyes i(X_i, Y_i, D_i) and j(X_j, Y_j, D_j), the corresponding bullseye distance L_ij is expressed as:
  • L_ij = √((X_i − X_j)² + (Y_i − Y_j)² + (D_i − D_j)²)
  • taking the singular value decomposition method as an example, the conversion relation between the first and second spatial coordinates is set as m_i = R·p_i + T + N_i, i = 1, 2, ···, l (l ≥ 3), where:
  • m_i is the spatial coordinate of the set target's preset point i measured by the spatial coordinate acquisition device 240, that is, the first spatial coordinate
  • p_i is the spatial coordinate of the set target's preset point i collected by the point cloud data acquisition device 230, that is, the second spatial coordinate
  • R is the 3×3 rotation matrix and T is the 3-dimensional translation vector
  • N_i is the conversion error vector of the set target's preset point i
  • l is the total number of targets.
  • I_3 is the 3×3 identity matrix appearing in the singular-value-decomposition solution of the rotation matrix.
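A minimal numpy sketch of that singular-value-decomposition step, in the standard Kabsch/Umeyama form consistent with the variables above (the patent's own formula images are not reproduced here, so this follows the textbook solution):

```python
import numpy as np

def svd_rigid_transform(p, m):
    """Find R (3x3 rotation) and T (3-vector) minimising sum ||m_i - (R p_i + T)||^2.

    p, m: (l, 3) arrays of matched second / first spatial coordinates, l >= 3.
    """
    p_bar, m_bar = p.mean(axis=0), m.mean(axis=0)        # centroids
    h = (p - p_bar).T @ (m - m_bar)                      # 3x3 correlation matrix H
    u, _, vt = np.linalg.svd(h)                          # H = U D V^T
    lam = np.diag([1.0, 1.0, np.sign(np.linalg.det(vt.T @ u.T))])  # reflection guard
    r = vt.T @ lam @ u.T
    t = m_bar - r @ p_bar
    return r, t
```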
  • a conversion error threshold can be set, and when the conversion error function is less than the conversion error threshold, it is determined that the transformation matrix (rotation matrix and translation matrix) meets the requirements.
  • the scene reconstruction module can perform coordinate transformation on the point cloud data of each frame according to the transformation matrix, and perform point cloud splicing according to the transformed point cloud data of each frame, so as to reconstruct the three-dimensional scene of the three-dimensional target to be measured.
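Once each frame's (R, T) is accepted, splicing reduces to transforming every frame into the common (spatial coordinate device) frame and concatenating; a minimal sketch:

```python
import numpy as np

def stitch_frames(frames, transforms):
    """frames: list of (N_k, 3) point clouds; transforms: list of (R, T) pairs.

    Returns one merged cloud in the common spatial-coordinate-device frame.
    """
    merged = [pts @ r.T + t for pts, (r, t) in zip(frames, transforms)]
    return np.vstack(merged)
```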
  • the three-dimensional scene reconstruction system provided by the embodiments of the present invention can also be applied to the scene reconstruction of three-dimensional object models, such as automobiles, robots, or other objects.
  • in the technical solution of the embodiment of the present invention, indoor data is collected by a point cloud data acquisition device and a spatial coordinate acquisition device; performing data collection with two different devices improves the accuracy and robustness of the data; at the same time, the spatial coordinate device can determine the spatial coordinates of the targets' preset points with higher precision, thereby improving the accuracy of point cloud data splicing.
  • the spatial coordinates are determined directly by the device, which improves the speed and efficiency of reconstruction; the point cloud data is depth-corrected through the first spatial coordinates, the point cloud data is coordinate-transformed according to the preset algorithm and the first spatial coordinates, and the transformed point clouds are stitched to obtain the reconstructed model, which further improves the accuracy of model reconstruction and the quality of the reconstructed model.
  • FIG. 3 is a flowchart of a method for reconstructing a three-dimensional scene provided by the third embodiment of the present invention; this embodiment is applicable to the reconstruction of a three-dimensional scene.
  • the method can be executed by a system or apparatus for reconstructing a three-dimensional scene; as shown in FIG. 3, the method specifically includes the following steps:
  • Step 310: Based on the point cloud data acquisition device, acquire the point cloud data of the set frames of the three-dimensional target to be measured on which targets are set.
  • Step 320: Based on the spatial coordinate device, acquire the first spatial coordinates of the preset point of each target.
  • Step 330: Perform three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates.
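To see these three steps end to end, here is a small synthetic check in the spirit of the embodiments above: it fabricates matched first/second coordinates with a known rigid motion and verifies that the SVD step recovers it (`svd_rigid_transform` is reused from the sketch earlier; all names are illustrative, not from the patent).

```python
import numpy as np

def demo():
    """Build a known rigid motion, generate three matched bullseye coordinate
    pairs (p = second/camera frame, m = first/total-station frame), recover
    (R, T), and confirm the transformed points land on m."""
    rng = np.random.default_rng(0)
    angle = np.deg2rad(30.0)
    r_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0,            0.0,           1.0]])
    t_true = np.array([1.0, -2.0, 0.5])
    p = rng.uniform(0.0, 5.0, size=(3, 3))   # second spatial coordinates
    m = p @ r_true.T + t_true                # first spatial coordinates
    r, t = svd_rigid_transform(p, m)
    assert np.allclose(p @ r.T + t, m, atol=1e-9)
    print("recovered T:", np.round(t, 6))

demo()
```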
  • in the technical solution of the embodiment of the present invention, setting targets on the three-dimensional target to be measured adds feature points to the three-dimensional scene, which facilitates feature extraction and subsequent coordinate determination; indoor data is collected by two different devices, the point cloud data acquisition device and the spatial coordinate device, which improves the accuracy and robustness of the data.
  • the spatial coordinate device can determine the spatial coordinates of the targets' preset points with higher accuracy, thereby improving the accuracy of point cloud data splicing.
  • the spatial coordinates are determined directly by the device, which improves the speed and efficiency of reconstruction.
  • the spatial coordinate acquisition device directly acquires the spatial coordinates of each target's preset point, and the point cloud data acquisition device collects the point cloud data, so that the point cloud data is spliced according to those spatial coordinates.
  • the target includes a target identification code, and the target identification code is set in the center of the target and is used to identify the target.
  • before acquiring, based on the point cloud data acquisition device, the point cloud data of the set frames of the three-dimensional target to be measured on which the target is set, the method for reconstructing the three-dimensional scene further includes:
  • the set position of the target is determined according to the angle of view of the point cloud data acquisition device and the shooting target.
  • performing three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates includes:
  • receiving the point cloud data of the set frames and each of the first spatial coordinates; extracting, from the point cloud data, the second spatial coordinates of the targets' preset points; determining, based on a preset algorithm, the transformation matrix of each frame of point cloud data of the point cloud data acquisition device according to the first and second spatial coordinates of each target's preset point; and transforming each frame of point cloud data according to the corresponding transformation matrix, and performing three-dimensional scene reconstruction of the three-dimensional target to be measured according to the transformed frames of point cloud data.
  • the transformation matrix includes a rotation matrix and a translation matrix
  • the preset algorithm includes at least one of a quaternion array method, a singular value decomposition method, and an iterative nearest point method.
  • performing three-dimensional scene reconstruction of the three-dimensional target to be measured according to the transformed frames of point cloud data includes:
  • determining the positional relationships of the frames of point cloud data according to the first spatial coordinates of the targets' preset points, and performing three-dimensional scene reconstruction according to the positional relationships and the transformed frames.
  • after acquiring the first spatial coordinates of each target's preset point, and before performing three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates, the method further includes:
  • obtaining a preset depth correction relation; determining a first bullseye distance according to the first spatial coordinates of the preset points of two adjacent targets; determining a second bullseye distance according to the second spatial coordinates of the preset points of two adjacent targets; determining the depth correction parameters of each frame of point cloud data of the point cloud data acquisition device according to each first bullseye distance, each second bullseye distance, and the preset depth correction relation; and performing depth correction on the point cloud data according to the depth correction parameters.
  • correspondingly, performing the three-dimensional scene reconstruction of the building to be measured according to the point cloud data of the set frames and each of the first spatial coordinates includes: performing the three-dimensional scene reconstruction of the building to be measured according to the depth-corrected point cloud data of the set frames and each of the first spatial coordinates.
  • correspondingly, extracting the second spatial coordinates of the targets' preset points from the point cloud data includes: extracting the second spatial coordinates of the targets' preset points from the depth-corrected point cloud data.
  • the expression of the preset depth correction relation is:
  • D_Q = A·D_P² + B·D_P + C
  • where D_P is the depth value of the point cloud data, D_Q is the depth value of the corrected point cloud data, and A, B, and C are all depth correction parameters.
  • the space coordinate acquisition device includes at least one of a total station, a laser tracker, a laser radar, and a coordinate measuring machine.
  • the device includes: a point cloud data acquisition module 410, a first spatial coordinate acquisition module 420, and a three-dimensional scene reconstruction module 430 .
  • the point cloud data acquisition module 410 is used to acquire the point cloud data of the set frame of the three-dimensional target to be measured based on the point cloud data acquisition device;
  • the first spatial coordinate acquisition module 420 is used to acquire, based on the spatial coordinate device, the first spatial coordinates of each target's preset point;
  • the three-dimensional scene reconstruction module 430 is used to perform three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates.
  • in the technical solution of the embodiment of the present invention, setting targets on the three-dimensional target to be measured adds feature points to the three-dimensional scene, which facilitates feature extraction and subsequent coordinate determination; indoor data is collected by two different devices, the point cloud data acquisition device and the spatial coordinate device, which improves the accuracy and robustness of the data.
  • the spatial coordinate device can determine the spatial coordinates of the targets' preset points with higher accuracy, thereby improving the accuracy of point cloud data splicing.
  • the spatial coordinates are determined directly by the device, which improves the speed and efficiency of reconstruction.
  • the spatial coordinate acquisition device directly acquires the spatial coordinates of each target's preset point, and the point cloud data acquisition device collects the point cloud data, so that the point cloud data is spliced according to those spatial coordinates.
  • the target includes a target two-dimensional code, and the target two-dimensional code is arranged at the center of the target and is used to identify the target.
  • the 3D scene reconstruction device further includes:
  • the target planning module is used to determine the set position of the target according to the angle of view of the point cloud data acquisition device and the shooting target.
  • the 3D scene reconstruction module 430 includes:
  • the data receiving module is used to receive the point cloud data of the set frames and each of the first spatial coordinates; the coordinate extraction module is used to extract, from the point cloud data, the second spatial coordinates of the targets' preset points; the transformation matrix determination module is used to determine, based on the preset algorithm, the transformation matrix of each frame of point cloud data of the point cloud data acquisition device according to the first and second spatial coordinates of each target's preset point; the scene reconstruction module is used to transform each frame of point cloud data according to the corresponding transformation matrix and to reconstruct the three-dimensional target to be measured according to the transformed frames of point cloud data.
  • the transformation matrix includes a rotation matrix and a translation matrix
  • the preset algorithm includes at least one of a quaternion array method, a singular value decomposition method, and an iterative nearest point method.
  • the scene reconstruction module is specifically used to: transform each frame of point cloud data according to the corresponding transformation matrix; determine the positional relationships of the frames of point cloud data according to the first spatial coordinates of the targets' preset points; and reconstruct the three-dimensional target to be measured according to the positional relationships and the transformed frames.
  • the 3D scene reconstruction module 430 further includes:
  • the depth calibration module is used to: obtain the preset depth correction relation; determine the first bullseye distance according to the first spatial coordinates of the preset points of two adjacent targets; determine the second bullseye distance according to the second spatial coordinates of the preset points of two adjacent targets; determine the depth correction parameters of each frame of point cloud data according to each first bullseye distance, each second bullseye distance, and the preset depth correction relation; and perform depth correction on the point cloud data according to the depth correction parameters.
  • the expression of the preset depth correction relation is:
  • D_Q = A·D_P² + B·D_P + C
  • where D_P is the depth value of the point cloud data, D_Q is the depth value of the corrected point cloud data, and A, B, and C are all depth correction parameters.
  • the space coordinate acquisition device includes at least one of a total station, a laser tracker, a laser radar, and a coordinate measuring machine.
  • the apparatus for reconstructing a three-dimensional scene provided by an embodiment of the present invention can execute the method for reconstructing a three-dimensional scene provided by any embodiment of the present invention, and has corresponding functional modules and beneficial effects for the execution method.
  • FIG. 5 is a schematic structural diagram of a device for reconstructing a three-dimensional scene according to Embodiment 5 of the present invention.
  • the device includes a processor 510, a memory 520, an input device 530, and an output device 540;
  • the number of processors 510 in the device can be one or more;
  • in FIG. 5, one processor 510 is taken as an example; the processor 510, memory 520, input device 530, and output device 540 in the device can be connected by a bus or in other ways, with a bus connection taken as an example in FIG. 5.
  • the memory 520 can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the three-dimensional scene reconstruction method in the embodiment of the present invention (for example, the point cloud data acquisition module 410, the first spatial coordinate acquisition module 420, and the three-dimensional scene reconstruction module 430 in the apparatus for reconstructing a three-dimensional scene).
  • the processor 510 executes various functional applications and data processing of the device by running software programs, instructions, and modules stored in the memory 520, that is, realizes the above-mentioned three-dimensional scene reconstruction method.
  • the memory 520 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the terminal, and the like.
  • the memory 520 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the memory 520 may further include a memory remotely provided with respect to the processor 510, and these remote memories may be connected to the device/terminal/server through a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the input device 530 can be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the device.
  • the output device 540 may include a display device such as a display screen.
  • the sixth embodiment of the present invention also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform a method for reconstructing a three-dimensional scene.
  • the method includes: acquiring, based on a point cloud data acquisition device, the point cloud data of the set frames of the three-dimensional target to be measured on which targets are set; acquiring, based on a spatial coordinate device, the first spatial coordinates of each target's preset point; and performing three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates.
  • the storage medium containing computer-executable instructions provided by an embodiment of the present invention is not limited to the method operations described above; it can also execute related operations in the method for reconstructing a three-dimensional scene provided by any embodiment of the present invention.
  • the software product can be stored in a computer-readable storage medium, such as a floppy disk, read-only memory (ROM), random access memory (RAM), flash memory (FLASH), hard disk, or optical disk, and includes several instructions to cause a computer device (which can be a personal computer, a server, or a network device, etc.) to execute the method described in each embodiment of the present invention.
  • the units and modules included are divided only according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be realized.
  • the specific names of each functional unit are only for the convenience of distinguishing each other, and are not used to limit the protection scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A three-dimensional scene reconstruction system, method, device, and storage medium. The three-dimensional scene reconstruction system includes: targets (110), set at set positions on a three-dimensional target to be measured; a point cloud data acquisition device (120), configured to acquire point cloud data of set frames of the three-dimensional target to be measured on which the targets are set; a spatial coordinate acquisition device (130), configured to acquire first spatial coordinates of a preset point of each target; and a scene reconstruction device (140), configured to receive the point cloud data of the set frames and each of the first spatial coordinates, and to perform three-dimensional scene reconstruction of the building to be measured according to the point cloud data of the set frames and each of the first spatial coordinates. By reconstructing the three-dimensional scene with two kinds of devices, a point cloud data acquisition device and a spatial coordinate acquisition device, the system improves reconstruction accuracy.

Description

Three-dimensional scene reconstruction system, method, device, and storage medium
CROSS-REFERENCE TO RELATED APPLICATION
This application claims priority to Chinese Patent Application No. 202010288970.1, filed on April 14, 2020, the entire contents of which are incorporated herein by reference for all purposes.
TECHNICAL FIELD
Embodiments of the present invention relate to the technical field of surveying, mapping, and measurement, and in particular to a three-dimensional scene reconstruction system, method, device, and storage medium.
BACKGROUND
With the development of smart cities, cultural-heritage preservation, indoor navigation, and virtual reality, the demand for refined indoor three-dimensional models keeps growing.
Existing indoor three-dimensional reconstruction methods mainly include two kinds: the first uses ranging sensors such as lasers and radar to acquire structural information of object surfaces so as to realize three-dimensional reconstruction; the second collects indoor point cloud data with a depth camera and splices the point clouds through feature recognition, thereby realizing three-dimensional reconstruction.
SUMMARY
Embodiments of the present invention disclose a three-dimensional scene reconstruction system, method, device, and storage medium, realizing high-precision three-dimensional reconstruction of a three-dimensional scene.
In a first aspect, an embodiment of the present invention provides a three-dimensional scene reconstruction system, the system including:
a target, set at a set position on a three-dimensional target to be measured;
a point cloud data acquisition device, configured to acquire point cloud data of set frames of the three-dimensional target to be measured on which the target is set;
a spatial coordinate acquisition device, configured to acquire first spatial coordinates of a preset point of each of the targets;
a scene reconstruction device, configured to receive the point cloud data of the set frames and each of the first spatial coordinates, and to perform three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates.
In a second aspect, an embodiment of the present invention further provides a three-dimensional scene reconstruction method, the method including:
acquiring, based on a point cloud data acquisition device, point cloud data of set frames of the three-dimensional target to be measured on which targets are set;
acquiring, based on a spatial coordinate device, first spatial coordinates of a preset point of each of the targets;
performing three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates.
In a third aspect, an embodiment of the present invention further provides a three-dimensional scene reconstruction device, the device including:
one or more processors;
a memory, configured to store one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the three-dimensional scene reconstruction method provided by any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute the three-dimensional scene reconstruction method provided by any embodiment of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1A is a schematic structural diagram of a three-dimensional scene reconstruction system in Embodiment 1 of the present invention;
FIG. 1B is a schematic structural diagram of a target in Embodiment 1 of the present invention;
FIG. 2 is a schematic structural diagram of a three-dimensional scene reconstruction system in Embodiment 2 of the present invention;
FIG. 3 is a flowchart of a three-dimensional scene reconstruction method in Embodiment 3 of the present invention;
FIG. 4 is a schematic structural diagram of a three-dimensional scene reconstruction apparatus in Embodiment 4 of the present invention;
FIG. 5 is a schematic structural diagram of a three-dimensional scene reconstruction device in Embodiment 5 of the present invention.
DETAILED DESCRIPTION
The present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Embodiment 1
FIG. 1A is a schematic structural diagram of a three-dimensional scene reconstruction system provided by Embodiment 1 of the present invention. As shown in FIG. 1A, the system includes: a target 110, a point cloud data acquisition device 120, a spatial coordinate acquisition device 130, and a scene reconstruction device 140.
The three-dimensional scene may be an indoor scene of a building, a robot scene, a car scene, or another scene requiring three-dimensional reconstruction; for ease of description, the embodiments of the present invention take the indoor scene of a building as an example. The target 110 is set at a set position on the three-dimensional target to be measured; the point cloud data acquisition device 120 is configured to acquire the point cloud data of the set frames of the three-dimensional target to be measured on which the targets 110 are set; the spatial coordinate acquisition device 130 is configured to acquire the first spatial coordinates of the preset point of each target 110; the scene reconstruction device 140 is configured to receive the point cloud data of the set frames and each of the first spatial coordinates, and to perform three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates.
Optionally, the point cloud data acquisition device 120 includes at least one of a three-dimensional camera and a lidar.
Specifically, the number of targets 110 may be 3, 6, 9, 12, 15, 18, or another value, determined according to the reconstruction target. In general, three targets are needed for one plane, to facilitate feature recognition and to determine a reconstruction plane with high accuracy. A complete indoor reconstruction of a building therefore requires at least 18 (3 × 6) targets, that is, three targets on each of the building's six planes. The three-dimensional target to be measured may be a building, a car, a robot, or another three-dimensional object. The building to be measured may be any existing building, such as a residential building or an ancient building, or a building under construction. The preset point may be at the center of the target or at another preset position.
Further, the target 110 may include only black and white, or may be colored; black-and-white targets reduce the amount of data and facilitate feature extraction. The shape of the target 110 may be square, round, or another shape, regular or irregular. The size of the target 110 may be determined according to the size of the building to be measured and the performance of the point cloud data acquisition device 120 and the spatial coordinate acquisition device 130. The material of the target 110 may be a PVC sticker or another material. It should be understood that the size and material of the target 110 must ensure that the point cloud data acquisition device 120 and the spatial coordinate acquisition device 130 can effectively and accurately capture the target's features within the range of their resolution.
Optionally, the target 110 includes a target identification code, set at the center of the target and used to identify the target 110.
Optionally, the target identification code includes at least one of a two-dimensional code and a barcode. Of course, other identification codes, such as a target serial number, can also be used for target identification.
Specifically, the identification code of the target 110 can effectively identify the target 110. In general, reconstructing the indoor model of a building requires multiple targets 110; to distinguish them, each target 110 is assigned a corresponding two-dimensional code, barcode, or serial number.
Optionally, the target 110 includes, in order from the outside to the inside, a ring, a two-dimensional code, and a central cross.
Exemplarily, FIG. 1B is a schematic structural diagram of a target 110 provided by Embodiment 1 of the present invention. As shown in FIG. 1B, the target 110 includes, from the outside to the inside, a ring 111, a two-dimensional code 112, and a central cross mark 113. The central cross mark 113 helps the spatial coordinate acquisition device 130 aim at and align with the center of the target 110; the ring 111 is placed on the outermost side of the target 110 and can be inscribed in the square contour of the target 110; the center of the ring 111 is the center (bullseye) of the target 110 and also the center of the central cross mark; the two-dimensional code 112 identifies the target.
Further, before data acquisition, i.e., before collecting the point cloud data and the first spatial coordinates, the number and placement positions of the targets 110 need to be determined. Specifically, a correspondence between each target's position and its two-dimensional code can be established, so that the position of a target 110 can be determined from its two-dimensional code and the correspondence.
Specifically, the point cloud data acquisition device 120 may be a three-dimensional camera, a lidar, or another device. A three-dimensional camera, also called a depth camera, can detect depth-of-field distances in the shooting space; it may be a structured light depth camera, a time-of-flight (TOF) depth camera, or a depth camera based on binocular stereo vision (also called a binocular camera), or a depth camera based on another algorithm. Embodiments of the present invention do not limit the specific type of the point cloud data acquisition device 120.
Specifically, the set frames can be determined according to the reconstruction target and the field of view of the three-dimensional camera. For example, if the reconstruction target is a complete model of the building to be measured and the camera's single-frame field of view is 120°, the number of set frames may be 5 (3 frames of azimuth rotation plus 2 pitched frames adding the ceiling and the floor), or 6 (one frame per plane). The minimum number of set frames must cover the range of the reconstruction target, and two adjacent frames need not overlap. If a set area within the reconstruction target does not need to be rebuilt, that area may be left out.
Further, the relevant motions of the three-dimensional camera, such as rotation and pitching, can be realized by a motion device.
Further, the three-dimensional scene reconstruction system further includes:
a reconstruction planning module, configured to determine the scanning scheme of the point cloud data acquisition device 120 according to the reconstruction target and the single-frame field of view of the point cloud data acquisition device 120.
The reconstruction target includes the area of the building to be measured that needs to be reconstructed and may be the reconstruction range. The scanning scheme includes the number of frames that the point cloud data acquisition device 120 needs to capture, i.e., the set frames, and may also include the angle at which each frame is captured.
Specifically, the spatial coordinate acquisition device 130 may be any device capable of acquiring spatial coordinates. Optionally, the spatial coordinate acquisition device 130 includes at least one of a total station, a laser tracker, a lidar, and a coordinate measuring machine. A total station, also known as an electronic total station (ETS), automatically displays the measured three-dimensional coordinates, which is convenient and fast. The spatial coordinate acquisition device 130 has higher accuracy than the point cloud data acquisition device 120, so as to improve the accuracy of the point cloud data.
Specifically, the scene reconstruction device 140 performs scene reconstruction from the collected data, namely the first spatial coordinates and each frame of point cloud data, mainly by aligning and splicing the frames of point cloud data according to the first spatial coordinates, thereby generating the reconstructed three-dimensional scene of the three-dimensional target to be measured.
In the technical solution of this embodiment, setting targets on the three-dimensional target to be measured adds feature points to the three-dimensional scene, which facilitates feature extraction and subsequent coordinate determination; collecting indoor data with two different devices, the point cloud data acquisition device and the spatial coordinate device, improves the accuracy and robustness of the data; at the same time, the spatial coordinate device can determine the spatial coordinates of the targets' preset points with higher accuracy, improving the accuracy of point cloud data splicing; and determining the spatial coordinates directly with the device improves the speed and efficiency of reconstruction. In this solution, the spatial coordinate acquisition device directly acquires the spatial coordinates of each target's preset point, and the point cloud data acquisition device collects the point cloud data, so that the point clouds are spliced according to those spatial coordinates, thereby realizing three-dimensional scene reconstruction of the three-dimensional target and improving reconstruction accuracy and efficiency.
Embodiment 2
FIG. 2 is a schematic structural diagram of a three-dimensional scene reconstruction system provided by Embodiment 2 of the present invention. This embodiment refines and supplements the previous embodiment. Optionally, the system provided by this embodiment further includes a target planning module, configured to determine the set positions of the targets according to the field of view of the point cloud data acquisition device and the shooting target.
As shown in FIG. 2, the three-dimensional scene reconstruction system includes: a target planning module 210, a target 220, a point cloud data acquisition device 230, a spatial coordinate acquisition device 240, a data receiving module 250, a coordinate extraction module 260, a transformation matrix determination module 270, and a scene reconstruction module 280.
The target planning module 210 is configured to determine the set positions of the targets according to the field of view of the point cloud data acquisition device and the shooting target; the target 220 is set at a set position on the three-dimensional target to be measured; the point cloud data acquisition device 230 is configured to acquire the point cloud data of the set frames of the three-dimensional target to be measured on which the targets are set; the spatial coordinate acquisition device 240 is configured to acquire the first spatial coordinates of the preset point of each target; the data receiving module 250 is configured to receive the point cloud data of the set frames and each of the first spatial coordinates; the coordinate extraction module 260 is configured to extract, from the point cloud data, the second spatial coordinates of the targets' preset points; the transformation matrix determination module 270 is configured to determine, based on a preset algorithm, the transformation matrix of each frame of point cloud data of the point cloud data acquisition device according to the first and second spatial coordinates of each target's preset point; the scene reconstruction module 280 is configured to transform each frame of point cloud data according to the corresponding transformation matrix and to perform three-dimensional scene reconstruction of the three-dimensional target to be measured according to the transformed frames of point cloud data.
Specifically, take the case where the three-dimensional target to be measured is a building and the three-dimensional scene is the building's indoor scene. The shooting target is the reconstruction target; it may include the area of the three-dimensional target that needs to be reconstructed and may be the reconstruction range. The field of view of the point cloud data acquisition device 230 refers to the range captured in a single frame of point cloud data. From the point cloud data collected by the point cloud data acquisition device 230 and its intrinsic parameters, the spatial coordinates of each point, i.e., the above second spatial coordinates, can be determined. The spatial coordinate acquisition device 240 has higher measurement accuracy than the point cloud data acquisition device 230.
Specifically, the transformation matrix determination module 270 is mainly used to perform coordinate transformation on the point cloud data based on the first spatial coordinates; since the spatial coordinate acquisition device 240 corresponding to the first spatial coordinates has higher accuracy than the point cloud data acquisition device 230, this improves the accuracy of the point cloud data.
Optionally, the transformation matrix includes a rotation matrix and a translation matrix, and the preset algorithm includes at least one of the quaternion method, the singular value decomposition method, and the iterative closest point method.
Optionally, the scene reconstruction module 280 is specifically configured to:
transform each frame of point cloud data according to the corresponding transformation matrix; determine the positional relationships of the frames of point cloud data according to the first spatial coordinates of the targets' preset points; and perform three-dimensional scene reconstruction of the three-dimensional target to be measured according to the positional relationships and the transformed frames of point cloud data.
Optionally, the three-dimensional scene reconstruction system further includes:
a depth calibration module, configured to: obtain a preset depth correction relation; determine a first bullseye distance according to the first spatial coordinates of the preset points of two adjacent targets; determine a second bullseye distance according to the second spatial coordinates of the preset points of two adjacent targets; determine the depth correction parameters of each frame of point cloud data of the point cloud data acquisition device according to each first bullseye distance, each second bullseye distance, and the preset depth correction relation; and perform depth correction on the point cloud data according to the depth correction parameters, so that the second spatial coordinates of the targets' preset points are extracted from the depth-corrected point cloud data.
Optionally, the preset depth correction relation is expressed as:
D_Q = A·D_P² + B·D_P + C
where D_P is the depth value of the point cloud data, D_Q is the depth value of the corrected point cloud data, and A, B, and C are all depth correction parameters.
Specifically, the value ranges, initial values, and step sizes of the depth correction parameters A, B, and C can be preset. Exemplarily, the value range of A can be (5.5×10⁻⁷, 7.5×10⁻⁷), the value range of B can be (−0.9, 0.9), and the value range of C can be (−10, 10); other ranges are also possible, determined according to the error between the depth captured by the point cloud data acquisition device and the true depth, which mainly depends on the performance parameters of the point cloud data acquisition device (such as a three-dimensional camera). The initial value can be the lowest value in each parameter's range; the step size can be user-defined or set by default, e.g., 0.1×10⁻⁷ for A, 0.1 for B, and 0.05 for C, although other step values are also possible.
Further, for a single frame of point cloud data, a depth error function and a preset depth error threshold can be constructed in advance. Starting from each parameter's initial value and step size, the values of A, B, and C are iterated according to the preset depth correction relation and the depth error function: at each iteration, the current values of A, B, and C are substituted into the preset depth correction relation to determine each target's corrected depth, i.e., the corrected coordinates of each target's preset point (bullseye); the corrected second bullseye distance of each pair of adjacent targets is determined from the corrected coordinates; and the first bullseye distances and the corrected second bullseye distances are substituted into the depth error function. When the depth error function remains less than or equal to the preset depth error threshold over two consecutive iterations, the parameters A, B, and C at that point are determined to be the required depth correction parameters. Based on these depth correction parameters and the preset depth correction relation, the depth of the frame of point cloud data captured by the point cloud data acquisition device 230 is corrected to obtain corrected point cloud data; proceeding in the same way, corrected point cloud data is obtained for each frame.
Correspondingly, the corrected point cloud data replaces the point cloud data collected by the point cloud data acquisition device 230 in each step of scene reconstruction: the data receiving module receives the corrected point cloud data of the set frames and the first spatial coordinates, and the coordinate extraction module extracts the second spatial coordinates of the targets' preset points from the corrected point cloud data.
Specifically, the depth error function may, for example, be expressed as:
f = (1/n)·Σ(L_M − L_Q)²
where D_Q is the depth value of the corrected point cloud data as above, n is the number of bullseye distances (first or second bullseye distances), L_M is a first bullseye distance, and L_Q is the second bullseye distance corresponding to the corrected point cloud data. For two adjacent bullseyes i(X_i, Y_i, D_i) and j(X_j, Y_j, D_j), the corresponding bullseye distance L_i-j is expressed as:
L_i-j = √((X_i − X_j)² + (Y_i − Y_j)² + (D_i − D_j)²)
Specifically, taking the singular value decomposition method as an example, the function of the transformation matrix determination module is described in detail. Let the conversion relation between the first spatial coordinates and the second spatial coordinates be:
m_i = R·p_i + T + N_i, i = 1, 2, 3, ···, l (l ≥ 3)
where m_i is the spatial coordinate of the set target's preset point i from the spatial coordinate acquisition device 240, i.e., the first spatial coordinate; p_i is the spatial coordinate of the set target's preset point i collected by the point cloud data acquisition device 230, i.e., the second spatial coordinate; R is the 3×3 rotation matrix; T is the 3-dimensional translation vector; N_i is the conversion error vector of the set target's preset point i; and l is the total number of targets.
A conversion error function is established in advance:
Σ²(R, T) = Σ_{i=1…l} ‖m_i − (R·p_i + T)‖²
With the centroid under the point cloud data acquisition device 230 written as p̄ = (1/l)·Σ p_i and the centroid under the spatial coordinate acquisition device 240 written as m̄ = (1/l)·Σ m_i (when the preset point is the target's bullseye, these are the centroids of the bullseyes), the centered coordinates are q_i = p_i − p̄ and q′_i = m_i − m̄, and the correlation matrix is H_3×3 = Σ_{i=1…l} q_i·q′_iᵀ.
Performing singular value decomposition on the matrix H_3×3 gives H_3×3 = U·D·Vᵀ, where D = diag(d_i), d_1 ≥ d_2 ≥ d_3 ≥ 0. Let Λ = I_3 when det(V·Uᵀ) = 1 and Λ = diag(1, 1, −1) when det(V·Uᵀ) = −1,
where I_3 is the 3×3 identity matrix.
When rank(H_3×3) ≥ 2, the expressions of the rotation matrix and the translation matrix are obtained as:
R = V·Λ·Uᵀ, T = m̄ − R·p̄
Further, a conversion error threshold can be set; when the conversion error function is less than the conversion error threshold, the transformation matrix (rotation matrix and translation matrix) is determined to meet the requirements.
Next, the scene reconstruction module can perform coordinate transformation on each frame of point cloud data according to the transformation matrix and splice the transformed frames, so as to reconstruct the three-dimensional scene of the three-dimensional target to be measured.
It should be understood that the three-dimensional scene reconstruction system provided by embodiments of the present invention can also be applied to scene reconstruction of three-dimensional object models, such as cars, robots, or other objects.
In the technical solution of this embodiment, indoor data is collected by a point cloud data acquisition device and a spatial coordinate acquisition device; performing data collection with two different devices improves the accuracy and robustness of the data; at the same time, the spatial coordinate device can determine the spatial coordinates of the targets' preset points with higher precision, improving the accuracy of point cloud data splicing; and determining the spatial coordinates directly with the device improves the speed and efficiency of reconstruction. The point cloud data is depth-corrected through the first spatial coordinates, coordinate-transformed according to the preset algorithm and the first spatial coordinates, and stitched based on the transformed point clouds to obtain the reconstructed model, which further improves the accuracy of model reconstruction and the quality of the reconstructed model.
Embodiment 3
FIG. 3 is a flowchart of a three-dimensional scene reconstruction method provided by Embodiment 3 of the present invention. This embodiment is applicable to the reconstruction of three-dimensional scenes, and the method can be executed by a three-dimensional scene reconstruction system or apparatus. As shown in FIG. 3, the method specifically includes the following steps:
Step 310: Based on a point cloud data acquisition device, acquire the point cloud data of the set frames of the three-dimensional target to be measured on which targets are set.
Step 320: Based on a spatial coordinate device, acquire the first spatial coordinates of the preset point of each target.
Step 330: Perform three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates.
In the technical solution of this embodiment, setting targets on the three-dimensional target to be measured adds feature points to the three-dimensional scene, which facilitates feature extraction and subsequent coordinate determination; collecting indoor data with two different devices, the point cloud data acquisition device and the spatial coordinate device, improves the accuracy and robustness of the data; at the same time, the spatial coordinate device can determine the spatial coordinates of the targets' preset points with higher accuracy, improving the accuracy of point cloud data splicing; and determining the spatial coordinates directly with the device improves the speed and efficiency of reconstruction. In this solution, the spatial coordinate acquisition device directly acquires the spatial coordinates of each target's preset point, and the point cloud data acquisition device collects the point cloud data, so that the point clouds are spliced according to those spatial coordinates, thereby realizing three-dimensional scene reconstruction of the three-dimensional target and improving reconstruction accuracy and efficiency.
Optionally, the target includes a target identification code, set at the center of the target and used to identify the target.
Optionally, before acquiring, based on the point cloud data acquisition device, the point cloud data of the set frames of the three-dimensional target to be measured on which targets are set, the three-dimensional scene reconstruction method further includes:
determining the set position of the target according to the field of view of the point cloud data acquisition device and the shooting target.
Optionally, performing three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates includes:
receiving the point cloud data of the set frames and each of the first spatial coordinates; extracting, from the point cloud data, the second spatial coordinates of the targets' preset points; determining, based on a preset algorithm, the transformation matrix of each frame of point cloud data of the point cloud data acquisition device according to the first and second spatial coordinates of each target's preset point; and transforming each frame of point cloud data according to the corresponding transformation matrix, and performing three-dimensional scene reconstruction of the three-dimensional target to be measured according to the transformed frames of point cloud data.
Optionally, the transformation matrix includes a rotation matrix and a translation matrix, and the preset algorithm includes at least one of the quaternion method, the singular value decomposition method, and the iterative closest point method.
Optionally, performing three-dimensional scene reconstruction of the three-dimensional target to be measured according to the transformed frames of point cloud data includes:
determining the positional relationships of the frames of point cloud data according to the first spatial coordinates of the targets' preset points; and performing three-dimensional scene reconstruction of the three-dimensional target to be measured according to the positional relationships and the transformed frames of point cloud data.
Optionally, after acquiring the first spatial coordinates of each target's preset point and before performing three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates, the three-dimensional scene reconstruction method further includes:
obtaining a preset depth correction relation; determining a first bullseye distance according to the first spatial coordinates of the preset points of two adjacent targets; determining a second bullseye distance according to the second spatial coordinates of the preset points of two adjacent targets; determining the depth correction parameters of each frame of point cloud data of the point cloud data acquisition device according to each first bullseye distance, each second bullseye distance, and the preset depth correction relation; and performing depth correction on the point cloud data according to the depth correction parameters. Correspondingly, performing three-dimensional scene reconstruction of the building to be measured according to the point cloud data of the set frames and each of the first spatial coordinates includes: performing three-dimensional scene reconstruction of the building to be measured according to the depth-corrected point cloud data of the set frames and each of the first spatial coordinates. Correspondingly, extracting the second spatial coordinates of the targets' preset points from the point cloud data includes: extracting the second spatial coordinates of the targets' preset points from the depth-corrected point cloud data.
Optionally, the preset depth correction relation is expressed as:
D_Q = A·D_P² + B·D_P + C
where D_P is the depth value of the point cloud data, D_Q is the depth value of the corrected point cloud data, and A, B, and C are all depth correction parameters.
Optionally, the spatial coordinate acquisition device includes at least one of a total station, a laser tracker, a lidar, and a coordinate measuring machine.
Embodiment 4
FIG. 4 is a schematic diagram of a three-dimensional scene reconstruction apparatus provided by Embodiment 4 of the present invention. As shown in FIG. 4, the apparatus includes: a point cloud data acquisition module 410, a first spatial coordinate acquisition module 420, and a three-dimensional scene reconstruction module 430.
The point cloud data acquisition module 410 is configured to acquire, based on a point cloud data acquisition device, the point cloud data of the set frames of the three-dimensional target to be measured on which targets are set; the first spatial coordinate acquisition module 420 is configured to acquire, based on a spatial coordinate device, the first spatial coordinates of each target's preset point; the three-dimensional scene reconstruction module 430 is configured to perform three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates.
In the technical solution of this embodiment, setting targets on the three-dimensional target to be measured adds feature points to the three-dimensional scene, which facilitates feature extraction and subsequent coordinate determination; collecting indoor data with two different devices, the point cloud data acquisition device and the spatial coordinate device, improves the accuracy and robustness of the data; at the same time, the spatial coordinate device can determine the spatial coordinates of the targets' preset points with higher accuracy, improving the accuracy of point cloud data splicing; and determining the spatial coordinates directly with the device improves the speed and efficiency of reconstruction. In this solution, the spatial coordinate acquisition device directly acquires the spatial coordinates of each target's preset point, and the point cloud data acquisition device collects the point cloud data, so that the point clouds are spliced according to those spatial coordinates, thereby realizing three-dimensional scene reconstruction of the three-dimensional target and improving reconstruction accuracy and efficiency. Optionally, the target includes a target two-dimensional code, set at the center of the target and used to identify the target.
Optionally, the three-dimensional scene reconstruction apparatus further includes:
a target planning module, configured to determine the set position of the target according to the field of view of the point cloud data acquisition device and the shooting target.
Optionally, the three-dimensional scene reconstruction module 430 includes:
a data receiving module, configured to receive the point cloud data of the set frames and each of the first spatial coordinates; a coordinate extraction module, configured to extract, from the point cloud data, the second spatial coordinates of the targets' preset points; a transformation matrix determination module, configured to determine, based on a preset algorithm, the transformation matrix of each frame of point cloud data of the point cloud data acquisition device according to the first and second spatial coordinates of each target's preset point; a scene reconstruction module, configured to transform each frame of point cloud data according to the corresponding transformation matrix and to reconstruct the three-dimensional target to be measured according to the transformed frames of point cloud data.
Optionally, the transformation matrix includes a rotation matrix and a translation matrix, and the preset algorithm includes at least one of the quaternion method, the singular value decomposition method, and the iterative closest point method.
Optionally, the scene reconstruction module is specifically configured to:
transform each frame of point cloud data according to the corresponding transformation matrix; determine the positional relationships of the frames of point cloud data according to the first spatial coordinates of the targets' preset points; and reconstruct the three-dimensional target to be measured according to the positional relationships and the transformed frames of point cloud data.
Optionally, the three-dimensional scene reconstruction module 430 further includes:
a depth calibration module, configured to: obtain a preset depth correction relation; determine a first bullseye distance according to the first spatial coordinates of the preset points of two adjacent targets;
determine a second bullseye distance according to the second spatial coordinates of the preset points of two adjacent targets; determine the depth correction parameters of each frame of point cloud data of the point cloud data acquisition device according to each first bullseye distance, each second bullseye distance, and the preset depth correction relation; and perform depth correction on the point cloud data according to the depth correction parameters.
Optionally, the preset depth correction relation is expressed as:
D_Q = A·D_P² + B·D_P + C
where D_P is the depth value of the point cloud data, D_Q is the depth value of the corrected point cloud data, and A, B, and C are all depth correction parameters.
Optionally, the spatial coordinate acquisition device includes at least one of a total station, a laser tracker, a lidar, and a coordinate measuring machine.
The three-dimensional scene reconstruction apparatus provided by embodiments of the present invention can execute the three-dimensional scene reconstruction method provided by any embodiment of the present invention and has the functional modules and beneficial effects corresponding to the executed method.
实施例五
图5为本发明实施例五提供的一种三维场景的重建设备的结构示意图,如图5所示,该设备包括处理器510、存储器520、输入装置530和输出装置540;设备处理器510的数量可以是一个或多个,图5中以一个处理器510为例;设备中的处理器510、存储器520、输入装置530和输出装置540可以通过总线或其他方式连接,图5中以通过总线连接为例。
存储器520作为一种计算机可读存储介质,可用于存储软件程序、计算机可执行程序以及模块,如本发明实施例中的三维场景的重建方法对应的程序指令/模块(例如,三维场景的重建装置中的点云数据获取模块410、第一空间坐标获取模块420和三维场景重建模块430)。处理器510通过运行存储在存储器520中的软件程序、指令以及模块,从而执行设备的各种功能应用以及数据处理,即实现上述的三维场景的重建方法。
存储器520可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据终端的使用所创建的数据等。此外,存储器520可以包括高速随机存取存 储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实例中,存储器520可进一步包括相对于处理器510远程设置的存储器,这些远程存储器可以通过网络连接至设备/终端/服务器。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
输入装置530可用于接收输入的数字或字符信息,以及产生与设备的用户设置以及功能控制有关的键信号输入。输出装置540可包括显示屏等显示设备。
Embodiment 6
Embodiment 6 of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute a three-dimensional scene reconstruction method, the method including:
acquiring, based on a point cloud data acquisition device, the point cloud data of the set frames of the three-dimensional target to be measured on which targets are set;
acquiring, based on a spatial coordinate device, the first spatial coordinates of the preset point of each target;
performing three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates.
Of course, in the storage medium containing computer-executable instructions provided by embodiments of the present invention, the computer-executable instructions are not limited to the method operations described above and can also execute related operations in the three-dimensional scene reconstruction method provided by any embodiment of the present invention.
From the above description of the embodiments, those skilled in the art can clearly understand that the technical solutions of the embodiments of the present invention can be implemented by means of software plus the necessary general-purpose hardware, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, read-only memory (ROM), random access memory (RAM), flash memory (FLASH), hard disk, or optical disk, and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments of the present invention.
It is worth noting that, in the above embodiments of the three-dimensional scene reconstruction apparatus or system, the included units and modules are divided only according to functional logic and are not limited to the above division, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for ease of mutual distinction and are not used to limit the protection scope of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described here, and various obvious changes, readjustments, and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to the above embodiments and may include more other equivalent embodiments without departing from the concept of the present invention, and its scope is determined by the scope of the appended claims.

Claims (20)

  1. A three-dimensional scene reconstruction system, characterized in that the system comprises:
    a target, set at a set position on a three-dimensional target to be measured;
    a point cloud data acquisition device, configured to acquire point cloud data of set frames of the three-dimensional target to be measured on which the target is set;
    a spatial coordinate acquisition device, configured to acquire first spatial coordinates of a preset point of each said target;
    a scene reconstruction device, configured to receive the point cloud data of the set frames and each of the first spatial coordinates, and to perform three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates.
  2. The system according to claim 1, characterized in that the target comprises a target identification code, the target identification code being set at the center of the target and used to identify the target.
  3. The system according to claim 2, characterized in that the target identification code comprises at least one of a two-dimensional code, a barcode, and a serial number code.
  4. The system according to claim 1, characterized in that the system further comprises:
    a target planning module, configured to determine the set position of the target according to the field of view of the point cloud data acquisition device and the shooting target.
  5. The system according to claim 1, characterized in that the system further comprises:
    a reconstruction planning module, configured to determine the scanning scheme of the point cloud data acquisition device according to a reconstruction target and the single-frame field of view of the point cloud data acquisition device, wherein the reconstruction target is the area of the three-dimensional target to be measured that needs to be reconstructed.
  6. The system according to claim 5, characterized in that the point cloud data of the set frames and each of the first spatial coordinates are determined according to the reconstruction target and the field of view of the point cloud data acquisition device.
  7. The system according to claim 1, characterized in that the scene reconstruction device comprises:
    a data receiving module, configured to receive the point cloud data of the set frames and each of the first spatial coordinates;
    a coordinate extraction module, configured to extract, from the point cloud data, second spatial coordinates of the preset point of the target;
    a transformation matrix determination module, configured to determine, based on a preset algorithm, the transformation matrix of each frame of point cloud data of the point cloud data acquisition device according to the first spatial coordinates and the second spatial coordinates of the preset point of each target;
    a scene reconstruction module, configured to transform each frame of the point cloud data according to each said transformation matrix, and to perform three-dimensional scene reconstruction of the three-dimensional target to be measured according to the transformed frames of point cloud data.
  8. The system according to claim 7, characterized in that the transformation matrix comprises a rotation matrix and a translation matrix, and the preset algorithm comprises at least one of the quaternion method, the singular value decomposition method, and the iterative closest point method.
  9. The system according to claim 7, characterized in that the scene reconstruction module is specifically configured to:
    transform each frame of the point cloud data according to each said transformation matrix;
    determine the positional relationship of each frame of the point cloud data according to the first spatial coordinates of the preset points of each said target;
    perform three-dimensional scene reconstruction of the three-dimensional target to be measured according to each said positional relationship and the transformed frames of the point cloud data.
  10. The system according to claim 7, characterized in that the scene reconstruction device further comprises a depth calibration module, configured to:
    obtain a preset depth correction relation;
    determine a first bullseye distance according to the first spatial coordinates of the preset points of two adjacent targets;
    determine a second bullseye distance according to the second spatial coordinates of the preset points of two adjacent targets;
    determine the depth correction parameters of each frame of point cloud data of the point cloud data acquisition device according to each first bullseye distance, each second bullseye distance, and the preset depth correction relation;
    perform depth correction on the point cloud data according to the depth correction parameters, so that the second spatial coordinates of the preset point of the target are extracted from the depth-corrected point cloud data.
  11. The system according to claim 10, characterized in that the preset depth correction relation is expressed as:
    D_Q = A·D_P² + B·D_P + C
    where D_P is the depth value of the point cloud data, D_Q is the depth value of the corrected point cloud data, and A, B, and C are all depth correction parameters.
  12. The system according to claim 11, characterized in that the value ranges of the depth correction parameters are determined according to the depth captured by the point cloud data acquisition device and the true depth.
  13. The system according to claim 1, characterized in that the spatial coordinate acquisition device comprises at least one of a total station, a laser tracker, a lidar, and a coordinate measuring machine.
  14. The system according to claim 1, characterized in that the point cloud data acquisition device comprises at least one of a three-dimensional camera and a lidar.
  15. The system according to claim 1, characterized in that the target comprises, in order from the outside to the inside, a ring, a two-dimensional code, and a central cross.
  16. A three-dimensional scene reconstruction method, characterized by comprising:
    acquiring, based on a point cloud data acquisition device, point cloud data of set frames of the three-dimensional target to be measured on which targets are set;
    acquiring, based on a spatial coordinate device, first spatial coordinates of a preset point of each said target;
    performing three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates.
  17. The method according to claim 16, characterized in that before acquiring, based on the point cloud data acquisition device, the point cloud data of the set frames of the three-dimensional target to be measured on which targets are set, the method further comprises:
    determining the set position of the target according to the field of view of the point cloud data acquisition device and the shooting target.
  18. A three-dimensional scene reconstruction apparatus, characterized in that the apparatus comprises:
    a point cloud data acquisition module, configured to acquire, based on a point cloud data acquisition device, point cloud data of set frames of the three-dimensional target to be measured on which targets are set;
    a first spatial coordinate acquisition module, configured to acquire, based on a spatial coordinate device, first spatial coordinates of a preset point of each said target;
    a three-dimensional scene reconstruction module, configured to perform three-dimensional scene reconstruction of the three-dimensional target to be measured according to the point cloud data of the set frames and each of the first spatial coordinates.
  19. A three-dimensional scene reconstruction device, characterized in that the device comprises:
    one or more processors;
    a memory, configured to store one or more programs;
    when the one or more programs are executed by the one or more processors, the one or more processors implement the three-dimensional scene reconstruction method according to claim 16 or claim 17.
  20. A storage medium containing computer-executable instructions, characterized in that the computer-executable instructions, when executed by a computer processor, are used to execute the three-dimensional scene reconstruction method according to claim 16 or claim 17.
PCT/CN2020/131095 2020-04-14 2020-11-24 Three-dimensional scene reconstruction system, method, device, and storage medium WO2021208442A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010288970.1 2020-04-14
CN202010288970.1A CN113592989B (zh) 2020-04-14 2020-04-14 Three-dimensional scene reconstruction system, method, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021208442A1 true WO2021208442A1 (zh) 2021-10-21

Family

ID=78083760

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/131095 WO2021208442A1 (zh) 2020-04-14 2020-11-24 Three-dimensional scene reconstruction system, method, device, and storage medium

Country Status (2)

Country Link
CN (1) CN113592989B (zh)
WO (1) WO2021208442A1 (zh)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114708150A (zh) * 2022-05-02 2022-07-05 先临三维科技股份有限公司 Scan data processing method and apparatus, electronic device, and medium
CN115032615A (zh) * 2022-05-31 2022-09-09 中国第一汽车股份有限公司 Lidar calibration point determination method, apparatus, device, and storage medium
CN116299368B (zh) * 2023-05-19 2023-07-21 深圳市其域创新科技有限公司 Accuracy measurement method and apparatus for a laser scanner, scanner, and storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120093A (zh) * 2019-03-25 2019-08-13 深圳大学 RGB-D indoor three-dimensional mapping method and system with mixed multi-feature optimization

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950433A (zh) * 2010-08-31 2011-01-19 东南大学 Method for building a true three-dimensional model of a substation using laser three-dimensional scanning
CN104973092A (zh) * 2015-05-04 2015-10-14 上海图甲信息科技有限公司 Rail subgrade settlement measurement method based on mileage and image measurement
US10535148B2 (en) * 2016-12-07 2020-01-14 Hexagon Technology Center Gmbh Scanner VIS
CN107631700A (zh) * 2017-09-07 2018-01-26 西安电子科技大学 Three-dimensional data measurement method combining a three-dimensional scanner with a total station
CN110163968A (zh) * 2019-05-28 2019-08-23 山东大学 Method and system for constructing large three-dimensional scenes with an RGBD camera

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114004927A (zh) * 2021-10-25 2022-02-01 北京字节跳动网络技术有限公司 3D video model reconstruction method and apparatus, electronic device, and storage medium
CN114862788A (zh) * 2022-04-29 2022-08-05 湖南联智科技股份有限公司 Automatic identification method for planar target coordinates in three-dimensional laser scanning
CN114862788B (zh) 2022-04-29 2024-05-24 湖南联智科技股份有限公司 Automatic identification method for planar target coordinates in three-dimensional laser scanning
CN115218891A (zh) * 2022-09-01 2022-10-21 西华大学 Autonomous positioning and navigation method for a mobile robot
CN115979121A (zh) * 2022-10-26 2023-04-18 成都清正公路工程试验检测有限公司 Method for improving point measurement accuracy of an automatic measuring system
CN115984512A (zh) * 2023-03-22 2023-04-18 成都量芯集成科技有限公司 Planar-scene three-dimensional reconstruction apparatus and method
CN115984512B (zh) 2023-03-22 2023-06-13 成都量芯集成科技有限公司 Planar-scene three-dimensional reconstruction apparatus and method
CN116859410A (zh) * 2023-06-08 2023-10-10 中铁第四勘察设计院集团有限公司 Method for improving UAV lidar measurement accuracy on existing railway lines
CN116859410B (zh) 2023-06-08 2024-04-19 中铁第四勘察设计院集团有限公司 Method for improving UAV lidar measurement accuracy on existing railway lines
CN116993923A (zh) * 2023-09-22 2023-11-03 长沙能川信息科技有限公司 Converter station three-dimensional model production method and system, computer device, and storage medium
CN116993923B (zh) 2023-09-22 2023-12-26 长沙能川信息科技有限公司 Converter station three-dimensional model production method and system, computer device, and storage medium
CN117876502A (zh) * 2024-03-08 2024-04-12 荣耀终端有限公司 Depth calibration method, depth calibration device, and depth calibration system

Also Published As

Publication number Publication date
CN113592989A (zh) 2021-11-02
CN113592989B (zh) 2024-02-20

Similar Documents

Publication Publication Date Title
WO2021208442A1 (zh) Three-dimensional scene reconstruction system, method, device, and storage medium
CN112894832B (zh) Three-dimensional modeling method and apparatus, electronic device, and storage medium
CN111473739B (zh) Real-time monitoring method for surrounding-rock deformation in tunnel collapse zones based on video surveillance
CN113532311A (zh) Point cloud splicing method, apparatus, device, and storage device
CN110176032B (zh) Three-dimensional reconstruction method and apparatus
WO2019127445A1 (zh) Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product
CN107808407A (zh) Binocular-camera-based visual SLAM method for an unmanned aerial vehicle, unmanned aerial vehicle, and storage medium
CN106529538A (zh) Aircraft positioning method and apparatus
CN107833250B (zh) Semantic space map construction method and apparatus
CN107560592B (zh) Precise ranging method for a target linked with a photoelectric tracker
CN109341668B (zh) Multi-camera measurement method based on a refractive projection model and ray tracing
CN109425348A (zh) Simultaneous localization and mapping method and apparatus
CN110889873A (zh) Target positioning method and apparatus, electronic device, and storage medium
CN112489099A (zh) Point cloud registration method and apparatus, storage medium, and electronic device
CN111998862A (zh) Dense binocular SLAM method based on BNN
CN113192200A (zh) Method for constructing urban real-scene three-dimensional models based on a parallel aerial-triangulation computing algorithm
WO2023284358A1 (zh) Camera calibration method and apparatus, electronic device, and storage medium
CN117095002B (zh) Hub defect detection method and apparatus, and storage medium
CN111735447A (zh) Star-sensor-like indoor relative pose measurement system and working method thereof
Dreher et al. Global localization in meshes
CN116091701A (zh) Three-dimensional reconstruction method and apparatus, computer device, and storage medium
CN113494906B (zh) Unmanned measurement method and system for an imaging total station that identifies targets using machine learning
CN114092564B (zh) Extrinsic parameter calibration method, system, terminal, and medium for multi-camera systems without overlapping fields of view
CN114387532A (zh) Boundary identification method and apparatus, terminal, electronic device, and unmanned device
CN113223163A (zh) Point cloud map construction method and apparatus, device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20931381

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20931381

Country of ref document: EP

Kind code of ref document: A1