CN113592989A - Three-dimensional scene reconstruction system, method, equipment and storage medium - Google Patents

Three-dimensional scene reconstruction system, method, equipment and storage medium

Info

Publication number
CN113592989A
CN113592989A (application CN202010288970.1A)
Authority
CN
China
Prior art keywords: target, point cloud data, dimensional, preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010288970.1A
Other languages
Chinese (zh)
Other versions
CN113592989B (en)
Inventor
欧清扬 (Ou Qingyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Bozhilin Robot Co Ltd
Original Assignee
Guangdong Bozhilin Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Bozhilin Robot Co Ltd filed Critical Guangdong Bozhilin Robot Co Ltd
Priority to CN202010288970.1A priority Critical patent/CN113592989B/en
Priority to PCT/CN2020/131095 priority patent/WO2021208442A1/en
Publication of CN113592989A publication Critical patent/CN113592989A/en
Application granted granted Critical
Publication of CN113592989B publication Critical patent/CN113592989B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 — 3D [Three Dimensional] image rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiment of the invention discloses a three-dimensional scene reconstruction system, method, device, and storage medium. The three-dimensional scene reconstruction system comprises: targets arranged at set positions of a three-dimensional target to be detected; a point cloud data acquisition device for acquiring point cloud data of set frames of the three-dimensional target to be detected on which the targets are arranged; a spatial coordinate acquisition device for acquiring first spatial coordinates of preset points of each target; and a scene reconstruction device for receiving the point cloud data of the set frames and each first spatial coordinate, and reconstructing a three-dimensional scene of the building to be detected according to the point cloud data of the set frames and each first spatial coordinate. By providing both a point cloud data acquisition device and a spatial coordinate acquisition device for the reconstruction, the technical scheme of the embodiment improves reconstruction precision.

Description

Three-dimensional scene reconstruction system, method, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of surveying and mapping, in particular to a system, a method, equipment and a storage medium for reconstructing a three-dimensional scene.
Background
With the development of smart cities, cultural relic protection, indoor navigation, and virtual reality, the demand for refined indoor three-dimensional models is growing.
The existing indoor three-dimensional reconstruction methods fall mainly into two categories. The first uses ranging sensors such as laser or radar to acquire structural information of object surfaces to realize three-dimensional reconstruction; however, these instruments are mostly expensive, inconvenient to carry, and limited in application scenarios. The second collects indoor point cloud data with a depth camera and stitches the point clouds through feature recognition to realize three-dimensional reconstruction; however, because indoor images of buildings have few features, feature extraction is difficult, and existing point cloud stitching can only ensure that adjacent point clouds are stitched well, not the overall stitching quality of the house, so the three-dimensional reconstruction precision is low and fails to meet requirements.
Disclosure of Invention
The embodiment of the invention discloses a system, a method, equipment and a storage medium for reconstructing a three-dimensional scene, which realize high-precision three-dimensional reconstruction of the three-dimensional scene.
In a first aspect, an embodiment of the present invention provides a system for reconstructing a three-dimensional scene, where the system includes:
the target is arranged at a set position of the three-dimensional target to be detected;
the point cloud data acquisition equipment is used for acquiring point cloud data of a set frame of the three-dimensional target to be detected, provided with the target;
the spatial coordinate acquisition equipment is used for acquiring first spatial coordinates of preset points of each target;
and the scene reconstruction equipment is used for receiving the point cloud data and each first space coordinate of the set frame and reconstructing a three-dimensional scene of the three-dimensional target to be detected according to the point cloud data and each first space coordinate of the set frame.
In a second aspect, an embodiment of the present invention further provides a method for reconstructing a three-dimensional scene, where the method includes:
acquiring point cloud data of a set frame of the three-dimensional target to be detected, which is provided with a target, based on point cloud data acquisition equipment;
acquiring a first space coordinate of a preset point of each target based on space coordinate equipment;
and reconstructing a three-dimensional scene of the three-dimensional target to be detected according to the point cloud data of the set frame and each first space coordinate.
In a third aspect, an embodiment of the present invention further provides a device for reconstructing a three-dimensional scene, where the device includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for reconstructing a three-dimensional scene provided by any embodiment of the present invention.
In a fourth aspect, the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform the method for reconstructing a three-dimensional scene provided in any embodiment of the present invention.
According to the technical scheme of the embodiment of the invention, setting targets on the three-dimensional target to be detected increases the feature points of the three-dimensional scene, facilitating feature extraction and subsequent coordinate determination. Indoor data are acquired by the point cloud data acquisition device and the spatial coordinate device; acquiring data with two different devices improves the accuracy and robustness of the data. Meanwhile, the spatial coordinate device determines the spatial coordinates of the preset points of the targets with higher precision, which improves the precision of point cloud data stitching, and because the spatial coordinates are determined directly by the device, the speed and efficiency of reconstruction are improved. The spatial coordinate of each preset point of a target is obtained directly by the spatial coordinate acquisition device, the point cloud data are collected by the point cloud data acquisition device, and the point cloud data are stitched according to the spatial coordinates, thereby realizing three-dimensional scene reconstruction of the three-dimensional target and improving reconstruction precision and efficiency.
Drawings
Fig. 1A is a schematic structural diagram of a system for reconstructing a three-dimensional scene according to a first embodiment of the present invention;
fig. 1B is a schematic structural diagram of a target according to a first embodiment of the invention;
fig. 2 is a schematic structural diagram of a system for reconstructing a three-dimensional scene according to a second embodiment of the present invention;
fig. 3 is a flowchart of a method for reconstructing a three-dimensional scene according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a three-dimensional scene reconstruction apparatus according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of a reconstruction apparatus for a three-dimensional scene in a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1A is a schematic structural diagram of a system for reconstructing a three-dimensional scene according to an embodiment of the present invention, and as shown in fig. 1A, the system includes: target 110, point cloud data acquisition device 120, spatial coordinate acquisition device 130, and scene reconstruction device 140.
The three-dimensional scene may be an indoor scene of a building, a robot scene, an automobile scene, or other scenes requiring three-dimensional reconstruction, and for convenience of description, the indoor scene of the building is taken as an example for description in the embodiment of the present invention. A target 110 disposed at a set position of a three-dimensional object to be measured; a point cloud data acquiring device 120 configured to acquire point cloud data of a set frame of the three-dimensional target to be measured on which the target is set; a spatial coordinate acquiring device 130 configured to acquire a first spatial coordinate of a preset point of each of the targets; and the scene reconstruction device 140 is configured to receive the point cloud data of the setting frame and each of the first space coordinates, and perform three-dimensional scene reconstruction on the three-dimensional target to be detected according to the point cloud data of the setting frame and each of the first space coordinates.
Optionally, the point cloud data acquiring device 120 includes at least one of a three-dimensional camera and a laser radar.
Specifically, the number of targets 110 may be 3, 6, 9, 12, 15, 18, or another value, determined according to the reconstruction objective. Generally, 3 targets must be set per plane for feature recognition and high-accuracy determination of one reconstruction plane. Thus, for a complete reconstruction of a building interior, at least 18 (3 × 6) targets are required, i.e., three targets on each of the six planes of the room. The three-dimensional target to be measured can be a building to be measured, an automobile to be measured, a robot to be measured, or another three-dimensional object. The building to be tested can be any existing building, such as a residential or ancient building, or a building in the construction stage. The preset point can be the center of the target or another preset position.
Further, the target 110 may be black-and-white only, or may be a color target. A black-and-white target reduces the data volume and facilitates feature extraction. The target 110 may be square, circular, or another shape, regular or irregular. The size of the target 110 may be determined according to the size of the building to be measured and the performance of the point cloud data acquisition device 120 and the spatial coordinate acquisition device 130. The target 110 may be made of PVC sticker or other materials. It should be appreciated that the size and material of the target 110 should allow the point cloud data acquisition device 120 and the spatial coordinate acquisition device 130 to effectively and accurately acquire the features of the target 110 within their resolution.
Optionally, the target 110 includes a target identification code centrally located on the target for identifying the target.
Optionally, the target identification code includes at least one of a two-dimensional code and a barcode. Other identification codes, such as target serial numbers, can of course also be used for target identification.
Specifically, the identification code of the target 110 may effectively identify the identity of the target. In general, a plurality of targets 110 are required to reconstruct an indoor model of a building, and a two-dimensional code, a barcode, or a serial number code corresponding to each target is designed to distinguish the targets.
Optionally, the target comprises a circular ring, a two-dimensional code and a cross center from outside to inside in sequence.
For example, fig. 1B is a schematic structural diagram of a target according to an embodiment of the present invention. As shown in fig. 1B, the target 110 comprises, from outside to inside, a circular ring 111, a two-dimensional code 112, and a central cross mark 113. The central cross mark 113 helps the spatial coordinate acquisition device 130 aim at and align with the center of the target. The circular ring 111 is arranged on the outermost side of the target and can be internally tangent to the square outline of the target; the center of the circular ring 111 is the center of the target (the bulls-eye) and is also the center of the central cross mark. The two-dimensional code 112 is used to identify the target.
Further, before data acquisition, such as point cloud data and the first spatial coordinates, the number and placement positions of the targets 110 need to be determined, and specifically, a corresponding relationship between the positions of the targets and the two-dimensional codes can be established, so as to determine the positions of the targets according to the two-dimensional codes and the corresponding relationship.
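The correspondence between target positions and two-dimensional codes described above can be sketched as a simple lookup table; the target IDs, plane names, and coordinates below are purely hypothetical.

```python
# Hypothetical registry mapping each target's QR-code ID to its planned
# placement (plane name and nominal position), built before data collection.
TARGET_LAYOUT = {
    "T01": {"plane": "north_wall", "position": (0.5, 1.2, 0.0)},
    "T02": {"plane": "north_wall", "position": (2.5, 1.2, 0.0)},
    "T03": {"plane": "north_wall", "position": (1.5, 2.4, 0.0)},
}

def lookup_target(qr_id):
    """Return the planned placement for a decoded QR-code ID, or None."""
    return TARGET_LAYOUT.get(qr_id)

print(lookup_target("T02")["plane"])  # north_wall
```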
Specifically, the point cloud data acquisition device 120 may be a three-dimensional camera, a laser radar, or other device. The three-dimensional camera may be a Structured Light (Structured Light) depth camera, a depth camera based on Time of flight (TOF), or a depth camera based on binocular stereo vision (also referred to as a binocular camera), and of course, may also be a depth camera based on other algorithms.
Specifically, the set frames may be determined according to the reconstruction target and the field of view of the three-dimensional camera. For example, if the reconstruction target is a complete model of the building to be reconstructed and the field of view (single-frame field angle) of the three-dimensional camera is 120°, the set number of frames may be 5: 3 frames captured by azimuth rotation, plus 2 frames covering the ceiling and the ground by pitching. It may, of course, also be 6 frames, with one frame per plane. The minimum value of the set frames must cover the range of the reconstruction target, and the data of two adjacent frames need not have an overlapping area. A set region in the reconstruction target may be left empty, provided that region does not need to be reconstructed.
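This frame planning can be sketched minimally, assuming full 360° azimuth coverage of the walls plus one pitched frame each for the ceiling and the ground:

```python
import math

def plan_frames(horizontal_fov_deg, include_ceiling=True, include_floor=True):
    """Minimum frames to cover 360 degrees of walls by azimuth rotation,
    plus optional pitched frames for the ceiling and the ground."""
    azimuth_frames = math.ceil(360 / horizontal_fov_deg)
    return azimuth_frames + int(include_ceiling) + int(include_floor)

# With a 120-degree single-frame field angle: 3 azimuth frames + 2 pitched
# frames = 5, matching the 5-frame scheme described above.
print(plan_frames(120))  # 5
```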
Further, the related actions of the three-dimensional camera, such as rotation, pitch, etc., may be realized by the motion device.
Further, the system for reconstructing a three-dimensional scene further includes:
and the reconstruction planning module is used for determining a scanning scheme of the point cloud data acquisition equipment according to the reconstruction target and the field angle of a single frame of the point cloud data acquisition equipment.
The reconstruction target includes a region of the building to be reconstructed, which may be a reconstruction range. The scanning scheme comprises the number of frames which need to be shot by the point cloud data acquisition equipment, namely, the set frames, and can also comprise the angle of each frame shot by the point cloud data acquisition equipment.
Specifically, the spatial coordinate acquisition device 130 may be any device capable of acquiring spatial coordinates; optionally, it includes at least one of a total station, a laser tracker, a laser radar, and a three-coordinate measuring machine. A total station, also called an Electronic Total Station (ETS), can automatically display the measured three-dimensional coordinates, which is convenient and fast. The spatial coordinate acquisition device 130 has higher accuracy than the point cloud data acquisition device in order to improve the accuracy of the point cloud data.
Specifically, the scene reconstruction device 140 is configured to perform scene reconstruction according to the acquired data, the first spatial coordinates, and the point cloud data of each frame. Aligning and splicing the point cloud data of each frame according to the first space coordinate so as to generate a reconstructed three-dimensional scene of the three-dimensional target to be detected.
According to the technical scheme of the embodiment of the invention, setting targets on the three-dimensional target to be detected increases the feature points of the three-dimensional scene, facilitating feature extraction and subsequent coordinate determination. Indoor data are acquired by the point cloud data acquisition device and the spatial coordinate device; acquiring data with two different devices improves the accuracy and robustness of the data. Meanwhile, the spatial coordinate device determines the spatial coordinates of the preset points of the targets with higher precision, which improves the precision of point cloud data stitching, and because the spatial coordinates are determined directly by the device, the speed and efficiency of reconstruction are improved. The spatial coordinate of each preset point of a target is obtained directly by the spatial coordinate acquisition device, the point cloud data are collected by the point cloud data acquisition device, and the point cloud data are stitched according to the spatial coordinates, thereby realizing three-dimensional scene reconstruction of the three-dimensional target and improving reconstruction precision and efficiency.
Example two
Fig. 2 is a schematic structural diagram of a reconstruction system of a three-dimensional scene according to a second embodiment of the present invention, which is a refinement and supplement to the first embodiment, and optionally, the reconstruction system of a three-dimensional scene according to the present embodiment further includes: and the target planning module is used for determining the set position of the target according to the field angle of the point cloud data acquisition equipment and the shooting target.
As shown in fig. 2, the system for reconstructing a three-dimensional scene includes: target planning module 210, target 220, point cloud data acquisition device 230, spatial coordinate acquisition device 240, data reception module 250, coordinate extraction module 260, transformation matrix determination module 270, and scene reconstruction module 280.
The target planning module 210 is configured to determine a set position of the target according to the angle of view of the point cloud data acquisition device and the shooting target; a target 220 disposed at a set position of the three-dimensional target to be measured; a point cloud data acquisition device 230 configured to acquire point cloud data of a set frame of the three-dimensional target to be measured on which the target is set; a spatial coordinate acquiring device 240, configured to acquire a first spatial coordinate of a preset point of each target; a data receiving module 250, configured to receive the point cloud data of a set frame and each of the first spatial coordinates; the coordinate extraction module 260 is configured to extract a second spatial coordinate of the preset point of the target according to the point cloud data; a transformation matrix determining module 270, configured to determine a transformation matrix of each frame of point cloud data of the point cloud data acquiring device according to the first spatial coordinate and the second spatial coordinate of the preset point of each target based on a preset algorithm; and the scene reconstruction module 280 is configured to transform each frame of point cloud data according to each transformation matrix, and perform three-dimensional scene reconstruction on the three-dimensional target to be detected according to each transformed frame of point cloud data.
Specifically, the three-dimensional target to be measured is taken as a building to be measured, and the three-dimensional scene is an indoor scene of the building to be measured. The shooting target, i.e. the reconstruction target, may include a region of the three-dimensional target to be reconstructed, which may be a reconstruction range. The field angle of the point cloud data acquisition device refers to the field range of the point cloud data acquisition device shooting single-frame point cloud data. The spatial coordinates of each point, i.e., the second spatial coordinates, may be determined according to the point cloud data acquired by the point cloud data acquiring device 230 and the internal parameters of the point cloud data acquiring device 230. The spatial coordinate acquisition device 240 has a higher measurement accuracy than the point cloud data acquisition device 230.
Specifically, the transformation matrix determining module 270 is mainly configured to perform coordinate transformation on the point cloud data based on the first spatial coordinate, and since the spatial coordinate acquiring device 240 corresponding to the first spatial coordinate has higher precision than the point cloud data acquiring device 230, the precision of the point cloud data is improved.
Optionally, the transformation matrix includes a rotation matrix and a translation matrix, and the preset algorithm includes at least one of a quaternion array method, a singular value decomposition method, and an iterative closest point method.
Optionally, the scene reconstruction module 280 is specifically configured to:
transforming the point cloud data of each frame according to each transformation matrix; determining the position relation of each frame of point cloud data according to the first space coordinates of the preset points of each target; and performing three-dimensional scene reconstruction on the three-dimensional target to be detected according to the position relations and the transformed point cloud data of the frames.
Optionally, the system for reconstructing a three-dimensional scene further includes:
a depth calibration module to: acquiring a preset depth correction relational expression; determining a first target center distance according to a first space coordinate of preset points of two adjacent targets; determining a second target center distance according to a second space coordinate of preset points of two adjacent targets; determining depth correction parameters of each frame of point cloud data of the point cloud data acquisition equipment according to each first target center distance, each second target center distance and the preset depth correction relational expression; and performing depth correction on the point cloud data according to the depth correction parameters, and extracting a second space coordinate of a preset point of the target according to the point cloud data after the depth correction.
Optionally, the expression of the preset depth correction relational expression is as follows:
D_Q = A·D_P² + B·D_P + C
wherein D_P is the depth value of the point cloud data, D_Q is the depth value of the corrected point cloud data, and A, B, C are the depth correction parameters.
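The quadratic depth correction relational expression above is straightforward to express; a minimal sketch:

```python
def correct_depth(dp, a, b, c):
    """Apply the quadratic depth correction D_Q = A*D_P**2 + B*D_P + C."""
    return a * dp ** 2 + b * dp + c

# The identity correction (A=0, B=1, C=0) leaves the depth unchanged.
print(correct_depth(2.5, 0.0, 1.0, 0.0))  # 2.5
```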
Specifically, the value ranges, initial values, and step sizes of the depth correction parameters A, B, and C may be preset. For example, the value range of A may be (5.5×10⁻⁷, 7.5×10⁻⁷), that of B (−0.9, 0.9), and that of C (−10, 10), or other ranges; they are determined according to the error between the depth captured by the point cloud data acquisition device and the real depth, which depends mainly on the performance parameters of the point cloud data acquisition device, such as a three-dimensional camera. The initial value may be the lowest value of each parameter's range, and the step size may be user-defined or set by default; for example, the step size of parameter A may be 0.1×10⁻⁷, that of parameter B 0.1, and that of parameter C 0.05, or other values.
Further, for a single frame of point cloud data, a depth error function and a preset depth error threshold can be constructed in advance. Starting from the initial value and step size of each parameter, the values of parameters A, B, and C are iterated once per step and substituted into the preset depth correction relational expression to determine the corrected depth of each target, giving the corrected coordinates of the preset points (target centers). The corrected second target center distance of each pair of adjacent targets is determined from the corrected coordinates, and the first target center distances and the corrected second target center distances are substituted into the depth error function. When the depth error function stays at or below the preset depth error threshold in two consecutive iterations, the parameters A, B, and C at that point are taken as the required depth correction parameters. The depth of the frame of point cloud data captured by the point cloud data acquisition device 230 is then corrected based on the depth correction parameters and the preset depth correction relational expression, yielding corrected point cloud data. Proceeding frame by frame in this way gives corrected point cloud data for every frame.
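The parameter search described above can be sketched as a grid search; the toy error function and the simple stop-at-threshold rule here are simplified stand-ins for the patent's iterative scheme, not the actual implementation.

```python
import itertools

def frange(start, stop, step):
    """Inclusive-start float range: start, start+step, ... while < stop."""
    v = start
    while v < stop:
        yield v
        v += step

def search_depth_params(error_fn, a_grid, b_grid, c_grid, threshold):
    """Scan candidate (A, B, C) combinations in order, returning the first
    one whose error falls at or below the threshold; otherwise return the
    combination with the smallest error seen."""
    best_err, best = None, None
    for a, b, c in itertools.product(a_grid, b_grid, c_grid):
        e = error_fn(a, b, c)
        if best_err is None or e < best_err:
            best_err, best = e, (a, b, c)
        if e <= threshold:
            return (a, b, c)
    return best

# Toy example: find B, C so that A*d^2 + B*d + C reproduces a known depth.
raw_depth, true_depth = 2.0, 2.0
err = lambda a, b, c: abs((a * raw_depth ** 2 + b * raw_depth + c) - true_depth)
params = search_depth_params(err, [0.0],
                             list(frange(0.0, 1.0, 0.25)),
                             list(frange(0.0, 2.0, 0.5)),
                             threshold=1e-9)
print(err(*params) <= 1e-9)  # True
```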
Correspondingly, the corrected point cloud data replaces the point cloud data acquired by the point cloud data acquisition device 230 to perform scene reconstruction, that is, the data receiving module is configured to receive the corrected point cloud data and the first spatial coordinate of the set frame, and the coordinate extracting module is configured to extract the second spatial coordinate of the preset point of the target according to the corrected point cloud data.
Specifically, the expression of the depth error function may be:
f(D_Q) = Σ_{k=1..n} |L_M,k − L_Q,k|
wherein D_Q is the depth value of the corrected point cloud data, n is the number of target center distances (first or second target center distances), L_M,k is the k-th first target center distance, and L_Q,k is the corresponding second target center distance computed from the corrected point cloud data. For two adjacent targets i(X_i, Y_i, D_i) and j(X_j, Y_j, D_j), the corresponding target center distance L_{i−j} is:
L_{i−j} = √((X_i − X_j)² + (Y_i − Y_j)² + (D_i − D_j)²)
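A minimal sketch of the target center distance and a depth error of sum-of-absolute-differences form (the exact error form in the patent is rendered as an image, so this form is an assumption):

```python
import math

def center_distance(t1, t2):
    """Distance between two adjacent target centers i(X_i, Y_i, D_i)
    and j(X_j, Y_j, D_j)."""
    return math.sqrt((t1[0] - t2[0]) ** 2 +
                     (t1[1] - t2[1]) ** 2 +
                     (t1[2] - t2[2]) ** 2)

def depth_error(first_dists, second_dists):
    """Assumed error form: sum of absolute differences between the n
    reference center distances L_M and the corrected distances L_Q."""
    return sum(abs(lm - lq) for lm, lq in zip(first_dists, second_dists))

print(center_distance((0, 0, 0), (3, 4, 0)))  # 5.0
```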
specifically, the function of the transformation matrix determination module is specifically described by taking a singular value decomposition method as an example. Setting the conversion relation between the first space coordinate and the second space coordinate as follows:
mi=Rpi+T+Ni,i=1,2,3,…,l(l≥3)
wherein m isiSetting a spatial coordinate, i.e., a first spatial coordinate, of a preset point i of the target for the spatial coordinate acquisition device 240; p is a radical ofiSetting a spatial coordinate, i.e., a second spatial coordinate, of a preset point i of the target, which is acquired by the point cloud data acquisition device 230; r is a 3 × 3 rotation matrix, T is a 3-dimensional translation vector, NiTo set the transformation error vector for a preset point i of the target, l is the total number of targets.
A conversion error function is established in advance:
Σ² = Σ_{i=1..l} ‖N_i‖² = Σ_{i=1..l} ‖m_i − (R·p_i + T)‖²
Centering the coordinates on their centroids, let q_i = p_i − p̄ and q′_i = m_i − m̄, where p̄ = (1/l)·Σ p_i is the centroid under the point cloud data acquisition device 230 and m̄ = (1/l)·Σ m_i is the centroid under the spatial coordinate acquisition device; when the preset point is the target's bulls-eye, these are the centroids of the target centers. The matrix to be decomposed is then:
H_{3×3} = Σ_{i=1..l} q_i·q′_iᵀ
Performing singular value decomposition on the matrix H_{3×3} gives H_{3×3} = U·D·Vᵀ, where D = diag(d_i), d₁ ≥ d₂ ≥ d₃ ≥ 0. Let
X = V·Λ·Uᵀ, with Λ = diag(1, 1, det(V·Uᵀ))
where Λ reduces to I₃, the 3 × 3 identity matrix, when det(V·Uᵀ) = +1.
When rank(H_{3×3}) ≥ 2, the expressions of the obtained rotation matrix and translation vector are:
R = X = V·Λ·Uᵀ,  T = m̄ − R·p̄
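The centroid-and-SVD procedure above can be sketched with NumPy; the demonstration rotation (30° about z) and translation are arbitrary choices, not values from the patent.

```python
import numpy as np

def rigid_transform(p, m):
    """Estimate R, T such that m_i = R @ p_i + T, via centroid
    subtraction and SVD of the cross-covariance matrix H."""
    p_bar = p.mean(axis=0)                 # centroid of second coordinates
    m_bar = m.mean(axis=0)                 # centroid of first coordinates
    q = p - p_bar
    q_prime = m - m_bar
    h = q.T @ q_prime                      # 3x3 matrix H = sum q_i q'_i^T
    u, _, vt = np.linalg.svd(h)            # H = U D V^T
    lam = np.diag([1.0, 1.0, np.sign(np.linalg.det(vt.T @ u.T))])
    r = vt.T @ lam @ u.T                   # R = V * Lambda * U^T
    t = m_bar - r @ p_bar                  # T = m_bar - R p_bar
    return r, t

# Recover a known rotation and translation from noiseless correspondences.
theta = np.pi / 6
r_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
p = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
m = p @ r_true.T + t_true
r_est, t_est = rigid_transform(p, m)
print(np.allclose(r_est, r_true) and np.allclose(t_est, t_true))  # True
```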
further, a transition error threshold may be set, and when the transition error function is less than the transition error threshold, then the transformation matrices (rotation matrix and translation matrix) are determined to meet the requirements.
And then, the scene reconstruction module can perform coordinate transformation on the point cloud data of each frame according to the transformation matrix and perform point cloud splicing according to the transformed point cloud data of each frame so as to reconstruct a three-dimensional scene of the three-dimensional target to be detected.
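Applying each frame's transformation and stitching the results can be sketched as follows; the toy single-point frames and transforms are hypothetical.

```python
import numpy as np

def stitch_frames(frames, transforms):
    """Transform each frame's point cloud into the common coordinate
    system with its (R, T) pair and concatenate the results."""
    aligned = [pts @ r.T + t for pts, (r, t) in zip(frames, transforms)]
    return np.vstack(aligned)

# Two toy single-point "frames": an identity transform and a pure translation.
f1 = np.array([[0.0, 0.0, 0.0]])
f2 = np.array([[1.0, 0.0, 0.0]])
eye = (np.eye(3), np.zeros(3))
shift = (np.eye(3), np.array([0.0, 0.0, 2.0]))
cloud = stitch_frames([f1, f2], [eye, shift])
print(cloud.shape)  # (2, 3)
```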
It should be understood that the reconstruction system of a three-dimensional scene provided by the embodiment of the present invention may also be applied to scene reconstruction of a three-dimensional object model, such as an automobile, a robot, or other objects.
According to the technical scheme of this embodiment, indoor data are acquired through the point cloud data acquisition device and the spatial coordinate acquisition device; acquiring data with two different devices improves the accuracy and robustness of the data. Meanwhile, the spatial coordinate device determines the spatial coordinates of the preset points of the targets with higher precision, which improves the precision of point cloud data stitching, and because the spatial coordinates are determined directly by the device, the speed and efficiency of reconstruction are improved. The point cloud data are depth-corrected through the first spatial coordinates, coordinate-transformed according to a preset algorithm and the first spatial coordinates, and the transformed point cloud data are stitched to obtain the reconstructed model, further improving the reconstruction precision and the quality of the reconstructed model.
Example Three
Fig. 3 is a flowchart of a method for reconstructing a three-dimensional scene according to a third embodiment of the present invention. The present embodiment is applicable to situations requiring three-dimensional scene reconstruction, and the method may be executed by a system or a device for reconstructing a three-dimensional scene. As shown in fig. 3, the method specifically includes the following steps:
Step 310: acquiring, based on the point cloud data acquisition device, point cloud data of a set frame of the three-dimensional target to be detected on which targets are arranged.

Step 320: acquiring first spatial coordinates of the preset points of each target based on the spatial coordinate device.

Step 330: reconstructing a three-dimensional scene of the three-dimensional target to be detected according to the point cloud data of the set frame and each first spatial coordinate.
According to the technical scheme of this embodiment, targets are arranged on the three-dimensional target to be detected, which increases the feature points of the three-dimensional scene and facilitates feature extraction and subsequent coordinate determination. The point cloud data acquisition device and the spatial coordinate device are used together to acquire the indoor data; acquiring data with two different kinds of equipment improves the accuracy and robustness of the data. The spatial coordinate device can determine the spatial coordinates of the preset points of the targets with higher precision, which improves the precision of point cloud splicing; and since the spatial coordinates are determined directly by the device, the reconstruction speed and efficiency are improved. In short, the spatial coordinates of the preset points of each target are obtained directly by the spatial coordinate acquisition device, the point cloud data are collected by the point cloud data acquisition device, and the point cloud data are spliced according to the spatial coordinates, thereby realizing the reconstruction of the three-dimensional scene of the three-dimensional target and improving the reconstruction precision and efficiency.
Optionally, the target includes a target identification code, and the target identification code is disposed in the center of the target and used for identifying the target.
Optionally, the method for reconstructing a three-dimensional scene further includes, before acquiring point cloud data of a set frame of the three-dimensional target to be detected, which is provided with a target, based on the point cloud data acquisition device:
Determining the set position of each target according to the field of view of the point cloud data acquisition device and the shooting target.
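The patent does not give a concrete placement rule, but one plausible reading is that targets must be spaced so that enough of them fall within a single frame's field of view. The sketch below illustrates this under hypothetical assumptions (targets laid out on a planar wall, a simple pinhole field-of-view model); the function name and parameters are our own:

```python
import math

def max_target_spacing(fov_deg, distance, targets_per_frame=3):
    """Largest spacing between adjacent targets such that at least
    `targets_per_frame` targets fall inside one frame's field of view.
    Assumes targets lie on a wall at `distance` from the camera."""
    # Visible wall width for a pinhole camera with horizontal FOV fov_deg
    width = 2 * distance * math.tan(math.radians(fov_deg) / 2)
    return width / (targets_per_frame - 1)

# e.g. a 60-degree field of view at 4 m sees about 4.62 m of wall,
# so three targets must be spaced no more than about 2.31 m apart
spacing = max_target_spacing(60, 4.0)
```

Three non-collinear targets per frame is the minimum needed for the rigid transform described in the earlier embodiments, which is why `targets_per_frame` defaults to 3 here.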
Optionally, reconstructing a three-dimensional scene of the three-dimensional object to be detected according to the point cloud data of the set frame and the first spatial coordinates, including:
receiving the point cloud data of a set frame and each first space coordinate; extracting a second space coordinate of a preset point of the target according to the point cloud data; determining a transformation matrix of each frame of point cloud data of the point cloud data acquisition equipment according to the first space coordinate and the second space coordinate of the preset point of each target based on a preset algorithm; and transforming the point cloud data of each frame according to each transformation matrix, and reconstructing a three-dimensional scene of the three-dimensional target to be detected according to the transformed point cloud data of each frame.
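Once a per-frame transformation matrix is available, the transform-and-splice step above amounts to applying each frame's rotation and translation and concatenating the results. A minimal sketch (the function name and toy frames are our own, not from the patent):

```python
import numpy as np

def stitch_frames(frames, transforms):
    """Transform each frame's point cloud by its (R, T) pair and
    concatenate the results into a single stitched cloud."""
    parts = [(R @ pts.T).T + T for pts, (R, T) in zip(frames, transforms)]
    return np.vstack(parts)

# Two toy frames: the second is the first shifted by 1 m along x;
# its transform moves it back into the common coordinate system
frame_a = np.array([[0., 0., 0.], [0.1, 0., 0.]])
frame_b = frame_a + np.array([1.0, 0.0, 0.0])
identity = (np.eye(3), np.zeros(3))
undo_shift = (np.eye(3), np.array([-1.0, 0.0, 0.0]))
cloud = stitch_frames([frame_a, frame_b], [identity, undo_shift])
# cloud now contains both frames expressed in one coordinate system
```

In practice the (R, T) pairs would come from the preset algorithm applied to the first and second spatial coordinates of the target preset points, as described above.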
Optionally, the transformation matrix includes a rotation matrix and a translation matrix, and the preset algorithm includes at least one of a quaternion array method, a singular value decomposition method, and an iterative closest point method.
Optionally, reconstructing a three-dimensional scene of the three-dimensional target to be detected according to the transformed point cloud data of each frame, including:
determining the position relation of each frame of point cloud data according to the first space coordinates of the preset points of each target; and performing three-dimensional scene reconstruction on the three-dimensional target to be detected according to the position relations and the transformed point cloud data of the frames.
Optionally, the method for reconstructing a three-dimensional scene after obtaining the first spatial coordinates of the preset points of each target and before reconstructing a three-dimensional scene of the three-dimensional target to be detected according to the point cloud data of the set frame and each of the first spatial coordinates further includes:
Acquiring a preset depth correction relational expression; determining a first target center distance according to the first spatial coordinates of the preset points of two adjacent targets; determining a second target center distance according to the second spatial coordinates of the preset points of two adjacent targets; determining depth correction parameters of each frame of point cloud data of the point cloud data acquisition device according to each first target center distance, each second target center distance and the preset depth correction relational expression; and performing depth correction on the point cloud data according to the depth correction parameters.

Correspondingly, reconstructing a three-dimensional scene of the building to be detected according to the point cloud data of the set frame and each first spatial coordinate includes: reconstructing a three-dimensional scene of the building to be detected according to the depth-corrected point cloud data of the set frame and each first spatial coordinate.

Correspondingly, extracting a second spatial coordinate of the preset point of the target according to the point cloud data includes: extracting the second spatial coordinate of the preset point of the target according to the depth-corrected point cloud data.
Optionally, the expression of the preset depth correction relational expression is as follows:
D_Q = A*D_P^2 + B*D_P + C

where D_P is the depth value of the point cloud data, D_Q is the depth value of the corrected point cloud data, and A, B and C are the depth correction parameters.
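The parameters A, B and C can be recovered by least-squares fitting. The patent derives them from the first and second target center distances; the sketch below simplifies this to fitting paired depth values directly (reference depths from the spatial coordinate device against measured depths from the point cloud), so the data and function names are illustrative assumptions:

```python
import numpy as np

def fit_depth_correction(measured, reference):
    """Least-squares fit of D_Q = A*D_P**2 + B*D_P + C, where `measured`
    holds depths from the point cloud and `reference` holds the
    corresponding high-precision depths."""
    A, B, C = np.polyfit(measured, reference, deg=2)
    return A, B, C

def correct_depth(depths, A, B, C):
    """Apply the quadratic depth correction to raw depth values."""
    depths = np.asarray(depths, dtype=float)
    return A * depths**2 + B * depths + C

# Synthetic check with a hypothetical quadratic distortion model
measured = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
reference = 0.02 * measured**2 + 0.95 * measured + 0.1
A, B, C = fit_depth_correction(measured, reference)
corrected = correct_depth(measured, A, B, C)
# corrected now matches reference to numerical precision
```

Because the relation is quadratic in D_P, at least three distance pairs per frame are needed to determine A, B and C; more pairs make the fit robust to measurement noise.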
Optionally, the spatial coordinate acquiring device includes at least one of a total station, a laser tracker, a laser radar, and a three-coordinate measuring machine.
Example Four
Fig. 4 is a schematic diagram of an apparatus for reconstructing a three-dimensional scene according to a fourth embodiment of the present invention, as shown in fig. 4, the apparatus includes: a point cloud data acquisition module 410, a first spatial coordinate acquisition module 420, and a three-dimensional scene reconstruction module 430.
The point cloud data acquisition module 410 is configured to acquire point cloud data of a set frame of the to-be-detected three-dimensional target provided with a target based on a point cloud data acquisition device; a first spatial coordinate obtaining module 420, configured to obtain a first spatial coordinate of a preset point of each target based on a spatial coordinate device; and a three-dimensional scene reconstruction module 430, configured to perform three-dimensional scene reconstruction on the three-dimensional object to be detected according to the point cloud data of the set frame and each of the first space coordinates.
According to the technical scheme of this embodiment, targets are arranged on the three-dimensional target to be detected, which increases the feature points of the three-dimensional scene and facilitates feature extraction and subsequent coordinate determination. The point cloud data acquisition device and the spatial coordinate device are used together to acquire the indoor data; acquiring data with two different kinds of equipment improves the accuracy and robustness of the data. The spatial coordinate device can determine the spatial coordinates of the preset points of the targets with higher precision, which improves the precision of point cloud splicing; and since the spatial coordinates are determined directly by the device, the reconstruction speed and efficiency are improved. In short, the spatial coordinates of the preset points of each target are obtained directly by the spatial coordinate acquisition device, the point cloud data are collected by the point cloud data acquisition device, and the point cloud data are spliced according to the spatial coordinates, thereby realizing the three-dimensional scene reconstruction of the three-dimensional target and improving the reconstruction precision and efficiency.

Optionally, the target includes a target two-dimensional code, and the target two-dimensional code is disposed in the center of the target and used for identifying the target.
Optionally, the apparatus for reconstructing a three-dimensional scene further includes:
and the target planning module is used for determining the set position of the target according to the field angle of the point cloud data acquisition equipment and the shooting target.
Optionally, the three-dimensional scene reconstruction module 430 includes:
the data receiving module is used for receiving the point cloud data of a set frame and each first space coordinate; the coordinate extraction module is used for extracting a second space coordinate of a preset point of the target according to the point cloud data; the transformation matrix determining module is used for determining a transformation matrix of each frame of point cloud data of the point cloud data acquisition equipment according to the first space coordinate and the second space coordinate of the preset point of each target based on a preset algorithm; and the scene reconstruction module is used for transforming the point cloud data of each frame according to each transformation matrix and reconstructing the three-dimensional target to be detected according to the transformed point cloud data of each frame.
Optionally, the transformation matrix includes a rotation matrix and a translation matrix, and the preset algorithm includes at least one of a quaternion array method, a singular value decomposition method, and an iterative closest point method.
Optionally, the scene reconstruction module is specifically configured to:
transforming the point cloud data of each frame according to each transformation matrix; determining the position relation of each frame of point cloud data according to the first space coordinates of the preset points of each target; and reconstructing the three-dimensional target to be detected according to the position relations and the transformed point cloud data of the frames.
Optionally, the three-dimensional scene reconstruction module 430 further includes:
a depth calibration module to: acquiring a preset depth correction relational expression; determining a first target center distance according to a first space coordinate of preset points of two adjacent targets;
determining a second target center distance according to a second space coordinate of preset points of two adjacent targets; determining depth correction parameters of each frame of point cloud data of the point cloud data acquisition equipment according to each first target center distance, each second target center distance and the preset depth correction relational expression; and performing depth correction on the point cloud data according to the depth correction parameters.
Optionally, the expression of the preset depth correction relational expression is as follows:
D_Q = A*D_P^2 + B*D_P + C

where D_P is the depth value of the point cloud data, D_Q is the depth value of the corrected point cloud data, and A, B and C are the depth correction parameters.
Optionally, the spatial coordinate acquiring device includes at least one of a total station, a laser tracker, a laser radar, and a three-coordinate measuring machine.
The device for reconstructing a three-dimensional scene provided by the embodiment of the invention can execute the method for reconstructing a three-dimensional scene provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example Five
Fig. 5 is a schematic structural diagram of an apparatus for reconstructing a three-dimensional scene according to a fifth embodiment of the present invention, as shown in fig. 5, the apparatus includes a processor 510, a memory 520, an input device 530, and an output device 540; the number of the device processors 510 may be one or more, and one processor 510 is taken as an example in fig. 5; the processor 510, the memory 520, the input device 530 and the output device 540 of the apparatus may be connected by a bus or other means, as exemplified by the bus connection in fig. 5.
The memory 520 may be used as a computer-readable storage medium for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the method for reconstructing a three-dimensional scene in the embodiments of the present invention (for example, the point cloud data obtaining module 410, the first spatial coordinate obtaining module 420, and the three-dimensional scene reconstructing module 430 in the three-dimensional scene reconstructing apparatus). The processor 510 executes various functional applications of the device and data processing by executing software programs, instructions and modules stored in the memory 520, so as to implement the above-mentioned reconstruction method of the three-dimensional scene.
The memory 520 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 520 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 520 may further include memory located remotely from the processor 510, which may be connected to the device/terminal/server via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 530 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the apparatus. The output device 540 may include a display device such as a display screen.
Example Six
A sixth embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a method for reconstructing a three-dimensional scene, the method comprising:
acquiring point cloud data of a set frame of the three-dimensional target to be detected, which is provided with a target, by point cloud data acquisition equipment;
acquiring a first space coordinate of a preset point of each target based on space coordinate equipment;
and reconstructing a three-dimensional scene of the three-dimensional target to be detected according to the point cloud data of the set frame and each first space coordinate.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the method for reconstructing a three-dimensional scene provided by any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the technical solutions of the embodiments of the present invention can be implemented by software and necessary general hardware, and certainly can be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device) execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the apparatus or the system for reconstructing a three-dimensional scene, the units and the modules included in the embodiment are only divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (15)

1. A system for reconstructing a three-dimensional scene, comprising:
the target is arranged at a set position of the three-dimensional target to be detected;
the point cloud data acquisition equipment is used for acquiring point cloud data of a set frame of the three-dimensional target to be detected, provided with the target;
the spatial coordinate acquisition equipment is used for acquiring first spatial coordinates of preset points of each target;
and the scene reconstruction equipment is used for receiving the point cloud data and each first space coordinate of the set frame and reconstructing a three-dimensional scene of the three-dimensional target to be detected according to the point cloud data and each first space coordinate of the set frame.
2. The system of claim 1, wherein the target includes a target identification code, the target identification code being centrally located on the target for identifying the target.
3. The system of claim 2, wherein the target identification code comprises at least one of a two-dimensional code and a bar code.
4. The system of claim 1, further comprising:
and the target planning module is used for determining the set position of the target according to the field angle of the point cloud data acquisition equipment and the shooting target.
5. The system of claim 1, wherein the scene reconstruction device comprises:
the data receiving module is used for receiving the point cloud data of a set frame and each first space coordinate;
the coordinate extraction module is used for extracting a second space coordinate of the preset point of the target according to the point cloud data;
a transformation matrix determining module, configured to determine, based on a preset algorithm, a transformation matrix of each frame of point cloud data of the point cloud data acquiring device according to the first spatial coordinate and the second spatial coordinate of the preset point of each target;
and the scene reconstruction module is used for transforming the point cloud data of each frame according to each transformation matrix and reconstructing a three-dimensional scene of the three-dimensional target to be detected according to the transformed point cloud data of each frame.
6. The system of claim 5, wherein the transformation matrix comprises a rotation matrix and a translation matrix, and the predetermined algorithm comprises at least one of a quaternion array method, a singular value decomposition method, and an iterative closest point method.
7. The system of claim 5, wherein the scene reconstruction module is specifically configured to:
transforming the point cloud data of each frame according to each transformation matrix;
determining the position relation of each frame of point cloud data according to the first space coordinates of the preset points of each target;
and performing three-dimensional scene reconstruction on the three-dimensional target to be detected according to the position relations and the transformed point cloud data of the frames.
8. The system of claim 5, wherein the scene reconstruction device further comprises a depth scaling module configured to:
acquiring a preset depth correction relational expression;
determining a first target center distance according to the first space coordinates of the preset points of two adjacent targets;
determining a second target center distance according to the second space coordinates of the preset points of two adjacent targets;
determining depth correction parameters of each frame of point cloud data of the point cloud data acquisition equipment according to each first target center distance, each second target center distance and the preset depth correction relational expression;
and performing depth correction on the point cloud data according to the depth correction parameters, and extracting a second space coordinate of a preset point of the target according to the point cloud data after the depth correction.
9. The system of claim 8, wherein the expression of the preset depth correction relation is:
D_Q = A*D_P^2 + B*D_P + C
wherein D_P is the depth value of the point cloud data, D_Q is the depth value of the corrected point cloud data, and A, B and C are the depth correction parameters.
10. The system of claim 1, wherein the spatial coordinate acquisition device comprises at least one of a total station, a laser tracker, a lidar, and a three-coordinate measuring machine.
11. The system of claim 1, wherein the point cloud data acquisition device comprises at least one of a three-dimensional camera and a lidar.
12. The system of claim 1, wherein the target comprises a circle, a two-dimensional code and a cross center in sequence from outside to inside.
13. A method for reconstructing a three-dimensional scene, comprising:
acquiring point cloud data of a set frame of the three-dimensional target to be detected, which is provided with a target, based on point cloud data acquisition equipment;
acquiring a first space coordinate of a preset point of each target based on space coordinate equipment;
and reconstructing a three-dimensional scene of the three-dimensional target to be detected according to the point cloud data of the set frame and each first space coordinate.
14. An apparatus for reconstructing a three-dimensional scene, the apparatus comprising:
one or more processors;
a memory for storing one or more programs;
when executed by the one or more processors, cause the one or more processors to implement the method of reconstructing a three-dimensional scene of claim 13.
15. A storage medium containing computer-executable instructions for performing the method of reconstructing a three-dimensional scene of claim 13 when executed by a computer processor.
CN202010288970.1A 2020-04-14 2020-04-14 Three-dimensional scene reconstruction system, method, equipment and storage medium Active CN113592989B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010288970.1A CN113592989B (en) 2020-04-14 2020-04-14 Three-dimensional scene reconstruction system, method, equipment and storage medium
PCT/CN2020/131095 WO2021208442A1 (en) 2020-04-14 2020-11-24 Three-dimensional scene reconstruction system and method, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010288970.1A CN113592989B (en) 2020-04-14 2020-04-14 Three-dimensional scene reconstruction system, method, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113592989A true CN113592989A (en) 2021-11-02
CN113592989B CN113592989B (en) 2024-02-20

Family

ID=78083760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010288970.1A Active CN113592989B (en) 2020-04-14 2020-04-14 Three-dimensional scene reconstruction system, method, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113592989B (en)
WO (1) WO2021208442A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115032615A (en) * 2022-05-31 2022-09-09 中国第一汽车股份有限公司 Laser radar calibration point determining method, device, equipment and storage medium
CN116299368A (en) * 2023-05-19 2023-06-23 深圳市其域创新科技有限公司 Precision measuring method and device for laser scanner, scanner and storage medium
WO2023213253A1 (en) * 2022-05-02 2023-11-09 先临三维科技股份有限公司 Scanning data processing method and apparatus, and electronic device and medium

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114004927A (en) * 2021-10-25 2022-02-01 北京字节跳动网络技术有限公司 3D video model reconstruction method and device, electronic equipment and storage medium
CN115218891B (en) * 2022-09-01 2022-12-27 西华大学 Autonomous positioning and navigation method for mobile robot
CN115984512B (en) * 2023-03-22 2023-06-13 成都量芯集成科技有限公司 Three-dimensional reconstruction device and method for plane scene
CN116859410B (en) * 2023-06-08 2024-04-19 中铁第四勘察设计院集团有限公司 Method for improving laser radar measurement accuracy of unmanned aerial vehicle on existing railway line
CN116993923B (en) * 2023-09-22 2023-12-26 长沙能川信息科技有限公司 Three-dimensional model making method, system, computer equipment and storage medium for converter station
CN117876502A (en) * 2024-03-08 2024-04-12 荣耀终端有限公司 Depth calibration method, depth calibration equipment and depth calibration system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950433A (en) * 2010-08-31 2011-01-19 东南大学 Building method of transformer substation three-dimensional model by using laser three-dimensional scanning technique
CN104973092A (en) * 2015-05-04 2015-10-14 上海图甲信息科技有限公司 Rail roadbed settlement measurement method based on mileage and image measurement
CN107631700A (en) * 2017-09-07 2018-01-26 西安电子科技大学 The three-dimensional vision information method that spatial digitizer is combined with total powerstation
CN110120093A (en) * 2019-03-25 2019-08-13 深圳大学 Three-dimensional plotting method and system in a kind of room RGB-D of diverse characteristics hybrid optimization
CN110163968A (en) * 2019-05-28 2019-08-23 山东大学 RGBD camera large-scale three dimensional scenario building method and system
US10535148B2 (en) * 2016-12-07 2020-01-14 Hexagon Technology Center Gmbh Scanner VIS

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950433A (en) * 2010-08-31 2011-01-19 东南大学 Building method of transformer substation three-dimensional model by using laser three-dimensional scanning technique
CN104973092A (en) * 2015-05-04 2015-10-14 上海图甲信息科技有限公司 Rail roadbed settlement measurement method based on mileage and image measurement
US10535148B2 (en) * 2016-12-07 2020-01-14 Hexagon Technology Center Gmbh Scanner VIS
CN107631700A (en) * 2017-09-07 2018-01-26 西安电子科技大学 The three-dimensional vision information method that spatial digitizer is combined with total powerstation
CN110120093A (en) * 2019-03-25 2019-08-13 深圳大学 Three-dimensional plotting method and system in a kind of room RGB-D of diverse characteristics hybrid optimization
CN110163968A (en) * 2019-05-28 2019-08-23 山东大学 RGBD camera large-scale three dimensional scenario building method and system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023213253A1 (en) * 2022-05-02 2023-11-09 先临三维科技股份有限公司 Scanning data processing method and apparatus, and electronic device and medium
CN115032615A (en) * 2022-05-31 2022-09-09 中国第一汽车股份有限公司 Laser radar calibration point determining method, device, equipment and storage medium
CN116299368A (en) * 2023-05-19 2023-06-23 深圳市其域创新科技有限公司 Precision measuring method and device for laser scanner, scanner and storage medium
CN116299368B (en) * 2023-05-19 2023-07-21 深圳市其域创新科技有限公司 Precision measuring method and device for laser scanner, scanner and storage medium

Also Published As

Publication number Publication date
WO2021208442A1 (en) 2021-10-21
CN113592989B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
CN113592989B (en) Three-dimensional scene reconstruction system, method, equipment and storage medium
CN112894832B (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
WO2019127445A1 (en) Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product
CN113532311A (en) Point cloud splicing method, device, equipment and storage equipment
CN111179358A (en) Calibration method, device, equipment and storage medium
CN109425348B (en) Method and device for simultaneously positioning and establishing image
CN106529538A (en) Method and device for positioning aircraft
CN110738703B (en) Positioning method and device, terminal and storage medium
CN112489099B (en) Point cloud registration method and device, storage medium and electronic equipment
CN111144349B (en) Indoor visual relocation method and system
CN110634138A (en) Bridge deformation monitoring method, device and equipment based on visual perception
Cosido et al. Hybridization of convergent photogrammetry, computer vision, and artificial intelligence for digital documentation of cultural heritage-a case study: the magdalena palace
CN112348909A (en) Target positioning method, device, equipment and storage medium
CN114529615B (en) Radar calibration method, device and storage medium
CN112148742A (en) Map updating method and device, terminal and storage medium
CN114140539A (en) Method and device for acquiring position of indoor object
CN115854895A (en) Non-contact stumpage breast diameter measurement method based on target stumpage form
CN113077523B (en) Calibration method, calibration device, computer equipment and storage medium
CN113313765A (en) Positioning method, positioning device, electronic equipment and storage medium
CN117197333A (en) Space target reconstruction and pose estimation method and system based on multi-view vision
CN111735447A (en) Satellite-sensitive-simulation type indoor relative pose measurement system and working method thereof
CN114913246B (en) Camera calibration method and device, electronic equipment and storage medium
CN116091701A (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, computer equipment and storage medium
CN109829939A (en) A method of it reducing multi-view images and matches corresponding image points search range
CN114943809A (en) Map model generation method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40057497

Country of ref document: HK

GR01 Patent grant