CN113570721A - Method and device for reconstructing three-dimensional space model and storage medium


Info

Publication number
CN113570721A
Authority
CN
China
Prior art keywords
point cloud
target scene
dimensional
image acquisition
model
Prior art date
Legal status
Granted
Application number
CN202111132270.4A
Other languages
Chinese (zh)
Other versions
CN113570721B (en)
Inventor
程显昱
Current Assignee
Beike Technology Co Ltd
Original Assignee
Beike Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beike Technology Co Ltd
Priority to CN202111132270.4A
Publication of CN113570721A
Application granted
Publication of CN113570721B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present disclosure disclose a method and an apparatus for reconstructing a three-dimensional space model, and a storage medium. The reconstruction method includes: collecting a point cloud of a target scene with a laser radar in a simultaneous localization and mapping (SLAM) manner, and generating a three-dimensional point cloud model of the target scene based on the collected point cloud; determining at least one image acquisition point location in the target scene based on the three-dimensional point cloud model; performing image acquisition at each of the at least one image acquisition point location to obtain a panoramic image of the target scene at the at least one image acquisition point location; and fitting the panoramic image of the at least one image acquisition point location to the three-dimensional point cloud model to obtain a three-dimensional space model of the target scene. The embodiments of the present disclosure can improve data acquisition efficiency, ensure data consistency, and reduce the number of holes in the generated three-dimensional space model.

Description

Method and device for reconstructing three-dimensional space model and storage medium
Technical Field
The present disclosure relates to three-dimensional panoramic technologies, and in particular, to a method and an apparatus for reconstructing a three-dimensional spatial model, and a storage medium.
Background
At present, with the rise of fields such as autonomous driving, VR/AR, and intelligent transportation, three-dimensional reconstruction of physical space has become widely used. In the related art, three-dimensional reconstruction of an indoor or outdoor space is implemented by fixed-point acquisition, but this approach suffers at least from long acquisition time, numerous holes in the generated model, and poor data consistency.
Disclosure of Invention
The embodiments of the present disclosure provide a method and an apparatus for reconstructing a three-dimensional space model, and a storage medium, so as to improve data acquisition efficiency, ensure data consistency, and reduce the number of holes in the generated three-dimensional space model.
In one aspect of the embodiments of the present disclosure, a method for reconstructing a three-dimensional space model is provided, the reconstruction method including: collecting a point cloud of a target scene with a laser radar in a simultaneous localization and mapping (SLAM) manner, and generating a three-dimensional point cloud model of the target scene based on the collected point cloud; determining at least one image acquisition point location in the target scene based on the three-dimensional point cloud model; performing image acquisition at each of the at least one image acquisition point location to obtain a panoramic image of the target scene at the at least one image acquisition point location; and fitting the panoramic image of the at least one image acquisition point location to the three-dimensional point cloud model to obtain a three-dimensional space model of the target scene.
In another aspect of the embodiments of the present disclosure, an apparatus for reconstructing a three-dimensional space model is provided, the reconstruction apparatus including: a point cloud model generation unit configured to collect a point cloud of a target scene with a laser radar in a simultaneous localization and mapping (SLAM) manner and generate a three-dimensional point cloud model of the target scene based on the collected point cloud; an acquisition point location determination unit configured to determine at least one image acquisition point location in the target scene based on the three-dimensional point cloud model; an image acquisition unit configured to perform image acquisition at each of the at least one image acquisition point location to obtain a panoramic image of the target scene at the at least one image acquisition point location; and a fitting unit configured to fit the panoramic image of the at least one image acquisition point location to the three-dimensional point cloud model to obtain a three-dimensional space model of the target scene.
In a further aspect of the disclosed embodiments, an electronic device is provided, which includes:
a memory for storing a computer program;
a processor for executing the computer program stored in the memory, the computer program, when executed, implementing the method for reconstructing a three-dimensional space model of the present disclosure.
In another aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, the computer program, when executed by a processor, implementing the method for reconstructing a three-dimensional space model of the present disclosure.
The present disclosure provides a method and an apparatus for reconstructing a three-dimensional space model, an electronic device, and a storage medium. First, omnidirectional mobile scanning of a target scene in a simultaneous localization and mapping (SLAM) manner largely avoids the occlusion problem caused by fixed acquisition point locations in the prior art. Second, at least one image acquisition point location in the target scene can be determined from the three-dimensional point cloud model, and the target scene is then photographed as indicated by the image acquisition point locations to obtain panoramic images; no manual selection of image acquisition point locations or manual multi-point stitching of the captured images is required, so the consistency of the image data is ensured. In addition, omnidirectional mobile scanning in the SLAM manner avoids redundant acquisition point locations and greatly shortens the acquisition time; the acquisition time of the point cloud data can be reduced by about an order of magnitude (roughly 10 times), which improves the overall efficiency of data acquisition. Because neither the point cloud data nor the panoramic images suffer from occlusion during the reconstruction of the three-dimensional model of the target scene, holes in the generated three-dimensional point cloud model can be reduced or even eliminated, and the integrity of the model is better.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
FIG. 1 is a schematic structural diagram of a system to which the technical solution of the present disclosure is applied;
FIG. 2 is a flow chart of one embodiment of a method for reconstructing a three-dimensional space model of the present disclosure;
FIG. 3 is a flow chart of another embodiment of a method for reconstructing a three-dimensional space model of the present disclosure;
FIG. 4 is a schematic diagram of a three-dimensional point cloud model of a target scene obtained by an embodiment of the method for reconstructing a three-dimensional space model of the present disclosure;
FIG. 5 is a schematic diagram showing image acquisition point locations and movement route indications based on the three-dimensional point cloud model of FIG. 4;
FIG. 6 is a schematic structural diagram of an embodiment of an apparatus for reconstructing a three-dimensional space model according to the present disclosure;
FIG. 7 is a schematic structural diagram of another embodiment of an apparatus for reconstructing a three-dimensional space model according to the present disclosure;
FIG. 8 is a schematic structural diagram of an application embodiment of the electronic device of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those skilled in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another and do not imply any particular technical meaning or any necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure is only one kind of association relationship describing an associated object, and means that three kinds of relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The disclosed embodiments may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with electronic devices, such as terminal devices, computer systems, servers, and the like, include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the disclosure
In the process of implementing the present disclosure, the inventor found that in the fixed-point acquisition mode, acquisition equipment is set up at certain fixed positions (point locations), and color image information is acquired together with spatial depth information at each point location. In later processing, the depth information acquired at multiple point locations is matched and fused by an algorithm to generate a three-dimensional model of the space; a panoramic image is generated from the color image information of each point location, and the color information is pasted onto the generated three-dimensional model according to a preset correspondence to generate a three-dimensional color model.
The fixed-point acquisition mode has several drawbacks. First, to guarantee the acquisition result, highly redundant acquisition point locations must be arranged in the acquisition space, and depth information and color images must be acquired at every point location; during subsequent processing, much of the depth and color information of these point locations does not contribute to the final model and panoramic image, which makes the acquisition time excessively long. Second, because fixed-point acquisition is constrained in position, height, and viewing angle, it is difficult to resolve all occlusion relationships, and the many occlusions and the insufficient data lead to many holes in the generated model. Finally, because of extensive manual operation and multi-point algorithmic stitching, data quality varies from person to person and from scene to scene, which degrades data consistency.
Exemplary system
FIG. 1 shows a system structure to which the present disclosure is applicable, including a point cloud collection end, an image collection end, a data processing end, and a display.
The system modules can be communicatively connected so that data interaction can be achieved. For example, the point cloud collection end may send the collected point cloud data of a target scene to the data processing end for processing, the image collection end may send the collected images of the target scene to the data processing end for processing, and the data processing end may send the three-dimensional point cloud model established from the point cloud data, together with the image acquisition point locations, to the display for display.
It should be noted that the point cloud collection end, the image collection end, and the display may establish communication connections with the data processing end in a wired or wireless manner. For example, if the point cloud collection end, the image collection end, the display, and the data processing end are integrated into a single device, the communication connection can be realized with a data line; if the point cloud collection end, the image collection end, and the display are arranged separately from the data processing end (for example, in a client/server, i.e., C/S, mode), a wireless communication connection can be realized using a mobile network (4G, 5G), a wireless network (Wi-Fi), or the like.
The point cloud collection end may be a laser simultaneous localization and mapping (SLAM) suite, referred to as a laser SLAM suite for short, which specifically includes a motor with a code wheel, a laser radar, and an inertial measurement sensor; the image collection end may include, but is not limited to, a panoramic camera; the data processing end may include, but is not limited to, a processor, a computer, and the like.
In an optional example, taking a system that includes the point cloud collection end, the image collection end, the display, and the data processing end as an example, the working process of the system is summarized as follows:
In the first step, a worker holds and starts the laser SLAM suite and performs omnidirectional mobile scanning of the target scene. For example, the worker walks around the target scene holding the laser SLAM suite to obtain point clouds of the target scene; during this process the motor with a code wheel drives the laser radar to rotate so as to enlarge the scanning angle of the target scene space; the spatial depth information obtained in one scan of the laser radar is called one frame of point cloud; and the inertial measurement sensor synchronously measures the inter-frame attitude change of the laser radar, i.e., the pose variation. The measured point clouds and pose variations are sent to the data processing end.
In the second step, the data processing end establishes a three-dimensional point cloud model from the received point clouds and pose variations, determines the image acquisition point locations based on the established three-dimensional point cloud model, and sends the three-dimensional point cloud model and the relative positions of the image acquisition point locations within it to the display for display.
In the third step, the worker photographs the target scene with the panoramic camera at the image acquisition point locations shown on the display and sends the captured panoramic images to the data processing end.
In the fourth step, the data processing end fits the panoramic images to the three-dimensional point cloud model to obtain a three-dimensional space model, thereby completing the reconstruction of the three-dimensional space model of the target scene.
Exemplary method
FIG. 2 is a flow chart of one embodiment of a method for reconstructing a three-dimensional spatial model of the present disclosure. The reconstruction method shown in fig. 2 can be applied to the system shown in fig. 1, and includes steps S110 to S140, which are described below.
S110, collecting point clouds of a target scene with a laser radar in a simultaneous localization and mapping (SLAM) manner, and generating a three-dimensional point cloud model of the target scene based on the collected point clouds.
S120, determining at least one image acquisition point location in the target scene based on the three-dimensional point cloud model.
S130, performing image acquisition at each of the at least one image acquisition point location to obtain a panoramic image of the target scene at the at least one image acquisition point location.
S140, fitting the panoramic image of the at least one image acquisition point location to the three-dimensional point cloud model to obtain a three-dimensional space model of the target scene.
The target scene can be any physical space needing to be subjected to three-dimensional space model reconstruction. For example, it may be an indoor space or an outdoor space, wherein the indoor space may include a living room, a bedroom, an exhibition room, etc.
Based on the working frequency of the laser radar, the depth information acquired in one scan of the target scene is counted as one frame of point cloud, and the depth information can be represented as three-dimensional rectangular coordinates or as distance and azimuth angle.
The three-dimensional point cloud model is the depth information of a three-dimensional space constructed from the point clouds to simulate the target scene, and is equivalent to a skeleton of the three-dimensional space of the target scene. Fitting the panoramic image to the three-dimensional point cloud model yields the three-dimensional space model, which is equivalent to adding color image information to this skeleton, so that the three-dimensional space of the target scene can be simulated more realistically.
With this method for reconstructing a three-dimensional space model, first, the point cloud of the target scene is collected in a simultaneous localization and mapping (SLAM) manner, which avoids redundant point cloud acquisition point locations and greatly shortens the acquisition time; the acquisition time of the point cloud data can be reduced by about an order of magnitude (roughly 10 times), so the overall efficiency of data acquisition is improved, and the occlusion problem caused by fixed acquisition point locations in the prior art is avoided. Second, at least one image acquisition point location in the target scene is determined based on the three-dimensional point cloud model, and images of the target scene are then captured according to these image acquisition point locations; this step requires neither manual selection of image acquisition point locations nor manual multi-point stitching of the captured images, so the consistency of the image data is ensured. Finally, because there is no occlusion problem, holes in the finally generated three-dimensional space model can be reduced or even eliminated, and the integrity of the model is better.
Fig. 3 is a flowchart of another embodiment of a reconstruction method of a three-dimensional spatial model according to the present disclosure. On the basis of the embodiment shown in fig. 2, the step S110 may include the following steps S1101 to S1103, which are described below.
S1101, performing omnidirectional mobile scanning of the target scene with a laser radar to obtain continuous multi-frame point cloud data of the target scene.
As can be appreciated, the above-described "omni-directional motion scanning the target scene with lidar" step can be accomplished in any available manner.
For example, in an optional example, a motor may be controlled to drive the laser radar to rotate while the laser radar is continuously displaced within the target scene, so as to enlarge the scanning angle of the target scene and thereby achieve omnidirectional mobile scanning. The laser radar may be moved in various ways: it may be carried by a remote-controlled vehicle, carried by a robot, or held by an operator. The motor may be a motor with a code wheel; driving the laser radar to rotate with such a motor enlarges the scanning range of the laser radar.
Based on its working frequency, the laser radar obtains one frame of point cloud data for each scan of the target area, and continuous multi-frame point cloud data of the target scene are obtained while the laser radar scans and moves. For each frame of point cloud data, a three-dimensional rectangular coordinate system is established according to the laser radar position corresponding to that frame, the coordinates of each point in the point cloud are then determined in that coordinate system, and these point coordinates constitute the frame of point cloud data.
With step S1101, the laser radar scans the target scene omnidirectionally while moving, so the depth information of the target scene is fully acquired from multiple angles and at multiple positions. This provides data support for the subsequent establishment of the three-dimensional model by the server and avoids the mutual occlusion between scanning areas that arises with fixed-point acquisition in the prior art.
S1102, determining the pose variation between any two adjacent frames of point cloud data in the continuous multi-frame point cloud data based on an inertial sensor.
As can be appreciated, the above step of determining the pose variation can be accomplished in any available manner. For example, in an optional example, the rotation matrix and the translation vector between the three-dimensional point cloud coordinate systems corresponding to two adjacent frames of point cloud data are determined from the real-time acceleration and real-time angular velocity of the laser radar acquired by the inertial sensor at the corresponding moments. Here, an inertial measurement unit (IMU) is a sensor mainly used to measure acceleration and rotational motion; the IMU may be integrated or bound with the laser radar so that the real-time acceleration and real-time angular velocity of the laser radar during movement can be measured by the IMU.
Optionally, the real-time acceleration and real-time angular velocity of the laser radar measured by the IMU during movement may be sent to the data processing end of the system in the embodiment of FIG. 1 for further processing, so as to obtain the rotation matrix and translation vector between the three-dimensional point cloud coordinate systems corresponding to two adjacent frames of point cloud data. Specifically, for convenience of description, the two adjacent frames of point cloud data are denoted as the point cloud data at time t1 and the point cloud data at time t2. The IMU measures the acceleration a1 and the angular velocity w1 of the laser radar at time t1, and the acceleration a2 and the angular velocity w2 of the laser radar at time t2.
The data processing end calculates the acceleration difference Δa between the accelerations a1 and a2 and the angular velocity difference Δw between the angular velocities w1 and w2; the integral of the acceleration difference Δa from time t1 to time t2 is taken as the translation vector between the three-dimensional point cloud coordinate systems corresponding to the two adjacent frames of point cloud data, and the integral of the angular velocity difference Δw from time t1 to time t2 is taken as the rotation matrix between the three-dimensional point cloud coordinate systems corresponding to the two adjacent frames of point cloud data.
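As an illustration of this inter-frame pose estimation, the following is a minimal sketch of a conventional IMU dead-reckoning step between two frame timestamps: the angular velocity is integrated into a rotation (via the Rodrigues formula) and the gravity-compensated linear acceleration is double-integrated into a translation. The function names, the trapezoidal (linear-variation) assumption, and the requirement that gravity has already been removed from the accelerometer samples are illustrative assumptions; the formulation above, which integrates the differences between the two frames' measurements, may differ in detail, and production systems typically use full IMU pre-integration instead.

```python
import numpy as np

def rodrigues(rotvec):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(rotvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rotvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def inter_frame_pose(w1, w2, a1, a2, v0, dt):
    """Approximate pose change of the laser radar between two frame timestamps.

    w1, w2 : angular velocity samples at t1 and t2 (rad/s, body frame)
    a1, a2 : gravity-compensated linear accelerations at t1 and t2 (m/s^2)
    v0     : estimated linear velocity at t1 (m/s)
    dt     : t2 - t1 (s)
    Assumes the measurements vary roughly linearly over the short interval.
    """
    # Rotation: integrate angular velocity over [t1, t2] and map it to a rotation matrix.
    rotvec = 0.5 * (np.asarray(w1) + np.asarray(w2)) * dt
    rotation = rodrigues(rotvec)
    # Translation: double-integrate the gravity-free acceleration over [t1, t2].
    a_mean = 0.5 * (np.asarray(a1) + np.asarray(a2))
    translation = np.asarray(v0) * dt + 0.5 * a_mean * dt ** 2
    return rotation, translation
```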
Based on the above steps S1101 and S1102, the point cloud of the target scene (i.e., the continuous multi-frame point cloud data) and the pose variation (i.e., the rotation matrix and the translation vector) between any two adjacent frames of point cloud data in the continuous multi-frame point cloud data can be obtained.
Step S1103, sequentially establishing a three-dimensional point cloud coordinate system corresponding to each frame of point cloud data in the continuous multi-frame point cloud data; selecting, according to the acquisition time order of the frames, the three-dimensional point cloud coordinate system corresponding to the first frame of point cloud data as the three-dimensional point cloud reference coordinate system; converting the three-dimensional point cloud data in the remaining three-dimensional point cloud coordinate systems (other than the reference coordinate system) into the three-dimensional point cloud reference coordinate system based on the pose variations; and sequentially stitching the continuous multi-frame point cloud data according to the acquisition time order to obtain the three-dimensional point cloud model of the target scene.
As described above, since the points in the point cloud data can be represented by three-dimensional rectangular coordinates, the origin of each frame's coordinate system, i.e., the position of the laser radar, can be determined from the coordinates of the points in that frame, and the coordinate system is thereby determined. Further, the three-dimensional point cloud coordinate system corresponding to the first frame of point cloud data in time order may be taken as the three-dimensional point cloud reference coordinate system, and the collection time may be recorded as an additional dimension of each point in the point cloud data; for example, a point in a frame of point cloud may be represented as (x, y, z, t), where x, y, z are its coordinates and t is its collection time.
In addition, as can be understood, in step S1103 the three-dimensional point cloud data in the remaining three-dimensional point cloud coordinate systems (other than the reference coordinate system) may be converted into the three-dimensional point cloud reference coordinate system in any available manner. For example, in an alternative example, a direct rotation matrix and a direct translation vector between any remaining three-dimensional point cloud coordinate system and the three-dimensional point cloud reference coordinate system may be determined according to the acquisition time order; the direct conversion relationship between that coordinate system and the reference coordinate system is thereby determined, and the coordinate conversion of the three-dimensional point cloud data is then performed.
The direct rotation matrix is obtained by composing the rotation matrices of all the pairs of adjacent frames lying between the given three-dimensional point cloud coordinate system and the three-dimensional point cloud reference coordinate system. Similarly, the direct translation vector is obtained by accumulating the translation vectors of those pairs of adjacent frames.
It should be noted that the "sequentially concatenating multiple continuous frames of point cloud data according to the acquisition time sequence" in step S1103 means: and superposing the three-dimensional point cloud data in any three-dimensional point cloud coordinate system with the corresponding direct rotation matrix and direct translation vector according to the acquisition time sequence. Specifically, according to the acquisition time sequence, the second frame of point cloud data and the first frame of point cloud data are spliced to obtain a large point cloud data synthesized by two frames of point cloud data, then the third frame of point cloud data and the large point cloud data are continuously spliced, and the like is carried out until all continuous multi-frame point cloud data are spliced to obtain the three-dimensional point cloud model of the target scene.
Wherein the superimposing refers to summing the position coordinates of the points in the point cloud with the corresponding rotation components in the rotation matrix and the corresponding translation components in the translation vector.
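The following is a minimal numpy sketch of this composition and stitching step, assuming each relative transform (R, t) maps frame i+1 into the coordinate system of frame i, as the pose variations described above do; the function and variable names are illustrative.

```python
import numpy as np

def stitch_frames(frames, rel_transforms):
    """Stitch consecutive point cloud frames into the reference frame of frame 0.

    frames         : list of (N_i, 3) arrays, each in its own lidar coordinate system
    rel_transforms : list of (R, t) pairs; rel_transforms[i] maps frame i+1 into frame i
    Returns one (sum N_i, 3) array expressed in the coordinate system of frame 0.
    """
    R_direct = np.eye(3)      # direct rotation: current frame -> frame 0
    t_direct = np.zeros(3)    # direct translation: current frame -> frame 0
    merged = [np.asarray(frames[0], dtype=float)]
    for k, (R_rel, t_rel) in enumerate(rel_transforms, start=1):
        # Compose the adjacent-frame transforms to obtain the direct transform to frame 0.
        t_direct = R_direct @ t_rel + t_direct
        R_direct = R_direct @ R_rel
        pts = np.asarray(frames[k], dtype=float)
        # Apply the direct rotation, then add the direct translation.
        merged.append(pts @ R_direct.T + t_direct)
    return np.vstack(merged)
```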
The above steps S1101 to S1103 are applicable to the case where the target scene is an indoor space or an outdoor space.
Depending on the circumstances, when the target scene is an outdoor space, step S110 can also be implemented in the following ways.
For example, in an optional example, omnidirectional mobile scanning of the target scene is performed with a laser radar based on a global navigation satellite system, so as to obtain continuous multi-frame point cloud data of the target scene referenced to the global navigation satellite system; the continuous multi-frame point cloud data are then stitched in the global navigation satellite system coordinate system to obtain the three-dimensional point cloud model of the target scene. Specifically, the real-time position coordinates of the laser radar during its omnidirectional mobile scanning of the target scene can be determined in the global navigation satellite system coordinate system; then, for each real-time position of the laser radar, the coordinates of each point in the global navigation satellite system coordinate system are determined from the distance and bearing of that point, in the point cloud data (depth information) detected by the laser radar at that moment, relative to the laser radar, thereby obtaining continuous multi-frame point cloud data of the target scene referenced to the global navigation satellite system. Furthermore, because the continuous multi-frame point cloud data are all established in the same global navigation satellite system coordinate system, they can be stitched directly in that coordinate system; this eliminates the conversion between the coordinate systems of different frames based on pose variations and improves modeling efficiency.
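A minimal sketch of this conversion is shown below, assuming the sensor position at each instant comes from the global navigation satellite system (e.g., with RTK) and that only the sensor's yaw relative to the global frame needs to be accounted for; the names and the yaw-only orientation are simplifying assumptions rather than a full implementation.

```python
import numpy as np

def ranges_to_global(gnss_position, heading_rad, ranges, azimuths_rad, elevations_rad):
    """Convert one frame of lidar returns (range / azimuth / elevation) into a
    global, GNSS-referenced east-north-up frame.

    gnss_position : (3,) sensor position in the global frame at this instant
    heading_rad   : sensor yaw relative to the global frame (assumed known)
    ranges, azimuths_rad, elevations_rad : per-return measurements
    """
    r = np.asarray(ranges, dtype=float)
    az = np.asarray(azimuths_rad, dtype=float) + heading_rad   # rotate into the global yaw
    el = np.asarray(elevations_rad, dtype=float)
    local = np.stack([r * np.cos(el) * np.cos(az),
                      r * np.cos(el) * np.sin(az),
                      r * np.sin(el)], axis=1)
    return local + np.asarray(gnss_position, dtype=float)

# Frames converted this way share one coordinate system, so stitching is a plain concatenation:
# model = np.vstack([ranges_to_global(*frame) for frame in frames])
```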
The global navigation satellite system may include, but is not limited to, China's BeiDou Navigation Satellite System (BDS), the United States' Global Positioning System (GPS), Russia's GLONASS satellite navigation system, and the European Union's GALILEO satellite navigation system. Preferably, real-time kinematic (RTK) positioning may be used with the global navigation satellite system to improve positioning accuracy.
With this implementation, the process of acquiring the pose changes of the point cloud data with the IMU can be omitted; the depth information of the target scene is acquired directly in the global navigation satellite system coordinate system, which improves efficiency.
For another example, in another optional example, global positioning information of the global navigation satellite system corresponding to the target scene is acquired; the global positioning information of the three-dimensional point cloud reference coordinate system in the global navigation satellite system coordinate system is determined; and the three-dimensional point cloud model is converted into the global navigation satellite system coordinate system based on the global positioning information.
In this example, the global positioning information of the position of the laser radar (i.e., the origin of the three-dimensional point cloud reference coordinate system) may be determined from the received global positioning information (e.g., longitude and latitude) of the target scene, and this global positioning information may be added to the three-dimensional point cloud reference coordinate system as an additional dimension. Because the three-dimensional point cloud model is established in the three-dimensional point cloud reference coordinate system, the model thereby also carries global positioning information, and converting it into the global positioning coordinate system gives it global positioning capability.
The global positioning information corresponding to the target scene may be obtained through the aforementioned global navigation satellite system (e.g., BDS, GPS) and may be, for example, longitude and latitude information. The global positioning information corresponding to the target scene is collected and added by the server to the three-dimensional point cloud model established from the data collected by the laser radar and the IMU, so that the model is converted into the global navigation satellite system coordinate system.
In an alternative example, based on the embodiment shown in FIG. 2 described above, step S120 may be implemented as follows. First, space segmentation is performed on the target scene based on the three-dimensional point cloud model to obtain a plurality of subspaces; the image acquisition point locations are then determined according to the occlusion relationships and occupancy information between and within the subspaces.
FIG. 4 is a schematic diagram of a three-dimensional point cloud model of a target scene obtained by an embodiment of the method for reconstructing a three-dimensional space model of the present disclosure. FIG. 5 is a schematic diagram showing image acquisition point locations and movement route indications based on the three-dimensional point cloud model of FIG. 4. Here, the target scene is taken to be an indoor space (as shown in FIG. 4). Referring to FIG. 4, the three-dimensional point cloud model comprises a plurality of regions, which can be spatially segmented with existing algorithms to obtain a plurality of subspaces, such as a living room, bedroom, kitchen, and bathroom. Further, a preset algorithm is used to determine the occlusion relationships and occupancy information within the subspaces. For example, a living room contains objects such as tables, chairs, and sofas; the areas occupied by these objects cannot serve as image acquisition point locations, so other free and unobstructed areas are selected as the image acquisition point locations. For another example, referring to FIG. 5, in a bedroom the area between the bed and the windowsill cannot be photographed from the bedroom doorway because the bed blocks the view, so an image acquisition point location (a gray dot in FIG. 5) needs to be determined between the bed and the windowsill. The black dots represent image acquisition point locations at which images have already been captured, and the gray dots represent image acquisition point locations at which images have not yet been captured.
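As a rough illustration of how free, unobstructed areas could be chosen from the point cloud model, the following is a minimal sketch that works on a 2-D occupancy grid of one subspace; the function, parameter names, and thresholds are illustrative assumptions, and the occlusion analysis between a candidate point and the regions it must see (e.g., a ray-casting visibility check) would be layered on top of it.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def select_capture_points(occupancy, cell_size, min_clearance=0.5, spacing=2.0):
    """Pick image acquisition point locations on a 2-D occupancy grid of one subspace.

    occupancy : (H, W) bool array, True where the point cloud marks an obstacle
    cell_size : metres per grid cell
    Returns grid cells that are free, keep `min_clearance` metres from obstacles,
    and lie at least `spacing` metres apart from each other.
    """
    clearance = distance_transform_edt(~occupancy) * cell_size  # distance to nearest obstacle
    candidates = np.argwhere(clearance >= min_clearance)
    # Greedily prefer the most open cells while enforcing the spacing constraint.
    order = np.argsort(-clearance[tuple(candidates.T)])
    chosen = []
    for cell in candidates[order]:
        if all(np.linalg.norm((cell - c) * cell_size) >= spacing for c in chosen):
            chosen.append(cell)
    return chosen
```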
In an optional example, on the basis of the foregoing embodiments, the reconstruction method further includes: displaying the three-dimensional point cloud model, the relative positions of the image acquisition point locations in the three-dimensional point cloud model, and the movement route indications between the image acquisition point locations.
As can be appreciated, in this example the data processing end (see FIG. 1) may match the real-time point cloud scanned by the laser radar against the three-dimensional point cloud model to determine the real-time position of the laser radar in the model, determine a movement route indication from the offset between this real-time position and the image acquisition point locations, and send the movement route indication to the display (see FIG. 1), thereby providing the user with a visual shooting guide (such as the arrow shown in FIG. 5) toward the designated image acquisition point location.
The display device may be a display screen integrated with the laser radar, the code wheel motor, and the IMU, or it may be a mobile device connected over a mobile network (4G, 5G) or a wireless network (Wi-Fi), such as the user's smartphone or tablet computer; the displayed position information can be presented and interacted with on the mobile device in the form of an app or a web page.
In an alternative example, on the basis of the above embodiments, step S130 may be implemented in various available ways. For example, a panoramic camera may be used to directly capture the target scene at an image acquisition point location, so as to obtain the panoramic image of the target scene at that image acquisition point location and the position information corresponding to the panoramic image. For another example, a non-panoramic camera may be used to capture the target scene at an image acquisition point location to obtain a plurality of ordinary images with overlapping regions at that image acquisition point location, and the overlapping ordinary images are fused to obtain the panoramic image corresponding to the image acquisition point location and the position information corresponding to the panoramic image.
The fusion of the multiple overlapping ordinary images may specifically proceed as follows: first, the poses of any two adjacent overlapping regions among the ordinary images are corrected and the stitching positions of the ordinary images are determined; pixel superposition is then performed based on the stitching positions.
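As one readily available stand-in for this pose-correction, stitching-position, and pixel-superposition procedure, the sketch below uses OpenCV's high-level Stitcher; it is not necessarily the fusion pipeline of the present disclosure, and the function name is an illustrative assumption.

```python
import cv2

def fuse_to_panorama(image_paths):
    """Fuse several overlapping ordinary photos taken at one acquisition point location.

    OpenCV's Stitcher performs feature matching, pose estimation, and blending;
    it stands in for the fusion procedure described above.
    """
    images = [cv2.imread(path) for path in image_paths]
    stitcher = cv2.Stitcher_create()        # defaults to panorama mode
    status, panorama = stitcher.stitch(images)
    if status != 0:                         # 0 == cv2.Stitcher_OK
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama
```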
In addition, the position information corresponding to the panoramic image is determined from the laser SLAM positioning information obtained by the laser radar in the simultaneous localization and mapping (SLAM) manner and from the extrinsic parameters of the panoramic camera or the non-panoramic camera relative to the laser radar.
Depending on the situation, while the panoramic or ordinary camera is shooting at an image acquisition point location, the laser SLAM keeps running; that is, the laser radar determines, in the SLAM manner, the coordinates of the position where the panoramic or ordinary camera is located, and the extrinsic parameters (i.e., the relative pose) between the laser radar and the panoramic or ordinary camera are then applied to obtain the position information corresponding to the panoramic image.
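A minimal sketch of combining the laser SLAM pose with the camera-to-lidar extrinsic parameters to obtain the camera pose, and hence the panorama's position information, in the point cloud model coordinate system is given below; the names and pose conventions are illustrative assumptions.

```python
import numpy as np

def camera_pose_in_model(R_lidar, t_lidar, R_extr, t_extr):
    """Pose of the panoramic camera in the point cloud model coordinate system.

    R_lidar, t_lidar : lidar pose from laser SLAM (model <- lidar)
    R_extr,  t_extr  : camera-to-lidar extrinsic parameters (lidar <- camera),
                       assumed to have been calibrated in advance
    """
    R_cam = R_lidar @ R_extr
    t_cam = R_lidar @ t_extr + t_lidar
    return R_cam, t_cam
```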
It should be noted that the embodiments of the present disclosure do not limit the panoramic camera or the non-panoramic camera; for example, the panoramic camera may include, but is not limited to, an Insta360 or a Ricoh Theta, and the non-panoramic camera may include, but is not limited to, a fisheye-lens camera or an ordinary color camera.
The overlap refers to a pixel overlap of a certain width between two adjacent images; the width can be set as required and may be, for example, 10, 20, or 30 pixels. This arrangement ensures that no pixels of the ordinary images are lost when they are stitched into the panoramic image.
In an optional example, when the target scene is an outdoor space, the reconstruction method further includes: while the point clouds of the target scene are being collected by the laser radar in the simultaneous localization and mapping (SLAM) manner, synchronously photographing the target scene with a panoramic camera to obtain the corresponding panoramic image and the position information corresponding to the panoramic image. The position information corresponding to the panoramic image is determined from the laser SLAM positioning information obtained by the laser radar in the SLAM manner and from the extrinsic parameters of the panoramic camera relative to the laser radar.
The above-described "synchronous shooting" may be implemented in various ways depending on the situation. For example, the panoramic camera can be erected on the head of an operator in a backpack form (because the outdoor space scene is wide, the assumed height of the panoramic camera is not restricted), and then the operator holds the laser radar and the IMU integrated acquisition equipment to acquire point cloud data; or, under the condition that the movable robot is used as a carrier, the laser radar, the IMU and the panoramic camera can be integrated and erected on the movable robot, so that scanning and shooting are realized, and the shooting efficiency is improved.
In an alternative example, on the basis of the above embodiments, step S140 may be implemented as follows. For example, the panoramic image may be fitted to the three-dimensional point cloud model based on a preset first extrinsic parameter, which represents the rotation-translation relationship between the panoramic camera coordinate system and the laser radar coordinate system, and on the position information corresponding to the panoramic image. Specifically, the position information corresponding to the panoramic image is converted into the coordinate system of the three-dimensional point cloud model based on the first extrinsic parameter, so as to obtain the position of the panoramic image in the three-dimensional point cloud model, and the pixel information of the panoramic image is added at that position.
For another example, the panoramic image may be fitted to the three-dimensional point cloud model based on a second extrinsic parameter, which represents the rotation-translation relationship between the non-panoramic camera coordinate system and the laser radar coordinate system, and on the position information corresponding to the panoramic image. Specifically, the position information corresponding to the panoramic image is converted into the coordinate system of the three-dimensional point cloud model based on the second extrinsic parameter, so as to obtain the position of the panoramic image in the three-dimensional point cloud model, and the pixel information of the panoramic image is added at that position.
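As a rough illustration of adding the pixel information of an equirectangular panoramic image to the point cloud model once the panorama's pose in the model coordinate system is known, the following is a minimal sketch; the equirectangular mapping convention, the absence of occlusion handling, and the names are illustrative assumptions.

```python
import numpy as np

def colorize_points(points, panorama, R_cam, t_cam):
    """Sample panorama pixels for point cloud model points (a simplified sketch).

    points   : (N, 3) points in the point cloud model coordinate system
    panorama : (H, W, 3) equirectangular panoramic image
    R_cam, t_cam : camera pose in the model coordinate system (model <- camera),
                   i.e. the converted position information of the panoramic image
    Points coincident with the camera centre are not handled.
    """
    H, W = panorama.shape[:2]
    # Express the points in the camera frame.
    local = (np.asarray(points, dtype=float) - t_cam) @ R_cam
    x, y, z = local[:, 0], local[:, 1], local[:, 2]
    # Equirectangular projection: longitude -> image column, latitude -> image row.
    lon = np.arctan2(y, x)                                   # [-pi, pi]
    lat = np.arcsin(z / np.linalg.norm(local, axis=1))       # [-pi/2, pi/2]
    u = ((lon + np.pi) / (2 * np.pi) * (W - 1)).astype(int)
    v = ((np.pi / 2 - lat) / np.pi * (H - 1)).astype(int)
    return panorama[v, u]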
In an optional example, based on the above embodiments, the reconstruction method further includes: determining a virtual reality model for display based on the three-dimensional space model.
Specifically, the three-dimensional space model is triangulated to obtain a triangulated three-dimensional space model, and texture fitting is then performed on the triangulated three-dimensional space model to obtain the virtual reality model for display. Here, a preset triangulation algorithm adds patches in the gaps between the points of the three-dimensional color point cloud model to fill the gaps in the model, and the patches are textured to obtain a VR model that can be displayed in a terminal app or web page.
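A minimal sketch of such a triangulation step is shown below, using Open3D's Poisson surface reconstruction as one possible preset triangulation algorithm and per-vertex colors as a stand-in for texture fitting; the library choice, parameters, and names are assumptions rather than the specific algorithm of the present disclosure.

```python
import numpy as np
import open3d as o3d  # assumed available; any surface-reconstruction library would do

def point_cloud_to_vr_mesh(points, colors, poisson_depth=9):
    """Triangulate a colored point cloud into a displayable mesh (a hedged sketch)."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points, dtype=float))
    pcd.colors = o3d.utility.Vector3dVector(np.asarray(colors, dtype=float) / 255.0)
    pcd.estimate_normals()  # Poisson reconstruction needs oriented normals
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=poisson_depth)
    return mesh  # e.g. o3d.io.write_triangle_mesh("scene.ply", mesh) for app/web display
```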
In summary, with the method for reconstructing a three-dimensional space model of the present disclosure, the target scene is first scanned omnidirectionally while moving, which largely avoids the occlusion problem caused by fixed acquisition point locations in the prior art. Second, at least one image acquisition point location in the target scene can be determined from the three-dimensional point cloud model, and the target scene is then photographed as indicated by the image acquisition point locations to obtain panoramic images; no manual selection of image acquisition point locations or manual multi-point stitching of the captured images is required, so the consistency of the image data is ensured. In addition, omnidirectional mobile scanning in the SLAM manner avoids redundant acquisition point locations and greatly shortens the acquisition time; the acquisition time of the point cloud data can be reduced by about an order of magnitude (roughly 10 times), which improves the overall efficiency of data acquisition. Because neither the point cloud data nor the panoramic images suffer from occlusion during the reconstruction of the three-dimensional model of the target scene, holes in the generated three-dimensional point cloud model can be reduced or even eliminated, and the integrity of the model is better. Moreover, the movement route indications to the image acquisition point locations can be displayed, which makes it convenient for the user to capture the image data, prevents data redundancy, and weakens the influence of human factors on data quality.
Exemplary devices
It should be understood that the method for reconstructing a three-dimensional space model described in the foregoing embodiments can be extended analogously to the following apparatus for reconstructing a three-dimensional space model. For brevity, the details are not repeated.
FIG. 6 is a schematic structural diagram of an embodiment of an apparatus for reconstructing a three-dimensional space model according to the present disclosure. As shown in FIG. 6, the reconstruction apparatus 600 includes:
The point cloud model generation unit 610 is configured to collect a point cloud of a target scene with a laser radar in a simultaneous localization and mapping (SLAM) manner and generate a three-dimensional point cloud model of the target scene based on the collected point cloud; the acquisition point location determination unit 620 is configured to determine at least one image acquisition point location in the target scene based on the three-dimensional point cloud model; the image acquisition unit 630 is configured to perform image acquisition at each of the at least one image acquisition point location to obtain a panoramic image of the target scene at the at least one image acquisition point location; and the fitting unit 640 is configured to fit the panoramic image of the at least one image acquisition point location to the three-dimensional point cloud model to obtain the three-dimensional space model of the target scene.
With the apparatus for reconstructing a three-dimensional space model of the present disclosure, first, the point cloud of the target scene is collected in a simultaneous localization and mapping (SLAM) manner, which avoids redundant point cloud acquisition point locations and greatly shortens the acquisition time; the acquisition time of the point cloud data can be reduced by about an order of magnitude (roughly 10 times), so the overall efficiency of data acquisition is improved, and the occlusion problem caused by fixed acquisition point locations in the prior art is avoided. Second, at least one image acquisition point location in the target scene is determined based on the three-dimensional point cloud model, and images of the target scene are then captured according to these image acquisition point locations; this step requires neither manual selection of image acquisition point locations nor manual multi-point stitching of the captured images, so the consistency of the image data is ensured. Finally, because there is no occlusion problem, holes in the finally generated three-dimensional space model can be reduced or even eliminated, and the integrity of the model is better.
Fig. 7 is a schematic structural diagram of a reconstruction apparatus of a three-dimensional space model according to still another embodiment of the present disclosure.
In yet another embodiment of the apparatus for reconstructing a three-dimensional space model, as shown in FIG. 7, the point cloud model generating unit 610 includes: a point cloud collection module 6101 configured to perform omnidirectional mobile scanning of the target scene with a laser radar so as to obtain continuous multi-frame point cloud data of the target scene; a pose determination module 6102 configured to determine the pose variation between any two adjacent frames of point cloud data in the continuous multi-frame point cloud data based on an inertial sensor; and a model building module 6103 configured to: sequentially establish a three-dimensional point cloud coordinate system corresponding to each frame of point cloud data in the continuous multi-frame point cloud data; select, according to the acquisition time order of the frames, the three-dimensional point cloud coordinate system corresponding to the first frame of point cloud data as the three-dimensional point cloud reference coordinate system; convert the three-dimensional point cloud data in the remaining three-dimensional point cloud coordinate systems (other than the reference coordinate system) into the three-dimensional point cloud reference coordinate system based on the pose variations; and sequentially stitch the continuous multi-frame point cloud data according to the acquisition time order to obtain the three-dimensional point cloud model of the target scene.
In an optional example, the pose variation comprises a rotation matrix and a translation vector between three-dimensional point cloud coordinate systems respectively corresponding to any two adjacent frames of point cloud data; the model establishment module 6103 is further configured to: according to the acquisition time sequence, determining a direct rotation matrix and a direct translation vector between any remaining three-dimensional point cloud coordinate system except the three-dimensional point cloud reference coordinate system and the three-dimensional point cloud reference coordinate system; and superposing the three-dimensional point cloud data in any three-dimensional point cloud coordinate system with the corresponding direct rotation matrix and direct translation vector according to the acquisition time sequence.
In an optional example, the target scene is an outdoor space, and the point cloud model generation unit 610 is further configured to: perform, based on a global navigation satellite system, omnidirectional mobile scanning of the target scene with a laser radar so as to obtain continuous multi-frame point cloud data of the target scene referenced to the global navigation satellite system; and stitch the continuous multi-frame point cloud data in the global navigation satellite system coordinate system to obtain the three-dimensional point cloud model of the target scene.
In an optional example, the acquisition point location determination unit 620 is further configured to: perform space segmentation on the target scene based on the three-dimensional point cloud model to obtain a plurality of subspaces; and determine the image acquisition point locations according to the occlusion relationships and occupancy information between and within the subspaces.
In an optional example, the image acquisition unit 630 is further configured to: directly capture the target scene at an image acquisition point location with a panoramic camera to obtain the panoramic image of the target scene at that image acquisition point location and the position information corresponding to the panoramic image; or capture the target scene at an image acquisition point location with a non-panoramic camera to obtain a plurality of overlapping ordinary images of the target scene at that image acquisition point location, and fuse the overlapping ordinary images to obtain the panoramic image corresponding to the image acquisition point location and the position information corresponding to the panoramic image. The position information corresponding to the panoramic image is determined from the laser SLAM positioning information obtained by the laser radar in the simultaneous localization and mapping (SLAM) manner and from the extrinsic parameters of the panoramic camera or the non-panoramic camera relative to the laser radar.
In an optional example, the target scene is outdoors, and the reconstruction apparatus further comprises a synchronous acquisition unit configured to: when the point cloud of the target scene is collected by the laser radar in the SLAM (simultaneous localization and mapping) mode, synchronously photograph the target scene with a panoramic camera to obtain a corresponding panoramic image and the corresponding position information of the panoramic image; the corresponding position information of the panoramic image is determined based on the positioning information determined by the laser radar in the SLAM mode and the extrinsic parameters of the panoramic camera relative to the laser radar.
In an optional example, the fitting unit 640 is further configured to: fit the panoramic image to the three-dimensional point cloud model based on a preset first extrinsic parameter representing the rotation-translation relationship between the panoramic camera coordinate system and the laser radar coordinate system and the corresponding position information of the panoramic image; or fit the panoramic image to the three-dimensional point cloud model based on a second extrinsic parameter representing the rotation-translation relationship between the non-panoramic camera coordinate system and the laser radar coordinate system and the corresponding position information of the panoramic image.
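The following is a hedged sketch of one way such fitting can be realized: every point of the three-dimensional point cloud model is transformed into the camera frame using the capture pose (built from the extrinsic parameters and the SLAM positioning information) and then mapped to equirectangular pixel coordinates to sample a colour. The equirectangular convention used here (x right, y down, z forward) is an assumption of this example, not taken from the disclosure, and all names are illustrative.

```python
import numpy as np

def colorize_points(points, panorama, T_world_camera):
    """points: (N, 3) points in the point cloud (world) frame;
    panorama: (H, W, 3) equirectangular image;
    T_world_camera: 4x4 pose of the panoramic camera in the world frame.
    Returns an (N, 3) array of sampled colours, one per point."""
    h, w = panorama.shape[:2]
    T_camera_world = np.linalg.inv(T_world_camera)
    p_cam = points @ T_camera_world[:3, :3].T + T_camera_world[:3, 3]
    x, y, z = p_cam[:, 0], p_cam[:, 1], p_cam[:, 2]
    lon = np.arctan2(x, z)                            # azimuth in [-pi, pi)
    lat = np.arctan2(y, np.sqrt(x ** 2 + z ** 2))     # elevation in [-pi/2, pi/2]
    u = ((lon / (2.0 * np.pi) + 0.5) * w).astype(int) % w
    v = ((lat / np.pi + 0.5) * h).astype(int).clip(0, h - 1)
    return panorama[v, u]                             # nearest-neighbour colour per point
```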
In one optional example, the reconstruction apparatus further comprises a display unit configured to display the three-dimensional point cloud model, a relative positional relationship of the image acquisition point locations in the three-dimensional point cloud model, and an indication of a movement route between the image acquisition point locations.
In an optional example, the reconstruction apparatus further comprises a virtual display model determination unit configured to determine a virtual reality model for presentation based on the three-dimensional space model.
In one optional example, the virtual display model determination unit is further configured to: perform triangulation on the three-dimensional space model to obtain a triangulated three-dimensional space model; and perform texture fitting on the triangulated three-dimensional space model to obtain the virtual reality model for display.
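As an illustration under stated assumptions (the Open3D library is available, and Poisson surface reconstruction with interpolated per-vertex colours stands in for the unspecified triangulation and texture fitting), this step can be sketched as follows:

```python
import open3d as o3d

def triangulate_colored_cloud(points, colors):
    """points: (N, 3) array of coordinates; colors: (N, 3) array with values in [0, 1].
    Returns a triangle mesh with vertex colours interpolated from the point cloud."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd.colors = o3d.utility.Vector3dVector(colors)
    pcd.estimate_normals()        # Poisson reconstruction needs oriented normals
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    return mesh
```

A production pipeline would typically replace the per-vertex colours with texture atlases baked from the panoramic images, but the mesh above is already displayable in a VR viewer.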
In summary, the apparatus for reconstructing a three-dimensional space model disclosed herein can realize omnidirectional mobile scanning of the target scene, thereby largely solving the occlusion problem caused by fixed acquisition points in the prior art. Secondly, at least one image acquisition point location in the target scene can be determined according to the three-dimensional point cloud model, and the target scene is then imaged at the indicated image acquisition point locations to obtain panoramic images, so that the image acquisition point locations do not need to be selected manually and the captured images do not need to be manually spliced across multiple point locations, which ensures the consistency of the image data. In addition, performing omnidirectional mobile scanning in the SLAM mode avoids redundant acquisition point locations and greatly shortens the acquisition time; the acquisition time of the point cloud data can be reduced by one order of magnitude (roughly 10 times), so that the overall efficiency of data acquisition is improved. In the process of reconstructing the three-dimensional model of the target scene, the point cloud data and the panoramic images are free of occlusion problems, so that holes in the generated three-dimensional point cloud model can be reduced or even eliminated and the integrity of the model is better. In addition, movement route indications between the image acquisition point locations can be displayed, which makes it convenient for a user to capture image data, avoids data redundancy, and weakens the influence of human factors on data quality.
Exemplary electronic device
In addition, an embodiment of the present disclosure also provides an electronic device, including:
a memory for storing a computer program;
a processor configured to execute the computer program stored in the memory and, when the computer program is executed, to implement the method for reconstructing a three-dimensional space model according to any of the above embodiments of the present disclosure.
Fig. 8 is a schematic structural diagram of an application embodiment of the electronic device of the present disclosure. An electronic device according to an embodiment of the present disclosure is described next with reference to fig. 8. The electronic device may be either or both of a first device and a second device, or a stand-alone device separate from them that can communicate with the first device and the second device to receive acquired input signals from them.
As shown in fig. 8, the electronic device includes one or more processors and memory.
The processor may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
The memory may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by a processor to implement the reconstruction methods of the three-dimensional spatial model of the various embodiments of the present disclosure described above and/or other desired functions.
In one example, the electronic device may further include: an input device and an output device, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device may include, for example, a keyboard, a mouse, and the like.
The output device may output various information including the determined distance information, direction information, and the like to the outside. The output devices may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device relevant to the present disclosure are shown in fig. 8, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device may include any other suitable components, depending on the particular application.
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the method of reconstruction of a three-dimensional spatial model according to various embodiments of the present disclosure described in the above-mentioned part of the description.
The computer program product may write program code for carrying out operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps of the method of reconstructing a three-dimensional spatial model according to various embodiments of the present disclosure described in the above section of the present specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments. It should be noted, however, that the advantages, effects, and the like mentioned in the present disclosure are merely examples rather than limitations, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the specific details disclosed above are provided only for the purposes of illustration and description and are not intended to be limiting, since the disclosure is not limited to those specific details.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given as illustrative examples only and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The phrase "such as" as used herein means, and is used interchangeably with, the phrase "such as but not limited to."
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A method of reconstructing a three-dimensional spatial model, the method comprising:
acquiring point cloud of a target scene by a laser radar in an SLAM (simultaneous localization and mapping) mode, and generating a three-dimensional point cloud model of the target scene based on the acquired point cloud;
determining at least one image acquisition point location in the target scene based on the three-dimensional point cloud model;
respectively carrying out image acquisition on each image acquisition point in the at least one image acquisition point to obtain a panoramic image of the target scene at the at least one image acquisition point;
and fitting the three-dimensional point cloud model with the panoramic image of the at least one image acquisition point location to obtain a three-dimensional space model of the target scene.
2. The reconstruction method according to claim 1, wherein the acquiring a point cloud of the target scene by the laser radar in the SLAM (simultaneous localization and mapping) mode comprises:
carrying out omnibearing mobile scanning on the target scene by using a laser radar to obtain continuous multi-frame point cloud data of the target scene;
and determining the pose variation between any two adjacent frames of point cloud data in the continuous multi-frame point cloud data based on an inertial sensor.
3. The reconstruction method of claim 2, wherein the generating a three-dimensional point cloud model of the target scene based on the acquired point cloud comprises:
sequentially establishing a three-dimensional point cloud coordinate system corresponding to each frame of point cloud data based on each frame of point cloud data in the continuous multi-frame point cloud data;
selecting a three-dimensional point cloud coordinate system corresponding to the first frame of point cloud data as a three-dimensional point cloud reference coordinate system according to the acquisition time sequence of each frame of point cloud data in the continuous multi-frame point cloud data;
and converting the three-dimensional point cloud data in the remaining three-dimensional point cloud coordinate system except the three-dimensional point cloud reference coordinate system into the three-dimensional point cloud reference coordinate system based on the pose variation, and sequentially splicing continuous multi-frame point cloud data according to the acquisition time sequence to obtain the three-dimensional point cloud model of the target scene.
4. The reconstruction method according to claim 3, wherein the pose change amount comprises a rotation matrix and a translation vector between three-dimensional point cloud coordinate systems respectively corresponding to any two adjacent frames of point cloud data;
based on the pose variation, converting the three-dimensional point cloud data in the remaining three-dimensional point cloud coordinate system except the three-dimensional point cloud reference coordinate system into the three-dimensional point cloud reference coordinate system, and sequentially splicing continuous multi-frame point cloud data according to the acquisition time sequence, wherein the method comprises the following steps:
according to the acquisition time sequence, determining a direct rotation matrix and a direct translation vector between any remaining three-dimensional point cloud coordinate system except the three-dimensional point cloud reference coordinate system and the three-dimensional point cloud reference coordinate system;
and superposing the three-dimensional point cloud data in any three-dimensional point cloud coordinate system with the corresponding direct rotation matrix and direct translation vector according to the acquisition time sequence.
5. The reconstruction method according to any one of claims 1 to 4, wherein the target scene is outdoors; the acquiring a point cloud of the target scene by the laser radar in the SLAM (simultaneous localization and mapping) mode and generating a three-dimensional point cloud model of the target scene based on the acquired point cloud further comprises:
based on a global satellite navigation system, carrying out all-directional mobile scanning on the target scene by utilizing a laser radar to obtain continuous multi-frame point cloud data of the target scene based on the global satellite navigation system;
and splicing the continuous multi-frame point cloud data based on a global satellite navigation system coordinate system to obtain a three-dimensional point cloud model of the target scene.
6. The reconstruction method according to any one of claims 1 to 4, wherein the determining at least one image acquisition point location in the target scene based on the three-dimensional point cloud model comprises:
based on the three-dimensional point cloud model, carrying out space segmentation on the target scene to obtain a plurality of subspaces;
and determining the image acquisition point locations according to the occlusion relationships and occupancy information between and within the subspaces.
7. The reconstruction method according to claim 1, wherein said respectively performing image acquisition at each of the at least one image acquisition points to obtain a panoramic image of the target scene at the at least one image acquisition point comprises:
directly carrying out image acquisition on the target scene at the image acquisition point by using a panoramic camera to obtain a panoramic image of the target scene at the image acquisition point and corresponding position information of the panoramic image; or,
acquiring the target scene at the image acquisition point by using a non-panoramic camera to obtain a plurality of ordinary images of the target scene that overlap one another at the image acquisition point, and fusing the overlapping ordinary images to obtain a panoramic image corresponding to the image acquisition point and corresponding position information of the panoramic image;
the corresponding position information of the panoramic image is determined based on positioning information of the laser SLAM determined by the laser radar in an instant positioning and map building SLAM mode and external reference of the panoramic camera or the non-panoramic camera relative to the laser radar.
8. The reconstruction method according to any one of claims 1 to 4, wherein the target scene is outdoors, the reconstruction method further comprising:
when the point cloud of the target scene is collected by the laser radar in the SLAM (simultaneous localization and mapping) mode, synchronously photographing the target scene with a panoramic camera to obtain a corresponding panoramic image and corresponding position information of the panoramic image;
and the corresponding position information of the panoramic image is determined based on the positioning information determined by the laser radar in the SLAM (simultaneous localization and mapping) mode and the extrinsic parameters of the panoramic camera relative to the laser radar.
9. An apparatus for reconstructing a three-dimensional spatial model, the apparatus comprising:
a point cloud model generation unit configured to: acquiring point cloud of a target scene by a laser radar in an SLAM (simultaneous localization and mapping) mode, and generating a three-dimensional point cloud model of the target scene based on the acquired point cloud;
a collection point location determination unit configured to: determining at least one image acquisition point location in the target scene based on the three-dimensional point cloud model;
an image acquisition unit configured to: respectively carrying out image acquisition on each image acquisition point in the at least one image acquisition point to obtain a panoramic image of the target scene at the at least one image acquisition point;
and a fitting unit configured to: fitting the three-dimensional point cloud model with the panoramic image of the at least one image acquisition point location to obtain a three-dimensional space model of the target scene.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method of reconstructing a three-dimensional spatial model as set forth in any one of the preceding claims 1 to 8.
CN202111132270.4A 2021-09-27 2021-09-27 Method and device for reconstructing three-dimensional space model and storage medium Active CN113570721B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111132270.4A CN113570721B (en) 2021-09-27 2021-09-27 Method and device for reconstructing three-dimensional space model and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111132270.4A CN113570721B (en) 2021-09-27 2021-09-27 Method and device for reconstructing three-dimensional space model and storage medium

Publications (2)

Publication Number Publication Date
CN113570721A true CN113570721A (en) 2021-10-29
CN113570721B CN113570721B (en) 2021-12-21

Family

ID=78174704

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111132270.4A Active CN113570721B (en) 2021-09-27 2021-09-27 Method and device for reconstructing three-dimensional space model and storage medium

Country Status (1)

Country Link
CN (1) CN113570721B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187729A (en) * 2022-07-18 2022-10-14 北京城市网邻信息技术有限公司 Three-dimensional model generation method, device, equipment and storage medium
CN115222602A (en) * 2022-08-15 2022-10-21 北京城市网邻信息技术有限公司 Image splicing method, device, equipment and storage medium
CN115330966A (en) * 2022-08-15 2022-11-11 北京城市网邻信息技术有限公司 Method, system, device and storage medium for generating house type graph
CN115330942A (en) * 2022-08-11 2022-11-11 北京城市网邻信息技术有限公司 Multilayer space three-dimensional modeling method, device and computer readable storage medium
CN115330652A (en) * 2022-08-15 2022-11-11 北京城市网邻信息技术有限公司 Point cloud splicing method and device and storage medium
CN115423934A (en) * 2022-08-12 2022-12-02 北京城市网邻信息技术有限公司 House type graph generation method and device, electronic equipment and storage medium
CN115426488A (en) * 2022-11-04 2022-12-02 中诚华隆计算机技术有限公司 Virtual reality image data transmission method, system and chip
CN115830161A (en) * 2022-11-21 2023-03-21 北京城市网邻信息技术有限公司 Method, device and equipment for generating house type graph and storage medium
CN115861528A (en) * 2022-11-21 2023-03-28 北京城市网邻信息技术有限公司 Camera and house type graph generating method
CN116233391A (en) * 2023-03-03 2023-06-06 北京有竹居网络技术有限公司 Apparatus, method and storage medium for image processing
CN116385612A (en) * 2023-03-16 2023-07-04 如你所视(北京)科技有限公司 Global illumination representation method and device under indoor scene and storage medium
CN116449391A (en) * 2023-04-17 2023-07-18 深圳直角设计工程有限公司 Indoor panoramic imaging method and system based on 3D point cloud
WO2023134546A1 (en) * 2022-01-12 2023-07-20 如你所视(北京)科技有限公司 Scene space model construction method and apparatus, and storage medium
CN117537735A (en) * 2023-10-20 2024-02-09 中国中建设计研究院有限公司 Measurement method and device
CN117558295A (en) * 2024-01-11 2024-02-13 北京谛声科技有限责任公司 Voiceprint monitoring method and device based on SLAM and SONAH fusion

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106443687A (en) * 2016-08-31 2017-02-22 欧思徕(北京)智能科技有限公司 Piggyback mobile surveying and mapping system based on laser radar and panorama camera
CN108735052A (en) * 2018-05-09 2018-11-02 北京航空航天大学青岛研究院 A kind of augmented reality experiment with falling objects method based on SLAM
CN109192055A (en) * 2018-08-23 2019-01-11 国网天津市电力公司 A kind of matched method of substation equipment nameplate threedimensional model quick obtaining
CN109255813A (en) * 2018-09-06 2019-01-22 大连理工大学 A kind of hand-held object pose real-time detection method towards man-machine collaboration
CN109358342A (en) * 2018-10-12 2019-02-19 东北大学 Three-dimensional laser SLAM system and control method based on 2D laser radar
CN109801358A (en) * 2018-12-06 2019-05-24 宁波市电力设计院有限公司 A kind of substation's three-dimensional investigation method scanning and put cloud visual fusion based on SLAM
EP3782119A1 (en) * 2019-03-27 2021-02-24 Mitsubishi Electric Corporation Detection, tracking and 3d modeling of objects with sparse rgb-d slam and interactive perception
CN110610149A (en) * 2019-09-03 2019-12-24 卓尔智联(武汉)研究院有限公司 Information processing method and device and computer storage medium
CN112837406A (en) * 2021-01-11 2021-05-25 聚好看科技股份有限公司 Three-dimensional reconstruction method, device and system

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023134546A1 (en) * 2022-01-12 2023-07-20 如你所视(北京)科技有限公司 Scene space model construction method and apparatus, and storage medium
CN115187729A (en) * 2022-07-18 2022-10-14 北京城市网邻信息技术有限公司 Three-dimensional model generation method, device, equipment and storage medium
CN115330942B (en) * 2022-08-11 2023-03-28 北京城市网邻信息技术有限公司 Multilayer space three-dimensional modeling method, device and computer readable storage medium
CN115330942A (en) * 2022-08-11 2022-11-11 北京城市网邻信息技术有限公司 Multilayer space three-dimensional modeling method, device and computer readable storage medium
CN115423934A (en) * 2022-08-12 2022-12-02 北京城市网邻信息技术有限公司 House type graph generation method and device, electronic equipment and storage medium
CN115423934B (en) * 2022-08-12 2024-03-08 北京城市网邻信息技术有限公司 House type diagram generation method and device, electronic equipment and storage medium
CN115222602A (en) * 2022-08-15 2022-10-21 北京城市网邻信息技术有限公司 Image splicing method, device, equipment and storage medium
CN115330966A (en) * 2022-08-15 2022-11-11 北京城市网邻信息技术有限公司 Method, system, device and storage medium for generating house type graph
CN115330652A (en) * 2022-08-15 2022-11-11 北京城市网邻信息技术有限公司 Point cloud splicing method and device and storage medium
CN115330652B (en) * 2022-08-15 2023-06-16 北京城市网邻信息技术有限公司 Point cloud splicing method, equipment and storage medium
CN115426488A (en) * 2022-11-04 2022-12-02 中诚华隆计算机技术有限公司 Virtual reality image data transmission method, system and chip
CN115426488B (en) * 2022-11-04 2023-01-10 中诚华隆计算机技术有限公司 Virtual reality image data transmission method, system and chip
CN115830161B (en) * 2022-11-21 2023-10-31 北京城市网邻信息技术有限公司 House type diagram generation method, device, equipment and storage medium
CN115861528A (en) * 2022-11-21 2023-03-28 北京城市网邻信息技术有限公司 Camera and house type graph generating method
CN115830161A (en) * 2022-11-21 2023-03-21 北京城市网邻信息技术有限公司 Method, device and equipment for generating house type graph and storage medium
CN115861528B (en) * 2022-11-21 2023-09-19 北京城市网邻信息技术有限公司 Camera and house type diagram generation method
CN116233391A (en) * 2023-03-03 2023-06-06 北京有竹居网络技术有限公司 Apparatus, method and storage medium for image processing
CN116385612B (en) * 2023-03-16 2024-02-20 如你所视(北京)科技有限公司 Global illumination representation method and device under indoor scene and storage medium
CN116385612A (en) * 2023-03-16 2023-07-04 如你所视(北京)科技有限公司 Global illumination representation method and device under indoor scene and storage medium
CN116449391A (en) * 2023-04-17 2023-07-18 深圳直角设计工程有限公司 Indoor panoramic imaging method and system based on 3D point cloud
CN116449391B (en) * 2023-04-17 2024-05-17 深圳直角设计工程有限公司 Indoor panoramic imaging method and system based on 3D point cloud
CN117537735A (en) * 2023-10-20 2024-02-09 中国中建设计研究院有限公司 Measurement method and device
CN117537735B (en) * 2023-10-20 2024-04-30 中国中建设计研究院有限公司 Measurement method and device
CN117558295A (en) * 2024-01-11 2024-02-13 北京谛声科技有限责任公司 Voiceprint monitoring method and device based on SLAM and SONAH fusion
CN117558295B (en) * 2024-01-11 2024-03-26 北京谛声科技有限责任公司 Voiceprint monitoring method and device based on SLAM and SONAH fusion

Also Published As

Publication number Publication date
CN113570721B (en) 2021-12-21

Similar Documents

Publication Publication Date Title
CN113570721B (en) Method and device for reconstructing three-dimensional space model and storage medium
US10984554B2 (en) Monocular vision tracking method, apparatus and non-volatile computer-readable storage medium
US10740975B2 (en) Mobile augmented reality system
CN112894832B (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
US10896497B2 (en) Inconsistency detecting system, mixed-reality system, program, and inconsistency detecting method
CN107836012B (en) Projection image generation method and device, and mapping method between image pixel and depth value
Zollmann et al. Augmented reality for construction site monitoring and documentation
Verykokou et al. UAV-based 3D modelling of disaster scenes for Urban Search and Rescue
WO2019127347A1 (en) Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product
US20210056751A1 (en) Photography-based 3d modeling system and method, and automatic 3d modeling apparatus and method
EP2807629B1 (en) Mobile device configured to compute 3d models based on motion sensor data
Gong et al. Extrinsic calibration of a 3D LIDAR and a camera using a trihedron
JP2023546739A (en) Methods, apparatus, and systems for generating three-dimensional models of scenes
CN112465907A (en) Indoor visual navigation method and system
US8509522B2 (en) Camera translation using rotation from device
WO2022025283A1 (en) Measurement processing device, method, and program
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
CN113516707A (en) Object positioning method and device based on image
CN114283243A (en) Data processing method and device, computer equipment and storage medium
CN114092646A (en) Model generation method and device, computer equipment and storage medium
CN113646606A (en) Control method, control equipment, unmanned aerial vehicle and storage medium
CA3102860C (en) Photography-based 3d modeling system and method, and automatic 3d modeling apparatus and method
Regula et al. Position estimation using novel calibrated indoor positioning system
US20230410451A1 (en) Augmented reality implement apparatus and method using mobile scanned object model scaling
CN117095131B (en) Three-dimensional reconstruction method, equipment and storage medium for object motion key points

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant