CN116503566B - Three-dimensional modeling method and device, electronic equipment and storage medium - Google Patents

Three-dimensional modeling method and device, electronic equipment and storage medium

Info

Publication number
CN116503566B
CN116503566B · CN202310745532.7A
Authority
CN
China
Prior art keywords
point cloud
mapping
target object
dimensional
cloud map
Prior art date
Legal status
Active
Application number
CN202310745532.7A
Other languages
Chinese (zh)
Other versions
CN116503566A (en)
Inventor
黎浩文
Current Assignee
Shenzhen Qiyu Innovation Technology Co ltd
Original Assignee
Shenzhen Qiyu Innovation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Qiyu Innovation Technology Co ltd
Priority to CN202310745532.7A
Publication of CN116503566A
Application granted
Publication of CN116503566B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30241 Trajectory
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Abstract

The application provides a three-dimensional modeling method, a three-dimensional modeling device, an electronic device and a storage medium, wherein the three-dimensional modeling method comprises the following steps: acquiring a plurality of visual images of a target object, and acquiring point cloud map data and a radar motion track of the target object; optimizing the point cloud map data according to the plurality of visual images to obtain an optimized point cloud map; acquiring first mapping points of track points on the radar motion track on the optimized point cloud map and second mapping points of feature points in the plurality of visual images on the optimized point cloud map; and carrying out three-dimensional modeling on the target object according to the first mapping points and the second mapping points to obtain a three-dimensional model of the target object. In the implementation of this scheme, the target object is three-dimensionally modeled through the first mapping points mapped from the track points and the second mapping points mapped from the feature points on the point cloud map optimized with the visual images of the target object, so that the precision of three-dimensional modeling is effectively improved.

Description

Three-dimensional modeling method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the technical field of computer vision and three-dimensional modeling, and in particular, to a three-dimensional modeling method, apparatus, electronic device, and storage medium.
Background
Simultaneous localization and mapping (Simultaneous Localization and Mapping, SLAM) means that a robot localizes itself by means of the sensors it carries while incrementally building a map of the environment; this is the premise and basis for an intelligent robot to complete tasks autonomously in an unknown environment.
Currently, three-dimensional models are mostly built by SLAM alone, specifically for example: after the SLAM point cloud map data is obtained, a triangular-mesh three-dimensional model is constructed directly from the point cloud map data.
Disclosure of Invention
The embodiment of the application aims to provide a three-dimensional modeling method, a three-dimensional modeling device, electronic equipment and a storage medium, which are used for improving the precision of three-dimensional modeling.
The embodiment of the application provides a three-dimensional modeling method, which comprises the following steps: acquiring a plurality of visual images of a target object, and acquiring point cloud map data and a radar motion track of the target object; optimizing the point cloud map data according to the plurality of visual images to obtain an optimized point cloud map; acquiring first mapping points of track points on the radar motion track on the optimized point cloud map and second mapping points of feature points in the plurality of visual images on the optimized point cloud map; and carrying out three-dimensional modeling on the target object according to the first mapping points and the second mapping points to obtain a three-dimensional model of the target object. In the implementation of this scheme, the target object is three-dimensionally modeled through the first mapping points mapped from the track points and the second mapping points mapped from the feature points on the point cloud map optimized with the visual images of the target object, so that the precision of three-dimensional modeling is effectively improved.
Optionally, in an embodiment of the present application, optimizing the point cloud map data according to the plurality of visual images includes: associating the plurality of visual images according to the image similarity to obtain a plurality of associated images; for each associated image of the plurality of associated images, extracting the feature points in the associated image and the three-dimensional point cloud corresponding to the feature points, calculating the reprojection photometric error between the feature points and the associated image, and calculating the geometric consistency error from the three-dimensional point cloud to the point cloud map data; and optimizing the point cloud map data by taking the reprojection photometric errors and the geometric consistency errors as the objective function. In the implementation of this scheme, the point cloud map data is optimized according to the plurality of visual images, that is, the unordered multi-view images and the point cloud map data acquired by the laser radar are jointly optimized, so that scene details of the point cloud map are supplemented and the accuracy of three-dimensional modeling is effectively improved.
Optionally, in the embodiment of the present application, extracting the feature point in the associated image and the three-dimensional point cloud corresponding to the feature point includes: performing scale-invariant feature transformation on the associated images to obtain feature points; and performing epipolar geometry triangulation on the characteristic points to obtain a three-dimensional point cloud. In the implementation process of the scheme, the feature points are obtained by carrying out scale invariant feature transformation on the associated images, and epipolar geometry triangulation is carried out on the feature points, so that detail information such as the feature points is reserved, and the accuracy of three-dimensional modeling is effectively improved.
Optionally, in an embodiment of the present application, after extracting a feature point in the associated image and a three-dimensional point cloud corresponding to the feature point, the method further includes: and if the distance between the three-dimensional point cloud projected to the pixel plane of the point cloud map data is greater than the projection threshold value, eliminating the three-dimensional point cloud. In the implementation process of the scheme, the three-dimensional point cloud data is screened by the distance between the projection of the three-dimensional point cloud to the pixel plane of the point cloud map data, so that the projection distance of the point cloud is effectively utilized, the quality of the point cloud data is improved, and the precision of three-dimensional modeling is effectively improved.
Optionally, in an embodiment of the present application, after extracting a feature point in the associated image and a three-dimensional point cloud corresponding to the feature point, the method further includes: and if the luminosity difference between the luminosity of the three-dimensional point cloud and the luminosity of the projection point of the pixel plane is larger than the luminosity error, eliminating the three-dimensional point cloud. In the implementation process of the scheme, the three-dimensional point cloud is removed through the luminosity difference between the luminosity of the three-dimensional point cloud and the luminosity of the projection points of the pixel plane, so that the observation relation between the point cloud and the view is effectively utilized to screen the three-dimensional point cloud data, the quality of the point cloud data is improved, and the accuracy of three-dimensional modeling is effectively improved.
Optionally, in an embodiment of the present application, three-dimensional modeling of the target object according to the first mapping point and the second mapping point includes: carrying out weighted fusion on the first mapping points and the second mapping points to obtain fused mapping points; and carrying out three-dimensional modeling on the target object according to the fused mapping points. In the implementation process of the scheme, the first mapping points and the second mapping points are subjected to weighted fusion to obtain the fused mapping points, and the target object is subjected to three-dimensional modeling according to the fused mapping points, so that the quality of the fused mapping points is improved, and the accuracy of three-dimensional modeling is effectively improved.
Optionally, in an embodiment of the present application, after obtaining the three-dimensional model of the target object, the method further includes: and carrying out texture mapping on the three-dimensional model of the target object by using a plurality of visual images to obtain a mapped three-dimensional model. In the implementation process of the scheme, texture mapping is carried out on the three-dimensional model of the target object by using a plurality of visual images, so that the mapped three-dimensional model can have visual characteristics of a plurality of visual angles, and the reconstruction precision of the three-dimensional model is improved.
The embodiment of the application also provides a three-dimensional modeling device, which comprises: the data track acquisition module is used for acquiring a plurality of visual images of the target object, and acquiring point cloud map data and radar motion tracks of the target object; the point cloud map optimizing module is used for optimizing the point cloud map data according to the plurality of visual images to obtain an optimized point cloud map; the point cloud mapping acquisition module is used for acquiring first mapping points of track points on the radar motion track on the optimized point cloud map and second mapping points of characteristic points in the multiple visual images on the optimized point cloud map; the three-dimensional model obtaining module is used for carrying out three-dimensional modeling on the target object according to the first mapping points and the second mapping points to obtain a three-dimensional model of the target object.
Optionally, in an embodiment of the present application, the point cloud map optimization module includes: the associated image obtaining sub-module is used for associating a plurality of visual images according to the image similarity to obtain a plurality of associated images; the data error calculation sub-module is used for extracting characteristic points in the associated images and three-dimensional point clouds corresponding to the characteristic points aiming at each associated image in the plurality of associated images, calculating the reprojection photometric errors between the characteristic points and the associated images, and calculating the geometric consistency errors from the three-dimensional point clouds to the point cloud map data; and the point cloud map optimization sub-module is used for optimizing the point cloud map data by taking the reprojection photometric errors and the geometric consistency errors as objective functions.
Optionally, in an embodiment of the present application, the data error calculation sub-module includes: the feature point obtaining unit is used for carrying out scale-invariant feature transformation on the associated images to obtain feature points; and the characteristic point measuring unit is used for carrying out epipolar geometric triangulation on the characteristic points to obtain a three-dimensional point cloud.
Optionally, in an embodiment of the present application, the data error calculation submodule further includes: the first point cloud eliminating unit is used for eliminating the three-dimensional point cloud if the distance between the three-dimensional point cloud projected to the pixel plane of the point cloud map data is larger than the projection threshold value.
Optionally, in an embodiment of the present application, the data error calculation submodule further includes: and the second point cloud eliminating unit is used for eliminating the three-dimensional point cloud if the luminosity difference between the luminosity of the three-dimensional point cloud and the luminosity of the projection point of the pixel plane is larger than the luminosity error.
Optionally, in an embodiment of the present application, the three-dimensional model obtaining module includes: the mapping weighted fusion sub-module is used for carrying out weighted fusion on the first mapping points and the second mapping points to obtain fused mapping points; and the target object modeling module is used for carrying out three-dimensional modeling on the target object according to the fused mapping points.
Optionally, in an embodiment of the present application, the three-dimensional modeling apparatus further includes: and the model texture mapping module is used for performing texture mapping on the three-dimensional model of the target object by using a plurality of visual images to obtain a mapped three-dimensional model.
The embodiment of the application also provides an electronic device, which comprises: a processor and a memory storing machine-readable instructions executable by the processor, which, when executed by the processor, perform the method as described above.
Embodiments of the present application also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs a method as described above.
Additional features and advantages of embodiments of the application will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of embodiments of the application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application, and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort to a person having ordinary skill in the art.
FIG. 1 is a schematic flow chart of a three-dimensional modeling method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of texture mapping on a three-dimensional model of a target object according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a three-dimensional modeling apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it should be understood that the accompanying drawings in the embodiments of the present application are only for the purpose of illustration and description, and are not intended to limit the scope of protection of the embodiments of the present application. In addition, it should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in embodiments of the present application, illustrates operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to or removed from the flowcharts within the scope of embodiments of the present application.
In addition, the described embodiments are only a portion of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Accordingly, the following detailed description of the embodiments of the present application, which is provided in the accompanying drawings, is not intended to limit the scope of the claimed embodiments of the present application, but is merely representative of selected embodiments of the present application.
It is understood that "first" and "second" in the embodiments of the present application are used to distinguish similar objects. It will be appreciated by those skilled in the art that the words "first," "second," etc. do not limit the number or the order of execution, and that objects described as "first" and "second" are not necessarily different. In the description of the embodiments of the present application, the term "and/or" merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship. The term "a plurality of" refers to two or more (including two), and similarly, "a plurality of kinds" refers to two or more kinds (including two kinds).
Before describing the three-dimensional modeling method provided in the embodiments of the present application, some concepts involved in the embodiments of the present application are described:
three-dimensional models, which are three-dimensional polygonal representations of objects, are typically displayed using a computer or other video equipment; the displayed object can be a real-world entity or a fictitious thing, as small as an atom or as large as an extremely large structure, and anything that exists in physical nature can be represented by a three-dimensional model.
It should be noted that, the three-dimensional modeling method provided in the embodiment of the present application may be executed by an electronic device, where the electronic device refers to a device terminal having a function of executing a computer program or the server described above, and the device terminal is for example: smart phones, personal computers, tablet computers, personal digital assistants, or mobile internet appliances, etc. A server refers to a device that provides computing services over a network, such as: an x86 server and a non-x 86 server, the non-x 86 server comprising: mainframe, minicomputer, and UNIX servers.
Application scenarios to which the three-dimensional modeling method is applicable are described below, where the application scenarios include, but are not limited to: the three-dimensional modeling method can be used for three-dimensional modeling of target objects, so as to improve the accuracy of three-dimensional modeling and the like, wherein the target objects include but are not limited to: a physical building structure, a large building model or a game building model, etc. In addition, the three-dimensional modeling method can be applied to application scenes of film and television videos, game entertainment or digital twin, for example: the three-dimensional modeling method described above is used to construct a three-dimensional model (e.g., a building model, etc.) in a movie video of a movie or a television show, or the three-dimensional modeling method described above is used to construct a large building model, etc. in a game play, or the three-dimensional modeling method described above may be used to construct a traffic facility (e.g., a bridge, etc.) or a public smart building facility in a digital twin system in a smart traffic or smart building scene.
Please refer to fig. 1, which illustrates a flow chart of a three-dimensional modeling method provided in an embodiment of the present application; the main purpose of the three-dimensional modeling method is to reconstruct a three-dimensional model according to a plurality of visual images of a target object and point cloud map data, and the implementation mode of the method can comprise the following steps:
step S110: a plurality of visual images of a target object are acquired, and point cloud map data and radar motion trajectories of the target object are acquired.
Target object, which refers to a target object or object entity requiring three-dimensional modeling, herein includes, but is not limited to: the object entity refers to an object existing in reality, and can be an existing entity building structure or large building model, and the target object can be a virtual object, for example, a game building model, and the like.
The visual image refers to a two-dimensional image acquired visually from a target object, for example, a black-and-white photograph, a color photograph, or an infrared photograph, and the plurality of visual images may be images photographed at different angles or images that are not sequential (i.e., disordered).
The point cloud map data refers to point cloud data acquired by using a SLAM technology for map navigation, and specifically can be point cloud map data obtained by scanning a target object (for example, a building room) by a laser radar based on the SLAM technology such as a real-time positioning and modeling system, and the point cloud map data is sparse and unstructured data.
The radar motion trajectory refers to a motion trajectory of a laser radar when scanning a target object, for example, a motion trajectory of the laser radar when scanning in a building room.
Step S120: and optimizing the point cloud map data according to the plurality of visual images to obtain an optimized point cloud map.
It will be appreciated that the point cloud map data is mostly acquired using lidar scanning, with the accuracy of acquisition and measurement of the lidar typically being on the order of centimeters. In order to improve the data precision of the point cloud map, the point cloud map data can be optimized according to a plurality of visual images, namely, the unordered visual angle images and the point cloud map data acquired by the laser radar are used for carrying out joint optimization, so that scene details of the point cloud map are supplemented, and the point cloud map after joint optimization is obtained. Since the joint optimization process herein is relatively complex, the specific process of the joint optimization will be described in detail in the following embodiments.
Step S130: and acquiring first mapping points of track points on the radar motion track on the optimized point cloud map and second mapping points of characteristic points in the multiple visual images on the optimized point cloud map.
Step S140: and carrying out three-dimensional modeling on the target object according to the first mapping points and the second mapping points to obtain a three-dimensional model of the target object.
In the implementation process of the scheme, the main purpose of the three-dimensional modeling method is to reconstruct a complete and structured three-dimensional model according to a plurality of visual images of a target object and sparse and unstructured point cloud map data.
As an alternative embodiment of the above step S110, an embodiment of acquiring the point cloud map data and the radar motion trail of the target object may include:
step S111: and synchronously positioning and mapping SLAM is carried out on the target object to obtain point cloud map data and radar motion tracks.
The embodiment of step S111 described above is, for example: the target object is synchronously positioned and mapped by the laser radar SLAM, and the point cloud map data and the radar motion trail are obtained by using an executable program compiled or interpreted by a preset programming language, and the programming language can be used, for example: C. c++, java, BASIC, javaScript, LISP, shell, perl, ruby, python, PHP, etc.
As an alternative embodiment of the above step S120, an embodiment of optimizing the point cloud map data according to the plurality of visual images may include:
step S121: and correlating the plurality of visual images according to the image similarity to obtain a plurality of correlation images.
The embodiment of step S121 described above is, for example: image feature detection is performed on the plurality of visual images using an image feature detection algorithm to obtain image features, where the image feature detection algorithm includes, but is not limited to: scale-invariant feature transform (Scale-Invariant Feature Transform, SIFT), speeded-up robust features (Speeded Up Robust Features, SURF), FAST (Features from Accelerated Segment Test), ORB (Oriented FAST and Rotated BRIEF), and/or Harris, among others. The image similarity between the image features of every two visual images of the plurality of visual images is then calculated using a continuous bag-of-words (Continuous Bag Of Words, CBOW) approach. Finally, the plurality of visual images are associated according to the image similarity. The plurality of visual images may include a first visual image and a second visual image, specifically for example: it is judged whether the image similarity between the image features of the first visual image and the image features of the second visual image is greater than a preset similarity threshold; if so, the first visual image and the second visual image are associated to obtain an associated image. Similarly, performing the above operation on every two visual images of the plurality of visual images yields a plurality of associated images.
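As an illustrative sketch of this association step only (Python and OpenCV are assumed; ORB descriptors and a simple match-ratio similarity stand in for the image features and the bag-of-words similarity described above, and the function name and threshold value are assumptions):

    import cv2
    import itertools

    def associate_images(images, sim_threshold=0.3):
        """Associate visual images whose feature similarity exceeds a threshold."""
        orb = cv2.ORB_create(nfeatures=2000)
        feats = [orb.detectAndCompute(img, None) for img in images]
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        pairs = []
        for i, j in itertools.combinations(range(len(images)), 2):
            kp_i, des_i = feats[i]
            kp_j, des_j = feats[j]
            if des_i is None or des_j is None:
                continue
            matches = matcher.match(des_i, des_j)
            # similarity: fraction of features that found a mutual match
            sim = len(matches) / max(1, min(len(kp_i), len(kp_j)))
            if sim > sim_threshold:
                pairs.append((i, j, sim))  # (first image, second image, similarity)
        return pairs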
Step S122: and extracting a characteristic point in each associated image and a three-dimensional point cloud corresponding to the characteristic point according to each associated image in the plurality of associated images, calculating a reprojection photometric error between the characteristic point and the associated image, and calculating a geometric consistency error of three-dimensional point cloud-to-point cloud map data.
The embodiment of step S122 described above is, for example: for each of the plurality of associated images, the two-dimensional feature points of the associated image in the pixel coordinate system are extracted by scale-invariant feature transform (SIFT), and the three-dimensional point cloud corresponding to the feature points is acquired. The reprojection photometric error between the two-dimensional feature points and the associated images is then calculated, for example as $E_{pho}=\sum_{(i,j,k)\in\mathcal{V}}\big\|I_i(p_k)-I_j\big(\pi_{i\to j}(p_k)\big)\big\|$, where $E_{pho}$ denotes the reprojection photometric error, $I_i$ denotes the $i$-th visual image, $p_k$ denotes a two-dimensional feature point, $\mathcal{V}$ denotes the observation relation of the two-dimensional feature points (which can be obtained by carrying the index data and pose data of the currently scanned visual image along with the point cloud map data and filtering and homogenizing them), $I_j$ denotes the $j$-th visual image, $K$ denotes the intrinsic matrix of the camera of the visual image, $T$ denotes the extrinsic matrix of the camera of the visual image, and $\pi_{i\to j}(p_k)$ denotes the projection of the two-dimensional feature point $k$ from the $i$-th visual image into the $j$-th visual image through $K$ and $T$, so that the photometric (pixel) values at the two locations are compared.
Finally, the geometric consistency error from the three-dimensional point cloud to the point cloud map data is calculated, for example as $E_{geo}=\sum_{k}\big|n_p^{\top}\,\pi^{-1}(K,T_i,p_k)+d_p\big|$, where $E_{geo}$ denotes the geometric consistency error from the three-dimensional point cloud corresponding to the two-dimensional feature points to the point cloud map data, $I_i$ denotes the $i$-th visual image, $p_k$ denotes a two-dimensional feature point, $K$ denotes the intrinsic matrix of the camera of the visual image, $T_i$ denotes the extrinsic matrix of the camera of the visual image, $\pi^{-1}(K,T_i,p_k)$ denotes the projection of the two-dimensional feature point into the world coordinate system through the intrinsic and extrinsic matrices, and $n_p$ and $d_p$ denote the plane parameters of the plane $p$ in the point cloud map data.
Step S123: and optimizing the point cloud map data by taking the reprojection photometric errors and the geometric consistency errors as objective functions.
The embodiment of the above-mentioned step S123 is, for example: the reprojection photometric error and the geometric consistency error are taken as the objective function, for example $E=E_{pho}+\lambda\,E_{geo}$, the point cloud map data is optimized accordingly, and a least-squares optimization problem is constructed and solved, where $E$ denotes the objective function, $E_{pho}$ denotes the reprojection photometric error, $E_{geo}$ denotes the geometric consistency error from the three-dimensional point cloud corresponding to the two-dimensional feature points to the point cloud map data, and $\lambda$ denotes the weight between the reprojection photometric error and the geometric consistency error.
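A minimal sketch of how such a joint objective could be assembled and handed to a least-squares solver (Python with NumPy/SciPy is assumed; the bilinear sampling, the projection model, the argument layout and the weight value are illustrative assumptions, not the patent's exact formulation):

    import numpy as np
    from scipy.optimize import least_squares

    def bilinear(img, uv):
        """Sample a grayscale image at sub-pixel locations uv (N, 2)."""
        x0 = np.clip(np.floor(uv[:, 0]).astype(int), 0, img.shape[1] - 2)
        y0 = np.clip(np.floor(uv[:, 1]).astype(int), 0, img.shape[0] - 2)
        dx, dy = uv[:, 0] - x0, uv[:, 1] - y0
        return (img[y0, x0] * (1 - dx) * (1 - dy) + img[y0, x0 + 1] * dx * (1 - dy)
                + img[y0 + 1, x0] * (1 - dx) * dy + img[y0 + 1, x0 + 1] * dx * dy)

    def project(K, T, X):
        """Project world points X (N, 3) with intrinsics K and a 4x4 extrinsic T."""
        Xc = (T[:3, :3] @ X.T).T + T[:3, 3]
        uv = (K @ Xc.T).T
        return uv[:, :2] / uv[:, 2:3]

    def joint_residuals(X_flat, img_i, img_j, uv_i, K, T_j, n_p, d_p, lam):
        """Photometric residuals (image i vs. reprojection into image j) plus
        weighted point-to-plane residuals against an assumed plane (n_p, d_p)."""
        X = X_flat.reshape(-1, 3)
        r_pho = bilinear(img_i, uv_i) - bilinear(img_j, project(K, T_j, X))
        r_geo = lam * (X @ n_p + d_p)
        return np.concatenate([r_pho, r_geo])

    # Usage (all inputs assumed prepared by the earlier steps):
    # result = least_squares(joint_residuals, X0.ravel(),
    #                        args=(img_i, img_j, uv_i, K, T_j, n_p, d_p, 0.5))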
As an alternative embodiment of the step S122, an embodiment of extracting the feature point and the three-dimensional point cloud corresponding to the feature point in the associated image may include:
step S122a: and carrying out scale-invariant feature transformation on the associated images to obtain feature points.
The embodiment of step S122a described above is, for example: for each of the plurality of associated images, extracting two-dimensional feature points of the associated image under a pixel coordinate system by a scale-invariant feature transform (SIFT) mode.
Step S122b: and performing epipolar geometry triangulation on the characteristic points to obtain a three-dimensional point cloud.
The embodiment of step S122b described above is, for example: epipolar geometric triangulation (Epipolar Geometric Triangulation) is performed on the two-dimensional feature points to obtain the three-dimensional point cloud corresponding to the two-dimensional feature points.
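A minimal sketch of SIFT extraction followed by epipolar-geometry triangulation using OpenCV (the camera intrinsic matrix K, the Lowe ratio value and the function name are assumed for illustration):

    import cv2
    import numpy as np

    def sift_triangulate(img1, img2, K):
        """Extract SIFT features, match them, recover the relative pose from the
        essential matrix, and triangulate the matches into a 3D point cloud."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
        pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # homogeneous, 4xN
        return (X_h[:3] / X_h[3]).T                          # 3D point cloud, (N, 3)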
As an optional implementation manner of the step S122, after extracting the feature point and the three-dimensional point cloud corresponding to the feature point in the associated image, the method may further include:
Step S122c: and if the distance between the three-dimensional point cloud projected to the pixel plane of the point cloud map data is greater than the projection threshold value, eliminating the three-dimensional point cloud.
The embodiment of step S122c described above is, for example: the distance $d(P)$ between the projection of the three-dimensional point cloud onto the pixel plane of the point cloud map data is calculated, where $P$ denotes a certain three-dimensional point cloud and $vis(P)$ denotes all the views in which the three-dimensional point cloud is visible, and the distance is evaluated over the views in $vis(P)$. If the distance between the projection of the three-dimensional point cloud onto the pixel plane of the point cloud map data is greater than the projection threshold, the two-dimensional feature point and the three-dimensional point cloud corresponding to the two-dimensional feature point are eliminated.
And/or after extracting the feature point and the three-dimensional point cloud corresponding to the feature point in the associated image, the method may further include:
step S122d: and if the luminosity difference between the luminosity of the three-dimensional point cloud and the luminosity of the projection point of the pixel plane is larger than the luminosity error, eliminating the three-dimensional point cloud.
The embodiment of step S122d described above is, for example: the photometric difference between the photometric value of the three-dimensional point cloud and the photometric value of its projection point on the pixel plane is calculated, where the projection point is obtained by projecting the point onto the pixel plane through the intrinsic matrix $K$ and the extrinsic matrix $T$ of the camera of the visual image, using the plane parameters $n_p$ and $d_p$ of the plane $p$ where applicable. If the photometric difference between the photometric value of the three-dimensional point cloud and the photometric value of the projection point on the pixel plane is greater than the photometric error threshold, the two-dimensional feature point and the three-dimensional point cloud corresponding to the two-dimensional feature point are eliminated.
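A combined sketch of the two rejection rules above (projection-distance and photometric checks); the thresholds, array shapes and the assumption that each view stores per-point observed pixel coordinates and intensities are illustrative:

    import numpy as np

    def filter_points(X, uv_obs, intensities, images, K, T_list,
                      proj_thresh=2.0, photo_thresh=25.0):
        """Keep only 3D points whose projection distance and photometric
        difference stay within the thresholds in every view observing them."""
        keep = np.ones(len(X), dtype=bool)
        for img, T, uv, I_pt in zip(images, T_list, uv_obs, intensities):
            Xc = (T[:3, :3] @ X.T).T + T[:3, 3]
            uv_proj = (K @ Xc.T).T
            uv_proj = uv_proj[:, :2] / uv_proj[:, 2:3]
            dist = np.linalg.norm(uv_proj - uv, axis=1)        # projection distance
            u = np.clip(uv_proj[:, 0].astype(int), 0, img.shape[1] - 1)
            v = np.clip(uv_proj[:, 1].astype(int), 0, img.shape[0] - 1)
            photo = np.abs(img[v, u].astype(float) - I_pt)     # photometric difference
            keep &= (dist <= proj_thresh) & (photo <= photo_thresh)
        return X[keep], keep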
As an alternative embodiment of the above step S140, an embodiment of three-dimensionally modeling the target object according to the first mapping point and the second mapping point may include:
step S141: and carrying out weighted fusion on the first mapping point and the second mapping point to obtain a fused mapping point.
The embodiment of step S141 is, for example: it can be understood that, because the scales of the source point cloud map data collected by different laser devices differ, the errors of the point cloud map data also differ; therefore, adaptive weighted fusion is performed on the first mapping points and the second mapping points to obtain the fused mapping points, which improves the stability of the mesh extracted during the three-dimensional modeling process. The adaptive weighting here includes, but is not limited to: weighting by direction and surface angle, weighting by geometric information statistics, or adaptive filtering.
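A minimal sketch of one possible adaptive weighting, assuming the first and second mapping points have been put into one-to-one correspondence and that a per-source error estimate is available (the inverse-error form is an assumption):

    import numpy as np

    def fuse_mapping_points(first_pts, second_pts, err_first, err_second):
        """Inverse-error weighted fusion of corresponding mapping points (N, 3)."""
        w1, w2 = 1.0 / err_first, 1.0 / err_second
        return (w1 * first_pts + w2 * second_pts) / (w1 + w2)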
Step S142: and carrying out three-dimensional modeling on the target object according to the fused mapping points to obtain a three-dimensional model of the target object.
The embodiment of step S142 described above is, for example: after the first mapping points and the second mapping points are weighted and fused, the point cloud map data including the fused mapping points is obtained. Then, Delaunay (3D Delaunay) triangulation is performed on the point cloud map data including the fused mapping points; the Delaunay triangulation generates a set of mutually disjoint tetrahedra from the point cloud map data, and a directed graph of the target object can then be constructed by taking each tetrahedron of the tetrahedron set as a node and the adjacent faces of the tetrahedra as edges. Finally, three-dimensional modeling is performed on the directed graph of the target object, specifically for example: the surface of the directed graph of the target object can be extracted using a maximum-flow/minimum-cut graph-cut method, so that the three-dimensional model of the target object is obtained, improving the multi-scale character, integrity and refinement of the three-dimensional model.
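A minimal sketch of the Delaunay tetrahedralization and the tetrahedron adjacency graph described above (SciPy and NetworkX are assumed; the source/sink terminals and the edge capacities needed for the actual max-flow/min-cut are omitted here):

    import networkx as nx
    from scipy.spatial import Delaunay

    def build_tetrahedron_graph(points):
        """points: (N, 3) fused mapping points. Each tetrahedron becomes a node,
        and each shared triangular face becomes an edge of the graph."""
        tri = Delaunay(points)                  # set of mutually disjoint tetrahedra
        graph = nx.DiGraph()
        graph.add_nodes_from(range(len(tri.simplices)))
        for tet, neighbors in enumerate(tri.neighbors):
            for nb in neighbors:
                if nb != -1:                    # -1 marks a face on the convex hull
                    graph.add_edge(tet, nb)     # adjacent tetrahedra share a face
        return tri, graph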
Alternatively, in the three-dimensional modeling of the target object, a graph-cut mesh extraction energy, for example of the form $E(S,\mathcal{R})=\sum_{r\in\mathcal{R}}w_r\big(E_S(r,S)+E_T(r,S)\big)+E_{reg}(S)$, may also be used as the three-dimensional modeling function, where $S$ denotes the surface to be extracted from the directed graph of the target object, $\mathcal{R}$ denotes the set of line-of-sight information in the visual images, $E(S,\mathcal{R})$ denotes the graph-cut mesh extraction function weighted by the line-of-sight information set, $E_S$, $E_T$ and $E_{reg}$ respectively denote the S-term, the T-term and the regularization term of a given piece of line-of-sight information with respect to the surface to be extracted of the directed graph of the target object, and $w_r$ denotes the weight of a laser ray or of a visual image. When $w_r$ denotes the weight of a laser ray $r$ emitted by the laser radar $L$, it is determined by the sensor error of the laser radar; when $w_r$ denotes the weight of a visual image $I$ acquired by a vision camera, it is obtained by multiplying the error of the image sensor of the vision camera by the cosine of the angle between the viewing ray and the normal of the surface to be extracted of the directed graph of the target object.
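A minimal sketch of the surface-labelling cut on that graph, assuming that "source"/"sink" terminal nodes and per-edge "capacity" attributes encoding the weighted S-term, T-term and regularization term have already been added (these names are assumptions):

    import networkx as nx

    def label_tetrahedra(graph, source="source", sink="sink"):
        """Max-flow/min-cut over the tetrahedron graph; the returned partition
        labels tetrahedra as outside/inside, and the surface is the set of
        faces between differently labelled tetrahedra."""
        cut_value, (outside, inside) = nx.minimum_cut(graph, source, sink,
                                                      capacity="capacity")
        return outside, inside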
As an alternative embodiment of the three-dimensional modeling method, after obtaining the three-dimensional model of the target object, the method may further include:
step S150: and carrying out texture mapping on the three-dimensional model of the target object by using a plurality of visual images to obtain a mapped three-dimensional model.
It will be appreciated that in a particular implementation, different policies may be applied to texture mapping for different scene sizes of the target object, such as: for larger sized scenes, an incremental mapping strategy may be employed for texture mapping, while for smaller sized scenes, a multi-view mapping strategy may be employed for texture mapping, thereby increasing more detail and accuracy of the texture mapping. Wherein the details of the incremental mapping strategy and the multi-view mapping strategy herein are described in detail below.
Please refer to fig. 2, which is a schematic flowchart of texture mapping of a three-dimensional model of a target object according to an embodiment of the present application; the embodiment of step S150 may include the steps of:
step S151: and judging whether the acquisition equipment of the visual image is preset equipment or not.
It will be appreciated that the above-mentioned larger or smaller sized scenes are determined by the acquisition device of the visual image, specifically for example: if the visual image acquisition device is a laser device (e.g., using the laser device to quickly walk and map), the scene corresponding to the visual image is a larger-sized scene. Similarly, if the acquisition device of the visual image is a camera device (e.g., using close-up camera complement), then the scene corresponding to the visual image is a smaller-sized scene.
The embodiment of step S151 described above is, for example: an executable program compiled or interpreted using a preset programming language may be used to determine whether the visual image capturing device is a preset device, for example: C. c++, java, BASIC, javaScript, LISP, shell, perl, ruby, python, PHP, etc.
Step S152: if the visual image acquisition equipment is preset equipment, the first mapping strategy is used for texture mapping, and a mapped three-dimensional model is obtained.
The first mapping strategy refers to a strategy for performing texture mapping on a scene with a larger size, and the first mapping strategy can be an incremental mapping strategy. The preset device may be a laser device (for example, a laser device is used to quickly walk and map), and may be other devices for capturing a scene with a larger size.
The embodiment of step S152 is, for example: considering that the trajectory of the laser device in a larger-sized scene is continuous and that each visual image covers only a limited part of that larger-sized scene, a first mapping strategy (for example, an incremental mapping strategy) may be used for texture mapping. If the acquisition device of the visual image is a preset device (for example, a laser device), the first mapping strategy is used for texture mapping to obtain the mapped three-dimensional model, where $u$ denotes the texture coordinates on a two-dimensional visual image, $f$ denotes the currently mapped patch, $w(u,f)$ denotes the visibility weight between the texture coordinates on the visual image and the currently mapped patch, and $d$ denotes the distance from the acquisition device of the visual image to the currently mapped patch. For continuous time $t$, the texture picture corresponding to the currently mapped patch is obtained by weighting all the color values with the visibility weights, for example $T(f)=\dfrac{\sum_t w(u_t,f)\,C_t(u_t)}{\sum_t w(u_t,f)}$, where $T(f)$ denotes the texture picture corresponding to the currently mapped patch, $u_t$ denotes the texture coordinates of the two-dimensional visual image at time $t$, $f$ denotes the currently mapped patch, and $C_t(u_t)$ denotes the color value of the visual image corresponding to the texture coordinates at time $t$.
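A minimal sketch of the per-patch color accumulation written out above; the distance-based form of the visibility weight and the parameter values are assumptions:

    import numpy as np

    def visibility_weight(distance, max_distance=10.0):
        """Assumed visibility weight w(u, f): closer acquisitions weigh more."""
        return max(0.0, 1.0 - distance / max_distance)

    def patch_texture_color(colors, weights):
        """T(f) = sum_t w_t * C_t(u_t) / sum_t w_t over the frames observing a patch."""
        colors = np.asarray(colors, dtype=float)     # (T, 3) color samples over time
        weights = np.asarray(weights, dtype=float)   # (T,) visibility weights
        return (weights[:, None] * colors).sum(axis=0) / weights.sum()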
Step S153: if the visual image acquisition equipment is not preset equipment, a second mapping strategy is used for texture mapping, and a mapped three-dimensional model is obtained.
The second mapping strategy refers to a strategy for texture mapping of a scene with a smaller size, and the second mapping strategy can be a multi-view mapping strategy. Since the structure of objects in a smaller-sized scene is often complex, multiple angles are required to capture the target object, and perspective projection from each triangular patch can be mapped into multiple visual images, thereby screening out the visual images associated with the triangular patch.
The embodiment of step S153 described above is, for example: if the acquisition device of the visual image is not a preset device (for example, a laser device), a second mapping strategy is used for texture mapping. Specifically, the color values of the visual image can be updated in consideration of the image sharpness: the color value $C_i$ of the $i$-th visual image is adjusted against the color value $C_{ref}$ of the corresponding reference image (i.e., the old image before the update) according to the gray value $g_i$ of the $i$-th visual image and the gray value $g_{ref}$ of the reference image, yielding the updated color value $C$ and a visual image composed of the updated color values. Then, light and color equalization is performed on the region boundaries of the updated visual image to obtain the mapped three-dimensional model.
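A minimal sketch of one gray-value based color update and a simple linear feathering of region boundaries; the exact update rule and the seam width are assumptions standing in for the formulas omitted above:

    import numpy as np

    def update_color(C_i, C_ref, eps=1e-6):
        """Rescale the i-th view's colors so their gray level matches the reference view."""
        g_i = C_i.mean(axis=-1, keepdims=True)       # gray value of the i-th image
        g_ref = C_ref.mean(axis=-1, keepdims=True)   # gray value of the reference image
        return C_i * (g_ref / np.maximum(g_i, eps))

    def feather_boundary(region_a, region_b, width=16):
        """Linear blend across a vertical seam between two texture regions."""
        alpha = np.linspace(0.0, 1.0, width)[None, :, None]
        return (1 - alpha) * region_a[:, -width:] + alpha * region_b[:, :width]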
In the implementation process, texture mapping is carried out on visual images acquired by different equipment by using different mapping strategies, so that a mapped three-dimensional model can have a trans-scale characteristic, and the reconstruction accuracy of the three-dimensional model is improved.
Please refer to fig. 3, which illustrates a schematic structural diagram of a three-dimensional modeling apparatus according to an embodiment of the present application; the embodiment of the application provides a three-dimensional modeling apparatus 200, including:
The data track acquisition module 210 is configured to acquire a plurality of visual images of the target object, and acquire point cloud map data and a radar motion track of the target object.
The point cloud map optimizing module 220 is configured to optimize the point cloud map data according to the plurality of visual images, and obtain an optimized point cloud map.
The point cloud map obtaining module 230 is configured to obtain a first mapping point of a track point on the radar motion track on the optimized point cloud map, and a second mapping point of a feature point in the plurality of visual images on the optimized point cloud map.
The three-dimensional model obtaining module 240 is configured to perform three-dimensional modeling on the target object according to the first mapping point and the second mapping point, and obtain a three-dimensional model of the target object.
Optionally, in an embodiment of the present application, the point cloud map optimization module includes:
and the associated image obtaining sub-module is used for carrying out association on the plurality of visual images according to the image similarity to obtain a plurality of associated images.
The data error calculation sub-module is used for extracting the characteristic point in each associated image and the three-dimensional point cloud corresponding to the characteristic point in the associated image, calculating the reprojection photometric error between the characteristic point and the associated image, and calculating the geometric consistency error of the three-dimensional point cloud to point cloud map data.
And the point cloud map optimization sub-module is used for optimizing the point cloud map data by taking the reprojection photometric errors and the geometric consistency errors as objective functions.
Optionally, in an embodiment of the present application, the data error calculation sub-module includes:
the feature point obtaining unit is used for carrying out scale-invariant feature transformation on the associated images to obtain feature points.
And the characteristic point measuring unit is used for carrying out epipolar geometric triangulation on the characteristic points to obtain a three-dimensional point cloud.
Optionally, in an embodiment of the present application, the data error calculation submodule further includes:
the first point cloud eliminating unit is used for eliminating the three-dimensional point cloud if the distance between the three-dimensional point cloud projected to the pixel plane of the point cloud map data is larger than the projection threshold value.
And the second point cloud eliminating unit is used for eliminating the three-dimensional point cloud if the luminosity difference between the luminosity of the three-dimensional point cloud and the luminosity of the projection point of the pixel plane is larger than the luminosity error.
Optionally, in an embodiment of the present application, the three-dimensional model obtaining module includes:
and the mapping weighted fusion sub-module is used for carrying out weighted fusion on the first mapping points and the second mapping points to obtain fused mapping points.
And the target object modeling module is used for carrying out three-dimensional modeling on the target object according to the fused mapping points.
Optionally, in an embodiment of the present application, the data track acquisition module includes:
and the synchronous positioning and mapping sub-module is used for synchronously positioning and mapping SLAM of the target object to obtain point cloud map data and radar motion tracks.
Optionally, in an embodiment of the present application, the three-dimensional modeling apparatus further includes:
and the model texture mapping module is used for performing texture mapping on the three-dimensional model of the target object by using a plurality of visual images to obtain a mapped three-dimensional model.
It should be understood that the apparatus corresponds to the above three-dimensional modeling method embodiment, and is capable of performing the steps involved in the above method embodiment, and specific functions of the apparatus may be referred to the above description, and detailed descriptions thereof are omitted herein as appropriate. The device includes at least one software functional module that can be stored in memory in the form of software or firmware (firmware) or cured in an Operating System (OS) of the device.
Please refer to fig. 4, which illustrates a schematic structural diagram of an electronic device provided in an embodiment of the present application. An electronic device 300 provided in an embodiment of the present application includes: a processor 310 and a memory 320, the memory 320 storing machine-readable instructions executable by the processor 310, which when executed by the processor 310 perform the method as described above.
The present embodiment also provides a computer readable storage medium 330, the computer readable storage medium 330 having stored thereon a computer program which, when executed by the processor 310, performs the method as above.
The computer readable storage medium 330 may be implemented by any type or combination of volatile or nonvolatile Memory devices, such as static random access Memory (Static Random Access Memory, SRAM for short), electrically erasable programmable Read-Only Memory (Electrically Erasable Programmable Read-Only Memory, EEPROM for short), erasable programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM for short), programmable Read-Only Memory (Programmable Read-Only Memory, PROM for short), read-Only Memory (ROM for short), magnetic Memory, flash Memory, magnetic disk, or optical disk.
It should be noted that, in the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described as different from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other. For the apparatus class embodiments, the description is relatively simple as it is substantially similar to the method embodiments, and reference is made to the description of the method embodiments for relevant points.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
In addition, the functional modules of the embodiments in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part. Furthermore, in the description of the present specification, the descriptions of the terms "one embodiment," "some embodiments," "examples," "specific examples," "some examples," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the embodiments of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
The foregoing description is merely an optional implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art may easily think about changes or substitutions within the technical scope of the embodiments of the present application, and the changes or substitutions should be covered in the scope of the embodiments of the present application.

Claims (8)

1. A method of three-dimensional modeling, comprising:
acquiring a plurality of visual images of a target object, and acquiring point cloud map data and a radar motion track of the target object, wherein the point cloud map data is obtained by scanning the target object by a laser radar, and the radar motion track is a motion track of the laser radar when scanning the target object;
optimizing the point cloud map data according to the plurality of visual images to obtain an optimized point cloud map;
acquiring first mapping points of track points on the radar motion track on the optimized point cloud map and second mapping points of characteristic points in the plurality of visual images on the optimized point cloud map;
performing three-dimensional modeling on the target object according to the first mapping points and the second mapping points to obtain a three-dimensional model of the target object;
Texture mapping is carried out on the three-dimensional model of the target object by using the visual images, and a mapped three-dimensional model is obtained;
wherein the three-dimensional modeling of the target object according to the first mapping points and the second mapping points includes: performing weighted fusion on the first mapping points and the second mapping points to obtain point cloud map data comprising the fused mapping points; performing Delaunay triangulation on the point cloud map data comprising the fused mapping points to generate a set of mutually disjoint tetrahedra of the point cloud map data; constructing a directed graph of the target object by taking each tetrahedron in the tetrahedron set as a node and taking the faces shared by adjacent tetrahedra as edges; and performing three-dimensional modeling on the directed graph of the target object;
the texture mapping of the three-dimensional model of the target object using the plurality of visual images includes: determining whether the acquisition device of the visual images is a preset device; if so, performing texture mapping using a first mapping strategy to obtain the mapped three-dimensional model; if not, performing texture mapping using a second mapping strategy to obtain the mapped three-dimensional model; wherein the scene size targeted by the first mapping strategy is larger than the scene size targeted by the second mapping strategy.
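For illustration only, the following Python sketch shows one way the fusion-and-triangulation step recited in claim 1 could look in practice, using scipy.spatial.Delaunay; the per-source weights, the random stand-in points, and the plain-dictionary graph are assumptions rather than the patented implementation.

# Hypothetical sketch of claim 1's modeling step: fuse the two sets of
# mapping points, tetrahedralise them with Delaunay triangulation, and
# build a directed graph whose nodes are tetrahedra and whose edges link
# tetrahedra that share a face. Weights and data are placeholders.
import numpy as np
from scipy.spatial import Delaunay

def fuse_mapping_points(first_pts, second_pts, w_first=0.6, w_second=0.4):
    """Stack the trajectory-derived and feature-derived mapping points and
    attach a per-source confidence weight (the exact fusion rule is not
    specified in the claim, so this weighting is an assumption)."""
    pts = np.vstack([first_pts, second_pts])
    weights = np.concatenate([np.full(len(first_pts), w_first),
                              np.full(len(second_pts), w_second)])
    return pts, weights

def build_tetrahedron_graph(points):
    """Delaunay-triangulate the fused 3-D points into mutually disjoint
    tetrahedra and return a directed adjacency list over them."""
    tri = Delaunay(points)                        # 3-D input -> tetrahedra
    graph = {i: [] for i in range(len(tri.simplices))}
    for i, nbrs in enumerate(tri.neighbors):      # neighbours share a face
        for j in nbrs:
            if j != -1:                           # -1 marks the convex hull
                graph[i].append(int(j))
    return tri, graph

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    first = rng.random((50, 3))                   # stand-in mapping points
    second = rng.random((80, 3))
    pts, w = fuse_mapping_points(first, second)
    tri, graph = build_tetrahedron_graph(pts)
    print(len(tri.simplices), "tetrahedra,",
          sum(len(v) for v in graph.values()), "directed edges")

A surface model could then be extracted from this directed graph (for example with a graph cut), but that step is not sketched here.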
2. The method of claim 1, wherein the optimizing the point cloud map data according to the plurality of visual images comprises:
associating the plurality of visual images according to image similarity to obtain a plurality of associated images;
extracting, from each associated image, a characteristic point and a three-dimensional point cloud corresponding to the characteristic point, calculating a reprojection photometric error between the characteristic point and the associated image, and calculating a geometric consistency error from the three-dimensional point cloud to the point cloud map data;
and optimizing the point cloud map data by taking the reprojection photometric error and the geometric consistency error as objective functions.
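As a rough, non-authoritative illustration of the two error terms recited in claim 2, the sketch below assumes a pinhole camera with intrinsics K and pose (R, t), a grayscale image, and a nearest-pixel intensity lookup; the relative weight lam and the KD-tree nearest-neighbour distance are simplifications, not the patent's actual objective.

# Hypothetical sketch of claim 2's objective: a reprojection photometric
# error plus a geometric consistency error against the point cloud map.
import numpy as np
from scipy.spatial import cKDTree

def project(points_3d, K, R, t):
    """Project 3-D points into the image with a simple pinhole model."""
    cam = (R @ points_3d.T + t.reshape(3, 1)).T
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3], cam[:, 2]

def reprojection_photometric_error(points_3d, feature_intensity, image, K, R, t):
    """Difference between each characteristic point's intensity and the
    image intensity at its reprojected pixel (nearest-pixel lookup)."""
    uv, _ = project(points_3d, K, R, t)
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, image.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, image.shape[0] - 1)
    return image[v, u].astype(float) - feature_intensity

def geometric_consistency_error(points_3d, map_points):
    """Distance from each triangulated point to its nearest map point."""
    dists, _ = cKDTree(map_points).query(points_3d)
    return dists

def joint_cost(photo_res, geo_res, lam=1.0):
    """Weighted sum of squared residuals minimised during optimisation
    (the weight lam is an assumed hyper-parameter)."""
    return float(np.sum(photo_res ** 2) + lam * np.sum(geo_res ** 2))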
3. The method according to claim 2, wherein the extracting the characteristic point in the associated image and the three-dimensional point cloud corresponding to the characteristic point includes:
performing a scale-invariant feature transform on the associated image to obtain the characteristic points;
and performing epipolar geometry triangulation on the characteristic points to obtain the three-dimensional point cloud.
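A minimal sketch of claim 3 with OpenCV is shown below, assuming two grayscale associated images and known intrinsics K; the 0.75 ratio test and the RANSAC essential-matrix estimation are conventional choices, not details taken from the patent.

# Hypothetical sketch of claim 3: SIFT characteristic points followed by
# epipolar-geometry triangulation into a three-dimensional point cloud.
import cv2
import numpy as np

def sift_and_triangulate(img1, img2, K):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching of SIFT descriptors between the two images
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    pts1 = np.float64([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float64([kp2[m.trainIdx].pt for m in good])

    # Epipolar geometry: essential matrix, relative pose, triangulation
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T               # N x 3 point cloud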
4. The method according to claim 2, further comprising, after the extracting the characteristic point in the associated image and the three-dimensional point cloud corresponding to the characteristic point:
if the distance at which the three-dimensional point cloud is projected onto the pixel plane of the point cloud map data is larger than a projection threshold, eliminating the three-dimensional point cloud.
5. The method according to claim 2, further comprising, after the extracting the characteristic point in the associated image and the three-dimensional point cloud corresponding to the characteristic point:
if the difference between the photometric value of the three-dimensional point cloud and the photometric value of its projection point on the pixel plane of the point cloud map data is larger than a photometric error threshold, eliminating the three-dimensional point cloud.
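The two rejection tests of claims 4 and 5 might look roughly like the sketch below, where the "pixel plane of the point cloud map data" is approximated by rendered depth and intensity images of the map; that rendering, and the threshold values, are assumptions for illustration only.

# Hypothetical sketch of the outlier rejection in claims 4 and 5: discard
# triangulated points whose projection onto the map's pixel plane is too
# far away (claim 4) or photometrically inconsistent (claim 5).
import numpy as np

def filter_triangulated_points(points_3d, intensities, map_depth,
                               map_intensity, K, R, t,
                               proj_thresh=0.1, photo_thresh=10.0):
    cam = (R @ points_3d.T + t.reshape(3, 1)).T   # map-camera coordinates
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, map_depth.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, map_depth.shape[0] - 1)

    # Claim 4: projection distance against the map exceeds the threshold
    geo_ok = np.abs(cam[:, 2] - map_depth[v, u]) <= proj_thresh
    # Claim 5: photometric difference against the map exceeds the threshold
    photo_ok = np.abs(intensities - map_intensity[v, u]) <= photo_thresh
    return points_3d[geo_ok & photo_ok]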
6. A three-dimensional modeling apparatus, comprising:
a data track acquisition module, which is used for acquiring a plurality of visual images of a target object, and acquiring point cloud map data and a radar motion track of the target object, wherein the point cloud map data is obtained by scanning the target object by a laser radar, and the radar motion track is a motion track of the laser radar when scanning the target object;
the point cloud map optimizing module is used for optimizing the point cloud map data according to the plurality of visual images to obtain an optimized point cloud map;
the point cloud mapping acquisition module is used for acquiring first mapping points of track points on the radar motion track on the optimized point cloud map and second mapping points of characteristic points in the visual images on the optimized point cloud map;
the three-dimensional model obtaining module is used for performing three-dimensional modeling on the target object according to the first mapping points and the second mapping points to obtain a three-dimensional model of the target object, and performing texture mapping on the three-dimensional model of the target object using the plurality of visual images to obtain a mapped three-dimensional model;
wherein the three-dimensional modeling of the target object according to the first mapping points and the second mapping points includes: performing weighted fusion on the first mapping points and the second mapping points to obtain point cloud map data comprising the fused mapping points; performing Delaunay triangulation on the point cloud map data comprising the fused mapping points to generate a set of mutually disjoint tetrahedra of the point cloud map data; constructing a directed graph of the target object by taking each tetrahedron in the tetrahedron set as a node and taking the faces shared by adjacent tetrahedra as edges; and performing three-dimensional modeling on the directed graph of the target object;
the texture mapping of the three-dimensional model of the target object using the plurality of visual images includes: determining whether the acquisition device of the visual images is a preset device; if so, performing texture mapping using a first mapping strategy to obtain the mapped three-dimensional model; if not, performing texture mapping using a second mapping strategy to obtain the mapped three-dimensional model; wherein the scene size targeted by the first mapping strategy is larger than the scene size targeted by the second mapping strategy.
7. An electronic device, comprising: a processor and a memory storing machine-readable instructions executable by the processor, wherein the machine-readable instructions, when executed by the processor, perform the method of any one of claims 1 to 5.
8. A computer-readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, performs the method according to any of claims 1 to 5.
CN202310745532.7A 2023-06-25 2023-06-25 Three-dimensional modeling method and device, electronic equipment and storage medium Active CN116503566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310745532.7A CN116503566B (en) 2023-06-25 2023-06-25 Three-dimensional modeling method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310745532.7A CN116503566B (en) 2023-06-25 2023-06-25 Three-dimensional modeling method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116503566A CN116503566A (en) 2023-07-28
CN116503566B true CN116503566B (en) 2024-03-29

Family

ID=87325104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310745532.7A Active CN116503566B (en) 2023-06-25 2023-06-25 Three-dimensional modeling method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116503566B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117635875A (en) * 2024-01-25 2024-03-01 深圳市其域创新科技有限公司 Three-dimensional reconstruction method, device and terminal

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734654A (en) * 2018-05-28 2018-11-02 深圳市易成自动驾驶技术有限公司 It draws and localization method, system and computer readable storage medium
CN112184768A (en) * 2020-09-24 2021-01-05 杭州易现先进科技有限公司 SFM reconstruction method and device based on laser radar and computer equipment
CN114022639A (en) * 2021-10-27 2022-02-08 浪潮电子信息产业股份有限公司 Three-dimensional reconstruction model generation method and system, electronic device and storage medium
CN114792338A (en) * 2022-01-10 2022-07-26 天津大学 Vision fusion positioning method based on prior three-dimensional laser radar point cloud map
CN115342796A (en) * 2022-07-22 2022-11-15 广东交通职业技术学院 Map construction method, system, device and medium based on visual laser fusion
CN115830073A (en) * 2022-12-22 2023-03-21 安徽蔚来智驾科技有限公司 Map element reconstruction method, map element reconstruction device, computer equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7010158B2 (en) * 2001-11-13 2006-03-07 Eastman Kodak Company Method and apparatus for three-dimensional scene modeling and reconstruction
US10539676B2 (en) * 2017-03-22 2020-01-21 Here Global B.V. Method, apparatus and computer program product for mapping and modeling a three dimensional structure


Also Published As

Publication number Publication date
CN116503566A (en) 2023-07-28

Similar Documents

Publication Publication Date Title
CN108335353B (en) Three-dimensional reconstruction method, device and system of dynamic scene, server and medium
CN106940704B (en) Positioning method and device based on grid map
WO2019127445A1 (en) Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product
CN112132972B (en) Three-dimensional reconstruction method and system for fusing laser and image data
CN109974693B (en) Unmanned aerial vehicle positioning method and device, computer equipment and storage medium
CN110568447B (en) Visual positioning method, device and computer readable medium
CN110176032B (en) Three-dimensional reconstruction method and device
WO2019164498A1 (en) Methods, devices and computer program products for global bundle adjustment of 3d images
CN113592989B (en) Three-dimensional scene reconstruction system, method, equipment and storage medium
KR101787542B1 (en) Estimation system and method of slope stability using 3d model and soil classification
CN112197764B (en) Real-time pose determining method and device and electronic equipment
CN112184603B (en) Point cloud fusion method and device, electronic equipment and computer storage medium
CN110807833B (en) Mesh topology obtaining method and device, electronic equipment and storage medium
CN116503566B (en) Three-dimensional modeling method and device, electronic equipment and storage medium
CN112312113B (en) Method, device and system for generating three-dimensional model
CN111415420B (en) Spatial information determining method and device and electronic equipment
CN109064533B (en) 3D roaming method and system
CN111899345B (en) Three-dimensional reconstruction method based on 2D visual image
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN115035235A (en) Three-dimensional reconstruction method and device
CN112562005A (en) Space calibration method and system
CN114782628A (en) Indoor real-time three-dimensional reconstruction method based on depth camera
CN109785429B (en) Three-dimensional reconstruction method and device
CN116051747A (en) House three-dimensional model reconstruction method, device and medium based on missing point cloud data
CN110766731A (en) Method and device for automatically registering panoramic image and point cloud and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant