CN114913105A - Laser point cloud fusion method and device, server and computer readable storage medium - Google Patents

Laser point cloud fusion method and device, server and computer readable storage medium

Info

Publication number
CN114913105A
CN114913105A (application CN202210512885.8A)
Authority
CN
China
Prior art keywords
processed
point cloud
image
laser
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210512885.8A
Other languages
Chinese (zh)
Inventor
崔岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Germany Zhuhai Artificial Intelligence Institute Co ltd
Guangdong Siwei Kanan Intelligent Equipment Co Ltd
4Dage Co Ltd
Original Assignee
China Germany Zhuhai Artificial Intelligence Institute Co ltd
Guangdong Siwei Kanan Intelligent Equipment Co Ltd
4Dage Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Germany Zhuhai Artificial Intelligence Institute Co ltd, Guangdong Siwei Kanan Intelligent Equipment Co Ltd, 4Dage Co Ltd filed Critical China Germany Zhuhai Artificial Intelligence Institute Co ltd
Priority to CN202210512885.8A
Publication of CN114913105A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Laser Beam Processing (AREA)

Abstract

The application is applicable to the technical field of image processing, and provides a laser point cloud fusion method, a device, a server and a computer readable storage medium, wherein the method comprises the following steps: acquiring point clouds to be processed; the point clouds to be processed comprise a first point cloud to be processed and a second point cloud to be processed; and fusing the first point cloud to be processed and the second point cloud to be processed to obtain a target point cloud. Therefore, the laser point cloud fusion method and device can improve the precision of laser point cloud fusion.

Description

Laser point cloud fusion method and device, server and computer readable storage medium
Technical Field
The application belongs to the technical field of point cloud, and particularly relates to a laser point cloud fusion method, a laser point cloud fusion device, a laser point cloud fusion server and a computer readable storage medium.
Background
With the continuous development of sensing technology and surveying and mapping equipment, point cloud data now come from a variety of sources, and the field of three-dimensional reconstruction requires comprehensive point cloud data to achieve better reconstruction results. However, the fusion precision between existing point clouds of different types is low.
Disclosure of Invention
The embodiment of the application provides a laser point cloud fusion method, a laser point cloud fusion device, a server and a computer readable storage medium, and can solve the technical problem of low point cloud fusion precision in the prior art.
In a first aspect, an embodiment of the present application provides a laser point cloud fusion method, including:
acquiring a point cloud to be processed; the point clouds to be processed comprise a first point cloud to be processed and a second point cloud to be processed;
and fusing the first point cloud to be processed and the second point cloud to be processed to obtain a target point cloud.
In one possible implementation of the first aspect, acquiring a point cloud to be processed includes:
acquiring a first image to be processed and first laser data;
acquiring a second image to be processed and second laser data;
calibrating a coordinate system of a first coordinate system where the first laser data are located and a second coordinate system where the second laser data are located;
matching the first image to be processed and the second image to be processed;
generating a first point cloud to be processed according to the matched first image to be processed and the first laser data after the coordinate system is calibrated;
and generating a second point cloud to be processed according to the matched second image to be processed and the second laser data after the coordinate system is calibrated.
In a possible implementation of the first aspect, matching the first to-be-processed image and the second to-be-processed image comprises:
extracting a first feature point of the first image to be processed;
extracting a second feature point of the second image to be processed;
and screening out target characteristic points which are common to the first characteristic points and the second characteristic points.
In a possible implementation manner of the first aspect, fusing the first point cloud to be processed and the second point cloud to be processed to obtain a target point cloud, including:
acquiring pose information according to the target feature points;
and fusing the first point cloud to be processed and the second point cloud to be processed according to the pose information to obtain a target point cloud.
In a possible implementation manner of the first aspect, after fusing the first point cloud to be processed and the second point cloud to be processed to obtain a target point cloud, the method further includes:
optimizing the target point cloud.
In a second aspect, an embodiment of the present application provides a laser point cloud fusion apparatus, including:
the acquisition module is used for acquiring point clouds to be processed; the point clouds to be processed comprise a first point cloud to be processed and a second point cloud to be processed;
and the fusion module is used for fusing the first point cloud to be processed and the second point cloud to be processed to obtain a target point cloud.
In an optional implementation manner of the second aspect, the obtaining module includes:
the first acquisition submodule is used for acquiring a first image to be processed and first laser data;
the second acquisition submodule is used for acquiring a second image to be processed and second laser data;
the calibration sub-module is used for calibrating a coordinate system of a first coordinate system where the first laser data are located and a second coordinate system where the second laser data are located;
the matching sub-module is used for matching the first image to be processed and the second image to be processed;
the first generation submodule is used for generating a first point cloud to be processed according to the matched first image to be processed and the first laser data after the coordinate system is calibrated;
and the second generation sub-module is used for generating a second point cloud to be processed according to the matched second image to be processed and the second laser data after the coordinate system is calibrated.
In an optional implementation manner of the second aspect, the matching sub-module includes:
a first extraction unit, configured to extract a first feature point of the first image to be processed;
the second extraction unit is used for extracting second feature points of the second image to be processed;
and the screening unit is used for screening out the target characteristic points which are common to the first characteristic points and the second characteristic points.
In an alternative embodiment of the second aspect, the fusion module includes:
the third acquisition submodule is used for acquiring pose information according to the target feature points;
and the fusion submodule is used for fusing the first point cloud to be processed and the second point cloud to be processed according to the pose information to obtain a target point cloud.
In an alternative embodiment of the second aspect, the apparatus further comprises:
and the optimization module is used for optimizing the target point cloud.
In a third aspect, an embodiment of the present application provides a server, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the method according to the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the method according to the first aspect.
Compared with the prior art, the embodiment of the application has the advantages that:
the method comprises the steps of obtaining point clouds to be processed; the point clouds to be processed comprise a first point cloud to be processed and a second point cloud to be processed; and fusing the first point cloud to be processed and the second point cloud to be processed to obtain a target point cloud, and improving the precision of laser point cloud fusion.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a laser point cloud fusion method provided in an embodiment of the present application;
fig. 2 is a block diagram of a laser point cloud fusion apparatus provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The technical solutions provided in the embodiments of the present application will be described below with specific embodiments.
Referring to fig. 1, a schematic flow chart of a laser point cloud fusion method provided in an embodiment of the present application is illustrated, by way of example and not limitation, where the method may be applied to a server, and the method may include the following steps:
step S101, point clouds to be processed are obtained.
The point clouds to be processed comprise a first point cloud to be processed and a second point cloud to be processed.
It should be noted that the first point cloud to be processed and the second point cloud to be processed are generated by capturing the same target object with different types of laser scanning equipment.
The first point cloud to be processed and the second point cloud to be processed correspond to the same photographed object; the first point cloud to be processed may be captured by a ground laser scanner, and the second point cloud to be processed may be captured by aerial photography with an unmanned aerial vehicle.
In the specific application, the method for acquiring the point cloud to be processed comprises the following steps:
step S101-1, a first image to be processed and first laser data are acquired.
And S101-2, acquiring a second image to be processed and second laser data.
The first image to be processed and the first laser data may be captured by a ground laser scanner; the first image to be processed may be a panoramic image, and the first laser data contains the depth value of each pixel in the first image to be processed. The second image to be processed and the second laser data may be captured by aerial photography with an unmanned aerial vehicle; the second image to be processed may be an ordinary (non-panoramic) image, and the second laser data contains the depth value of each pixel in the second image to be processed. In addition, the ground laser scanner can also acquire first GPS data of the target object, and the unmanned aerial vehicle can also acquire second GPS data of the target object.
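As a rough illustration only (not something specified in the application), the data acquired for each device can be viewed as an image, a per-pixel depth map derived from the laser data, and a GPS position; the field names in the sketch below are assumptions.

```python
# Illustrative container for one capture; field names are assumptions for the sketch.
from dataclasses import dataclass
import numpy as np

@dataclass
class Capture:
    image: np.ndarray   # H x W x 3 image (panoramic for the scanner, ordinary for the drone)
    depth: np.ndarray   # H x W depth value per pixel, taken from the laser data
    gps: np.ndarray     # (3,) GPS position, later used for coordinate system calibration
```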
And S101-3, calibrating a coordinate system of a first coordinate system in which the first laser data is located and a second coordinate system in which the second laser data is located.
Specifically, the first GPS data is located in a first coordinate system, the second GPS data is located in a second coordinate system, and the first GPS data and the second GPS data are subjected to coordinate alignment, that is, coordinate system calibration is performed on the first coordinate system where the first laser data is located and the second coordinate system where the second laser data is located.
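A minimal sketch of one way such a coordinate system calibration could be performed, assuming corresponding GPS positions are available in both coordinate systems, is the closed-form Kabsch/Umeyama rigid alignment below; the application does not specify this particular algorithm, and the function name is illustrative.

```python
import numpy as np

def align_coordinate_systems(gps_first, gps_second):
    """Estimate R, t such that R @ p_second + t approximates p_first.

    gps_first, gps_second: (N, 3) corresponding GPS positions expressed in the
    first and second coordinate systems, respectively.
    """
    mu_a = gps_first.mean(axis=0)
    mu_b = gps_second.mean(axis=0)
    # Cross-covariance of the centred correspondences.
    H = (gps_second - mu_b).T @ (gps_first - mu_a)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # correct an improper rotation (reflection)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_a - R @ mu_b
    return R, t
```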
And step S101-4, matching the first image to be processed and the second image to be processed.
Exemplarily, matching the first image to be processed and the second image to be processed comprises:
and S101-4-1, extracting a first feature point of the first image to be processed.
Illustratively, a feature extraction algorithm is used to extract a first feature point of the first image to be processed.
And S101-4-2, extracting a second feature point of the second image to be processed.
Illustratively, a feature extraction algorithm is used to extract a second feature point of the second image to be processed.
It should be noted that the feature extraction algorithm is the AKAZE algorithm. The AKAZE feature algorithm is an improved version of the SIFT feature algorithm: instead of constructing the scale space with Gaussian blur, which has the drawback of losing edge information, it constructs the scale space with nonlinear diffusion filtering, so that more edge features of the image are retained.
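A minimal sketch of this feature extraction step using OpenCV's AKAZE implementation (the application names only the algorithm; the API calls below are OpenCV's, not part of the application):

```python
import cv2

def extract_akaze_features(image_path):
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    akaze = cv2.AKAZE_create()   # scale space built with nonlinear diffusion filtering
    keypoints, descriptors = akaze.detectAndCompute(image, None)
    return keypoints, descriptors
```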
And S101-4-3, screening out target feature points which are common to the first feature points and the second feature points.
Illustratively, the first feature points and the second feature points are input into a pre-trained neural network, which outputs the common target feature points.
Optionally, before screening out a target feature point common to the first feature point and the second feature point, the method further includes: and training the neural network.
It can be understood that rough matches are first extracted with an existing algorithm, dense reconstruction is performed with the SfM (structure-from-motion) algorithm, effective points are screened to train the neural network, and matching between the drone images and the panoramic images is thereby realized.
The training process of the neural network comprises an initial step and an iteration step. In the initial step, the ordinary image is roughly matched with the panoramic image; the panoramic image in the model may be sliced, and the feature points of the ordinary image are matched against the sliced panoramic image. Alternatively, the initial step may convert the drone image into part of a panoramic image, realizing panorama-to-panorama matching. In the iteration step, the matching between the drone image and the panoramic image is used directly for iteration. The neural network may be a model for matching feature points between the ordinary image and the panoramic image, such as a SuperGlue model, a SuperGlue variant, or an OANet model.
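The application screens the common target feature points with a trained neural network such as the models named above. As a lightweight stand-in that only illustrates where such a matcher sits in the pipeline, the sketch below uses brute-force Hamming matching of AKAZE descriptors with Lowe's ratio test; it is not the learned matcher described in the application.

```python
import cv2

def screen_common_feature_points(desc_first, desc_second, ratio=0.8):
    """Return putative common (target) feature point matches between two images."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = []
    for pair in matcher.knnMatch(desc_first, desc_second, k=2):
        # Keep a match only if it is clearly better than the second-best candidate.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            matches.append(pair[0])
    return matches
```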
And S101-5, generating a first point cloud to be processed according to the matched first image to be processed and the first laser data after the coordinate system is calibrated.
Illustratively, the three-dimensional coordinates of the first point cloud to be processed are obtained according to the following formula:
(X, Y, Z)^T = d · K^{-1} · (u, v, 1)^T
where (u, v) are the pixel coordinates of each feature point in the first image to be processed, d is the depth value of the corresponding pixel in the first image to be processed, K is the intrinsic parameter matrix of the ground laser scanner, and (X, Y, Z) are the three-dimensional coordinates of the first point cloud to be processed.
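A minimal numpy sketch of this back-projection, following the formula as stated; the same routine would apply to the second point cloud in step S101-6 with the drone camera's intrinsic matrix:

```python
import numpy as np

def back_project(pixels_uv, depths, K):
    """pixels_uv: (N, 2) pixel coordinates, depths: (N,) depth values, K: (3, 3) intrinsics."""
    ones = np.ones((pixels_uv.shape[0], 1))
    homo = np.hstack([pixels_uv, ones])           # (N, 3) homogeneous pixel coordinates
    rays = (np.linalg.inv(K) @ homo.T).T          # camera rays, K^{-1} (u, v, 1)^T
    return rays * depths[:, None]                 # (N, 3) three-dimensional points (X, Y, Z)
```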
And S101-6, generating a second point cloud to be processed according to the matched second image to be processed and the second laser data after the coordinate system is calibrated.
Illustratively, the three-dimensional coordinates of the second point cloud to be processed are obtained according to the following formula:
(X, Y, Z)^T = d · K^{-1} · (u, v, 1)^T
where (u, v) are the pixel coordinates of each feature point in the second image to be processed, d is the depth value of the corresponding pixel in the second image to be processed, K is the intrinsic parameter matrix of the unmanned aerial vehicle's camera, and (X, Y, Z) are the three-dimensional coordinates of the second point cloud to be processed.
And S102, fusing the first point cloud to be processed and the second point cloud to be processed to obtain a target point cloud.
In the specific application, the method for fusing the first point cloud to be processed and the second point cloud to be processed to obtain the target point cloud comprises the following steps:
and S102-1, acquiring pose information according to the target feature points.
Illustratively, the matched target feature points are processed with the SfM (structure-from-motion) algorithm to obtain the pose information.
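A hedged two-view sketch of recovering relative pose from the matched target feature points (a full SfM pipeline would additionally triangulate and bundle-adjust over many views; the shared intrinsic matrix K is a simplifying assumption for the sketch):

```python
import cv2
import numpy as np

def estimate_relative_pose(pts_first, pts_second, K):
    """pts_*: (N, 2) matched pixel coordinates; K: (3, 3) intrinsic matrix assumed shared."""
    E, inlier_mask = cv2.findEssentialMat(pts_first, pts_second, K,
                                          method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_first, pts_second, K, mask=inlier_mask)
    return R, t   # rotation and unit-scale translation between the two views
```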
And S102-2, fusing the first point cloud to be processed and the second point cloud to be processed according to the pose information to obtain a target point cloud.
Illustratively, according to the pose information, the first point cloud to be processed and the second point cloud to be processed are fused through the ICP (iterative closest point) algorithm to obtain a target point cloud.
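A minimal Open3D sketch of this fusion step: the pose obtained above, written as a 4x4 homogeneous matrix, initialises point-to-point ICP, and the refined transform is applied before merging; the distance threshold is illustrative.

```python
import numpy as np
import open3d as o3d

def fuse_point_clouds(first_pts, second_pts, init_transform, max_dist=0.05):
    """first_pts, second_pts: (N, 3) arrays; init_transform: 4x4 initial pose estimate."""
    pcd_a = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(first_pts))
    pcd_b = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(second_pts))
    result = o3d.pipelines.registration.registration_icp(
        pcd_b, pcd_a, max_dist, init_transform,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    pcd_b.transform(result.transformation)   # bring the second cloud into the first frame
    # Merge the aligned clouds into the target point cloud.
    return np.vstack([np.asarray(pcd_a.points), np.asarray(pcd_b.points)])
```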
In an optional implementation manner, after the fusing the first point cloud to be processed and the second point cloud to be processed to obtain the target point cloud, the method further includes:
and optimizing the target point cloud.
It can be understood that after the first point cloud to be processed and the second point cloud to be processed are fused to obtain the target point cloud, some parts of the point clouds overlap. The synthesized point cloud is therefore rasterized, and for each grid cell the density A and the visibility B (the angle between the normal of the point cloud and the line connecting the camera and the point cloud) of the two point clouds are calculated, giving a reliability C = A·cos(B). For each part, the point cloud with the higher reliability is selected for the final synthesized point cloud, thereby achieving the purpose of optimizing the target point cloud.
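A hedged sketch of this reliability computation: points are binned into grid cells, and for each cell the density A and the mean cos(B) between point normals (assumed unit length) and camera viewing directions give C = A·cos(B); the cell size and inputs are illustrative.

```python
import numpy as np

def cell_reliability(points, normals, camera_center, cell_size=0.2):
    """points, normals: (N, 3) arrays; camera_center: (3,); returns reliability per grid cell."""
    keys = np.floor(points / cell_size).astype(int)      # grid cell index for each point
    reliability = {}
    for key in set(map(tuple, keys)):
        mask = np.all(keys == np.array(key), axis=1)
        cell_pts, cell_nrm = points[mask], normals[mask]
        density = int(mask.sum())                        # A: number of points in the cell
        view_dirs = camera_center - cell_pts
        view_dirs /= np.linalg.norm(view_dirs, axis=1, keepdims=True)
        cos_b = np.abs(np.sum(cell_nrm * view_dirs, axis=1)).mean()   # cos of visibility angle B
        reliability[key] = density * cos_b               # C = A * cos(B)
    return reliability
```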
In the embodiment of the application, point clouds to be processed are obtained; the point clouds to be processed comprise a first point cloud to be processed and a second point cloud to be processed; and fusing the first point cloud to be processed and the second point cloud to be processed to obtain a target point cloud, and improving the precision of laser point cloud fusion.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 2 shows a structural block diagram of a laser point cloud fusion apparatus provided in the embodiment of the present application, which corresponds to the method described in the foregoing embodiment, and only shows portions related to the embodiment of the present application for convenience of description.
Referring to fig. 2, the apparatus includes:
the acquisition module 21 is used for acquiring point clouds to be processed; the point clouds to be processed comprise a first point cloud to be processed and a second point cloud to be processed;
and the fusion module 22 is configured to fuse the first point cloud to be processed and the second point cloud to be processed to obtain a target point cloud.
In an optional implementation manner, the obtaining module includes:
the first acquisition submodule is used for acquiring a first image to be processed and first laser data;
the second acquisition submodule is used for acquiring a second image to be processed and second laser data;
the calibration sub-module is used for calibrating a coordinate system of a first coordinate system where the first laser data are located and a second coordinate system where the second laser data are located;
the matching submodule is used for matching the first image to be processed with the second image to be processed;
the first generation submodule is used for generating a first point cloud to be processed according to the matched first image to be processed and the first laser data after the coordinate system is calibrated;
and the second generation submodule is used for generating a second point cloud to be processed according to the matched second image to be processed and the second laser data after the coordinate system is calibrated.
In an optional embodiment, the matching sub-module includes:
a first extraction unit, configured to extract a first feature point of the first image to be processed;
the second extraction unit is used for extracting second feature points of the second image to be processed;
and the screening unit is used for screening out the target characteristic points which are common to the first characteristic points and the second characteristic points.
In an alternative embodiment, the fusion module includes:
the third acquisition submodule is used for acquiring pose information according to the target feature points;
and the fusion sub-module is used for fusing the first point cloud to be processed and the second point cloud to be processed according to the pose information to obtain a target point cloud.
In an optional embodiment, the apparatus further comprises:
and the optimization module is used for optimizing the target point cloud.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Fig. 3 is a schematic structural diagram of a server according to an embodiment of the present application. As shown in fig. 3, the server 3 of this embodiment includes: at least one processor 30, a memory 31 and a computer program 32 stored in the memory 31 and executable on the at least one processor 30, the processor 30 implementing the steps of any of the various method embodiments described above when executing the computer program 32.
The server 3 may be a computing device such as a cloud server. The server may include, but is not limited to, a processor 30, a memory 31. Those skilled in the art will appreciate that fig. 3 is merely an example of the server 3, and does not constitute a limitation of the server 3, and may include more or less components than those shown, or combine some components, or different components, such as input and output devices, network access devices, etc.
The processor 30 may be a Central Processing Unit (CPU); the processor 30 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 31 may in some embodiments be an internal storage unit of the server 3, such as a hard disk or a memory of the server 3. The memory 31 may also be an external storage device of the server 3 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card) provided on the server 3. Further, the memory 31 may also include both an internal storage unit and an external storage device of the server 3. The memory 31 is used for storing an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 31 may also be used to temporarily store data that has been output or is to be output.
It should be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional units and modules is only used for illustration, and in practical applications, the above function distribution may be performed by different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the above described functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The present application further provides a computer readable storage medium which stores a computer program; when the computer program is executed by a processor, the steps in the above method embodiments are implemented.
The embodiments of the present application further provide a computer program product which, when run on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above may be implemented by instructing relevant hardware through a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments described above may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, or a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, computer-readable media may not include electrical carrier signals or telecommunication signals in accordance with legislation and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical function division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A laser point cloud fusion method, characterized by comprising the following steps:
Acquiring point clouds to be processed; the point clouds to be processed comprise a first point cloud to be processed and a second point cloud to be processed;
and fusing the first point cloud to be processed and the second point cloud to be processed to obtain a target point cloud.
2. The laser point cloud fusion method of claim 1, wherein obtaining the point cloud to be processed comprises:
acquiring a first image to be processed and first laser data;
acquiring a second image to be processed and second laser data;
calibrating a coordinate system of a first coordinate system where the first laser data are located and a second coordinate system where the second laser data are located;
matching the first image to be processed and the second image to be processed;
generating a first point cloud to be processed according to the matched first image to be processed and the first laser data after the coordinate system is calibrated;
and generating a second point cloud to be processed according to the matched second image to be processed and the second laser data after the coordinate system is calibrated.
3. The laser point cloud fusion method of claim 2, wherein matching the first to-be-processed image and the second to-be-processed image comprises:
extracting a first feature point of the first image to be processed;
extracting a second feature point of the second image to be processed;
and screening out target characteristic points which are common to the first characteristic points and the second characteristic points.
4. The laser point cloud fusion method of claim 3, wherein fusing the first point cloud to be processed and the second point cloud to be processed to obtain a target point cloud comprises:
acquiring pose information according to the target feature points;
and fusing the first point cloud to be processed and the second point cloud to be processed according to the pose information to obtain a target point cloud.
5. The laser point cloud fusion method of claim 1, wherein after fusing the first point cloud to be processed and the second point cloud to be processed to obtain a target point cloud, the method further comprises:
optimizing the target point cloud.
6. A laser point cloud fusion device, comprising:
the acquisition module is used for acquiring point clouds to be processed; the point clouds to be processed comprise a first point cloud to be processed and a second point cloud to be processed;
and the fusion module is used for fusing the first point cloud to be processed and the second point cloud to be processed to obtain a target point cloud.
7. The laser point cloud fusion apparatus of claim 6, wherein the acquisition module comprises:
the first acquisition submodule is used for acquiring a first image to be processed and first laser data;
the second acquisition submodule is used for acquiring a second image to be processed and second laser data;
the calibration sub-module is used for calibrating a coordinate system of a first coordinate system where the first laser data are located and a second coordinate system where the second laser data are located;
the matching submodule is used for matching the first image to be processed with the second image to be processed;
the first generation submodule is used for generating a first point cloud to be processed according to the matched first image to be processed and the first laser data after the coordinate system is calibrated;
and the second generation submodule is used for generating a second point cloud to be processed according to the matched second image to be processed and the second laser data after the coordinate system is calibrated.
8. The laser point cloud fusion apparatus of claim 7, wherein the matching sub-module comprises:
a first extraction unit, configured to extract a first feature point of the first image to be processed;
the second extraction unit is used for extracting second feature points of the second image to be processed;
and the screening unit is used for screening out the target characteristic points which are common to the first characteristic points and the second characteristic points.
9. A server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 5.
CN202210512885.8A 2022-05-12 2022-05-12 Laser point cloud fusion method and device, server and computer readable storage medium Pending CN114913105A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210512885.8A CN114913105A (en) 2022-05-12 2022-05-12 Laser point cloud fusion method and device, server and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210512885.8A CN114913105A (en) 2022-05-12 2022-05-12 Laser point cloud fusion method and device, server and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114913105A

Family

ID=82766411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210512885.8A Pending CN114913105A (en) 2022-05-12 2022-05-12 Laser point cloud fusion method and device, server and computer readable storage medium

Country Status (1)

Country Link
CN: CN114913105A

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758006A (en) * 2023-05-18 2023-09-15 广州广检建设工程检测中心有限公司 Scaffold quality detection method and device
CN116758006B (en) * 2023-05-18 2024-02-06 广州广检建设工程检测中心有限公司 Scaffold quality detection method and device

Similar Documents

Publication Publication Date Title
CN111145238B (en) Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
CN109064428B (en) Image denoising processing method, terminal device and computer readable storage medium
CN111080526B (en) Method, device, equipment and medium for measuring and calculating farmland area of aerial image
US20190096092A1 (en) Method and device for calibration
CN108711144B (en) Augmented reality method and device
CN109741241B (en) Fisheye image processing method, device, equipment and storage medium
CN112435193B (en) Method and device for denoising point cloud data, storage medium and electronic equipment
CN112927306B (en) Calibration method and device of shooting device and terminal equipment
CN113436338A (en) Three-dimensional reconstruction method and device for fire scene, server and readable storage medium
CN115035235A (en) Three-dimensional reconstruction method and device
CN115393815A (en) Road information generation method and device, electronic equipment and computer readable medium
CN113807451A (en) Panoramic image feature point matching model training method and device and server
CN111383254A (en) Depth information acquisition method and system and terminal equipment
CN113379815A (en) Three-dimensional reconstruction method and device based on RGB camera and laser sensor and server
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
CN114913105A (en) Laser point cloud fusion method and device, server and computer readable storage medium
CN111161348A (en) Monocular camera-based object pose estimation method, device and equipment
CN109034214B (en) Method and apparatus for generating a mark
CN114445583A (en) Data processing method and device, electronic equipment and storage medium
US20230053952A1 (en) Method and apparatus for evaluating motion state of traffic tool, device, and medium
CN116977671A (en) Target tracking method, device, equipment and storage medium based on image space positioning
CN114611635B (en) Object identification method and device, storage medium and electronic device
CN117201708B (en) Unmanned aerial vehicle video stitching method, device, equipment and medium with position information
CN113284074B (en) Method and device for removing target object of panoramic image, server and storage medium
CN114022358A (en) Image splicing method and device for laser camera and dome camera, and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination