CN111553937B - Laser point cloud map construction method, device, equipment and system


Info

Publication number
CN111553937B
Authority
CN
China
Prior art keywords
point cloud
image
pose transformation
laser point
target
Prior art date
2020-04-23
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010327729.5A
Other languages
Chinese (zh)
Other versions
CN111553937A (en)
Inventor
于占海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Reach Automotive Technology Shanghai Co Ltd
Original Assignee
Neusoft Reach Automotive Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2020-04-23
Filing date
2020-04-23
Publication date
2023-11-21
Application filed by Neusoft Reach Automotive Technology Shanghai Co Ltd
Priority to CN202010327729.5A
Publication of CN111553937A
Application granted
Publication of CN111553937B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The disclosure relates to a laser point cloud map construction method, device, equipment and system, and belongs to the technical field of laser point cloud map construction. The laser point cloud map construction method provided by the disclosure offers both high matching precision and high matching speed. Specifically, the method comprises the steps of: acquiring images of a target scene and the laser point clouds corresponding to the images; performing image matching on two frames of the images to obtain a first pose transformation parameter; using the first pose transformation parameter as an initial value of the pose transformation parameter, and performing laser point cloud matching on the two frames of laser point clouds corresponding to the two frames of images to obtain a second pose transformation parameter; and constructing a laser point cloud map according to the two frames of laser point clouds and the second pose transformation parameter.

Description

Laser point cloud map construction method, device, equipment and system
Technical Field
The disclosure relates to the technical field of laser point cloud map construction, in particular to a laser point cloud map construction method, device, equipment and system.
Background
The laser point cloud map is key data in automatic driving technology: it provides the reference basis for positioning a vehicle and planning its driving route. In the related art, a laser point cloud map is constructed by inter-frame matching of multiple frames of laser point clouds. However, the construction method of the related art suffers from low matching accuracy and a slow matching speed.
Disclosure of Invention
The disclosure provides a laser point cloud map construction method, device, equipment and system to overcome the above defects of the related art.
In a first aspect, an embodiment of the present disclosure provides a laser point cloud map construction method. The method comprises the following steps:
acquiring an image of a target scene and a laser point cloud corresponding to the image;
performing image matching on two frames of images to obtain first pose transformation parameters;
using the first pose transformation parameters as initial values of the pose transformation parameters, and performing laser point cloud matching on two frames of laser point clouds corresponding to the two frames of images to obtain second pose transformation parameters;
and constructing a laser point cloud map according to the two frames of laser point clouds and the second pose transformation parameters.
In one embodiment, the two frames of images include a target image and a reference image; the image matching of the two frames of images to obtain a first pose transformation parameter comprises the following steps:
converting the target image into a coordinate system of the reference image to obtain the first pose transformation parameter.
In one embodiment, the two frames of the laser point clouds include a target point cloud and a reference point cloud; the step of performing laser point cloud matching on two frames of laser point clouds corresponding to two frames of images by taking the first pose transformation parameters as initial values of the pose transformation parameters to obtain second pose transformation parameters includes:
converting the target point cloud into a coordinate system of a reference point cloud by taking the first pose transformation parameter as an initial value of the pose transformation parameter;
adjusting the pose transformation parameters according to the matching difference value of the reference point cloud and the target point cloud converted into the coordinate system;
and taking the pose transformation parameters corresponding to the matching difference value smaller than or equal to a set threshold value as the second pose transformation parameters.
In one embodiment, the adjusting the pose transformation parameter according to the matching difference between the reference point cloud and the target point cloud converted into the coordinate system includes:
dividing the reference point cloud into a plurality of spatial cells;
acquiring a first probability density parameter of the reference point cloud in each space cell and a second probability density parameter of the target point cloud in each space cell;
and determining the probability difference value of each space cell according to the first probability density and the second probability density of each space cell, and adjusting the pose transformation parameters according to the probability difference value of all the space cells in the reference point cloud coordinate system.
In one embodiment, the constructing a laser point cloud map according to the two frames of laser point clouds and the second pose transformation parameters includes:
transforming the target point cloud with the second pose transformation parameters, and adding the transformed target point cloud into the reference point cloud.
In one embodiment, the acquiring an image of the target scene and a laser point cloud corresponding to the image includes:
acquiring an image of the target scene through a camera, and acquiring a laser point cloud corresponding to the image through a laser radar; the camera is fixed relative to the laser radar.
In one embodiment, the performing image matching on the two frames of images to obtain a first pose transformation parameter includes:
carrying out feature descriptor matching or optical flow matching on the two frames of images to obtain the first pose transformation parameters.
In a second aspect, an embodiment of the present disclosure provides a laser point cloud map construction apparatus. The apparatus comprises:
the acquisition module is used for acquiring an image of a target scene and a laser point cloud corresponding to the image;
the first matching module is used for carrying out image matching on the two frames of images to obtain first pose transformation parameters;
the second matching module is used for taking the first pose transformation parameters as initial values of the pose transformation parameters, and carrying out laser point cloud matching on the two frames of laser point clouds corresponding to the two frames of images so as to obtain second pose transformation parameters; and
and the construction module is used for constructing a laser point cloud map according to the two frames of laser point clouds and the second pose transformation parameters.
In one embodiment, the two frames of images include a target image and a reference image; the first matching module is specifically configured to: and converting the target image into a coordinate system of the reference image to obtain the first pose transformation parameter.
In one embodiment, the two frames of the laser point clouds include a target point cloud and a reference point cloud; the second matching module includes:
the conversion unit is used for converting the target point cloud into a coordinate system of the reference point cloud by taking the first pose transformation parameter as an initial value of the pose transformation parameters;
the adjusting unit is used for adjusting the pose transformation parameters according to the matching difference value of the reference point cloud and the target point cloud converted into the coordinate system;
and the determining unit is used for determining the pose transformation parameters corresponding to the matching difference value smaller than or equal to a set threshold value as the second pose transformation parameters.
In one embodiment, the adjustment unit comprises:
a dividing subunit, configured to divide the reference point cloud into a plurality of spatial cells;
an acquisition subunit, configured to acquire a first probability density parameter of the reference point cloud in each of the spatial cells, and a second probability density parameter of the target point cloud in each of the spatial cells;
and the adjustment subunit is used for determining the probability difference value of each space cell according to the first probability density and the second probability density of each space cell and adjusting the pose transformation parameters according to the probability difference value of all the space cells in the reference point cloud coordinate system.
In one embodiment, the construction module is specifically configured to transform the target point cloud with the second pose transformation parameter, and add the transformed target point cloud to the reference point cloud.
In one embodiment, the obtaining module is specifically configured to: acquire an image of the target scene through a camera, and acquire the laser point cloud corresponding to the image through a laser radar.
In one embodiment, the first matching module is specifically configured to: perform feature descriptor matching or optical flow matching on the two frames of images to obtain the first pose transformation parameters.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a memory storing executable instructions; and
a processor executing the executable instructions stored in the memory to implement the steps of the laser point cloud map construction method provided in the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a laser point cloud map construction system, including:
a driving device (e.g., a vehicle),
the laser radar is fixedly arranged on the driving device and is used for acquiring laser point clouds of a target scene;
the camera is fixedly arranged on the driving device and used for acquiring an image of the target scene;
and the electronic device provided in the third aspect.
The laser point cloud map construction method, device, equipment and system provided by the disclosure have at least the following beneficial effects:
by adopting the laser point cloud map construction method provided by the embodiment of the disclosure, in the whole laser point cloud map construction process, each two frames of laser point clouds are subjected to laser point cloud matching by taking the first pose transformation parameters acquired by corresponding image matching as initial values. In this way, a more optimized initial value is provided for laser point cloud matching through image matching, so that the calculation amount of laser point cloud matching is reduced, the matching accuracy is improved, and the defects in the related technology are overcome.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart of a laser point cloud map construction method according to an exemplary embodiment;
FIG. 2 is a flow chart of a laser point cloud map construction method according to another exemplary embodiment;
FIG. 3 is a flow chart of a laser point cloud map construction method according to another exemplary embodiment;
FIG. 4 is a block diagram of a laser point cloud map construction apparatus according to an exemplary embodiment;
FIG. 5 is a block diagram of a laser point cloud map construction apparatus according to another exemplary embodiment;
FIG. 6 is a block diagram of a laser point cloud map construction apparatus according to another exemplary embodiment;
FIG. 7 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. Unless defined otherwise, technical or scientific terms used in this disclosure should be given the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The terms "a" or "an" and the like as used in the description and the claims do not denote a limitation of quantity, but rather denote the presence of at least one. Unless otherwise indicated, the terms "comprises," "comprising," and the like are intended to cover the presence of elements or articles recited as being "comprising" or "including," and equivalents thereof, without excluding other elements or articles. The terms "connected" or "connected," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
As used in this disclosure and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
In the related art, one frame of the two frames of laser point clouds used for inter-frame matching is a target laser point cloud, and the other frame is a reference laser point cloud. Since the scanning device collects the multiple frames of laser point clouds while moving, the target laser point cloud and the reference laser point cloud lie in different coordinate systems, so a coordinate system conversion between them is necessary. Typically, pose transformation parameters are used to transform the target laser point cloud into the coordinate system of the reference laser point cloud, and the pose-transformed target laser point cloud is then added to the reference laser point cloud. Proceeding frame by frame in this way, multiple frames of laser point clouds are accumulated in the same coordinate system to construct a laser point cloud map.
However, the related art places no constraint on the initial value of the pose transformation parameters: the initial value is usually chosen from an empirical value, or even simply set to 0. As a result, the inter-frame matching process suffers from a heavy computational load and low matching accuracy.
To address the above problems, the embodiments of the present disclosure provide a method, an apparatus, a device, and a system for constructing a laser point cloud map. Fig. 1 is a flow chart illustrating a laser point cloud map construction method according to an exemplary embodiment. As shown in fig. 1, the laser point cloud map construction method provided by the embodiment of the present disclosure includes:
step 101, acquiring an image of a target scene and a laser point cloud corresponding to the image.
Illustratively, a scanning device is employed to acquire the images and laser point clouds of a target scene. The scanning device comprises a camera and a laser radar. The camera outputs images based on visible light; the laser radar emits infrared light, receives the infrared light reflected by external targets, and outputs laser point clouds according to the received light.
In step 101, an image of the target scene is acquired by the camera, and the laser point cloud corresponding to the image is acquired by the laser radar. Moreover, the camera and the laser radar are fixed relative to each other, so the coordinate system of the images acquired by the camera and the coordinate system of the laser point clouds acquired by the laser radar keep a fixed relative relationship. Consequently, the pose transformation between adjacent frame images corresponds to the pose transformation between the corresponding adjacent frames of laser point clouds.
Here, the laser point cloud corresponding to one frame of image of the target scene refers to the laser point cloud acquired simultaneously with that image.
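To make this correspondence concrete, here is a minimal sketch, assuming a known 4x4 homogeneous camera-to-lidar extrinsic matrix (T_cam_to_lidar is a hypothetical name, not from the patent), of how an inter-frame pose transformation estimated from the images maps to the corresponding transformation between the laser point clouds:

```python
import numpy as np

def camera_delta_to_lidar_delta(T_cam_delta: np.ndarray,
                                T_cam_to_lidar: np.ndarray) -> np.ndarray:
    """Map an inter-frame pose transformation estimated between two
    camera frames to the equivalent transformation between the two
    corresponding lidar frames.

    Because the camera and the lidar are rigidly fixed to each other,
    the two transformations are conjugate by the constant extrinsic
    matrix: T_lidar = T_ext @ T_cam @ inv(T_ext).
    """
    return T_cam_to_lidar @ T_cam_delta @ np.linalg.inv(T_cam_to_lidar)
```

The two transformations are conjugate, so they share, for example, the rotation angle; the sketch simply makes the change of coordinates behind the text's "corresponds to" explicit.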
Step 102, performing image matching on the two frames of images to obtain a first pose transformation parameter.
The two frames of images comprise a target image and a reference image, and the first pose transformation parameter represents the pose transformation that matches the target image into the coordinate system of the reference image. The specific manner of image matching is not limited; for example, feature descriptor matching or an optical flow method may be used to match the two frames of images.
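As one possible realization of this step (the patent names the options but fixes no algorithm), here is a minimal sketch using OpenCV ORB feature-descriptor matching to recover a relative pose between the target image and the reference image; the camera intrinsic matrix K is assumed to be known:

```python
import cv2
import numpy as np

def estimate_first_pose(tgt_img, ref_img, K):
    """Estimate first pose transformation parameters (R, t) between the
    target image and the reference image by feature descriptor matching."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(tgt_img, None)
    kp2, des2 = orb.detectAndCompute(ref_img, None)

    # Brute-force Hamming matching with Lowe's ratio test to discard
    # ambiguous correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Essential matrix with RANSAC, then decomposition into a rotation
    # and a unit-scale translation.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

Note that a monocular essential matrix determines the translation only up to scale; this is acceptable here precisely because the result serves only as an initial value that the subsequent laser point cloud matching refines.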
Step 103, using the first pose transformation parameter as an initial value of the pose transformation parameter, and performing laser point cloud matching on two frames of laser point clouds corresponding to the two frames of images to obtain a second pose transformation parameter.
The two frames of laser point clouds comprise a target point cloud and a reference point cloud: the target point cloud corresponds to the target image, and the reference point cloud corresponds to the reference image. Matching the conversion of the target image into the coordinate system of the reference image in step 102, the second pose transformation parameters in step 103 are the pose transformation parameters that convert the target point cloud into the coordinate system of the reference point cloud.
Fig. 2 is a flow chart illustrating step 103 according to an exemplary embodiment. As an example, as shown in fig. 2, step 103 is implemented by the following steps.
Step 1031, converting the target point cloud into a coordinate system of the reference point cloud by taking the first pose transformation parameter as an initial value of the pose transformation parameter.
In a point cloud (whether the target point cloud or the reference point cloud), a single target object corresponds to a plurality of scan points. In step 1031, each scan point in the target point cloud is converted into the coordinate system of the reference point cloud according to the first pose transformation parameters.
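A short sketch of this conversion, with the pose transformation parameters packed into a 4x4 homogeneous matrix T (an illustrative helper, not taken from the patent):

```python
import numpy as np

def transform_point_cloud(points: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous pose transformation T to an (N, 3) array
    of scan points, expressing them in the reference coordinate system."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (T @ homogeneous.T).T[:, :3]
```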
Step 1032, adjusting pose transformation parameters according to the matching difference between the reference point cloud and the target point cloud converted into the coordinate system.
The matching difference between the target point cloud and the reference point cloud represents how well the two point clouds match under the current pose transformation parameters. In step 1032, the pose transformation parameters are adjusted according to this difference. After each adjustment, the target point cloud is re-converted with the adjusted pose transformation parameters as in step 1031. In this way, the correspondence between a series of matching differences and pose transformation parameters is obtained.
Fig. 3 is a flow chart illustrating step 1032 according to an exemplary embodiment. As an example, as shown in fig. 3, step 1032 is implemented specifically using the following steps.
Step 1032a, dividing the reference point cloud into a plurality of spatial cells.
Each spatial cell has the same size, so the division amounts to gridding the reference point cloud. The scan points of the reference point cloud are thus distributed among different spatial cells; the scan points of the target point cloud, once converted into the reference point cloud coordinate system according to the pose transformation parameters, are likewise distributed among these spatial cells.
Step 1032b, obtaining a first probability density parameter of the reference point cloud within each spatial cell and a second probability density parameter of the target point cloud within each spatial cell.
The first probability density parameter characterizes the normal distribution probability of the scan points of the reference point cloud within each spatial cell; the second probability density parameter characterizes the normal distribution probability of the scan points of the target point cloud within each spatial cell.
Step 1032c, determining a probability difference value of each space cell according to the first probability density and the second probability density of each space cell, and adjusting pose transformation parameters according to the probability difference values of all the space cells in the reference point cloud coordinate system.
The sum of probability difference values of all the space cells in the reference point cloud coordinate system is the matching difference value of the target point cloud and the reference point cloud.
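Steps 1032a to 1032c amount to a normal-distributions-transform (NDT) style comparison. The following is a simplified sketch under that reading; the cell size, the minimum point count per cell, and the particular difference measure are illustrative assumptions, not values taken from the patent:

```python
import numpy as np
from collections import defaultdict

def cell_statistics(points, cell_size=1.0):
    """Grid the space into cubic cells and fit a normal distribution
    (mean and covariance) to the scan points falling in each cell."""
    cells = defaultdict(list)
    for p in points:
        cells[tuple(np.floor(p / cell_size).astype(int))].append(p)
    stats = {}
    for idx, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) >= 5:  # fewer points give a degenerate covariance
            stats[idx] = (pts.mean(axis=0), np.cov(pts.T))
    return stats

def matching_difference(ref_stats, tgt_stats):
    """Sum a per-cell difference between the reference and target
    distributions; a distance between means plus a Frobenius-norm
    covariance discrepancy stands in for the probability difference."""
    diff = 0.0
    for idx, (mu_r, cov_r) in ref_stats.items():
        if idx in tgt_stats:
            mu_t, cov_t = tgt_stats[idx]
            diff += np.linalg.norm(mu_r - mu_t)
            diff += np.linalg.norm(cov_r - cov_t, ord="fro")
    return diff
```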
With continued reference to fig. 2, step 1033 is performed after step 1032, as follows:
step 1033, using the pose transformation parameters corresponding to the matching difference value smaller than or equal to the set threshold value as the second pose transformation parameters.
The smaller the matching difference value is, the higher the matching degree between the current target point cloud and the reference point cloud is. The specific value of the set threshold is set according to the map accuracy requirement.
In summary, because the first pose transformation parameters are obtained from image matching, using them in step 103 as the initial value for matching the target point cloud against the reference point cloud, rather than choosing an initial value arbitrarily or from an empirical value, amounts to starting the laser point cloud matching from a more accurate initial value. The matching of the target point cloud and the reference point cloud can therefore be completed more rapidly and more accurately to obtain the second pose transformation parameters.
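Putting the pieces together, here is a hedged sketch of the whole adjustment loop of step 103, reusing the illustrative helpers sketched above (transform_point_cloud, cell_statistics, matching_difference). The six pose parameters are refined by a local optimizer seeded with the image-derived initial value, and the optimum is accepted as the second pose transformation parameters once the matching difference meets the set threshold. A derivative-free optimizer is used here only for simplicity; a production NDT-style implementation would typically use Newton iterations on a differentiable score.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def pose_to_matrix(x):
    """Pack six pose parameters (a rotation vector rx, ry, rz followed
    by a translation tx, ty, tz) into a 4x4 homogeneous matrix."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(x[:3]).as_matrix()
    T[:3, 3] = x[3:]
    return T

def match_point_clouds(target_points, ref_points, x0, threshold=1e-2):
    """Refine the pose from the image-derived initial value x0 and
    return it once the matching difference meets the set threshold."""
    ref_stats = cell_statistics(ref_points)

    def score(x):
        moved = transform_point_cloud(target_points, pose_to_matrix(x))
        return matching_difference(ref_stats, cell_statistics(moved))

    result = minimize(score, x0, method="Nelder-Mead")
    return result.x if score(result.x) <= threshold else None

# The initial value x0 could come from the image matching sketch above:
# x0 = np.hstack([Rotation.from_matrix(R).as_rotvec(), t.ravel()])
```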
With continued reference to fig. 1, step 104 is performed after step 103, specifically as follows:
and 104, constructing a laser point cloud map according to the two frames of laser point clouds and the second pose transformation parameters.
For example, the target point cloud is transformed with the second pose transformation parameters, and the transformed target point cloud is added to the reference point cloud.
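Continuing the sketches above, the map update itself is then a transform-and-concatenate; all names here refer to the hypothetical helpers and variables of the earlier sketches, with x_opt standing for the second pose transformation parameters:

```python
import numpy as np

# Hypothetical continuation of the helpers sketched above: transform the
# target point cloud with the second pose transformation parameters and
# append it to the reference point cloud.
map_points = np.vstack([ref_points,
                        transform_point_cloud(target_points,
                                              pose_to_matrix(x_opt))])
```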
By adopting the laser point cloud map construction method provided by the embodiments of the present disclosure, throughout the laser point cloud map construction process, every two frames of laser point clouds are matched using, as the initial value, the first pose transformation parameters obtained from the corresponding image matching. In this way, image matching provides a better-optimized initial value for laser point cloud matching, which reduces the calculation amount of the laser point cloud matching and improves the matching accuracy, overcoming the defects of the related art.
Based on the laser point cloud map construction method, the embodiment of the disclosure also provides a laser point cloud map construction device. Fig. 4 is a block diagram of a laser point cloud mapping apparatus according to an exemplary embodiment. As shown in fig. 4, the apparatus includes: an acquisition module 410, a first matching module 420, a second matching module 430, and a construction module 440.
The acquisition module 410 is configured to acquire an image of a target scene and a laser point cloud corresponding to the image.
The first matching module 420 is configured to perform image matching on the two frames of images to obtain a first pose transformation parameter.
The second matching module 430 is configured to perform laser point cloud matching on two frames of laser point clouds corresponding to the two frames of images with the first pose transformation parameter as an initial value of the pose transformation parameter to obtain a second pose transformation parameter.
The construction module 440 is configured to construct a laser point cloud map according to the two-frame laser point cloud and the second pose transformation parameter.
In one embodiment, the two frames of laser point clouds include a target point cloud and a reference point cloud. Fig. 5 is a block diagram of a laser point cloud mapping apparatus according to another exemplary embodiment. As shown in fig. 5, the second matching module 430 includes: a conversion unit 431, an adjustment unit 432, and a determination unit 433.
The conversion unit 431 is configured to convert the target point cloud into the coordinate system of the reference point cloud with the first pose transformation parameter as an initial value of the pose transformation parameters;
the adjusting unit 432 is configured to adjust pose transformation parameters according to a matching difference between the reference point cloud and the target point cloud converted into the coordinate system;
the determining unit 433 is configured to determine, as a second pose transformation parameter, a pose transformation parameter corresponding to when the matching difference is less than or equal to the set threshold.
In one embodiment, fig. 6 is a block diagram of a laser point cloud mapping apparatus shown according to another exemplary embodiment. As shown in fig. 6, the adjusting unit 432 includes: a dividing subunit 4321, an acquisition subunit 4322, and an adjustment subunit 4323.
The dividing subunit 4321 is configured to divide the reference point cloud into a plurality of spatial cells.
The acquiring subunit 4322 is configured to acquire a first probability density parameter of the reference point cloud in each spatial cell and a second probability density parameter of the target point cloud in each spatial cell.
The adjustment subunit 4323 is configured to determine a probability difference value of each spatial cell according to the first probability density and the second probability density of each spatial cell, and adjust the pose transformation parameter according to the probability difference value of all spatial cells in the reference point cloud coordinate system.
In one embodiment, the obtaining module 410 is specifically configured to: and acquiring an image of the target scene through a camera, and acquiring laser point clouds corresponding to the image through a laser radar.
In one embodiment, the first matching module 420 is specifically configured to: and performing feature descriptor matching or optical flow method matching on the two frames of images to obtain a first pose transformation parameter.
In one embodiment, the construction module 440 is specifically configured to transform the target point cloud with the second pose transformation parameter, and add the transformed target point cloud to the reference point cloud.
The embodiment of the disclosure also provides electronic equipment. Fig. 7 is a block diagram of an electronic device, according to an example embodiment. As shown in fig. 7, the electronic device includes: memory and a processor.
The memory has stored thereon executable instructions. The processor is configured to execute the executable instructions stored in the memory to implement the steps of the laser point cloud map construction method provided above.
Furthermore, embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as any one of the above.
The embodiments of the present disclosure also provide a laser point cloud map construction system. The laser point cloud map construction system comprises: a moving device (e.g., a vehicle), a lidar and a camera fixedly mounted on the moving device, and the electronic device provided above. The camera is used for acquiring images of the target scene based on visible light, the lidar is used for acquiring laser point clouds of the target scene, and the electronic device constructs a laser point cloud map based on the images acquired by the camera and the laser point clouds acquired by the lidar.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in: digital electronic circuitry, tangibly embodied computer software or firmware, computer hardware including the structures disclosed in this specification and structural equivalents thereof, or a combination of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible, non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or additionally, the program instructions may be encoded on a manually-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode and transmit information to suitable receiver apparatus for execution by data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Computers suitable for executing computer programs include, for example, general purpose and/or special purpose microprocessors, or any other type of central processing unit. Typically, the central processing unit will receive instructions and data from a read only memory and/or a random access memory. The essential elements of a computer include a central processing unit for carrying out or executing instructions and one or more memory devices for storing instructions and data. Typically, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks, etc. However, a computer does not have to have such a device. Furthermore, the computer may be embedded in another device, such as a mobile phone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device such as a Universal Serial Bus (USB) flash drive, to name a few.
Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices including, for example, semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., internal hard disk or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features of specific embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination.
Similarly, although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Furthermore, the processes depicted in the accompanying drawings are not necessarily required to be in the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (8)

1. A laser point cloud map construction method, characterized by comprising the following steps:
acquiring an image of a target scene and a laser point cloud corresponding to the image;
performing image matching on two frames of images to obtain a first pose transformation parameter, wherein the two frames of images comprise a target image and a reference image, and the two frames of laser point clouds comprise: a target point cloud corresponding to the target image, and a reference point cloud corresponding to the reference image;
converting the target point cloud into a coordinate system of a reference point cloud by taking the first pose transformation parameter as an initial value of the pose transformation parameters;
dividing the reference point cloud into a plurality of spatial cells;
acquiring a first probability density parameter of the reference point cloud in each space cell and a second probability density parameter of the target point cloud in each space cell;
determining a probability difference value of each space cell according to the first probability density and the second probability density of each space cell, and adjusting pose transformation parameters according to the probability difference value of all the space cells in the reference point cloud coordinate system;
taking the pose transformation parameters corresponding to the matching difference value smaller than or equal to a set threshold value as second pose transformation parameters;
and constructing a laser point cloud map according to the two frames of laser point clouds and the second pose transformation parameters.
2. The method according to claim 1, wherein performing image matching on the two frames of images to obtain a first pose transformation parameter includes:
and converting the target image into a coordinate system of the reference image to obtain the first pose transformation parameter.
3. The method of claim 1, wherein constructing a laser point cloud map from the two frames of laser point clouds and the second pose transformation parameters comprises:
transforming the target point cloud with the second pose transformation parameters, and adding the transformed target point cloud into the reference point cloud.
4. The method of claim 1, wherein the acquiring an image of a target scene and a laser point cloud corresponding to the image comprises:
acquiring an image of the target scene through a camera, and acquiring a laser point cloud corresponding to the image through a laser radar; the camera is fixed relative to the laser radar.
5. The method according to claim 1, wherein performing image matching on the two frames of images to obtain a first pose transformation parameter includes:
and carrying out feature descriptor matching or optical flow method matching on the two frames of images to obtain the first pose transformation parameters.
6. A laser point cloud map construction apparatus, the apparatus comprising:
the acquisition module is used for acquiring an image of a target scene and a laser point cloud corresponding to the image;
the first matching module is used for performing image matching on two frames of images to obtain a first pose transformation parameter, the two frames of images comprise a target image and a reference image, and the two frames of laser point clouds comprise: a target point cloud corresponding to the target image, and a reference point cloud corresponding to the reference image;
the second matching module is used for converting the target point cloud into a coordinate system of a reference point cloud by taking the first pose transformation parameter as an initial value of the pose transformation parameter; dividing the reference point cloud into a plurality of spatial cells; acquiring a first probability density parameter of the reference point cloud in each space cell and a second probability density parameter of the target point cloud in each space cell; determining a probability difference value of each space cell according to the first probability density and the second probability density of each space cell, and adjusting pose transformation parameters according to the probability difference value of all the space cells in the reference point cloud coordinate system; taking the pose transformation parameters corresponding to the matching difference value smaller than or equal to a set threshold value as second pose transformation parameters; and
and the construction module is used for constructing a laser point cloud map according to the two frames of laser point clouds and the second pose transformation parameters.
7. An electronic device, the electronic device comprising:
a memory storing executable instructions; and
a processor executing executable instructions stored in the memory to implement the steps of the method of any one of claims 1 to 5.
8. A laser point cloud map construction system, the system comprising:
a driving device,
the laser radar is fixedly arranged on the driving device and is used for acquiring laser point clouds of a target scene;
the camera is fixedly arranged on the driving device and used for acquiring an image of the target scene;
and the electronic device of claim 7.
CN202010327729.5A 2020-04-23 2020-04-23 Laser point cloud map construction method, device, equipment and system Active CN111553937B (en)

Priority Applications (1)

Application Number: CN202010327729.5A (published as CN111553937B)
Priority Date: 2020-04-23
Filing Date: 2020-04-23
Title: Laser point cloud map construction method, device, equipment and system

Applications Claiming Priority (1)

Application Number: CN202010327729.5A (published as CN111553937B)
Priority Date: 2020-04-23
Filing Date: 2020-04-23
Title: Laser point cloud map construction method, device, equipment and system

Publications (2)

Publication Number Publication Date
CN111553937A CN111553937A (en) 2020-08-18
CN111553937B (en) 2023-11-21

Family

ID=72005811

Family Applications (1)

Application Number: CN202010327729.5A (Active; granted as CN111553937B)
Title: Laser point cloud map construction method, device, equipment and system

Country Status (1)

Country Link
CN (1) CN111553937B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111966109B (en) * 2020-09-07 2021-08-17 中国南方电网有限责任公司超高压输电公司天生桥局 Inspection robot positioning method and device based on flexible direct current converter station valve hall
CN112767458B (en) * 2020-11-13 2022-07-29 武汉中海庭数据技术有限公司 Method and system for registering laser point cloud and image
CN112712561A (en) * 2021-01-05 2021-04-27 北京三快在线科技有限公司 Map construction method and device, storage medium and electronic device
CN113093221A (en) * 2021-03-31 2021-07-09 东软睿驰汽车技术(沈阳)有限公司 Generation method and device of an occupancy grid map
CN113503883B (en) * 2021-06-22 2022-07-19 北京三快在线科技有限公司 Method for collecting data for constructing map, storage medium and electronic equipment

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069840A (en) * 2015-09-14 2015-11-18 南开大学 Three-dimensional normal distribution transformation point cloud registration method based on curvature feature
CN105702151A (en) * 2016-03-31 2016-06-22 百度在线网络技术(北京)有限公司 Indoor map constructing method and device
CN107220995A (en) * 2017-04-21 2017-09-29 西安交通大学 An improved fast ICP point cloud registration algorithm based on ORB image features
CN109918464A (en) * 2019-02-26 2019-06-21 东软睿驰汽车技术(沈阳)有限公司 The storage method and device and call method and device of a kind of cloud map
CN110400363A (en) * 2018-04-24 2019-11-01 北京京东尚科信息技术有限公司 Map constructing method and device based on laser point cloud
CN110428467A (en) * 2019-07-30 2019-11-08 四川大学 A robot localization method combining a camera, an IMU and a laser radar
CN110487286A (en) * 2019-08-09 2019-11-22 上海电器科学研究所(集团)有限公司 A robot pose determination method based on the fusion of point feature projection and laser point cloud
CN110645998A (en) * 2019-09-10 2020-01-03 上海交通大学 Dynamic object-free map segmentation establishing method based on laser point cloud
CN110658530A (en) * 2019-08-01 2020-01-07 北京联合大学 Map construction method and system based on double-laser-radar data fusion and map
CN110689576A (en) * 2019-09-29 2020-01-14 桂林电子科技大学 Automated-warehouse-based dynamic 3D point cloud normal distribution transform AGV positioning method
CN111045017A (en) * 2019-12-20 2020-04-21 成都理工大学 Method for constructing transformer substation map of inspection robot by fusing laser and vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218817B (en) * 2013-04-19 2015-11-18 深圳先进技术研究院 The dividing method of plant organ point cloud and system


Also Published As

Publication number Publication date
CN111553937A (en) 2020-08-18

Similar Documents

Publication Publication Date Title
CN111553937B (en) Laser point cloud map construction method, device, equipment and system
CN109188457B (en) Object detection frame generation method, device, equipment, storage medium and vehicle
CN111060101B (en) Vision-assisted distance SLAM method and device and robot
CN108717710B (en) Positioning method, device and system in indoor environment
US11182917B2 (en) Stereo camera depth determination using hardware accelerator
WO2021139176A1 (en) Pedestrian trajectory tracking method and apparatus based on binocular camera calibration, computer device, and storage medium
US10559095B2 (en) Image processing apparatus, image processing method, and medium
JP7227969B2 (en) Three-dimensional reconstruction method and three-dimensional reconstruction apparatus
WO2021016854A1 (en) Calibration method and device, movable platform, and storage medium
CN114217303B (en) Target positioning and tracking method and device, underwater robot and storage medium
US11657485B2 (en) Method for expanding image depth and electronic device
CN112907573B (en) Depth completion method based on 3D convolution
CN114494383B (en) Light field depth estimation method based on Richard-Lucy iteration
CN112734931A (en) Method and system for assisting point cloud target detection
US20220277480A1 (en) Position estimation device, vehicle, position estimation method and position estimation program
CN115164900A (en) Omnidirectional camera based visual aided navigation method and system in urban environment
KR101806453B1 (en) Moving object detecting apparatus for unmanned aerial vehicle collision avoidance and method thereof
JP2022087822A (en) Radar tracking method, noise removal method, device and instrument
CN115965961B (en) Local-global multi-mode fusion method, system, equipment and storage medium
CN110651475A (en) Hierarchical data organization for dense optical flows
CN115883969B (en) Unmanned aerial vehicle shooting method, unmanned aerial vehicle shooting device, unmanned aerial vehicle shooting equipment and unmanned aerial vehicle shooting medium
CN116342677A (en) Depth estimation method, device, vehicle and computer program product
CN111504335B (en) Map construction method and device, electronic equipment and storage medium
CN115601275A (en) Point cloud augmentation method and device, computer readable storage medium and terminal equipment
US11856284B2 (en) Method of controlling a portable device and a portable device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant