CN115329111A - Image feature library construction method and system based on point cloud and image matching - Google Patents

Image feature library construction method and system based on point cloud and image matching

Info

Publication number
CN115329111A
Authority
CN
China
Prior art keywords
image
point
point cloud
feature
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211237318.2A
Other languages
Chinese (zh)
Other versions
CN115329111B (en)
Inventor
王薇薇
任宇飞
薄涵文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu Aerospace Information Research Institute
Original Assignee
Qilu Aerospace Information Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu Aerospace Information Research Institute filed Critical Qilu Aerospace Information Research Institute
Priority to CN202211237318.2A priority Critical patent/CN115329111B/en
Publication of CN115329111A publication Critical patent/CN115329111A/en
Application granted granted Critical
Publication of CN115329111B publication Critical patent/CN115329111B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/51 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention belongs to the field of visual positioning and provides an image feature library construction method and system based on point cloud and image matching. The method comprises: acquiring an image, POS data, and point cloud data; performing feature extraction on the acquired image in combination with the POS data to obtain feature point descriptors and the image point coordinates of the feature points; performing point cloud mapping on the acquired point cloud data to obtain the image point coordinates of the point cloud data; performing coordinate matching calculation between the image point coordinates of the feature points and the image point coordinates of the point cloud data to obtain the mapping relationship among the feature points, the image point coordinates of the feature points, and the point cloud coordinates; and constructing an image feature library according to this mapping relationship. By matching the image point coordinates corresponding to the point cloud data with the image point coordinates corresponding to the feature points, the invention unifies the feature points, the image coordinates, and the object-space three-dimensional coordinates, greatly improving the accuracy of the object-space three-dimensional coordinates corresponding to the feature points and the quality of the constructed image feature library.

Description

Image feature library construction method and system based on point cloud and image matching
Technical Field
The invention belongs to the technical field of visual positioning, and particularly relates to an image feature library construction method and system based on point cloud and image matching.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Visual positioning technology is widely applied in fields such as automatic driving, indoor and outdoor navigation, scenic-spot restoration enhancement, intelligent sand tables, and industrial inspection. Active visual positioning is usually based on a pre-built image feature library: the image coordinates and object-space three-dimensional coordinates of feature points are obtained by feature point matching, and the exterior orientation elements of the user at the moment of photographing, namely three position parameters and three attitude parameters, are then back-calculated using the collinearity equations.
At present, traditional image feature library construction generally uses only the feature information extracted from images, so the accuracy of the resulting object-space three-dimensional coordinates is low. The library is built from a pre-collected sequence of scene images and mainly consists of image feature points, image point coordinates, and object-space three-dimensional point coordinates, where the object-space coordinates are derived through the collinearity equations from the transformation matrices between images, the camera intrinsic parameters, and the feature point coordinates. Affected by factors such as image matching accuracy and image coordinate calculation accuracy, the object-space three-dimensional coordinates obtained in this way are not highly accurate.
Disclosure of Invention
In order to solve the problems, the invention provides an image feature library construction method and system based on point cloud and image matching.
According to some embodiments, a first aspect of the present invention provides an image feature library construction method based on point cloud and image matching, which adopts the following technical solutions:
an image feature library construction method based on point cloud and image matching comprises the following steps:
acquiring an image, POS data and point cloud data;
performing feature extraction on the acquired image in combination with the POS data to obtain feature point descriptors and the image point coordinates of the feature points;
performing point cloud mapping on the acquired point cloud data to obtain the image point coordinates of the point cloud data;
performing coordinate matching calculation between the image point coordinates of the feature points and the image point coordinates of the point cloud data to obtain the mapping relationship among the feature points, the image point coordinates of the feature points, and the point cloud coordinates;
and constructing an image feature library according to this mapping relationship.
Further, the feature extraction is performed on the acquired image in combination with the POS data using an HOG feature extraction algorithm to obtain the feature point descriptors and the image point coordinates of the feature points, specifically:
converting the acquired image to grayscale, and normalizing the color space of the grayscale image using Gamma correction;
calculating the gradient of each pixel of the normalized image and dividing the image into a number of cells;
accumulating a gradient histogram for each cell to form the feature descriptor of that cell;
grouping several cells into a block, and concatenating the descriptors of all cells in the block to form the HOG feature point descriptor of the block;
and obtaining the image point coordinates of the feature points by combining the POS data with the correspondence between the feature point descriptors and the feature point coordinates.
Further, the feature points are a set of all feature points obtained by feature extraction of the image.
Further, the image point coordinates of the point cloud data refer to the 2D image coordinates of the intersection point at which each point in the point cloud data coincides with the image after coordinate transformation.
Further, performing point cloud mapping on the acquired point cloud data to obtain image point coordinates of the point cloud data, specifically:
transforming the point cloud data to a camera coordinate system through coordinate transformation;
obtaining corresponding image point coordinates of the point cloud data in a camera coordinate system, namely the image point coordinates of the point cloud data;
and establishing a correlation between the point cloud data and the image according to the image point coordinates of the point cloud data.
Further, the coordinate matching calculation is performed according to the image point coordinates of the feature points and the image point coordinates of the point cloud data to obtain the mapping relationship among the feature points, the image point coordinates of the feature points, and the point cloud coordinates, and specifically includes:
comparing the consistency between the image point coordinates of the feature points and the image point coordinates of the point cloud data by difference calculation;
if the point cloud data contain several points whose image point coordinate differences from a feature point are non-zero but within a threshold range, performing an adjustment calculation on these points to obtain unique point cloud data and determining the relationship between the unique point cloud data and the feature point;
and, based on the unique relationship between the point cloud data and the feature points, combining the coordinates of the point cloud data with the image point coordinates of the feature points to obtain the mapping relationship among the feature point descriptors, the image point coordinates of the feature points, and the point cloud coordinates.
Further, constructing the image feature library according to the mapping relationship among the feature points, the image point coordinates of the feature points, and the point cloud coordinates includes:
creating a feature point descriptor, an image point coordinate and a point cloud coordinate field for the image feature library;
and correspondingly storing the feature point descriptor, the image point coordinate and the point cloud coordinate into corresponding fields.
According to some embodiments, a second aspect of the present invention provides an image feature library construction system based on point cloud and image matching, which adopts the following technical solutions:
an image feature library construction system based on point cloud and image matching comprises:
a data acquisition module configured to acquire an image, POS data, and point cloud data;
the image feature extraction module is configured to perform feature extraction on the acquired image by combining POS data to obtain a feature point descriptor and an image point coordinate of a feature point;
the point cloud mapping module is configured to perform point cloud mapping on the acquired point cloud data to obtain image point coordinates of the point cloud data;
the coordinate matching module is configured to perform coordinate matching calculation according to the image point coordinates of the feature points and the image point coordinates of the point cloud data to obtain the mapping relation among the feature points, the image point coordinates of the feature points and the point cloud coordinates;
and the image feature library construction module is configured to construct an image feature library according to the feature points, the image point coordinates of the feature points and the mapping relation of the point cloud coordinates.
According to some embodiments, a third aspect of the invention provides a computer-readable storage medium.
A computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in a method for constructing an image feature library based on point cloud and image matching according to the first aspect.
According to some embodiments, a fourth aspect of the invention provides a computer apparatus.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the steps of the method for constructing an image feature library based on point cloud and image matching according to the first aspect.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a feature library construction method based on point cloud and image matching, which can acquire real-scene high-precision three-dimensional point cloud data in real time by using a laser scanner and a mapping relation between the point cloud data and an image. By establishing a mapping relation between point cloud data and an image, obtaining image point coordinates corresponding to the point cloud data and matching the image point coordinates corresponding to the feature points, realizing the unification of the feature points, the image coordinates and the object space three-dimensional coordinates, avoiding the need of recovering the object space three-dimensional coordinates in an image matching mode, greatly improving the precision of the feature points corresponding to the object space three-dimensional coordinates, improving the quality of image feature library construction and providing guarantee for visual positioning.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
Fig. 1 is a flowchart of an image feature library construction method based on point cloud and image matching according to an embodiment of the present invention;
FIG. 2 is a multi-scenario diagram of data acquisition provided by an embodiment of the present invention;
fig. 3 is a flowchart of image feature extraction according to an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the exemplary embodiments of the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well unless the context clearly indicates otherwise, and it should be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
The embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
Example one
As shown in fig. 1, this embodiment provides a method for constructing an image feature library based on point cloud and image matching. The method is described as applied to a server by way of example; it can also be applied to a terminal, or to a system comprising a terminal and a server and implemented through interaction between the two. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network servers, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal may be, but is not limited to, a smartphone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, or a smart watch. The terminal and the server may be connected directly or indirectly through wired or wireless communication, which is not limited in this application. In this embodiment, the method includes the steps of:
acquiring an image, POS data and point cloud data;
performing feature extraction on the acquired image in combination with the POS data to obtain feature point descriptors and the image point coordinates of the feature points;
performing point cloud mapping on the acquired point cloud data to obtain the image point coordinates of the point cloud data;
performing coordinate matching calculation between the image point coordinates of the feature points and the image point coordinates of the point cloud data to obtain the mapping relationship among the feature points, the image point coordinates of the feature points, and the point cloud coordinates;
and constructing an image feature library according to this mapping relationship.
The flowchart for constructing the image feature library in this embodiment is shown in fig. 1, and includes five steps of data acquisition, image feature extraction, point cloud mapping, coordinate matching, and image feature library construction. Each step is described in detail below.
S1, data acquisition:
the invention uses a mobile acquisition vehicle integrating a camera, an IMU and a laser scanner to acquire data, wherein the acquired data comprises images, POS data and point cloud data. The image may be a panoramic image or a non-panoramic image, depending on the hardware implementation.
As can be appreciated, the POS data includes GPS data and IMU data.
S2, image feature extraction:
Feature extraction is performed on the image acquired in S1 to obtain the descriptors corresponding to the feature points of the image feature library and the image point coordinates of the feature points, which are denoted (x1, y1) to distinguish them from the image point coordinates in step S3.
The feature points in the present application are the set of all feature points extracted from the image, i.e., the plurality of feature points obtained by feature extraction.
Feature extraction is performed on the acquired image in combination with the POS data, using an HOG feature extraction algorithm to obtain the feature point descriptors and the image point coordinates of the feature points, specifically:
converting the acquired image to grayscale, and normalizing the color space of the grayscale image using Gamma correction;
calculating the gradient of each pixel of the normalized image and dividing the image into a number of cells;
accumulating a gradient histogram for each cell to form the feature descriptor of that cell;
grouping several cells into a block, and concatenating the descriptors of all cells in the block to form the HOG feature point descriptor of the block;
and obtaining the image point coordinates of the feature points by combining the POS data with the correspondence between the feature point descriptors and the feature point coordinates.
It should be understood that the HOG algorithm is only one feature extraction algorithm capable of implementing this step; the step is not limited to any particular algorithm, and one or more algorithms may be used. To ensure the richness of the image feature library data and improve the matching rate of visual positioning, several different feature extraction algorithms may be adopted; when multiple algorithms are used, each set of extraction results must be stored in the image feature library.
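As a concrete illustration of the HOG extraction steps above, the following Python sketch computes per-cell gradient histograms and block-level descriptors with numpy alone, treating each block centre as the image point coordinate of a feature point. The cell size, block size, bin count, and Gamma value are illustrative assumptions, not values from the patent; a production system would more likely use an optimized library implementation.

```python
import numpy as np

def hog_descriptors(gray, cell=8, block=2, nbins=9):
    """Per-cell gradient histograms and block-level HOG descriptors (S2 sketch).
    Returns a list of ((x, y) block-centre image point coordinate, descriptor)."""
    gray = np.power(gray / gray.max(), 0.5)        # Gamma correction (gamma value assumed)
    gy, gx = np.gradient(gray)                     # per-pixel gradients
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180     # unsigned gradient orientation
    ch, cw = gray.shape[0] // cell, gray.shape[1] // cell
    hist = np.zeros((ch, cw, nbins))
    for i in range(ch):                            # orientation histogram per cell
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            idx = (a / (180 / nbins)).astype(int) % nbins
            for b in range(nbins):
                hist[i, j, b] = m[idx == b].sum()
    feats = []
    for i in range(ch - block + 1):                # sliding blocks of cells
        for j in range(cw - block + 1):
            d = hist[i:i+block, j:j+block].ravel() # concatenate cell descriptors
            d = d / (np.linalg.norm(d) + 1e-6)     # block normalisation
            x, y = (j + block / 2) * cell, (i + block / 2) * cell
            feats.append(((x, y), d))
    return feats
```

Each descriptor has block × block × nbins elements (36 here), matching the "concatenate the cell descriptors of a block" step.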
S3, point cloud mapping:
Point cloud mapping is performed on the point cloud data acquired in step S1 to obtain the image point coordinates corresponding to the point cloud data, which are denoted (x2, y2) to distinguish them from the image point coordinates in step S2. The image point coordinates (x2, y2) in this step are the 2D image coordinates of the intersection points at which the points of the point cloud data coincide with the image after the coordinate transformation. The image referred to in this step is the image acquired by the mobile collection vehicle in S1.
The point cloud data mapping establishes the relative relationship between the point cloud data and the image: the point cloud data are transformed into the camera coordinate system by a coordinate transformation, and the corresponding image point coordinates (x2, y2) of the point cloud data in the camera coordinate system are obtained, as shown in equation (1):

[Xc, Yc, Zc]^T = R · [X, Y, Z]^T + T,  x2 = f · Xc / Zc,  y2 = f · Yc / Zc    (1)

In equation (1), (X, Y, Z) are the three-dimensional coordinates of a point in the point cloud data; R is the rotation matrix between the laser scanner and the camera, an intrinsic parameter of the laser scanner and camera rig; f is the camera focal length; T is the translation parameter of the camera coordinate system; (x2, y2) are the computed image point coordinates corresponding to the point cloud data; and (Xc, Yc, Zc) are the three-dimensional coordinates of the point in the camera coordinate system.
Through this calculation, the image point coordinates (x2, y2) of each point in the point cloud data are obtained.
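The camera-frame transform and perspective projection described in this step can be sketched as follows. This is a minimal illustration assuming an ideal pinhole model with the principal point at the image origin; a real system would additionally apply lens distortion and principal-point corrections, and the function name is hypothetical.

```python
import numpy as np

def project_point_cloud(points, R, T, f):
    """Transform point-cloud 3D coordinates into the camera coordinate system
    and perspective-project them to image point coordinates (x2, y2)."""
    cam = points @ R.T + T            # (N, 3) object-space points -> camera frame
    z = cam[:, 2]
    valid = z > 0                     # keep only points in front of the camera
    x2 = f * cam[valid, 0] / z[valid]
    y2 = f * cam[valid, 1] / z[valid]
    return np.stack([x2, y2], axis=1), valid
```

For example, with R the identity, T zero, and f = 1, the point (1, 2, 4) projects to (0.25, 0.5), while points behind the camera are discarded.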
S4, coordinate matching:
corresponding image point coordinates of the characteristic points obtained in S2
Figure 337019DEST_PATH_IMAGE001
And S3, obtaining the coordinates of the image point corresponding to the point cloud
Figure 578645DEST_PATH_IMAGE002
And performing matching calculation, and establishing a corresponding relation of the characteristic points, the descriptors and the point cloud coordinates through matching S2, namely obtaining a group of relation data of the characteristic point descriptors, the image point coordinates and the point cloud coordinates.
Specifically, coordinate matching calculation is performed according to the image point coordinates of the feature points and the image point coordinates of the point cloud data to obtain the mapping relationship among the feature points, the image point coordinates of the feature points, and the point cloud coordinates, and specifically:
comparing the consistency between the image point coordinates of the feature points and the image point coordinates of the point cloud data by difference calculation;
if the point cloud data contain several points whose image point coordinate differences from a feature point are non-zero but within a threshold range, performing an adjustment calculation on these points to obtain unique point cloud data and determining the relationship between the unique point cloud data and the feature point;
and, based on the unique relationship between the point cloud data and the feature points, combining the coordinates of the point cloud data with the image point coordinates of the feature points to obtain the mapping relationship among the feature point descriptors, the image point coordinates of the feature points, and the point cloud coordinates.
In addition, if the difference between the image point coordinates of the point cloud data and the image point coordinates of a feature point is zero, the match is directly successful.
It should be noted that the feature points in this embodiment are the set of all feature points extracted from the image, so during the difference calculation each feature point is compared against a set of point cloud data, taking the difference between the feature point and each point of that set one by one.
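The matching logic of S4 can be sketched as below: each feature point is compared against the projected cloud coordinates by difference calculation, a zero difference is a direct match, and several non-zero hits within the threshold are reduced to unique point cloud data. Averaging is used here as an assumed stand-in for the patent's unspecified adjustment calculation, and the function and parameter names are illustrative.

```python
import numpy as np

def match_features_to_cloud(features, cloud_xy, cloud_xyz, threshold=1.0):
    """Match feature image coordinates (x1, y1) to projected cloud coordinates
    (x2, y2); return (descriptor id, image coords, 3D point cloud coords)."""
    relations = []
    for (x1, y1), desc_id in features:
        d = np.hypot(cloud_xy[:, 0] - x1, cloud_xy[:, 1] - y1)
        hits = np.where(d <= threshold)[0]
        if hits.size == 0:
            continue                            # no cloud support for this feature
        exact = hits[d[hits] == 0]
        if exact.size:
            xyz = cloud_xyz[exact[0]]           # zero difference: direct match
        else:
            xyz = cloud_xyz[hits].mean(axis=0)  # adjust several near hits to one point
        relations.append((desc_id, (x1, y1), tuple(xyz)))
    return relations
```

Features with no cloud point inside the threshold are dropped rather than assigned an unreliable 3D coordinate.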
S5, constructing an image feature library:
and (5) constructing an image feature library according to the relation data obtained in the step (S4).
The image feature library comprises feature point descriptors, image coordinates and corresponding point cloud data.
Constructing the image feature library according to the mapping relationship among the feature points, the image point coordinates of the feature points, and the point cloud coordinates comprises the following steps:
creating a feature point descriptor, an image point coordinate and a point cloud coordinate field for the image feature library;
and correspondingly storing the feature point descriptors, the image point coordinates and the point cloud coordinates into corresponding fields.
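The two storage steps above can be sketched with a SQLite table; the schema and field names are hypothetical, chosen only to show one field per item (descriptor, image point coordinates, point cloud coordinates) as the patent describes.

```python
import sqlite3
import numpy as np

def build_feature_library(db_path, relations):
    """Create fields for feature point descriptors, image point coordinates,
    and point cloud coordinates, then store each relation in them."""
    con = sqlite3.connect(db_path)
    con.execute("""
        CREATE TABLE IF NOT EXISTS feature_library (
            id INTEGER PRIMARY KEY,
            descriptor BLOB,                    -- feature point descriptor field
            img_x REAL, img_y REAL,             -- image point coordinate fields
            pc_x REAL, pc_y REAL, pc_z REAL     -- point cloud coordinate fields
        )""")
    for desc, (x, y), (px, py, pz) in relations:
        con.execute(
            "INSERT INTO feature_library (descriptor, img_x, img_y, pc_x, pc_y, pc_z) "
            "VALUES (?, ?, ?, ?, ?, ?)",
            (np.asarray(desc, np.float32).tobytes(), x, y, px, py, pz))
    con.commit()
    return con
```

Storing the descriptor as a float32 BLOB keeps the table compact while allowing the vector to be recovered with `np.frombuffer` at query time.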
In step S1, in order to ensure the availability and stability of the image feature library in multiple scenes, when the mobile collection vehicle is used to obtain data, multi-scene data acquisition should be performed, including but not limited to sunny days, cloudy days, rainy and snowy days, daytime, and nighttime, as shown in fig. 2.
For step S2, if multiple image feature extraction algorithms are adopted, each algorithm must produce the descriptors corresponding to the feature points and the image point coordinates (x1, y1), as shown in fig. 3.
Example two
The embodiment provides an image feature library construction system based on point cloud and image matching, which includes:
a data acquisition module configured to acquire an image, POS data, and point cloud data;
the image feature extraction module is configured to perform feature extraction on the acquired image by combining POS data to obtain feature points, feature point descriptors and image point coordinates of the feature points;
the point cloud mapping module is configured to perform point cloud mapping on the acquired point cloud data to obtain image point coordinates of the point cloud data;
the coordinate matching module is configured to perform coordinate matching calculation according to the image point coordinates of the feature points and the image point coordinates of the point cloud data to obtain the mapping relation among the feature points, the image point coordinates of the feature points and the point cloud coordinates;
and the image feature library construction module is configured to construct an image feature library according to the feature points, the image point coordinates of the feature points and the mapping relation of the point cloud coordinates.
Each module here corresponds to a step of the first embodiment and shares its implementation examples and application scenarios, without being limited to the disclosure of the first embodiment. It should be noted that the modules described above, as parts of a system, may be implemented in a computer system such as a set of computer-executable instructions.
In the foregoing embodiments, the description of each embodiment has an emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions in other embodiments.
The proposed system can be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the above-described modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules may be combined or integrated into another system, or some features may be omitted, or not executed.
EXAMPLE III
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the method for constructing an image feature library based on point cloud and image matching as described in the first embodiment.
Example four
The embodiment provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the steps of the image feature library construction method based on point cloud and image matching as described in the first embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer readable storage medium and executed by a computer to implement the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Although the embodiments of the present invention have been described above with reference to the accompanying drawings, the description is not intended to limit the scope of the invention; it should be understood by those skilled in the art that various modifications and variations can still be made to the technical solutions of the present invention without inventive effort.

Claims (10)

1. An image feature library construction method based on point cloud and image matching is characterized by comprising the following steps:
acquiring an image, POS data and point cloud data;
performing feature extraction on the acquired image by combining POS data to obtain a feature point descriptor and an image point coordinate of a feature point;
carrying out point cloud mapping on the acquired point cloud data to obtain an image point coordinate of the point cloud data;
performing coordinate matching calculation according to the image point coordinates of the feature points and the image point coordinates of the point cloud data, to obtain a mapping relationship among the feature points, the image point coordinates of the feature points, and the point cloud coordinates;
and constructing an image feature library according to the mapping relationship among the feature points, the image point coordinates of the feature points, and the point cloud coordinates.
2. The method for constructing the image feature library based on point cloud and image matching according to claim 1, wherein feature extraction is performed on the acquired image in combination with POS data, and an HOG feature extraction algorithm is adopted to obtain feature point descriptors and image point coordinates of feature points, specifically:
converting the acquired image to grayscale, and normalizing the color space of the grayscale image by a Gamma correction method;
computing the gradient of each pixel of the normalized image, and dividing the image into a plurality of cells;
accumulating a gradient histogram for each cell to form the feature descriptor of that cell;
combining a plurality of cells into a block, and concatenating the descriptors of all cells within the block to form the HOG feature point descriptor of the block;
and obtaining the image point coordinates of the feature points by combining the POS data with the correspondence between the feature point descriptors and the image point coordinates of the feature points.
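The HOG extraction steps recited in claim 2 can be sketched as follows. This is a minimal illustration only: the cell size, bin count, block size, gamma value and block normalization are illustrative assumptions, none of which are specified by the claim.

```python
import numpy as np

def hog_descriptors(image, cell=8, bins=9, block=2, gamma=0.5):
    """Sketch of the HOG pipeline of claim 2 (parameter values are illustrative)."""
    # 1. Grayscale conversion (channel mean stands in for a weighted conversion)
    gray = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
    # 2. Gamma correction for color-space normalization
    gray = np.power(gray / 255.0, gamma)
    # 3. Per-pixel gradients: magnitude and unsigned orientation in [0, 180)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    # 4. Orientation histogram per cell
    ch, cw = gray.shape[0] // cell, gray.shape[1] // cell
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            idx = (a / (180.0 / bins)).astype(int) % bins
            for b in range(bins):
                hist[i, j, b] = m[idx == b].sum()
    # 5. Concatenate the cell histograms of each block into one block descriptor
    descs = []
    for i in range(ch - block + 1):
        for j in range(cw - block + 1):
            v = hist[i:i + block, j:j + block].ravel()
            descs.append(v / (np.linalg.norm(v) + 1e-6))  # block normalization
    return np.array(descs)
```

With a 32×32 input, 8-pixel cells and 2×2-cell blocks, this yields 9 block descriptors of 36 dimensions each.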
3. The method as claimed in claim 1, wherein the feature points are the set of all feature points obtained by performing feature extraction on the image.
4. The method as claimed in claim 1, wherein the image point coordinates of the point cloud data are the image coordinates of the points at which each point in the point cloud data, after transformation to 2D coordinates, coincides with the image.
5. The method for constructing the image feature library based on point cloud and image matching as claimed in claim 1, wherein the point cloud mapping is performed on the acquired point cloud data to obtain image point coordinates of the point cloud data, specifically:
transforming the point cloud data into the camera coordinate system through a coordinate transformation;
obtaining the corresponding image point coordinates of the point cloud data under the camera coordinate system, namely the image point coordinates of the point cloud data;
and establishing the association between the point cloud data and the image according to the image point coordinates of the point cloud data.
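The mapping of claim 5 amounts to a camera projection. The sketch below assumes a pinhole model in which the rotation `R`, translation `t` and intrinsic matrix `K` come from the POS/calibration data; the claim itself does not fix the camera model.

```python
import numpy as np

def project_point_cloud(points_world, R, t, K):
    """Sketch of claim 5: transform 3D points into the camera coordinate
    system, then project them to image point coordinates."""
    # World -> camera coordinate system
    pts_cam = (R @ points_world.T).T + t
    # Keep only points in front of the camera (positive depth)
    in_front = pts_cam[:, 2] > 0
    pts_cam = pts_cam[in_front]
    # Perspective projection with intrinsics K: (u, v) per point
    uv = (K @ (pts_cam / pts_cam[:, 2:3]).T).T[:, :2]
    return uv, in_front
```

A point at (0, 0, 10) in front of an identity-pose camera with focal length 100 and principal point (50, 50) projects to the principal point (50, 50).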
6. The method for constructing the image feature library based on point cloud and image matching as claimed in claim 1, wherein the coordinate matching calculation is performed according to the image point coordinates of the feature points and the image point coordinates of the point cloud data to obtain the mapping relationship among the feature points, the image point coordinates of the feature points and the point cloud coordinates, and specifically comprises:
comparing, by difference calculation, the consistency between the image point coordinates of the feature points and the image point coordinates of the point cloud data;
if the point cloud data contain a plurality of points whose coordinate differences from the image point of a feature point are non-zero but within a threshold range, performing an adjustment calculation on the plurality of points to obtain unique point cloud data, and determining the relationship between the unique point cloud data and the feature point;
and obtaining the mapping relationship among the feature point descriptors, the image point coordinates of the feature points and the point cloud coordinates by combining the coordinates of the point cloud data with the image point coordinates of the feature points, based on the unique relationship between the point cloud data and the feature points.
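The matching step of claim 6 can be sketched as follows. The pixel threshold and the use of a simple mean as the "adjustment calculation" are assumptions for illustration; the claim does not prescribe a particular adjustment method.

```python
import numpy as np

def match_features_to_cloud(feat_uv, cloud_uv, cloud_xyz, threshold=2.0):
    """Sketch of claim 6: for each feature image point, collect the projected
    point cloud points whose image-coordinate difference is within the
    threshold, and reduce them to a unique 3D coordinate by averaging."""
    mapping = {}
    for i, uv in enumerate(feat_uv):
        # Difference calculation against every projected cloud point
        d = np.linalg.norm(cloud_uv - uv, axis=1)
        near = np.where(d <= threshold)[0]
        if near.size == 0:
            continue  # no point cloud support for this feature point
        # Adjustment calculation: unique point cloud coordinate per feature
        mapping[i] = cloud_xyz[near].mean(axis=0)
    return mapping
```

Two candidate cloud points within the threshold are averaged into one unique coordinate; a far-away point is ignored.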
7. The method for constructing an image feature library based on point cloud and image matching according to claim 1, wherein constructing the image feature library according to the mapping relationship among the feature points, the image point coordinates of the feature points, and the point cloud coordinates comprises:
creating feature point descriptor, image point coordinate, and point cloud coordinate fields in the image feature library;
and storing the feature point descriptors, the image point coordinates, and the point cloud coordinates into the corresponding fields.
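The library of claim 7 is essentially a table with descriptor, image point, and point cloud coordinate fields. The SQLite schema and JSON descriptor encoding below are illustrative storage choices, not part of the claim.

```python
import json
import sqlite3

def build_feature_library(db_path, records):
    """Sketch of claim 7: create the three field groups and store each
    (descriptor, image point, point cloud coordinate) record."""
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS feature_library (
               id INTEGER PRIMARY KEY,
               descriptor TEXT NOT NULL,          -- feature point descriptor (JSON)
               img_u REAL, img_v REAL,            -- image point coordinates
               pc_x REAL, pc_y REAL, pc_z REAL    -- point cloud coordinates
           )"""
    )
    con.executemany(
        "INSERT INTO feature_library (descriptor, img_u, img_v, pc_x, pc_y, pc_z) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        [(json.dumps(d), u, v, x, y, z) for d, (u, v), (x, y, z) in records],
    )
    con.commit()
    return con
```

Each record pairs one descriptor with its image point and its matched point cloud coordinate, which is exactly the mapping produced in the preceding matching step.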
8. An image feature library construction system based on point cloud and image matching is characterized by comprising:
a data acquisition module configured to acquire an image, POS data, and point cloud data;
the image feature extraction module is configured to perform feature extraction on the acquired image by combining POS data to obtain a feature point descriptor and an image point coordinate of a feature point;
the point cloud mapping module is configured to perform point cloud mapping on the acquired point cloud data to obtain image point coordinates of the point cloud data;
the coordinate matching module is configured to perform coordinate matching calculation according to the image point coordinates of the feature points and the image point coordinates of the point cloud data, to obtain a mapping relationship among the feature points, the image point coordinates of the feature points, and the point cloud coordinates;
and the image feature library construction module is configured to construct an image feature library according to the mapping relationship among the feature points, the image point coordinates of the feature points, and the point cloud coordinates.
9. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method for constructing an image feature library based on point cloud and image matching according to any one of claims 1 to 7.
10. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method for constructing the image feature library based on point cloud and image matching according to any one of claims 1 to 7 when executing the program.
CN202211237318.2A 2022-10-11 2022-10-11 Image feature library construction method and system based on point cloud and image matching Active CN115329111B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211237318.2A CN115329111B (en) 2022-10-11 2022-10-11 Image feature library construction method and system based on point cloud and image matching


Publications (2)

Publication Number Publication Date
CN115329111A true CN115329111A (en) 2022-11-11
CN115329111B CN115329111B (en) 2023-02-03

Family

ID=83914822

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211237318.2A Active CN115329111B (en) 2022-10-11 2022-10-11 Image feature library construction method and system based on point cloud and image matching

Country Status (1)

Country Link
CN (1) CN115329111B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093459A (en) * 2013-01-06 2013-05-08 中国人民解放军信息工程大学 Assisting image matching method by means of airborne lidar point cloud data
CN106780729A (en) * 2016-11-10 2017-05-31 中国人民解放军理工大学 Three-dimensional reconstruction method for batch processing of unmanned aerial vehicle sequential images
CN109115186A (en) * 2018-09-03 2019-01-01 山东科技大学 360° measurable panoramic image generation method for a vehicle-mounted mobile measuring system
CN110163903A (en) * 2019-05-27 2019-08-23 百度在线网络技术(北京)有限公司 Three-dimensional image acquisition and image positioning method, device, equipment and storage medium
CN113362383A (en) * 2020-03-02 2021-09-07 华为技术有限公司 Point cloud and image fusion method and device
CN114677435A (en) * 2021-07-20 2022-06-28 武汉海云空间信息技术有限公司 Point cloud panoramic fusion element extraction method and system
US20220215565A1 (en) * 2021-03-25 2022-07-07 Beijing Baidu Netcom Science Technology Co., Ltd. Method for generating depth map, electronic device and storage medium
CN114743021A (en) * 2022-04-15 2022-07-12 国网江苏省电力有限公司泰州供电分公司 Fusion method and system of power transmission line image and point cloud data


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Zhanping et al., "Improved SIFT Matching Method for Oblique UAV Images", Geospatial Information (《地理空间信息》) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597168A (en) * 2023-07-18 2023-08-15 齐鲁空天信息研究院 Matching method, device, equipment and medium of vehicle-mounted laser point cloud and panoramic image
CN116597168B (en) * 2023-07-18 2023-11-17 齐鲁空天信息研究院 Matching method, device, equipment and medium of vehicle-mounted laser point cloud and panoramic image
CN116630598A (en) * 2023-07-19 2023-08-22 齐鲁空天信息研究院 Visual positioning method and device under large scene, electronic equipment and storage medium
CN116630598B (en) * 2023-07-19 2023-09-29 齐鲁空天信息研究院 Visual positioning method and device under large scene, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115329111B (en) 2023-02-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant