CN111340889A - Method for automatically acquiring matched image block and point cloud ball based on vehicle-mounted laser scanning - Google Patents

Method for automatically acquiring matched image block and point cloud ball based on vehicle-mounted laser scanning

Info

Publication number
CN111340889A
CN111340889A (application number CN202010102585.3A; granted publication CN111340889B)
Authority
CN
China
Prior art keywords
point cloud
dimensional key
key points
camera
local
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010102585.3A
Other languages
Chinese (zh)
Other versions
CN111340889B (en)
Inventor
王程
刘伟权
赖柏锜
杨文韬
卞学胜
李渊
李军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Original Assignee
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University
Priority to CN202010102585.3A
Publication of CN111340889A
Application granted
Publication of CN111340889B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Abstract

The invention discloses a method for automatically acquiring matched image blocks and point cloud balls based on vehicle-mounted laser scanning, which comprises the following steps: acquiring original point cloud data, removing ground point cloud in the original point cloud data to obtain initial point cloud, and extracting three-dimensional key points of the initial point cloud; acquiring a camera image, and extracting two-dimensional key points of the camera image; acquiring camera parameters, and intercepting local point clouds corresponding to camera images in the initial point clouds according to the camera parameters; projecting the three-dimensional key points of the local point cloud onto a camera image according to camera parameters, and voting the three-dimensional key points of the local point cloud according to the two-dimensional key points of the camera image so as to establish a matching relation between the two-dimensional key points and the three-dimensional key points; acquiring a local point cloud ball and an image block corresponding to the local point cloud ball according to the matching relation between the two-dimensional key point and the three-dimensional key point; the method can effectively and quickly acquire a large number of matched image blocks and point cloud balls, and has high generalization and robustness.

Description

Method for automatically acquiring matched image block and point cloud ball based on vehicle-mounted laser scanning
Technical Field
The invention relates to the technical field of three-dimensional data processing, in particular to a method for automatically acquiring matched image blocks and point cloud balls based on vehicle-mounted laser scanning.
Background
Matching a two-dimensional image with three-dimensional point cloud data establishes the spatial correspondence between the two-dimensional image and three-dimensional space. This plays an irreplaceable role in the construction of smart cities and is particularly applicable to city-scale outdoor augmented reality and autonomous driving.
In the related art, two methods are mostly used to construct the spatial relationship between a two-dimensional image and a three-dimensional space: one performs three-dimensional reconstruction of a real scene with an SfM (Structure from Motion) method to obtain an image-derived three-dimensional point cloud, retaining the key points from reconstruction for retrieval-based visual positioning; the other uses a deep learning method to regress pose parameters from a single picture. These methods are limited to specific scenes, do not transfer to new scenes, and have low robustness and generalization.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the art described above. Therefore, an object of the present invention is to provide a method for automatically acquiring matching image blocks and point cloud spheres based on vehicle-mounted laser scanning, which can effectively and quickly acquire a large number of matching image blocks and point cloud spheres and has high generalization and robustness.
In order to achieve the above object, an embodiment of a first aspect of the present invention provides a method for automatically acquiring a matching image block and a point cloud based on vehicle-mounted laser scanning, including the following steps: acquiring original point cloud data, removing ground point cloud in the original point cloud data to obtain initial point cloud, and extracting three-dimensional key points of the initial point cloud; acquiring a camera image, and extracting two-dimensional key points of the camera image; acquiring camera parameters, and intercepting local point clouds corresponding to the camera images in the initial point clouds according to the camera parameters; projecting the three-dimensional key points of the local point cloud onto the camera image according to the camera parameters, and voting the three-dimensional key points of the local point cloud according to the two-dimensional key points of the camera image so as to establish a matching relationship between the two-dimensional key points and the three-dimensional key points; and acquiring a local point cloud ball and an image block corresponding to the local point cloud ball according to the matching relation between the two-dimensional key point and the three-dimensional key point.
According to the method for automatically acquiring the matched image block and the point cloud ball based on the vehicle-mounted laser scanning, the method comprises the following steps of firstly, acquiring original point cloud data, removing ground point cloud in the original point cloud data to obtain initial point cloud, and extracting three-dimensional key points of the initial point cloud; then, acquiring a camera image, and extracting two-dimensional key points of the camera image; then, camera parameters are obtained, and local point clouds corresponding to camera images in the initial point clouds are intercepted according to the camera parameters; then, projecting the three-dimensional key points of the local point cloud onto a camera image according to camera parameters, and voting the three-dimensional key points of the local point cloud according to the two-dimensional key points of the camera image so as to establish a matching relation between the two-dimensional key points and the three-dimensional key points; then, acquiring a local point cloud ball and an image block corresponding to the local point cloud ball according to the matching relation between the two-dimensional key point and the three-dimensional key point; therefore, a large number of matched image blocks and point cloud balls can be effectively and quickly acquired, and the method has high generalization and robustness.
In addition, the method for automatically acquiring the matching image block and the point cloud ball based on the vehicle-mounted laser scanning, which is provided by the embodiment of the invention, can further have the following additional technical characteristics:
optionally, removing the ground point cloud in the original point cloud data to obtain an initial point cloud, including: segmenting the original point cloud data according to a preset width to obtain a plurality of local point cloud blocks, and segmenting the local point cloud blocks according to an octree index structure to generate point cloud voxels which are continuous in space; and removing the ground point cloud in the point cloud voxel according to a voxel upward growth method to obtain an initial point cloud.
Optionally, the three-dimensional key points of the initial point cloud are extracted by an ISS (Intrinsic Shape Signatures) key point detector.
Optionally, the two-dimensional key points of the camera image are extracted by a SIFT (Scale-Invariant Feature Transform) key point detector.
Optionally, the camera parameters include camera external parameters and camera positioning information, where intercepting, according to the camera parameters, a local point cloud corresponding to the camera image in the initial point cloud includes: calculating the shooting direction of a camera according to the camera external parameters, and calculating the nearest neighbor point of the camera and the initial point cloud according to the shooting direction and the camera positioning information; and determining a central point of the local point cloud according to the nearest neighbor point, a preset propelling distance and the shooting direction, and intercepting the initial point cloud according to the central point of the local point cloud and a preset radius to obtain the local point cloud.
Optionally, projecting the three-dimensional key points of the local point cloud onto the camera image according to the camera parameters comprises: and calculating a projection matrix of the local point cloud to the camera image according to the camera parameters, and projecting the three-dimensional key points of the local point cloud to the camera image according to the projection matrix.
Optionally, voting on the three-dimensional key points of the local point cloud according to the two-dimensional key points of the camera image to establish a matching relationship between the two-dimensional key points and the three-dimensional key points, including: and uniformly voting the two-dimensional key points of the camera image on all the three-dimensional key points in the local point cloud, reserving the three-dimensional key points of each two-dimensional key point in a preset range, and taking the three-dimensional key points in the preset range as matching points of the two-dimensional key points so as to establish the matching relationship between the two-dimensional key points and the three-dimensional key points.
Optionally, obtaining a local point cloud ball and an image block corresponding to the local point cloud ball according to the matching relationship between the two-dimensional key point and the three-dimensional key point includes: acquiring a local point cloud ball corresponding to the three-dimensional key point according to the three-dimensional key point in the preset range, and back-projecting the local point cloud ball onto the camera image according to a projection matrix to obtain a local point cloud ball projection image; and acquiring the maximum rectangular enclosure of the projected image of the local point cloud ball, and acquiring an image block corresponding to the local point cloud ball according to the maximum rectangular enclosure.
Drawings
Fig. 1 is a schematic flow chart of a method for automatically acquiring a matching image block and a point cloud based on vehicle-mounted laser scanning according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of raw point cloud data according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an initial point cloud according to an embodiment of the invention;
FIG. 4 is a schematic diagram of three-dimensional key points of initial point cloud extraction according to an embodiment of the invention;
FIG. 5 is a schematic diagram of a local point cloud capture method according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a local point cloud according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a local point cloud three-dimensional key point according to an embodiment of the invention;
FIG. 8 is a schematic diagram illustrating the effect of projecting three-dimensional key points into a camera image according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating a maximum rectangular bounding selection effect according to an embodiment of the present invention;
fig. 10 is a schematic diagram illustrating matching effects between an image block and a local point cloud sphere according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
In the related technology, in the process of constructing the spatial relationship between the two-dimensional image and the three-dimensional space, the method is only limited to a specific scene, is not suitable for a new scene, and has low robustness and generalization; according to the method for automatically acquiring the matched image block and the point cloud ball based on the vehicle-mounted laser scanning, the method comprises the following steps of firstly, acquiring original point cloud data, removing ground point cloud in the original point cloud data to obtain initial point cloud, and extracting three-dimensional key points of the initial point cloud; then, acquiring a camera image, and extracting two-dimensional key points of the camera image; then, camera parameters are obtained, and local point clouds corresponding to camera images in the initial point clouds are intercepted according to the camera parameters; then, projecting the three-dimensional key points of the local point cloud onto a camera image according to camera parameters, and voting the three-dimensional key points of the local point cloud according to the two-dimensional key points of the camera image so as to establish a matching relation between the two-dimensional key points and the three-dimensional key points; then, acquiring a local point cloud ball and an image block corresponding to the local point cloud ball according to the matching relation between the two-dimensional key point and the three-dimensional key point; therefore, a large number of matched image blocks and point cloud balls can be effectively and quickly acquired, and the method has high generalization and robustness.
In order to better understand the above technical solutions, exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
Fig. 1 is a schematic flow diagram of a method for automatically acquiring a matching image block and a point cloud ball based on vehicle-mounted laser scanning according to an embodiment of the present invention, and as shown in fig. 1, the method for automatically acquiring a matching image block and a point cloud ball based on vehicle-mounted laser scanning includes the following steps:
s101, acquiring original point cloud data, removing ground point cloud in the original point cloud data to obtain initial point cloud, and extracting three-dimensional key points of the initial point cloud.
That is, the raw point cloud data obtained by the vehicle-mounted laser scanning system is acquired, and the ground point cloud in it is removed to obtain the initial point cloud; key point extraction is then performed on the initial point cloud to obtain its three-dimensional key points.
There are various ways to remove the ground point cloud in the original point cloud data to obtain the initial point cloud.
As an example, firstly, segmenting original point cloud data according to a preset width to obtain a plurality of local point cloud blocks, and segmenting the local point cloud blocks according to an octree index structure to generate spatially continuous point cloud voxels; and then, removing the ground point cloud in the point cloud voxel according to a voxel upward growth method to obtain an initial point cloud.
As another example, first, the whole original point cloud (as shown in FIG. 2; FIG. 2 is a schematic diagram of the original point cloud) is vertically sliced in the XY plane with a certain width w_b into a series of local point cloud blocks B_i, i = 1, 2, ..., N_B, where N_B is the total number of generated local point cloud blocks; preferably, the division width of the point cloud blocks is 1.2 m. Then, each divided local point cloud block B_i is partitioned with an octree index structure, according to a certain width w_v, into a series of spatially continuous point cloud voxels V_j, j = 1, 2, ..., N_V, where N_V is the total number of generated point cloud voxels; preferably, the point cloud voxel width is 0.05 m. Then, based on the divided point cloud voxels, a voxel upward-growth method is adopted to filter the ground point cloud out of each divided local point cloud block B_i. Finally, all the filtered blocks B_i are combined to obtain the initial point cloud (as shown in fig. 3; fig. 3 is a schematic diagram of the initial point cloud).
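The slicing, voxelization, and upward-growth steps above can be sketched in Python/numpy. This is a simplified, illustrative variant rather than the patent's exact algorithm: the function name, the integer column hashing, and the `growth` parameter (number of voxel layers kept as "ground" above the lowest occupied voxel of each column) are assumptions made for the sketch.

```python
import numpy as np

def remove_ground(points, block_w=1.2, voxel_w=0.05, growth=3):
    """Simplified sketch of the slicing + voxel upward-growth idea:
    slice the cloud into vertical blocks along x, voxelize each block,
    and drop points lying within `growth` voxel layers above the lowest
    occupied voxel of their (x, y) voxel column (assumed ground)."""
    keep = np.ones(len(points), dtype=bool)
    block_id = np.floor(points[:, 0] / block_w).astype(int)
    for b in np.unique(block_id):
        idx = np.where(block_id == b)[0]
        blk = points[idx]
        col = np.floor(blk[:, :2] / voxel_w).astype(int)   # (x, y) voxel column
        layer = np.floor(blk[:, 2] / voxel_w).astype(int)  # vertical voxel layer
        key = col[:, 0] * 1_000_003 + col[:, 1]            # simple column id
        for k in np.unique(key):
            m = key == k
            ground_top = layer[m].min() + growth           # grow upward from lowest voxel
            keep[idx[m]] = layer[m] > ground_top
    return points[keep]
```

Applied to a flat ground plane with a vertical pole standing on it, the sketch keeps only the pole points.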
There are various ways to extract the three-dimensional key points of the initial point cloud.
As an example, the three-dimensional key points of the initial point cloud are extracted with an ISS key point detector.
As another example, all three-dimensional ISS key points of the initial point cloud M' are extracted with the ISS key point detector, denoted U = {U_k}, k = 1, 2, ..., N_U, where N_U is the number of extracted ISS key points. The ISS key points extracted from M' are shown in fig. 4.
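The ISS extraction step rests on an eigenvalue-ratio saliency test, which can be illustrated with a minimal numpy sketch. Note the simplifications: ISS's non-maximum suppression over the smallest eigenvalue is omitted and replaced by a small `min_l3` floor (an assumed parameter) so that planar regions are rejected, and the thresholds gamma21 = gamma32 = 0.8 are assumed values, not from the patent.

```python
import numpy as np

def iss_keypoints(points, radius=0.5, gamma21=0.8, gamma32=0.8, min_l3=1e-6):
    """Minimal sketch of the ISS saliency test: a point is salient when the
    eigenvalues l1 >= l2 >= l3 of its neighborhood scatter matrix satisfy
    l2/l1 < gamma21 and l3/l2 < gamma32, i.e. the local surface varies in
    all three directions. min_l3 stands in for ISS's non-maximum
    suppression by rejecting (near-)planar neighborhoods."""
    keys = []
    for i, p in enumerate(points):
        nbrs = points[np.linalg.norm(points - p, axis=1) < radius]
        if len(nbrs) < 5:
            continue
        cov = np.cov(nbrs.T)                         # 3x3 neighborhood scatter
        l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)
        if l2 > 0 and l2 / l1 < gamma21 and l3 / l2 < gamma32 and l3 > min_l3:
            keys.append(i)
    return np.array(keys, dtype=int)
```

A flat plane yields no key points under this test, while a fully three-dimensional (anisotropic) blob does.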
S102, acquiring a camera image and extracting two-dimensional key points of the camera image.
That is, the images taken by the camera are acquired, and key point extraction is performed on each camera image to obtain its two-dimensional key points.
There are various ways to extract the two-dimensional key points of a camera image.
As an example, the two-dimensional key points of a camera image are extracted with a SIFT key point detector.
As another example, all two-dimensional key points of all camera images are extracted with the SIFT key point detector, denoted u_i = {u_i,j}, i = 1, 2, ..., N_V, j = 1, 2, ..., N_i, where N_V is the total number of camera photos and N_i is the number of SIFT key points extracted from the i-th photo.
S103, acquiring camera parameters, and intercepting local point clouds corresponding to the camera images in the initial point clouds according to the camera parameters.
The camera parameters may include several kinds of data, such as the camera external parameters and the camera positioning information.
As an example, the shooting direction of the camera is calculated according to the camera external parameters, and the nearest neighbor point between the camera and the initial point cloud is calculated according to the shooting direction and the camera positioning information; a center point of the local point cloud is then determined according to the nearest neighbor point, a preset advancing distance, and the shooting direction, and the initial point cloud is intercepted according to this center point and a preset radius to obtain the local point cloud.
As another example, as shown in FIG. 5, first, the shooting direction of the i-th camera image I_i is determined from the camera parameters of the vehicle-mounted laser scanning system. Then, starting from the shooting position A_i of camera image I_i (i.e., the positioning information of the camera) and moving along the camera's shooting direction, the nearest neighbor point B_i is found in the initial point cloud M'; taking B_i as the reference and advancing a further distance d in the same direction gives a new position point A_i'. Then, with A_i' as the center, a radius R is set to intercept the local point cloud L_i; preferably, the distance d is 5 m and the radius R is 20 m. As shown in fig. 6, fig. 6 is a schematic diagram of the finally intercepted local point cloud.
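The interception geometry above (from A_i, find the nearest cloud point B_i along the shooting direction, advance a distance d to get the center A_i', then crop with radius R) can be sketched as follows. For brevity the nearest neighbor is taken over the whole cloud rather than along the shooting ray, and the function name is illustrative; d = 5 m and R = 20 m follow the preferred values in the text.

```python
import numpy as np

def crop_local_cloud(cloud, cam_pos, shoot_dir, d=5.0, R=20.0):
    """Sketch of the local point cloud interception: find the nearest
    cloud point B_i to the camera, advance a distance d along the (unit)
    shooting direction to get the center A_i', and keep every point
    within radius R of that center."""
    shoot_dir = shoot_dir / np.linalg.norm(shoot_dir)
    # plain nearest neighbor stands in for the search along the shooting ray
    b = cloud[np.argmin(np.linalg.norm(cloud - cam_pos, axis=1))]
    center = b + d * shoot_dir
    return cloud[np.linalg.norm(cloud - center, axis=1) <= R]
```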
And S104, projecting the three-dimensional key points of the local point cloud onto a camera image according to the camera parameters, and voting the three-dimensional key points of the local point cloud according to the two-dimensional key points of the camera image so as to establish a matching relation between the two-dimensional key points and the three-dimensional key points.
There are various ways to project the three-dimensional key points of the local point cloud onto the camera image according to the camera parameters.
As an example, a projection matrix of the local point cloud to the camera image is calculated according to the camera parameters, and three-dimensional key points of the local point cloud are projected onto the camera image according to the projection matrix.
As another example, first, the projection matrix P_i from the current local point cloud scene L_i to the corresponding camera image I_i is calculated according to the internal and external camera parameters of the vehicle-mounted laser scanning system. Then, the ISS three-dimensional key points U_i,k of the local point cloud scene L_i are projected onto the camera image I_i through the projection matrix P_i to obtain the ISS projection points u_i,k = P_i U_i,k, k = 1, 2, ..., N_L, where N_L is the number of ISS key points of the local point cloud scene L_i. The three-dimensional key points of the local point cloud in fig. 6 are shown in fig. 7, and the effect of projecting the three-dimensional key points of fig. 7 into the camera image is shown in fig. 8.
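The projection step can be written down with a standard pinhole decomposition, P_i = K [R | t]; the sketch below assumes this decomposition (the text only says P_i is computed from the internal and external camera parameters).

```python
import numpy as np

def project(points, K, Rmat, t):
    """Sketch of projecting 3-D key points with a pinhole model: build the
    3x4 projection matrix P = K [R | t], map world points to homogeneous
    image coordinates, and divide by depth to get pixel coordinates."""
    P = K @ np.hstack([Rmat, t.reshape(3, 1)])
    homo = np.hstack([points, np.ones((len(points), 1))])
    uvw = (P @ homo.T).T
    return uvw[:, :2] / uvw[:, 2:3]
```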
There are various ways to vote on the three-dimensional key points of the local point cloud according to the two-dimensional key points of the camera image so as to establish the matching relationship between the two-dimensional key points and the three-dimensional key points.
As an example, a two-dimensional key point of a camera image is uniformly voted for all three-dimensional key points in a local point cloud, a three-dimensional key point of each two-dimensional key point within a preset range is reserved, and the three-dimensional key points within the preset range are used as matching points of the two-dimensional key points to establish a matching relationship between the two-dimensional key points and the three-dimensional key points.
As another example, first, the SIFT two-dimensional key points of all camera images are uniformly voted for all ISS three-dimensional projection points, SIFT key points within 3 pixels away from each ISS projection point are reserved as candidate matching points, and if there is more than one SIFT key point within 3 pixels, one SIFT key point is randomly selected as a candidate matching point; then, if the ISS projected points corresponding to the ISS keypoints have more than 3 candidate SIFT keypoints of different camera images, these ISS keypoints and corresponding candidate SIFT keypoints are retained, and these remaining ISS keypoints and SIFT keypoints are considered to be correct matching points.
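The voting rule of this example can be sketched as follows. The data layout (per-image lists of projected positions, and per-image arrays of SIFT pixel coordinates) is an assumption made for the sketch; the 3-pixel tolerance and the "more than 3 distinct images" acceptance rule follow the text.

```python
import numpy as np

def vote_matches(proj_pts, sift_pts_per_image, tol=3.0, min_views=3):
    """Sketch of the voting rule: for each projected ISS key point, count
    the images in which some SIFT key point lies within `tol` pixels of
    its projection; accept the ISS key point when it collects candidates
    in more than `min_views` distinct images. proj_pts[k][i] is the pixel
    position of ISS point k in image i (None if it does not project)."""
    accepted = []
    for k, per_image in enumerate(proj_pts):
        votes = 0
        for i, p in enumerate(per_image):
            if p is None:
                continue
            dist = np.linalg.norm(sift_pts_per_image[i] - np.asarray(p), axis=1)
            if (dist <= tol).any():        # a candidate SIFT match within 3 px
                votes += 1
        if votes > min_views:              # "more than 3" distinct images
            accepted.append(k)
    return accepted
```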
And S105, acquiring the local point cloud ball and the image block corresponding to the local point cloud ball according to the matching relation between the two-dimensional key point and the three-dimensional key point.
As an example, firstly, a local point cloud ball corresponding to a three-dimensional key point is obtained according to the three-dimensional key point in a preset range, and the local point cloud ball is back projected onto a camera image according to a projection matrix to obtain a local point cloud ball projection image; and then, acquiring the maximum rectangular enclosure of the local point cloud ball projection image, and acquiring an image block corresponding to the local point cloud ball according to the maximum rectangular enclosure.
As another example, first, a local point cloud ball is obtained with a three-dimensional key point in the preset range as its center and a set radius r; preferably, the radius r may be 1 m. Then, the local point cloud ball is back-projected, according to the projection matrix, onto the camera image of the corresponding three-dimensional key point to obtain a local point cloud ball projection image, and the maximum rectangular enclosure of this projection image is obtained; the effect of the maximum rectangular enclosure is shown in fig. 9. Then, the maximum rectangular enclosure is expanded outward by N pixels to obtain the image block corresponding to the local point cloud ball. The matching effect of the image blocks and the local point cloud balls is shown in fig. 10: the first row shows the local point cloud balls, and the second and third rows show the corresponding image blocks.
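The last step (point cloud ball, back-projection, maximum rectangular enclosure, N-pixel expansion) can be sketched as follows; the projection is passed in as a function, and the `pad` default stands in for the unspecified N, so both the function signature and the value 8 are assumptions.

```python
import numpy as np

def patch_from_sphere(cloud, center, r, project_fn, pad=8):
    """Sketch of the image-block step: collect the point cloud ball of
    radius r around a matched 3-D key point, back-project it, take the
    maximal axis-aligned rectangle of the projected pixels, and expand it
    outward by `pad` pixels (the text calls this N). r = 1 m follows the
    preferred value in the text."""
    ball = cloud[np.linalg.norm(cloud - center, axis=1) <= r]
    uv = project_fn(ball)                      # Nx2 pixel coordinates
    u_min, v_min = uv.min(axis=0) - pad
    u_max, v_max = uv.max(axis=0) + pad
    return ball, (u_min, v_min, u_max, v_max)
```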
In summary, according to the method for automatically acquiring a matching image block and a point cloud ball based on vehicle-mounted laser scanning of the embodiment of the invention, firstly, original point cloud data is acquired, ground point cloud in the original point cloud data is removed to obtain an initial point cloud, and three-dimensional key points of the initial point cloud are extracted; then, acquiring a camera image, and extracting two-dimensional key points of the camera image; then, camera parameters are obtained, and local point clouds corresponding to camera images in the initial point clouds are intercepted according to the camera parameters; then, projecting the three-dimensional key points of the local point cloud onto a camera image according to camera parameters, and voting the three-dimensional key points of the local point cloud according to the two-dimensional key points of the camera image so as to establish a matching relation between the two-dimensional key points and the three-dimensional key points; then, acquiring a local point cloud ball and an image block corresponding to the local point cloud ball according to the matching relation between the two-dimensional key point and the three-dimensional key point; therefore, a large number of matched image blocks and point cloud balls can be effectively and quickly acquired, and the method has high generalization and robustness.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, etcetera does not indicate any ordering; these words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
In the description of the present invention, it is to be understood that the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features referred to. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly: a connection may, for example, be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intervening medium; and it may denote an internal communication between two elements or an interaction between them. Those skilled in the art can understand the specific meanings of the above terms in the present invention according to the specific situation.
In the present invention, unless otherwise expressly stated or limited, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or in indirect contact through an intermediary. Also, a first feature "on," "over," or "above" a second feature may be directly or diagonally above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature "under," "below," or "beneath" a second feature may be directly or diagonally below the second feature, or may simply indicate that the first feature is at a lower level than the second feature.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above should not be understood to necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (8)

1. A method for automatically acquiring a matched image block and a point cloud ball based on vehicle-mounted laser scanning is characterized by comprising the following steps:
acquiring original point cloud data, removing ground point cloud in the original point cloud data to obtain initial point cloud, and extracting three-dimensional key points of the initial point cloud;
acquiring a camera image, and extracting two-dimensional key points of the camera image;
acquiring camera parameters, and intercepting local point clouds corresponding to the camera images in the initial point clouds according to the camera parameters;
projecting the three-dimensional key points of the local point cloud onto the camera image according to the camera parameters, and voting the three-dimensional key points of the local point cloud according to the two-dimensional key points of the camera image so as to establish a matching relationship between the two-dimensional key points and the three-dimensional key points;
and acquiring a local point cloud ball and an image block corresponding to the local point cloud ball according to the matching relation between the two-dimensional key point and the three-dimensional key point.
2. The method for automatically acquiring a matched image block and a point cloud ball based on vehicle-mounted laser scanning of claim 1, wherein removing the ground point cloud in the original point cloud data to obtain the initial point cloud comprises:
segmenting the original point cloud data according to a preset width to obtain a plurality of local point cloud blocks, and subdividing the local point cloud blocks according to an octree index structure to generate spatially contiguous point cloud voxels;
and removing the ground point cloud in the point cloud voxel according to a voxel upward growth method to obtain an initial point cloud.
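The voxel-based ground removal of claim 2 can be illustrated by a much simplified sketch: instead of an octree index and upward voxel growth, the toy helper below (`remove_ground`; the cell size and height threshold are assumptions, not values from the patent) bins points into vertical columns and discards points close to each column's minimum elevation.

```python
import numpy as np
from collections import defaultdict

def remove_ground(points, cell=0.5, height_thresh=0.3):
    """Toy ground filter: bin points (N x 3, columns x/y/z) into
    vertical columns of size `cell` and drop every point lying
    within `height_thresh` of its column's minimum elevation."""
    cols = np.floor(points[:, :2] / cell).astype(np.int64)
    buckets = defaultdict(list)
    for i, c in enumerate(map(tuple, cols)):
        buckets[c].append(i)
    keep = np.ones(len(points), dtype=bool)
    for idx in buckets.values():
        idx = np.asarray(idx)
        zmin = points[idx, 2].min()
        keep[idx[points[idx, 2] - zmin < height_thresh]] = False
    return points[keep]
```

The claimed method is considerably more robust on sloped terrain, since voxel growth follows the local ground surface rather than a global column minimum.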
3. The method for automatically acquiring a matched image block and a point cloud ball based on vehicle-mounted laser scanning of claim 1, wherein the three-dimensional key points of the initial point cloud are extracted by an ISS (Intrinsic Shape Signatures) key point detector.
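ISS selects points whose local scatter matrix has well-separated eigenvalues. A brute-force illustration with assumed radius and ratio thresholds follows; production code would normally use an existing detector (e.g. the ISS implementations in PCL or Open3D), which also applies non-maximum suppression.

```python
import numpy as np

def iss_keypoints(points, radius=1.0, gamma21=0.975, gamma32=0.975, min_nbrs=5):
    """Brute-force ISS sketch: keep indices of points whose neighbourhood
    covariance eigenvalues l1 >= l2 >= l3 satisfy l2/l1 < gamma21 and
    l3/l2 < gamma32 (i.e. the local structure is not degenerate)."""
    keypoints = []
    for i, p in enumerate(points):
        nbrs = points[np.linalg.norm(points - p, axis=1) < radius]
        if len(nbrs) < min_nbrs:
            continue
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
        if l1 > 0 and l2 > 0 and l2 / l1 < gamma21 and l3 / l2 < gamma32:
            keypoints.append(i)
    return np.asarray(keypoints, dtype=np.int64)
```

The O(n²) neighbour search is only workable for small clouds; a k-d tree would replace it in practice.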
4. The method for automatically acquiring a matched image block and a point cloud ball based on vehicle-mounted laser scanning of claim 1, wherein the two-dimensional key points of the camera image are extracted by a SIFT (Scale-Invariant Feature Transform) key point detector.
5. The method of claim 1, wherein the camera parameters comprise camera external parameters and camera positioning information, and wherein intercepting local point clouds corresponding to the camera images in the initial point clouds according to the camera parameters comprises:
calculating the shooting direction of the camera according to the camera external parameters, and calculating the nearest neighbor point between the camera and the initial point cloud according to the shooting direction and the camera positioning information;
and determining a central point of the local point cloud according to the nearest neighbor point, a preset propelling distance and the shooting direction, and intercepting the initial point cloud according to the central point of the local point cloud and a preset radius to obtain the local point cloud.
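The interception of claim 5 can be sketched as: find the initial-cloud point nearest to the camera, advance the crop center along the shooting direction by the preset propelling distance, and keep everything within the preset radius. The helper name and default distances below are assumptions for illustration.

```python
import numpy as np

def crop_local_cloud(points, cam_pos, view_dir, advance=15.0, radius=25.0):
    """Claim-5 sketch: take the initial-cloud point nearest the camera,
    push the crop centre `advance` units along the viewing direction,
    and keep all points within `radius` of that centre."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    nearest = points[np.argmin(np.linalg.norm(points - cam_pos, axis=1))]
    center = nearest + advance * view_dir
    local = points[np.linalg.norm(points - center, axis=1) <= radius]
    return local, center
```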
6. The method for automatically acquiring a matched image block and a point cloud ball based on vehicle-mounted laser scanning of claim 1, wherein projecting the three-dimensional key points of the local point cloud onto the camera image according to the camera parameters comprises:
and calculating a projection matrix of the local point cloud to the camera image according to the camera parameters, and projecting the three-dimensional key points of the local point cloud to the camera image according to the projection matrix.
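The projection of claim 6 is the standard pinhole model: with intrinsic matrix K and extrinsics (R, t), the projection matrix is P = K[R | t] and pixel coordinates follow from the perspective divide. A minimal numpy sketch:

```python
import numpy as np

def project_points(pts3d, K, R, t):
    """Project N x 3 world points to pixel coordinates via P = K [R | t]."""
    cam = R @ pts3d.T + t.reshape(3, 1)   # world frame -> camera frame
    uv = K @ cam                          # camera frame -> image plane
    return (uv[:2] / uv[2]).T             # perspective divide -> N x 2 pixels
```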
7. The method of claim 6, wherein voting for three-dimensional key points of the local point cloud according to two-dimensional key points of the camera image to establish a matching relationship between the two-dimensional key points and the three-dimensional key points comprises:
and casting a vote from each two-dimensional key point of the camera image over all the three-dimensional key points in the local point cloud, retaining for each two-dimensional key point the three-dimensional key points within a preset range, and taking the three-dimensional key points within the preset range as matching points of the two-dimensional key point, so as to establish the matching relationship between the two-dimensional key points and the three-dimensional key points.
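The voting of claim 7 can be sketched as a nearest-neighbour search in the image plane: each two-dimensional key point considers all projected three-dimensional key points and keeps the nearest one inside a preset pixel range as its match. The threshold below is an assumed value.

```python
import numpy as np

def match_keypoints(kp2d, kp3d_proj, pixel_thresh=5.0):
    """Claim-7 sketch: each 2-D key point votes over all projected 3-D key
    points and keeps the nearest one within `pixel_thresh` pixels as its
    match; returns (index_2d, index_3d) pairs."""
    matches = []
    for i, p in enumerate(kp2d):
        d = np.linalg.norm(kp3d_proj - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= pixel_thresh:
            matches.append((i, j))
    return matches
```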
8. The method for automatically acquiring matching image blocks and point cloud balls based on vehicle-mounted laser scanning according to claim 7, wherein acquiring local point cloud balls and image blocks corresponding to the local point cloud balls according to the matching relationship between the two-dimensional key points and the three-dimensional key points comprises:
acquiring a local point cloud ball corresponding to the three-dimensional key point according to the three-dimensional key point in the preset range, and back-projecting the local point cloud ball onto the camera image according to a projection matrix to obtain a local point cloud ball projection image;
and acquiring the maximum rectangular enclosure of the projected image of the local point cloud ball, and acquiring an image block corresponding to the local point cloud ball according to the maximum rectangular enclosure.
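The final step of claim 8, taking the maximum rectangular enclosure of the projected point cloud ball, amounts to an axis-aligned bounding rectangle of the projected pixel coordinates, clipped to the image extent. A minimal sketch (function name and clipping convention are assumptions):

```python
import numpy as np

def patch_from_projection(proj_uv, image_shape):
    """Claim-8 sketch: axis-aligned bounding rectangle (u0, v0, u1, v1) of
    the back-projected point cloud ball, clipped to the image extent."""
    h, w = image_shape
    u0, v0 = np.floor(proj_uv.min(axis=0)).astype(int)
    u1, v1 = np.ceil(proj_uv.max(axis=0)).astype(int)
    return max(u0, 0), max(v0, 0), min(u1, w - 1), min(v1, h - 1)
```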
CN202010102585.3A 2020-02-19 2020-02-19 Method for automatically acquiring matched image block and point cloud ball based on vehicle-mounted laser scanning Active CN111340889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010102585.3A CN111340889B (en) 2020-02-19 2020-02-19 Method for automatically acquiring matched image block and point cloud ball based on vehicle-mounted laser scanning


Publications (2)

Publication Number Publication Date
CN111340889A true CN111340889A (en) 2020-06-26
CN111340889B CN111340889B (en) 2023-04-07

Family ID: 71186957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010102585.3A Active CN111340889B (en) 2020-02-19 2020-02-19 Method for automatically acquiring matched image block and point cloud ball based on vehicle-mounted laser scanning

Country Status (1)

Country Link
CN (1) CN111340889B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140010407A1 (en) * 2012-07-09 2014-01-09 Microsoft Corporation Image-based localization
CN106403845A (en) * 2016-09-14 2017-02-15 杭州思看科技有限公司 3D sensor system and 3D data acquisition method
EP3392831A1 (en) * 2016-09-14 2018-10-24 Hangzhou Scantech Company Limited Three-dimensional sensor system and three-dimensional data acquisition method
US20180276885A1 (en) * 2017-03-27 2018-09-27 3Dflow Srl Method for 3D modelling based on structure from motion processing of sparse 2D images
CN110415342A (en) * 2019-08-02 2019-11-05 深圳市唯特视科技有限公司 A kind of three-dimensional point cloud reconstructing device and method based on more merge sensors
CN110443840A (en) * 2019-08-07 2019-11-12 山东理工大学 The optimization method of sampling point set initial registration in surface in kind

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Kunyuan et al. (陈坤源等), "Intelligent Processing Technology for High-Precision Railway Point Clouds" (铁路高精度点云智能的处理技术), Journal of Xiamen University (《厦门大学学报》) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200851A (en) * 2020-12-09 2021-01-08 北京云测信息技术有限公司 Point cloud-based target detection method and device and electronic equipment thereof
CN112200851B (en) * 2020-12-09 2021-02-26 北京云测信息技术有限公司 Point cloud-based target detection method and device and electronic equipment thereof
CN112580489A (en) * 2020-12-15 2021-03-30 深兰人工智能(深圳)有限公司 Traffic light detection method and device, electronic equipment and storage medium
WO2024041585A1 (en) * 2022-08-26 2024-02-29 The University Of Hong Kong A method for place recognition on 3d point cloud

Also Published As

Publication number Publication date
CN111340889B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN111340889B (en) Method for automatically acquiring matched image block and point cloud ball based on vehicle-mounted laser scanning
CN108205797B (en) Panoramic video fusion method and device
Schops et al. A multi-view stereo benchmark with high-resolution images and multi-camera videos
CN109741257B (en) Full-automatic panorama shooting and splicing system and method
Schöning et al. Evaluation of multi-view 3D reconstruction software
US8711141B2 (en) 3D image generating method, 3D animation generating method, and both 3D image generating module and 3D animation generating module thereof
CN109842811B (en) Method and device for implanting push information into video and electronic equipment
CN110567441B (en) Particle filter-based positioning method, positioning device, mapping and positioning method
CN107025647B (en) Image tampering evidence obtaining method and device
CN109255809A (en) A kind of light field image depth estimation method and device
Ali et al. Single image Façade segmentation and computational rephotography of House images using deep learning
Pintus et al. A fast and robust framework for semiautomatic and automatic registration of photographs to 3D geometry
Teixeira et al. Epipolar based light field key-location detector
CN113838069A (en) Point cloud segmentation method and system based on flatness constraint
CN111738061A (en) Binocular vision stereo matching method based on regional feature extraction and storage medium
CN115861531A (en) Crop group three-dimensional reconstruction method based on aerial images
CN116092035A (en) Lane line detection method, lane line detection device, computer equipment and storage medium
CN110457998A (en) Image data correlating method and equipment, data processing equipment and medium
CN114332134B (en) Building facade extraction method and device based on dense point cloud
CN113724365B (en) Three-dimensional reconstruction method and device
CN113298871B (en) Map generation method, positioning method, system thereof, and computer-readable storage medium
Skuratovskyi et al. Outdoor mapping framework: from images to 3d model
JP2011170554A (en) Object recognition device, object recognition method, and object recognition program
CN112204624A (en) Method and device for automatically shearing model and storage medium
JP6606748B2 (en) Stereo pair image display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant