CN117495969B - Automatic point cloud orientation method, equipment and storage medium based on computer vision

Info

Publication number: CN117495969B
Authority: CN (China)
Application number: CN202410003521.6A
Other languages: Chinese (zh)
Other versions: CN117495969A
Inventors:
廖李灿
应宗权
毛凤山
吕述晖
李金祥
刘介山
Current Assignee: CCCC Fourth Harbor Engineering Institute Co Ltd
Application filed by CCCC Fourth Harbor Engineering Institute Co Ltd
Priority to CN202410003521.6A
Publication of CN117495969A (application) and CN117495969B (grant)
Legal status: Active (granted)

Classifications

    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/11: Region-based segmentation
    • G06T7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G06T7/90: Determination of colour characteristics
    • G06T2207/10028: Range image; Depth image; 3D point clouds


Abstract

The invention discloses an automatic point cloud orientation method based on computer vision, an electronic device and a computer-readable storage medium. The method comprises the following steps: extracting a point cloud ring covering the target layout range from the original point cloud data, and converting the point cloud ring from a Cartesian coordinate system to a spherical coordinate system; mapping the point cloud ring into a two-dimensional point cloud gray image; searching for the target gray image in the point cloud gray image with a template matching algorithm, and identifying the corresponding target area in the point cloud ring; defining a second segmentation precision and mapping the target area into a higher-precision target gray image; filtering the target gray image and obtaining the target center point from it with a feature point extraction algorithm; and calculating the point cloud coordinate conversion parameters and converting the original point cloud data from the local coordinate system to the construction coordinate system. The method can automatically identify the planar target area and extract the target center point, realizing rapid, accurate and automated processing of the point cloud data orientation process.

Description

Automatic point cloud orientation method, equipment and storage medium based on computer vision
Technical Field
The present invention relates to the field of engineering measurement technologies, and in particular, to an automated point cloud orientation method based on computer vision, an electronic device, and a computer readable storage medium.
Background
With social development and advances in process technology, the construction of super high-rise, special-shaped and large-scale structures at home and abroad is accelerating. Because such structures are highly sensitive to construction deviation and the measurement schedule is tight, the demands on measurement work keep rising: measurement precision must be guaranteed while measurement efficiency is also taken into account. Traditional measurement means can hardly meet these requirements in terms of efficiency, and the emergence of three-dimensional laser scanning technology effectively solves this problem.
Three-dimensional laser scanning overcomes the single-point acquisition limitation of traditional measurement and can rapidly and accurately acquire the three-dimensional coordinates, intensity and color information of the scanned surface. On the one hand, the three-dimensional coordinates acquired by the scanner are expressed in the instrument's local coordinate system, whereas construction measurement uses the construction coordinate system specified by the project, so the point cloud must be converted from the local coordinate system to the construction coordinate system; this conversion is the point cloud orientation performed during engineering measurement analysis.
In the current common approach, point cloud orientation is realized by arranging several groups of spherical or planar targets. Spherical targets can be identified automatically, but the whole scanning area must be traversed to detect spherical point clouds, and a single scan captures at most half of a sphere, so sphere fitting is easily disturbed by other objects in the area; planar targets require an operator to locate the target area manually before the center point can be extracted. On the other hand, while three-dimensional laser scanning yields more comprehensive and accurate information, the volume of data to be processed grows accordingly. The huge data volume and manual target identification together greatly reduce the processing efficiency of point cloud orientation and, as with traditional measurement methods, impair the ability of three-dimensional scanning to give real-time feedback on field measurements.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention aims to provide an automatic point cloud orientation method based on computer vision, electronic equipment and a computer readable storage medium, which can automatically identify a plane target area and extract a target center point, and quickly and accurately realize the automatic processing of a point cloud data orientation process.
The automatic point cloud orientation method based on computer vision is realized by adopting the following technical scheme:
an automatic point cloud orientation method based on computer vision comprises the following steps:
Step 1: extracting a point cloud ring covering the target layout range from the original point cloud data, and converting the point cloud ring from a Cartesian coordinate system to a spherical coordinate system;
Step 2: defining a first segmentation precision, segmenting a horizontal angle and a vertical angle of a point cloud ring according to the first segmentation precision, dividing the point cloud ring into a plurality of angle areas, mapping the point cloud ring into a two-dimensional point cloud gray image, wherein each pixel of the point cloud gray image corresponds to each angle area of the point cloud ring one by one, and the gray value of each pixel is obtained by converting and calculating the RGB value of the point cloud in the corresponding angle area;
Step 3: generating a target gray image template, searching a target gray image from the point cloud gray image based on a template matching algorithm, and identifying a corresponding target region in the point cloud ring according to the position of the target gray image;
Step 4: defining a second division precision, dividing a horizontal angle and a vertical angle of a target area according to the second division precision, dividing the target area into a plurality of smaller angle areas, and mapping the target area into a target gray image with higher precision, wherein each pixel of the target gray image corresponds to each angle area of the target area one by one, and gray values of each pixel are obtained by converting and calculating RGB values of point clouds in the corresponding angle areas;
Step 5: filtering the target gray level image, acquiring pixel coordinates of a target center point in the target gray level image through a feature point extraction algorithm, and extracting local coordinates corresponding to the target center point in original point cloud data according to the pixel coordinates;
Step 6: calculating point cloud coordinate conversion parameters according to the local coordinates and the construction coordinates of at least 3 groups of target center points, and converting the original point cloud data from the local coordinate system to the construction coordinate system according to the point cloud coordinate conversion parameters.
Further, step 1 includes:
Step 11: at least 3 non-collinear planar targets with different heights are arranged in the point cloud scanning area corresponding to the original point cloud data; each planar target consists of four 90° sectors, each sector is black or white, and adjacent sectors differ in color, so that the target center point can be identified through the gray difference between adjacent sectors. Point cloud rings whose Z coordinates lie in the range [h − h_y − Δh, h − h_y + h_b + Δh] are extracted from the original point cloud data by pass-through filtering, where the local coordinate system of the original point cloud data takes the instrument center as its origin, h is the target layout height taken from [h_1, h_2, h_3, …, h_n], n is the number of targets, h_y is the erection height of the three-dimensional laser scanner, h_b is the vertical length of the target, and Δh is a height adjustment parameter taken as 0.1-0.5 m to ensure that the extracted point cloud ring covers the target layout range;
Step 12: the Cartesian coordinates of all points in the point cloud ring are converted into the corresponding spherical coordinates, giving the value range [V_min, V_max] of the horizontal angle and the value range [H_min, H_max] of the vertical angle of the point cloud ring in the spherical coordinate system.
Further, step 2 includes:
Step 21: a first segmentation precision P_l1 is defined, and the horizontal and vertical angles of the point cloud ring are segmented according to P_l1, giving the horizontal-angle segmentation sequence [V_min + 0×P_l1, V_min + 1×P_l1, …, V_min + i×P_l1, …, V_max] and the vertical-angle segmentation sequence [H_min + 0×P_l1, H_min + 1×P_l1, …, H_min + j×P_l1, …, H_max], so that the point cloud ring is divided into a plurality of angle areas, each bounded by one horizontal angle interval and one vertical angle interval;
Step 22: the point cloud ring is mapped into a point cloud gray image P_0; each angle area of the point cloud ring is mapped to one pixel of P_0, where the pixel coordinates (i, j) correspond to the horizontal angle interval [V_min + i×P_l1, V_min + (i+1)×P_l1] and the vertical angle interval [H_min + j×P_l1, H_min + (j+1)×P_l1], and the integers i and j range over [0, (V_max − V_min)/P_l1 − 1] and [0, (H_max − H_min)/P_l1 − 1] respectively. The RGB values of the points in the angle area corresponding to each pixel coordinate are extracted, the RGB values of each point are converted into a gray value by the averaging method, and the median of these gray values is taken as the gray value of the corresponding pixel; when an angle area contains no points, the gray value of the corresponding pixel is defined as 0.
Further, step 3 includes:
Step 31: an initial gray image with height and width d is generated, and a pixel coordinate system is defined with its origin at the upper-left corner of the image; the two regions [0:d/2, 0:d/2] and [d/2:d, d/2:d] of the pixel coordinate system are black and the two regions [0:d/2, d/2:d] and [d/2:d, 0:d/2] are white, forming four alternating black and white squares;
Step 32: a rotation angle, denoted angle and taken between 2° and 6°, is defined, and the arithmetic sequence A = [angle, …, angle×(m−1), 180°] with common difference angle is generated between 0° and 180°; the initial gray image is rotated by each angle value of A to obtain m additional initial gray images, and the edges of each initial gray image are cropped so that only the region [d/4 : d×3/4, d/4 : d×3/4] remains, thereby generating m+1 target gray image templates that form the template set G;
Step 33: the correlation coefficients between every target gray image template in the template set G and the different pixel regions of the point cloud gray image P_0 are computed by a template matching algorithm, the best-matching pixel point (i_1, j_1) is extracted, the image of the region [i_1 : i_1 + d/2, j_1 : j_1 + d/2] in the pixel coordinate system is taken as the target gray image P_1, and the corresponding target area is identified in the point cloud ring.
Further, in step 33, the template matching algorithm adopts the normalized correlation coefficient matching method, the normalized correlation matching method or the normalized squared difference matching method.
Further, step 4 includes:
Step 41: the value range [V_k1, V_k2] of the horizontal angle and the value range [H_k1, H_k2] of the vertical angle of the target area in the spherical coordinate system are obtained; the number of points in the target area is counted as Sum, the square root of Sum is taken, rounded down if it is not an integer, and the result is assigned to d_1;
Step 42: a second segmentation precision P_l2 = (H_k2 − H_k1)/d_1 is defined, and the horizontal and vertical angles of the target area are segmented according to P_l2, giving the horizontal-angle segmentation sequence [V_k1 + 0×P_l2, V_k1 + 1×P_l2, …, V_k1 + i×P_l2, …, V_k2] and the vertical-angle segmentation sequence [H_k1 + 0×P_l2, H_k1 + 1×P_l2, …, H_k1 + j×P_l2, …, H_k2], so that the target area is divided into a plurality of angle areas, each bounded by one horizontal angle interval and one vertical angle interval;
Step 43: the target area is mapped into a target gray image P_2; each angle area of the target area is mapped to one pixel of P_2, where the pixel coordinates (i, j) correspond to the horizontal angle interval [V_k1 + i×P_l2, V_k1 + (i+1)×P_l2] and the vertical angle interval [H_k1 + j×P_l2, H_k1 + (j+1)×P_l2], and the integers i and j both range over [0, d_1 − 1]. The RGB values of the points in the angle area corresponding to each pixel coordinate are extracted, the RGB values of each point are converted into a gray value by the averaging method, and the median of these gray values is taken as the gray value of the corresponding pixel; when an angle area contains no points, the gray value of the corresponding pixel is defined as 0.
Further, step 5 includes:
Step 51: in the target gray image P_2 generated in step 4, a pixel whose corresponding angle area contains no points has a gray value of 0 and appears as a noise point in P_2; P_2 is therefore processed with a filtering algorithm that replaces each pixel of gray value 0 with the gray values of the surrounding pixels, yielding a noise-free target gray image;
Step 52: the pixel coordinates of the target center point in the noise-free target gray image are obtained by a feature point extraction algorithm, and the local coordinates of the target center point in the Cartesian coordinate system are extracted through the correspondence between pixel coordinates, spherical coordinates and Cartesian coordinates.
Further, in step 51, the filtering algorithm uses median filtering, mean filtering or Gaussian filtering, replacing the gray value of each noise point with the median or mean of the gray values of all pixels in the filtering window corresponding to that noise point, so as to obtain a noise-free target gray image;
In step 52, the feature point extraction algorithm adopts the Harris corner detection algorithm, the SIFT feature detection algorithm or the template matching method, and the pixel coordinates of the target center point are determined from the gray gradient variation at the target center or from the correlation coefficient of template matching.
The electronic device of the present invention is realized by adopting the following technical scheme:
An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the above-described automated point cloud orientation method when executing the computer program.
The computer readable storage medium of the present invention is realized by the following technical scheme:
a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described automated point cloud orientation method.
Compared with the prior art, the invention has the beneficial effects that:
The invention fully utilizes the coordinate information and RGB information contained in the point cloud data to map the three-dimensional point cloud data into the two-dimensional gray level image, and the geometric characteristics of the plane target can be reserved in the two-dimensional gray level image, so that the target area can be automatically identified and the target center point can be extracted by utilizing a mature computer vision algorithm, and the automatic processing of the point cloud orientation can be rapidly and accurately realized.
Drawings
FIG. 1 is a flow chart of an embodiment of the computer vision based automated point cloud orientation method of the present invention;
FIG. 2 is a point cloud ring of an example application of the method of the present invention;
FIG. 3 is a point cloud gray scale image of an example of application of the method of the present invention;
FIG. 4 is an unrotated initial gray scale image of an example of an application of the method of the present invention;
FIG. 5 is a target gray scale image template rotated 24° and cropped for an exemplary application of the method of the present invention;
FIG. 6 is a schematic diagram of finding the target grayscale image P_1 in an example of application of the method of the present invention;
FIG. 7 is a point cloud within a target area for an example of application of the method of the present invention;
FIG. 8 is a target grayscale image P_2 illustrating an example of application of the method of the present invention;
FIG. 9 is a filtered image of the target grayscale image P_2 in an example of application of the method of the present invention;
fig. 10 is a schematic diagram of a target center point identification result of an application example of the method of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and specific embodiments. It should be understood that, provided no conflict arises, the following embodiments or technical features may be combined arbitrarily to form new embodiments.
Referring to fig. 1, an embodiment of the present invention provides an automated point cloud orientation method based on computer vision. The method comprises the following steps:
Step 1: extracting a point cloud ring covering the target layout range from the original point cloud data, and converting the point cloud ring from a Cartesian coordinate system to a spherical coordinate system.
The purpose of step 1 is to reduce the amount of point cloud data to be processed in the subsequent two-dimensional imaging and computer vision recognition, thereby improving processing efficiency; step 1 only requires the point cloud ring to cover the layout range of the planar targets rather than retaining all of the original point cloud data, so a large number of irrelevant points can be removed.
Specifically, step 1 includes:
Step 11: at least 3 non-collinear planar targets with different heights are arranged in the point cloud scanning area corresponding to the original point cloud data; each planar target consists of four 90° sectors, each sector is black or white, and adjacent sectors differ in color, so that the target center point can be identified through the gray difference between adjacent sectors. Point cloud rings whose Z coordinates lie in the range [h − h_y − Δh, h − h_y + h_b + Δh] are extracted from the original point cloud data by pass-through filtering, where the local coordinate system of the original point cloud data takes the instrument center as its origin, h is the target layout height taken from [h_1, h_2, h_3, …, h_n], n is the number of targets, h_y is the erection height of the three-dimensional laser scanner, h_b is the vertical length of the target, and Δh is a height adjustment parameter taken as 0.1-0.5 m to ensure that the extracted point cloud ring covers the target layout range;
Step 12: the Cartesian coordinates of all points in the point cloud ring are converted into the corresponding spherical coordinates, giving the value range [V_min, V_max] of the horizontal angle and the value range [H_min, H_max] of the vertical angle of the point cloud ring in the spherical coordinate system.
In step 11, n, h_y, Δh, h_b and similar quantities are known or set in advance, and [h_1, h_2, h_3, …, h_n] means that the targets at different heights are numbered 1, 2, 3, …, n in advance. The pass-through filtering of step 11 ensures that the extracted point cloud ring contains the complete target point cloud while reducing the amount of point cloud data processed later, improving processing efficiency.
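As an illustration only, the pass-through filtering of step 11 might look like the following Python sketch (using NumPy; the (N, 6) x-y-z-R-G-B array layout and all names are assumptions for illustration, not the patent's implementation):

```python
import numpy as np

def extract_point_cloud_rings(points, target_heights, h_y, h_b, dh=0.1):
    """Pass-through filter of step 11: keep points whose Z coordinate lies in
    [h - h_y - dh, h - h_y + h_b + dh] for each target layout height h.

    points         : (N, 6) array of x, y, z, R, G, B in the scanner's local frame
    target_heights : iterable of layout heights [h_1, ..., h_n]
    h_y            : scanner erection height
    h_b            : vertical length of the target
    dh             : height adjustment parameter (0.1-0.5 m in the text)
    """
    rings = []
    z = points[:, 2]
    for h in target_heights:
        mask = (z >= h - h_y - dh) & (z <= h - h_y + h_b + dh)
        rings.append(points[mask])
    return rings

# With the values of the application example (heights in metres):
# rings = extract_point_cloud_rings(points, [0.2, 0.6, 1.0, 1.4], h_y=1.5, h_b=0.297)
```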
In step 12, the formulas for converting Cartesian coordinates to spherical coordinates are as follows:
r = √(x² + y² + z²) (1)
θ = arccos(z / r) (2)
φ = arctan(y / x) (3)
where x, y, z are the coordinates in the Cartesian coordinate system, r, θ, φ are the coordinates in the spherical coordinate system, θ is the vertical angle, and φ is the horizontal angle.
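A minimal sketch of the step 12 conversion under the conventions of formulas (1)-(3) (arctan(y/x) is realized with atan2 so the horizontal angle spans the full [−180°, 180°] range; function and variable names are illustrative):

```python
import numpy as np

def cartesian_to_spherical(xyz):
    """Formulas (1)-(3): radius, vertical angle (from the +Z axis) and horizontal
    angle. Angles are returned in degrees to match the intervals in the text,
    which denotes the horizontal-angle range by [V_min, V_max] and the vertical
    range by [H_min, H_max]."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)          # (1)
    v_ang = np.degrees(np.arccos(z / r))     # (2) vertical angle
    h_ang = np.degrees(np.arctan2(y, x))     # (3) horizontal angle, [-180, 180]
    return r, h_ang, v_ang
```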
Step 2: a first segmentation precision P_l1 is defined, the horizontal and vertical angles of the point cloud ring are segmented according to P_l1 so that the ring is divided into a number of angle areas, and the point cloud ring is mapped into a two-dimensional point cloud gray image P_0; each pixel of P_0 corresponds one-to-one to an angle area of the point cloud ring, and the gray value of each pixel is computed from the RGB values of the points in the corresponding angle area.
In step 2, segmenting the horizontal and vertical angles divides the point cloud ring into many angle areas, on the basis of which the ring is mapped into a two-dimensional gray image; this two-dimensional imaging preserves the geometric features of the planar target in the generated point cloud gray image P_0.
In addition, the point cloud gray image P_0 is mainly used in the following steps to identify the target area rather than to identify the target center point directly, so the precision of the image can be understood through the ratio of the total number of points to the total number of angle areas: for example, P_l1 can be chosen so that this ratio lies in the empirical range 4:1 to 10:1, i.e., each angle area contains about 4 to 10 points on average. The purpose of step 2 is to map the point cloud ring into a relatively low-precision two-dimensional gray image, improving processing efficiency as much as possible while still preserving the geometric features of the planar target. Preferably, P_l1 is defined such that the ratio of the total number of points of the point cloud ring to the total number of angle areas is 4:1 to 10:1.
Specifically, step 2 includes:
Step 21: a first segmentation precision P_l1 is defined; empirically, P_l1 is taken as 1/100 to 1/50 of the vertical angle range of the point cloud ring, i.e., (H_max − H_min)/100 ≤ P_l1 ≤ (H_max − H_min)/50. The horizontal and vertical angles of the point cloud ring are then segmented according to P_l1, giving the horizontal-angle segmentation sequence [V_min + 0×P_l1, V_min + 1×P_l1, …, V_min + i×P_l1, …, V_max] and the vertical-angle segmentation sequence [H_min + 0×P_l1, H_min + 1×P_l1, …, H_min + j×P_l1, …, H_max], so that the point cloud ring is divided into a plurality of angle areas, each bounded by one horizontal angle interval and one vertical angle interval;
In step 21, another appropriate way to choose P_l1 is to require that the ratio of the total number of points of the point cloud ring to the total number of angle areas be 4:1 to 10:1; this requires counting the total number of points of the ring, deriving the admissible total number of angle areas from the ratio, planning the numbers of horizontal and vertical angle intervals on that basis, and then calculating a suitable P_l1, as in the sketch below;
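For illustration, that ratio-based choice of P_l1 could be computed as follows (a sketch; the function name and the default of 7 points per angle area are assumptions inside the 4:1-10:1 window stated above):

```python
import math

def first_precision(num_points, v_range, h_range, pts_per_area=7.0):
    """Choose P_l1 so that num_points / (total number of angle areas) is about
    pts_per_area; the text suggests 4 to 10 points per angle area."""
    target_areas = num_points / pts_per_area
    # areas = (v_range / P) * (h_range / P)  =>  P = sqrt(v_range * h_range / areas)
    return math.sqrt(v_range * h_range / target_areas)
```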
Step 22: the point cloud ring is mapped into a point cloud gray image P_0; each angle area of the point cloud ring is mapped to one pixel of P_0, where the pixel coordinates (i, j) correspond to the horizontal angle interval [V_min + i×P_l1, V_min + (i+1)×P_l1] and the vertical angle interval [H_min + j×P_l1, H_min + (j+1)×P_l1], and the integers i and j range over [0, (V_max − V_min)/P_l1 − 1] and [0, (H_max − H_min)/P_l1 − 1] respectively. The RGB values of the points in the angle area corresponding to each pixel coordinate are extracted, the RGB values of each point are converted into a gray value by the averaging method, and the median of these gray values is taken as the gray value of the corresponding pixel; when an angle area contains no points, the gray value of the corresponding pixel is defined as 0.
In step 22, the averaging formula used is as follows:
Gray = (R + G + B) / 3 (4)
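The rasterization of step 22, including formula (4) and the per-pixel median, might be sketched as follows (assumed inputs: the per-point angles from step 12 and an (N, 3) RGB array; all names are illustrative):

```python
import numpy as np

def ring_to_gray_image(h_ang, v_ang, rgb, p_l1):
    """Step 22: one pixel per angle area of the point cloud ring.

    h_ang, v_ang : per-point horizontal / vertical angles in degrees (step 12)
    rgb          : (N, 3) array of per-point R, G, B values
    p_l1         : first segmentation precision in degrees
    """
    v_min, v_max = h_ang.min(), h_ang.max()   # the text's [V_min, V_max]
    h_min, h_max = v_ang.min(), v_ang.max()   # the text's [H_min, H_max]
    w = int(np.ceil((v_max - v_min) / p_l1))  # number of horizontal intervals
    h = int(np.ceil((h_max - h_min) / p_l1))  # number of vertical intervals
    gray_pts = rgb.mean(axis=1)               # formula (4): (R + G + B) / 3

    # Angle-interval index of every point; points on the upper edge are clamped.
    i = np.minimum(((h_ang - v_min) / p_l1).astype(int), w - 1)
    j = np.minimum(((v_ang - h_min) / p_l1).astype(int), h - 1)

    img = np.zeros((h, w), dtype=np.uint8)    # empty angle areas keep gray 0
    flat = j * w + i                          # one bin index per point
    order = np.argsort(flat)
    flat_sorted, gray_sorted = flat[order], gray_pts[order]
    bins, starts = np.unique(flat_sorted, return_index=True)
    ends = np.append(starts[1:], len(flat_sorted))
    for b, s, e in zip(bins, starts, ends):   # median gray value per pixel
        img[b // w, b % w] = int(np.median(gray_sorted[s:e]))
    return img
```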
Step 3: a target gray image template is generated, the target gray image P_1 is searched for in the point cloud gray image P_0 with a template matching algorithm, and the corresponding target area is identified in the point cloud ring from the position of P_1.
In step 3, since the point cloud ring has already been converted into a two-dimensional point cloud gray image in step 2, and this image retains the geometric features of the planar target (i.e., it appears approximately as four alternating black and white sectors), a mature computer vision algorithm (here, template matching) can be used to find the target gray image P_1 within the point cloud gray image P_0; the corresponding target area in the point cloud ring is then identified from the position of P_1 in P_0 and the correspondence between pixel coordinates and angle areas. The target area consists of the angle areas corresponding to all pixels of P_1.
Specifically, step 3 includes:
Step 31: an initial gray image with height and width d is generated, and a pixel coordinate system is defined with its origin at the upper-left corner of the image; the two regions [0:d/2, 0:d/2] and [d/2:d, d/2:d] of the pixel coordinate system are black and the two regions [0:d/2, d/2:d] and [d/2:d, 0:d/2] are white, forming four alternating black and white squares;
Step 32: a rotation angle, denoted angle and taken between 2° and 6°, is defined, and the arithmetic sequence A = [angle, …, angle×(m−1), 180°] with common difference angle is generated between 0° and 180°; the initial gray image is rotated by each angle value of A to obtain m additional initial gray images, and the edges of each initial gray image are cropped so that only the region [d/4 : d×3/4, d/4 : d×3/4] remains, thereby generating m+1 target gray image templates that form the template set G;
Step 33: the correlation coefficients between every target gray image template in the template set G and the different pixel regions of the point cloud gray image P_0 are computed by a template matching algorithm, the best-matching pixel point (i_1, j_1) is extracted, the image of the region [i_1 : i_1 + d/2, j_1 : j_1 + d/2] in the pixel coordinate system is taken as the target gray image P_1, and the corresponding target area is identified in the point cloud ring.
In step 31, the letter d represents the height and width of the initial gray image purely for convenience in defining the proportional relationships among the initial image size, the black region size, the white region size and the cropped remaining region; the specific value of d only needs to ensure that the cropped remaining region is smaller than the target region in P_0, and can be adjusted according to the actual target size or according to experience.
In step 32, the initial gray image is rotated by each angle value of the arithmetic sequence A; rotation may push some pixels outside the d×d extent, and only the pixels within d×d are retained. In addition, black borders may appear after rotation, so a quarter is cropped from each edge of the image and the region [d/4 : d×3/4, d/4 : d×3/4] is kept as the target gray image template.
In step 33, the position of the target gray image P_1, i.e., the region [i_1 : i_1 + d/2, j_1 : j_1 + d/2] in the pixel coordinate system, is obtained; the coordinates of all pixels of P_1 are therefore known, hence all of the corresponding horizontal and vertical angle intervals are known, and finally the target area in the point cloud ring is obtained. In this step 33, the template matching algorithm includes, but is not limited to, the normalized correlation coefficient matching method, the normalized correlation matching method and the normalized squared difference matching method.
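Steps 31-33 map naturally onto OpenCV; the sketch below uses cv2.getRotationMatrix2D/cv2.warpAffine for the rotated templates and cv2.matchTemplate with cv2.TM_CCOEFF_NORMED for the normalized correlation coefficient matching (d = 20 and angle = 4° follow the application example; everything else is an illustrative assumption):

```python
import numpy as np
import cv2

def build_template_set(d=20, angle=4):
    """Steps 31-32: 2x2 black/white target image, rotated copies, central crop."""
    base = np.zeros((d, d), dtype=np.uint8)    # black by default
    base[0:d // 2, d // 2:d] = 255             # white upper-right square
    base[d // 2:d, 0:d // 2] = 255             # white lower-left square
    crop = slice(d // 4, 3 * d // 4)           # keep [d/4 : 3d/4, d/4 : 3d/4]
    templates = [base[crop, crop]]
    center = ((d - 1) / 2.0, (d - 1) / 2.0)
    for a in range(angle, 181, angle):         # arithmetic sequence A
        m = cv2.getRotationMatrix2D(center, a, 1.0)
        rot = cv2.warpAffine(base, m, (d, d))  # pixels leaving d x d are lost
        templates.append(rot[crop, crop])      # cropping removes border artifacts
    return templates

def find_target(p0, templates):
    """Step 33: best match over the template set G with TM_CCOEFF_NORMED."""
    best_val, best_loc, best_shape = -1.0, (0, 0), templates[0].shape
    for t in templates:
        res = cv2.matchTemplate(p0, t, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val > best_val:
            best_val, best_loc, best_shape = max_val, max_loc, t.shape
    i1, j1 = best_loc                          # (column, row) of the best match
    th, tw = best_shape
    return p0[j1:j1 + th, i1:i1 + tw], (i1, j1)  # target gray image P_1
```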
Step 4: a second segmentation precision P_l2 is defined, and the horizontal and vertical angles of the target area are segmented according to P_l2, so that the target area is divided into a number of angle areas smaller than those of step 2; the target area is then mapped into a target gray image P_2 of higher precision than P_1 of step 3. Each pixel of P_2 corresponds one-to-one to an angle area of the target area, and the gray value of each pixel is computed from the RGB values of the points in the corresponding angle area.
Step 4 differs from step 2 in two respects: step 2 segments the point cloud ring, whereas step 4 segments the target area within the ring; and the two-dimensional imaging precision of step 2 is relatively low, because identifying the target area does not demand high precision and a coarse image improves processing efficiency, while the imaging precision of step 4 is higher, because the image is used to extract the target center point, and higher precision improves the accuracy of feature point detection, hence of the target center coordinates, and ultimately of the point cloud orientation.
Specifically, step 4 includes:
Step 41: the value range [V_k1, V_k2] of the horizontal angle and the value range [H_k1, H_k2] of the vertical angle of the target area in the spherical coordinate system are obtained; the number of points in the target area is counted as Sum, the square root of Sum is taken, rounded down if it is not an integer, and the result is assigned to d_1;
Step 42: a second segmentation precision P_l2 = (H_k2 − H_k1)/d_1 is defined, and the horizontal and vertical angles of the target area are segmented according to P_l2, giving the horizontal-angle segmentation sequence [V_k1 + 0×P_l2, V_k1 + 1×P_l2, …, V_k1 + i×P_l2, …, V_k2] and the vertical-angle segmentation sequence [H_k1 + 0×P_l2, H_k1 + 1×P_l2, …, H_k1 + j×P_l2, …, H_k2], so that the target area is divided into a plurality of angle areas, each bounded by one horizontal angle interval and one vertical angle interval;
Step 43: the target area is mapped into a target gray image P_2; each angle area of the target area is mapped to one pixel of P_2, where the pixel coordinates (i, j) correspond to the horizontal angle interval [V_k1 + i×P_l2, V_k1 + (i+1)×P_l2] and the vertical angle interval [H_k1 + j×P_l2, H_k1 + (j+1)×P_l2], and the integers i and j both range over [0, d_1 − 1]. The RGB values of the points in the angle area corresponding to each pixel coordinate are extracted, the RGB values of each point are converted into a gray value by the averaging method, and the median of these gray values is taken as the gray value of the corresponding pixel; when an angle area contains no points, the gray value of the corresponding pixel is defined as 0.
In steps 41 to 43, the ratio of the total number of points to the total number of angle areas is about 1:1, i.e., each angle area can be regarded approximately as one point of the target area, so the two-dimensional imaging is performed at a distance precision of roughly one point; this mainly improves the feature point detection precision and the accuracy of the point cloud orientation.
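The sizing rule of steps 41 and 42 reduces to a few lines (a sketch; math.isqrt returns the floor of the square root directly):

```python
import math

def second_precision(h_k1, h_k2, num_points):
    """Steps 41-42: side length d_1 = floor(sqrt(Sum)), so the target image has
    about one point per pixel, and P_l2 = (H_k2 - H_k1) / d_1."""
    d1 = math.isqrt(num_points)      # floor of the square root of Sum
    p_l2 = (h_k2 - h_k1) / d1
    return d1, p_l2

# Application-example values: second_precision(88.0, 90.2, 2776) -> (52, ~0.0423)
```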
Step 5: the target gray image P_2 is filtered, the pixel coordinates of the target center point in the filtered image are obtained by a feature point extraction algorithm, and the corresponding local coordinates of the target center point in the original point cloud data are extracted from the pixel coordinates. Because the two-dimensional imaging precision in step 5 is high, noise pixels with gray value 0 readily appear, and filtering must be performed first so that feature point extraction is not affected.
Specifically, step 5 includes:
Step 51: in the target gray image P_2 generated in step 4, a pixel whose corresponding angle area contains no points has a gray value of 0 and appears as a noise point in P_2; P_2 is therefore processed with a filtering algorithm that replaces each pixel of gray value 0 with the gray values of the surrounding pixels, yielding a noise-free target gray image;
Step 52: the pixel coordinates of the target center point in the noise-free target gray image are obtained by a feature point extraction algorithm, and the local coordinates of the target center point in the Cartesian coordinate system are extracted through the correspondence between pixel coordinates, spherical coordinates and Cartesian coordinates.
In step 51, the filtering algorithm includes, but is not limited to, median filtering, mean filtering and Gaussian filtering; the gray value of each noise point is replaced with the median or mean of the gray values of all pixels in the filtering window corresponding to that noise point, yielding a noise-free target gray image.
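One way to realize the noise-point replacement of step 51 is to apply OpenCV's median filter only where the gray value is 0, leaving valid pixels untouched (a sketch; the 3×3 window size is an assumed default):

```python
import numpy as np
import cv2

def remove_noise_points(p2, ksize=3):
    """Step 51: replace zero-valued (empty angle area) pixels with the median of
    their neighborhood, leaving all other pixels unchanged."""
    filtered = cv2.medianBlur(p2, ksize)   # median over a ksize x ksize window
    out = p2.copy()
    out[p2 == 0] = filtered[p2 == 0]       # substitute only the noise points
    return out
```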
In step 52, the feature point extraction algorithm includes, but is not limited to, the Harris corner detection algorithm, the SIFT feature detection algorithm and the template matching method; the pixel coordinates of the target center point are determined from the gray gradient variation at the target center or from the correlation coefficient of template matching. For example, with the corner detection neighborhood size blockSize = 3, the Sobel operator aperture ksize = 3 for computing the gradient map, and the corner response function parameter k = 4, the pixel coordinates of the center point are obtained through the Harris corner detection algorithm, and the local coordinates of the target center point in the Cartesian coordinate system are extracted through the correspondence between pixel coordinates, spherical coordinates and Cartesian coordinates.
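A Harris-based sketch of step 52 with cv2.cornerHarris; blockSize = 3 and ksize = 3 follow the text, while k = 0.04 is an assumption (OpenCV's customary 0.04-0.06 range; the text's k = 4 is not used in the code as it may be a translation artifact):

```python
import numpy as np
import cv2

def target_center_pixel(p2_clean, block_size=3, ksize=3, k=0.04):
    """Step 52: locate the target center as the strongest Harris corner response.
    Returns (i, j) pixel coordinates (column, row)."""
    response = cv2.cornerHarris(np.float32(p2_clean), block_size, ksize, k)
    row, col = np.unravel_index(np.argmax(response), response.shape)
    return col, row    # matches the (i, j) = (horizontal, vertical) convention
```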
Step 6: calculating point cloud coordinate conversion parameters according to the local coordinates and the construction coordinates of at least 3 groups of target center points, and converting the original point cloud data from the local coordinate system to the construction coordinate system according to the point cloud coordinate conversion parameters.
In step 6, the point cloud rings corresponding to all planar targets whose layout heights lie within the point cloud scanning range can be extracted, the construction coordinates of the target center points having been obtained in advance; the local coordinates and construction coordinates of at least 3 groups of target center points are selected, the coordinate conversion parameters of the point cloud are calculated, and the original point cloud is then oriented. The 3 selected groups of center point coordinates must correspond to 3 planar targets of different layout heights that are not collinear. The coordinate conversion formula is as follows:
[X', Y', Z', 1]ᵀ = S · [X, Y, Z, 1]ᵀ (5)
where [X, Y, Z, 1] are the homogeneous coordinates in the initial coordinate system and [X', Y', Z', 1] are the homogeneous coordinates in the target coordinate system. S is the transformation matrix, with mathematical expression:
S = | a  b  c  dx |
    | d  e  f  dy |   (6)
    | g  h  i  dz |
    | l  m  n  s  |
where a to i represent the rotation and scaling parameters, dx, dy, dz the translation parameters, l, m, n the projection parameters, and s the overall scaling parameter.
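Formula (6) has more unknowns than 3 point pairs can fix, so a common engineering simplification (not stated in the patent) is a rigid-body transform, rotation R plus translation t, which 3 non-collinear pairs determine; the sketch below solves it with the Kabsch/SVD method under that assumption:

```python
import numpy as np

def solve_rigid_transform(local_pts, constr_pts):
    """Least-squares rigid transform (Kabsch): constr ~ R @ local + t.
    local_pts, constr_pts : (n, 3) arrays of matched target center points, n >= 3."""
    c_local = local_pts.mean(axis=0)
    c_constr = constr_pts.mean(axis=0)
    h = (local_pts - c_local).T @ (constr_pts - c_constr)   # covariance matrix
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:       # guard against a reflection solution
        vt[-1, :] *= -1
        r = vt.T @ u.T
    t = c_constr - r @ c_local
    return r, t

def orient_point_cloud(points_xyz, r, t):
    """Step 6: convert the original point cloud to the construction system."""
    return points_xyz @ r.T + t
```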
Referring to fig. 2-10, an example of point cloud orientation using the method of the present invention is provided.
According to step 11 of the method of the present invention, four planar targets, numbered 1, 2, 3 and 4, are arranged at different heights in the point cloud scanning area, with at least three of them non-collinear. The vertical length of each target is h_b = 0.297 m, the target layout heights h are [0.2 m, 0.6 m, 1.0 m, 1.4 m], the erection height of the three-dimensional laser scanner is h_y = 1.5 m, and Δh is taken as 0.1 m. The point cloud ring corresponding to each target is extracted; for example, the point cloud ring corresponding to the target at layout height h = 1.4 m is shown in fig. 2;
According to step 12 of the method, the Cartesian coordinates of all points in the point cloud ring are converted into spherical coordinates through the coordinate conversion formulas; the value ranges of the horizontal angle and the vertical angle of the point cloud ring in the spherical coordinate system are [−180°, 180°] and [82°, 99°] respectively.
According to step 21 of the method of the present invention, a first segmentation precision P_l1 = 0.2° is defined; the segmentation sequences of the horizontal angle and the vertical angle are [−180°, −179.8°, …, 179.8°, 180°] and [82°, 82.2°, …, 98.8°, 99°] respectively. Taking the angle area formed by one horizontal angle interval and one vertical angle interval as one pixel of the two-dimensional image, the pixel coordinates (i, j) correspond to the horizontal angle interval [−180° + i×0.2°, −180° + (i+1)×0.2°] and the vertical angle interval [82° + j×0.2°, 82° + (j+1)×0.2°], where i and j range over [0, 1799] and [0, 84] respectively;
According to step 22 of the method of the present invention, the RGB values of the points in each angle area of the point cloud ring are converted into the gray values of the corresponding pixels, and the resulting point cloud gray image P_0 is shown in fig. 3.
According to step 31 of the method of the present invention, the height and width of the initial gray image are defined as d = 20, and the generated initial gray image is shown in fig. 4;
According to step 32 of the method of the present invention, a rotation angle of 4° is defined, and the arithmetic sequence A = [4°, 8°, …, 176°, 180°] with common difference 4° is generated between 0° and 180°; the initial gray image is rotated by each angle value of A to obtain another 25 initial gray images; the edges of each of the 26 initial gray images are cropped, leaving the [5:15, 5:15] region as a target gray image template, and the 26 resulting templates form the template set G. For example, the target gray image template obtained by cropping after a 24° rotation is shown in fig. 5;
According to step 33 of the method of the present invention, the normalized correlation coefficient matching method is adopted to find the pixel region where the target gray image P_1 is located within the point cloud gray image P_0, as shown in fig. 6; the target area in the point cloud ring is obtained from all pixel coordinates in that region, and the point cloud of the target area is shown in fig. 7.
According to step 41 of the method of the present invention, the value ranges of the horizontal angle and the vertical angle of the target area are [−64.2°, −62.0°] and [88.0°, 90.2°]; the number of points in the target area is counted as Sum = 2776, and since the width and height of the target area are the same, the square root of Sum is taken and, the value not being an integer, is rounded down to give d_1 = 52;
According to step 42 of the method of the present invention, the second segmentation precision is defined as P_l2 = 0.04°, i.e., P_l2 = (90.2° − 88.0°)/52; through steps 42 and 43, the higher-precision target gray image P_2 is generated, as shown in fig. 8.
According to step 51 of the method of the present invention, since the target gray image P_2 shown in fig. 8 contains many noise points, P_2 is filtered to obtain a noise-free target gray image, as shown in fig. 9;
The result obtained by the feature point extraction algorithm according to step 52 of the method of the present invention is shown in fig. 10.
According to step 6 of the method of the invention, since the local coordinates and construction coordinate data of the center points of 4 groups of targets are available, and at least 3 groups of these coordinate data correspond to 3 planar targets of different layout heights that are not collinear, the coordinate conversion parameters of the point cloud are calculated and the original point cloud is then oriented.
As can be seen from the method and the related examples according to the embodiments of the present invention, the method has the following advantages:
1) By designating the Z coordinate ranges of the point cloud rings, the method extracts the point cloud ring corresponding to each target layout height separately, removing point cloud data irrelevant to the orientation process; meanwhile, the target center extraction procedure is identical for point cloud rings at different heights, so the computer can run them in parallel across multiple processes, improving point cloud processing efficiency.
2) By numbering the targets arranged at different heights, the method distinguishes all planar targets in the original point cloud, establishes a unique mapping between the target center coordinates in the point cloud local coordinate system and those in the construction coordinate system, automatically solves the coordinate conversion parameters from this mapping, and then converts the original point cloud from the local coordinate system to the construction coordinate system.
3) The method fully utilizes the coordinate information and RGB information contained in the point cloud data, maps the three-dimensional point cloud data into the two-dimensional gray level image, and can reserve the geometric characteristics of the plane target in the two-dimensional gray level image, so that the target area can be automatically identified and the target center point can be extracted by utilizing a mature computer vision algorithm, and the automatic processing of the point cloud orientation can be rapidly and accurately realized.
4) By defining two segmentation precisions, the method obtains the target center point coordinates while balancing precision and efficiency. The first segmentation precision maps the three-dimensional point cloud ring into a relatively low-precision two-dimensional gray image; this image only needs to preserve the geometric features of the planar target so that the target gray image can be found and the target area identified in the point cloud ring, and the low-precision imaging therefore improves processing efficiency while still guaranteeing target area identification. The second segmentation precision realizes higher-precision two-dimensional imaging, mapping the target area into a higher-precision target gray image, which improves the detection precision of the target center point and ultimately the precision of the point cloud orientation.
The embodiment of the invention also provides electronic equipment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the automatic point cloud orientation method is realized when the processor executes the computer program.
According to the automatic point cloud orientation method adopted by the electronic equipment, coordinate information and RGB information contained in point cloud data can be fully utilized, three-dimensional point cloud data are mapped into a two-dimensional gray scale image, and geometric characteristics of a plane target can be reserved in the two-dimensional gray scale image, so that a target area can be automatically identified and a target center point can be extracted by using a mature computer vision algorithm, and automatic processing of point cloud orientation can be rapidly and accurately realized.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the automated point cloud orientation method of the embodiment of the invention.
The automatic point cloud orientation method adopted by the computer readable storage medium can fully utilize coordinate information and RGB information contained in point cloud data, map three-dimensional point cloud data into a two-dimensional gray scale image, and retain geometric characteristics of a planar target in the two-dimensional gray scale image, so that a target area can be automatically identified and a target center point can be extracted by utilizing a mature computer vision algorithm, and automatic point cloud orientation processing can be rapidly and accurately realized.
The above embodiments are only preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, but any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention are intended to be within the scope of the present invention as claimed.

Claims (10)

1. An automatic point cloud orientation method based on computer vision is characterized by comprising the following steps:
Step 1: extracting a point cloud ring covering the target layout range from the original point cloud data, and converting the point cloud ring from a Cartesian coordinate system to a spherical coordinate system;
Step 2: defining a first segmentation precision, segmenting a horizontal angle and a vertical angle of a point cloud ring according to the first segmentation precision, dividing the point cloud ring into a plurality of angle areas, mapping the point cloud ring into a two-dimensional point cloud gray image, wherein each pixel of the point cloud gray image corresponds to each angle area of the point cloud ring one by one, and the gray value of each pixel is obtained by converting and calculating the RGB value of the point cloud in the corresponding angle area;
Step 3: generating a target gray image template, searching a target gray image from the point cloud gray image based on a template matching algorithm, and identifying a corresponding target region in the point cloud ring according to the position of the target gray image;
Step 4: defining a second division precision, dividing a horizontal angle and a vertical angle of a target area according to the second division precision, dividing the target area into a plurality of smaller angle areas, and mapping the target area into a target gray image with higher precision, wherein each pixel of the target gray image corresponds to each angle area of the target area one by one, and gray values of each pixel are obtained by converting and calculating RGB values of point clouds in the corresponding angle areas;
Step 5: filtering the target gray level image, acquiring pixel coordinates of a target center point in the target gray level image through a feature point extraction algorithm, and extracting local coordinates corresponding to the target center point in original point cloud data according to the pixel coordinates;
Step 6: calculating point cloud coordinate conversion parameters according to the local coordinates and the construction coordinates of at least 3 groups of target center points, and converting the original point cloud data from the local coordinate system to the construction coordinate system according to the point cloud coordinate conversion parameters.
2. The automated point cloud orientation method of claim 1, wherein step 1 comprises:
Step 11: at least 3 non-collinear planar targets with different heights are arranged in the point cloud scanning area corresponding to the original point cloud data; each planar target consists of four 90° sectors, each sector is black or white, and adjacent sectors differ in color, so that the target center point can be identified through the gray difference between adjacent sectors. Point cloud rings whose Z coordinates lie in the range [h − h_y − Δh, h − h_y + h_b + Δh] are extracted from the original point cloud data by pass-through filtering, where the local coordinate system of the original point cloud data takes the instrument center as its origin, h is the target layout height taken from [h_1, h_2, h_3, …, h_n], n is the number of targets, h_y is the erection height of the three-dimensional laser scanner, h_b is the vertical length of the target, and Δh is a height adjustment parameter taken as 0.1-0.5 m to ensure that the extracted point cloud ring covers the target layout range;
Step 12: the Cartesian coordinates of all points in the point cloud ring are converted into the corresponding spherical coordinates, giving the value range [V_min, V_max] of the horizontal angle and the value range [H_min, H_max] of the vertical angle of the point cloud ring in the spherical coordinate system.
3. The automated point cloud orientation method of claim 2, wherein step 2 comprises:
Step 21: a first segmentation precision P_l1 is defined, and the horizontal and vertical angles of the point cloud ring are segmented according to P_l1, giving the horizontal-angle segmentation sequence [V_min + 0×P_l1, V_min + 1×P_l1, …, V_min + i×P_l1, …, V_max] and the vertical-angle segmentation sequence [H_min + 0×P_l1, H_min + 1×P_l1, …, H_min + j×P_l1, …, H_max], so that the point cloud ring is divided into a plurality of angle areas, each bounded by one horizontal angle interval and one vertical angle interval;
Step 22: the point cloud ring is mapped into a point cloud gray image P_0; each angle area of the point cloud ring is mapped to one pixel of P_0, where the pixel coordinates (i, j) correspond to the horizontal angle interval [V_min + i×P_l1, V_min + (i+1)×P_l1] and the vertical angle interval [H_min + j×P_l1, H_min + (j+1)×P_l1], and the integers i and j range over [0, (V_max − V_min)/P_l1 − 1] and [0, (H_max − H_min)/P_l1 − 1] respectively. The RGB values of the points in the angle area corresponding to each pixel coordinate are extracted, the RGB values of each point are converted into a gray value by the averaging method, and the median of these gray values is taken as the gray value of the corresponding pixel; when an angle area contains no points, the gray value of the corresponding pixel is defined as 0.
4. The automated point cloud orientation method of claim 3, wherein said step 3 comprises:
Step 31: an initial gray image with height and width d is generated, and a pixel coordinate system is defined with its origin at the upper-left corner of the image; the two regions [0:d/2, 0:d/2] and [d/2:d, d/2:d] of the pixel coordinate system are black and the two regions [0:d/2, d/2:d] and [d/2:d, 0:d/2] are white, forming four alternating black and white squares;
Step 32: a rotation angle, denoted angle and taken between 2° and 6°, is defined, and the arithmetic sequence A = [angle, …, angle×(m−1), 180°] with common difference angle is generated between 0° and 180°; the initial gray image is rotated by each angle value of A to obtain m additional initial gray images, and the edges of each initial gray image are cropped so that only the region [d/4 : d×3/4, d/4 : d×3/4] remains, thereby generating m+1 target gray image templates that form the template set G;
Step 33: the correlation coefficients between every target gray image template in the template set G and the different pixel regions of the point cloud gray image P_0 are computed by a template matching algorithm, the best-matching pixel point (i_1, j_1) is extracted, the image of the region [i_1 : i_1 + d/2, j_1 : j_1 + d/2] in the pixel coordinate system is taken as the target gray image P_1, and the corresponding target area is identified in the point cloud ring.
5. The automated point cloud orientation method of claim 4, wherein in step 33, the template matching algorithm employs a normalized correlation coefficient matching method, a normalized correlation matching method, or a normalized squared difference matching method.
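A sketch of the matching in step 33 using OpenCV's TM_CCOEFF_NORMED, i.e. the normalized correlation coefficient method named in claim 5; find_target is a hypothetical name, and P0 is assumed to be a uint8 image produced as in step 22.

```python
import cv2

def find_target(P0, G):
    """Step 33: score every template in G against P0 with the
    normalized correlation coefficient and return the best match."""
    best_score, best_loc, side = -1.0, (0, 0), G[0].shape[0]
    for tpl in G:
        res = cv2.matchTemplate(P0, tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > best_score:
            best_score, best_loc = score, loc
    i1, j1 = best_loc                       # (column, row) of top-left corner
    P1 = P0[j1:j1 + side, i1:i1 + side]     # region [i1:i1+d/2, j1:j1+d/2]
    return (i1, j1), P1
```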
6. The automated point cloud orientation method of claim 4, wherein said step 4 comprises:
Step 41: the value range [Vk1, Vk2] of the horizontal angle and the value range [Hk1, Hk2] of the vertical angle of the target area in the spherical coordinate system are obtained; the number of points in the target area is counted as Sum, the square root of Sum is taken and rounded down if it is not an integer, and the result is assigned to d1;
Step 42: a second segmentation precision Pl2 = (Hk2 − Hk1)/d1 is defined, and the horizontal angle and the vertical angle of the target area are partitioned at the precision Pl2, giving the horizontal-angle partition [Vk1 + 0×Pl2, Vk1 + 1×Pl2, …, Vk1 + i×Pl2, …, Vk2] and the vertical-angle partition [Hk1 + 0×Pl2, Hk1 + 1×Pl2, …, Hk1 + j×Pl2, …, Hk2], so that the target area is divided into a number of angle regions, each bounded by one horizontal-angle interval and one vertical-angle interval;
Step 43: the target area is mapped to a target gray image P2; each angle region of the target area is mapped to one pixel of P2, where pixel coordinates (i, j) correspond to the horizontal-angle interval [Vk1 + i×Pl2, Vk1 + (i+1)×Pl2] and the vertical-angle interval [Hk1 + j×Pl2, Hk1 + (j+1)×Pl2], with the integers i and j each ranging over [0, d1 − 1]; for each pixel, the RGB values of the points in the corresponding angle region are extracted, the RGB values of each point are converted to a gray value by the mean-value method, and the median of these gray values is taken as the gray value of the pixel; when an angle region contains no points, the gray value of the corresponding pixel is defined as 0.
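Under the same assumptions, the finer second pass of claim 6 can reuse the hypothetical ring_to_gray helper from the step 22 sketch; V_t, H_t, rgb_t stand for the points inside the identified target area.

```python
import numpy as np

def target_to_gray(V_t, H_t, rgb_t, Hk1, Hk2):
    """Steps 41-43: d1 from the point count, Pl2 from the vertical-angle
    extent, then rasterize with the same routine as step 22
    (ring_to_gray from the earlier sketch)."""
    Sum = V_t.size                       # step 41: point count in the target area
    d1 = int(np.floor(np.sqrt(Sum)))     # floor of the square root
    Pl2 = (Hk2 - Hk1) / d1               # step 42: second segmentation precision
    return ring_to_gray(V_t, H_t, rgb_t, Pl2)
```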
7. The automated point cloud orientation method of claim 6, wherein said step 5 comprises:
Step 51: in the target gray image P2 generated in step 4, a pixel whose corresponding angle region contains no points has a gray value of 0 and appears as a noise point in P2; P2 is therefore processed with a filtering algorithm that replaces each zero-valued pixel with the gray values of its surrounding pixels, giving a noise-free target gray image;
Step 52: the pixel coordinates of the target center point are obtained from the noise-free target gray image by a feature point extraction algorithm, and the local Cartesian coordinates of the target center point are recovered through the correspondence between pixel coordinates, spherical coordinates, and Cartesian coordinates.
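One way to realize the pixel-to-Cartesian correspondence of step 52, consistent with the spherical convention assumed in the earlier sketches; pixel_to_cartesian, the cell-centre angles, and the choice of range value are illustrative assumptions, not the claim's prescription.

```python
import numpy as np

def pixel_to_cartesian(i, j, Vk1, Hk1, Pl2, r):
    """Invert the raster mapping: pixel (i, j) -> the angles at its
    cell centre -> a Cartesian point at range r (e.g. the median
    range of the points that fell into that cell)."""
    V = Vk1 + (i + 0.5) * Pl2   # horizontal angle of the cell centre
    H = Hk1 + (j + 0.5) * Pl2   # vertical angle of the cell centre
    x = r * np.cos(H) * np.cos(V)
    y = r * np.cos(H) * np.sin(V)
    z = r * np.sin(H)
    return np.array([x, y, z])
```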
8. The automated point cloud orientation method of claim 7, wherein in step 51 the filtering algorithm uses median filtering, mean filtering, or Gaussian filtering, replacing the gray value of each noise point with the median or mean gray value of all pixels in the filtering window around that noise point, to obtain the noise-free target gray image;
In step 52, the feature point extraction algorithm uses the Harris corner detection algorithm, SIFT feature point detection, or template matching, determining the pixel coordinates of the target center point from the gray-gradient variation at the target center or from the correlation coefficient of the template match.
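A sketch of one combination allowed by claim 8 (median filtering plus Harris corners); denoise_and_centre is a hypothetical name, and plain medianBlur is a simplification: it filters every pixel, whereas the claim replaces only the empty (gray 0) cells, so a masked variant would track the claim more strictly.

```python
import cv2
import numpy as np

def denoise_and_centre(P2):
    """Median-filter P2 (step 51), then take the strongest Harris
    corner response as the target centre pixel (step 52)."""
    clean = cv2.medianBlur(P2, 3)
    resp = cv2.cornerHarris(np.float32(clean), blockSize=5, ksize=3, k=0.04)
    r, c = np.unravel_index(np.argmax(resp), resp.shape)
    return clean, (c, r)   # (i, j) = (column, row) pixel coordinates
```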
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the automated point cloud orientation method of any one of claims 1-8.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the automated point cloud orientation method of any one of claims 1-8.
CN202410003521.6A 2024-01-02 2024-01-02 Automatic point cloud orientation method, equipment and storage medium based on computer vision Active CN117495969B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410003521.6A CN117495969B (en) 2024-01-02 2024-01-02 Automatic point cloud orientation method, equipment and storage medium based on computer vision


Publications (2)

Publication Number Publication Date
CN117495969A (en) 2024-02-02
CN117495969B (en) 2024-04-16

Family ID: 89680465

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11099275B1 * 2020-04-29 2021-08-24 Tsinghua University LiDAR point cloud reflection intensity complementation method and system
CN114862788A * 2022-04-29 2022-08-05 湖南联智科技股份有限公司 Automatic identification method for plane target coordinates of three-dimensional laser scanning
CN117095038A * 2023-08-24 2023-11-21 上海盎维信息技术有限公司 Point cloud filtering method and system for laser scanner


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant