CN117495969A - Automatic point cloud orientation method, equipment and storage medium based on computer vision

Info

Publication number: CN117495969A
Application number: CN202410003521.6A
Granted publication: CN117495969B (Chinese, zh)
Prior art keywords: point cloud, target, angle, gray, pixel
Inventors: 廖李灿, 应宗权, 毛凤山, 吕述晖, 李金祥, 刘介山
Assignee: CCCC Fourth Harbor Engineering Institute Co Ltd
Legal status: Granted; Active

Classifications

    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/11 Region-based segmentation
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T7/90 Determination of colour characteristics
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention discloses an automated point cloud orientation method based on computer vision, an electronic device and a computer-readable storage medium. The method comprises the following steps: extracting a point cloud ring covering the target layout range from the original point cloud data, and converting the point cloud ring from a Cartesian coordinate system to a spherical coordinate system; mapping the point cloud ring into a two-dimensional point cloud gray image; searching the point cloud gray image for the target gray image based on a template matching algorithm, and identifying the corresponding target area in the point cloud ring; defining a second segmentation precision, and mapping the target area into a higher-precision target gray image; filtering the target gray image, and obtaining the target center point from it through a feature point extraction algorithm; and calculating point cloud coordinate conversion parameters, and converting the original point cloud data from the local coordinate system to the construction coordinate system. The method can automatically identify planar target areas and extract target center points, realizing fast and accurate automated processing of the point cloud data orientation process.

Description

Automatic point cloud orientation method, equipment and storage medium based on computer vision
Technical Field
The present invention relates to the field of engineering measurement technologies, and in particular, to an automated point cloud orientation method based on computer vision, an electronic device, and a computer readable storage medium.
Background
With the development of society and advances in process technology, the construction of super high-rise, special-shaped and large-scale structures at home and abroad keeps accelerating. Because such structures are highly sensitive to construction deviations and measurement schedules are tight, the demands on measurement work keep rising: measurement precision must be met while measurement efficiency is also taken into account. Traditional measurement means can hardly meet these requirements in terms of efficiency, and the emergence of three-dimensional laser scanning technology effectively solves this problem.
The three-dimensional laser scanning technology overcomes the single-point acquisition limitation of traditional measurement and can rapidly and accurately acquire the three-dimensional coordinates, intensity and color information of an object's surface. However, on the one hand, the three-dimensional coordinates acquired by a three-dimensional scanner are based on the instrument's local coordinate system, while construction measurement adopts the construction coordinate system designated by the project; the point cloud in the local coordinate system therefore needs to be converted into the construction coordinate system, i.e. point cloud orientation, during engineering measurement analysis.
The current common method realizes point cloud orientation by arranging several groups of spherical or planar targets. Spherical targets can be identified automatically, but the whole scanning area must be traversed to check whether spherical point clouds exist, and at most half a sphere can be captured in one scan, so the sphere fitting is easily disturbed by other objects in the area; planar targets require the target area to be found manually before the center point is extracted. On the other hand, while three-dimensional laser scanning yields more comprehensive and accurate information, the volume of data to be processed also grows. The huge data volume and manual target identification together greatly reduce the processing efficiency of point cloud orientation and, like traditional measurement methods, impair the efficiency of real-time feedback to field measurement.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention aims to provide an automatic point cloud orientation method based on computer vision, electronic equipment and a computer readable storage medium, which can automatically identify a plane target area and extract a target center point, and quickly and accurately realize the automatic processing of a point cloud data orientation process.
The automatic point cloud orientation method based on computer vision is realized by adopting the following technical scheme:
an automatic point cloud orientation method based on computer vision comprises the following steps:
step 1: extracting a point cloud ring covering the target layout range from the original point cloud data, and converting the point cloud ring from a Cartesian coordinate system to a spherical coordinate system;
step 2: defining a first segmentation precision, segmenting a horizontal angle and a vertical angle of a point cloud ring according to the first segmentation precision, dividing the point cloud ring into a plurality of angle areas, mapping the point cloud ring into a two-dimensional point cloud gray image, wherein each pixel of the point cloud gray image corresponds to each angle area of the point cloud ring one by one, and the gray value of each pixel is obtained by converting and calculating the RGB value of the point cloud in the corresponding angle area;
step 3: generating a target gray image template, searching a target gray image from the point cloud gray image based on a template matching algorithm, and identifying a corresponding target region in the point cloud ring according to the position of the target gray image;
step 4: defining a second division precision, dividing a horizontal angle and a vertical angle of a target area according to the second division precision, dividing the target area into a plurality of smaller angle areas, and mapping the target area into a target gray image with higher precision, wherein each pixel of the target gray image corresponds to each angle area of the target area one by one, and gray values of each pixel are obtained by converting and calculating RGB values of point clouds in the corresponding angle areas;
step 5: filtering the target gray level image, acquiring pixel coordinates of a target center point in the target gray level image through a feature point extraction algorithm, and extracting local coordinates corresponding to the target center point in original point cloud data according to the pixel coordinates;
step 6: and calculating point cloud coordinate conversion parameters according to the local coordinates and the construction coordinates of at least 3 groups of target center points, and converting the original point cloud data from the local coordinate system to the construction coordinate system according to the point cloud coordinate conversion parameters.
Further, the step 1 includes:
step 11: at least 3 non-collinear planar targets at different heights are arranged in the point cloud scanning area corresponding to the original point cloud data; each planar target consists of four 90° sectors, each sector is black or white, and adjacent sectors differ in color, so that the target center point can be identified through the gray difference between adjacent sectors; pass-through filtering is used to extract, for each target, the points whose Z coordinate in the original point cloud data lies in the range [h − h_y − Δh, h − h_y + h_b + Δh], where the local coordinate system of the original point cloud data takes the instrument center as its origin, h is the target layout height in [h_1, h_2, h_3, …, h_n], n is the number of targets, h_y is the erection height of the three-dimensional laser scanner, h_b is the vertical length of the target itself, and Δh is a height adjustment parameter taken as 0.1-0.5, ensuring that the extracted point cloud ring covers the target layout range;
step 12: the Cartesian coordinates of all points in the point cloud ring are converted into the corresponding spherical coordinates, giving the value range [V_min, V_max] of the horizontal angle and the value range [H_min, H_max] of the vertical angle of the point cloud ring in the spherical coordinate system.
Further, the step 2 includes:
step 21: define the first segmentation precision P_l1; segment the horizontal and vertical angles of the point cloud ring according to P_l1 to obtain the horizontal-angle division [V_min + 0×P_l1, V_min + 1×P_l1, …, V_min + i×P_l1, …, V_max] and the vertical-angle division [H_min + 0×P_l1, H_min + 1×P_l1, …, H_min + j×P_l1, …, H_max], dividing the point cloud ring into a number of angle areas, each defined by one horizontal angle interval and one vertical angle interval;
step 22: map the point cloud ring into the point cloud gray image P_0, where each angle area of the point cloud ring maps to one pixel of P_0; the pixel coordinates (i, j) correspond to the horizontal angle interval [V_min + i×P_l1, V_min + (i+1)×P_l1] and the vertical angle interval [H_min + j×P_l1, H_min + (j+1)×P_l1], and the integers i and j range over [0, ((V_max − V_min)/P_l1) − 1] and [0, ((H_max − H_min)/P_l1) − 1] respectively; the RGB values of the points in the angle area corresponding to each pixel are extracted, the RGB value of each point is converted into a gray value by the mean method, the median of these gray values is taken as the gray value of the pixel, and when an angle area contains no points the gray value of the corresponding pixel is defined as 0.
Further, the step 3 includes:
step 31: generate an initial gray image of height and width d, and define the pixel coordinate system with its origin at the upper-left corner of the image; the two regions [0 : d/2, 0 : d/2] and [d/2 : d, d/2 : d] in the pixel coordinate system are black and the two regions [0 : d/2, d/2 : d] and [d/2 : d, 0 : d/2] are white, forming four alternating black and white squares;
step 32: define a rotation Angle of 2°-6°, and generate over 0°-180° an arithmetic sequence A = [Angle, …, Angle×(m−1), 180°] with common difference Angle; rotate the initial gray image by the angle values of A to obtain m additional initial gray images, and crop the edges of every initial gray image, keeping the [d/4 : d×3/4, d/4 : d×3/4] region, thereby generating m+1 target gray image templates that form the template set G;
step 33: compute, by the template matching algorithm, the correlation coefficients between all target gray image templates in the template set G and the different pixel regions of the point cloud gray image P_0; extract the best-matching pixel point (i_1, j_1), take the image of the region [i_1 : i_1 + d/2, j_1 : j_1 + d/2] in the pixel coordinate system as the target gray image P_1, and identify the corresponding target area in the point cloud ring.
Further, in step 33, the template matching algorithm adopts a normalized correlation coefficient matching method, a normalized correlation matching method or a normalized square difference matching method.
Further, the step 4 includes:
step 41: obtain the value range [V_k1, V_k2] of the horizontal angle and the value range [H_k1, H_k2] of the vertical angle of the target area in the spherical coordinate system; count the number of points in the target area as Sum, take the square root of Sum, round down if the result is not an integer, and assign it to d_1;
Step 42: define the second segmentation precision P_l2, P_l2 = (H_k2 − H_k1)/d_1; segment the horizontal and vertical angles of the target area according to P_l2 to obtain the horizontal-angle division [V_k1 + 0×P_l2, V_k1 + 1×P_l2, …, V_k1 + i×P_l2, …, V_k2] and the vertical-angle division [H_k1 + 0×P_l2, H_k1 + 1×P_l2, …, H_k1 + j×P_l2, …, H_k2], dividing the target area into a number of angle areas, each defined by one horizontal angle interval and one vertical angle interval;
step 43: map the target area into the target gray image P_2, where each angle area of the target area maps to one pixel of P_2; the pixel coordinates (i, j) correspond to the horizontal angle interval [V_k1 + i×P_l2, V_k1 + (i+1)×P_l2] and the vertical angle interval [H_k1 + j×P_l2, H_k1 + (j+1)×P_l2], and the integers i and j each range over [0, d_1 − 1]; the RGB values of the points in the angle area corresponding to each pixel are extracted, the RGB value of each point is converted into a gray value by the mean method, the median of these gray values is taken as the gray value of the pixel, and when an angle area contains no points the gray value of the corresponding pixel is defined as 0.
Further, the step 5 includes:
step 51: in the target gray image P_2 generated in step 4, a pixel whose corresponding angle area contains no points has a gray value of 0 and appears in P_2 as a noise point; a filtering algorithm is applied to P_2, replacing the pixels with gray value 0 by the gray values of the surrounding pixels, to obtain a noise-free target gray image;
step 52: and acquiring the pixel coordinates of the target center point in the target gray level image without noise points through a feature point extraction algorithm, and extracting the local coordinates of the target center point under a Cartesian coordinate system by combining the corresponding relation of the pixel coordinates, the spherical coordinates and the Cartesian coordinates.
Further, in step 51, the filtering algorithm adopts median filtering, mean filtering or Gaussian filtering, replacing the gray value of each noise point with the median or mean of the gray values of all pixels in the filtering window around it, to obtain a noise-free target gray image;
in step 52, the feature point extraction algorithm adopts Harris corner detection algorithm, SIFT corner detection algorithm or template matching method, and determines the pixel coordinates of the target center point through the gray gradient change feature of the target center point or the correlation coefficient value of template matching.
The electronic equipment is realized by adopting the following technical scheme:
an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the above-described automated point cloud orientation method when executing the computer program.
The computer readable storage medium of the present invention is realized by the following technical scheme:
a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described automated point cloud orientation method.
Compared with the prior art, the invention has the beneficial effects that:
the invention fully utilizes the coordinate information and RGB information contained in the point cloud data to map the three-dimensional point cloud data into the two-dimensional gray level image, and the geometric characteristics of the plane target can be reserved in the two-dimensional gray level image, so that the target area can be automatically identified and the target center point can be extracted by utilizing a mature computer vision algorithm, and the automatic processing of the point cloud orientation can be rapidly and accurately realized.
Drawings
FIG. 1 is a flow chart of an embodiment of the computer vision based automated point cloud orientation method of the present invention;
FIG. 2 is a point cloud ring from an application example of the method of the present invention;
FIG. 3 is a point cloud gray image from an application example of the method of the present invention;
FIG. 4 is the unrotated initial gray image from an application example of the method of the present invention;
FIG. 5 is the target gray image template rotated by 24° and cropped, from an application example of the method of the present invention;
FIG. 6 is a schematic diagram of finding the target gray image P_1 in an application example of the method of the present invention;
FIG. 7 is the point cloud within the target area from an application example of the method of the present invention;
FIG. 8 is the target gray image P_2 from an application example of the method of the present invention;
FIG. 9 is the target gray image P_2 after filtering, from an application example of the method of the present invention;
FIG. 10 is a schematic diagram of the target center point identification result from an application example of the method of the present invention.
Detailed Description
The present invention will be further described below with reference to the accompanying drawings and the detailed description; it should be understood that, provided there is no conflict, the following embodiments or technical features may be combined arbitrarily to form new embodiments.
Referring to fig. 1, an embodiment of the present invention provides an automated point cloud orientation method based on computer vision. The method comprises the following steps:
step 1: and extracting a point cloud ring covering the target layout range from the original point cloud data, and converting the point cloud ring from a Cartesian coordinate system to a spherical coordinate system.
The purpose of step 1 is to reduce the amount of point cloud data to be processed in the subsequent two-dimensional imaging and computer vision recognition, thereby improving processing efficiency; per step 1, the method only requires the point cloud ring to cover the layout range of the planar targets rather than the full original point cloud data, so a large number of irrelevant points can be removed.
Specifically, step 1 includes:
step 11: at least 3 non-collinear planar targets at different heights are arranged in the point cloud scanning area corresponding to the original point cloud data; each planar target consists of four 90° sectors, each sector is black or white, and adjacent sectors differ in color, so that the target center point can be identified through the gray difference between adjacent sectors; pass-through filtering is used to extract, for each target, the points whose Z coordinate in the original point cloud data lies in the range [h − h_y − Δh, h − h_y + h_b + Δh], where the local coordinate system of the original point cloud data takes the instrument center as its origin, h is the target layout height in [h_1, h_2, h_3, …, h_n], n is the number of targets, h_y is the erection height of the three-dimensional laser scanner, h_b is the vertical length of the target itself, and Δh is a height adjustment parameter taken as 0.1-0.5, ensuring that the extracted point cloud ring covers the target layout range;
step 12: the Cartesian coordinates of all points in the point cloud ring are converted into the corresponding spherical coordinates, giving the value range [V_min, V_max] of the horizontal angle and the value range [H_min, H_max] of the vertical angle of the point cloud ring in the spherical coordinate system.
In step 11, n, h, h_y, Δh and h_b are known or preset data; [h_1, h_2, h_3, …, h_n] means that the targets at different heights are numbered 1, 2, 3, …, n in advance. The pass-through filtering of step 11 ensures that each extracted point cloud ring contains the complete target point cloud while reducing the amount of point cloud data processed later, improving processing efficiency.
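To make the pass-through filtering of step 11 concrete, the following Python sketch extracts one point cloud ring per target; the (N, 6) array layout of x, y, z, R, G, B and the example parameter values in the comments are assumptions for illustration, not specified by the patent.

```python
import numpy as np

def extract_point_cloud_ring(points, h, h_y, h_b, delta_h=0.1):
    """Pass-through filter: keep points whose Z coordinate lies in
    [h - h_y - dh, h - h_y + h_b + dh], per step 11.

    points : (N, 6) array of x, y, z, R, G, B (assumed layout)
    h      : layout height of the target
    h_y    : erection height of the 3D laser scanner
    h_b    : vertical length of the target itself
    """
    z = points[:, 2]
    z_lo = h - h_y - delta_h
    z_hi = h - h_y + h_b + delta_h
    mask = (z >= z_lo) & (z <= z_hi)
    return points[mask]

# One ring per target height, e.g. the heights of the worked example below
# (h_y = 1.5 is a made-up scanner height for illustration):
# heights = [0.2, 0.6, 1.0, 1.4]
# rings = [extract_point_cloud_ring(points, h, h_y=1.5, h_b=0.297) for h in heights]
```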
In step 12, the formulas for converting Cartesian coordinates to spherical coordinates are as follows:

r = √(x² + y² + z²)  (1)

θ = arccos(z/r)  (2)

φ = arctan(y/x)  (3)
wherein x, y and z are the coordinates in the Cartesian coordinate system; r, θ and φ are the coordinates in the spherical coordinate system, θ being the vertical angle and φ the horizontal angle (the quadrant of φ is determined by the signs of x and y).
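A minimal sketch of the step 12 conversion under the standard spherical convention reconstructed above; arctan2 is used for the horizontal angle so that the full (−180°, 180°] range of the worked example is covered (an implementation choice, not quoted from the patent).

```python
import numpy as np

def cartesian_to_spherical(xyz):
    """Convert (N, 3) Cartesian coordinates to (r, vertical angle, horizontal angle)
    in degrees, per formulas (1)-(3)."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)        # (1)
    theta = np.degrees(np.arccos(z / r))   # (2) vertical angle, 0 deg at zenith
    phi = np.degrees(np.arctan2(y, x))     # (3) horizontal angle in (-180, 180]
    return r, theta, phi
```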
Step 2: define the first segmentation precision P_l1; segment the horizontal and vertical angles of the point cloud ring according to P_l1, dividing the point cloud ring into a number of angle areas, and map the point cloud ring into the two-dimensional point cloud gray image P_0, in which each pixel corresponds one-to-one to an angle area of the point cloud ring and the gray value of each pixel is computed from the RGB values of the points in the corresponding angle area.
In step 2, the horizontal and vertical angles of the point cloud ring are segmented to divide the ring into a number of angle areas, and on this basis the ring is mapped into a two-dimensional point cloud gray image; this two-dimensional imaging preserves the geometric features of the planar target in the generated point cloud gray image P_0.
In addition, the point cloud gray image P_0 is mainly used to identify the target area rather than to directly identify the target center point, so it does not need very high precision. The precision of the image can be understood as the ratio of the total number of points to the total number of angle areas; for example, the first segmentation precision P_l1 can be defined such that this ratio is about 4:1 to 10:1, i.e. on average about 4 to 10 points per angle area. The purpose of step 2 is to map the point cloud ring into a relatively low-precision two-dimensional gray image that preserves the geometric features of the planar target while improving processing efficiency as much as possible. Preferably, the first segmentation precision P_l1 is chosen so that the ratio of the total number of points in the point cloud ring to the total number of angle areas is limited to 4:1-10:1.
Specifically, step 2 includes:
step 21: define the first segmentation precision P_l1; empirically, P_l1 can be taken as 1/100 to 1/50 of the vertical angle range of the point cloud ring, i.e. (H_max − H_min)/100 ≤ P_l1 ≤ (H_max − H_min)/50; segment the horizontal and vertical angles of the point cloud ring according to P_l1 to obtain the horizontal-angle division [V_min + 0×P_l1, …, V_min + i×P_l1, …, V_max] and the vertical-angle division [H_min + 0×P_l1, …, H_min + j×P_l1, …, H_max], dividing the point cloud ring into a number of angle areas, each defined by one horizontal angle interval and one vertical angle interval;
In step 21, another suitable way to define the first segmentation precision P_l1 is to limit the ratio of the total number of points in the point cloud ring to the total number of angle areas to 4:1-10:1; this requires counting the total number of points in the ring, deriving the admissible total number of angle areas from the ratio, planning the numbers of horizontal and vertical angle intervals on that basis, and computing a suitable P_l1.
Step 22: mapping the point cloud ring into a point cloud gray image P 0 The method comprises the steps of carrying out a first treatment on the surface of the Wherein, each angle area of the point cloud ring needs to be mapped into a point cloud gray image P 0 The pixel coordinates (i, j) correspond to the horizontal angle interval [V min +i×P l1 ,V min +(i+1)×P l1 ]And vertical angle interval [H min +j×P l1 ,H min +(j+1)×P l1 ]The value ranges of the integers i and j are respectively 0, (. About.m.)V max -V min )/P l1 )-1]And [0 ] (. About.H max -H min )/P l1 )-1]The method comprises the steps of carrying out a first treatment on the surface of the And respectively extracting RGB values of point clouds in an angle area corresponding to each pixel coordinate, converting the RGB values of each point in the angle area into gray values by an average value method, taking the median of the gray values as the gray value of the corresponding pixel, and defining the gray value of the corresponding pixel as 0 when the point clouds are not in the angle area.
In step 22, the mean-method formula used is as follows:

Gray = (R + G + B)/3  (4)
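The two-dimensional imaging of steps 21-22 can be sketched as follows; this is an unoptimized illustration (a plain per-pixel loop) assuming the angles and RGB values are already available as NumPy arrays.

```python
import numpy as np

def ring_to_gray_image(theta, phi, rgb, p_l):
    """Map a point cloud ring to a 2D gray image (steps 21-22).

    theta, phi : vertical / horizontal angles in degrees, shape (N,)
    rgb        : (N, 3) RGB values of the points
    p_l        : segmentation precision in degrees (P_l1)
    """
    v_min, v_max = phi.min(), phi.max()
    h_min, h_max = theta.min(), theta.max()
    w = int(np.ceil((v_max - v_min) / p_l))  # number of horizontal-angle intervals
    h = int(np.ceil((h_max - h_min) / p_l))  # number of vertical-angle intervals

    i = np.clip(((phi - v_min) / p_l).astype(int), 0, w - 1)
    j = np.clip(((theta - h_min) / p_l).astype(int), 0, h - 1)
    gray = rgb.mean(axis=1)                  # formula (4): per-point gray by the mean method

    img = np.zeros((h, w), dtype=np.uint8)   # empty angle areas stay 0
    for jj in range(h):
        for ii in range(w):
            vals = gray[(i == ii) & (j == jj)]
            if vals.size:
                img[jj, ii] = int(np.median(vals))  # median of point grays per pixel
    return img
```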
step 3: generating a gray image template of the target, and generating a point cloud gray image P based on a template matching algorithm 0 Target gray scale image P 1 From the target grey image P 1 The location of the corresponding target area in the point cloud ring is identified.
In step 3, since step 2 has converted the point cloud ring into a two-dimensional point cloud gray image that retains the geometric features of the planar target (i.e. it appears as four alternating black and white sectors), a mature computer vision algorithm (here, template matching) can be used to find the target gray image P_1 in P_0; then, from the position of P_1 in P_0 and the correspondence between pixel coordinates and angle areas, the corresponding target area in the point cloud ring can be identified. The target area consists of the angle areas corresponding to all pixels of the target gray image P_1.
Specifically, step 3 includes:
step 31: generate an initial gray image of height and width d, and define the pixel coordinate system with its origin at the upper-left corner of the image; the two regions [0 : d/2, 0 : d/2] and [d/2 : d, d/2 : d] in the pixel coordinate system are black and the two regions [0 : d/2, d/2 : d] and [d/2 : d, 0 : d/2] are white, forming four alternating black and white squares;
step 32: define a rotation Angle of 2°-6°, and generate over 0°-180° an arithmetic sequence A = [Angle, …, Angle×(m−1), 180°] with common difference Angle; rotate the initial gray image by the angle values of A to obtain m additional initial gray images, and crop the edges of every initial gray image, keeping the [d/4 : d×3/4, d/4 : d×3/4] region, thereby generating m+1 target gray image templates that form the template set G;
step 33: compute, by the template matching algorithm, the correlation coefficients between all target gray image templates in the template set G and the different pixel regions of the point cloud gray image P_0; extract the best-matching pixel point (i_1, j_1), take the image of the region [i_1 : i_1 + d/2, j_1 : j_1 + d/2] in the pixel coordinate system as the target gray image P_1, and identify the corresponding target area in the point cloud ring.
In step 31, the letter d is used for the height and width of the initial gray image simply to make it easy to define the proportions among the initial image size, the black region size, the white region size and the cropped remaining region size; as for the specific value of d, it is only necessary to ensure that the cropped remaining region does not exceed the extent of the target in P_0, and d can be adjusted according to the actual target size or experience.
In step 32, the initial gray image is rotated by the angle values of the arithmetic sequence A; rotation may push some pixels outside the d×d extent, and only the pixels within d×d are retained. In addition, black borders may appear after rotation, so 1/4 of each edge of the image is cropped, leaving the [d/4 : d×3/4, d/4 : d×3/4] region as the target gray image template.
In step 33, obtaining the target gray image P_1 means obtaining the position of the region [i_1 : i_1 + d/2, j_1 : j_1 + d/2] in the pixel coordinate system; the coordinates of all pixels of P_1 are therefore known, giving all the corresponding horizontal and vertical angle intervals and finally the target area in the point cloud ring. In step 33, the template matching algorithm includes, but is not limited to, the normalized correlation coefficient matching method, the normalized correlation matching method, the normalized squared difference matching method, and the like.
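A sketch of steps 31-33 using OpenCV; the quadrant orientation of the base template and the use of cv2.TM_CCOEFF_NORMED (the normalized correlation coefficient matching method named above) follow the description, while d = 20 and Angle = 4° are the worked-example values, assumed here as defaults.

```python
import cv2
import numpy as np

def build_template_set(d=20, angle_step=4):
    """Steps 31-32: a d x d black/white quadrant image, rotated in angle_step
    increments over (0, 180] degrees, each result cropped to its central
    [d/4 : 3d/4, d/4 : 3d/4] region to discard rotation border artifacts."""
    base = np.full((d, d), 255, dtype=np.uint8)
    base[:d // 2, :d // 2] = 0           # top-left quadrant black
    base[d // 2:, d // 2:] = 0           # bottom-right quadrant black
    center = (d / 2 - 0.5, d / 2 - 0.5)
    templates = []
    for ang in range(0, 181, angle_step):     # 0 deg = the unrotated template
        m = cv2.getRotationMatrix2D(center, ang, 1.0)
        rot = cv2.warpAffine(base, m, (d, d))
        templates.append(rot[d // 4: 3 * d // 4, d // 4: 3 * d // 4])
    return templates

def find_target(gray_img, templates):
    """Step 33: normalized correlation coefficient matching; returns the
    top-left pixel (i1, j1) of the best-matching d/2 x d/2 region."""
    best = (-1.0, (0, 0))
    for t in templates:
        res = cv2.matchTemplate(gray_img, t, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val > best[0]:
            best = (max_val, max_loc)
    return best[1]   # (i1, j1) in pixel coordinates
```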
Step 4: define the second segmentation precision P_l2; segment the horizontal and vertical angles of the target area according to P_l2, dividing the target area into a number of smaller angle areas (specifically, smaller than the angle areas of step 2), thereby mapping the target area into the target gray image P_2 of higher precision (specifically, higher than that of P_1 in step 3); each pixel of P_2 corresponds one-to-one to an angle area of the target area, and the gray value of each pixel is computed from the RGB values of the points in the corresponding angle area.
Step 4 differs from step 2 in two ways. First, the object segmented in step 2 is the point cloud ring, while in step 4 it is the target area within the ring. Second, the two-dimensional imaging precision of step 2 is relatively low, since identifying the target area does not require high precision and efficiency matters more, whereas the precision in step 4 is higher, since it serves the extraction of the target center point: it raises the accuracy of feature point detection and hence of the target center coordinates, and ultimately the accuracy of point cloud orientation.
Specifically, step 4 includes:
step 41: obtain the value range [V_k1, V_k2] of the horizontal angle and the value range [H_k1, H_k2] of the vertical angle of the target area in the spherical coordinate system; count the number of points in the target area as Sum, take the square root of Sum, round down if the result is not an integer, and assign it to d_1;
Step 42: define the second segmentation precision P_l2, P_l2 = (H_k2 − H_k1)/d_1; segment the horizontal and vertical angles of the target area according to P_l2 to obtain the horizontal-angle division [V_k1 + 0×P_l2, V_k1 + 1×P_l2, …, V_k1 + i×P_l2, …, V_k2] and the vertical-angle division [H_k1 + 0×P_l2, H_k1 + 1×P_l2, …, H_k1 + j×P_l2, …, H_k2], dividing the target area into a number of angle areas, each defined by one horizontal angle interval and one vertical angle interval;
step 43: map the target area into the target gray image P_2, where each angle area of the target area maps to one pixel of P_2; the pixel coordinates (i, j) correspond to the horizontal angle interval [V_k1 + i×P_l2, V_k1 + (i+1)×P_l2] and the vertical angle interval [H_k1 + j×P_l2, H_k1 + (j+1)×P_l2], and the integers i and j each range over [0, d_1 − 1]; the RGB values of the points in the angle area corresponding to each pixel are extracted, the RGB value of each point is converted into a gray value by the mean method, the median of these gray values is taken as the gray value of the pixel, and when an angle area contains no points the gray value of the corresponding pixel is defined as 0.
In steps 41 to 43, the ratio of the total number of points to the total number of angle areas is about 1:1, i.e. each angle area can be regarded approximately as one point of the target area, or in other words two-dimensional imaging is performed at a spacing of roughly one point; this mainly improves the feature point detection precision and the accuracy of point cloud orientation.
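The choice of d_1 and P_l2 in steps 41-42 is small enough to sketch directly; the worked-example numbers in the comment (Sum = 2776, d_1 = 52) come from the application example later in this description.

```python
import math

def second_precision(h_k1, h_k2, num_points):
    """Steps 41-42: d1 = floor(sqrt(Sum)) pixels per side, so the target
    area maps to roughly one point per pixel; P_l2 = (H_k2 - H_k1) / d1."""
    d1 = math.floor(math.sqrt(num_points))
    p_l2 = (h_k2 - h_k1) / d1
    return d1, p_l2

# Worked-example check: Sum = 2776 -> d1 = 52,
# P_l2 = (90.2 - 88.0) / 52 ~= 0.042 degrees (quoted as 0.04 in the example).
```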
Step 5: filter the target gray image P_2, obtain the pixel coordinates of the target center point in the filtered target gray image through a feature point extraction algorithm, and extract the local coordinates of the target center point in the original point cloud data from the pixel coordinates. Because the two-dimensional imaging precision in step 5 is high, noise points with pixel value 0 are likely to appear, and filtering must be performed first to avoid affecting feature point extraction.
Specifically, step 5 includes:
step 51: in the target gray image P_2 generated in step 4, a pixel whose corresponding angle area contains no points has a gray value of 0 and appears in P_2 as a noise point; a filtering algorithm is applied to P_2, replacing the pixels with gray value 0 by the gray values of the surrounding pixels, to obtain a noise-free target gray image;
step 52: and acquiring the pixel coordinates of the target center point in the target gray level image without noise points through a feature point extraction algorithm, and extracting the local coordinates of the target center point under a Cartesian coordinate system by combining the corresponding relation of the pixel coordinates, the spherical coordinates and the Cartesian coordinates.
In step 51, the filtering algorithm includes, but is not limited to, median filtering, mean filtering, Gaussian filtering and the like; the gray value of each noise point is replaced by the median or mean of the gray values of all pixels in the filtering window around that noise point, yielding a noise-free target gray image.
In step 52, the feature point extraction algorithm includes, but is not limited to, the Harris corner detection algorithm, the SIFT corner detection algorithm, the template matching method and the like; the pixel coordinates of the target center point are determined through the gray gradient change features at the center point or the correlation coefficient values of template matching. For example, define the corner detection neighborhood size blockSize = 3, the Sobel operator size for the gradient map ksize = 3, and the corner response function parameter k = 4; obtain the pixel coordinates of the center point in the image through the Harris corner detection algorithm, and extract the local coordinates of the target center point in the Cartesian coordinate system by combining the correspondence between pixel coordinates, spherical coordinates and Cartesian coordinates.
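A sketch of step 5 with median filtering and Harris corner detection via OpenCV; note that OpenCV's Harris response parameter k is conventionally around 0.04-0.06, so k = 0.04 is assumed here rather than the literal "k = 4" printed above.

```python
import cv2
import numpy as np

def extract_center_pixel(target_gray):
    """Step 5: median-filter away zero-valued noise pixels, then take the
    strongest Harris corner response as the target center pixel."""
    filtered = cv2.medianBlur(target_gray, 3)   # replaces isolated 0-value pixels
    response = cv2.cornerHarris(np.float32(filtered), blockSize=3, ksize=3, k=0.04)
    j, i = np.unravel_index(np.argmax(response), response.shape)
    return i, j   # pixel coordinates (i, j) of the target center
```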
Step 6: and calculating point cloud coordinate conversion parameters according to the local coordinates and the construction coordinates of at least 3 groups of target center points, and converting the original point cloud data from the local coordinate system to the construction coordinate system according to the point cloud coordinate conversion parameters.
In step 6, the point cloud rings corresponding to all planar targets laid out at the various heights within the point cloud scanning range can be extracted, and the construction coordinates of the target center points are obtained in advance; the local coordinates and construction coordinates of at least 3 groups of target center points are selected, the coordinate transformation parameters of the point cloud are calculated, and the original point cloud is then oriented. The 3 selected groups of center point coordinate data should correspond to 3 planar targets of different layout heights that are not collinear. The coordinate conversion parameters are calculated as follows:
[X′, Y′, Z′, 1]ᵀ = S·[X, Y, Z, 1]ᵀ  (5)

wherein [X, Y, Z, 1] are the homogeneous coordinates in the initial coordinate system, [X′, Y′, Z′, 1] those in the target coordinate system, and S is the transformation matrix:

    S = | a  b  c  dx |
        | d  e  f  dy |
        | g  h  i  dz |
        | l  m  n  s  |   (6)

where a to i represent the rotation and scaling parameters, dx, dy, dz the translation parameters, l, m, n the projection parameters, and s the overall scale parameter.
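For illustration, when the projection parameters l, m, n are negligible, S reduces to a similarity transform (scale s, rotation R, translation t) that can be solved from at least 3 non-collinear point pairs by the SVD-based Umeyama method; the sketch below is one common way to compute such parameters and is not the patent's prescribed solver.

```python
import numpy as np

def solve_similarity(local_pts, constr_pts):
    """Estimate scale s, rotation R, translation t with
    constr ~= s * R @ local + t from >= 3 non-collinear point pairs
    (Umeyama method)."""
    mu_l, mu_c = local_pts.mean(axis=0), constr_pts.mean(axis=0)
    A, B = local_pts - mu_l, constr_pts - mu_c
    U, S, Vt = np.linalg.svd(B.T @ A / len(A))   # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                # guard against a reflection
        D[2, 2] = -1
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum() * len(A)
    t = mu_c - s * R @ mu_l
    return s, R, t

# orienting the original cloud: constr = s * (R @ pts.T).T + t
```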
Referring to fig. 2-10, an example of point cloud orientation using the method of the present invention is provided.
According to step 11 of the method of the present invention, four planar targets numbered 1, 2, 3, 4 are arranged at different heights in the point cloud scanning area, with at least three of the targets non-collinear; the vertical length of each target is h_b = 0.297 m and the layout heights h of the targets are [0.2 m, 0.6 m, 1.0 m, 1.4 m]; with the three-dimensional laser scanner erected at height h_y and Δh taken as 0.1 m, the point cloud ring corresponding to each target is extracted; for example, the point cloud ring corresponding to the target with layout height h = 1.4 m is shown in fig. 2;
according to step 12 of the method, the Cartesian coordinates of all points in the point cloud ring are converted into spherical coordinates through a coordinate conversion formula, and the values of the horizontal angle and the vertical angle of the point cloud ring under the spherical coordinate system are respectively [ -180 degrees, 180 degrees ] and [82 degrees, 99 degrees ].
According to step 21 of the method of the invention, the first segmentation precision is defined as P_l1 = 0.2°; the resulting divisions of the horizontal and vertical angles are [−180°, −179.8°, …, 179.8°, 180°] and [82°, 82.2°, …, 98.8°, 99°]; the angle area formed by one horizontal angle interval and one vertical angle interval is taken as one pixel of the two-dimensional image, and the horizontal and vertical angle intervals corresponding to pixel coordinates (i, j) are [−180° + i×0.2°, −180° + (i+1)×0.2°] and [82° + j×0.2°, 82° + (j+1)×0.2°] respectively, where i and j range over [0, 1799] and [0, 84];
According to step 22 of the method of the present invention, the RGB values of the points in each angle area of the point cloud ring are converted into the gray values of the corresponding pixels, finally yielding the point cloud gray image P_0, as shown in fig. 3.
According to step 31 of the method of the present invention, defining the height and width d=20 of the initial gray image, the initial gray image being generated as shown in fig. 4;
According to step 32 of the method of the present invention, a rotation Angle = 4° is defined and the arithmetic sequence A = [4°, 8°, …, 176°, 180°] with common difference Angle is generated between 0° and 180°; the initial gray image is rotated by the angle values of A to obtain another 25 initial gray images; the edges of each of the 26 initial gray images are cropped, leaving the [5:15, 5:15] region as a target gray image template, and the 26 generated target gray image templates form the template set G; for example, the target gray image template obtained by cropping after a 24° rotation is shown in fig. 5;
According to step 33 of the method, the normalized correlation coefficient matching method is used to find the target gray image P_1 in the point cloud gray image P_0; the matched pixel region is shown in fig. 6; from all pixel coordinates within this region, the target area in the point cloud ring is obtained, and its point cloud is shown in fig. 7.
According to step 41 of the method of the invention, the horizontal and vertical angle ranges of the target area are obtained as [−64.2°, −62.0°] and [88.0°, 90.2°]; the number of points in the target area is counted as Sum = 2776; since the width and height of the target area are the same, the square root of Sum is taken and rounded down to an integer, assigning d_1 = 52;
According to step 42 of the method of the present invention, the second segmentation precision is defined as P_l2 = (90.2° − 88.0°)/52 ≈ 0.04°; through steps 42 and 43, the higher-precision target gray image P_2 is generated, as shown in fig. 8.
According to step 51 of the method of the present invention, the target gray image P_2 shown in fig. 8 contains many noise points, so P_2 is filtered to obtain a noise-free target gray image, as shown in fig. 9;
the result obtained by the feature point extraction algorithm according to step 52 of the method of the present invention is shown in fig. 10.
According to step 6 of the method of the invention, since the local coordinates and construction coordinates of the center points of 4 groups of targets are available, and at least 3 groups of the coordinate data correspond to 3 planar targets of different layout heights that are not collinear, the coordinate transformation parameters of the point cloud are calculated and the original point cloud is then oriented.
As can be seen from the method and the related examples according to the embodiments of the present invention, the method has the following advantages:
1) The method specifies the Z coordinate range of each point cloud ring and extracts the rings corresponding to targets at different layout heights separately, so irrelevant point cloud data can be removed from the orientation process; meanwhile, because the target center extraction procedure is identical for rings at different heights, it can be run in parallel across multiple processes, improving point cloud processing efficiency.
2) According to the method, targets distributed at different heights are numbered, all plane targets in an original point cloud are distinguished, unique mapping of the target center coordinates under a point cloud local coordinate system and the target center coordinates under a construction coordinate system is established, coordinate transformation parameters are automatically solved according to a mapping relation, and then the original point cloud is converted from the local coordinate system to the construction coordinate system.
3) The method fully utilizes the coordinate information and RGB information contained in the point cloud data, maps the three-dimensional point cloud data into the two-dimensional gray level image, and can reserve the geometric characteristics of the plane target in the two-dimensional gray level image, so that the target area can be automatically identified and the target center point can be extracted by utilizing a mature computer vision algorithm, and the automatic processing of the point cloud orientation can be rapidly and accurately realized.
4) By defining two segmentation precisions, the method obtains the target center point coordinates while balancing precision and efficiency. The first segmentation precision maps the point cloud ring, a form of three-dimensional data, into a relatively low-precision two-dimensional point cloud gray image; this image only needs to preserve the geometric features of the planar target so that the target gray image can be found and the target area identified in the point cloud ring, so low-precision two-dimensional imaging guarantees target area identification while improving processing efficiency. The second segmentation precision serves higher-precision two-dimensional imaging, mapping the target area into a higher-precision two-dimensional target gray image, which raises the detection precision of the target center point and ultimately the precision of point cloud orientation.
The embodiment of the invention also provides electronic equipment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the automatic point cloud orientation method is realized when the processor executes the computer program.
According to the automatic point cloud orientation method adopted by the electronic equipment, coordinate information and RGB information contained in point cloud data can be fully utilized, three-dimensional point cloud data are mapped into a two-dimensional gray scale image, and geometric characteristics of a plane target can be reserved in the two-dimensional gray scale image, so that a target area can be automatically identified and a target center point can be extracted by using a mature computer vision algorithm, and automatic processing of point cloud orientation can be rapidly and accurately realized.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the automated point cloud orientation method of the embodiment of the invention.
The automatic point cloud orientation method adopted by the computer readable storage medium can fully utilize coordinate information and RGB information contained in point cloud data, map three-dimensional point cloud data into a two-dimensional gray scale image, and retain geometric characteristics of a planar target in the two-dimensional gray scale image, so that a target area can be automatically identified and a target center point can be extracted by utilizing a mature computer vision algorithm, and automatic point cloud orientation processing can be rapidly and accurately realized.
The above embodiments are only preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, but any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention are intended to be within the scope of the present invention as claimed.

Claims (10)

1. An automatic point cloud orientation method based on computer vision is characterized by comprising the following steps:
step 1: extracting a point cloud ring covering the target layout range from the original point cloud data, and converting the point cloud ring from a Cartesian coordinate system to a spherical coordinate system;
step 2: defining a first segmentation precision, segmenting a horizontal angle and a vertical angle of a point cloud ring according to the first segmentation precision, dividing the point cloud ring into a plurality of angle areas, mapping the point cloud ring into a two-dimensional point cloud gray image, wherein each pixel of the point cloud gray image corresponds to each angle area of the point cloud ring one by one, and the gray value of each pixel is obtained by converting and calculating the RGB value of the point cloud in the corresponding angle area;
step 3: generating a target gray image template, searching a target gray image from the point cloud gray image based on a template matching algorithm, and identifying a corresponding target region in the point cloud ring according to the position of the target gray image;
step 4: defining a second division precision, dividing a horizontal angle and a vertical angle of a target area according to the second division precision, dividing the target area into a plurality of smaller angle areas, and mapping the target area into a target gray image with higher precision, wherein each pixel of the target gray image corresponds to each angle area of the target area one by one, and gray values of each pixel are obtained by converting and calculating RGB values of point clouds in the corresponding angle areas;
step 5: filtering the target gray level image, acquiring pixel coordinates of a target center point in the target gray level image through a feature point extraction algorithm, and extracting local coordinates corresponding to the target center point in original point cloud data according to the pixel coordinates;
step 6: and calculating point cloud coordinate conversion parameters according to the local coordinates and the construction coordinates of at least 3 groups of target center points, and converting the original point cloud data from the local coordinate system to the construction coordinate system according to the point cloud coordinate conversion parameters.
2. The automated point cloud orientation method of claim 1, wherein step 1 comprises:
step 11: at least 3 non-collinear planar targets at different heights are arranged in the point cloud scanning area corresponding to the original point cloud data; each planar target consists of four 90° sectors, each sector is black or white, and adjacent sectors differ in color, so that the target center point can be identified through the gray difference between adjacent sectors; pass-through filtering is used to extract, for each target, the points whose Z coordinate in the original point cloud data lies in the range [h − h_y − Δh, h − h_y + h_b + Δh], where the local coordinate system of the original point cloud data takes the instrument center as its origin, h is the target layout height in [h_1, h_2, h_3, …, h_n], n is the number of targets, h_y is the erection height of the three-dimensional laser scanner, h_b is the vertical length of the target itself, and Δh is a height adjustment parameter taken as 0.1-0.5, ensuring that the extracted point cloud ring covers the target layout range;
step 12: the Cartesian coordinates of all points in the point cloud ring are converted into the corresponding spherical coordinates, giving the value range [V_min, V_max] of the horizontal angle and the value range [H_min, H_max] of the vertical angle of the point cloud ring in the spherical coordinate system.
3. The automated point cloud orientation method of claim 2, wherein step 2 comprises:
step 21: define the first segmentation precision P_l1; segment the horizontal and vertical angles of the point cloud ring according to P_l1 to obtain the horizontal-angle division [V_min + 0×P_l1, V_min + 1×P_l1, …, V_min + i×P_l1, …, V_max] and the vertical-angle division [H_min + 0×P_l1, H_min + 1×P_l1, …, H_min + j×P_l1, …, H_max], dividing the point cloud ring into a number of angle areas, each defined by one horizontal angle interval and one vertical angle interval;
step 22: map the point cloud ring into the point cloud gray image P_0, where each angle area of the point cloud ring maps to one pixel of P_0; the pixel coordinates (i, j) correspond to the horizontal angle interval [V_min + i×P_l1, V_min + (i+1)×P_l1] and the vertical angle interval [H_min + j×P_l1, H_min + (j+1)×P_l1], and the integers i and j range over [0, ((V_max − V_min)/P_l1) − 1] and [0, ((H_max − H_min)/P_l1) − 1] respectively; the RGB values of the points in the angle area corresponding to each pixel are extracted, the RGB value of each point is converted into a gray value by the mean method, the median of these gray values is taken as the gray value of the pixel, and when an angle area contains no points the gray value of the corresponding pixel is defined as 0.
4. The automated point cloud orientation method of claim 3, wherein said step 3 comprises:
step 31: generate an initial gray image of height and width d, and define the pixel coordinate system with its origin at the upper-left corner of the image; the two regions [0 : d/2, 0 : d/2] and [d/2 : d, d/2 : d] in the pixel coordinate system are black and the two regions [0 : d/2, d/2 : d] and [d/2 : d, 0 : d/2] are white, forming four alternating black and white squares;
step 32: define a rotation Angle of 2°-6°, and generate over 0°-180° an arithmetic sequence A = [Angle, …, Angle×(m−1), 180°] with common difference Angle; rotate the initial gray image by the angle values of A to obtain m additional initial gray images, and crop the edges of every initial gray image, keeping the [d/4 : d×3/4, d/4 : d×3/4] region, thereby generating m+1 target gray image templates that form the template set G;
step 33: solving, by a template matching algorithm, the correlation coefficients between every target gray image template in the template set G and the different pixel areas of the point cloud gray image P_0; extracting the best-matching pixel point (i_1, j_1), taking the image of the area [i_1:i_1+d/2, j_1:j_1+d/2] under the pixel coordinate system as the target gray image P_1, and identifying the corresponding target area from the point cloud ring.
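A sketch of steps 31 and 32 using OpenCV; d = 64 and a 4-degree step are example values within the claimed ranges, and build_template_set is a hypothetical helper name:

```python
import numpy as np
import cv2

def build_template_set(d=64, angle_step=4):
    """Quadrant template plus rotated, centre-cropped variants (set G)."""
    base = np.full((d, d), 255, dtype=np.uint8)     # white background
    base[0:d//2, 0:d//2] = 0                        # top-left quadrant black
    base[d//2:d, d//2:d] = 0                        # bottom-right quadrant black
    crop = lambda im: im[d//4:d*3//4, d//4:d*3//4]  # keep the centre area
    templates = [crop(base)]
    centre = (d / 2 - 0.5, d / 2 - 0.5)
    for angle in np.arange(angle_step, 180 + angle_step, angle_step):
        m = cv2.getRotationMatrix2D(centre, float(angle), 1.0)
        rotated = cv2.warpAffine(base, m, (d, d))
        templates.append(crop(rotated))  # cropping discards border artifacts
    return templates
```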
5. The automated point cloud orientation method of claim 4, wherein in step 33, the template matching algorithm employs a normalized correlation coefficient matching method, a normalized correlation matching method, or a normalized squared difference matching method.
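A sketch of the step-33 matching with the normalized correlation coefficient method (cv2.TM_CCOEFF_NORMED); img stands for the point cloud gray image P_0 and templates for the set G from the previous sketch:

```python
import numpy as np
import cv2

def find_target(img, templates):
    """Best match over all templates; returns P_1 and its top-left pixel."""
    best_score, best_loc, best_size = -np.inf, None, None
    for tpl in templates:
        res = cv2.matchTemplate(img, tpl, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val > best_score:
            best_score, best_loc, best_size = max_val, max_loc, tpl.shape[0]
    i1, j1 = best_loc                      # (x, y) of the best match
    p1 = img[j1:j1 + best_size, i1:i1 + best_size]
    return p1, (i1, j1), best_score
```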
6. The automated point cloud orientation method of claim 4, wherein said step 4 comprises:
step 41: obtaining the value range [V_k1, V_k2] of the horizontal angle and the value range [H_k1, H_k2] of the vertical angle of the target area in the spherical coordinate system; counting the number of point cloud points in the target area as Sum, taking the square root of Sum, rounding down when the square root is not an integer, and assigning the result to d_1;
step 42: defining a second segmentation precision P_l2, P_l2 = (H_k2 − H_k1)/d_1; dividing the horizontal angle and the vertical angle of the target area according to the second segmentation precision P_l2 to obtain the division intervals [V_k1 + 0×P_l2, V_k1 + 1×P_l2, …, V_k1 + i×P_l2, …, V_k2] of the horizontal angle and [H_k1 + 0×P_l2, H_k1 + 1×P_l2, …, H_k1 + j×P_l2, …, H_k2] of the vertical angle, dividing the target area into a plurality of angle areas, each angle area being defined by a horizontal angle interval and a vertical angle interval;
step 43: mapping the target area into a target gray image P_2; wherein each angle area of the target area is mapped to one pixel of the target gray image P_2, the pixel coordinate (i, j) corresponding to the horizontal angle interval [V_k1 + i×P_l2, V_k1 + (i+1)×P_l2] and the vertical angle interval [H_k1 + j×P_l2, H_k1 + (j+1)×P_l2], with the integers i and j both ranging over [0, d_1 − 1]; the RGB values of the point cloud in the angle area corresponding to each pixel coordinate are extracted, the RGB value of each point in the angle area is converted into a gray value by the mean-value method, and the median of these gray values is taken as the gray value of the corresponding pixel; when there is no point cloud in the angle area, the gray value of the corresponding pixel is defined as 0.
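Steps 41 and 42 reduce to two lines of arithmetic, and the step-43 remapping can reuse the map_to_gray_image routine sketched earlier, called with the target area's own angle bounds and a d_1 × d_1 pixel grid. A minimal sketch:

```python
import math

def second_precision(num_points, h_k1, h_k2):
    """d_1 = floor(sqrt(Sum)); P_l2 = (H_k2 - H_k1) / d_1."""
    d1 = math.isqrt(num_points)
    p_l2 = (h_k2 - h_k1) / d1
    return d1, p_l2

# Hypothetical usage, with v/h/rgb restricted to the target area:
# d1, p_l2 = second_precision(len(region), h_k1, h_k2)
# p2 = map_to_gray_image(v, h, rgb, v_k1, v_k2, h_k1, h_k2, p_l2)
```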
7. The automated point cloud orientation method of claim 6, wherein said step 5 comprises:
step 51: in the target gray image P_2 generated in step 4, a pixel whose corresponding angle area contains no point cloud has a gray value of 0 and appears as a noise point in the target gray image P_2; a filtering algorithm is adopted to process the target gray image P_2, replacing the pixels whose gray value is 0 with gray values derived from the surrounding pixel points, so as to obtain a noise-free target gray image;
step 52: acquiring the pixel coordinates of the target center point in the noise-free target gray image through a feature point extraction algorithm, and extracting the local coordinates of the target center point under the Cartesian coordinate system by combining the correspondence among pixel coordinates, spherical coordinates and Cartesian coordinates.
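A sketch of the step-52 back-conversion; mapping the detected pixel to the centre of its angle cell and then to the nearest ring point is one plausible reading of the claimed correspondence, not the only one, and all names here are ours:

```python
import numpy as np

def pixel_to_cartesian(i, j, v_k1, h_k1, p_l2, ring, v, h):
    """Pixel (i, j) of P_2 -> cell-centre angles -> nearest point's XYZ."""
    v_c = v_k1 + (i + 0.5) * p_l2      # horizontal angle of the cell centre
    h_c = h_k1 + (j + 0.5) * p_l2      # vertical angle of the cell centre
    idx = np.argmin((v - v_c) ** 2 + (h - h_c) ** 2)
    return ring[idx, :3]               # local Cartesian coordinates
```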
8. The automated point cloud orientation method of claim 7, wherein in step 51, the filtering algorithm adopts median filtering, mean filtering or Gaussian filtering, replacing the gray value of each noise point with the median or average gray value of all pixels in the filtering window corresponding to that noise point, so as to obtain a noise-free target gray image;
in step 52, the feature point extraction algorithm adopts the Harris corner detection algorithm, the SIFT feature point detection algorithm or a template matching method, and determines the pixel coordinates of the target center point through the gray gradient variation characteristics at the target center point or through the correlation coefficient values of template matching.
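A combined sketch of steps 51 and 52 using median filtering and Harris corner detection; the window size and the Harris parameters are illustrative choices, not values fixed by the claims:

```python
import numpy as np
import cv2

def extract_center_pixel(p2, ksize=3):
    """Fill zero-valued noise pixels from their neighbourhood, then take
    the strongest Harris corner as the target centre."""
    filtered = cv2.medianBlur(p2, ksize)
    denoised = np.where(p2 == 0, filtered, p2).astype(np.uint8)
    response = cv2.cornerHarris(np.float32(denoised), 5, 3, 0.04)
    j, i = np.unravel_index(np.argmax(response), response.shape)
    return i, j   # pixel coordinates of the target centre point
```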
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the automated point cloud orientation method of any one of claims 1-8.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the automated point cloud orientation method of any one of claims 1-8.
CN202410003521.6A 2024-01-02 2024-01-02 Automatic point cloud orientation method, equipment and storage medium based on computer vision Active CN117495969B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410003521.6A CN117495969B (en) 2024-01-02 2024-01-02 Automatic point cloud orientation method, equipment and storage medium based on computer vision

Publications (2)

Publication Number Publication Date
CN117495969A 2024-02-02
CN117495969B 2024-04-16

Family

ID=89680465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410003521.6A Active CN117495969B (en) 2024-01-02 2024-01-02 Automatic point cloud orientation method, equipment and storage medium based on computer vision

Country Status (1)

Country Link
CN (1) CN117495969B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11099275B1 (en) * 2020-04-29 2021-08-24 Tsinghua University LiDAR point cloud reflection intensity complementation method and system
CN114862788A (en) * 2022-04-29 2022-08-05 湖南联智科技股份有限公司 Automatic identification method for plane target coordinates of three-dimensional laser scanning
CN117095038A (en) * 2023-08-24 2023-11-21 上海盎维信息技术有限公司 Point cloud filtering method and system for laser scanner

Also Published As

Publication number Publication date
CN117495969B (en) 2024-04-16

Similar Documents

Publication Title
CN110443836B (en) Point cloud data automatic registration method and device based on plane features
CN109978839B (en) Method for detecting wafer low-texture defects
CN107014294B (en) Contact net geometric parameter detection method and system based on infrared image
CN107203973B (en) Sub-pixel positioning method for center line laser of three-dimensional laser scanning system
CN107392929B (en) Intelligent target detection and size measurement method based on human eye vision model
CN110473221B (en) Automatic target object scanning system and method
CN112233116B (en) Concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description
CN110853081B (en) Ground and airborne LiDAR point cloud registration method based on single-tree segmentation
CN104008542B (en) A kind of Fast Corner matching process for specific plane figure
CN115170669A (en) Identification and positioning method and system based on edge feature point set registration and storage medium
CN114331986A (en) Dam crack identification and measurement method based on unmanned aerial vehicle vision
CN113343976A (en) Anti-highlight interference engineering measurement mark extraction method based on color-edge fusion feature growth
CN111429548B (en) Digital map generation method and system
CN112257721A (en) Image target region matching method based on Fast ICP
JP2008242508A (en) Automatic specific area extraction system, automatic specific area extraction method and program
CN113313116A (en) Vision-based accurate detection and positioning method for underwater artificial target
CN117495969B (en) Automatic point cloud orientation method, equipment and storage medium based on computer vision
CN106934846B (en) Cloth image processing method and system
JP2003141567A (en) Three-dimensional city model generating device and method of generating three-dimensional city model
CN116596987A (en) Workpiece three-dimensional size high-precision measurement method based on binocular vision
TWI659390B (en) Data fusion method for camera and laser rangefinder applied to object detection
CN109191483A (en) A kind of quick watershed detection method of helicopter blade Circle in Digital Images shape mark
CN108416346B (en) License plate character positioning method and device
CN115511716A (en) Multi-view global map splicing method based on calibration board
CN108520498A (en) A kind of high efficiency crystalline shade noise remove method in crystal structure process monitoring

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant