CN109978982B - Point cloud rapid coloring method based on oblique image - Google Patents

Point cloud rapid coloring method based on oblique image

Info

Publication number
CN109978982B
CN109978982B
Authority
CN
China
Prior art keywords
point cloud
adopting
image
steps
following
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910262805.6A
Other languages
Chinese (zh)
Other versions
CN109978982A (en)
Inventor
李雄刚
翟瑞聪
张峰
苏超
李国强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Power Grid Co Ltd
Machine Inspection Center of Guangdong Power Grid Co Ltd
Original Assignee
Guangdong Power Grid Co Ltd
Machine Inspection Center of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Power Grid Co Ltd, Machine Inspection Center of Guangdong Power Grid Co Ltd filed Critical Guangdong Power Grid Co Ltd
Priority to CN201910262805.6A priority Critical patent/CN109978982B/en
Publication of CN109978982A publication Critical patent/CN109978982A/en
Application granted granted Critical
Publication of CN109978982B publication Critical patent/CN109978982B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/55 Radiosity

Abstract

The invention discloses a point cloud rapid coloring method based on oblique images, which comprises the following steps: acquiring image data of an object to be measured by oblique photography, and taking the image data as the target image; extracting feature points from the target image with a second-order partial derivative operator; matching the feature points under the optical flow constancy principle, and reconstructing a three-dimensional point cloud; and mapping the three-dimensional point cloud with an S-shaped (Sigmoid) function to convert it into RGB color information. The invention avoids the use of a laser probe, so the cost of acquiring point cloud data is greatly reduced; the fast feature point detection algorithm and the optical flow constancy matching algorithm greatly reduce the time and space complexity of the three-dimensional reconstruction; and mapping the three-dimensional point cloud coordinates through the S-shaped function colors the point cloud quickly, which improves the efficiency of later classification and display.

Description

Point cloud rapid coloring method based on oblique image
Technical Field
The invention relates to the technical field of point cloud data processing, and in particular to a point cloud rapid coloring method based on oblique images.
Background
Currently, the mainstream technique in the three-dimensional reconstruction and ranging industry is to acquire the three-dimensional coordinate information of a target object with a laser radar (LiDAR). The radar is usually mounted on an aircraft: when the aircraft passes over the object to be measured, the radar emits electromagnetic waves at a certain frequency and derives point cloud data of the object from the echoes. The acquired point cloud attributes include the coordinates (x, y, z) of the object in the geographic world coordinate system, the intensity of the echo, the direction of the echo, and so on.
The problems in the prior art are as follows: laser radar equipment is complex to operate, which hinders large-scale adoption in ordinary service scenarios; the laser probe is a precision instrument, so its use cost is high; the volume of laser point cloud data is large, which makes storage and transmission inconvenient; owing to the limits of the scanning frequency, laser point cloud data cannot capture the fine local features of the object to be measured the way an image can; and laser point cloud data contains no color information, which hampers later applications.
Technologies such as three-dimensional reconstruction, ranging and classification based on point cloud data are widely applied in production practice; how to acquire high-precision point cloud data efficiently and at low cost is therefore an urgent problem to be solved.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a point cloud rapid coloring method based on an oblique image, which solves the problems in the prior art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a point cloud rapid coloring method based on an oblique image comprises the following steps:
acquiring image data of an object to be detected by adopting an oblique photography technology, and taking the image data as a target image;
extracting feature points from the target image by adopting a second-order partial derivative operator;
matching the feature points by adopting the optical flow constancy principle, and reconstructing a three-dimensional point cloud;
and mapping the three-dimensional point cloud by adopting an S-shaped function, and converting the three-dimensional point cloud into RGB color information.
Optionally, the step of extracting the feature points in the target image by adopting the second-order partial derivative operator comprises the following steps:
taking the target image as the data source for feature texture extraction, where the mathematical description of the feature texture extraction is as follows:

g_{xx}(x, y) = \sum_{i=-h}^{h} \sum_{j=-w}^{w} gradx(x+i, y+j)^2 \quad (1)

g_{yy}(x, y) = \sum_{i=-h}^{h} \sum_{j=-w}^{w} grady(x+i, y+j)^2 \quad (2)

g_{xy}(x, y) = \sum_{i=-h}^{h} \sum_{j=-w}^{w} gradx(x+i, y+j) \, grady(x+i, y+j) \quad (3)

Z(x, y) = \begin{pmatrix} g_{xx} & g_{xy} \\ g_{xy} & g_{yy} \end{pmatrix} \quad (4)

\lambda(x, y) = \frac{(g_{xx} + g_{yy}) - \sqrt{(g_{xx} - g_{yy})^2 + 4 g_{xy}^2}}{2} \quad (5)

wherein:

h = (height - 1)/2, \quad w = (width - 1)/2 \quad (6)

height and width are the height and width of the neighborhood window centered at the pixel position (x, y), and gradx and grady are the gradient images of the original image in the transverse and longitudinal directions;
the feature value of each feature point is calculated by combining formulas (1) to (5) with the gradient images.
Optionally, the step of extracting the feature points in the target image by using the second-order partial derivative operator further comprises the following steps:
screening out, from the extracted feature points, the feature points meeting the following conditions: the feature value is larger than a preset threshold, and the point is located in the region of interest.
Optionally, the step of matching the feature points by adopting the optical flow constancy principle and reconstructing a three-dimensional point cloud comprises the following steps:
the process of feature point matching is: J(x) = I(x - d) + n(x), where n is noise.
Optionally, the step of mapping the three-dimensional point cloud with the S-shaped function and converting it into RGB color information comprises the following steps:
normalizing the three coordinate values of the point cloud, the mathematical description of which is as follows:

R = \frac{255}{1 + e^{-x}}, \qquad G = \frac{255}{1 + e^{-y}}, \qquad B = \frac{255}{1 + e^{-z}}

where x, y, z are the coordinates of the point in the world coordinate system, and R, G, B are the corresponding pixel values in the display color system.
Optionally, the values of height and width are both 7.
Optionally, the step of screening out, from the extracted feature points, the feature points meeting the conditions further comprises the following steps:
when screening the feature points, the feature points are first sorted in descending order of feature value.
Optionally, the preset threshold is 128.
Optionally, when the feature points located in the region of interest are screened out from the extracted feature points, the mask for extracting the image feature points located in the region of interest may be described by the following formula:

mask(x, y) = \begin{cases} 1, & (x, y) \in ROI \\ 0, & \text{otherwise} \end{cases}
Compared with the prior art, the invention has the following beneficial effects:
The invention provides a point cloud rapid coloring method based on oblique images, in which an unmanned aerial vehicle acquires high-definition image data through oblique photography, a point cloud is generated from the image data, and the point cloud is colored quickly. The method avoids the use of a laser probe, so the cost of acquiring point cloud data is greatly reduced; the fast feature point detection algorithm and the optical flow constancy matching algorithm greatly reduce the time and space complexity of the three-dimensional reconstruction algorithm; and mapping the three-dimensional point cloud coordinates through the S-shaped function colors the point cloud quickly, which improves the efficiency of later classification and display.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a flowchart of a point cloud rapid coloring method based on an oblique image according to the present invention;
fig. 2 is a flowchart of step S2 in the point cloud rapid coloring method based on oblique images provided by the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The technical scheme of the invention is further explained by the specific implementation mode in combination with the attached drawings.
Referring to fig. 1 and fig. 2, the present invention provides a method for quickly coloring a point cloud based on an oblique image, which includes the following steps:
s1, acquiring image data of an object to be detected by adopting an oblique photography technology to serve as a target image.
In order to acquire high-resolution images of the target object at close range and to restore the real surface information of the object through image-based three-dimensional reconstruction, the invention collects images by unmanned aerial vehicle (UAV) oblique photography. In this technique, multiple image acquisition sensors are mounted on the UAV platform, and image data of the object to be measured are collected simultaneously from five different angles: one vertical view and four oblique views. The resulting target images serve as the data source for the feature texture extraction of the subsequent steps.
S2, extracting feature points from the target image by adopting a second-order partial derivative operator.
In this step, a fast feature point extraction algorithm is used to obtain the positions of the feature points and their corresponding feature values, a thresholding strategy is used to select the feature points quickly, and initial coordinates on the target image are obtained for the points corresponding to the dense feature points of the initial image.
Specifically, the step S2 includes the steps of:
s201, extracting feature textures by taking the target image as a data source.
In this step, the mathematical description of feature texture extraction is as follows:
g_{xx}(x, y) = \sum_{i=-h}^{h} \sum_{j=-w}^{w} gradx(x+i, y+j)^2 \quad (1)

g_{yy}(x, y) = \sum_{i=-h}^{h} \sum_{j=-w}^{w} grady(x+i, y+j)^2 \quad (2)

g_{xy}(x, y) = \sum_{i=-h}^{h} \sum_{j=-w}^{w} gradx(x+i, y+j) \, grady(x+i, y+j) \quad (3)

Z(x, y) = \begin{pmatrix} g_{xx} & g_{xy} \\ g_{xy} & g_{yy} \end{pmatrix} \quad (4)

\lambda(x, y) = \frac{(g_{xx} + g_{yy}) - \sqrt{(g_{xx} - g_{yy})^2 + 4 g_{xy}^2}}{2} \quad (5)

wherein:

h = (height - 1)/2, \quad w = (width - 1)/2 \quad (6)

height and width are the height and width of the neighborhood window centered at the pixel position (x, y), and gradx and grady are the gradient images of the original image in the transverse and longitudinal directions.
since the image local feature is calculated, the value of height, width should not be too small, if the value of height, width is too small, it will result in too many feature points, and if it is too large, it will aggravate the time overhead of the algorithm in selecting the feature points.
In this embodiment, in view of the above, in the specific experiment, the value of height and width is 7, and h is 3,w is 3.
It is understood that in a specific experiment, the height and width can be set to other values than 7 as long as the number of feature points and the selection of the feature points are balanced, and the values for achieving the experimental effect are all within the protection scope of the embodiment.
S202, calculating the feature value of each feature point.
In this step, the feature value of each feature point is calculated by combining formulas (1) to (5) with the gradient images. The feature values of all pixels can also be represented in the form of an image, i.e., a feature value image.
S203, screening out the feature points that meet the conditions.
In this step, the extracted feature values are first sorted in descending order, and the feature points satisfying the conditions are selected for subsequent tracking and analysis.
Specifically, the conditions used in the experiment of this example are as follows: the coordinates of the feature point lie in the region of interest, and its feature value is larger than the preset threshold.
When the feature points located in the region of interest are screened out from the extracted feature points, the mask for extracting the image feature points located in the region of interest can be described by the following formula:

mask(x, y) = \begin{cases} 1, & (x, y) \in ROI \\ 0, & \text{otherwise} \end{cases}
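As a short illustrative sketch of this screening rule (assuming the feature value image of the previous sketch, a 0/1 region-of-interest mask as in the formula above, and the preset threshold of 128 given elsewhere in the text; the helper name select_features is hypothetical):

```python
# Sketch: keep points inside the ROI whose feature value exceeds the preset
# threshold, sorted in descending order of feature value.
import numpy as np

def select_features(eig: np.ndarray, roi_mask: np.ndarray, thresh: float = 128.0):
    candidates = (eig > thresh) & (roi_mask > 0)     # the two screening conditions
    ys, xs = np.nonzero(candidates)
    order = np.argsort(-eig[ys, xs])                 # descending feature value
    return np.stack([xs[order], ys[order]], axis=1)  # (N, 2) array of x, y
```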
and S3, matching the feature points by adopting an optical flow constant principle, and reconstructing a three-dimensional point cloud.
In the step, feature point pairs are matched through a constant optical flow Kanade-Lucas-Tomasi method in image processing, and three-dimensional point cloud is generated from unmanned aerial vehicle oblique image data.
Specifically, the process of feature point matching is as follows:
J(x) = I(x - d) + n(x) \qquad (7)
where n is noise.
The displacement d is selected as the one that minimizes the residual defined by the double integral over the window W:

\varepsilon = \iint_W \left[ I(x - d) - J(x) \right]^2 \omega \, dx \qquad (8)

Here \omega is a weight function; in the simplest case, \omega can be set to 1. The weight function may instead depend on the brightness pattern of the image, or be chosen as a Gaussian-like function to emphasize the central region of the window. The linearization procedure that follows is most effective when the displacement is much smaller than the size of the window.
The process of feature point matching is as shown in equation (7); in specific practice it is solved with Newton's iterative method, and the convergence speed of the solution is controlled by specifying the number of iterations. The specific mathematical derivation is as follows:
the present embodiment defines the difference between the two windows as follows:
Figure BDA0002015847550000071
x=[x,y] T
d=[d x ,d y ] T (9)
the weight function ω (x) is set to 1 herein for simplicity of operation; wherein J is defined as a = [ a ] x ,a y ] T And (3) expanding the coefficients into Taylor series to round off high-order terms, and linearizing the coefficients into the following form:
Figure BDA0002015847550000072
wherein: xi = [ xi ] xy ] T Let us say that:
Figure BDA0002015847550000073
x = a, substituting the above equation herein can result in partial differential equations in two directions as follows:
Figure BDA0002015847550000081
Figure BDA0002015847550000082
thus:
Figure BDA0002015847550000083
wherein:
Figure BDA0002015847550000084
to solve for the displacement d we let (13) equal 0, i.e.:
Figure BDA0002015847550000085
after transposition we get:
∫∫ W [J(x)-I(x)]g(x)ω(x)dx=-∫∫ W g T dg(x)ω(x)dx=-[∫∫ W g T g(x)ω(x)dx]d (15)
from the above equation, we can see that we must solve the following matrix equation:
Zd=e (16)
z is a 2*2 matrix, e is the column vector of 2*1:
Z=∫∫ W g T g(x)ω(x)dx
e=∫∫ W [J(x)-I(x)]g(x)ω(x)dx (17)
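For illustration, a minimal sketch of this Newton iteration for a single feature window follows; it assumes floating-point windows I and J, the gradients gx and gy of I, a crude nearest-pixel warp, and \omega(x) = 1 as above. In practice, the pyramidal KLT tracker of OpenCV (cv2.calcOpticalFlowPyrLK) implements the same Kanade-Lucas-Tomasi scheme:

```python
# Sketch: repeatedly solve Z * delta = e (equations (16)-(17)) and accumulate
# the displacement d of one feature window.
import numpy as np

def track_window(I, J, gx, gy, n_iters=10):
    d = np.zeros(2)
    # Z is built from the gradients of I, with the weight function set to 1.
    Z = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    for _ in range(n_iters):              # iteration count controls convergence
        # Resample J at x + d (nearest-pixel shift, for brevity only).
        Jd = np.roll(np.roll(J, -int(round(d[1])), axis=0),
                     -int(round(d[0])), axis=1)
        h = I - Jd                        # residual h(x) = I(x) - J(x + d)
        e = np.array([np.sum(h * gx), np.sum(h * gy)])
        d += np.linalg.solve(Z, e)        # Newton step: delta = Z^-1 e
    return d
```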
and S4, mapping the three-dimensional point cloud by adopting an S-shaped function, and converting the three-dimensional point cloud into RGB color information.
In the step, the point cloud is quickly colored by combining a pseudo color theory on the basis of the field relationship between the position information of the point cloud and the point cloud.
The false color image processing is also called false color processing, which is an operation of assigning RGB colors to gray values according to a specific criterion. The main application of pseudo-color is for human visual observation and interpretation of gray scale objects in an image or sequence of images. Because the color information of the point cloud data generated by the oblique image is not convenient for visual observation of engineering personnel and effective information in the point cloud is excavated, the point cloud is quickly colored by combining a pseudo-color technology by taking the field relationship among the point clouds as a basis.
In this embodiment, the three coordinate values of each point are first normalized; the mathematical description is formula (18) below, where x, y and z are the coordinates of the point in the world coordinate system and R, G and B are the pixel values of the corresponding display color system. Through this transformation, the point cloud can be colored quickly.

R = \frac{255}{1 + e^{-x}}, \qquad G = \frac{255}{1 + e^{-y}}, \qquad B = \frac{255}{1 + e^{-z}} \qquad (18)
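A minimal sketch of this transformation follows; centring and scaling the coordinates before applying the Sigmoid is an assumption of the sketch, since the text states only that the three coordinate values are normalized:

```python
# Sketch: map world coordinates (x, y, z) to (R, G, B) via the S-shaped function.
import numpy as np

def colorize(points: np.ndarray) -> np.ndarray:
    # points: (N, 3) array of x, y, z; returns (N, 3) uint8 RGB values.
    p = (points - points.mean(axis=0)) / points.std(axis=0)  # normalize each axis
    return (255.0 / (1.0 + np.exp(-p))).astype(np.uint8)     # formula (18)
```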
Based on the above embodiments, the invention extracts matching feature points from texture information, and the description of each feature point uses only three floating-point feature values; the dense feature point matching uses the Kanade-Lucas-Tomasi method of the optical flow constancy theory, so the matching algorithm converges quickly; finally, the three-dimensional world coordinates of the point cloud are mapped to RGB values through the Sigmoid function for rendering and coloring. The demand of the whole algorithm on hardware storage is therefore greatly reduced and its time complexity is controlled to a large extent, so the requirements of real-time performance and high precision can be met even with modest hardware.
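Putting the steps together, a hedged end-to-end sketch might look as follows; it reuses the hypothetical helpers of the earlier sketches, and triangulate_points is a stand-in for the multi-view reconstruction step, whose internals the text does not spell out:

```python
# Sketch: detect features (S2), match them by KLT (S3), reconstruct the cloud,
# then color it through the Sigmoid mapping (S4).
import cv2
import numpy as np

def color_point_cloud(img_a, img_b, roi_mask, triangulate_points):
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    eig = feature_value_image(gray_a.astype(np.float64))      # step S2
    pts = select_features(eig, roi_mask).astype(np.float32).reshape(-1, 1, 2)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    matched, status, _err = cv2.calcOpticalFlowPyrLK(gray_a, gray_b, pts, None)
    ok = status.ravel() == 1                                  # keep tracked points
    cloud = triangulate_points(pts[ok].reshape(-1, 2),        # stand-in for the
                               matched[ok].reshape(-1, 2))    # 3D reconstruction
    return cloud, colorize(cloud)                             # step S4
```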
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (6)

1. A point cloud rapid coloring method based on an oblique image is characterized by comprising the following steps:
acquiring image data of an object to be detected by adopting an oblique photography technology to serve as a target image;
extracting feature points from the target image by adopting a second-order partial derivative operator;
matching the feature points by adopting the optical flow constancy principle, and reconstructing a three-dimensional point cloud;
mapping the three-dimensional point cloud by adopting an S-shaped function, and converting the three-dimensional point cloud into RGB color information;
the steps are as follows: extracting feature points in the image by adopting a second-order partial derivative operator, wherein the method comprises the following steps:
and taking the target image as a data source to extract the characteristic texture, wherein the mathematical description of the characteristic texture extraction is as follows:
Figure FDA0004083449140000011
Figure FDA0004083449140000012
Figure FDA0004083449140000013
Figure FDA0004083449140000014
Figure FDA0004083449140000015
wherein:
Figure FDA0004083449140000016
height and width are height and width of a neighborhood window at x and y of a pixel point, and gradx and grady are gradient images of the original image in the transverse direction and the longitudinal direction;
calculating a characteristic value of each characteristic point by combining the formulas (1) to (5) and the gradient image;
the steps are as follows: in extracting the feature points in the target image by using the second-order partial derivative operator, the method further comprises the following steps:
screening out the characteristic points meeting the following conditions from the extracted characteristic points: the characteristic value is larger than a preset threshold value and is positioned in the region of interest;
the height and width values are both 7.
2. The method for rapidly coloring a point cloud based on oblique images according to claim 1, characterized in that the step of matching the feature points by adopting the optical flow constancy principle and reconstructing a three-dimensional point cloud comprises the following steps:
the process of feature point matching is: J(x) = I(x - d) + n(x), where n is noise.
3. The method for rapidly coloring a point cloud based on oblique images according to claim 1, characterized in that the step of mapping the three-dimensional point cloud with the S-shaped function and converting it into RGB color information comprises the following steps:
three coordinate values of the point cloud are normalized, and the mathematical formula is described as follows:
R = \frac{255}{1 + e^{-x}}, \qquad G = \frac{255}{1 + e^{-y}}, \qquad B = \frac{255}{1 + e^{-z}}
where x, y, z are coordinates of the point cloud in the world coordinate system, and R, G, B are pixel values corresponding to the display color system.
4. The method for rapidly coloring a point cloud based on oblique images according to claim 1, characterized in that the step of screening out the feature points meeting the conditions from the extracted feature points further comprises the following steps:
when screening the feature points, the feature points are first sorted in descending order of feature value.
5. The method for rapidly coloring point cloud based on oblique image according to claim 1, wherein the predetermined threshold is 128.
6. The method for rapidly coloring a point cloud based on oblique images according to claim 1, characterized in that, when the feature points located in the region of interest are screened out from the extracted feature points, the mask for extracting the image feature points located in the region of interest can be described by the following formula:

mask(x, y) = \begin{cases} 1, & (x, y) \in ROI \\ 0, & \text{otherwise} \end{cases}
CN201910262805.6A 2019-04-02 2019-04-02 Point cloud rapid coloring method based on oblique image Active CN109978982B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910262805.6A CN109978982B (en) 2019-04-02 2019-04-02 Point cloud rapid coloring method based on oblique image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910262805.6A CN109978982B (en) 2019-04-02 2019-04-02 Point cloud rapid coloring method based on oblique image

Publications (2)

Publication Number Publication Date
CN109978982A CN109978982A (en) 2019-07-05
CN109978982B (en) 2023-04-07

Family

ID=67082471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910262805.6A Active CN109978982B (en) 2019-04-02 2019-04-02 Point cloud rapid coloring method based on oblique image

Country Status (1)

Country Link
CN (1) CN109978982B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580703B (en) * 2019-09-10 2024-01-23 广东电网有限责任公司 Distribution line detection method, device, equipment and storage medium
CN112613107A (en) * 2020-12-26 2021-04-06 广东电网有限责任公司 Method and device for determining construction progress of tower project, storage medium and equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004221635A (en) * 2003-01-09 2004-08-05 Seiko Epson Corp Color converter, color converting method, color converting program, and print control apparatus
US8290295B2 (en) * 2009-03-03 2012-10-16 Microsoft Corporation Multi-modal tone-mapping of images
CN102629328B (en) * 2012-03-12 2013-10-16 北京工业大学 Probabilistic latent semantic model object image recognition method with fusion of significant characteristic of color
CN103325108A (en) * 2013-05-27 2013-09-25 浙江大学 Method for designing monocular vision odometer with light stream method and feature point matching method integrated
US20170032565A1 (en) * 2015-07-13 2017-02-02 Shenzhen University Three-dimensional facial reconstruction method and system
CN105629980B (en) * 2015-12-23 2018-07-31 深圳速鸟创新科技有限公司 A kind of one camera oblique photograph 3 d modeling system
CN106228609A (en) * 2016-07-09 2016-12-14 武汉广图科技有限公司 A kind of oblique photograph three-dimensional modeling method based on spatial signature information
WO2018056802A1 (en) * 2016-09-21 2018-03-29 Universiti Putra Malaysia A method for estimating three-dimensional depth value from two-dimensional images

Also Published As

Publication number Publication date
CN109978982A (en) 2019-07-05

Similar Documents

Publication Publication Date Title
CN108596101B (en) Remote sensing image multi-target detection method based on convolutional neural network
CN107451982B (en) High-canopy-density forest stand crown area acquisition method based on unmanned aerial vehicle image
CN112712535B (en) Mask-RCNN landslide segmentation method based on simulation difficult sample
CN111060076B (en) Method for planning routing of unmanned aerial vehicle inspection path and detecting foreign matters in airport flight area
Sun et al. Large-scale building height retrieval from single SAR imagery based on bounding box regression networks
CN109785371A (en) A kind of sun image method for registering based on normalized crosscorrelation and SIFT
CN112560619B (en) Multi-focus image fusion-based multi-distance bird accurate identification method
Yang et al. Fully constrained linear spectral unmixing based global shadow compensation for high resolution satellite imagery of urban areas
CN109978982B (en) Point cloud rapid coloring method based on oblique image
CN111915723A (en) Indoor three-dimensional panorama construction method and system
Chen et al. A color-guided, region-adaptive and depth-selective unified framework for Kinect depth recovery
Schunert et al. Grouping of persistent scatterers in high-resolution SAR data of urban scenes
Condorelli et al. A comparison between 3D reconstruction using nerf neural networks and mvs algorithms on cultural heritage images
CN114299137A (en) Laser spot center positioning method and test system
US11636649B2 (en) Geospatial modeling system providing 3D geospatial model update based upon predictively registered image and related methods
Belfiore et al. Orthorectification and pan-sharpening of worldview-2 satellite imagery to produce high resolution coloured ortho-photos
Boerner et al. Brute force matching between camera shots and synthetic images from point clouds
Deng et al. Automatic true orthophoto generation based on three-dimensional building model using multiview urban aerial images
US11816793B2 (en) Geospatial modeling system providing 3D geospatial model update based upon iterative predictive image registration and related methods
CN111145201B (en) Steady and fast unmanned aerial vehicle photogrammetry mark detection and positioning method
Guo et al. A cloud boundary detection scheme combined with aslic and cnn using zy-3, gf-1/2 satellite imagery
CN112766032A (en) SAR image saliency map generation method based on multi-scale and super-pixel segmentation
Villa et al. Robust Landmark and Hazard Detection on Small Body Surfaces Using Shadow Imagery
Zhu et al. Extraction of linear features based on beamlet transform
CN117437654B (en) Semantic recognition-based grid map analysis method, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant