CN116863357A - Unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method - Google Patents

Unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method

Info

Publication number
CN116863357A
CN116863357A (application CN202310961454.4A)
Authority
CN
China
Prior art keywords
image
remote sensing
unmanned aerial vehicle
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310961454.4A
Other languages
Chinese (zh)
Inventor
赵海盟
王强
李双
薛乐堂
王明春
张慧敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Aerospace Technology
Original Assignee
Guilin University of Aerospace Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Aerospace Technology filed Critical Guilin University of Aerospace Technology
Priority to CN202310961454.4A priority Critical patent/CN116863357A/en
Publication of CN116863357A publication Critical patent/CN116863357A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method, which comprises the following steps: 1) multi-temporal high-resolution unmanned aerial vehicle remote sensing image acquisition is completed for a typical reservoir dam; 2) calibration and correction processing is performed on the collected unmanned aerial vehicle remote sensing images; 3) reservoir dam image features are intelligently extracted by an enhanced graph cut algorithm based on a super-pixel fused edge operator; 4) after the reservoir dam image features are extracted, multi-temporal unmanned aerial vehicle remote sensing dam change detection is performed using the high-precision matching technology of the super-pixel surface block structure. The invention provides an unmanned-aerial-vehicle-based multi-temporal remote sensing dam image calibration and intelligent segmentation change detection method that achieves high detection accuracy economically and conveniently.

Description

Unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method
Technical Field
The invention relates to a method for detecting intelligent segmentation changes of dam images using unmanned aerial vehicle remote sensing technology, and belongs to the fields of computer vision, deep-learning image processing and remote sensing.
Background
Unmanned aerial vehicle remote sensing image change detection uses sensors carried on an unmanned aerial vehicle to acquire the change information of a given area over a given period. With the rapid development of unmanned aerial vehicles and airborne sensors in China in recent years, unmanned aerial vehicle remote sensing technology has gradually matured for change detection tasks. It offers wide coverage, strong periodicity and low cost, and is often used for monitoring land use/cover change, urban expansion planning, forest cutting and protection, glacier ablation, geological disaster prevention and the like. Change detection refers to extracting the change information of ground features in the same geographic area at different time phases, and is often used in city planning, land use, vegetation coverage, disaster detection and so on.
Remote Sensing (RS) image-based change detection is an important means of detecting surface changes and an effective method in urban planning, disaster prevention, environmental protection and agricultural surveys. Dam change monitoring is currently carried out with unmanned aerial vehicle remote sensing data, but such data are easily affected by the environment, have very high image resolution and a huge data volume, and there is as yet no mature scheme for detecting reservoir dam changes from remote sensing images.
Disclosure of Invention
In view of this, the invention aims to provide an unmanned aerial vehicle remote sensing dam image calibration and intelligent segmentation change detection method, so as to provide a mature scheme for reservoir dam change detection using remote sensing images.
To achieve the above purpose, the invention adopts the following technical scheme. An unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method comprises the following steps: 1) multi-temporal high-resolution unmanned aerial vehicle remote sensing image acquisition is completed for a typical reservoir dam; 2) calibration and correction processing is performed on the collected unmanned aerial vehicle remote sensing images; 3) reservoir dam body image features are intelligently extracted by an enhanced graph cut algorithm based on an LSC (Linear Spectral Clustering) super-pixel fused edge operator; 4) after the reservoir dam image features are extracted, multi-temporal unmanned aerial vehicle remote sensing dam change detection is performed using the high-precision matching technology of the super-pixel surface block structure.
In step 1), high-resolution unmanned aerial vehicle remote sensing image acquisition of a typical reservoir dam is completed by unmanned aerial vehicle remote sensing; the images are acquired at three flying heights of 50 m, 100 m and 200 m, so that the ground dam remote sensing image pixel sizes are in the ratio 1:2:4.
Step 2), the calibration and correction processing of the unmanned aerial vehicle remote sensing images after data acquisition, comprises the following steps: (1) correcting the remote sensing images captured by the unmanned aerial vehicle onboard camera with an improved radial-eccentric-image plane distortion model; (2) orthorectifying all corrected unmanned aerial vehicle remote sensing images; (3) rotating and cropping the images after the orthophotos are acquired; (4) color-unifying the multi-temporal unmanned aerial vehicle remote sensing images by histogram equalization and normalization.
Step 3), the intelligent extraction of reservoir dam body features by the enhanced graph cut algorithm based on the LSC super-pixel fused edge operator, comprises the following steps: (1) mapping each super-pixel block in the LSC super-pixel segmentation result to a point, following the idea of representing each patch by a point, rearranging the points into a matrix and introducing an edge operator; (2) inputting the new mapping matrix fused with the edge operator into a graph cut algorithm and constructing an enhanced graph cut algorithm for segmentation; (3) restoring the resolution of the super-pixel segmentation result according to the index of the pixels in each super-pixel block.
Step 4), the multi-temporal unmanned aerial vehicle remote sensing dam change detection performed after the reservoir dam image features are extracted, comprises the following steps: (1) extracting and matching feature points of the multi-temporal images, taking super pixels as feature points and adopting the high-precision matching technology of the super-pixel surface block structure; (2) calculating a homography transformation matrix from the matched features and unifying the images into one coordinate system through homography (perspective) transformation; (3) performing a difference operation on the dam extraction result mask images of the multiple time phases in the same coordinate system to obtain the dam change result mask image, i.e., the unmanned aerial vehicle remote sensing dam image change result.
By adopting the above technical scheme, the invention has the following advantages:
1. A set of unmanned-aerial-vehicle-based multi-temporal remote sensing dam image calibration and intelligent segmentation change detection methods is provided. Compared with existing methods, it achieves high detection accuracy economically and conveniently through a series of processes including calibration, orthorectification, color consistency adjustment, super-pixel fused edge segmentation and high-precision matching of the super-pixel structure.
2. The enhanced graph cut algorithm based on the LSC super-pixel fused edge operator can effectively perform intelligent segmentation and extraction on large-resolution unmanned aerial vehicle remote sensing images. Mapping the super pixels to points and rearranging them into a mapping matrix that is passed to the segmentation algorithm greatly reduces hardware consumption and running time, and fusing the edge operator into the enhanced graph cut algorithm increases the accuracy and smoothness of the segmented edges.
Drawings
In order to more clearly illustrate the specific embodiments of the present invention or the solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and other drawings can be obtained from them by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a technical roadmap of an unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method provided by the invention;
FIG. 2 is a flowchart of the Otsu method;
FIG. 3 is a flow chart of dyke target extraction of an enhanced graph cut algorithm of an LSC super-pixel fusion edge operator;
FIG. 4 is a flow chart of two-phase image dam change detection;
fig. 5 shows the detection result of two-phase change of the reservoir dam.
Detailed Description
The present invention will be described in detail with reference to the accompanying drawings and examples.
The invention relates to an unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method, which comprises the following steps:
1. A large amount of multi-temporal high-resolution image data is acquired for a typical reservoir by unmanned aerial vehicle remote sensing; on-site dam remote sensing image acquisition is realized at three flying heights of 50 m, 100 m and 200 m, so that the ground dam remote sensing image pixel sizes are in the ratio 1:2:4 (a short calculation illustrating this relation follows).
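As a quick check of the 1:2:4 relation, the following sketch computes the ground sampling distance (GSD) at the three flying heights; the focal length and pixel pitch are illustrative assumptions, not parameters of the patent's camera.

```python
# GSD scales linearly with flying height, so 50 m / 100 m / 200 m flights give
# ground pixel sizes in the ratio 1:2:4. Focal length and pixel pitch are assumed values.
focal_length_mm = 8.8          # assumed lens focal length
pixel_pitch_um = 2.4           # assumed sensor pixel pitch

for height_m in (50, 100, 200):
    gsd_cm = (pixel_pitch_um * 1e-6 * height_m) / (focal_length_mm * 1e-3) * 100
    print(f"flying height {height_m:3d} m -> GSD {gsd_cm:.2f} cm/pixel")
# With these values: about 1.36, 2.73 and 5.45 cm/pixel, i.e. exactly 1:2:4.
```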
2. After data acquisition is completed, the high-resolution unmanned aerial vehicle remote sensing images are calibrated and corrected through the following steps:
1) The images captured by the unmanned aerial vehicle onboard camera are corrected with an improved camera distortion correction model. Because of camera distortion, there is an offset between the actual point coordinates and the ideal point coordinates. An improved radial-eccentric-image plane distortion model is therefore established and used to correct the images captured by the onboard camera:

x = x' + Δx,  y = y' + Δy,  with  Δx = Δx_r + Δx_d + Δx_p  and  Δy = Δy_r + Δy_d + Δy_p,

where (x, y) are the ideal feature point coordinates, (x', y') are the actual feature point coordinates, Δx and Δy are the distortions in the x and y directions, Δx_r and Δy_r are the radial distortions in the x and y directions, Δx_d and Δy_d are the eccentric (decentering) distortions in the x and y directions, and Δx_p and Δy_p are the image plane distortions in the x and y directions. These components are polynomial functions of the image coordinates governed by the radial distortion coefficients k_1 and k_2, the eccentric distortion coefficients p_1 and p_2, and the image plane distortion coefficient b_1 together with its scale b_2. (A code sketch of this step follows.)
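As a rough illustration of this step, the sketch below applies OpenCV's built-in Brown-Conrady undistortion as a stand-in for the improved radial-eccentric-image plane model described above; the intrinsic matrix, distortion coefficients and file names are placeholders that would come from a prior camera calibration, not values from the patent.

```python
import cv2
import numpy as np

K = np.array([[3666.0, 0.0, 2736.0],    # fx, 0, cx  (placeholder intrinsics)
              [0.0, 3666.0, 1824.0],    # 0, fy, cy
              [0.0, 0.0, 1.0]])
dist = np.array([-0.12, 0.05, 0.0003, -0.0002, 0.0])   # k1, k2, p1, p2, k3 (placeholders)

img = cv2.imread("dam_raw.jpg")                         # raw UAV frame (hypothetical path)
h, w = img.shape[:2]
newK, _roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha=0)
undistorted = cv2.undistort(img, K, dist, None, newK)   # removes radial + decentering distortion
cv2.imwrite("dam_undistorted.jpg", undistorted)
```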
2) All corrected unmanned aerial vehicle remote sensing images are orthorectified. Orthorectification builds a digital elevation model containing three-dimensional coordinate information by analyzing the GPS (Global Positioning System) geographic information of multiple images in combination with the camera parameters, and finally generates an orthophoto.
3) After the orthophoto is acquired, the image is rotated and cropped through the following steps (a code sketch follows the list):
(1) An adaptive binarized image is obtained using the Otsu method. The flow of the Otsu method is shown in Fig. 2, and the specific steps are as follows:
For an image, let the binarization segmentation threshold between the foreground and the background be T. Let the proportion of pixels assigned to the foreground after binarization be ω_0, with average gray level μ_0, and the proportion of background pixels be ω_1, with average gray level μ_1. Denote the total average gray level of the image by μ and the between-class variance by g. Assuming the image has sufficient contrast, record the image size as M × N, the number of pixels with gray value smaller than T as N_0, and the number of pixels with gray value larger than T as N_1. Then:

ω_0 = N_0 / (M × N),  ω_1 = N_1 / (M × N),  N_0 + N_1 = M × N,  ω_0 + ω_1 = 1.

The pixel class relationship is:

μ = ω_0 · μ_0 + ω_1 · μ_1.

Substituting into the between-class variance gives:

g = ω_0 · (μ_0 - μ)^2 + ω_1 · (μ_1 - μ)^2 = ω_0 · ω_1 · (μ_0 - μ_1)^2.

The gray threshold T that yields the largest between-class variance g after traversing all gray values of the image is taken as the binarization threshold.
(2) Morphological denoising and adhesion removing are carried out, and the outline of a dam body is extracted;
(3) calculating the minimum circumscribed rectangle of the outline of the dam body;
(4) calculating the deflection angle of the rectangle, and rotating the image;
(5) cutting a main body of the reservoir dam body.
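A minimal sketch of steps (1) to (5) above, assuming OpenCV; the kernel size and file names are illustrative choices rather than values from the patent.

```python
import cv2
import numpy as np

gray = cv2.imread("ortho_dam.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical orthophoto
color = cv2.imread("ortho_dam.jpg")

# (1) adaptive binarization with the Otsu method (maximizes the between-class variance g)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# (2) morphological denoising / de-adhesion, then extract the dam body contour
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
clean = cv2.morphologyEx(clean, cv2.MORPH_CLOSE, kernel)
contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
dam = max(contours, key=cv2.contourArea)                    # largest blob taken as the dam body

# (3) minimum circumscribed (minimum-area) rectangle of the dam contour
(cx, cy), (rw, rh), angle = cv2.minAreaRect(dam)

# (4) rotate the image by the rectangle's deflection angle
# (the sign convention of `angle` varies across OpenCV versions)
h, w = gray.shape
M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
rotated = cv2.warpAffine(color, M, (w, h))

# (5) crop the reservoir dam main body around the now roughly axis-aligned rectangle
x0, y0 = max(0, int(cx - rw / 2)), max(0, int(cy - rh / 2))
crop = rotated[y0:y0 + int(rh), x0:x0 + int(rw)]
cv2.imwrite("dam_crop.jpg", crop)
```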
4) The multi-temporal unmanned aerial vehicle remote sensing images are color-unified by histogram equalization and normalization (see the sketch below). First, histogram equalization is applied to both the reference image and the target image to obtain the same normalized uniform histogram; a histogram mapping table between the two images is then derived from the uniform histogram (with approximate mapping where necessary). A pixel mapping table is obtained from the histogram mapping table, and every pixel value in the target image is recomputed through the pixel mapping table.
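The sketch below shows one plausible realization of this color unification: a cumulative-histogram (CDF) mapping from the target image to the reference image, applied per BGR channel. The file names are hypothetical, and the CDF-matching formulation is an assumption standing in for the patent's exact mapping-table construction.

```python
import cv2
import numpy as np

def match_channel(target: np.ndarray, reference: np.ndarray) -> np.ndarray:
    # cumulative histograms play the role of the normalized uniform histograms
    t_hist = np.bincount(target.ravel(), minlength=256).astype(np.float64)
    r_hist = np.bincount(reference.ravel(), minlength=256).astype(np.float64)
    t_cdf = np.cumsum(t_hist) / t_hist.sum()
    r_cdf = np.cumsum(r_hist) / r_hist.sum()
    # histogram mapping table: for each target level, the closest reference level
    mapping = np.searchsorted(r_cdf, t_cdf).clip(0, 255).astype(np.uint8)
    return mapping[target]                 # pixel mapping table applied to every pixel

reference = cv2.imread("phase1_reference.jpg")   # image used as the colour standard
target = cv2.imread("phase2_target.jpg")         # image whose colours are adjusted
matched = cv2.merge([match_channel(target[..., c], reference[..., c]) for c in range(3)])
cv2.imwrite("phase2_color_unified.jpg", matched)
```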
3. The enhanced graph cut algorithm based on the LSC super-pixel fused edge operator intelligently extracts the reservoir dam body image features; the specific flow is shown in Fig. 3 and comprises the following steps (a code sketch follows the steps):
1) Each super-pixel block in the LSC super-pixel segmentation result is mapped to a point and the points are rearranged into a matrix, following the idea of representing each patch by a single point.
2) Inputting the new mapping matrix integrated with the edge operator into a graph cut algorithm, constructing an enhanced graph cut algorithm for segmentation, and specifically comprising the following steps:
provided with a three-channel color imageAll pixel points of the image are constructed into a gray value matrix and are assembledFor the gray value of a pixel in a BGR color channel, whereIs an index number; the image being segmented by a set of fuzzy valuesThe representation is that for the classification problemE {0,1}, where 0 means that the current pixel belongs to the background and 1 means that it belongs to the foreground.
An edge operator is introduced into the graph cut algorithm. Let G_x be the gradient in the x direction and G_y the gradient in the y direction, and define e(m, n) as the probability that an edge exists between pixels m and n; the gradient magnitude approximation |G_x| + |G_y| is used as the edge operator.

A new smoothing term V(α_m, α_n) is then defined that combines the color difference of neighboring pixels with this edge operator. Here z_m and z_n are the gray-value vectors of the two pixels in BGR color space, and the L2 norm ||z_m - z_n|| measures the similarity of their color features: the larger the difference between the two pixels, the more likely they are to be assigned different labels. The parameter β adjusts the influence of the contrast between adjacent pixels when the image contrast differs, and w is a weighting factor that balances the gray-level and gradient intensities in the smoothing function; when w is set to 1 the term reduces to that of the original graph cut algorithm, and it is generally set to 0.5.
The data term D(α_n) of the graph cut algorithm is defined from a specific threshold T and the probability P(α_n) that the current pixel belongs to the foreground or the background.

Finally, the energy function of the reconstructed enhanced graph cut algorithm is

E(A) = Σ_n D(α_n) + λ · Σ_(m,n) V(α_m, α_n),

where λ is a balance factor used to balance the weights of the data term and the smoothing term, typically taken as 0.5. The segmentation result of the enhanced graph cut algorithm is more robust and accurate.
3) The full resolution is restored according to the pixel index within each super-pixel block of the segmentation result.
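The following is a minimal sketch of steps 1) to 3), assuming opencv-contrib-python (for the cv2.ximgproc LSC superpixels) and the PyMaxflow package for the min-cut; the sigmoid foreground probability, the threshold T and the weights beta, w and lam are illustrative stand-ins for the patent's data term and tuning, not its exact formulation.

```python
import cv2
import numpy as np
import maxflow

def enhanced_graphcut_superpixels(bgr, region_size=40, lam=0.5, w=0.5, beta=0.01, T=120):
    # 1) LSC superpixel segmentation; each superpixel will become one graph node
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    lsc = cv2.ximgproc.createSuperpixelLSC(lab, region_size)
    lsc.iterate(10)
    labels = lsc.getLabels()                          # H x W superpixel id map
    n_sp = lsc.getNumberOfSuperpixels()

    # "point for patch": one mean BGR vector and one mean gradient value per superpixel
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    grad = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0)) + np.abs(cv2.Sobel(gray, cv2.CV_32F, 0, 1))
    grad = grad / (grad.max() + 1e-6) * 255.0         # |Gx| + |Gy| edge operator, normalised
    mean_bgr = np.zeros((n_sp, 3), np.float32)
    mean_grad = np.zeros(n_sp, np.float32)
    for k in range(n_sp):
        m = labels == k
        mean_bgr[k] = bgr[m].mean(axis=0)
        mean_grad[k] = grad[m].mean()

    # adjacency between superpixels (horizontal / vertical pixel neighbours)
    pairs = {tuple(sorted(p)) for p in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()) if p[0] != p[1]}
    pairs |= {tuple(sorted(p)) for p in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()) if p[0] != p[1]}

    # 2) enhanced graph cut over the superpixel nodes
    g = maxflow.Graph[float]()
    nodes = g.add_nodes(n_sp)
    mean_gray = mean_bgr.mean(axis=1)
    for k in range(n_sp):
        p_fg = 1.0 / (1.0 + np.exp(-(mean_gray[k] - T) / 10.0))   # illustrative data term
        # t-links carry the data term; the sink side (segment 1) is treated as foreground
        g.add_tedge(nodes[k], -np.log(p_fg + 1e-6), -np.log(1.0 - p_fg + 1e-6))
    for i, j in pairs:
        color = float(np.sum((mean_bgr[i] - mean_bgr[j]) ** 2))   # ||z_m - z_n||^2
        edge = 0.5 * (mean_grad[i] + mean_grad[j])                # edge operator between the pair
        weight = lam * np.exp(-beta * (w * color + (1.0 - w) * edge))
        g.add_edge(nodes[i], nodes[j], weight, weight)
    g.maxflow()

    # 3) restore the full resolution via the pixel indices of each superpixel block
    sp_label = np.array([g.get_segment(nodes[k]) for k in range(n_sp)], np.uint8)
    return (sp_label[labels] * 255).astype(np.uint8)              # binary mask at original size
```

Collapsing every superpixel to a single node keeps the graph size proportional to the number of superpixels rather than the number of pixels, which is what makes the approach tractable on large-resolution unmanned aerial vehicle images.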
4. After the reservoir dam features are extracted, the multi-temporal unmanned aerial vehicle remote sensing image dam change detection specifically comprises the following steps:
1) Feature points of the multi-temporal images are extracted and matched: super pixels are taken as feature points, and the high-precision matching technology of the super-pixel surface block structure is adopted for feature point matching. An energy cost function over the surface blocks inside the super-pixel structure is built by drawing on the photometric consistency constraint used in optical-flow region matching and the pixel-wise cost matching used in three-dimensional dense matching (a sketch of one plausible cost follows).
In this cost function, S denotes a region of the super-pixel structure, i is the arrangement number indicating the position in the super-pixel structure sequence, C denotes the color information, G denotes the gradient information, q denotes the pixel corresponding to the reference pixel p, and ρ(·) is a penalty function.
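A small sketch of one plausible form of such a cost, combining truncated color and gradient differences in the spirit of photometric-consistency matching; the weights gamma, tau_c and tau_g are illustrative assumptions and do not reproduce the patent's penalty function.

```python
import numpy as np

def patch_cost(ref_bgr, tgt_bgr, ref_grad, tgt_grad, gamma=0.9, tau_c=10.0, tau_g=2.0):
    """Cost between a reference super-pixel patch and a candidate patch.

    ref_bgr / tgt_bgr : (K, 3) BGR vectors of the K corresponding pixels in each patch
    ref_grad / tgt_grad : (K,) gradient magnitudes at the same pixels
    """
    d_color = np.abs(np.asarray(ref_bgr, float) - np.asarray(tgt_bgr, float)).sum(axis=1)
    d_grad = np.abs(np.asarray(ref_grad, float) - np.asarray(tgt_grad, float))
    cost = (1.0 - gamma) * np.minimum(d_color, tau_c) + gamma * np.minimum(d_grad, tau_g)
    return float(cost.sum())    # aggregated cost over the whole patch
```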
2) A homography transformation matrix is calculated from the matched super-pixel feature points, and the images are unified into one coordinate system through the homography (perspective) transformation, based on the relation I_2 = H·I_1 applied in homogeneous coordinates, where I_1 is the phase-1 image, I_2 is the phase-2 image, and H is the homography matrix calculated from four pairs of matching points of the two images. Unifying the images into one coordinate system specifically comprises the following steps:
(1) A homography transformation matrix is calculated from the super-pixel matching features of the multi-temporal images;
definition of homography matrixRepresenting a mapping between two planes, the expression is as follows:
setting arbitrary point in left graph in two-dimensional imageAnd matching points in the corresponding right graphThe mapping relation comprises the following steps:
writing transforms into matricesForm:
from the above equation, a set of matched feature points can obtain two sets of equations, and homography matrixThere are 9 unknowns. However, in practice there are only 8 degrees of freedom, and constraints can be added in general such thatBecause of the presence of:
wherein As scale factors, there are:
as can be seen from the above equation, the mapping of points has no effect at all after adding one scale factor.
Order theCan makeThe method comprises the following steps:
obtaining homography matrixIn practice there are only 8 degrees of freedom, at least onlyAnd 4 pairs of matching points which are not collinear can be solved. Because more than 4 pairs of excellent matching points can be found in the actual characteristic point matching process, the overdetermined problem is generally solved by constructing a maximum likelihood function by using a least square method so as to improve the registration accuracy;
(2) after the homography matrix is calculated, each pixel of the image is reprojected through the homography transformation matrix, and a new projection image is generated;
(3) The pixels of the remapped image are interpolated, so that the images are unified into one coordinate system (see the sketch below).
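A brief sketch of sub-steps (1) to (3), assuming OpenCV; pts1 and pts2 stand for the matched super-pixel feature coordinates produced by the matching step, and the plain least-squares fit corresponds to solving the over-determined system described above.

```python
import cv2
import numpy as np

def register_phase1_to_phase2(img1, pts1, pts2, out_shape):
    """pts1, pts2: (N, 2) arrays of matched points, N >= 4 and not collinear."""
    # (1) homography from the matched points, solved by least squares over all pairs
    H, _ = cv2.findHomography(pts1.astype(np.float32), pts2.astype(np.float32), 0)
    # (2) + (3) re-project every pixel of image 1 through H and interpolate onto the new grid
    h, w = out_shape[:2]
    warped = cv2.warpPerspective(img1, H, (w, h), flags=cv2.INTER_LINEAR)
    return warped, H
```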
3) A difference operation is performed on the dam extraction result mask images of the multiple time phases in the same coordinate system to obtain the dam change result mask image, i.e., the unmanned aerial vehicle remote sensing dam image change result.
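Finally, a sketch of the mask difference operation, assuming both extraction masks are single-channel 0/255 images already registered to the same coordinate system; the file names and the median filter used to suppress residual registration noise are illustrative.

```python
import cv2

mask1 = cv2.imread("dam_mask_phase1_registered.png", cv2.IMREAD_GRAYSCALE)   # hypothetical paths
mask2 = cv2.imread("dam_mask_phase2.png", cv2.IMREAD_GRAYSCALE)

change_mask = cv2.absdiff(mask1, mask2)        # pixels where the dam extraction differs between phases
change_mask = cv2.medianBlur(change_mask, 5)   # remove isolated noise along the registration seams
cv2.imwrite("dam_change_mask.png", change_mask)
```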
The foregoing description of the preferred embodiments of the present invention is not intended to be limiting, but it will be understood that all modifications, equivalents, or improvements within the spirit and scope of the present invention are intended to be included within the scope of the present invention as defined by the following claims.

Claims (5)

1. An unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method comprises the following steps:
1) Completing multi-time-phase high-resolution unmanned aerial vehicle remote sensing image acquisition on a typical reservoir dam;
2) Performing calibration correction processing on the collected unmanned aerial vehicle remote sensing image;
3) Intelligent extraction is carried out on the image characteristics of the reservoir dam body based on an enhanced graph cut algorithm of the super-pixel fusion edge operator;
4) After the image features of the reservoir dykes and dams are extracted, the high-precision matching technology of the super pixel surface block structure is utilized to perform multi-temporal unmanned aerial vehicle remote sensing dykes and dams change detection.
2. The method of claim 1, wherein unmanned aerial vehicle remote sensing image acquisition of the dam is achieved by setting three flying heights of 50 m, 100 m and 200 m, so that the ground dam remote sensing image pixel sizes are in the ratio 1:2:4.
3. The method of claim 1, wherein the step of performing calibration correction processing on the unmanned aerial vehicle remote sensing image comprises:
a) Performing calibration correction processing on the unmanned aerial vehicle remote sensing image; because of distortion in the unmanned aerial vehicle remote sensing image, there is an offset between the actual point coordinates and the ideal point coordinates, and an improved radial-eccentric-image plane distortion correction model is established as x = x' + Δx, y = y' + Δy, with Δx = Δx_r + Δx_d + Δx_p and Δy = Δy_r + Δy_d + Δy_p, where (x, y) are the ideal feature point coordinates, (x', y') are the actual feature point coordinates, Δx and Δy are the distortions in the x and y directions, Δx_r and Δy_r are the radial distortions in the x and y directions, Δx_d and Δy_d are the eccentric distortions in the x and y directions, Δx_p and Δy_p are the image plane distortions in the x and y directions, k_1 and k_2 are the radial distortion coefficients, p_1 and p_2 are the eccentric distortion coefficients, and b_1 is the image plane distortion coefficient with b_2 its scale;
b) Orthorectifying the corrected unmanned aerial vehicle remote sensing images;
c) After the orthophoto is acquired, the image is rotated and cropped, and finally the multi-temporal unmanned aerial vehicle remote sensing images are color-unified by histogram equalization and normalization.
4. The method of claim 1, wherein an enhanced graph cut algorithm based on an LSC (Linear Spectral Clustering) super-pixel fused edge operator performs target extraction on the reservoir dam image features; the steps of the extraction algorithm include:
a) Mapping each super-pixel block in the LSC super-pixel segmentation result to a point, following the idea of representing each patch by a point, rearranging the points into a matrix, and introducing the following edge operator: with G_x the gradient in the x direction and G_y the gradient in the y direction, e(m, n) is defined as the probability that an edge exists between pixels m and n, and the gradient magnitude approximation |G_x| + |G_y| is used as the edge operator;
b) Inputting the new mapping matrix fused with the edge operator into a graph cut algorithm and constructing an enhanced graph cut algorithm for segmentation; the enhanced segmentation is characterized by defining a new smoothing term V(α_m, α_n), where z_m and z_n are the gray-value vectors of the two pixels in BGR color space and the L2 norm ||z_m - z_n|| measures the similarity of their color features (the larger the difference between the two pixels, the more likely they are to be assigned different labels), β adjusts the influence of the contrast between adjacent pixels when the image contrast differs, and w is a weighting factor balancing the gray-level and gradient intensities in the smoothing function, reducing to the original graph cut algorithm when w is set to 1 and generally set to 0.5; the data term D(α_n) of the graph cut algorithm is defined from a specific threshold T and the probability P(α_n) that the current pixel belongs to the foreground or the background, where z_n is the gray value of pixel n in the BGR color channels and α_n ∈ {0: background, 1: foreground}; the energy function of the reconstructed enhanced graph cut algorithm is E(A) = Σ_n D(α_n) + λ·Σ_(m,n) V(α_m, α_n), where λ is a balance factor used to balance the weights of the data term and the smoothing term, typically taken as 0.5, and the segmentation result of the enhanced graph cut algorithm is more robust and accurate;
c) Restoring the resolution according to the index of the pixels in each super-pixel block, based on the segmentation result of the previous step.
5. The method of claim 1, wherein the step of performing dyke change detection on the multi-temporal unmanned aerial vehicle remote sensing images comprises:
a) Extracting and matching feature points of the multi-temporal images: super pixels are taken as feature points, and the high-precision matching technology of the super-pixel surface block structure is adopted for feature point matching; the key of this matching technology is to establish an energy cost function of the surface blocks inside the super-pixel structure, in which S denotes a region of the super-pixel structure, i is the arrangement number indicating the position in the super-pixel structure sequence, C denotes the color information, G denotes the gradient information, q denotes the pixel corresponding to the reference pixel p, and ρ(·) is a penalty function;
b) A homography transformation matrix is calculated through the matched super-pixel characteristic points, and the images are unified under a coordinate system through homography transformation;
c) A difference operation is performed on the dam extraction result mask images of the multiple time phases in the same coordinate system to obtain the dam change result mask image, i.e., the unmanned aerial vehicle remote sensing dam image change result.
CN202310961454.4A 2023-08-02 2023-08-02 Unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method Pending CN116863357A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310961454.4A CN116863357A (en) 2023-08-02 2023-08-02 Unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310961454.4A CN116863357A (en) 2023-08-02 2023-08-02 Unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method

Publications (1)

Publication Number Publication Date
CN116863357A true CN116863357A (en) 2023-10-10

Family

ID=88232296

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310961454.4A Pending CN116863357A (en) 2023-08-02 2023-08-02 Unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method

Country Status (1)

Country Link
CN (1) CN116863357A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117853334A (en) * 2024-03-07 2024-04-09 中国人民解放军海军青岛特勤疗养中心 Medical image reconstruction method and system based on DICOM image
CN117853334B (en) * 2024-03-07 2024-05-14 中国人民解放军海军青岛特勤疗养中心 Medical image reconstruction method and system based on DICOM image
CN118505702A (en) * 2024-07-18 2024-08-16 自然资源部第一海洋研究所 Rapid calculation method for sandy coast erosion amount based on multi-period remote sensing image
CN118644974A (en) * 2024-08-19 2024-09-13 中铁水利水电规划设计集团有限公司 Automatic monitoring and early warning system and method for dyke seepage

Similar Documents

Publication Publication Date Title
CN108573276B (en) Change detection method based on high-resolution remote sensing image
CN111079556A (en) Multi-temporal unmanned aerial vehicle video image change area detection and classification method
CN108446634B (en) Aircraft continuous tracking method based on combination of video analysis and positioning information
CN116863357A (en) Unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method
US10366501B2 (en) Method and apparatus for performing background image registration
CN104156536B (en) The visualization quantitatively calibrating and analysis method of a kind of shield machine cutter abrasion
CN109949340A (en) Target scale adaptive tracking method based on OpenCV
CN103218787B (en) Multi-source heterogeneous remote sensing image reference mark automatic acquiring method
CN111047695B (en) Method for extracting height spatial information and contour line of urban group
CN110610505A (en) Image segmentation method fusing depth and color information
CN113223045B (en) Vision and IMU sensor fusion positioning system based on dynamic object semantic segmentation
CN110197173B (en) Road edge detection method based on binocular vision
CN109087323A (en) A kind of image three-dimensional vehicle Attitude estimation method based on fine CAD model
CN112016478B (en) Complex scene recognition method and system based on multispectral image fusion
CN107341781A (en) Based on the SAR image correcting methods for improving the matching of phase equalization characteristic vector base map
CN112465849B (en) Registration method for laser point cloud and sequence image of unmanned aerial vehicle
CN109461132A (en) SAR image automatic registration method based on feature point geometric topological relation
Karsli et al. Automatic building extraction from very high-resolution image and LiDAR data with SVM algorithm
CN108647658A (en) A kind of infrared imaging detection method of high-altitude cirrus
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion
CN106709432B (en) Human head detection counting method based on binocular stereo vision
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
CN112183434A (en) Building change detection method and device
CN108509835B (en) PolSAR image ground object classification method based on DFIC super-pixels

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20231010)