CN115937527A - Image segmentation method and device for unmanned aerial vehicle image - Google Patents
- Publication number: CN115937527A
- Application number: CN202211548343.2A
- Authority: CN (China)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- Y02T10/40 — Engine management systems (under Y02T10/10, ICE-based vehicles; Y02T, climate change mitigation technologies related to transportation)
Abstract
The invention provides an image segmentation method and device for unmanned aerial vehicle images, relating to the technical field of image processing. The method comprises the following steps: determining a plurality of segmentation thresholds corresponding to a plurality of sub-images according to the gradient magnitude distribution in a gradient histogram corresponding to a first image acquired by an unmanned aerial vehicle and the numbers of pixels of the plurality of sub-images in the first image; performing image segmentation on the plurality of sub-images according to the plurality of segmentation thresholds to obtain a plurality of image segmentation results corresponding to the sub-images; determining a repetition region between the first image and a second image acquired by the unmanned aerial vehicle according to flight data of the unmanned aerial vehicle, the first image and the second image being acquired by the unmanned aerial vehicle at adjacent time intervals; and performing image matching on the repetition region according to the plurality of image segmentation results to obtain a recognition result for the repetition region. The method and device thereby address the slow processing speed and inaccurate segmentation results of image segmentation techniques in the prior art.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an image segmentation method and device for an unmanned aerial vehicle image.
Background
Image segmentation is the process of subdividing a digital image into multiple image sub-regions (sets of pixels, also called superpixels). The purpose of image segmentation is to simplify or change the representation of the image so that it is easier to understand and analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in an image. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics.
In the prior art, when image segmentation is performed on an image with small gray-level differences or on objects with rich texture features, the segmentation result is inaccurate and over-segmentation can occur. Real-time requirements in image processing systems also seriously affect the feasibility of threshold segmentation methods in practical engineering applications.
For example, in the flight process of the unmanned aerial vehicle, for the image acquired by the unmanned aerial vehicle in real time, the image segmentation technology in the prior art is slow in processing speed, and the segmentation result is inaccurate.
Disclosure of Invention
The embodiment of the invention provides an image segmentation method and device for unmanned aerial vehicle images, aiming to solve the slow processing speed and inaccurate segmentation results of image segmentation techniques in the prior art.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image segmentation method for an unmanned aerial vehicle image, where the method includes: determining a plurality of segmentation thresholds corresponding to a plurality of sub-images according to gradient amplitude distribution in a gradient histogram corresponding to a first image acquired by an unmanned aerial vehicle and the sum of the number of pixels corresponding to the plurality of sub-images in the first image; respectively carrying out image segmentation on the plurality of sub-images according to the plurality of segmentation thresholds so as to obtain a plurality of image segmentation results corresponding to the plurality of sub-images; determining a repetition region between a first image and a second image acquired by the drone according to flight data of the drone, the first image and the second image being acquired by the drone at adjacent time intervals; and carrying out image matching on the repeated region according to the plurality of image segmentation results to obtain the identification result of the repeated region.
Further, the determining, according to the gradient magnitude distribution in the gradient histogram corresponding to the first image acquired by the unmanned aerial vehicle and the sum of the number of pixels corresponding to the plurality of sub-images in the first image, a plurality of segmentation thresholds corresponding to the plurality of sub-images includes: determining pixel maximum gradient variances corresponding to the sub-images according to the pixel maximum gradient in the first image and the pixel number corresponding to the sub-images; and determining a first threshold value and a second threshold value according to the variance of the pixel maximum gradient and the pixel maximum gradient.
Further, the performing image segmentation on the plurality of sub-images according to the plurality of segmentation thresholds to obtain a plurality of image segmentation results corresponding to the plurality of sub-images includes: comparing the gray-scale value of each pixel in the sub-image with the first threshold and the second threshold to determine a category corresponding to each pixel in the sub-image; and determining an image segmentation result corresponding to the sub-image according to the category corresponding to each pixel in the sub-image.
Further, the determining a repetition region between the first image and the second image according to the flight data of the drone includes: determining a first offset of the unmanned aerial vehicle in a first direction, a second offset of the unmanned aerial vehicle in a second direction and a resolution corresponding to the unmanned aerial vehicle according to the flight data of the unmanned aerial vehicle; determining a first pixel offset in the first direction and a second pixel offset in the second direction according to the first offset, the second offset and the resolution; and determining the repetition region according to the first pixel offset and the second pixel offset.
Further, according to the flight data of the unmanned aerial vehicle, determining a first offset of the unmanned aerial vehicle in a first direction, a second offset of the unmanned aerial vehicle in a second direction, and a resolution corresponding to the unmanned aerial vehicle includes: determining the first offset and the second offset according to the position of the unmanned aerial vehicle when shooting the first image, the flight speed, the flight direction and the acquisition time interval of the unmanned aerial vehicle; and determining the resolution corresponding to the unmanned aerial vehicle according to the height of the unmanned aerial vehicle, the focal length of the camera and the image width of the first image.
In a second aspect, an embodiment of the present invention further provides an image segmentation apparatus for an image of an unmanned aerial vehicle, where the apparatus includes: the unmanned aerial vehicle image segmentation device comprises a first determining module, a second determining module and a segmentation module, wherein the first determining module is used for determining a plurality of segmentation threshold values corresponding to a plurality of sub-images in a first image according to a gradient histogram and the number of pixels corresponding to the first image acquired by the unmanned aerial vehicle; the image segmentation module is used for respectively carrying out image segmentation on the plurality of sub-images according to the plurality of segmentation thresholds so as to obtain a plurality of image segmentation results corresponding to the plurality of sub-images; a second determining module, configured to determine, according to flight data of the drone, a repetition region between a first image and a second image acquired by the drone, where the first image and the second image are acquired by the drone at adjacent time intervals; and the image matching module is used for carrying out image matching on the repeated region according to the plurality of image segmentation results so as to obtain the identification result of the repeated region.
Further, the first determining module comprises: the first determining submodule is used for determining pixel maximum gradient variances corresponding to the sub-images according to the pixel maximum gradient in the first image and the pixel quantity corresponding to the sub-images; and the second determining submodule is used for determining a first threshold and a second threshold according to the pixel maximum gradient variance and the pixel maximum gradient.
Further, the image segmentation module comprises: the comparison submodule is used for comparing the gray value of each pixel in the sub-image with the first threshold and the second threshold so as to determine the category corresponding to each pixel in the sub-image; and the third determining submodule is used for determining an image segmentation result corresponding to the sub-image according to the category corresponding to each pixel in the sub-image.
Further, the second determining module includes: the fourth determining sub-module is used for determining a first offset of the unmanned aerial vehicle in the first direction, a second offset of the unmanned aerial vehicle in the second direction and a resolution corresponding to the unmanned aerial vehicle according to the flight data of the unmanned aerial vehicle; a fifth determining submodule, configured to determine a first pixel offset in the first direction and a second pixel offset in the second direction according to the first offset, the second offset, and the resolution; a sixth determining submodule, configured to determine the repetition region according to the first pixel offset and the second pixel offset.
Further, the fourth determination submodule includes: the first determining unit is used for determining the first offset and the second offset according to the position of the unmanned aerial vehicle when the unmanned aerial vehicle shoots the first image, the flight speed, the flight direction and the acquisition time interval of the unmanned aerial vehicle; and the second determining unit is used for determining the resolution corresponding to the unmanned aerial vehicle according to the height of the unmanned aerial vehicle, the focal length of the camera and the image width of the first image.
In a third aspect, an embodiment of the present invention additionally provides an electronic device, including: memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the method of image segmentation of a drone image according to the previous first aspect.
In a fourth aspect, an embodiment of the present invention additionally provides a readable storage medium, where a computer program is stored, and when executed by a processor, the computer program implements the steps of the image segmentation method for an image of a drone according to the first aspect.
In the embodiment of the invention, a plurality of segmentation thresholds corresponding to a plurality of sub-images in a first image are determined according to a gradient histogram and the number of pixels corresponding to the first image acquired by an unmanned aerial vehicle; respectively carrying out image segmentation on the plurality of sub-images according to the plurality of segmentation thresholds so as to obtain a plurality of image segmentation results corresponding to the plurality of sub-images; determining a repetition region between a first image and a second image acquired by the unmanned aerial vehicle according to flight data of the unmanned aerial vehicle, wherein the first image and the second image are acquired by the unmanned aerial vehicle at adjacent time intervals; and carrying out image matching on the repeated region according to the plurality of image segmentation results to obtain the identification result of the repeated region. In the embodiment, the segmentation threshold corresponding to each sub-image is determined according to the gradient histogram corresponding to the first image and the pixel number, so that each sub-image has the segmentation threshold adaptive to the sub-image, and then the sub-image is segmented based on the segmentation threshold, so that the image identification and classification in the sub-image are realized, and the accuracy of image segmentation in the sub-image is improved; furthermore, the repetitive region of the second image adjacent to the first image is determined based on the flight data of the unmanned aerial vehicle, and the image matching is performed on the multiple image segmentation results according to the repetitive region, so that the calculation amount of image identification and classification in the repetitive region is reduced, and the processing speed of the repetitive region in the second image is increased. 
The invention thus addresses the slow processing speed and inaccurate segmentation results of image segmentation techniques in the prior art.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of an image segmentation method for an image of an unmanned aerial vehicle according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an image segmentation apparatus for an unmanned aerial vehicle image in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
According to the embodiment of the invention, an image segmentation method for an unmanned aerial vehicle image is provided, and as shown in fig. 1, the method specifically comprises the following steps:
s102, determining a plurality of segmentation thresholds corresponding to a plurality of sub-images according to gradient amplitude distribution in a gradient histogram corresponding to a first image acquired by an unmanned aerial vehicle and the sum of the number of pixels corresponding to the plurality of sub-images in the first image;
s104, respectively carrying out image segmentation on the plurality of sub-images according to the plurality of segmentation thresholds so as to obtain a plurality of image segmentation results corresponding to the plurality of sub-images;
s106, determining a repetition region between the first image and a second image acquired by the unmanned aerial vehicle according to flight data of the unmanned aerial vehicle, wherein the first image and the second image are acquired by the unmanned aerial vehicle at adjacent time intervals;
and S108, carrying out image matching on the repeated region according to the plurality of image segmentation results to obtain the identification result of the repeated region.
In this embodiment, the unmanned aerial vehicle acquires an image at every preset time interval. During image acquisition, the flight speed, flight height, and flight direction of the unmanned aerial vehicle are treated as constant. The corresponding shooting parameters of the unmanned aerial vehicle, such as the focal length, aperture, and shutter speed of the camera, also remain unchanged.
In this embodiment, the first image and the second image are adjacent images acquired by the drone at adjacent time intervals. There is a certain overlap area between the first image and the second image.
In practical application scenarios, the segmentation thresholds include a high threshold and a low threshold, where the high threshold must be selected outside the non-edge region of the gradient histogram; otherwise much false edge noise is introduced into the final image segmentation result.
In this embodiment, a non-edge region in the first image is determined according to the gradient magnitude distribution of the first image in the gradient histogram, and then a segmentation threshold corresponding to each sub-image is further determined according to the sum of the number of pixels corresponding to each sub-image in the first image.
In the present embodiment, the segmentation threshold includes, but is not limited to, a high threshold and a low threshold. The segmentation threshold may also comprise a single threshold.
And then performing threshold segmentation on each sub-image based on a segmentation threshold corresponding to each sub-image, solving an optimal threshold according to a preset rule based on the gray scale characteristics of the sub-images, comparing the gray scale value of each pixel in the sub-images with the segmentation threshold, and finally dividing each pixel into a proper category according to a comparison result.
Threshold segmentation can divide the image into several target and background regions using one or more optimal thresholds. If there is only a single optimal threshold, it is single-threshold segmentation, also called binarization of the image; the method of searching for multiple thresholds is called image multi-valued segmentation. The purpose of threshold segmentation is to partition the set of pixels according to information in the image (such as the gray value of each pixel and the gray-value distribution of neighboring pixels) so that each resulting subset forms a region corresponding to the real scene, with homogeneous behavior within each region.
Next, after determining the image segmentation result corresponding to each sub-image in the first image, determining the overlapping area in the first image and the second image according to the flight data of the unmanned aerial vehicle. The flight data in this embodiment includes, but is not limited to, the flight speed, flight altitude, flight direction, and the like of the drone. In addition, the flight data may also include, but is not limited to, corresponding shooting parameters of the drone, such as focal length, aperture, and shutter speed of the camera.
Finally, image matching is carried out on the repeated area according to the image segmentation result of each sub-image in the first image, for example, through analysis on the corresponding relation, similarity and consistency of the image content, characteristics, structure, relation, texture, gray level and the like, an image target similar to that in the first image in the repeated area is sought. The corresponding image matching method has a mature algorithm in the prior art, and details thereof are not described in this embodiment.
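The patent leaves the matching algorithm to mature prior art. As one illustrative possibility (an assumption, not the claimed method), a brute-force normalized cross-correlation search in Python with NumPy might look like the following, where `region` stands for the repeated region of the second image and `template` for a patch from the first image; both names are hypothetical:

```python
import numpy as np

def best_match(region, template):
    """Brute-force normalized cross-correlation of template over region.

    Returns the (row, col) of the best match and its NCC score in [-1, 1].
    """
    rh, rw = region.shape
    th, tw = template.shape
    t = template - template.mean()            # zero-mean template
    best, best_score = (0, 0), -np.inf
    for i in range(rh - th + 1):
        for j in range(rw - tw + 1):
            w = region[i:i + th, j:j + tw]    # candidate window
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum() * (t ** 2).sum())
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best = score, (i, j)
    return best, best_score
```

Restricting the search to the repeated region, as the patent describes, is what keeps this exhaustive scan affordable.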
According to the embodiment of the invention, a plurality of segmentation thresholds corresponding to a plurality of sub-images are determined according to the gradient amplitude distribution in the gradient histogram corresponding to the first image and the sum of the number of pixels corresponding to the plurality of sub-images in the first image, so that each sub-image has a segmentation threshold adaptive to the sub-image, and then the sub-image is segmented based on the segmentation thresholds respectively, so that the image identification and classification in the sub-image are realized respectively, and the accuracy of image segmentation in the sub-image is improved; furthermore, the repetitive region of the second image adjacent to the first image is determined based on the flight data of the unmanned aerial vehicle, and the image matching is performed on the multiple image segmentation results according to the repetitive region, so that the calculation amount of image identification and classification in the repetitive region is reduced, and the processing speed of the repetitive region in the second image is increased. The invention solves the problems that the image segmentation technology in the prior art is low in processing speed and inaccurate in image segmentation result.
Optionally, in this embodiment, according to a gradient magnitude distribution in a gradient histogram corresponding to a first image acquired by an unmanned aerial vehicle and a sum of corresponding pixels of a plurality of sub-images in the first image, a plurality of segmentation thresholds corresponding to the plurality of sub-images are determined, including but not limited to: determining pixel maximum gradient variances corresponding to the sub-images according to the pixel maximum gradient in the first image and the pixel number corresponding to the sub-images; and determining a first threshold value and a second threshold value according to the variance of the maximum gradient of each pixel and the maximum gradient of the pixel.
Specifically, since only a small fraction of the pixels in a typical image are edges, non-edge pixels greatly outnumber edge pixels, so the gradient magnitude distribution is generally unimodal, and the pixel set corresponding to the peak of this unimodal distribution can be taken to be a non-edge pixel set. The gradient magnitude with the most pixels is called the pixel maximum gradient H_max. The variance of the gradients H_i of all pixels in the sub-image relative to the pixel maximum gradient H_max, called the pixel maximum gradient variance, is:

σ_max = sqrt( (1/N) · Σ_i (H_i − H_max)² )

where N is the total number of pixels in the sub-image.
In this embodiment, the first threshold is the high threshold T_h and the second threshold is the low threshold T_l. The high threshold T_h must be selected outside the non-edge region of the gradient histogram; otherwise the final result will contain much false edge noise.
In this embodiment, the pixel maximum gradient H_max and the pixel maximum gradient variance σ_max are used to adaptively set the high threshold T_h. H_max reflects the center of the non-edge distribution in the gradient histogram, while σ_max reflects the dispersion of the gradient distribution relative to H_max, i.e., relative to the non-edge region. Roughly speaking, if the high threshold T_h exceeds the pixel maximum gradient H_max by a certain multiple of the pixel maximum gradient variance σ_max, then T_h can be considered to lie outside the non-edge region, preventing false edges from appearing in the contour map.
Therefore, the extent of the non-edge region is determined from the pixel maximum gradient H_max and the pixel maximum gradient variance σ_max, which gives the high threshold T_h; the low threshold T_l is then determined from T_h. T_h and T_l are calculated as:
T_h = H_max + β · σ_max
T_l = k · T_h
where β is an adjustment factor, generally between 2 and 5, and k is a proportionality coefficient between the low and high thresholds, generally about 0.4. Setting the thresholds with this adaptive dynamic method reduces cases in which local differences in gray-scale variation make the contour edge of a target object discontinuous or leave some targets undetected, and it also improves the degree of automation of edge extraction to a certain extent.
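The adaptive threshold computation above can be sketched as follows. This is a minimal illustration assuming Python with NumPy; the function name, the integer rounding of gradient magnitudes, and the histogram binning are illustrative choices, not from the patent:

```python
import numpy as np

def adaptive_thresholds(grad, beta=3.0, k=0.4):
    """Estimate high/low thresholds T_h, T_l for one sub-image.

    grad: 2-D array of gradient magnitudes.
    beta: adjustment factor (typically 2..5); k: low/high ratio (~0.4).
    """
    mags = np.round(grad).astype(int).ravel()
    hist = np.bincount(mags)                   # gradient histogram
    h_max = int(np.argmax(hist))               # pixel maximum gradient H_max (histogram peak)
    n = mags.size                              # N: total pixels in the sub-image
    # pixel maximum gradient variance: spread of all gradients around H_max
    sigma_max = np.sqrt(np.sum((mags - h_max) ** 2) / n)
    t_h = h_max + beta * sigma_max             # high threshold, outside the non-edge region
    t_l = k * t_h                              # low threshold
    return t_h, t_l
```

The defaults `beta=3.0` and `k=0.4` simply follow the ranges stated in the text; each sub-image gets its own pair of thresholds by calling this on its gradient map.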
Optionally, in this embodiment, the image segmentation is performed on the plurality of sub-images according to a plurality of segmentation thresholds, so as to obtain a plurality of image segmentation results corresponding to the plurality of sub-images, which includes but is not limited to: comparing the gray value of each pixel in the sub-image with a first threshold and a second threshold to determine the category corresponding to each pixel in the sub-image; and determining an image segmentation result corresponding to the sub-image according to the category corresponding to each pixel in the sub-image.
As noted above, threshold segmentation divides the image into target and background regions using one or more optimal thresholds: a single optimal threshold yields single-threshold segmentation (binarization of the image), while multiple thresholds yield multi-valued segmentation, with each resulting pixel subset forming a region that corresponds to the real scene.
Let I(i, j) denote the pixel value at row i, column j of image I: if I is a gray-scale image, I(i, j) is a scalar; if I is a color image, I(i, j) is a vector. Taking a gray-scale image as an example, assume the image has L gray levels. Single-threshold segmentation divides the pixels of image I whose values are below a threshold t into one class C_1, and those at or above the threshold into another class C_2. The segmentation result can be represented by the following formulas:
C_1 = {I(i, j) ∈ I | 0 ≤ I(i, j) ≤ t − 1}
C_2 = {I(i, j) ∈ I | t ≤ I(i, j) ≤ L − 1}
In image multi-threshold segmentation, d thresholds divide the image into d + 1 classes:

C_{k+1} = {I(i, j) ∈ I | t_k ≤ I(i, j) ≤ t_{k+1} − 1}

where t_k denotes the k-th threshold, k = 0, 1, …, d, with t_0 = 0, t_{d+1} = L, and t_k < t_{k+1}.
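The class-assignment rule above maps directly onto NumPy's `digitize`, whose half-open bins [t_k, t_{k+1}) coincide with t_k ≤ I(i, j) ≤ t_{k+1} − 1 for integer gray values. A minimal sketch (the function name is illustrative, not from the patent):

```python
import numpy as np

def multi_threshold_segment(image, thresholds):
    """Label each pixel: class k+1 where t_k <= I(i,j) <= t_{k+1}-1.

    image: integer gray-scale array; thresholds: sorted t_1 < ... < t_d.
    Returns labels 0..d (d+1 classes).
    """
    # digitize uses half-open bins [t_k, t_{k+1}), which for integer gray
    # values is the same as t_k <= I(i, j) <= t_{k+1} - 1
    return np.digitize(image, bins=np.asarray(thresholds))
```

With a single threshold `[t]` this reduces to binarization of the image, matching the C_1/C_2 case above.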
Optionally, in this embodiment, the repetitive region between the first image and the second image is determined according to flight data of the drone, including but not limited to: determining a first offset of the unmanned aerial vehicle in a first direction, a second offset of the unmanned aerial vehicle in a second direction and a resolution corresponding to the unmanned aerial vehicle according to flight data of the unmanned aerial vehicle; determining a first pixel offset in a first direction and a second pixel offset in a second direction according to the first offset, the second offset and the resolution; and determining the repeated area according to the first pixel offset and the second pixel offset.
Optionally, in this embodiment, according to the flight data of the drone, determining a first offset of the drone in the first direction, a second offset of the drone in the second direction, and a corresponding resolution of the drone, including but not limited to: determining a first offset and a second offset according to the position of the unmanned aerial vehicle when shooting the first image, the flight speed and the flight direction of the unmanned aerial vehicle and the acquisition time interval; and determining the resolution corresponding to the unmanned aerial vehicle according to the height of the unmanned aerial vehicle, the focal length of the camera and the image width of the first image.
The overlapping area of the two images is estimated from data including the position, attitude and flight speed of the unmanned aerial vehicle. Let the eastward velocity be v_e, the northward velocity v_n, and the sampling time interval T_c. The eastward displacement offset S_e and the northward displacement offset S_n can then be estimated as
S_e = v_e · T_c
S_n = v_n · T_c
Two successive images are denoted by I1 and I2, respectively, with a relative offset and an overlap region between them. The numbers of rows and columns of the first image are denoted by M_1 and N_1, and those of the second image by M_2 and N_2.
The extent of the overlapping area of the two images is estimated first. Assuming that the flying height of the unmanned aerial vehicle is h, the focal length of the onboard camera is F, and the physical width of a pixel is L, the ground resolution r can be calculated as
r = h · L / F
Over the sampling interval T_c, the eastward pixel offset P_e and the northward pixel offset P_n of the image are calculated as
P_e = S_e / r
P_n = S_n / r
Therefore, the size of the overlapping area A1 can be calculated as
L_e1 = M_1 − P_e
L_n1 = N_1 − P_n
where the number of rows in A1 is L_e1 and the number of columns is L_n1.
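Putting the formulas above together, the overlap size can be sketched as follows. The ground-resolution expression r = h·L/F is an assumption (the standard ground-sample-distance relation) since the exact formula is elided above; the function name and units are likewise illustrative:

```python
def overlap_rows_cols(M1, N1, v_e, v_n, T_c, h, F, pixel_width):
    """Estimate the rows and columns (L_e1, L_n1) of the overlap area A1.

    S_e = v_e*T_c and S_n = v_n*T_c are the ground offsets in metres;
    r is the assumed ground resolution (metres per pixel); the pixel
    offsets P_e = S_e/r and P_n = S_n/r are subtracted from the
    M1 x N1 frame size of the first image.
    """
    S_e, S_n = v_e * T_c, v_n * T_c
    r = h * pixel_width / F  # assumed ground-sample-distance model
    P_e, P_n = int(round(S_e / r)), int(round(S_n / r))
    return M1 - P_e, N1 - P_n
```

For example, at h = 100 m, F = 50 mm and a 10 µm pixel, r = 0.02 m per pixel, so a 10 m eastward shift removes 500 rows from a 1000-row frame.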
To reduce the amount of computation in image matching, the selection of the feature template is confined to a region A2, whose numbers of rows and columns are denoted L_e2 and L_n2, respectively:
L_e2 = 0.8 · L_e1
L_n2 = 0.8 · L_n1
Once the overlap area A1 is determined, the feature template region A2 is determined as well, and the next step can proceed. Because the areas A1 and A2 are limited to a relatively small image range, the amount of calculation in the image matching process is reduced and its reliability is improved.
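The matching criterion itself is not specified here; as one hedged illustration, the sketch below locates a segmentation-label template (such as A2) inside a larger label map (such as A1) by exhaustive search, scoring candidate positions by the fraction of agreeing labels — a stand-in for whatever similarity measure an actual implementation would use:

```python
import numpy as np

def match_template_labels(region, template):
    """Find the best position of `template` inside `region` (both label maps).

    Returns ((row, col), score), where score is the fraction of pixels
    whose class labels agree at that position.
    """
    Rh, Rw = region.shape
    Th, Tw = template.shape
    best_score, best_pos = -1.0, (0, 0)
    for i in range(Rh - Th + 1):
        for j in range(Rw - Tw + 1):
            window = region[i:i + Th, j:j + Tw]
            score = float(np.mean(window == template))
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score
```

Restricting `region` to the estimated overlap A1 and `template` to A2 is what keeps this exhaustive search cheap.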
According to the method and the device, a plurality of segmentation thresholds corresponding to a plurality of sub-images are determined according to the gradient amplitude distribution in the gradient histogram corresponding to the first image acquired by the unmanned aerial vehicle and the sum of the numbers of pixels corresponding to the plurality of sub-images in the first image. Image segmentation is then performed on each sub-image according to its segmentation threshold to obtain a plurality of image segmentation results. A repetition region between the first image and a second image acquired by the unmanned aerial vehicle at the adjacent time interval is determined according to the flight data of the unmanned aerial vehicle. Finally, image matching is performed on the repetition region according to the plurality of image segmentation results to obtain the identification result of the repetition region.
In this embodiment, the segmentation threshold of each sub-image is determined from the gradient histogram of the first image and the pixel counts, so that each sub-image has a threshold adapted to it. Each sub-image is then segmented with its own threshold, which realizes image identification and classification within the sub-image and improves the accuracy of image segmentation. Furthermore, the repetition region between the first image and the adjacent second image is determined from the flight data of the unmanned aerial vehicle, and image matching is performed on the segmentation results within that region, which reduces the amount of computation for image identification and classification in the repetition region and increases the processing speed for the second image. The invention thus addresses the low processing speed and inaccurate segmentation results of prior-art image segmentation techniques.
Example two
The embodiment of the invention provides an image segmentation device for an unmanned aerial vehicle image.
Referring to fig. 2, a schematic structural diagram of an image segmentation apparatus for an unmanned aerial vehicle image in an embodiment of the present invention is shown.
The image segmentation device for the unmanned aerial vehicle image comprises: a first determining module 20, an image segmentation module 22, a second determining module 24, and an image matching module 26.
The functions of the modules and the interaction relationship between the modules are described in detail below.
A first determining module 20, configured to determine, according to a gradient amplitude distribution in a gradient histogram corresponding to a first image acquired by an unmanned aerial vehicle and a sum of pixel numbers corresponding to a plurality of sub-images in the first image, a plurality of segmentation thresholds corresponding to the plurality of sub-images;
the image segmentation module 22 is configured to perform image segmentation on the plurality of sub-images respectively according to the plurality of segmentation thresholds, so as to obtain a plurality of image segmentation results corresponding to the plurality of sub-images;
a second determining module 24, configured to determine, according to flight data of the unmanned aerial vehicle, a repetition region between a first image and a second image acquired by the unmanned aerial vehicle, where the first image and the second image are acquired by the unmanned aerial vehicle at adjacent time intervals;
and the image matching module 26 is configured to perform image matching on the repetition region according to the multiple image segmentation results to obtain an identification result of the repetition region.
Optionally, in this embodiment, the first determining module 20 includes:
the first determining submodule is used for determining pixel maximum gradient variances corresponding to the sub-images according to the pixel maximum gradient in the first image and the pixel quantity corresponding to the sub-images;
and the second determining submodule is used for determining a first threshold and a second threshold according to the pixel maximum gradient variance and the pixel maximum gradient.
Optionally, in this embodiment, the image segmentation module 22 includes:
the comparison submodule is used for comparing the gray value of each pixel in the sub-image with the first threshold and the second threshold so as to determine the category corresponding to each pixel in the sub-image;
and the third determining submodule is used for determining an image segmentation result corresponding to the sub-image according to the category corresponding to each pixel in the sub-image.
Optionally, in this embodiment, the second determining module 24 includes:
the fourth determining sub-module is used for determining a first offset of the unmanned aerial vehicle in the first direction, a second offset of the unmanned aerial vehicle in the second direction and a resolution corresponding to the unmanned aerial vehicle according to the flight data of the unmanned aerial vehicle;
a fifth determining submodule, configured to determine a first pixel offset in the first direction and a second pixel offset in the second direction according to the first offset, the second offset, and the resolution;
a sixth determining submodule, configured to determine the repetition region according to the first pixel offset and the second pixel offset.
Optionally, in this embodiment, the fourth determining sub-module includes:
the first determining unit is used for determining the first offset and the second offset according to the position of the unmanned aerial vehicle when the unmanned aerial vehicle shoots the first image, the flight speed, the flight direction and the acquisition time interval of the unmanned aerial vehicle;
and the second determining unit is used for determining the resolution corresponding to the unmanned aerial vehicle according to the height of the unmanned aerial vehicle, the focal length of the camera and the image width of the first image.
Moreover, in the embodiment of the invention, a plurality of segmentation thresholds corresponding to a plurality of sub-images are determined according to the gradient amplitude distribution in the gradient histogram corresponding to the first image acquired by the unmanned aerial vehicle and the sum of the numbers of pixels corresponding to the plurality of sub-images in the first image, so that each sub-image has a segmentation threshold adapted to it. Each sub-image is then segmented with its own threshold, which realizes image identification and classification within the sub-image and improves the accuracy of image segmentation. Furthermore, the repetition region between the first image and the adjacent second image is determined from the flight data of the unmanned aerial vehicle, and image matching is performed on the segmentation results within that region, which reduces the amount of computation for image identification and classification in the repetition region and increases the processing speed for the second image. The invention solves the problems of low processing speed and inaccurate image segmentation results in prior-art image segmentation technology.
Example three
Preferably, an embodiment of the present invention further provides an electronic device, including: memory, a processor and a computer program stored on the memory and executable on the processor, the computer program when executed by the processor implementing the steps of the image segmentation method of drone images as described above.
Optionally, in this embodiment, the memory is configured to store program code for performing the steps of:
s1, determining a plurality of segmentation thresholds corresponding to a plurality of sub-images according to gradient amplitude distribution in a gradient histogram corresponding to a first image acquired by an unmanned aerial vehicle and the sum of the number of pixels corresponding to the plurality of sub-images in the first image;
s2, respectively carrying out image segmentation on the plurality of sub-images according to the plurality of segmentation thresholds so as to obtain a plurality of image segmentation results corresponding to the plurality of sub-images;
s3, determining a repeating area between a first image and a second image acquired by the unmanned aerial vehicle according to flight data of the unmanned aerial vehicle, wherein the first image and the second image are acquired by the unmanned aerial vehicle at adjacent time intervals;
and S4, carrying out image matching on the repeated region according to the plurality of image segmentation results to obtain the identification result of the repeated region.
Optionally, the specific example in this embodiment may refer to the example described in embodiment 1 above, and this embodiment is not described again here.
Example four
The embodiment of the invention also provides a readable storage medium. Optionally, in this embodiment, the readable storage medium stores thereon a program or instructions, and the program or instructions when executed by the processor implement the steps of the image segmentation method for the drone image according to embodiment 1.
Optionally, in this embodiment, the readable storage medium is configured to store program code for performing the steps of:
s1, determining a plurality of segmentation thresholds corresponding to a plurality of sub-images according to gradient amplitude distribution in a gradient histogram corresponding to a first image acquired by an unmanned aerial vehicle and the sum of the number of pixels corresponding to the plurality of sub-images in the first image;
s2, respectively carrying out image segmentation on the plurality of sub-images according to the plurality of segmentation thresholds so as to obtain a plurality of image segmentation results corresponding to the plurality of sub-images;
s3, determining a repeating area between a first image and a second image acquired by the unmanned aerial vehicle according to flight data of the unmanned aerial vehicle, wherein the first image and the second image are acquired by the unmanned aerial vehicle at adjacent time intervals;
and S4, carrying out image matching on the repeated region according to the plurality of image segmentation results to obtain the identification result of the repeated region.
Optionally, the readable storage medium is further configured to store program codes for performing the steps included in the method in embodiment 1, which is not described in detail in this embodiment.
Optionally, in this embodiment, the readable storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Optionally, the specific example in this embodiment may refer to the example described in embodiment 1 above, and this embodiment is not described again here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (12)
1. An image segmentation method for unmanned aerial vehicle images is characterized by comprising the following steps:
determining a plurality of segmentation thresholds corresponding to a plurality of sub-images according to gradient amplitude distribution in a gradient histogram corresponding to a first image acquired by an unmanned aerial vehicle and the sum of the number of pixels corresponding to the plurality of sub-images in the first image;
respectively carrying out image segmentation on the plurality of sub-images according to the plurality of segmentation thresholds so as to obtain a plurality of image segmentation results corresponding to the plurality of sub-images;
determining a repetition region between a first image and a second image acquired by the unmanned aerial vehicle according to flight data of the unmanned aerial vehicle, wherein the first image and the second image are acquired by the unmanned aerial vehicle at adjacent time intervals;
and performing image matching on the repeated region according to the plurality of image segmentation results to obtain an identification result of the repeated region.
2. The method of claim 1, wherein the determining, according to a gradient magnitude distribution in a gradient histogram corresponding to a first image acquired by the drone and a sum of pixel numbers corresponding to a plurality of sub-images in the first image, a plurality of segmentation thresholds corresponding to the plurality of sub-images comprises:
determining pixel maximum gradient variances corresponding to the sub-images according to the pixel maximum gradient in the first image and the pixel number corresponding to the sub-images;
and determining a first threshold value and a second threshold value according to the variance of the pixel maximum gradient and the pixel maximum gradient.
3. The method according to claim 2, wherein the performing image segmentation on the plurality of sub-images according to the plurality of segmentation thresholds to obtain a plurality of image segmentation results corresponding to the plurality of sub-images comprises:
comparing the gray-scale value of each pixel in the sub-image with the first threshold and the second threshold to determine a category corresponding to each pixel in the sub-image;
and determining an image segmentation result corresponding to the sub-image according to the category corresponding to each pixel in the sub-image.
4. The method of claim 1, wherein determining a repetition region between the first image and the second image from the flight data of the drone comprises:
determining a first offset of the unmanned aerial vehicle in a first direction, a second offset of the unmanned aerial vehicle in a second direction and a corresponding resolution of the unmanned aerial vehicle according to flight data of the unmanned aerial vehicle;
determining a first pixel offset in the first direction and a second pixel offset in the second direction according to the first offset, the second offset and the resolution;
and determining the repeated region according to the first pixel offset and the second pixel offset.
5. The method of claim 4, wherein said determining, from the flight data of the drone, a first offset of the drone in a first direction, a second offset of the drone in a second direction, and a corresponding resolution of the drone comprises:
determining the first offset and the second offset according to the position of the unmanned aerial vehicle when shooting the first image, the flight speed, the flight direction and the acquisition time interval of the unmanned aerial vehicle;
and determining the resolution corresponding to the unmanned aerial vehicle according to the height of the unmanned aerial vehicle, the focal length of the camera and the image width of the first image.
6. An image segmentation device for unmanned aerial vehicle images, the device comprising:
the first determining module is used for determining a plurality of segmentation thresholds corresponding to a plurality of sub-images according to gradient amplitude distribution in a gradient histogram corresponding to a first image acquired by an unmanned aerial vehicle and the sum of the number of pixels corresponding to the plurality of sub-images in the first image;
the image segmentation module is used for respectively carrying out image segmentation on the plurality of sub-images according to the plurality of segmentation thresholds so as to obtain a plurality of image segmentation results corresponding to the plurality of sub-images;
a second determining module, configured to determine a repetition region between a first image and a second image acquired by the drone according to flight data of the drone, where the first image and the second image are acquired by the drone at adjacent time intervals;
and the image matching module is used for carrying out image matching on the repeated region according to the plurality of image segmentation results so as to obtain the identification result of the repeated region.
7. The apparatus of claim 6, wherein the first determining module comprises:
the first determining submodule is used for determining pixel maximum gradient variances corresponding to the sub-images according to the pixel maximum gradient in the first image and the pixel quantity corresponding to the sub-images;
and the second determining submodule is used for determining a first threshold and a second threshold according to the pixel maximum gradient variance and the pixel maximum gradient.
8. The apparatus of claim 7, wherein the image segmentation module comprises:
the comparison submodule is used for comparing the gray value of each pixel in the sub-image with the first threshold and the second threshold so as to determine the category corresponding to each pixel in the sub-image;
and the third determining submodule is used for determining an image segmentation result corresponding to the sub-image according to the category corresponding to each pixel in the sub-image.
9. The apparatus of claim 6, wherein the second determining module comprises:
the fourth determining submodule is used for determining a first offset of the unmanned aerial vehicle in the first direction, a second offset of the unmanned aerial vehicle in the second direction and the corresponding resolution of the unmanned aerial vehicle according to the flight data of the unmanned aerial vehicle;
a fifth determining submodule, configured to determine a first pixel offset in the first direction and a second pixel offset in the second direction according to the first offset, the second offset, and the resolution;
a sixth determining submodule, configured to determine the repetition region according to the first pixel offset and the second pixel offset.
10. The apparatus of claim 9, wherein the fourth determination submodule comprises:
the first determining unit is used for determining the first offset and the second offset according to the position of the unmanned aerial vehicle when the unmanned aerial vehicle shoots the first image, the flight speed, the flight direction and the acquisition time interval of the unmanned aerial vehicle;
and the second determining unit is used for determining the resolution corresponding to the unmanned aerial vehicle according to the height of the unmanned aerial vehicle, the focal length of the camera and the image width of the first image.
11. An electronic device, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, the computer program when executed by the processor implementing the steps of the method of image segmentation of drone images according to any one of claims 1 to 5.
12. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the method of image segmentation of a drone image according to any one of claims 1 to 5.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211548343.2A | 2022-12-05 | 2022-12-05 | Image segmentation method and device for unmanned aerial vehicle image |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN115937527A | 2023-04-07 |
Family

ID=86655426

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211548343.2A | Image segmentation method and device for unmanned aerial vehicle image | 2022-12-05 | 2022-12-05 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN115937527A |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |