CN116630349A - Straw returning area rapid segmentation method based on high-resolution remote sensing image - Google Patents

Straw returning area rapid segmentation method based on high-resolution remote sensing image

Info

Publication number
CN116630349A
CN116630349A
Authority
CN
China
Prior art keywords
image
pixel
sliding window
fog
remote sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310912931.8A
Other languages
Chinese (zh)
Other versions
CN116630349B (en)
Inventor
姚海立
刘海
魏祥圣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rural Revitalization Affairs Center Of Rencheng District Jining City
Shandong Aifudi Biology Holding Co ltd
Original Assignee
Rural Revitalization Affairs Center Of Rencheng District Jining City
Shandong Aifudi Biology Holding Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rural Revitalization Affairs Center Of Rencheng District Jining City, Shandong Aifudi Biology Holding Co ltd filed Critical Rural Revitalization Affairs Center Of Rencheng District Jining City
Priority to CN202310912931.8A priority Critical patent/CN116630349B/en
Publication of CN116630349A publication Critical patent/CN116630349A/en
Application granted granted Critical
Publication of CN116630349B publication Critical patent/CN116630349B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The application relates to the field of image processing and provides a method for rapidly segmenting straw returning areas based on high-resolution remote sensing images, comprising the following steps: acquiring a remote sensing image corresponding to a field and converting the remote sensing image into a gray image; calculating the fog concentration degree based on the gray values of the pixels in the gray image; determining the global atmospheric light and the transmittance based on the fog concentration degree; and defogging the remote sensing image based on the global atmospheric light and the transmittance, and segmenting the field regions based on the defogged image. The method can accurately defog the image, achieve a more satisfactory defogging effect, and improve the efficiency of segmenting the returning area.

Description

Straw returning area rapid segmentation method based on high-resolution remote sensing image
Technical Field
The application relates to the field of image processing, in particular to a straw returning area rapid segmentation method based on a high-resolution remote sensing image.
Background
In the actual straw returning process, land still planted with other crops often lies adjacent to the land to be returned, so before the returning land is turned over, a plowing route must be planned so that the soil can be turned with maximum efficiency.
When a large area of land is processed, unmanned aerial vehicle (UAV) remote sensing images are often used to plan the plowing route. Because such large tracts of land tend to lie in favorable natural environments, fog is likely during the autumn harvest season. How to optimize the defogging algorithm and segment the returning area according to the actual returning situation is therefore an urgent problem for farmers in a busy season where every second counts. Existing defogging methods fall mainly into three categories: image enhancement-based methods, which directly highlight image details to improve contrast without considering how the foggy image is formed, and which perform poorly on images with large depth of field; restoration-based methods, which model the imaging process; and neural network-based methods, which outperform enhancement and restoration methods in effect, but for which capturing paired foggy and fog-free images of the same natural scene is costly in both time and equipment.
Disclosure of Invention
The application provides a method for rapidly segmenting straw returning areas based on high-resolution remote sensing images, which can accurately defog images, achieve a more satisfactory defogging effect, and improve the efficiency of segmenting the returning area.
In a first aspect, the present application provides a method for rapidly segmenting straw returning areas based on high-resolution remote sensing images, comprising:
acquiring a remote sensing image corresponding to a field, and converting the remote sensing image into a gray image;
calculating the fog concentration degree based on the gray values of the pixels in the gray image;
determining the global atmospheric light and the transmittance based on the fog concentration degree;
and defogging the remote sensing image based on the global atmospheric light and the transmittance, and segmenting the field regions based on the defogged image.
Optionally, calculating the fog concentration degree based on the gray values of the pixels in the gray image includes:
determining the degree of detail loss of the target image in the sliding window based on the gray values of the pixels in the gray image;
determining the degree of loss of the light reflected by the target into the lens under the influence of fog in the sliding window based on the gray values of the pixels in the gray image;
and calculating the fog concentration degree corresponding to the sliding window based on the degree of detail loss of the target image in the sliding window and the degree of loss of the light reflected by the target into the lens under the influence of fog.
Optionally, determining the degree of detail loss of the target image in the sliding window based on the gray values of the pixels in the gray image includes:
calculating the gradient value range of the sliding window based on the gray values of the pixels in the sliding window, the gradient value range being used to characterize the degree of detail loss of the target image in the sliding window;
determining the degree of loss of the light reflected by the target into the lens under the influence of fog in the sliding window based on the gray values of the pixels in the gray image includes:
calculating the gray value variance of the sliding window based on the gray values of the pixels in the sliding window, the gray value variance being used to characterize the degree of loss of the light reflected by the target into the lens under the influence of fog;
calculating the fog concentration degree corresponding to the sliding window based on the degree of detail loss of the target image in the sliding window and the degree of loss of the light reflected by the target into the lens under the influence of fog, including:
the fog concentration degree is calculated using the following formula:
wherein F represents the fog concentration degree corresponding to the current sliding window, n represents the total number of pixels in the current sliding window, g_i denotes the gray value of the i-th pixel, G_max and G_min are the maximum and minimum gradient values in the current sliding window, R = G_max − G_min denotes the gradient value range, and σ² denotes the gray value variance.
Optionally, determining the global atmospheric light and the transmittance based on the fog concentration degree includes:
determining edge pixels based on the fog concentration degree;
processing the edge pixels and the pixels in the sliding window with the greatest fog concentration degree using a first filter, and processing the remaining pixels other than the edge pixels using a second filter, the first filter being smaller in size than the second filter;
determining the global atmospheric light based on the filtered dark channel image;
and calculating the transmittance of each pixel based on the fog concentration degree corresponding to the pixels in the sliding window and the global atmospheric light.
Optionally, determining edge pixels based on the fog concentration degree includes:
constructing a fog concentration degree matrix corresponding to the sliding window with the greatest fog concentration degree, the matrix consisting of the fog concentration degrees of all the pixels in the sliding window;
calculating the gradient matrix corresponding to the fog concentration degree matrix;
determining the edge probability of each pixel in the sliding window with the maximum fog concentration degree based on the variation of the gradient matrix;
and determining edge pixels based on the edge probability of each pixel.
Optionally, determining the global atmospheric light based on the filtered dark channel image includes:
determining the brightness values of the filtered dark channel image, and taking a first preset number of pixels with the largest dark channel brightness values as a first candidate pixel set, the dark channel image being formed by taking, at each pixel of the RGB image, the smallest of the R, G, B channel values, that smallest channel value being the brightness value of the dark channel image;
multiplying the brightness value of each pixel in the first candidate pixel set by the fog concentration degree of the corresponding pixel;
selecting a second preset number of pixels with the largest products as a second candidate pixel set;
and acquiring the pixel values of the pixels of the second candidate pixel set in the remote sensing image, and taking the pixel value of the pixel with the highest color pixel value as the global atmospheric light, the color pixel value being the weighted average of the R, G, B channel values.
Optionally, calculating the transmittance of each pixel based on the fog concentration degree corresponding to the pixels in the sliding window and the global atmospheric light includes:
calculating the transmittance using the following formula:
wherein t is the transmittance of the pixel, A is the global atmospheric light, I is the filtered dark channel image, and F is the fog concentration degree of the pixel.
Optionally, defogging the remote sensing image based on the global atmospheric light and the transmittance includes:
defogging each pixel in the remote sensing image using the global atmospheric light and the transmittance of each pixel;
wherein the defogging treatment inverts the fog degradation model:
J = (I − A) / t + A
wherein J represents the defogged pixel value, and all the defogged pixels together form the defogged image.
Optionally, determining the edge probability of each pixel in the sliding window with the maximum fog concentration degree based on the variation of the gradient matrix includes:
calculating the edge probability using the following formula:
P_i = ( max over j ∈ N8(i) of |∇F_i − ∇F_j| ) / (∇F_max − ∇F_min)
wherein P_i is the edge probability of the i-th pixel, ∇F_i is the i-th point of the gradient matrix ∇F, N8(i) is the 8-neighborhood of the i-th point in ∇F, and ∇F_max and ∇F_min are the maximum and minimum values of the gradient matrix ∇F.
Optionally, determining edge pixels based on the edge probability of each pixel includes:
determining pixels whose edge probability is greater than a preset value as edge pixels.
The beneficial effects of the application, in contrast to the prior art, are that the method for rapidly segmenting straw returning areas based on high-resolution remote sensing images includes: acquiring a remote sensing image corresponding to a field and converting the remote sensing image into a gray image; calculating the fog concentration degree based on the gray values of the pixels in the gray image; determining the global atmospheric light and the transmittance based on the fog concentration degree; and defogging the remote sensing image based on the global atmospheric light and the transmittance, and segmenting the field regions based on the defogged image. The method can accurately defog the image, achieve a more satisfactory defogging effect, and improve the efficiency of segmenting the returning area.
Drawings
FIG. 1 is a flow chart of an embodiment of the method for rapidly segmenting straw returning areas based on high-resolution remote sensing images;
FIG. 2 is a flowchart illustrating an embodiment of the step S12 in FIG. 1;
fig. 3 is a flowchart of an embodiment of step S13 in fig. 1.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the straw returning process, the first step is to turn over the field. In the autumn harvest season, where every second counts, reasonably arranging the plowing route so as to improve operating efficiency is an urgent problem. When a plowing route is designed manually, crops mature at different times and the human field of view is limited, so distinguishing the crops and completing the route design costs enormous time and labor. UAVs are therefore generally used to survey the field area during large-scale straw returning, but autumn fog is a major obstacle to acquiring field images and designing the plowing route. The application optimizes a defogging algorithm based on the characteristics of field images acquired in foggy weather, so as to obtain clear field images and achieve rapid segmentation of the returning area. The present application is described in detail below with reference to the accompanying drawings and embodiments.
Referring to fig. 1, fig. 1 is a flow chart of an embodiment of the method for rapidly segmenting straw returning areas based on high-resolution remote sensing images according to the present application, which specifically includes:
step S11: and acquiring a remote sensing image corresponding to the field, and converting the remote sensing image into a gray level image.
The application achieves rapid segmentation of straw returning area images affected by foggy weather; before processing, an image acquisition device must be deployed to capture images of the returning area. Field images are relatively regular in shape, but the vegetation and soil inside them have many contours, so to acquire image information of greater reference value, a high-resolution remote sensing UAV should be used, photographing the complete field image as far as possible at a preset altitude. After the complete high-resolution field remote sensing image is obtained, it is converted to gray scale, and both the gray image and the original remote sensing image are stored for the subsequent steps.
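As an illustration of this step, a minimal sketch follows, assuming OpenCV for image I/O; the patent does not name a specific library, and the function name and file path are illustrative only.

```python
import cv2

def load_field_image(path: str):
    """Load the UAV remote sensing image and keep both the color and gray copies."""
    bgr = cv2.imread(path)                        # OpenCV reads images in BGR order
    if bgr is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # weighted-average graying
    return bgr, gray

# illustrative usage:
# bgr, gray = load_field_image("field_remote_sensing.png")
```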
Step S12: calculating the fog concentration degree based on the gray values of the pixels in the gray image.
In the fog imaging model, a foggy image is formed in the lens by the light reflected from the target object penetrating the fog together with the atmospheric light refracted and reflected by the fog. To obtain a more accurate global atmospheric light value, the image can first be roughly partitioned, the region with the densest fog screened out, and the fog concentration degree then computed from that region. This step obtains and calculates the fog concentration degree.
The fog concentration degree can be roughly distinguished from the image gray levels. Normally, an image of straw returning land simultaneously contains harvested crops, unharvested crops, and bare soil, so its composition is relatively simple but its detail textures are relatively rich. A fog-covered region, however, reduces the light transmitted from the target object, lowering its contrast and losing texture detail. The fog concentration degree can therefore be characterized by the gray value distribution of an image region together with its gradient values.
Referring to fig. 2, step S12 includes:
step S21: and determining the detail loss degree of the target in the sliding window based on the gray value of the pixel point in the gray image.
In one embodiment, the gradient value range of the sliding window is calculated based on the gray scale values of the pixels in the sliding window, and the gradient value range is used for representing the detail loss degree of the target image in the sliding window.
Step S22: and determining the light loss degree of the target reflected into the lens under the influence of fog in the sliding window based on the gray value of the pixel point in the gray image.
In one embodiment, the gray value variance of the sliding window is calculated based on gray values of pixels in the sliding window, and the gray value variance is used for representing the loss degree of light reflected into the lens by the influence of fog on a target in the sliding window.
Step S23: and calculating the fog thickness degree corresponding to the sliding window based on the detail loss degree of the target image in the sliding window and the light loss degree of the target object reflected into the lens under the influence of fog.
In one embodiment, the fog concentration degree is calculated using the following formula:
wherein F represents the fog concentration degree corresponding to the current sliding window, n represents the total number of pixels in the current sliding window, g_i denotes the gray value of the i-th pixel, G_max and G_min are the maximum and minimum gradient values in the current sliding window, R = G_max − G_min denotes the gradient value range, and σ² denotes the gray value variance.
Specifically, the sliding window is taken with a preset size. The denser the fog in a region, the more texture detail is lost from the target region; the thinner the fog, the more normally the light reflected from the object reaches the lens, and the more color detail the image region retains. The gradient value range of the sliding window is computed to evaluate the degree of detail loss of the target image in the region, and the gray value variance of the sliding window is computed to evaluate the degree of loss of the light reflected from the target into the lens under the influence of fog. With other parameters unchanged, the larger the gray variance within the sliding window, the less reflected light has been lost and the smaller F is; likewise, the larger the gradient range, the less detail has been lost and the smaller F is. When the same pixel is assigned different F values by overlapping windows, the maximum value is retained as that pixel's F. The formula thus evaluates the thickness of a foggy image region through the relative loss of information.
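To make the computation concrete, the sketch below implements this step under stated assumptions: the patent's exact expression for F is rendered only as an image in the source, so the reciprocal-of-product combination used here is merely an assumed monotone form consistent with the behaviour described above, and the window size win is a free parameter.

```python
import numpy as np

def fog_concentration(gray: np.ndarray, win: int = 15, eps: float = 1e-6) -> np.ndarray:
    """Per-pixel fog concentration degree F from sliding-window statistics."""
    gy, gx = np.gradient(gray.astype(np.float64))
    grad = np.hypot(gx, gy)                      # per-pixel gradient magnitude
    h, w = gray.shape
    F = np.zeros((h, w), dtype=np.float64)
    for y in range(h - win + 1):
        for x in range(w - win + 1):
            g = grad[y:y + win, x:x + win]
            p = gray[y:y + win, x:x + win].astype(np.float64)
            rng = g.max() - g.min()              # gradient value range: detail loss
            var = p.var()                        # gray value variance: reflected-light loss
            f = 1.0 / (rng * var + eps)          # assumed form: small range/variance -> thick fog
            # the patent retains the maximum F assigned to each pixel
            np.maximum(F[y:y + win, x:x + win], f, out=F[y:y + win, x:x + win])
    return F
```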
Step S13: determining the global atmospheric light and the transmittance based on the fog concentration degree.
After the high-resolution field image is acquired, foggy weather makes the plowing route difficult to plan, so a dedicated defogging treatment is applied. Among current defogging algorithms, the dark channel prior defogging algorithm is best suited to this scene, but its parameters still need to be optimized with scene information to obtain a more satisfactory defogging effect.
In the image obtained in the previous step, the area to be returned is often measured in mu (a Chinese unit of area), i.e., it is usually large, so fog may be imaged unevenly across the image owing to illumination or to the fog itself. For these reasons, the conventional way of calculating the global atmospheric light A and the transmittance t needs to be optimized for this scene. This step designs an optimized calculation method that obtains parameters better suited to the scene and achieves defogging of the image.
The usual estimation of the global atmospheric light selects a fraction of the brightest points in the dark channel image and then reads off the brightest corresponding points in the original image. The idea is to find points that are bright in both the dark channel and the original image, so as to represent the densest fog region; with high probability the selected position is then part of the sky. In a typical foggy image the sky has the strongest sense of fog and can be regarded as lying at infinity, so this estimate of the global atmospheric light works reasonably well. In the present scene, however, the remote sensing image covers a field area with no sky region, and the scene depth is relatively fixed, so the global atmospheric light and transmittance computed directly by the conventional method may be corrupted by interference points such as white or reflective objects on the ground. Since the fog thickness still varies across the acquired area, the parameters of the defogging algorithm, namely the global atmospheric light A and the transmittance t, are selected in the region where the fog is thickest.
When the global atmospheric light A is calculated conventionally, the dark channel image of the original RGB image is minimum-filtered with a 15×15 filter. This filter size reduces the amount of computation and gives a good overall result, but it tends to produce dark-spot expansion at the boundary between scene and sky, leaving the surroundings of scene objects incompletely defogged. In the present scene this manifests as follows: where the fog is uneven, the edge of a dense fog patch may be eroded by dark points of the clearer region during dark channel filtering; if the eroded points would otherwise appear as high-brightness points in the dark channel image, the accuracy of the calculated global atmospheric light A suffers, and because the edges of the field image are relatively sharp and critical, the final image processing result may be unsatisfactory. To address this, the filter size of the minimum filtering is adjusted at boundaries, and the dense fog region is approximately located using the fog concentration degree F obtained in the previous step.
In one embodiment, referring to fig. 3, step S13 includes:
step S31: and determining edge pixel points based on the fog concentration degree.
Specifically, a fog concentration degree matrix is constructed for the sliding window with the greatest fog concentration degree, the matrix consisting of the fog concentration degrees of the pixels in that window, and the gradient matrix corresponding to the fog concentration degree matrix is calculated. Concretely, the sliding window with the maximum fog concentration degree F is obtained, the F values of the pixels in the window are traversed to form a matrix of F values over the image, and the gradient matrix ∇F is computed from this matrix of F.
The edge probability of each pixel in the sliding window with the maximum fog concentration degree is then determined from the variation of the gradient matrix. In one embodiment, the edge probability is calculated using the following formula:
P_i = ( max over j ∈ N8(i) of |∇F_i − ∇F_j| ) / (∇F_max − ∇F_min)
wherein P_i is the edge probability of the i-th pixel, ∇F_i is the i-th point of the gradient matrix ∇F, N8(i) is the 8-neighborhood of the i-th point in ∇F, and ∇F_max and ∇F_min are the maximum and minimum values of the gradient matrix ∇F.
The method judges the probability that a pixel is an edge pixel by combining the maximum difference between a pixel and its 8-neighborhood pixels with max-min normalization over the whole matrix.
Edge pixels are then determined from the edge probability of each pixel: in an embodiment, a pixel whose edge probability is greater than a preset value is determined to be an edge pixel.
Step S32: processing the edge pixels and the pixels in the sliding window with the greatest fog concentration degree using a first filter, and processing the remaining pixels other than the edge pixels using a second filter, the first filter being smaller in size than the second filter.
A first filter is selected to filter the edge pixels, and a second filter is selected to filter the remaining pixels, the size of the first filter being smaller than that of the second. In one embodiment, the first filter is 3×3 and the second filter is 15×15.
In addition, since F is obtained by retaining maximum values, the pixels in the sliding window with the greatest fog concentration degree should also be processed with the 3×3 first filter to ensure the reliability of the result.
The purpose of computing the edge probability and switching between filters is to suppress the expansion of dark spots from darker regions into brighter regions in the dark channel image; as sketched below, compared with conventional filtering, adaptively varying the filter size preserves the bright points in edge regions of the dark channel image more completely, makes the calculation of the global atmospheric light A more accurate, and yields a more satisfactory final defogging effect.
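A sketch of this adaptive filtering follows, under stated assumptions: the edge-probability expression implements the description above (maximum 8-neighborhood difference of the gradient matrix, normalized by the matrix's max-min range), while the 0.5 edge threshold and the use of scipy's minimum_filter are illustrative choices, not values fixed by the patent.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def edge_probability(grad_F: np.ndarray) -> np.ndarray:
    """Edge probability of each point of the gradient matrix of F."""
    rng = grad_F.max() - grad_F.min() + 1e-6
    P = np.zeros_like(grad_F)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # np.roll wraps at the image borders, acceptable for a sketch
            diff = np.abs(grad_F - np.roll(np.roll(grad_F, dy, axis=0), dx, axis=1))
            P = np.maximum(P, diff)          # max difference over the 8 neighbors
    return P / rng                           # max-min normalization over the matrix

def adaptive_dark_channel(min_rgb: np.ndarray, edge_mask: np.ndarray) -> np.ndarray:
    """Minimum-filter with a 3x3 kernel at edges and a 15x15 kernel elsewhere."""
    small = minimum_filter(min_rgb, size=3)
    large = minimum_filter(min_rgb, size=15)
    return np.where(edge_mask, small, large)

# illustrative usage: edge_mask = edge_probability(grad_F) > 0.5  (assumed threshold)
```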
Step S33: determining the global atmospheric light based on the filtered dark channel image.
In this scene the image contains no sky, i.e., the prior condition is not fully applicable, so the selection condition must be improved. Since the fog concentration degree F of the image has been obtained, combining it with F when selecting the top 0.1% of pixels better eliminates the errors introduced by white or reflective objects on the ground.
Specifically, the brightness values of the filtered dark channel image are determined, and a first preset number of pixels with the largest dark channel brightness values are taken as the first candidate pixel set. The dark channel image is formed by taking, at each pixel of the RGB image, the smallest of the R, G, B channel values; that smallest channel value is the brightness value of the dark channel image.
After the filtering is complete, a first preset number of pixels, for example the top 10% by brightness of the filtered dark channel image, is coarsely screened to establish the first candidate pixel set.
The brightness value of each pixel in the first candidate set is then multiplied by the fog concentration degree of the corresponding pixel, and the products are re-sorted; this lowers the ranking of thin-fog regions overall, eliminating as far as possible the influence of white or reflective objects located there. A second preset number of pixels with the largest products, for example the top 0.1%, is selected as the second candidate pixel set. Finally, the pixel values of the pixels of the second candidate set are read from the remote sensing image, and the pixel value of the pixel with the highest color pixel value is taken as the global atmospheric light, the color pixel value being the weighted average of the R, G, B channel values.
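The following sketch illustrates this selection, assuming the 10% and 0.1% quantiles mentioned above and standard luma weights for the R, G, B weighted average (the patent does not specify its weights); dark is the filtered dark channel image, F the fog concentration map, and bgr the original color image.

```python
import numpy as np

def estimate_atmospheric_light(dark: np.ndarray, F: np.ndarray, bgr: np.ndarray) -> np.ndarray:
    """Global atmospheric light A as the color of the best candidate pixel."""
    flat_dark, flat_F = dark.ravel(), F.ravel()
    n1 = max(1, int(flat_dark.size * 0.10))      # coarse screen: top 10% brightness
    cand1 = np.argsort(flat_dark)[-n1:]
    score = flat_dark[cand1] * flat_F[cand1]     # weight brightness by fog concentration
    n2 = max(1, int(flat_dark.size * 0.001))     # fine screen: top 0.1% of products
    cand2 = cand1[np.argsort(score)[-n2:]]
    pix = bgr.reshape(-1, 3)[cand2].astype(np.float64)
    lum = pix @ np.array([0.114, 0.587, 0.299])  # assumed weights, B-G-R order
    return pix[np.argmax(lum)]                   # highest color pixel value -> A
```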
Step S34: calculating the transmittance of each pixel based on the fog concentration degree corresponding to the pixels in the sliding window and the global atmospheric light.
After the global atmospheric light A is obtained, the transmittance t can be calculated from the fog degradation model. The dark channel prior holds that, in the dark channel of a natural outdoor image, one of the R, G, B channel values of objects outside the sky region is close to 0, and that 86 percent of the pixel gray values in the dark channel images of natural images are concentrated in the brightness interval [0, 16]. The dark channel prior therefore directly approximates the fog-free image J as 0 and introduces a preset parameter (the adjustment coefficient ω) to compute an approximate transmittance t; this approach obtains acceptable results at low computational cost. In the present scene the composition of the fog-free image is relatively simple, so the transmittance t can be calculated more accurately by also using the gray values of the pixels in the gray image.
In the gray image of the remote sensing image, the fog-free subject is the field, composed of straw and soil and expressed through its gray values. A foggy image is characterized by the subject's gray values being attenuated and overlaid with a layer of whitish fog, and the degree of attenuation of the subject's gray values reflects its transmittance. Note that the comparison here is between a subject pixel with fog and the same subject pixel without fog. Because the fog-free image of this scene has a simple composition, the distribution of the fog concentration degree F can be used as a parameter in calculating the transmittance t.
The larger the fog concentration degree F of a pixel, the denser the fog covering that region and the lower the transmittance. From the fog degradation model I = J·t + A·(1 − t) we obtain t = (A − I)/(A − J). The conventional method directly approximates J as 0 and introduces the adjustment coefficient ω; in this method, the approximate value of J is instead calculated using the fog concentration degree F. In one embodiment, the transmittance is calculated using the following formula:
wherein t is the transmittance of the pixel, A is the global atmospheric light, I is the filtered dark channel image, and F is the fog concentration degree of the pixel. The constant 16 is introduced because, by the prior conclusion, 86 percent of the pixel gray values in the dark channel image of a natural image are concentrated in the brightness interval [0, 16]. The thicker the foggy region of the image, the larger the F value and the smaller the overall transmittance t.
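Since the exact expression is rendered only as an image in the source, the sketch below uses an assumed reading consistent with the text: J is approximated as 16/F, so that a larger F gives a smaller J and, through the degradation model, a smaller t. This form, the clamp keeping J below A, and the transmittance bounds are all assumptions, not the patent's literal formula.

```python
import numpy as np

def transmittance(dark: np.ndarray, F: np.ndarray, A: float, t_min: float = 0.1) -> np.ndarray:
    """Per-pixel transmittance t from the filtered dark channel and the fog map F."""
    J_approx = 16.0 / np.maximum(F, 1e-6)        # thicker fog -> smaller fog-free estimate
    J_approx = np.minimum(J_approx, 0.9 * A)     # numerical guard: keep the estimate below A
    t = (A - dark) / np.maximum(A - J_approx, 1e-6)
    return np.clip(t, t_min, 1.0)                # clamp keeps the later recovery stable
```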
Step S14: defogging the remote sensing image based on the global atmospheric light and the transmittance, and segmenting the field regions based on the defogged image.
Each pixel in the remote sensing image is defogged using the global atmospheric light and that pixel's transmittance; the defogging treatment inverts the fog degradation model:
J = (I − A) / t + A
wherein J represents the defogged pixel value, and all the defogged pixels together form the defogged image. This is the calculation formula of the fog degradation model; applying it to each pixel of the remote sensing image yields the defogged image.
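A minimal sketch of the recovery follows; the lower bound t0 on the transmittance is a common safeguard against division by near-zero values and is an assumption here, not something the patent states.

```python
import numpy as np

def defog(bgr: np.ndarray, t: np.ndarray, A: np.ndarray, t0: float = 0.1) -> np.ndarray:
    """Invert the fog degradation model I = J*t + A*(1 - t) pixel by pixel."""
    I = bgr.astype(np.float64)
    tt = np.maximum(t, t0)[..., None]   # broadcast t over the three color channels
    J = (I - A) / tt + A                # J = (I - A)/t + A
    return np.clip(J, 0, 255).astype(np.uint8)
```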
The method of the application is built on the fog degradation model; since the transmittance t and the atmospheric light A in the model have been determined, other acquired remote sensing images can likewise be processed with the model to obtain defogged images.
Existing defogging algorithms give unsatisfactory results on sky-free field remote sensing scenes, while using a neural network directly is costly in both defogging time and equipment. On the basis of the traditional dark channel prior defogging algorithm, the method designs a fog concentration degree model for the characteristics of straw returning remote sensing images, uses the fog concentration degree as a parameter, adjusts the dark channel filtering method to reduce the dark-spot diffusion effect and obtain a more accurate global atmospheric light value, and optimizes the transmittance algorithm by taking prior conditions such as the principle of foggy imaging into account. Accurate and reliable defogging parameters are thereby obtained, a more efficient and more satisfactory defogging effect is achieved, and the ground is laid for segmenting the land image that needs to be plowed.
The foregoing is only the embodiments of the present application, and therefore, the patent scope of the application is not limited thereto, and all equivalent structures or equivalent processes using the descriptions of the present application and the accompanying drawings, or direct or indirect application in other related technical fields, are included in the scope of the application.

Claims (10)

1. A method for rapidly segmenting straw returning areas based on high-resolution remote sensing images, characterized by comprising the following steps:
acquiring a remote sensing image corresponding to a field, and converting the remote sensing image into a gray image;
calculating the fog concentration degree based on the gray values of the pixels in the gray image;
determining the global atmospheric light and the transmittance based on the fog concentration degree;
and defogging the remote sensing image based on the global atmospheric light and the transmittance, and segmenting the field regions based on the defogged image.
2. The method for rapidly segmenting straw returning areas based on high-resolution remote sensing images according to claim 1, wherein calculating the fog concentration degree based on the gray values of the pixels in the gray image comprises:
determining the degree of detail loss of the target image in the sliding window based on the gray values of the pixels in the gray image;
determining the degree of loss of the light reflected by the target into the lens under the influence of fog in the sliding window based on the gray values of the pixels in the gray image;
and calculating the fog concentration degree corresponding to the sliding window based on the degree of detail loss of the target image in the sliding window and the degree of loss of the light reflected by the target into the lens under the influence of fog.
3. The method for rapidly segmenting straw returning areas based on high-resolution remote sensing images according to claim 2, wherein determining the degree of detail loss of the target image in the sliding window based on the gray values of the pixels in the gray image comprises:
calculating the gradient value range of the sliding window based on the gray values of the pixels in the sliding window, the gradient value range being used to characterize the degree of detail loss of the target image in the sliding window;
determining the degree of loss of the light reflected by the target into the lens under the influence of fog in the sliding window based on the gray values of the pixels in the gray image comprises:
calculating the gray value variance of the sliding window based on the gray values of the pixels in the sliding window, the gray value variance being used to characterize the degree of loss of the light reflected by the target into the lens under the influence of fog;
calculating the fog concentration degree corresponding to the sliding window based on the degree of detail loss of the target image in the sliding window and the degree of loss of the light reflected by the target into the lens under the influence of fog comprises:
calculating the fog concentration degree using the following formula:
wherein F represents the fog concentration degree corresponding to the current sliding window, n represents the total number of pixels in the current sliding window, g_i denotes the gray value of the i-th pixel, G_max and G_min are the maximum and minimum gradient values in the current sliding window, R = G_max − G_min denotes the gradient value range, and σ² denotes the gray value variance.
4. The method for rapidly segmenting straw returning areas based on high-resolution remote sensing images according to claim 2, wherein determining the global atmospheric light and the transmittance based on the fog concentration degree comprises:
determining edge pixels based on the fog concentration degree;
processing the edge pixels and the pixels in the sliding window with the greatest fog concentration degree using a first filter, and processing the remaining pixels other than the edge pixels using a second filter, the first filter being smaller in size than the second filter;
determining the global atmospheric light based on the filtered dark channel image;
and calculating the transmittance of each pixel based on the fog concentration degree corresponding to the pixels in the sliding window and the global atmospheric light.
5. The method for rapidly segmenting straw returning areas based on high-resolution remote sensing images according to claim 4, wherein determining edge pixels based on the fog concentration degree comprises:
constructing a fog concentration degree matrix corresponding to the sliding window with the greatest fog concentration degree, the matrix consisting of the fog concentration degrees of all the pixels in the sliding window;
calculating the gradient matrix corresponding to the fog concentration degree matrix;
determining the edge probability of each pixel in the sliding window with the maximum fog concentration degree based on the variation of the gradient matrix;
and determining edge pixels based on the edge probability of each pixel.
6. The method for rapidly segmenting straw returning areas based on high-resolution remote sensing images according to claim 4, wherein determining the global atmospheric light based on the filtered dark channel image comprises:
determining the brightness values of the filtered dark channel image, and taking a first preset number of pixels with the largest dark channel brightness values as a first candidate pixel set, the dark channel image being formed by taking, at each pixel of the RGB image, the smallest of the R, G, B channel values, that smallest channel value being the brightness value of the dark channel image;
multiplying the brightness value of each pixel in the first candidate pixel set by the fog concentration degree of the corresponding pixel;
selecting a second preset number of pixels with the largest products as a second candidate pixel set;
and acquiring the pixel values of the pixels of the second candidate pixel set in the remote sensing image, and taking the pixel value of the pixel with the highest color pixel value as the global atmospheric light, the color pixel value being the weighted average of the R, G, B channel values.
7. The method for rapidly segmenting straw returning areas based on high-resolution remote sensing images according to claim 4, wherein calculating the transmittance of each pixel based on the fog concentration degree corresponding to the pixels in the sliding window and the global atmospheric light comprises:
calculating the transmittance using the following formula:
wherein t is the transmittance of the pixel, A is the global atmospheric light, I is the filtered dark channel image, and F is the fog concentration degree of the pixel.
8. The method for rapidly segmenting straw returning areas based on high-resolution remote sensing images according to claim 1, wherein defogging the remote sensing image based on the global atmospheric light and the transmittance comprises:
defogging each pixel in the remote sensing image using the global atmospheric light and the transmittance of each pixel;
wherein the defogging treatment inverts the fog degradation model:
J = (I − A) / t + A
wherein J represents the defogged pixel value, and all the defogged pixels together form the defogged image.
9. The method for rapidly segmenting straw returning areas based on high-resolution remote sensing images according to claim 5, wherein determining the edge probability of each pixel in the sliding window with the maximum fog concentration degree based on the variation of the gradient matrix comprises:
calculating the edge probability using the following formula:
P_i = ( max over j ∈ N8(i) of |∇F_i − ∇F_j| ) / (∇F_max − ∇F_min)
wherein P_i is the edge probability of the i-th pixel, ∇F_i is the i-th point of the gradient matrix ∇F, N8(i) is the 8-neighborhood of the i-th point in ∇F, and ∇F_max and ∇F_min are the maximum and minimum values of the gradient matrix ∇F.
10. The method for rapidly segmenting straw returning areas based on high-resolution remote sensing images according to claim 5, wherein determining edge pixels based on the edge probability of each pixel comprises:
determining pixels whose edge probability is greater than a preset value as edge pixels.
CN202310912931.8A 2023-07-25 2023-07-25 Straw returning area rapid segmentation method based on high-resolution remote sensing image Active CN116630349B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310912931.8A CN116630349B (en) 2023-07-25 2023-07-25 Straw returning area rapid segmentation method based on high-resolution remote sensing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310912931.8A CN116630349B (en) 2023-07-25 2023-07-25 Straw returning area rapid segmentation method based on high-resolution remote sensing image

Publications (2)

Publication Number Publication Date
CN116630349A true CN116630349A (en) 2023-08-22
CN116630349B CN116630349B (en) 2023-10-20

Family

ID=87592471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310912931.8A Active CN116630349B (en) 2023-07-25 2023-07-25 Straw returning area rapid segmentation method based on high-resolution remote sensing image

Country Status (1)

Country Link
CN (1) CN116630349B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017138647A (en) * 2016-02-01 2017-08-10 三菱電機株式会社 Image processing device, image processing method, video photographing apparatus, video recording reproduction apparatus, program and recording medium
KR101798911B1 (en) * 2016-06-02 2017-11-17 한국항공대학교산학협력단 Dehazing method and device based on selective atmospheric light estimation
CN106530257A (en) * 2016-11-22 2017-03-22 重庆邮电大学 Remote sensing image de-fogging method based on dark channel prior model
CN107301623A (en) * 2017-05-11 2017-10-27 北京理工大学珠海学院 A kind of traffic image defogging method split based on dark and image and system
CN107203981A (en) * 2017-06-16 2017-09-26 南京信息职业技术学院 A kind of image defogging method based on fog concentration feature
CN107451962A (en) * 2017-07-03 2017-12-08 山东财经大学 A kind of image defogging method and device
US20210049744A1 (en) * 2018-04-26 2021-02-18 Chang'an University Method for image dehazing based on adaptively improved linear global atmospheric light of dark channel
US20200394767A1 (en) * 2019-06-17 2020-12-17 China University Of Mining & Technology, Beijing Method for rapidly dehazing underground pipeline image based on dark channel prior
CN112419162A (en) * 2019-08-20 2021-02-26 浙江宇视科技有限公司 Image defogging method and device, electronic equipment and readable storage medium
CN111192213A (en) * 2019-12-27 2020-05-22 杭州雄迈集成电路技术股份有限公司 Image defogging adaptive parameter calculation method, image defogging method and system
CN114219732A (en) * 2021-12-15 2022-03-22 大连海事大学 Image defogging method and system based on sky region segmentation and transmissivity refinement

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YUTONG JIANG et al.: "Fog Density Estimation and Image Defogging Based on Surrogate Modeling for Optical Depth", IEEE Transactions on Image Processing, vol. 26, no. 7, July 2017 *
DAI Weijia; MA Guoliang; LEI Bangjun; LEI Baichao: "Fast adaptive image defogging based on weighted guided filtering", Information & Communications, no. 04 *
JIANG Zhengyuan; HU Yong; SONG Wentao; GONG Cailan: "Improved dark channel defogging method for remote sensing images and analysis of its effect", Aerospace Shanghai, no. 04 *

Also Published As

Publication number Publication date
CN116630349B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN110378845B (en) Image restoration method based on convolutional neural network under extreme conditions
CN104917975B (en) A kind of adaptive automatic explosion method based on target signature
CN108734670B (en) Method for restoring single night weak-illumination haze image
CN110570360B (en) Retinex-based robust and comprehensive low-quality illumination image enhancement method
CN110827218B (en) Airborne image defogging method based on weighted correction of HSV (hue, saturation, value) transmissivity of image
CN109583378A (en) A kind of vegetation coverage extracting method and system
CN109815916A (en) A kind of recognition methods of vegetation planting area and system based on convolutional neural networks algorithm
CN109447945A (en) Wheat Basic Seedling rapid counting method based on machine vision and graphics process
CN115223004A (en) Method for generating confrontation network image enhancement based on improved multi-scale fusion
CN113077486B (en) Method and system for monitoring vegetation coverage rate in mountainous area
CN110163807B (en) Low-illumination image enhancement method based on expected bright channel
CN110689490A (en) Underwater image restoration method based on texture color features and optimized transmittance
WO2021189782A1 (en) Image processing method, system, automatic locomotion device, and readable storage medium
CN105513015A (en) Sharpness processing method for foggy-day images
CN115687850A (en) Method and device for calculating irrigation water demand of farmland
CN111489299A (en) Defogging method for multispectral remote sensing satellite image
CN116542864A (en) Unmanned aerial vehicle image defogging method based on global and local double-branch network
CN109451292B (en) Image color temperature correction method and device
CN114155173A (en) Image defogging method and device and nonvolatile storage medium
CN108833875B (en) Automatic white balance correction method
Pandian et al. Object Identification from Dark/Blurred Image using WBWM and Gaussian Pyramid Techniques
CN116630349B (en) Straw returning area rapid segmentation method based on high-resolution remote sensing image
KR102277005B1 (en) Low-Light Image Processing Method and Device Using Unsupervised Learning
CN111275698B (en) Method for detecting visibility of road in foggy weather based on unimodal offset maximum entropy threshold segmentation
CN116229404A (en) Image defogging optimization method based on distance sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant