CN115456983A - Water surface floater detection method, system, equipment and medium - Google Patents


Info

Publication number
CN115456983A
CN115456983A (application CN202211080869.2A)
Authority
CN
China
Prior art keywords
image
edge
detection
pixel
water surface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211080869.2A
Other languages
Chinese (zh)
Inventor
谭佳
李杰
何双均
王斌鑫
吴茂曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CISDI Chongqing Information Technology Co Ltd
Original Assignee
CISDI Chongqing Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CISDI Chongqing Information Technology Co Ltd filed Critical CISDI Chongqing Information Technology Co Ltd
Priority to CN202211080869.2A priority Critical patent/CN115456983A/en
Publication of CN115456983A publication Critical patent/CN115456983A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application provides a water surface floating object detection method, system, device and medium, comprising: acquiring an image containing a horizontal plane and recording it as a target image; performing edge detection on the target image with a multi-stage edge detection algorithm to generate an edge detection image; performing edge dilation on the edge detection image, connecting regions with discontinuous or separated edges to obtain an edge dilation image; then acquiring the connected regions in the edge dilation image that exceed a preset threshold, and generating corresponding calibration frames from the coordinate information of the connected points of those regions; and finally mapping the calibration frames onto the target image and determining from the mapping result whether floating objects are present on the water surface. From an image-processing standpoint, the method achieves high-precision detection of water surface floating objects using only simple matrix operations. It requires little computing power, runs quickly, and can be carried out on low-end hardware.

Description

Water surface floater detection method, system, equipment and medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a method, a system, a device, and a medium for detecting a floating object on a water surface.
Background
With rising living standards, people place higher demands on their environment. Yet the surfaces of many drinking water sources, urban rivers, surrounding lakes, reservoirs and the like are polluted to a greater or lesser extent, especially by floating objects. Floating garbage on the water surface cannot dissolve or dilute naturally and is unevenly distributed; its presence not only degrades the appearance of the water body and the quality of the urban living environment, but also pollutes the water, damages the ecological balance, and can even threaten shipping and drinking-water safety. How to identify water surface floating objects quickly and effectively, providing early warning and real-time monitoring for water surface hazard avoidance, pollutant clean-up, water traffic safety and related fields, has therefore become an important topic in intelligent recognition, informatization and sensing.
Existing water surface floating object detection methods focus mainly on deep learning, but deep learning consumes substantial computing and storage resources, which are unavailable in environments such as suburbs far from urban areas or the periphery of natural scenic spots, so such methods are rarely used in real scenes.
Therefore, how to identify water surface floating objects quickly and accurately while occupying only limited computing and storage resources (i.e., in a lightweight manner) is a problem that urgently needs to be solved.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, it is an object of the present application to provide a method, system, device and medium for detecting a floating object on a water surface, which are used to solve the problems in the prior art.
To achieve the above and other related objects, there is provided a method for detecting a water surface floating object, the method comprising the steps of:
acquiring an image containing a horizontal plane, and recording the image as a target image;
performing edge detection on the target image by using a multi-stage edge detection algorithm, and generating an edge detection image based on corresponding edge information;
performing edge expansion on the edge detection image, and communicating areas with discontinuous edges and separated edges in the edge detection image to obtain an edge expansion image;
acquiring a connected region exceeding a preset threshold value in the edge expansion image, and generating a corresponding calibration frame according to the coordinate information of a connected point of the connected region;
and mapping the calibration frame to the target image, and determining whether the target image has water surface floating objects according to a mapping result.
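The dilation-to-framing steps above can be sketched in plain NumPy. This is an illustrative toy, not the patented implementation: the edge image is a stand-in for the detector's output, and all function names and the minimum-area threshold are assumptions.

```python
import numpy as np

def label_regions(binary):
    """4-connected component labelling by iterative flood fill."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for si in range(binary.shape[0]):
        for sj in range(binary.shape[1]):
            if binary[si, sj] and not labels[si, sj]:
                current += 1
                labels[si, sj] = current
                stack = [(si, sj)]
                while stack:
                    i, j = stack.pop()
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if (0 <= ni < binary.shape[0] and 0 <= nj < binary.shape[1]
                                and binary[ni, nj] and not labels[ni, nj]):
                            labels[ni, nj] = current
                            stack.append((ni, nj))
    return labels, current

def calibration_frames(binary, min_area):
    """Bounding box (x0, y0, x1, y1) of every connected region whose pixel
    count exceeds min_area; the boxes map 1:1 onto the target image."""
    labels, n = label_regions(binary)
    frames = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        if len(ys) > min_area:
            frames.append((xs.min(), ys.min(), xs.max(), ys.max()))
    return frames

# Toy dilated edge image: one large blob (a floater) and one 1-pixel speck.
edge = np.zeros((6, 8), dtype=np.uint8)
edge[1:4, 1:5] = 1          # 12-pixel region -> kept
edge[5, 7] = 1              # isolated speck -> filtered out by the threshold
frames = calibration_frames(edge, min_area=4)
```

Only the large region survives the area threshold, yielding a single calibration frame that can be drawn on the original image.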
In an embodiment of the present application, the process of performing edge detection on the target image by using a multi-stage edge detection algorithm and generating an edge detection image based on corresponding edge information includes:
carrying out gray level processing on the target image, and converting the target image into a gray level image;
performing Gaussian smoothing filtering on the gray level image, and calculating the gradient intensity and direction of the gray level image by using a multi-level edge detection operator;
according to the gradient intensity and the direction of the gray level image, carrying out non-maximum suppression and double-threshold detection on each pixel in the gray level image to obtain edge information of the gray level image;
and suppressing isolated low-threshold points based on the edge information of the gray level image, and outputting the edge detection image after suppression is finished.
In an embodiment of the present application, the gray processing is performed on the target image, and the process of converting the target image into the gray image includes: carrying out gray level processing on the target image by using a gray level conversion formula, and converting the target image into a gray level image; wherein the gray scale conversion formula is as follows:
I(x,y)=0.2989R(x,y)+0.5870G(x,y)+0.1140B(x,y)。
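As a sketch, the conversion formula above can be applied to an H×W×3 RGB array with NumPy (the weights come from the formula; the function name is illustrative):

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to grayscale using
    I(x,y) = 0.2989*R + 0.5870*G + 0.1140*B."""
    weights = np.array([0.2989, 0.5870, 0.1140])
    return rgb @ weights  # weighted sum over the channel axis

# A 1x2 image: one pure-red pixel and one pure-white pixel.
img = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.float64)
gray = to_grayscale(img)
```

The red pixel maps to 255·0.2989 and the white pixel to 255·(0.2989+0.5870+0.1140), confirming the channel weighting.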
in an embodiment of the present application, the process of performing gaussian smoothing filtering on the grayscale image includes:
acquiring a pre-generated Gaussian filter kernel with the size of (2k+1)×(2k+1), and performing Gaussian smoothing filtering on the gray level image by using the Gaussian filter kernel; wherein the equation that generates the Gaussian filter kernel is:
H(i,j) = (1/(2πσ²))·exp(−((i−k−1)² + (j−k−1)²)/(2σ²)), 1 ≤ i,j ≤ 2k+1.
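The kernel generation can be sketched as follows; the normalization to unit sum is a standard smoothing convention and an assumption here, as is the choice of σ:

```python
import numpy as np

def gaussian_kernel(k: int, sigma: float) -> np.ndarray:
    """Generate a (2k+1) x (2k+1) Gaussian filter kernel,
    H(i,j) = 1/(2*pi*sigma^2) * exp(-((i-k-1)^2 + (j-k-1)^2) / (2*sigma^2)),
    then normalize so the weights sum to 1 (assumed convention)."""
    ax = np.arange(1, 2 * k + 2)                    # i, j = 1 .. 2k+1
    ii, jj = np.meshgrid(ax, ax, indexing="ij")
    h = np.exp(-((ii - k - 1) ** 2 + (jj - k - 1) ** 2) / (2 * sigma ** 2))
    h /= 2 * np.pi * sigma ** 2
    return h / h.sum()

kernel = gaussian_kernel(k=1, sigma=1.0)            # a 3x3 kernel
```

The kernel is symmetric and peaks at its centre, as expected of a Gaussian.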
in an embodiment of the present application, the calculating the gradient strength and the direction of the grayscale image by using the multi-level edge detection operator includes:
determining the horizontal direction G of the gray image by utilizing a multi-level edge detection operator x And a vertical direction G y
Based on the horizontal direction G x And a vertical direction G y Calculating the gradient intensity G of the pixel points in the gray level image, including:
Figure BDA0003833183730000022
based on the horizontal direction G x And a vertical direction G y Calculating the direction theta of a pixel point in the gray level image, including:
θ=arctan(G y /G x );
in the formula, arctan is an arctangent function.
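A minimal sketch of the gradient computation. The patent does not fix a specific derivative operator; the Sobel kernels below are an assumption (they are the usual choice in multi-stage Canny-style detectors), and `arctan2` is used instead of `arctan` so that Gx = 0 is handled safely:

```python
import numpy as np

# Sobel kernels -- one common choice for the horizontal/vertical operators.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Naive 'valid'-mode 2-D correlation, enough for a sketch."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def gradient(img):
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    strength = np.sqrt(gx ** 2 + gy ** 2)   # G = sqrt(Gx^2 + Gy^2)
    direction = np.arctan2(gy, gx)          # theta = arctan(Gy/Gx)
    return strength, direction

# A vertical step edge: the gradient should point horizontally (theta = 0).
step = np.array([[0, 0, 10, 10]] * 4, dtype=float)
g, theta = gradient(step)
```

On the step image every interior pixel gets gradient strength 40 and direction 0, i.e. a purely horizontal gradient across the vertical edge.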
In an embodiment of the present application, the process of performing non-maximum suppression on each pixel in the grayscale image includes:
selecting a pixel from the gray level image and marking the pixel as a current pixel;
comparing the gradient intensity of the current pixel with one pixel along the positive gradient direction and one pixel along the negative gradient direction respectively;
if the gradient intensity of the current pixel is greater than the other two pixels, the current pixel is reserved as an edge point; and if the gradient intensity of the current pixel is less than or equal to at least one of the other two pixels, suppressing the current pixel.
In an embodiment of the present application, the process of performing dual threshold detection on each pixel in the grayscale image includes:
acquiring residual pixels after non-maximum value suppression is finished, and taking the residual pixels as edge pixels of the gray level image;
comparing the gradient values of the edge pixels with a first threshold value and a second threshold value respectively;
if the gradient value of the edge pixel is larger than the first threshold value, marking the edge pixel as a strong edge pixel;
if the gradient value of the edge pixel is smaller than the first threshold value and larger than the second threshold value, marking the edge pixel as a weak edge pixel;
and if the gradient value of the edge pixel is smaller than the second threshold value, inhibiting the edge pixel.
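The three-way classification can be sketched directly; the label values and the treatment of gradient values exactly equal to a threshold are assumptions (the patent leaves the equality cases between strong and weak unspecified):

```python
import numpy as np

STRONG, WEAK, SUPPRESSED = 2, 1, 0   # illustrative label values

def double_threshold(grad, high, low):
    """Classify each surviving edge pixel: above the first (high) threshold
    -> strong edge; between the two thresholds -> weak edge; below the
    second (low) threshold -> suppressed."""
    labels = np.full(grad.shape, SUPPRESSED, dtype=int)
    labels[grad > high] = STRONG
    labels[(grad > low) & (grad <= high)] = WEAK
    return labels

grad = np.array([[120.0, 40.0],
                 [10.0, 80.0]])
labels = double_threshold(grad, high=100.0, low=30.0)
```

Here 120 becomes a strong edge, 40 and 80 become weak edges, and 10 is suppressed.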
The present application also provides a water surface floating object detection system, the system comprising:
the image acquisition module is used for acquiring an image containing a horizontal plane and recording the image as a target image;
the edge detection module is used for carrying out edge detection on the target image by utilizing a multi-stage edge detection algorithm and generating an edge detection image based on corresponding edge information;
the edge expansion module is used for performing edge expansion on the edge detection image and communicating areas with discontinuous edges and separated edges in the edge detection image to obtain an edge expansion image;
the calibration module is used for acquiring a connected region exceeding a preset threshold value in the edge expansion image and generating a corresponding calibration frame according to the coordinate information of a connected point of the connected region;
and the image detection module is used for mapping the calibration frame to the target image and determining whether the water surface floater exists in the target image according to the mapping result.
The present application also provides a water surface floating object detection device, comprising:
a processor; and
a computer readable medium having stored thereon instructions that, when executed by the processor, cause the device to perform the method as described in any of the above.
The present application also provides a computer readable medium having stored thereon instructions which are loaded by a processor and which perform the method as defined in any one of the above.
As described above, the present application provides a method, a system, a device and a medium for detecting a floating object on a water surface, which have the following advantages:
firstly, an image containing a horizontal plane is acquired and recorded as a target image; edge detection is then performed on the target image with a multi-stage edge detection algorithm, and an edge detection image is generated from the corresponding edge information; edge dilation is performed on the edge detection image, connecting regions with discontinuous or separated edges to obtain an edge dilation image; the connected regions in the edge dilation image that exceed a preset threshold are then acquired, and corresponding calibration frames are generated from the coordinate information of the connected points of those regions; finally, the calibration frames are mapped onto the target image, and the presence of water surface floating objects is determined from the mapping result. The present application can therefore achieve high-precision detection of water surface floating objects, from an image-processing standpoint, using only simple matrix operations. Because the required computing power is small, computation is fast and can be carried out on low-end hardware. Compared with the GPU (graphics processing unit) required by current deep learning training models, the computational cost is greatly reduced; in terms of storage resources, detection only requires captured still frames, saving a large amount of storage. From the point of view of integration, these advantages in computation and storage make the method easy to integrate into existing equipment.
From the perspective of environmental suitability, its low power consumption makes it well suited to suburban and field environments. In addition, the present application is applicable to detecting water surface floating objects in a variety of scenes, and therefore has high practicality and promotion value.
Drawings
FIG. 1 is a schematic diagram of an exemplary system architecture to which one or more embodiments of the present application may be applied;
fig. 2 is a schematic flow chart of a method for detecting a floating object on a water surface according to an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of a multi-stage edge detection algorithm according to an embodiment of the present disclosure;
fig. 4 is a schematic flow chart of a method for detecting a float on a water surface according to another embodiment of the present disclosure;
fig. 5a to 5d are schematic diagrams illustrating the monocular detection effect of weak water surface clutter according to an embodiment of the present application;
fig. 6a to 6d are schematic diagrams illustrating the monocular detection effect of strong water surface clutter according to an embodiment of the present application;
fig. 7a to 7d are schematic diagrams illustrating the effect of multi-target detection of weak water surface clutter according to an embodiment of the present application;
fig. 8a to 8d are schematic diagrams illustrating the effect of multi-target detection of strong water surface clutter according to an embodiment of the present application;
fig. 9 is a schematic hardware structure diagram of a water surface floating object detection system according to an embodiment of the present application;
fig. 10 is a schematic diagram of a hardware structure of a water surface floater detecting apparatus suitable for implementing one or more embodiments in the present application.
Detailed Description
The following describes the embodiments of the present application through specific examples; other advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein. The present application may also be implemented or applied through other, different embodiments, and the details herein may be modified or changed in various respects without departing from the spirit of the present application. It should be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided with the following embodiments only illustrate the basic idea of the present application: they show only the components related to the present application and are not drawn according to the number, shape and size of the components in actual implementation. The type, quantity and proportion of each component may vary freely in practice, and the component layout may be more complicated.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which technical solutions in one or more embodiments of the present application may be applied. As shown in fig. 1, system architecture 100 may include a terminal device 110, a network 120, and a server 130. The terminal device 110 may include various electronic devices such as a smart phone, a tablet computer, a notebook computer, and a desktop computer. The server 130 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services. Network 120 may be any type of communications medium capable of providing a communications link between terminal device 110 and server 130, such as a wired communications link or a wireless communications link.
The system architecture in the embodiments of the present application may have any number of terminal devices, networks, and servers, according to implementation needs. For example, the server 130 may be a server group composed of a plurality of server devices. In addition, the technical solution provided in the embodiment of the present application may be applied to the terminal device 110, or may be applied to the server 130, or may be implemented by both the terminal device 110 and the server 130, which is not particularly limited in this application.
In an embodiment of the present application, the terminal device 110 or the server 130 may first obtain an image including a horizontal plane and record it as a target image; then perform edge detection on the target image with a multi-stage edge detection algorithm and generate an edge detection image from the corresponding edge information; perform edge dilation on the edge detection image, connecting regions with discontinuous or separated edges to obtain an edge dilation image; then acquire the connected regions in the edge dilation image that exceed a preset threshold and generate corresponding calibration frames from the coordinate information of the connected points of those regions; and finally map the calibration frames onto the target image and determine from the mapping result whether water surface floating objects are present. By executing the water surface floating object detection method on the terminal device 110 or the server 130, high-precision detection can be achieved, from an image-processing standpoint, using only simple matrix operations. Because the required computing power is small, computation is fast and can be carried out on low-end hardware. Compared with the GPU (graphics processing unit) required by current deep learning training models, the computational cost is greatly reduced; in terms of storage resources, detection only requires captured still frames, saving a large amount of storage. From the point of view of integration, these advantages in computation and storage make the method easy to integrate into existing devices.
From the perspective of environmental adaptability, its low power consumption makes it well suited to suburban and field environments. In addition, the method is applicable to detecting water surface floating objects in a variety of scenes, and therefore has high practicality and promotion value.
The above section describes the content of an exemplary system architecture to which the solution of the present application is applied, and the following description continues with the water surface float detection method of the present application.
Fig. 2 shows a schematic flow chart of a method for detecting a floating object on a water surface according to an embodiment of the present application. Specifically, in an exemplary embodiment, as shown in fig. 2, the present embodiment provides a water surface float detection method, including the steps of:
s210, acquiring an image containing a horizontal plane, and recording the image as a target image. The target image in this embodiment may be a directly input image including a horizontal plane, or an image including a horizontal plane captured from a video.
S220, performing edge detection on the target image by using a multi-stage edge detection algorithm, and generating an edge detection image based on corresponding edge information;
s230, performing edge expansion on the edge detection image, and communicating areas with discontinuous edges and separated edges in the edge detection image to obtain an edge expansion image;
s240, acquiring a connected region exceeding a preset threshold value in the edge expansion image, and generating a corresponding calibration frame according to the coordinate information of a connected point of the connected region;
and S250, mapping the calibration frame to the target image, and determining whether the water surface floating objects exist in the target image according to a mapping result.
As shown in fig. 3, the process of performing edge detection on the target image by using a multi-stage edge detection algorithm and generating an edge detection image based on corresponding edge information according to this embodiment includes:
and carrying out gray processing on the target image, and converting the target image into a gray image. Specifically, the process of performing gray processing on the target image and converting the target image into a gray image includes: carrying out gray level processing on the target image by using a gray level conversion formula, and converting the target image into a gray level image; wherein the gray scale conversion formula is as follows:
I(x,y)=0.2989R(x,y)+0.5870G(x,y)+0.1140B(x,y)。
and performing Gaussian smoothing filtering on the gray level image, and calculating the gradient intensity and the direction of the gray level image by using a multi-level edge detection operator. Specifically, the process of performing gaussian smoothing filtering on the grayscale image includes: obtaining a pre-generated Gaussian filter kernel of size (2k + 1) and utilizing the Gaussian filter kernelChecking the gray level image to perform Gaussian smoothing filtering; wherein the equation that generates the Gaussian filter kernel comprises:
Figure BDA0003833183730000061
and according to the gradient intensity and the direction of the gray level image, carrying out non-maximum suppression and double-threshold detection on each pixel in the gray level image to obtain edge information of the gray level image. Specifically, the process of calculating the gradient strength and the direction of the gray image by using the multi-level edge detection operator comprises the following steps: determining the horizontal direction G of the gray image by utilizing a multi-level edge detection operator x And a vertical direction G y (ii) a Based on the horizontal direction G x And a vertical direction G y Calculating the gradient intensity G of the pixel points in the gray level image, including:
Figure BDA0003833183730000071
based on the horizontal direction G x And a vertical direction G y Calculating the direction theta of a pixel point in the gray level image, including: θ = arctan (G) y /G x ) (ii) a In the formula, arctan is an arctangent function. The process of non-maxima suppression of each pixel in the grayscale image includes: selecting a pixel from the gray level image and marking the pixel as a current pixel; comparing the gradient intensity of the current pixel with one pixel along the positive gradient direction and one pixel along the negative gradient direction respectively; if the gradient intensity of the current pixel is greater than the other two pixels, the current pixel is reserved as an edge point; and if the gradient intensity of the current pixel is less than or equal to at least one of the other two pixels, suppressing the current pixel. The process of performing dual threshold detection on each pixel in the grayscale image includes: acquiring residual pixels after non-maximum value suppression is completed, and taking the residual pixels as edge pixels of the gray level image; comparing the gradient values of the edge pixels with a first threshold value and a second threshold value respectively; if the gradient value of the edge pixel is larger than the first threshold value, marking the edge pixel as a strong edge pixel; if the gradient of the edge pixelIf the value is less than the first threshold and greater than the second threshold, marking the edge pixel as a weak edge pixel; and if the gradient value of the edge pixel is smaller than the second threshold value, inhibiting the edge pixel. In this embodiment, the value of the first threshold is greater than the second threshold, and the values of the first threshold and the second threshold may be set according to an actual situation, which is not limited by the specific value in this embodiment.
And suppressing the isolated low threshold point based on the edge information of the gray image, and outputting an edge detection image after the suppression is finished.
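This final suppression step can be sketched as a hysteresis pass: a weak (low-threshold) pixel survives only if it touches a strong edge pixel. The 8-neighbourhood connectivity and the label values are assumptions carried over from the double-threshold sketch above:

```python
import numpy as np

def hysteresis(labels):
    """Keep weak edge pixels (label 1) only if a strong edge pixel
    (label 2) lies in their 8-neighbourhood; isolated weak points
    are suppressed. Returns a 0/255 edge image."""
    h, w = labels.shape
    out = (labels == 2).astype(int) * 255   # strong edges kept outright
    padded = np.pad(labels, 1)              # zero border simplifies indexing
    for i in range(h):
        for j in range(w):
            if labels[i, j] == 1 and (padded[i:i + 3, j:j + 3] == 2).any():
                out[i, j] = 255             # weak pixel attached to a strong one
    return out

labels = np.array([[2, 1, 0],
                   [0, 0, 0],
                   [0, 0, 1]])              # bottom-right: isolated weak point
edges = hysteresis(labels)
```

The weak pixel next to the strong one is kept, while the isolated weak pixel in the corner is suppressed.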
It can thus be seen that this embodiment achieves high-precision detection of water surface floating objects, from an image-processing standpoint, using only simple matrix operations. Because the required computing power is small, computation is fast and can be carried out on low-end hardware; compared with the GPU required by current deep learning training models, the computational cost is greatly reduced. In terms of storage resources, detection only requires captured still frames, saving a large amount of storage. These advantages in computation and storage make the embodiment easy to integrate into existing devices, and its low power consumption makes it well suited to suburban and field environments. In addition, this embodiment is applicable to detecting water surface floating objects in a variety of scenes, and therefore has high practicality and promotion value.
In another exemplary embodiment, as shown in fig. 4, the present application further provides a water surface float detection method comprising the steps of:
1) Reading the data. The water surface video or picture is read through video transmission, real-time video recording, or photographing.
2) Edge detection. With computing resources limited, the Canny image edge detection algorithm is adopted to extract the edge information in the image in order to identify water surface floating objects. It should be noted in particular that, regarding the influence of water surface clutter on the edge detection result, strong water surface clutter can be suppressed through adaptive high and low filtering thresholds. The Canny edge detection algorithm is a multi-stage edge detection algorithm; compared with other edge detection algorithms, its accuracy in identifying image edges is much higher.
3) Edge dilation. After the edge information of the image is obtained, edge dilation is applied. Its main purpose is to reduce the influence of floating objects that blend into their environment (such as plastic bags, plastic bottles similar in color to the water surface, transparent water bottles and the like), since such blending easily produces discontinuous and separated edges. In this embodiment, some edge regions may be fractured after Canny edge detection, forming a number of scattered small connected regions; these need to be joined through a dilation operation into a large connected region of a certain area (to distinguish it from strong background clutter), so that it can be picked up by the subsequent detection step. Depending on the shape of the structuring element, a distinction can be made between circular dilation and rectangular dilation. In this embodiment, the image obtained by Canny edge detection may be dilated with a circle of radius 8.
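Circular dilation can be sketched as follows. The embodiment specifies radius 8; the example below uses radius 3 on a small grid purely to keep the illustration compact, and the function names are illustrative:

```python
import numpy as np

def disk(radius: int) -> np.ndarray:
    """Circular structuring element (the embodiment uses radius 8)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x ** 2 + y ** 2 <= radius ** 2).astype(np.uint8)

def dilate(binary: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Binary dilation: a pixel becomes 1 if the kernel, centred on it,
    overlaps any 1-pixel of the input."""
    r = kernel.shape[0] // 2
    padded = np.pad(binary, r)
    out = np.zeros_like(binary)
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * r + 1, j:j + 2 * r + 1]
            out[i, j] = int((window * kernel).any())
    return out

# Two edge fragments four pixels apart merge into one region after dilation.
edges = np.zeros((7, 9), dtype=np.uint8)
edges[3, 2] = edges[3, 6] = 1
merged = dilate(edges, disk(3))
```

After dilation the two fragments form a single connected band along their row, which is exactly the effect used to join fractured edge regions.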
4) Frame the target floating object. After the edge dilation image is obtained, the area of the suspected target needs to be framed. Here, considering that a small amount of edge information (point clutter) from extremely strong clutter is retained, a threshold is set in the framing step, and clutter edges below the threshold are filtered out. In this embodiment, after the dilation operation, the suspected floating targets on the water surface need to be framed. Connected regions exceeding a certain threshold are framed, following the idea that a real boundary tends to yield a connected region of larger area after dilation. Once a connected region meeting the requirement is detected, a calibration frame of corresponding size is generated from the coordinate information of the connected points, and the position information is mapped onto the original image to observe the actual detection effect. It should be noted that the threshold cannot be too small: it is mainly used to reduce the influence of strong clutter edges on the detection result.
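The framing step above can be sketched as connected-component labeling followed by an area filter. The sketch below assumes the area threshold of 80 used in the embodiments; the function name and box format are illustrative, and a library routine such as OpenCV's `cv2.connectedComponentsWithStats` would normally replace the hand-written flood fill:

```python
import numpy as np
from collections import deque

def frame_targets(binary, area_threshold=80):
    # Label 8-connected regions of a binary (dilated edge) image and
    # return bounding boxes (row0, col0, row1, col1) for every region
    # whose pixel area exceeds area_threshold.
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    boxes = []
    for si in range(h):
        for sj in range(w):
            if binary[si, sj] and not seen[si, sj]:
                q = deque([(si, sj)])
                seen[si, sj] = True
                pixels = []
                while q:                      # BFS flood fill
                    i, j = q.popleft()
                    pixels.append((i, j))
                    for di in (-1, 0, 1):
                        for dj in (-1, 0, 1):
                            ni, nj = i + di, j + dj
                            if (0 <= ni < h and 0 <= nj < w
                                    and binary[ni, nj] and not seen[ni, nj]):
                                seen[ni, nj] = True
                                q.append((ni, nj))
                if len(pixels) > area_threshold:   # filter point clutter
                    rows = [p[0] for p in pixels]
                    cols = [p[1] for p in pixels]
                    boxes.append((min(rows), min(cols), max(rows), max(cols)))
    return boxes
```

Regions below the area threshold, such as isolated residue of strong clutter, produce no box and are thereby filtered out exactly as the step describes.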
5) Map the calibration frames. The position information of the calibration frame in the dilated image is acquired and mapped onto the original image, visually displaying the floating object and the detection effect.
Specifically, as shown in fig. 3, the process of performing edge detection on an image or a picture by using the Canny image edge detection algorithm includes:
a. Grayscale conversion:
the Canny operator can only process a single-channel gray image, and common screens or pictures are three-channel color pictures, so that the original image needs to be subjected to gray conversion before edge detection is carried out. The conversion formula is as follows:
I(x,y)=0.2989R(x,y)+0.5870G(x,y)+0.1140B(x,y);
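This conversion formula can be sketched directly in NumPy (a minimal sketch; the function name is illustrative, and the input is assumed to be a channel-last RGB array):

```python
import numpy as np

def to_gray(rgb):
    # I(x, y) = 0.2989 R + 0.5870 G + 0.1140 B for an H x W x 3 RGB image.
    rgb = np.asarray(rgb, dtype=np.float64)
    return 0.2989 * rgb[..., 0] + 0.5870 * rgb[..., 1] + 0.1140 * rgb[..., 2]
```

The weights sum to 0.9999, so a pure white pixel maps to just under full intensity.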
b. Gaussian smoothing filtering:
in order to reduce the influence of noise on the edge detection result as much as possible, it is necessary to filter out the noise to prevent erroneous detection caused by the noise. In order to smooth the image, a gaussian filter is used to convolve with the image to achieve the effect of smoothing the image to reduce the noise effect apparent on the edge detector. The generation equation for the Gaussian filter kernel of size (2k + 1) (. 2k + 1) is given by:
H(i,j) = (1/(2πσ²)) exp(−[(i−k−1)² + (j−k−1)²]/(2σ²)), 1 ≤ i,j ≤ 2k+1;
it should be noted that the choice of the size of the gaussian convolution kernel will affect the performance of the Canny detector. The larger the size, the less sensitive the detector is to noise, but the positioning error of the edge detection will also increase slightly.
c. Calculate gradient strength and direction:
edges in an image can point in various directions, so the Canny algorithm uses four operators to detect horizontal, vertical, and diagonal edges in an image. The operator of edge detection returns to the horizontal direction (G) x ) And the vertical direction (G) y ) The gradient G and the direction theta of the pixel point can be determined according to the first derivative value.
G = √(Gx² + Gy²);
θ = arctan(Gy/Gx);
In the formula, G is the gradient strength, θ is the gradient direction, and arctan is the arctangent function.
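A sketch of this step using the Sobel first-derivative operators (the patent does not name its four operators, so Sobel is an assumption, and `arctan2` is used instead of arctan(Gy/Gx) to avoid division by zero where Gx = 0):

```python
import numpy as np

def conv2(img, kernel):
    # Same-size cross-correlation with zero padding.
    r = kernel.shape[0] // 2
    padded = np.pad(img.astype(float), r)
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kernel.shape[0],
                                      j:j + kernel.shape[1]] * kernel)
    return out

def gradients(gray):
    # Gx, Gy from the Sobel operators, then G = sqrt(Gx^2 + Gy^2)
    # and theta = arctan2(Gy, Gx).
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = conv2(gray, sobel_x)        # horizontal derivative Gx
    gy = conv2(gray, sobel_x.T)      # vertical derivative Gy
    return np.sqrt(gx ** 2 + gy ** 2), np.arctan2(gy, gx)
```

On a vertical step edge the response is purely horizontal: Gy vanishes in the interior rows and θ is 0 along the edge.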
d. Non-maxima suppression:
non-maximum suppression is an edge thinning technique, and the effect of non-maximum suppression is a "thin" edge. After gradient computation of the image, edges extracted based on gradient values alone are still blurred. While non-maximum suppression may help suppress all gradient values other than the local maximum to 0, the step of performing non-maximum suppression on each pixel in the gradient image is:
d1) Compare the gradient strength of the current pixel with that of the two pixels along the positive and negative gradient directions;
d2) If the gradient strength of the current pixel is the largest of the three, the pixel is retained as an edge point; otherwise, it is suppressed.
Typically, for a more accurate comparison, linear interpolation between the two adjacent pixels straddling the gradient direction is used to obtain the pixel gradients to be compared.
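The two-step procedure above can be sketched as follows. This simplified sketch quantizes the gradient direction to 0°/45°/90°/135° and compares raw neighbors rather than performing the linear interpolation just mentioned; the function name is illustrative:

```python
import numpy as np

def non_max_suppress(g, theta):
    # Keep a pixel only if its gradient magnitude is a local maximum
    # along the (quantized) gradient direction; otherwise set it to 0.
    h, w = g.shape
    out = np.zeros_like(g)
    angle = (np.rad2deg(theta) + 180.0) % 180.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:        # horizontal gradient
                n1, n2 = g[i, j - 1], g[i, j + 1]
            elif a < 67.5:                    # 45-degree diagonal
                n1, n2 = g[i - 1, j + 1], g[i + 1, j - 1]
            elif a < 112.5:                   # vertical gradient
                n1, n2 = g[i - 1, j], g[i + 1, j]
            else:                             # 135-degree diagonal
                n1, n2 = g[i - 1, j - 1], g[i + 1, j + 1]
            if g[i, j] >= n1 and g[i, j] >= n2:
                out[i, j] = g[i, j]           # d2) keep local maximum
    return out
```

A blurred three-pixel-wide ridge collapses to its single-pixel crest, which is the "thin" edge the step describes.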
e. Double-threshold detection:
after applying non-maximum suppression, the remaining pixels can more accurately represent the actual edges in the image. However, there are still some edge pixels due to noise and color variations. To account for these spurious responses, edge pixels must be filtered with weak gradient values and edge pixels with high gradient values are retained, which can be achieved by selecting high and low thresholds. Marking an edge pixel as a strong edge pixel, assuming that its gradient value is above a high threshold; when the gradient value of the edge pixel is smaller than the high threshold value and larger than the low threshold value, marking the edge pixel as a weak edge pixel; when the gradient value of the edge pixel is less than the low threshold, it is suppressed. It should be noted that: the choice of threshold depends on the content of a given input image.
f. Suppression of isolated low threshold points:
through the above steps, the pixels classified as strong edges have been determined as edges because they are extracted from the real edges in the image. However, there will be some controversy over weak edge pixels, as these pixels can be extracted from real edges, and can also be due to noise or color variations. In order to obtain accurate results, weak edges caused by the latter should be suppressed. In general, weak edge pixels caused by real edges will be connected to strong edge pixels, while the noise response is not connected. To track edge joins, by looking at the weak edge pixels and their 8 neighborhood pixels, the weak edge point can be retained as the true edge as long as one of them is a strong edge pixel.
g. Outputting an edge detection image:
after the operation, a drawing board can be established, and the image information of the edge finally detected by the original image is displayed.
As can be seen from the algorithm design, the whole algorithm provided by this embodiment requires no training: from the viewpoint of image processing, high-precision water surface floating object detection can be realized with only simple matrix operations. Because the required computation is small, the computing time is short, and the computing task can be carried out on low-end hardware such as a 51-series single-chip microcomputer or an STM32 development board; compared with the GPU required for training a deep learning model, the computing cost is greatly reduced. From the perspective of storage resources, the method only needs to detect on captured pictures, whereas deep-learning-based water surface floater detection generally depends on video and therefore occupies relatively large storage; a large amount of storage can thus be saved. From the point of view of integrability, owing to these advantages in computation and storage, the method is easier to integrate into existing equipment than deep learning methods. From the perspective of environmental suitability, the lower power consumption makes it more suitable for suburban and field environments. In summary, the method provided by this embodiment has greater practicability and popularization value in practice than deep learning water surface floater detection methods.
According to the above descriptions, in a specific embodiment, the present embodiment provides a method for detecting a floating object on a water surface, which belongs to the same technical concept as the detection method in some embodiments described above and has the same processing flow, so that the present embodiment is not described herein again. The detection method in the embodiment is mainly applied to a weak water surface clutter single-target detection scene and is used for detecting the floater on the weak water surface. In this embodiment, the gaussian kernel dimension is set to 1, the high and low thresholds in the dual-threshold detection are set to 0.6 and 0.2, respectively, the radius of the disk in the edge expansion is set to 8, the threshold of the area size of the connected region is set to 80, and other parameters adopt conventional default parameters, which is not described herein again. The detection effect of the water surface floating object in the present embodiment is shown in fig. 5a, 5b, 5c and 5d, respectively. Specifically, fig. 5a is an original image in a weak water surface clutter single target detection scene, fig. 5b is a Canny edge detection image in the weak water surface clutter single target detection scene, fig. 5c is an image after edge expansion and target framing are completed in the weak water surface clutter single target detection scene, and fig. 5d is a water surface floater detection result image in the weak water surface clutter single target detection scene.
According to the above description, in another embodiment, the present embodiment provides a method for detecting a floating object on a water surface, which belongs to the same technical concept as the detection method in some embodiments described above and has the same processing flow, so that the detailed description is omitted here. The detection method in the embodiment is mainly applied to a strong water surface clutter single-target detection scene and is used for detecting the floater on the strong water surface. In this embodiment, the gaussian kernel dimension is set to 1, the high and low thresholds in the dual-threshold detection are set to 0.6 and 0.2, respectively, the radius of the disk in the edge expansion is set to 8, the threshold of the area size of the connected region is set to 80, and other parameters adopt conventional default parameters, which is not described herein again. The detection effect of the water surface floating object of the present embodiment is shown in fig. 6a, fig. 6b, fig. 6c and fig. 6d, respectively. Specifically, fig. 6a is an original image in a strong water surface clutter single target detection scene, fig. 6b is a Canny edge detection image in the strong water surface clutter single target detection scene, fig. 6c is an image after edge expansion and target framing are completed in the strong water surface clutter single target detection scene, and fig. 6d is a water surface floater detection result image in the strong water surface clutter single target detection scene.
As can be seen from fig. 5a to 5d and fig. 6a to 6d, for single target detection the designed method can effectively detect water surface floating objects under both weak and strong water surface clutter. In the edge detection stage, the environmental adaptability of the floating objects causes the detected edges to be discrete and discontinuous, so these points only reflect the local contour of the floating object and do not form a connected region usable for target framing. After edge dilation, the dense edge areas of the image become connected and have a certain area, so that real edges can be well distinguished from the singular points formed by water surface clutter.
According to the above descriptions, in a specific embodiment, the present embodiment provides a method for detecting a floating object on a water surface, which belongs to the same technical concept as the detection method in some embodiments described above and has the same processing flow, so that the present embodiment is not described herein again. The detection method in the embodiment is mainly applied to a multi-target detection scene of the clutter of the weak water surface and is used for detecting the floaters of the weak water surface. In this embodiment, the gaussian kernel dimension is set to 1, the high and low thresholds in the dual-threshold detection are set to 0.6 and 0.2, respectively, the radius of the disk in the edge expansion is set to 8, the threshold of the area size of the connected region is set to 80, and other parameters adopt conventional default parameters, which is not described herein again. The detection effect of the water surface floating object of the present embodiment is shown in fig. 7a, fig. 7b, fig. 7c and fig. 7d, respectively. Specifically, fig. 7a is an original image in a weak water surface clutter multi-target detection scene, fig. 7b is a Canny edge detection image in the weak water surface clutter multi-target detection scene, fig. 7c is an image after edge expansion and target framing are completed in the weak water surface clutter multi-target detection scene, and fig. 7d is a water surface floater detection result image in the weak water surface clutter multi-target detection scene.
According to the above descriptions, in a specific embodiment, the present embodiment provides a method for detecting a floating object on a water surface, which belongs to the same technical concept as the detection method in some embodiments described above and has the same processing flow, so that the present embodiment is not described herein again. The detection method in the embodiment is mainly applied to a strong water surface clutter multi-target detection scene and is used for detecting the floaters on the strong water surface. In this embodiment, the gaussian kernel dimension is set to 1, the high and low thresholds in the dual-threshold detection are set to 0.6 and 0.2, respectively, the radius of the disk in the edge expansion is set to 8, the threshold of the area size of the connected region is set to 80, and other parameters adopt conventional default parameters, which are not described herein again. The detection effect of the water surface floating object of the present embodiment is shown in fig. 8a, 8b, 8c and 8d, respectively. Specifically, fig. 8a is an original image in a strong water surface clutter multi-target detection scene, fig. 8b is a Canny edge detection image in the strong water surface clutter multi-target detection scene, fig. 8c is an image after edge expansion and target framing are completed in the strong water surface clutter multi-target detection scene, and fig. 8d is a water surface floater detection result image in the strong water surface clutter multi-target detection scene.
As can be seen from fig. 7a to 7d and fig. 8a to 8d, for multi-target detection the detection accuracy of the water surface floater detection method provided by the application can be as high as 91.67%. Compared with weak water surface clutter, strong water surface clutter generates more noise interference in edge detection. After dilation, however, the interference noise can hardly form a large-area connected region, and is filtered out in the target detection process.
In summary, the present application provides a method for detecting a floating object on a water surface, which includes first obtaining an image including a horizontal plane and recording the image as a target image; then, carrying out edge detection on the target image by utilizing a multi-stage edge detection algorithm, and generating an edge detection image based on corresponding edge information; performing edge expansion on the edge detection image, and communicating areas with discontinuous edges and edge separation in the edge detection image to obtain an edge expansion image; then, acquiring a connected region exceeding a preset threshold value in the edge expansion image, and generating a corresponding calibration frame according to the connected point coordinate information of the connected region; and finally, mapping the calibration frame to the target image, and determining whether the water surface floater exists in the target image according to the mapping result. Therefore, the method and the device can realize high-precision water surface floating object detection by only performing simple matrix operation from the viewpoint of image processing. Because the required computing power is small, the method not only needs short computing time, but also can realize computing tasks only by adopting low-end hardware equipment (such as a 51 single chip microcomputer and an STM32 development board). Compared with the GPU required by the current deep learning training model, the calculation cost is greatly reduced; in addition, from the aspect of storage resources, the method only needs to adopt a mode of intercepting the picture for detection, and a large amount of storage can be saved. From the point of view of integration, the method has advantages in calculation and storage methods, so that the method is easy to integrate into existing equipment. 
From the perspective of environmental suitability, the low power consumption of the application makes it more suitable for suburban and field environments. In addition, the application is applicable to multiple detection scenes for water surface floaters, and thus has greater practicability and popularization value.
As shown in fig. 9, the present application further provides a system for detecting a float on a water surface, the system comprising:
the image acquisition module 910 is configured to acquire an image including a horizontal plane and record the image as a target image. The target image in this embodiment may be a directly input image containing a horizontal plane, or may be an image containing a horizontal plane captured from a video.
An edge detection module 920, configured to perform edge detection on the target image by using a multi-level edge detection algorithm, and generate an edge detection image based on corresponding edge information;
an edge expansion module 930, configured to perform edge expansion on the edge detection image, and communicate areas where edges in the edge detection image are discontinuous and separated to obtain an edge expansion image;
a calibration module 940, configured to obtain a connected region exceeding a preset threshold in the edge expansion image, and generate a corresponding calibration frame according to the connected point coordinate information of the connected region;
the image detection module 950 is configured to map the calibration frame into the target image, and determine whether a water surface floating object exists in the target image according to a mapping result.
As shown in fig. 3, the process of performing edge detection on the target image by using a multi-stage edge detection algorithm and generating an edge detection image based on corresponding edge information in this embodiment includes:
and carrying out gray processing on the target image, and converting the target image into a gray image. Specifically, the process of performing gray processing on the target image and converting the target image into a gray image includes: carrying out gray processing on the target image by using a gray conversion formula, and converting the target image into a gray image; wherein, the gray scale conversion formula is as follows:
I(x,y)=0.2989R(x,y)+0.5870G(x,y)+0.1140B(x,y)。
and performing Gaussian smoothing filtering on the gray level image, and calculating the gradient intensity and the direction of the gray level image by using a multi-level edge detection operator. Specifically, the process of performing gaussian smoothing filtering on the grayscale image includes: acquiring a pre-generated Gaussian filter kernel with the size of (2k + 1), and performing Gaussian smoothing filtering on the gray level image by using the Gaussian filter kernel; wherein the equation that generates the Gaussian filter kernel comprises:
H(i,j) = (1/(2πσ²)) exp(−[(i−k−1)² + (j−k−1)²]/(2σ²)), 1 ≤ i,j ≤ 2k+1.
and according to the gradient intensity and the direction of the gray image, carrying out non-maximum suppression and double-threshold detection on each pixel in the gray image to obtain edge information of the gray image. Specifically, the process of calculating the gradient strength and the direction of the gray image by using the multi-level edge detection operator comprises the following steps: determining the horizontal direction G of the gray image by utilizing a multi-level edge detection operator x And a vertical direction G y (ii) a Based on the horizontal direction G x And a vertical direction G y Calculating the gradient intensity G of the pixel points in the gray level image, including:
G = √(Gx² + Gy²);
based on the horizontal direction G x And a vertical direction G y ComputingThe direction theta of the pixel points in the gray level image is as follows: θ = arctan (G) y /G x ) (ii) a In the formula, arctan is an arctangent function. The process of non-maxima suppression of each pixel in the grayscale image includes: selecting a pixel from the gray level image and recording the pixel as a current pixel; comparing the gradient intensity of the current pixel with one pixel along the positive gradient direction and one pixel along the negative gradient direction respectively; if the gradient intensity of the current pixel is greater than the other two pixels, the current pixel is reserved as an edge point; and if the gradient intensity of the current pixel is less than or equal to at least one of the other two pixels, suppressing the current pixel. The process of performing dual threshold detection on each pixel in the grayscale image includes: acquiring residual pixels after non-maximum value suppression is completed, and taking the residual pixels as edge pixels of the gray level image; comparing the gradient values of the edge pixels with a first threshold value and a second threshold value respectively; if the gradient value of the edge pixel is larger than the first threshold value, marking the edge pixel as a strong edge pixel; if the gradient value of the edge pixel is smaller than the first threshold value and larger than the second threshold value, marking the edge pixel as a weak edge pixel; and if the gradient value of the edge pixel is smaller than the second threshold value, suppressing the edge pixel. In this embodiment, the value of the first threshold is greater than the second threshold, and the values of the first threshold and the second threshold may be set according to an actual situation, and this embodiment is not limited to specific values.
And suppressing the isolated low threshold point based on the edge information of the gray image, and outputting an edge detection image after the suppression is finished.
As can be seen from the above, this embodiment can achieve high-precision detection of water surface floating objects by performing only simple matrix operations from the viewpoint of image processing. Because the computing power required by this embodiment is small, the computing time is short, and the computing task can be realized with low-end hardware equipment. Compared with the GPU required for training current deep learning models, the computing cost of this embodiment is greatly reduced. In addition, from the perspective of storage resources, this embodiment only needs to detect on captured pictures, saving a large amount of storage. From the point of view of integrability, this embodiment is more easily integrated into existing devices because of its advantages in computation and storage. From the perspective of environmental suitability, its low power consumption makes it more suitable for suburban and field environments. In addition, this embodiment is applicable to multiple detection scenes for water surface floaters, and thus has greater practicability and popularization value.
According to the above descriptions, in a specific embodiment, the present embodiment provides a system for detecting a floating object on a water surface, which belongs to the same technical concept as the detection systems in some embodiments described above and has the same processing flow, so that the present embodiment is not described herein again. The detection system in the embodiment is mainly applied to a weak water surface clutter single-target detection scene and used for detecting the floater on the weak water surface. In this embodiment, the gaussian kernel dimension is set to 1, the high and low thresholds in the dual-threshold detection are set to 0.6 and 0.2, respectively, the radius of the disk in the edge expansion is set to 8, the threshold of the area size of the connected region is set to 80, and other parameters adopt conventional default parameters, which is not described herein again. The detection effect of the water surface floating object of the present embodiment is shown in fig. 5a, 5b, 5c and 5d, respectively. Specifically, fig. 5a is an original image in a weak water surface clutter single target detection scene, fig. 5b is a Canny edge detection image in the weak water surface clutter single target detection scene, fig. 5c is an image after edge expansion and target framing are completed in the weak water surface clutter single target detection scene, and fig. 5d is a water surface floater detection result image in the weak water surface clutter single target detection scene.
According to the above descriptions, in another specific embodiment, the present embodiment provides a system for detecting a floating object on a water surface, which belongs to the same technical concept as the detection systems in some embodiments described above and has the same processing flow, so that the present embodiment is not described herein again. The detection system in the embodiment is mainly applied to a strong water surface clutter single-target detection scene and is used for detecting the floater on the strong water surface. In this embodiment, the gaussian kernel dimension is set to 1, the high and low thresholds in the dual-threshold detection are set to 0.6 and 0.2, respectively, the radius of the disk in the edge expansion is set to 8, the threshold of the area size of the connected region is set to 80, and other parameters adopt conventional default parameters, which are not described herein again. The detection effect of the water surface floating object of the present embodiment is shown in fig. 6a, fig. 6b, fig. 6c and fig. 6d, respectively. Specifically, fig. 6a is an original image in a strong water surface clutter single target detection scene, fig. 6b is a Canny edge detection image in the strong water surface clutter single target detection scene, fig. 6c is an image after edge expansion and target framing are completed in the strong water surface clutter single target detection scene, and fig. 6d is a water surface floater detection result image in the strong water surface clutter single target detection scene.
As can be seen from fig. 5a to 5d and fig. 6a to 6d, for single target detection the designed system can effectively detect water surface floating objects under both weak and strong water surface clutter conditions. Specifically, in the edge detection stage, the environmental adaptability of the floating objects causes the detected edges to be discrete and discontinuous, so these points only reflect the local contour of the floating object and do not form a connected region usable for target framing. After edge dilation, the dense edge areas of the image become connected and have a certain area, so that real edges can be well distinguished from the singular points formed by water surface clutter.
According to the above descriptions, in a specific embodiment, the present embodiment provides a system for detecting a floating object on a water surface, which belongs to the same technical concept as the detection systems in some embodiments described above and has the same processing flow, so that the present embodiment is not described herein again. The detection system in the embodiment is mainly applied to a multi-target detection scene of the clutter of the weak water surface and is used for detecting the floaters of the weak water surface. In this embodiment, the gaussian kernel dimension is set to 1, the high and low thresholds in the dual-threshold detection are set to 0.6 and 0.2, respectively, the radius of the disk in the edge expansion is set to 8, the threshold of the area size of the connected region is set to 80, and other parameters adopt conventional default parameters, which are not described herein again. The detection effect of the water surface floating object in the present embodiment is shown in fig. 7a, 7b, 7c and 7d, respectively. Specifically, fig. 7a is an original image in a weak water surface clutter multi-target detection scene, fig. 7b is a Canny edge detection image in the weak water surface clutter multi-target detection scene, fig. 7c is an image after edge expansion and target framing are completed in the weak water surface clutter multi-target detection scene, and fig. 7d is a water surface floater detection result image in the weak water surface clutter multi-target detection scene.
According to the above descriptions, in a specific embodiment, the present embodiment provides a system for detecting a floating object on a water surface, which belongs to the same technical concept as the detection systems in some embodiments described above and has the same processing flow, so that the present embodiment is not described herein again. The detection system in the embodiment is mainly applied to a strong water surface clutter multi-target detection scene and is used for detecting the floater on the strong water surface. In this embodiment, the gaussian kernel dimension is set to 1, the high and low thresholds in the dual-threshold detection are set to 0.6 and 0.2, respectively, the radius of the disk in the edge expansion is set to 8, the threshold of the area size of the connected region is set to 80, and other parameters adopt conventional default parameters, which is not described herein again. The detection effect of the water surface floating object of the present embodiment is shown in fig. 8a, 8b, 8c and 8d, respectively. Specifically, fig. 8a is an original image in a strong water surface clutter multi-target detection scene, fig. 8b is a Canny edge detection image in the strong water surface clutter multi-target detection scene, fig. 8c is an image after edge expansion and target framing are completed in the strong water surface clutter multi-target detection scene, and fig. 8d is a water surface floater detection result image in the strong water surface clutter multi-target detection scene.
As can be seen from fig. 7a to 7d and fig. 8a to 8d, for multi-target detection the detection accuracy of the water surface floater detection system provided by the application can be as high as 91.67%. Compared with weak water surface clutter, strong water surface clutter generates more noise interference in edge detection. After dilation, however, the interference noise can hardly form a large-area connected region, and is filtered out in the target detection process.
To sum up, the application provides a system for detecting a floating object on a water surface, which first obtains an image containing a horizontal plane and records the image as a target image; then, carrying out edge detection on the target image by utilizing a multi-stage edge detection algorithm, and generating an edge detection image based on corresponding edge information; then, performing edge expansion on the edge detection image, and communicating areas with discontinuous edges and separated edges in the edge detection image to obtain an edge expansion image; then acquiring a connected region exceeding a preset threshold value in the edge expansion image, and generating a corresponding calibration frame according to the coordinate information of the connected point of the connected region; and finally, mapping the calibration frame to the target image, and determining whether the water surface floater exists in the target image according to the mapping result. Therefore, the water surface floating object detection method and the water surface floating object detection device can realize high-precision water surface floating object detection by only performing simple matrix operation from the viewpoint of image processing. Because the required computing power is small, the method is short in computing time, and computing tasks can be achieved only by adopting low-end hardware equipment (such as a 51 single chip microcomputer and an STM32 development board). Compared with the GPU required by the current deep learning training model, the calculation cost is greatly reduced; in addition, from the aspect of storage resources, the method only needs to adopt a mode of intercepting the picture for detection, and a large amount of storage can be saved. From the point of view of integratability, the application is easier to integrate into existing devices due to its advantages in computing and storage systems. 
From the perspective of environmental suitability, the low power consumption of the present application makes it well suited to suburban and field environments. In addition, the present application is applicable to detecting water surface floating objects in a variety of scenes, and therefore has high practicality and promotion value.
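For illustration only, the post-processing stages summarized above (edge expansion, connected-region extraction, and calibration-frame generation) can be sketched in plain NumPy as follows. The function names, the 3×3 structuring element, and the `min_area` threshold are illustrative choices, not taken from the patent; a production implementation would more likely use OpenCV's `cv2.dilate` and `cv2.connectedComponentsWithStats`.

```python
import numpy as np

def dilate(edges, iterations=1):
    """Binary dilation with a 3x3 structuring element, joining
    nearby edge fragments into connected regions."""
    out = edges.astype(bool)
    for _ in range(iterations):
        padded = np.pad(out, 1)
        acc = np.zeros_like(out)
        # OR over the 3x3 neighbourhood of every pixel
        for di in range(3):
            for dj in range(3):
                acc |= padded[di:di + out.shape[0], dj:dj + out.shape[1]]
        out = acc
    return out

def bounding_boxes(mask, min_area=20):
    """Label connected regions (4-connectivity flood fill) and return
    (x0, y0, x1, y1) calibration frames for regions whose pixel count
    exceeds the preset area threshold."""
    mask = mask.astype(bool)
    seen = np.zeros_like(mask)
    boxes = []
    h, w = mask.shape
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and not seen[si, sj]:
                stack, pts = [(si, sj)], []
                seen[si, sj] = True
                while stack:
                    i, j = stack.pop()
                    pts.append((i, j))
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            stack.append((ni, nj))
                if len(pts) >= min_area:
                    ys, xs = zip(*pts)
                    boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

The resulting boxes would then be drawn back onto the target image to mark candidate floating objects; two edge fragments one pixel apart become a single connected region after one dilation pass, which is the mechanism the summary describes for bridging discontinuous edges.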
It should be noted that the water surface floating object detection system provided by the above embodiment and the water surface floating object detection method provided by the above embodiment belong to the same concept; the specific manner in which each module and unit performs its operations has been described in detail in the method embodiment and is not repeated here. In practical applications, the system provided by the above embodiment may distribute the above functions among different functional modules as needed, that is, the internal structure of the system may be divided into different functional modules to complete all or part of the functions described above, which is not limited herein.
The embodiment of the present application also provides a water surface floater detection apparatus, which may include: one or more processors; and one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform the method of fig. 2. Fig. 10 shows a schematic structural view of a water surface float detection apparatus 1000. Referring to fig. 10, the water surface float detection apparatus 1000 includes: a processor 1010, a memory 1020, a power source 1030, a display unit 1040, and an input unit 1060.
The processor 1010 is the control center of the water surface floater detecting apparatus 1000; it connects the various components using various interfaces and lines, and performs the various functions of the apparatus by running or executing software programs and/or data stored in the memory 1020, thereby monitoring the apparatus as a whole. In the embodiment of the present application, the processor 1010 executes the method described in fig. 2 when calling the computer program stored in the memory 1020. Optionally, the processor 1010 may include one or more processing units; preferably, the processor 1010 may integrate an application processor, which primarily handles the operating system, user interfaces, applications, and the like, and a modem processor, which primarily handles wireless communication. In some embodiments, the processor and memory may be implemented on a single chip, or they may be implemented separately on their own chips.
The memory 1020 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, various applications, and the like, and the data storage area may store data created according to the use of the water surface float detecting apparatus 1000. Further, the memory 1020 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The surface float detection device 1000 further comprises a power source 1030 (e.g., a battery) for supplying power to the various components, which may be logically connected to the processor 1010 via a power management system, thereby performing functions such as managing charging, discharging, and power consumption via the power management system.
The display unit 1040 may be configured to display information input by a user or information provided to the user, and various menus of the water surface floater detecting apparatus 1000, and is mainly configured to display a display interface of each application in the water surface floater detecting apparatus 1000 and objects such as texts and pictures displayed in the display interface in the embodiment of the present application. The display unit 1040 may include a display panel 1050. The Display panel 1050 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The input unit 1060 may be used to receive information such as numbers or characters input by a user. The input unit 1060 may include a touch panel 1070 and other input devices 1080. The touch panel 1070, also referred to as a touch screen, may collect touch operations by a user (e.g., operations by a user on the touch panel 1070 or near the touch panel 1070 using a finger, a stylus, or any other suitable object or attachment).
Specifically, the touch panel 1070 can detect a touch operation of a user, detect the signals generated by the touch operation, convert the signals into touch point coordinates, transmit the coordinates to the processor 1010, and receive and execute commands sent by the processor 1010. In addition, the touch panel 1070 may be implemented as a resistive, capacitive, infrared, or surface acoustic wave type. Other input devices 1080 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and power on/off keys), a trackball, a mouse, a joystick, and the like.
Of course, the touch panel 1070 may cover the display panel 1050, and when the touch panel 1070 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 1010 to determine the type of the touch event, and then the processor 1010 provides a corresponding visual output on the display panel 1050 according to the type of the touch event. Although in fig. 10 the touch panel 1070 and the display panel 1050 are shown as two separate components to implement the input and output functions of the water surface float detection apparatus 1000, in some embodiments, the touch panel 1070 and the display panel 1050 may be integrated to implement the input and output functions of the water surface float detection apparatus 1000.
The water surface float detection apparatus 1000 may also include one or more sensors, such as pressure sensors, gravitational acceleration sensors, proximity light sensors, and the like. Of course, the water surface float detection apparatus 1000 may also include other components such as a camera, as desired for a particular application.
Embodiments of the present application also provide a computer-readable storage medium storing instructions that, when executed by one or more processors, enable the above-mentioned apparatus to perform the method described in fig. 2 of the present application.
It will be appreciated by those skilled in the art that fig. 10 is merely an example of a water surface float detection apparatus and does not constitute a limitation of the apparatus, which may include more or fewer components than shown, or combine some components, or use different components. For convenience of description, the above parts are described separately as modules (or units) according to function. Of course, when implementing the present application, the functionality of the various modules (or units) may be implemented in one or more of the same pieces of software or hardware.
Those skilled in the art will appreciate that the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein. The present application has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed via the processor of the computer or other programmable data processing apparatus create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above embodiments are merely illustrative of the principles and utilities of the present application and are not intended to limit the application. Any person skilled in the art can modify or change the above-described embodiments without departing from the spirit and scope of the present application. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical concepts disclosed in the present application shall be covered by the claims of the present application.

Claims (10)

1. A method of detecting a float on a water surface, the method comprising the steps of:
acquiring an image containing a horizontal plane, and recording the image as a target image;
performing edge detection on the target image by using a multi-stage edge detection algorithm, and generating an edge detection image based on corresponding edge information;
performing edge expansion on the edge detection image, and connecting regions with discontinuous edges and separated edges in the edge detection image to obtain an edge expansion image;
acquiring a connected region exceeding a preset threshold value in the edge expansion image, and generating a corresponding calibration frame according to the coordinate information of a connected point of the connected region;
and mapping the calibration frame to the target image, and determining whether the target image has water surface floating objects according to a mapping result.
2. The method according to claim 1, wherein the step of performing edge detection on the target image by using a multi-stage edge detection algorithm and generating an edge detection image based on corresponding edge information comprises:
carrying out gray processing on the target image, and converting the target image into a gray image;
performing Gaussian smoothing filtering on the gray level image, and calculating the gradient intensity and direction of the gray level image by using a multi-level edge detection operator;
according to the gradient intensity and the direction of the gray level image, carrying out non-maximum suppression and double-threshold detection on each pixel in the gray level image to obtain edge information of the gray level image;
and suppressing isolated low-threshold points based on the edge information of the gray level image, and outputting an edge detection image after the suppression is completed.
3. The method according to claim 2, wherein the target image is subjected to gray scale processing, and the process of converting the target image into a gray scale image comprises: carrying out gray processing on the target image by using a gray conversion formula, and converting the target image into a gray image; wherein the gray scale conversion formula is as follows: i (x, y) =0.2989R (x, y) +0.5870G (x, y) +0.1140B (x, y).
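For illustration, the weighted-sum conversion of claim 3 can be written directly in NumPy; the function name `to_gray` is an illustrative choice, not from the patent.

```python
import numpy as np

def to_gray(rgb):
    """Grayscale conversion per claim 3:
    I(x, y) = 0.2989 R(x, y) + 0.5870 G(x, y) + 0.1140 B(x, y).
    Expects an (H, W, 3) array; returns an (H, W) array."""
    weights = np.array([0.2989, 0.5870, 0.1140])
    return rgb.astype(np.float64) @ weights
```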
4. The method of claim 3, wherein the step of Gaussian smoothing the gray scale image comprises:
acquiring a pre-generated Gaussian filter kernel of size (2k+1) × (2k+1), and performing Gaussian smoothing filtering on the gray level image by using the Gaussian filter kernel; wherein the equation that generates the Gaussian filter kernel comprises:
H(i, j) = (1 / (2πσ²)) · exp(−((i − k − 1)² + (j − k − 1)²) / (2σ²)),  1 ≤ i, j ≤ 2k+1
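The original publication renders the kernel-generation equation only as an embedded image. Assuming the standard Gaussian kernel used in Canny-style pipelines, with σ a free smoothing parameter, a sketch of the (2k+1) × (2k+1) kernel is:

```python
import numpy as np

def gaussian_kernel(k, sigma=1.0):
    """(2k+1) x (2k+1) Gaussian filter kernel. Entries follow
    H(i, j) ∝ exp(-((i - k)^2 + (j - k)^2) / (2 sigma^2)) with the
    peak at the center; the kernel is normalized to sum to 1 so that
    filtering preserves overall image brightness."""
    ax = np.arange(-k, k + 1)
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return h / h.sum()
```

This uses zero-centered indices rather than the 1-based indices of the patent's formula; the resulting kernel is identical.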
5. the method of claim 4, wherein the step of calculating the gradient strength and direction of the gray scale image using a multi-level edge detection operator comprises:
determining the horizontal direction G of the gray image by utilizing a multi-level edge detection operator x And a vertical direction G y
based on the horizontal gradient Gx and the vertical gradient Gy, calculating the gradient intensity G of the pixel points in the gray level image, including:
G = √(Gx² + Gy²);
based on the horizontal gradient Gx and the vertical gradient Gy, calculating the direction θ of a pixel point in the gray level image, including:
θ = arctan(Gy / Gx);
in the formula, arctan is the arctangent function.
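As an illustration of claim 5, the gradients Gx and Gy can be computed with, for example, the Sobel operator — one possible multi-level edge detection operator; the patent does not fix a specific operator here — after which G and θ follow from the formulas in claim 5 (using `arctan2` to keep the full direction range):

```python
import numpy as np

# Sobel kernels, one common choice for the edge detection operator
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def gradients(img):
    """Return (G, theta) for the interior of a grayscale image:
    G = sqrt(Gx^2 + Gy^2), theta = arctan(Gy / Gx)."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            win = img[i:i + 3, j:j + 3]
            gx[i, j] = (win * KX).sum()   # horizontal gradient Gx
            gy[i, j] = (win * KY).sum()   # vertical gradient Gy
    g = np.sqrt(gx**2 + gy**2)            # gradient intensity G
    theta = np.arctan2(gy, gx)            # gradient direction θ
    return g, theta
```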
6. The method of claim 5, wherein the non-maximum suppression of each pixel in the gray level image comprises:
selecting a pixel from the gray level image and recording the pixel as a current pixel;
comparing the gradient intensity of the current pixel with that of one pixel along the positive gradient direction and one pixel along the negative gradient direction, respectively;
if the gradient intensity of the current pixel is greater than that of both of the other two pixels, the current pixel is retained as an edge point; and if the gradient intensity of the current pixel is less than or equal to that of at least one of the other two pixels, the current pixel is suppressed.
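A minimal sketch of the non-maximum suppression procedure of claim 6, assuming the usual quantization of the gradient direction into four sectors (0°, 45°, 90°, 135°), which the claim itself does not spell out:

```python
import numpy as np

def non_max_suppression(g, theta):
    """Keep a pixel only if its gradient intensity is strictly greater
    than both neighbours along the positive and negative gradient
    directions; otherwise suppress it (set it to zero)."""
    h, w = g.shape
    out = np.zeros_like(g)
    ang = (np.rad2deg(theta) + 180.0) % 180.0  # fold direction into [0, 180)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = ang[i, j]
            if a < 22.5 or a >= 157.5:          # ~0°: compare left / right
                n1, n2 = g[i, j - 1], g[i, j + 1]
            elif a < 67.5:                      # ~45°: diagonal neighbours
                n1, n2 = g[i - 1, j + 1], g[i + 1, j - 1]
            elif a < 112.5:                     # ~90°: compare up / down
                n1, n2 = g[i - 1, j], g[i + 1, j]
            else:                               # ~135°: anti-diagonal
                n1, n2 = g[i - 1, j - 1], g[i + 1, j + 1]
            if g[i, j] > n1 and g[i, j] > n2:
                out[i, j] = g[i, j]             # retained as edge point
    return out
```

On a horizontal-gradient ridge three pixels wide, only the central column survives, thinning the edge to single-pixel width as the claim intends.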
7. The method of claim 6, wherein the process of performing a dual threshold detection for each pixel in the gray scale image comprises:
acquiring residual pixels after non-maximum value suppression is finished, and taking the residual pixels as edge pixels of the gray level image;
comparing the gradient values of the edge pixels with a first threshold value and a second threshold value respectively;
if the gradient value of the edge pixel is larger than the first threshold value, marking the edge pixel as a strong edge pixel;
if the gradient value of the edge pixel is smaller than the first threshold value and larger than the second threshold value, marking the edge pixel as a weak edge pixel;
and if the gradient value of the edge pixel is smaller than the second threshold value, suppressing the edge pixel.
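The double-threshold classification of claim 7 can be sketched as follows; the first (high) and second (low) thresholds are free parameters, and the subsequent suppression of isolated weak pixels (claim 2) would follow as a separate hysteresis step not shown here:

```python
import numpy as np

def double_threshold(g, low, high):
    """Classify edge pixels per claim 7:
    gradient > high        -> strong edge pixel (label 2)
    low < gradient <= high -> weak edge pixel   (label 1)
    gradient <= low        -> suppressed        (label 0)."""
    labels = np.zeros(g.shape, dtype=np.uint8)
    labels[g > high] = 2
    labels[(g > low) & (g <= high)] = 1
    return labels
```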
8. A water surface float detection system, said system comprising:
the image acquisition module is used for acquiring an image containing a horizontal plane and recording the image as a target image;
the edge detection module is used for carrying out edge detection on the target image by utilizing a multi-stage edge detection algorithm and generating an edge detection image based on corresponding edge information;
the edge expansion module is used for performing edge expansion on the edge detection image and connecting regions with discontinuous edges and separated edges in the edge detection image to obtain an edge expansion image;
the calibration module is used for acquiring a connected region exceeding a preset threshold value in the edge expansion image and generating a corresponding calibration frame according to the coordinate information of a connected point of the connected region;
and the image detection module is used for mapping the calibration frame to the target image and determining whether the water surface floater exists in the target image according to the mapping result.
9. A water surface float detection apparatus comprising:
a processor; and the combination of (a) and (b),
a computer readable medium having stored thereon instructions that, when executed by the processor, cause the apparatus to perform the method of any one of claims 1 to 7.
10. A computer-readable medium having stored thereon instructions which are loaded by a processor and which perform the method of any one of claims 1 to 7.
CN202211080869.2A 2022-09-05 2022-09-05 Water surface floater detection method, system, equipment and medium Pending CN115456983A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211080869.2A CN115456983A (en) 2022-09-05 2022-09-05 Water surface floater detection method, system, equipment and medium


Publications (1)

Publication Number Publication Date
CN115456983A true CN115456983A (en) 2022-12-09

Family

ID=84302721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211080869.2A Pending CN115456983A (en) 2022-09-05 2022-09-05 Water surface floater detection method, system, equipment and medium

Country Status (1)

Country Link
CN (1) CN115456983A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116152115A (en) * 2023-04-04 2023-05-23 湖南融城环保科技有限公司 Garbage image denoising processing method based on computer vision



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination