WO2021142902A1 - DANet-based UAV coastline floating garbage inspection system - Google Patents

DANet-based UAV coastline floating garbage inspection system

Info

Publication number
WO2021142902A1
WO2021142902A1 (PCT/CN2020/078289)
Authority
WO
WIPO (PCT)
Prior art keywords
coastline
network
features
uav
floating garbage
Prior art date
Application number
PCT/CN2020/078289
Other languages
English (en)
French (fr)
Inventor
翟懿奎
植一航
柯琪锐
余翠琳
周文略
应自炉
甘俊英
曾军英
梁艳阳
麦超云
秦传波
徐颖
Original Assignee
五邑大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 五邑大学 filed Critical 五邑大学
Publication of WO2021142902A1 publication Critical patent/WO2021142902A1/zh


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00127Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0094Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/106Change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387Composing, repositioning or otherwise geometrically modifying originals
    • H04N1/3876Recombination of partial images to recreate the original image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A10/00TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A10/40Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping

Definitions

  • the invention relates to the technical field of inspection systems, in particular to an unmanned aerial vehicle coastline floating garbage inspection system based on DANet.
  • manual patrols have a long inspection cycle, are limited by the range of human vision, and consume a great deal of manpower and material resources; on-site monitoring is expensive to deploy, cannot cover an entire river basin, and the limited shooting range of the monitoring equipment makes missed detections likely.
  • surveillance video requires manual analysis, which further increases labor and financial costs.
  • the quality of manual analysis varies from person to person, so the analysis results are unstable.
  • in short, the existing technology suffers from low manual inspection efficiency and high monitoring and inspection costs.
  • an automatic drone inspection scheme has emerged in recent years; this scheme addresses river inspection.
  • the drone is equipped with a camera to shoot video of the river, and people search the video for pollution.
  • the purpose of the present invention is to solve at least one of the technical problems in the prior art by providing a DANet-based UAV coastline floating garbage inspection system, which improves the intelligence of inspection and optimizes the UAV route.
  • the goal of low cost and high efficiency can thereby be achieved.
  • a DANet-based drone coastline floating garbage inspection system includes:
  • an image acquisition module, which uses a drone to shoot video of the coastline to be inspected and obtains images from the video;
  • a feature extraction module, which inputs the images into an FPN network to extract shallow and deep features, fuses the shallow and deep features into shared features, passes the shared features through a region proposal network branch, a foreground branch, and a background branch, and finally outputs a panoptic recognition result;
  • a network training module, which annotates images and adds them to a data set for pre-training so that the network learns edge and color features, modifies the classifier according to the requirements of coastline inspection, and trains on the annotated images so that the network can recognize the coastline and floating garbage;
  • a path correction module, which is used to adjust the flight direction of the UAV.
  • the UAV flies forward with the extension direction of the coastline as its heading angle; the average of all coastline coordinates is calculated as the starting point of the UAV flight, the points where the coastline meets the land are sorted and fitted into a continuous curve, and the tangent direction of points on the curve is calculated to determine the flight heading angle.
  • the DANet-based UAV coastline floating garbage inspection system has at least the following technical effects: the system adopts a panoptic segmentation algorithm that not only segments the background and the foreground target objects in the image at the same time, but also gives each foreground target an independent identity.
  • the precise segmentation results help the drone adjust its heading and automatically plan its flight path, while detecting floating garbage along the coastline and reporting the location and category of pollution, helping the relevant departments solve the pollution inspection problem over long coastlines.
  • this system improves the intelligence of inspection, optimizes the flight path of the UAV, and can achieve the goal of low cost and high efficiency.
  • the image acquisition module takes five frames per second from the video, and the image resolution is 1920×1080.
  • when the shared features pass through the region proposal network branch, they go through the region proposal network (RPN), which computes every proposed region in the image that may contain a target.
  • the proposed regions pass through fully connected layers one by one.
  • a softmax function computes the output category, and region regression computes the position coordinates of the target in the image.
  • when the foreground branch uses the shared features to segment foreground targets, a region-of-interest alignment algorithm (ROIAlign) applies bilinear interpolation to the multiple proposed regions produced by the region proposal network and then pools them into 14×14 and 7×7 feature maps.
  • the 14×14 feature maps enter the mask generation network.
  • the mask generation network consists of the residual network ResNet50 followed by two fully connected layers and outputs masked feature maps.
  • the foreground target mask is thus obtained; the 7×7 feature maps pass through the classification and localization network, which consists of two connected layers followed by a regression algorithm and a softmax algorithm, and outputs the category of the foreground target and its position coordinates in the image.
  • when the background branch segments the background of the image, a proposal attention module and a mask attention module are used.
  • the shared features and the region proposal network output pass through the proposal attention module, where the features are multiplied element-wise and then added element-wise to the original features.
  • the mask attention module fuses the foreground and background feature maps, using the foreground information to refine the background features.
  • the network training module annotates coastal images with strong and weak labels and uses the coco2014 and coco2015 data sets for pre-training.
  • pre-training enables the network to learn edge features and color features.
  • these network parameters are then trained further.
  • the pre-trained classifier is first discarded, and the preceding hidden-layer network structure and parameters are retained.
  • the classifier is modified according to the requirements of coastline inspection, so that the number of output categories equals the number of categories that actually need to be detected.
  • the classifier parameters are randomly initialized, and the annotated coastline images are then used for training, so that the trained network can recognize the coastline and floating garbage.
  • the UAV has a built-in image instance segmentation algorithm and a heading algorithm.
  • the image instance segmentation algorithm recognizes the seawater area and saves all x and y axis coordinates of the seawater in the frame as a two-dimensional array.
  • for each y coordinate, the smallest x coordinate is taken as a coastline coordinate.
  • the average of all coastline coordinates is calculated and used as the starting point of the UAV flight.
  • the flight heading angle is determined according to the heading algorithm, and the UAV rotates to the appropriate angle.
  • the UAV takes all coastline coordinates [{P1x, P1y}, {P2x, P2y}, …, {Pnx, Pny}] as input, sorts them by the sum of squares of the pixel coordinates x and y, computes the Euclidean distance between pairs of points for sorting, obtains the adjacent and continuous coastline coordinate group P, fits the coordinate points of P into a curve, and obtains the offset angle α = 90° − arctan(k), where:
  • k is the slope of the tangent at the midpoint of the curve and is used to adjust the flight direction of the UAV.
  • the present invention further includes a terminal control module for remotely controlling the unmanned aerial vehicle, and the terminal control module is provided with an information display unit, a UAV management unit, and an information management unit.
  • the present invention also provides a DANet-based UAV coastline floating garbage inspection method, including:
  • adjusting the flight direction of the drone, whereby the drone flies forward with the extension direction of the coastline as its heading angle.
  • the average of all coastline coordinates is calculated as the starting point of the drone flight, the points where the coastline meets the land are sorted and fitted into a continuous curve, and the tangent direction of points on the curve is calculated to determine the flight heading angle.
  • Figure 1 is a schematic diagram of a DANet-based UAV coastline floating garbage inspection system provided by the first embodiment of the present invention;
  • Figure 2 is a schematic diagram of a feature extraction module provided by the first embodiment of the present invention;
  • Figure 3 is a schematic diagram of a proposal attention module provided by the first embodiment of the present invention;
  • Figure 4 is a schematic diagram of a mask attention module provided by the first embodiment of the present invention;
  • Figure 5 is a block diagram of the system execution flow provided by the first embodiment of the present invention;
  • Figure 6 is a schematic flow chart of a DANet-based UAV coastline floating garbage inspection method provided by the second embodiment of the present invention.
  • the first embodiment of the present invention provides a DANet (Double Attention Net)-based drone coastline floating garbage inspection system, including:
  • an image acquisition module 110, which uses a drone to shoot video of the coastline to be inspected and obtains images from the video;
  • a feature extraction module 120, which inputs the images into an FPN network to extract shallow and deep features, fuses them into shared features, passes the shared features through a region proposal network branch, a foreground branch, and a background branch, and finally outputs a panoptic recognition result;
  • a network training module 130, which annotates images and adds them to a data set for pre-training so that the network learns edge and color features, modifies the classifier according to the requirements of coastline inspection, and trains on the annotated images so that the network can recognize the coastline and floating garbage;
  • a path correction module 140, which is used to adjust the flight direction of the UAV.
  • the UAV flies forward with the extension direction of the coastline as its heading angle, and the average of all coastline coordinates is calculated as the starting point of the UAV flight.
  • the points where the coastline meets the land are sorted and fitted into a continuous curve, and the tangent direction of points on the curve is calculated to determine the flight heading angle.
  • the drone records coastal video with an onboard camera at a resolution of 1920×1080 and keeps its flying height at 10 to 15 meters so that the field of view is clear.
  • the video is processed by the drone's built-in panoptic segmentation algorithm to detect the coastline and floating garbage.
  • the width of the river bank is calculated, and the UAV adjusts its height, latitude, and longitude according to the path correction module 140 to capture video of the river.
  • the recognition results for pollutants such as sewage and floating garbage are displayed in the application in real time, and the time, GPS location, and category data are saved to a database for easy reuse.
  • when the drone reaches the end point, it sends a completion signal to the application and returns along the shortest path.
  • this system uses an end-to-end panoptic segmentation framework that combines instance segmentation and semantic segmentation and assigns each pixel a category label and an instance number.
  • panoptic segmentation recognizes not only the target objects but also the background.
  • the background is used to refine the recognized pixel regions of objects and further improve segmentation accuracy.
  • panoptic segmentation gives each target object a different identity number, which facilitates distinguishing and locating targets in practical applications.
  • an attention module is added to preprocess the input features, and the processed feature maps show clear feature enhancement effects in different target regions.
  • the image acquisition module 110 takes five frames per second from the video captured by the drone, at a resolution of 1920×1080, as the input to the network.
  • the feature extraction module 120 uses the multi-scale feature extraction network FPN to extract features of different scales, solving the problem that differences in object size affect the recognition result.
  • FPN applies convolution to the shallow features and fuses them with the deep features, so that shallow and deep features are used together for prediction.
  • the information extracted by the two kinds of features differs in importance.
  • the shallow network has more detailed features, while the deep network's features are more abstract; the two complement each other, improving the network recognition rate.
  • the final shared features are obtained and given to the following three branches: the region proposal network branch, the foreground branch, and the background branch.
  • the shared features pass through the region proposal network RPN (Region Proposal Network), which computes every region in the image that may contain a target, i.e. the proposed regions.
  • the proposed regions pass through fully connected layers one by one, the softmax function computes the output category, and region regression computes the position coordinates of the target in the image.
  • RPN is a fast and fairly accurate detection algorithm that can quickly extract reasonably accurate feature maps as input to other modules; its output features are shared by the other branches, saving them the time of extracting features separately.
  • the foreground branch is responsible for segmenting the foreground targets.
  • the shared features go through the region-of-interest alignment algorithm (ROIAlign), which applies bilinear interpolation to the multiple proposed regions produced by the region proposal network and then pools them into 14×14 and 7×7 feature maps.
  • applying bilinear interpolation before pooling retains more useful pixel information than direct pooling, greatly reducing the loss of features during pooling, which is very helpful for detecting and segmenting small targets.
  • the 14×14 feature maps enter the mask generation network, which consists of the residual network ResNet50 followed by two fully connected layers, and outputs masked feature maps to obtain the foreground target masks.
  • the ResNet50 network offers a relatively good balance between performance and effectiveness: recognition accuracy does not drop too much, and the computational requirements are modest.
  • the 7×7 feature maps pass through a classification and localization network.
  • the classification and localization network consists of two connected layers followed by a regression algorithm and a softmax algorithm, and outputs the category of the foreground target and its position coordinates in the image.
  • the background branch is responsible for segmenting the background of the image.
  • two attention modules are used, the proposal attention module and the mask attention module, to model the long-range spatial context and the channel dimension, establishing the relationship between foreground things and background stuff in panoptic segmentation through a series of coarse-to-fine attention blocks.
  • the shared features and the region proposal network output first pass through the proposal attention module, where the features are multiplied element-wise and then added element-wise to the original features.
  • the advantage of this is that the information in the proposed regions adds spatial attention and guides the extraction of background features.
  • the process of adding proposal attention is shown in Figure 3.
  • the mask attention module is also used, as shown in Figure 4; this module fuses the foreground and background feature maps, using the foreground information to refine the background features.
  • the foreground features are obtained from the mask generation network of the foreground branch and restored to the original feature-map size with upsampling and feature concatenation; then, as with the proposal attention, they are multiplied element-wise and added element-wise to the original features. After adding attention, group normalization is applied on the background branch for feature calibration, improving segmentation accuracy.
  • each convolutional layer is followed by a normalization-activation module consisting of a normalization layer and a ReLU activation function.
  • the normalization layer maps the distribution of the data into [0, 1], making gradient descent faster and more accurate, accelerating convergence, and reducing training time.
  • the ReLU activation function is ReLU(x) = max(0, x).
  • each region that may contain a target is cropped out as a separate region of interest F1, F2, …, Fn, and these are fed to the classification and localization module and the mask generation module respectively.
  • the classification and localization network consists of two connected layers followed by a regression algorithm and a softmax algorithm, and outputs the target category and the localization coordinates in the original image;
  • the mask generation network consists of the residual network ResNet50 followed by two fully connected layers, and outputs masked feature maps.
  • together these give the final classification result, localization coordinates, and mask region of each target; the total loss is L_final = L_class + L_box + L_mask, where
  • L_final is the final loss,
  • L_class is the category prediction loss,
  • L_box is the localization loss,
  • L_mask is the mask loss.
  • the input image is processed by the network to accurately segment the background, ocean and land, and the foreground targets, floating garbage.
  • the ocean pixels are output to the heading planning algorithm to adjust the flight attitude and heading.
  • the category and GPS location of the floating garbage are recorded in the database for reference by the relevant clean-up departments.
  • coastal images are annotated through the network training module 130, and a data set of 20,000 coastline images is generated for training.
  • strong and weak labels are used for annotation.
  • the total data set is divided into two parts in a 3:1 ratio: set one and set two.
  • the category instances in set one all carry mask annotations, i.e. strong labels; the category instances in set two have only bounding-box annotations, i.e. weak labels. Since the categories in set two carry only weak labels for the target objects, the model is trained with a combination of strong and weak labels. Weak labels only require marking objects with rectangular boxes, which takes just a few seconds, less than one-tenth of the production time of strong labels. This greatly improves annotation efficiency and thus increases the size of the training set. Moreover, as more data is added, the effect of network training also improves.
  • transfer learning methods are also used in the training process.
  • pre-training uses data sets that contain 330K images, 80 object categories, 5 labels per image, and 250,000 keypoints.
  • pre-training enables the network to learn edge features and color features.
  • the network parameters are used for further training; since the new task also contains similar edge and color features, the network converges faster and the recognition rate improves.
  • the path correction module 140 is used to adjust the flight direction of the drone.
  • the drone flies forward with the extension direction of the coastline as its heading angle, and the average of all coastline coordinates is calculated as the starting point of the drone flight; the panoptic segmentation algorithm detects the background of the image, the coastline and the land. Since the background detected by the recognition algorithm consists of discrete, irregular points, before the heading angle can be calculated, the points where the coastline meets the land must be sorted and fitted into a continuous curve so that the tangent direction of points on the curve can be computed to determine the heading angle.
  • the drone takes all coastline coordinates [{P1x, P1y}, {P2x, P2y}, …, {Pnx, Pny}] as input, sorts them by the sum of squares of the pixel coordinates x and y, computes the Euclidean distance between pairs of points for sorting, obtains the adjacent and continuous coastline coordinate group P, fits the coordinate points of P into a curve, and obtains the offset angle α = 90° − arctan(k), where:
  • k is the slope of the tangent at the midpoint of the curve and is used to adjust the flight direction of the UAV.
  • the path correction module 140 is designed with three path correction schemes: an initial heading angle scheme, a width change scheme, and a flow direction change scheme.
  • initial heading angle scheme: this scheme solves the problem of automatically finding the heading at the beginning of an inspection.
  • the UAV flies forward with the direction of the coastline as its heading angle.
  • the initial flight altitude is set to 20 meters, ensuring that both banks of the river can be photographed.
  • the instance segmentation algorithm recognizes the seawater area and saves all x and y axis coordinates of the seawater in the frame as a two-dimensional array.
  • for each y coordinate, the smallest x coordinate is taken as the coastline coordinate; the average of all coastline coordinates is calculated and used as the starting point of the drone flight; the flight heading angle is determined according to the heading algorithm above, and the drone rotates to the appropriate angle.
  • width change scheme: the seawater mask area is computed. If the area exceeds 80% of the frame, the drone is flying too low; it stops flying forward and slowly ascends until the seawater mask area occupies 70%, then continues flying. If the area is below 60% of the frame, the drone is flying too high; it stops flying forward and slowly descends until the seawater mask area occupies 70%, then continues flying.
  • flow direction change scheme: the flow direction of the river changes during flight.
  • the UAV's built-in instance segmentation algorithm and heading algorithm compute the heading offset angle α in real time.
  • when the offset of the flight heading angle α is greater than 30°, the drone rotates; when it is less than 30°, the offset is ignored.
  • the drone also adjusts its position according to the coordinates of the midpoint of the coastline, to keep the seawater on one side of the frame.
  • the midpoint coordinates (xm, ym) are the average of all detected coastline points.
  • the system also includes a terminal control module for remotely controlling drones.
  • the terminal control module is equipped with an information display unit, a drone management unit, and an information management unit.
  • the operator selects a route in the application of the terminal control module to achieve the following functions: entering new river data, selecting the river to inspect, viewing the drone status in real time, and querying inspection results.
  • the information display unit displays the video shot by the drone in real time to prevent accidents; the results of the algorithm analysis are displayed at the same time, making it easy to view the detection results.
  • the UAV management unit displays the UAV's battery level, storage usage, positioning information, and direction information.
  • the information management unit has an entry button for entering the start and end points of a new river; a river selection button for choosing the river to inspect, upon which the drone automatically flies to the entered latitude and longitude of the river's starting point and then invokes the path self-correction algorithm to begin automatic inspection; and a query button for viewing past inspection results in the database, used to find the location and category of pollution and facilitate the next treatment plan.
  • the second embodiment of the present invention provides a DANet-based UAV coastline floating garbage inspection method, which includes the following steps:
  • S100: shoot video of the coastline to be inspected with a drone and obtain images from the video;
  • S200: input the images into an FPN network to extract shallow and deep features, fuse the shallow and deep features into shared features, pass the shared features through a region proposal network branch, a foreground branch, and a background branch, and finally output a panoptic recognition result;
  • S300: annotate the images and add them to a data set for pre-training so that the network learns edge and color features, modify the classifier according to the requirements of coastline inspection, and train on the annotated images so that the network can recognize the coastline and floating garbage;
  • S400: adjust the flight direction of the drone: the drone flies forward with the extension direction of the coastline as its heading angle, the average of all coastline coordinates is calculated as the starting point of the drone flight, the points where the coastline meets the land are sorted and fitted into a continuous curve, and the tangent direction of points on the curve is calculated to determine the flight heading angle.
  • the DANet-based UAV coastline floating garbage inspection method has at least the following technical effects: the panoptic segmentation algorithm not only segments the background and foreground objects in the image at the same time, but also gives each foreground object an independent identity.
  • the precise segmentation results help the drone adjust its heading and automatically plan its flight path, while detecting floating garbage along the coastline and reporting the location and category of pollution, helping the relevant departments solve the pollution inspection problem over long coastlines.
  • this method improves the intelligence of inspection, optimizes the route of the UAV, and can achieve the goal of low cost and high efficiency.
  • the core panoptic segmentation algorithm of the inspection system and its inspection method is the DANet algorithm, which uses the RPN network to quickly extract regions of interest; the foreground branch performs target classification and position regression; and the background branch introduces the proposal attention module and the mask attention module.
  • the mask attention module uses feature maps extracted by the RPN and the foreground branch to improve the accuracy of background segmentation.
  • the algorithm can recognize the targets and the background at the same time, correspondingly solving the problems of recognizing floating garbage and recognizing the coastline simultaneously.
  • the network extracts features once and uses them in all three branches, which saves time compared with extracting them separately.
  • the patent uses partially supervised learning and transfer learning. Partially supervised learning greatly reduces annotation time, allowing more data to be labeled in the same time.
  • transfer learning uses the coco data set to train the network weights and transfers the weights to the task at hand; even with a small data set, a well-performing model can be trained.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Astronomy & Astrophysics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a DANet-based UAV coastline floating garbage inspection system. Using a panoptic segmentation algorithm, the system not only segments the background and foreground target objects in an image simultaneously, but also gives each foreground target an independent identity. The precise segmentation results help the UAV adjust its heading and automatically plan its flight path, while detecting floating garbage along the coastline and reporting the location and category of pollution, helping the relevant departments solve the pollution inspection problem over long coastlines.

Description

DANet-based UAV coastline floating garbage inspection system — Technical Field
The present invention relates to the technical field of inspection systems, and in particular to a DANet-based UAV coastline floating garbage inspection system.
Background Art
As marine garbage becomes an increasingly serious problem, pollution is likely to enter the human body through marine organisms, so reducing marine garbage has become particularly important. Because the ocean is too large, what is cleaned up at this stage is mostly floating garbage along the coast. However, the coastline is very long and its edge is tortuous, making it difficult to draw an accurate coastline, and hence there is no accurate cleaning path. Since the coastline is long and winding, cleaning along the entire coastline is even less realistic, so the location of the garbage must be known so that it can be cleaned selectively. In addition, some stretches of coastline are difficult for people to reach, which reduces the efficiency of manual detection of floating garbage. Traditional inspection methods include manual patrols and on-site monitoring. Manual patrols generally only cover flat ground, while places such as cliffs and rocky shores are difficult for people to inspect. The inspection cycle is long and limited by the range of human vision, consuming a great deal of manpower and material resources; on-site monitoring is expensive to deploy, cannot cover an entire river basin, and the shooting range of the monitoring equipment is limited, so detections are easily missed. Surveillance video requires manual analysis, which further increases labor and financial costs; moreover, the quality of manual analysis varies from person to person, and the results are unstable. In general, the problems of the prior art are the low efficiency of manual inspection and the high cost of monitoring-based detection. To solve these problems, an automatic UAV inspection scheme has emerged in recent years. This scheme addresses river inspection: a UAV equipped with a camera shoots video of the river, and people search the video for pollution. Its feature is the automatic inspection of the UAV, which uses dynamic binarization to detect the coast and automatically adjusts the flight direction. Although the UAV can automatically find its way, a large amount of manpower is still needed to watch the video. Moreover, the dynamic binarization detection method has low robustness; actual coastline conditions are changeable and easily degrade the algorithm's accuracy, causing the UAV to deviate from the ideal route.
Summary of the Invention
The purpose of the present invention is to solve at least one of the technical problems in the prior art by providing a DANet-based UAV coastline floating garbage inspection system, which has the advantages of improving the intelligence of inspection and optimizing the UAV route, and can achieve the goal of low cost and high efficiency.
The DANet-based UAV coastline floating garbage inspection system according to an embodiment of the present invention includes:
an image acquisition module, which uses a UAV to shoot video of the coastline to be inspected and obtains images from the video;
a feature extraction module, which inputs the images into an FPN network to extract shallow features and deep features, fuses the shallow and deep features to obtain shared features, passes the shared features through a region proposal network branch, a foreground branch, and a background branch, and finally outputs a panoptic recognition result;
a network training module, which annotates images and adds them to a data set for pre-training so that the network learns edge features and color features, modifies the classifier according to the requirements of coastline inspection, and trains on the annotated images so that the network can recognize the coastline and floating garbage;
a path correction module, which is used to adjust the flight direction of the UAV: the UAV flies forward with the extension direction of the coastline as its heading angle, the average of all coastline coordinates is calculated as the starting point of the UAV flight, the points where the coastline meets the land are sorted and fitted into a continuous curve, and the tangent direction of the points on the curve is calculated to determine the flight heading angle.
The DANet-based UAV coastline floating garbage inspection system according to the embodiments of the present invention has at least the following technical effects: the system adopts a panoptic segmentation algorithm that not only segments the background and the foreground target objects in the image at the same time, but also gives each foreground target an independent identity. The precise segmentation results help the UAV adjust its heading and automatically plan its flight path, while detecting floating garbage along the coastline and reporting the location and category of pollution, helping the relevant departments solve the pollution inspection problem over long coastlines. The system improves the intelligence of inspection, optimizes the UAV route, and can achieve the goal of low cost and high efficiency.
According to some embodiments of the present invention, the image acquisition module takes five frames per second from the video, and the image resolution is 1920×1080.
According to some embodiments of the present invention, when the shared features pass through the region proposal network branch, they go through the region proposal network (RPN), which computes every proposed region in the image that may contain a target; the proposed regions pass through fully connected layers one by one, the output category is computed by a softmax function, and the position coordinates of the target in the image are computed by region regression.
According to some embodiments of the present invention, when the foreground branch uses the shared features to segment foreground targets, a region-of-interest alignment algorithm (ROIAlign) first applies bilinear interpolation to the multiple proposed regions produced by the region proposal network and then pools them into 14×14 and 7×7 feature maps. The 14×14 feature maps enter the mask generation network, which consists of the residual network ResNet50 followed by two fully connected layers and outputs masked feature maps to obtain the foreground target masks; the 7×7 feature maps pass through the classification and localization network, which consists of two connected layers followed by a regression algorithm and a softmax algorithm and outputs the category of the foreground target and its position coordinates in the image.
According to some embodiments of the present invention, when the background branch uses the shared features to segment the background of the image, a proposal attention module and a mask attention module are used. First, the shared features and the region proposal network output pass through the proposal attention module, where the features are multiplied element-wise and then added element-wise to the original features; the mask attention module then fuses the foreground feature maps and the background feature maps, using the foreground information to refine the background features.
According to some embodiments of the present invention, the network training module annotates coastal images with strong and weak labels and uses the coco2014 and coco2015 data sets for pre-training. Pre-training enables the network to learn edge features and color features, and these network parameters are used for further training. When training with the data set, the pre-trained classifier is discarded first and the preceding hidden-layer network structure and parameters are retained; because the number of categories differs, the classifier is modified according to the requirements of coastline inspection so that the number of output categories equals the number of categories that actually need to be detected. After the classifier output is modified, its parameters are randomly initialized, and the annotated coastline images are then used for training, so that the trained network can recognize the coastline and floating garbage.
According to some embodiments of the present invention, the UAV has a built-in image instance segmentation algorithm and heading algorithm. The image instance segmentation algorithm recognizes the seawater area and saves all x and y axis coordinates of the seawater in the frame as a two-dimensional array; for each y coordinate, the smallest x coordinate is taken as a coastline coordinate. The average of all coastline coordinates is calculated and used as the starting point of the UAV flight; the flight heading angle is determined according to the heading algorithm, and the UAV rotates to the appropriate angle.
According to some embodiments of the present invention, the UAV takes all coastline coordinates [{P1x, P1y}, {P2x, P2y}, …, {Pnx, Pny}] as input, sorts them by the sum of squares of the pixel coordinates x and y, computes the Euclidean distance between pairs of points for sorting, obtains the adjacent and continuous coastline coordinate group P, fits the coordinate points of P into a curve, and obtains the offset angle α according to the formula:
α = 90° − arctan(k)
where k is the slope of the tangent at the midpoint of the curve and is used to adjust the flight direction of the UAV.
According to some embodiments of the present invention, the system further includes a terminal control module for remotely controlling the UAV, and the terminal control module is provided with an information display unit, a UAV management unit, and an information management unit.
The present invention also provides a DANet-based UAV coastline floating garbage inspection method, including:
shooting video of the coastline to be inspected with a UAV and obtaining images from the video;
inputting the images into an FPN network to extract shallow features and deep features, fusing the shallow and deep features to obtain shared features, passing the shared features through a region proposal network branch, a foreground branch, and a background branch, and finally outputting a panoptic recognition result;
annotating the images and adding them to a data set for pre-training so that the network learns edge features and color features, modifying the classifier according to the requirements of coastline inspection, and training on the annotated images so that the network can recognize the coastline and floating garbage;
adjusting the flight direction of the UAV: the UAV flies forward with the extension direction of the coastline as its heading angle, the average of all coastline coordinates is calculated as the starting point of the UAV flight, the points where the coastline meets the land are sorted and fitted into a continuous curve, and the tangent direction of the points on the curve is calculated to determine the flight heading angle.
Additional aspects and advantages of the present invention will be given in part in the following description, and in part will become apparent from the description or be learned through practice of the present invention.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and easy to understand from the description of the embodiments in conjunction with the following drawings, in which:
Figure 1 is a schematic diagram of the DANet-based UAV coastline floating garbage inspection system provided by the first embodiment of the present invention;
Figure 2 is a schematic diagram of the feature extraction module provided by the first embodiment of the present invention;
Figure 3 is a schematic diagram of the proposal attention module provided by the first embodiment of the present invention;
Figure 4 is a schematic diagram of the mask attention module provided by the first embodiment of the present invention;
Figure 5 is a block diagram of the system execution flow provided by the first embodiment of the present invention;
Figure 6 is a schematic flow chart of the DANet-based UAV coastline floating garbage inspection method provided by the second embodiment of the present invention.
Detailed Description of the Embodiments
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.
It should be noted that, where there is no conflict, the features in the embodiments of the present invention may be combined with one another, all within the protection scope of the present invention. In addition, although functional modules are divided in the system schematic diagram and a logical order is shown in the flow chart, in some cases the steps shown or described may be performed with a module division different from that in the system, or in an order different from that in the flow chart.
The embodiments of the present invention are further described below with reference to the drawings.
As shown in Figure 1, the first embodiment of the present invention provides a DANet (Double Attention Net)-based UAV coastline floating garbage inspection system, including:
an image acquisition module 110, which uses a UAV to shoot video of the coastline to be inspected and obtains images from the video;
a feature extraction module 120, which inputs the images into an FPN network to extract shallow features and deep features, fuses the shallow and deep features to obtain shared features, passes the shared features through a region proposal network branch, a foreground branch, and a background branch, and finally outputs a panoptic recognition result;
a network training module 130, which annotates images and adds them to a data set for pre-training so that the network learns edge features and color features, modifies the classifier according to the requirements of coastline inspection, and trains on the annotated images so that the network can recognize the coastline and floating garbage;
a path correction module 140, which is used to adjust the flight direction of the UAV: the UAV flies forward with the extension direction of the coastline as its heading angle, the average of all coastline coordinates is calculated as the starting point of the UAV flight, the points where the coastline meets the land are sorted and fitted into a continuous curve, and the tangent direction of the points on the curve is calculated to determine the flight heading angle.
Specifically, the UAV records coastal video with an onboard camera at a resolution of 1920×1080 and maintains a flight altitude of 10 to 15 meters so that the field of view is clear. The video is processed by the UAV's built-in panoptic segmentation algorithm to detect the coastline and floating garbage. The width of the river bank is calculated, and the UAV adjusts its altitude, latitude, and longitude according to the path correction module 140 to capture video of the river. The recognition results for pollutants such as sewage and floating garbage are displayed in the application in real time, and the time, GPS location, and category data are saved to a database for convenient reuse. When the UAV reaches the end point, it sends a completion signal to the application and returns along the shortest path.
The system uses an end-to-end panoptic segmentation framework that combines instance segmentation and semantic segmentation and assigns each pixel a category label and an instance number. Compared with instance segmentation, panoptic segmentation recognizes not only the target objects but also the background, using the background to refine the pixel regions of recognized objects and further improve segmentation accuracy. Compared with semantic segmentation, panoptic segmentation gives each target object a different identity number, which facilitates distinguishing and locating targets in practical applications. An attention module is added to preprocess the input features, and the processed feature maps show clear feature enhancement effects in different target regions.
The image acquisition module 110 takes five frames per second from the video shot by the UAV, at a resolution of 1920×1080, as the input to the network.
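As one possible realization of this sampling step, the frame grabbing could be sketched with OpenCV as follows; the function name, fallback frame rate, and resizing behavior are illustrative assumptions, not part of the patent:

```python
import cv2

def sample_frames(video_path, fps_out=5, size=(1920, 1080)):
    """Grab roughly five evenly spaced frames per second of video as network inputs."""
    cap = cv2.VideoCapture(video_path)
    fps_in = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if the metadata is missing
    step = max(int(round(fps_in / fps_out)), 1)
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(cv2.resize(frame, size))  # keep the 1920x1080 input size
        index += 1
    cap.release()
    return frames
```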
As shown in Figure 2, the feature extraction module 120 uses the multi-scale feature extraction network FPN to extract features of different scales, solving the problem that differences in object size affect the recognition result. FPN applies convolution to the shallow features and fuses them with the deep features, so that shallow and deep features are used together for prediction. The information carried by the two kinds of features differs in importance: the shallow network has more detailed features, while the deep network's features are more abstract, and the two complement each other, improving the network recognition rate. After fusion, the final shared features are obtained and provided to the following three branches: the region proposal network branch, the foreground branch, and the background branch.
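A minimal sketch of the top-down fusion described here, assuming a PyTorch setting with one shallow and one deep backbone stage; the channel sizes and module names are illustrative, not taken from the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFPN(nn.Module):
    """Fuse a shallow (high-resolution) and a deep (low-resolution) feature map."""
    def __init__(self, c_shallow=256, c_deep=512, c_out=256):
        super().__init__()
        self.lat_shallow = nn.Conv2d(c_shallow, c_out, kernel_size=1)  # lateral 1x1 conv
        self.lat_deep = nn.Conv2d(c_deep, c_out, kernel_size=1)
        self.smooth = nn.Conv2d(c_out, c_out, kernel_size=3, padding=1)

    def forward(self, shallow, deep):
        top = self.lat_deep(deep)
        # upsample the deep map and add it to the convolved shallow map
        top_up = F.interpolate(top, size=shallow.shape[-2:], mode="nearest")
        shared = self.lat_shallow(shallow) + top_up
        return self.smooth(shared)  # shared features for the three branches
```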
In the region proposal network branch, the shared features pass through the region proposal network RPN (Region Proposal Network), which computes every region in the image that may contain a target, i.e. the proposed regions. The proposed regions pass through fully connected layers one by one, the softmax function computes the output category, and region regression computes the position coordinates of the target in the image. RPN is a fast and fairly accurate detection algorithm that can quickly extract reasonably accurate feature maps as input to the other modules. Its output features are shared by the other branches, saving them the time of extracting features separately.
The foreground branch is responsible for segmenting foreground targets. First, the shared features go through the region-of-interest alignment algorithm (ROIAlign), which applies bilinear interpolation to the multiple proposed regions produced by the region proposal network and then pools them into 14×14 and 7×7 feature maps. Using bilinear interpolation before pooling retains more useful pixel information than direct pooling, greatly reducing the loss of features during pooling, which is very helpful for detecting and segmenting small targets. The 14×14 feature maps enter the mask generation network, which consists of the residual network ResNet50 followed by two fully connected layers, and outputs masked feature maps to obtain the foreground target masks. ResNet50 offers a relatively good balance between performance and effectiveness: on the premise that recognition accuracy does not drop too much, its computational requirements are modest. The 7×7 feature maps pass through the classification and localization network, which consists of two connected layers followed by a regression algorithm and a softmax algorithm, and outputs the category of the foreground target and its position coordinates in the image.
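The two pooled resolutions could be produced with torchvision's ROIAlign operator, as in this sketch; the tensor shapes, box values, and 1/16 scale factor are assumptions for illustration only:

```python
import torch
from torchvision.ops import roi_align

# shared feature map: batch of 1, 256 channels, roughly 1/16 of the input resolution
features = torch.randn(1, 256, 120, 68)
# one proposal as a (batch_index, x1, y1, x2, y2) box in input-image pixels
boxes = torch.tensor([[0, 100.0, 200.0, 400.0, 500.0]])

# bilinear sampling first, then pooling, as described above
mask_feats = roi_align(features, boxes, output_size=(14, 14), spatial_scale=1 / 16)
cls_feats = roi_align(features, boxes, output_size=(7, 7), spatial_scale=1 / 16)
print(mask_feats.shape, cls_feats.shape)  # (1, 256, 14, 14) and (1, 256, 7, 7)
```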
The background branch is responsible for segmenting the background of the image. In the background segmentation process, two attention modules are used, the proposal attention module and the mask attention module, which model the long-range spatial context and the channel dimension and establish the relationship between foreground things and background stuff in panoptic segmentation through a series of coarse-to-fine attention blocks. Compared with not using attention modules, more useful feature information is extracted. In the network implementation, the shared features and the region proposal network output first pass through the proposal attention module: the features are multiplied element-wise and then added element-wise to the original features. The benefit of this is that the information in the proposed regions adds spatial attention and guides the extraction of background features; the process of adding proposal attention is shown in Figure 3, in which one operator (rendered here as ⊕) denotes element-wise addition and the other (⊗) denotes element-wise multiplication. Compared with a network without the attention module, the features of target regions become more prominent and the features of irrelevant regions are reduced, so fewer irrelevant features and more target features are extracted, which improves segmentation accuracy and reduces the chance of false detection. After the proposal attention module, a mask attention module is also used, as shown in Figure 4. This module fuses the foreground feature maps and the background feature maps, using the foreground information to refine the background features. The foreground features are obtained from the mask generation network of the foreground branch and restored to the original feature-map size with upsampling and feature concatenation; then, as with the proposal attention, they are multiplied element-wise and added element-wise to the original features. After adding attention, group normalization is used on the background branch for feature calibration, improving segmentation accuracy.
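A minimal sketch of the element-wise attention pattern described above (multiply by a guidance map, then add back to the original features), assuming PyTorch tensors; the class names and the group count are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ElementwiseAttention(nn.Module):
    """Attend to `feat` with a guidance map: feat * guide, then + feat."""
    def forward(self, feat, guide):
        return feat * guide + feat  # element-wise product, then element-wise sum

class MaskAttention(nn.Module):
    """Refine background features with upsampled foreground mask features."""
    def __init__(self, channels=256):
        super().__init__()
        self.attend = ElementwiseAttention()
        self.norm = nn.GroupNorm(num_groups=32, num_channels=channels)  # feature calibration

    def forward(self, bg_feat, fg_mask_feat):
        fg_up = nn.functional.interpolate(
            fg_mask_feat, size=bg_feat.shape[-2:], mode="bilinear", align_corners=False
        )
        return self.norm(self.attend(bg_feat, fg_up))
```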
Each convolutional layer is followed by a normalization-activation module consisting of a normalization layer and a ReLU activation function. The normalization layer maps the distribution of the data into [0, 1], making gradient descent faster and more accurate, accelerating convergence, and reducing training time. The ReLU activation function is:
ReLU(x) = max(0, x)
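As a sketch, such a post-convolution block might look like the following in PyTorch; the use of BatchNorm2d as the normalization layer is an assumption, since the patent only specifies "a normalization layer":

```python
import torch.nn as nn

def conv_norm_relu(c_in, c_out):
    """3x3 convolution followed by the normalization-activation module."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(c_out),   # normalization layer (assumed BatchNorm here)
        nn.ReLU(inplace=True),   # ReLU(x) = max(0, x)
    )
```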
Each region that may contain a target is cropped out and becomes a separate region of interest F1, F2, …, Fn, which are fed to the classification and localization module and the mask generation module respectively. The classification and localization network consists of two connected layers followed by a regression algorithm and a softmax algorithm and outputs the target category and the localization coordinates in the original image; the mask generation network consists of the residual network ResNet50 followed by two fully connected layers and outputs masked feature maps. Together these give the final classification result, localization coordinates, and mask region of each target. The loss of the output is the sum of the losses of the three results:
L_final = L_class + L_box + L_mask
where L_final is the final loss, L_class is the category prediction loss, L_box is the localization loss, and L_mask is the mask loss.
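A sketch of combining the three loss terms during training, assuming standard PyTorch loss functions; the particular choices (cross-entropy, smooth L1, binary cross-entropy) are common for these heads but are assumptions, not specified by the patent:

```python
import torch.nn.functional as F

def total_loss(cls_logits, cls_target, box_pred, box_target, mask_logits, mask_target):
    """L_final = L_class + L_box + L_mask, as in the text above."""
    l_class = F.cross_entropy(cls_logits, cls_target)
    l_box = F.smooth_l1_loss(box_pred, box_target)
    l_mask = F.binary_cross_entropy_with_logits(mask_logits, mask_target)
    return l_class + l_box + l_mask
```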
After the input image is processed by the network, the background, ocean and land, and the foreground targets, floating garbage, are accurately segmented. The ocean pixels are output to the heading planning algorithm to adjust the flight attitude and heading. The category and GPS location of the floating garbage are recorded in the database for reference by the relevant clean-up departments.
To enable the network to learn the features of the coastline and floating garbage, the coastal images are annotated through the network training module 130, and a data set of 20,000 coastline images is generated for training. During annotation, strong and weak labels are used. The total data set is divided into two parts in a 3:1 ratio: set one and set two. The category instances in set one all carry mask annotations, i.e. strong labels; the category instances in set two have only bounding-box annotations, i.e. weak labels. Since the categories in set two carry only weak labels for the target objects, the model is trained with a combination of strong and weak labels. Weak labels only require marking objects with rectangular boxes, which takes just a few seconds, less than one-tenth of the production time of strong labels. This greatly improves annotation efficiency and thus increases the size of the training set; moreover, as more data is added, the effect of network training also improves.
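A sketch of the 3:1 split into strongly and weakly labeled subsets; the function name and the use of a fixed seed are assumptions for illustration:

```python
import random

def split_dataset(image_ids, seed=0):
    """Split image ids 3:1 into a strong-label set (masks) and a weak-label set (boxes)."""
    rng = random.Random(seed)
    ids = list(image_ids)
    rng.shuffle(ids)
    cut = len(ids) * 3 // 4
    return ids[:cut], ids[cut:]  # set one (strong labels), set two (weak labels)
```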
In addition to the partially supervised learning method, transfer learning is also used in the training process. First, the coco2014 and coco2015 data sets, which contain 330K images, 80 object categories, 5 labels per image, and 250,000 keypoints, are used for pre-training. Pre-training lets the network learn edge features and color features, and these network parameters are used for further training; since the new task also contains similar edge and color features, the network converges faster and the recognition rate improves. When training with our own data set, the pre-trained classifier is discarded first and the preceding hidden-layer network structure and parameters are retained. Because the number of categories differs, the classifier is modified according to the requirements of coastline inspection so that the number of output categories equals the number of categories that actually need to be detected. After the classifier output is modified, its parameters are randomly initialized, and the 20,000 annotated coastline images are used for training so that the network can recognize the target objects along the coastline. The trained network can then recognize the coastline and floating garbage.
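A sketch of this fine-tuning step using a torchvision detection model; replacing the box predictor like this is one common way to swap out the pre-trained classifier, not necessarily the exact procedure used by the patent, and the number of classes below is an assumption:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# load a COCO-pretrained detector and keep its hidden layers and weights
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# discard the pretrained classifier head and attach a randomly initialized one
num_classes = 3  # e.g. background, coastline, floating garbage (assumed)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
# ...then fine-tune on the annotated coastline images
```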
The path correction module 140 is used to adjust the flight direction of the UAV. The UAV flies forward with the extension direction of the coastline as its heading angle, and the average of all coastline coordinates is calculated as the starting point of the UAV flight; the panoptic segmentation algorithm detects the background of the image, the coastline and the land. Since the background detected by the recognition algorithm consists of discrete, irregular points, before the heading angle can be calculated, the points where the coastline meets the land must be sorted and fitted into a continuous curve so that the tangent direction of the points on the curve can be computed to determine the heading angle.
The UAV takes all coastline coordinates [{P1x, P1y}, {P2x, P2y}, …, {Pnx, Pny}] as input, sorts them by the sum of squares of the pixel coordinates x and y, computes the Euclidean distance between pairs of points for sorting, obtains the adjacent and continuous coastline coordinate group P, fits the coordinate points of P into a curve, and obtains the offset angle α according to the formula:
α = 90° − arctan(k)
where k is the slope of the tangent at the midpoint of the curve and is used to adjust the flight direction of the UAV.
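A sketch of this heading computation with NumPy; the quadratic fit degree and the pixel-coordinate conventions are assumptions for illustration:

```python
import numpy as np

def heading_offset(coast_points):
    """Order the coastline points, fit a curve, and return alpha in degrees."""
    pts = sorted(coast_points, key=lambda p: p[0] ** 2 + p[1] ** 2)  # by x^2 + y^2
    pts = np.asarray(pts, dtype=float)
    xs, ys = pts[:, 0], pts[:, 1]
    coeffs = np.polyfit(xs, ys, deg=2)          # fitted curve y = f(x)
    x_mid = xs[len(xs) // 2]
    k = np.polyval(np.polyder(coeffs), x_mid)   # tangent slope at the midpoint
    return 90.0 - np.degrees(np.arctan(k))      # alpha = 90 deg - arctan(k)
```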
To balance recognition performance and flight safety, the seawater area photographed by the UAV should occupy 60 to 80% of the frame. Changes in the sea surface and in the flow direction during flight both affect this proportion. According to the actual situation, the path correction module 140 is designed with three path correction schemes: an initial heading angle scheme, a width change scheme, and a flow direction change scheme.
Initial heading angle scheme: this scheme solves the problem of automatically finding the heading angle at the start of an inspection; the UAV flies forward with the extension direction of the coastline as its heading angle. The initial flight altitude is set to 20 meters to ensure that both banks of the river can be photographed. The instance segmentation algorithm recognizes the seawater area and saves all x and y axis coordinates of seawater in the frame as a two-dimensional array. For each y coordinate, the smallest x coordinate is taken as a coastline coordinate. The average of all coastline coordinates is calculated and used as the starting point of the UAV flight. The flight heading angle is determined according to the heading algorithm above, and the UAV rotates to the appropriate angle.
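A sketch of extracting coastline coordinates from a binary seawater mask in this per-row, minimum-x fashion; the mask orientation (rows as y, columns as x) is an assumption:

```python
import numpy as np

def coastline_from_mask(sea_mask):
    """For each image row (y), take the smallest x that is seawater."""
    coords = []
    for y in range(sea_mask.shape[0]):
        xs = np.flatnonzero(sea_mask[y])  # x positions of seawater in this row
        if xs.size:
            coords.append((int(xs.min()), y))
    start = np.mean(coords, axis=0)  # average point: the flight starting point
    return coords, start
```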
Width change scheme: the seawater mask area is computed. If the area exceeds 80% of the frame, the UAV is flying too low; it stops flying forward and slowly ascends until the seawater mask area occupies 70% of the frame, then continues flying. If the area is below 60% of the frame, the UAV is flying too high; it stops flying forward and slowly descends until the seawater mask area occupies 70% of the frame, then continues flying.
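This band controller can be sketched as follows; the returned command strings are hypothetical placeholders, not a real flight-control API:

```python
def altitude_action(sea_fraction):
    """Map the seawater mask fraction of the frame to an altitude command."""
    if sea_fraction > 0.80:
        return "stop_and_climb_until_0.70"    # too low: sea fills the frame
    if sea_fraction < 0.60:
        return "stop_and_descend_until_0.70"  # too high: sea area too small
    return "continue_forward"
```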
Flow direction change scheme: the flow direction of the river changes during flight. To realize automatic pathfinding, the UAV's built-in instance segmentation algorithm and heading algorithm compute the heading offset angle α in real time; when the offset of the flight heading angle exceeds 30°, the UAV rotates, and when it is less than 30° the offset is ignored. At the same time, to keep the seawater on one side of the frame, the UAV adjusts its position according to the coordinates of the midpoint of the coastline. The midpoint coordinates (xm, ym) are the average of all detected coastline points.
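A sketch of the 30° dead-band rule; the command tuple is again a hypothetical placeholder:

```python
def yaw_action(alpha_deg, threshold=30.0):
    """Rotate only when the real-time heading offset exceeds the dead band."""
    if abs(alpha_deg) > threshold:
        return ("rotate", alpha_deg)  # turn toward the coastline tangent
    return ("hold", 0.0)              # small offsets are ignored
```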
To further simplify the use of the UAV, the system also includes a terminal control module for remotely controlling the UAV, which is provided with an information display unit, a UAV management unit, and an information management unit. The operator selects a route in the application of the terminal control module to achieve the following functions: entering new river data, selecting the river to inspect, viewing the UAV status in real time, and querying inspection results. The information display unit displays the video shot by the UAV in real time to prevent accidents; the results of the algorithm analysis are displayed at the same time, making it convenient to view the detection results. The UAV management unit displays the UAV's battery level, storage usage, positioning information, and direction information. The information management unit has an entry button for entering the start and end points of a new river; a river selection button for choosing the river to inspect, upon which the UAV automatically flies to the entered latitude and longitude of the river's starting point and then invokes the path self-correction algorithm to begin automatic inspection; and a query button for viewing past inspection results in the database, used to find the location and category of pollution and facilitate the formulation of the next treatment plan.
Finally, the execution flow of the entire system described above is shown in Figure 5.
As shown in Figure 6, the second embodiment of the present invention provides a DANet-based UAV coastline floating garbage inspection method, including the following steps:
S100: shoot video of the coastline to be inspected with a UAV and obtain images from the video;
S200: input the images into an FPN network to extract shallow features and deep features, fuse the shallow and deep features to obtain shared features, pass the shared features through a region proposal network branch, a foreground branch, and a background branch, and finally output a panoptic recognition result;
S300: annotate the images and add them to a data set for pre-training so that the network learns edge features and color features, modify the classifier according to the requirements of coastline inspection, and train on the annotated images so that the network can recognize the coastline and floating garbage;
S400: adjust the flight direction of the UAV: the UAV flies forward with the extension direction of the coastline as its heading angle, the average of all coastline coordinates is calculated as the starting point of the UAV flight, the points where the coastline meets the land are sorted and fitted into a continuous curve, and the tangent direction of the points on the curve is calculated to determine the flight heading angle.
The DANet-based UAV coastline floating garbage inspection method has at least the following technical effects: the panoptic segmentation algorithm not only segments the background and the foreground target objects in the image at the same time, but also gives each foreground target an independent identity. The precise segmentation results help the UAV adjust its heading and automatically plan its flight path, while detecting floating garbage along the coastline and reporting the location and category of pollution, helping the relevant departments solve the pollution inspection problem over long coastlines. The method improves the intelligence of inspection, optimizes the UAV route, and can achieve the goal of low cost and high efficiency.
The core panoptic segmentation algorithm of this inspection system and method is the DANet algorithm, which uses the RPN network to quickly extract regions of interest; the foreground branch performs target classification and position regression; and the background branch introduces the proposal attention module and the mask attention module, using the feature maps extracted by the RPN and the foreground branch to improve the accuracy of background segmentation. The algorithm can recognize targets and background at the same time, correspondingly solving the problems of recognizing floating garbage and recognizing the coastline simultaneously. The network extracts features once and uses them in all three branches, which saves time compared with extracting them separately.
In terms of data augmentation, this patent uses partially supervised learning and transfer learning. Partially supervised learning greatly reduces annotation time, so more data can be annotated in the same time. Transfer learning uses the coco data set to train the network weights and transfers the weights to the task at hand; even with a relatively small data set, a well-performing model can be trained.
The preferred embodiments of the present invention have been described in detail above, but the present invention is not limited to the above embodiments. Those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and these equivalent modifications or substitutions are all included within the scope defined by the claims of this application.

Claims (10)

  1. A DANet-based UAV coastline floating garbage inspection system, characterized by comprising:
    an image acquisition module, which uses a UAV to shoot video of the coastline to be inspected and obtains images from the video;
    a feature extraction module, which inputs the images into an FPN network to extract shallow features and deep features, fuses the shallow and deep features to obtain shared features, passes the shared features through a region proposal network branch, a foreground branch, and a background branch, and finally outputs a panoptic recognition result;
    a network training module, which annotates images and adds them to a data set for pre-training so that the network learns edge features and color features, modifies the classifier according to the requirements of coastline inspection, and trains on the annotated images so that the network can recognize the coastline and floating garbage;
    a path correction module, which is used to adjust the flight direction of the UAV, wherein the UAV flies forward with the extension direction of the coastline as its heading angle, the average of all coastline coordinates is calculated as the starting point of the UAV flight, the points where the coastline meets the land are sorted and fitted into a continuous curve, and the tangent direction of the points on the curve is calculated to determine the flight heading angle.
  2. The DANet-based UAV coastline floating garbage inspection system according to claim 1, characterized in that the image acquisition module takes five frames per second from the video, and the image resolution is 1920×1080.
  3. The DANet-based UAV coastline floating garbage inspection system according to claim 1, characterized in that when the shared features pass through the region proposal network branch, they go through the region proposal network (RPN), which computes every proposed region in the image that may contain a target; the proposed regions pass through fully connected layers one by one, the output category is computed by a softmax function, and the position coordinates of the target in the image are computed by region regression.
  4. The DANet-based UAV coastline floating garbage inspection system according to claim 1, characterized in that when the foreground branch uses the shared features to segment foreground targets, a region-of-interest alignment algorithm (ROIAlign) first applies bilinear interpolation to the multiple proposed regions produced by the region proposal network and then pools them into 14×14 and 7×7 feature maps; the 14×14 feature maps enter the mask generation network, which consists of the residual network ResNet50 followed by two fully connected layers and outputs masked feature maps to obtain the foreground target masks; the 7×7 feature maps pass through the classification and localization network, which consists of two connected layers followed by a regression algorithm and a softmax algorithm and outputs the category of the foreground target and its position coordinates in the image.
  5. The DANet-based UAV coastline floating garbage inspection system according to claim 1, characterized in that when the background branch uses the shared features to segment the background of the image, a proposal attention module and a mask attention module are used; first, the shared features and the region proposal network output pass through the proposal attention module, where the features are multiplied element-wise and then added element-wise to the original features; the mask attention module then fuses the foreground feature maps and the background feature maps, using the foreground information to refine the background features.
  6. The DANet-based UAV coastline floating garbage inspection system according to claim 1, characterized in that the network training module annotates coastal images with strong and weak labels and uses the coco2014 and coco2015 data sets for pre-training; pre-training enables the network to learn edge features and color features, and these network parameters are used for further training; when training with the data set, the pre-trained classifier is discarded first and the preceding hidden-layer network structure and parameters are retained; because the number of categories differs, the classifier is modified according to the requirements of coastline inspection so that the number of output categories equals the number of categories that actually need to be detected; after the classifier output is modified, its parameters are randomly initialized, and the annotated coastline images are then used for training, so that the trained network can recognize the coastline and floating garbage.
  7. The DANet-based UAV coastline floating garbage inspection system according to claim 1, characterized in that the UAV has a built-in image instance segmentation algorithm and heading algorithm; the image instance segmentation algorithm recognizes the seawater area and saves all x and y axis coordinates of the seawater in the frame as a two-dimensional array; for each y coordinate, the smallest x coordinate is taken as a coastline coordinate; the average of all coastline coordinates is calculated and used as the starting point of the UAV flight; the flight heading angle is determined according to the heading algorithm, and the UAV rotates to the appropriate angle.
  8. The DANet-based UAV coastline floating garbage inspection system according to claim 7, characterized in that the UAV takes all coastline coordinates [{P1x, P1y}, {P2x, P2y}, …, {Pnx, Pny}] as input, sorts them by the sum of squares of the pixel coordinates x and y, computes the Euclidean distance between pairs of points for sorting, obtains the adjacent and continuous coastline coordinate group P, fits the coordinate points of P into a curve, and obtains the offset angle α according to the formula:
    α = 90° − arctan(k)
    where k is the slope of the tangent at the midpoint of the curve and is used to adjust the flight direction of the UAV.
  9. The DANet-based UAV coastline floating garbage inspection system according to claim 1, characterized by further comprising a terminal control module for remotely controlling the UAV, the terminal control module being provided with an information display unit, a UAV management unit, and an information management unit.
  10. A DANet-based UAV coastline floating garbage inspection method, characterized by comprising:
    shooting video of the coastline to be inspected with a UAV and obtaining images from the video;
    inputting the images into an FPN network to extract shallow features and deep features, fusing the shallow and deep features to obtain shared features, passing the shared features through a region proposal network branch, a foreground branch, and a background branch, and finally outputting a panoptic recognition result;
    annotating the images and adding them to a data set for pre-training so that the network learns edge features and color features, modifying the classifier according to the requirements of coastline inspection, and training on the annotated images so that the network can recognize the coastline and floating garbage;
    adjusting the flight direction of the UAV, wherein the UAV flies forward with the extension direction of the coastline as its heading angle, the average of all coastline coordinates is calculated as the starting point of the UAV flight, the points where the coastline meets the land are sorted and fitted into a continuous curve, and the tangent direction of the points on the curve is calculated to determine the flight heading angle.
PCT/CN2020/078289 2020-01-17 2020-03-06 DANet-based UAV coastline floating garbage inspection system WO2021142902A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010050817.5A CN111259809B (zh) 2020-01-17 2020-01-17 DANet-based UAV coastline floating garbage inspection system
CN202010050817.5 2020-01-17

Publications (1)

Publication Number Publication Date
WO2021142902A1 true WO2021142902A1 (zh) 2021-07-22

Family

ID=70947592

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/078289 WO2021142902A1 (zh) 2020-01-17 2020-03-06 DANet-based UAV coastline floating garbage inspection system

Country Status (3)

Country Link
US (1) US11195013B2 (zh)
CN (1) CN111259809B (zh)
WO (1) WO2021142902A1 (zh)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743208A (zh) * 2021-07-30 2021-12-03 南方海洋科学与工程广东省实验室(广州) 一种基于无人机阵列的中华白海豚数量统计方法及系统
CN113837924A (zh) * 2021-08-11 2021-12-24 航天科工深圳(集团)有限公司 一种基于无人艇感知系统的水岸线检测方法
CN114283237A (zh) * 2021-12-20 2022-04-05 中国人民解放军军事科学院国防科技创新研究院 一种无人机仿真视频生成方法
CN115100553A (zh) * 2022-07-06 2022-09-23 浙江科技学院 基于卷积神经网络的河面污染信息检测处理方法及系统
CN115439765A (zh) * 2022-09-17 2022-12-06 艾迪恩(山东)科技有限公司 基于机器学习无人机视角下海洋塑料垃圾旋转检测方法
CN115713174A (zh) * 2022-11-11 2023-02-24 中国地质大学(武汉) 一种无人机城市巡检系统及方法
CN116052027A (zh) * 2023-03-31 2023-05-02 深圳联和智慧科技有限公司 基于无人机的漂浮垃圾种类识别方法、系统及云平台
CN117392465A (zh) * 2023-12-08 2024-01-12 聚真宝(山东)技术有限公司 一种基于视觉的垃圾分类数字化管理方法
CN117671545A (zh) * 2024-01-31 2024-03-08 武汉华测卫星技术有限公司 一种基于无人机的水库巡检方法及系统
CN117876910A (zh) * 2024-03-06 2024-04-12 西北工业大学 基于主动学习的无人机目标检测关键数据筛选方法
CN118170156A (zh) * 2024-05-14 2024-06-11 石家庄思凯电力建设有限公司 基于飞行动态规划的无人机清除杆塔鸟窝的方法及装置

Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112748742A (zh) * 2020-06-10 2021-05-04 宋师光 自动化山体目标躲避平台及方法
CN111629151B (zh) * 2020-06-12 2023-01-24 北京字节跳动网络技术有限公司 视频合拍方法、装置、电子设备及计算机可读介质
CN112102369B (zh) * 2020-09-11 2024-04-09 陕西欧卡电子智能科技有限公司 水面漂浮目标自主巡检方法、装置、设备及存储介质
CN112360699A (zh) * 2020-10-22 2021-02-12 华能大理风力发电有限公司 一种全自动风力发电机组叶片智能巡视及诊断分析方法
CN112257623B (zh) * 2020-10-28 2022-08-23 长沙立中汽车设计开发股份有限公司 一种路面清洁度判定和自动清扫方法及自动清扫环卫装置
CN112802039B (zh) * 2021-01-26 2022-03-01 桂林电子科技大学 一种基于全局边缘注意力的全景分割方法
CN113096136A (zh) * 2021-03-30 2021-07-09 电子科技大学 一种基于深度学习的全景分割方法
CN113158965B (zh) * 2021-05-08 2024-03-19 福建万福信息技术有限公司 一种实现海漂垃圾识别的仿视觉识别方法、设备和介质
CN113592822B (zh) * 2021-08-02 2024-02-09 郑州大学 一种电力巡检图像的绝缘子缺陷定位方法
CN113743470B (zh) * 2021-08-04 2022-08-23 浙江联运环境工程股份有限公司 自动破袋分类箱基于ai算法垃圾识别精度提升方法
CN113780078B (zh) * 2021-08-05 2024-03-19 广州西威科智能科技有限公司 无人驾驶视觉导航中故障物快速精准识别方法
CN113807347A (zh) * 2021-08-20 2021-12-17 北京工业大学 一种基于目标检测技术的厨余垃圾杂质识别方法
CN113762132A (zh) * 2021-09-01 2021-12-07 国网浙江省电力有限公司金华供电公司 一种无人机巡检图像自动归类与自动命名系统
CN113657691B (zh) * 2021-10-19 2022-03-01 北京每日优鲜电子商务有限公司 信息显示方法、装置、电子设备和计算机可读介质
CN114092877A (zh) * 2021-11-03 2022-02-25 北京工业大学 一种基于机器视觉的垃圾桶无人值守系统设计方法
CN113867404B (zh) * 2021-11-05 2024-02-09 交通运输部天津水运工程科学研究所 一种基于无人机的海滩垃圾巡检方法和系统
CN114220044B (zh) * 2021-11-23 2022-07-29 慧之安信息技术股份有限公司 一种基于ai算法的河道漂浮物检测方法
CN113919762B (zh) * 2021-12-10 2022-03-15 重庆华悦生态环境工程研究院有限公司深圳分公司 一种基于漂浮物事件的调度方法及装置
CN114422822B (zh) * 2021-12-27 2023-06-06 北京长焜科技有限公司 一种支持自适应hdmi编码的无人机数图传输控制方法
CN114565635B (zh) * 2022-03-08 2022-11-11 安徽新宇环保科技股份有限公司 一种智能识别河道垃圾并进行分类收集的无人船系统
CN114550016B (zh) * 2022-04-22 2022-07-08 北京中超伟业信息安全技术股份有限公司 一种基于上下文信息感知的无人机定位方法及系统
CN114782871B (zh) * 2022-04-29 2022-11-25 广东技术师范大学 一种基于物联网的海洋异常信息监测方法和装置
CN114596536A (zh) * 2022-05-07 2022-06-07 陕西欧卡电子智能科技有限公司 无人船沿岸巡检方法、装置、计算机设备及存储介质
CN114584403B (zh) * 2022-05-07 2022-07-19 中国长江三峡集团有限公司 一种发电厂巡检设备认证管理系统和方法
CN115061490B (zh) * 2022-05-30 2024-04-05 广州中科云图智能科技有限公司 基于无人机的水库巡检方法、装置、设备以及存储介质
CN115060343B (zh) * 2022-06-08 2023-03-14 山东智洋上水信息技术有限公司 一种基于点云的河流水位检测系统、检测方法
CN114792319B (zh) * 2022-06-23 2022-09-20 国网浙江省电力有限公司电力科学研究院 一种基于变电图像的变电站巡检方法及系统
CN114937199B (zh) * 2022-07-22 2022-10-25 山东省凯麟环保设备股份有限公司 一种基于判别性特征增强的垃圾分类方法与系统
CN115272890B (zh) * 2022-07-27 2023-08-22 杭州亚太工程管理咨询有限公司 一种水利工程数据采集系统及方法
CN115147703B (zh) * 2022-07-28 2023-11-03 广东小白龙环保科技有限公司 一种基于GinTrans网络的垃圾分割方法及系统
CN115019216B (zh) * 2022-08-09 2022-10-21 江西师范大学 实时地物检测和定位计数方法、系统及计算机
CN115562348A (zh) * 2022-11-03 2023-01-03 国网福建省电力有限公司漳州供电公司 基于变电站的无人机图像技术方法
CN115564838B (zh) * 2022-12-06 2023-03-24 深圳联和智慧科技有限公司 基于无人机的河堤检测侵占定位方法及系统
CN115588145B (zh) * 2022-12-12 2023-03-21 深圳联和智慧科技有限公司 基于无人机的河道垃圾漂浮识别方法及系统
CN115601670B (zh) * 2022-12-12 2023-03-24 合肥恒宝天择智能科技有限公司 基于人工智能和高分辨率遥感影像的松材线虫病监测方法
CN115861359B (zh) * 2022-12-16 2023-07-21 兰州交通大学 一种水面漂浮垃圾图像自适应分割提取方法
CN115797619B (zh) * 2023-02-10 2023-05-16 南京天创电子技术有限公司 一种适用于巡检机器人仪表图像定位的纠偏方法
CN116152115B (zh) * 2023-04-04 2023-07-07 湖南融城环保科技有限公司 基于计算机视觉的垃圾图像去噪处理方法
CN116630828B (zh) * 2023-05-30 2023-11-24 中国公路工程咨询集团有限公司 基于地形环境适配的无人机遥感信息采集系统及方法
CN116363537B (zh) * 2023-05-31 2023-10-24 广东电网有限责任公司佛山供电局 一种变电站站外飘挂物隐患识别方法和系统
US20230348120A1 (en) * 2023-07-10 2023-11-02 Brian Panahi Johnson System and method for identifying trash within a predetermined geographic boundary using unmanned aerial vehicles
CN116614084B (zh) * 2023-07-17 2023-11-07 北京数维思创科技有限公司 一种基于无人机场的光伏电站远程巡检系统
CN116682000B (zh) * 2023-07-28 2023-10-13 吉林大学 一种基于事件相机的水下蛙人目标检测方法
CN117148871B (zh) * 2023-11-01 2024-02-27 中国民航管理干部学院 一种多无人机协同电力巡检方法及系统
CN117274723B (zh) * 2023-11-22 2024-03-26 国网智能科技股份有限公司 一种用于输电巡检的目标识别方法、系统、介质及设备
CN117274845A (zh) * 2023-11-22 2023-12-22 山东中宇航空科技发展有限公司 一种飞行无人机影像抓取方法、系统、设备及储存介质
CN117474190B (zh) * 2023-12-28 2024-02-27 磐石浩海(北京)智能科技有限公司 一种机柜自动巡检方法和装置
CN117765482B (zh) * 2024-02-22 2024-05-14 交通运输部天津水运工程科学研究所 基于深度学习的海岸带垃圾富集区的垃圾识别方法及系统
CN117893933B (zh) * 2024-03-14 2024-05-24 国网上海市电力公司 一种用于输变电设备的无人巡检故障检测方法和系统
CN118130742A (zh) * 2024-05-06 2024-06-04 阳光学院 基于迁移学习的河湖水质遥感反演及评价方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150254800A1 (en) * 2014-03-06 2015-09-10 F12 Solutions, Llc Nitrogen status determination in growing crops
CN106886745A (zh) * 2016-12-26 2017-06-23 西北工业大学 一种基于实时在线地图生成的无人机侦察方法
CN206313928U (zh) * 2017-01-12 2017-07-07 王昱淇 一种用于水域漂浮物监测的无人机监控系统
US20180025480A1 (en) * 2015-02-05 2018-01-25 The Technology Research Centre Ltd. Apparatus and method for analysis of growing items

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1897015A (zh) * 2006-05-18 2007-01-17 王海燕 基于机器视觉的车辆检测和跟踪方法及系统
EP2187339A1 (en) * 2008-11-12 2010-05-19 Fundación Robotiker Method for integrating spectral and spatial features for classifying materials
US9684673B2 (en) * 2013-12-04 2017-06-20 Urthecast Corp. Systems and methods for processing and distributing earth observation images
CA2929254C (en) * 2016-05-06 2018-12-11 SKyX Limited Unmanned aerial vehicle (uav) having vertical takeoff and landing (vtol) capability
CN108510750A (zh) * 2018-04-25 2018-09-07 济南浪潮高新科技投资发展有限公司 一种基于神经网络模型的无人机巡检违章停车的方法
CN108824397A (zh) * 2018-09-29 2018-11-16 五邑大学 一种河流漂浮垃圾收集装置
CN110309762A (zh) * 2019-06-26 2019-10-08 扆亮海 一种基于航空遥感的林业健康评价系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150254800A1 (en) * 2014-03-06 2015-09-10 F12 Solutions, Llc Nitrogen status determination in growing crops
US20180025480A1 (en) * 2015-02-05 2018-01-25 The Technology Research Centre Ltd. Apparatus and method for analysis of growing items
CN106886745A (zh) * 2016-12-26 2017-06-23 西北工业大学 一种基于实时在线地图生成的无人机侦察方法
CN206313928U (zh) * 2017-01-12 2017-07-07 王昱淇 一种用于水域漂浮物监测的无人机监控系统

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743208A (zh) * 2021-07-30 2021-12-03 南方海洋科学与工程广东省实验室(广州) 一种基于无人机阵列的中华白海豚数量统计方法及系统
CN113837924A (zh) * 2021-08-11 2021-12-24 航天科工深圳(集团)有限公司 一种基于无人艇感知系统的水岸线检测方法
CN114283237A (zh) * 2021-12-20 2022-04-05 中国人民解放军军事科学院国防科技创新研究院 一种无人机仿真视频生成方法
CN114283237B (zh) * 2021-12-20 2024-05-10 中国人民解放军军事科学院国防科技创新研究院 一种无人机仿真视频生成方法
CN115100553A (zh) * 2022-07-06 2022-09-23 浙江科技学院 基于卷积神经网络的河面污染信息检测处理方法及系统
CN115439765B (zh) * 2022-09-17 2024-02-02 艾迪恩(山东)科技有限公司 基于机器学习无人机视角下海洋塑料垃圾旋转检测方法
CN115439765A (zh) * 2022-09-17 2022-12-06 艾迪恩(山东)科技有限公司 基于机器学习无人机视角下海洋塑料垃圾旋转检测方法
CN115713174A (zh) * 2022-11-11 2023-02-24 中国地质大学(武汉) 一种无人机城市巡检系统及方法
CN116052027A (zh) * 2023-03-31 2023-05-02 深圳联和智慧科技有限公司 基于无人机的漂浮垃圾种类识别方法、系统及云平台
CN117392465B (zh) * 2023-12-08 2024-03-22 聚真宝(山东)技术有限公司 一种基于视觉的垃圾分类数字化管理方法
CN117392465A (zh) * 2023-12-08 2024-01-12 聚真宝(山东)技术有限公司 一种基于视觉的垃圾分类数字化管理方法
CN117671545A (zh) * 2024-01-31 2024-03-08 武汉华测卫星技术有限公司 一种基于无人机的水库巡检方法及系统
CN117671545B (zh) * 2024-01-31 2024-04-19 武汉华测卫星技术有限公司 一种基于无人机的水库巡检方法及系统
CN117876910A (zh) * 2024-03-06 2024-04-12 西北工业大学 基于主动学习的无人机目标检测关键数据筛选方法
CN118170156A (zh) * 2024-05-14 2024-06-11 石家庄思凯电力建设有限公司 基于飞行动态规划的无人机清除杆塔鸟窝的方法及装置

Also Published As

Publication number Publication date
CN111259809B (zh) 2021-08-17
US11195013B2 (en) 2021-12-07
CN111259809A (zh) 2020-06-09
US20210224512A1 (en) 2021-07-22

Similar Documents

Publication Publication Date Title
WO2021142902A1 (zh) 基于DANet的无人机海岸线漂浮垃圾巡检系统
CN106127204B (zh) 一种全卷积神经网络的多方向水表读数区域检测算法
CN103679674B (zh) 一种无人飞行器实时图像拼接方法及系统
CN110188696A (zh) 一种水面无人装备多源感知方法及系统
Kong et al. General road detection from a single image
CN111368690B (zh) 基于深度学习的海浪影响下视频图像船只检测方法及系统
CN101145200A (zh) 多视觉传感器信息融合的内河船舶自动识别系统
CN110796009A (zh) 基于多尺度卷积神经网络模型的海上船只检测方法及系统
CN103544505B (zh) 面向无人机航拍图像的船只识别系统及方法
CN112488020B (zh) 基于无人机航拍数据的水环境污染情况检测评估装置
CN111986240A (zh) 基于可见光和热成像数据融合的落水人员检测方法及系统
CN109145747A (zh) 一种水面全景图像语义分割方法
CN108681718A (zh) 一种无人机低空目标精准检测识别方法
US11776104B2 (en) Roof condition assessment using machine learning
CN116052222A (zh) 自然采集牛脸图像的牛脸识别方法
CN116385958A (zh) 一种用于电网巡检和监控的边缘智能检测方法
CN116229292A (zh) 一种基于无人机路面巡检病害的巡检系统及方法
CN115240089A (zh) 一种航空遥感图像的车辆检测方法
Sun et al. IRDCLNet: Instance segmentation of ship images based on interference reduction and dynamic contour learning in foggy scenes
CN115661932A (zh) 一种垂钓行为检测方法
CN114581307A (zh) 用于目标追踪识别的多图像拼接方法、系统、设备及介质
CN110334703B (zh) 一种昼夜图像中的船舶检测和识别方法
Liu et al. STCN-Net: A novel multi-feature stream fusion visibility estimation approach
CN115690610A (zh) 一种基于图像匹配的无人机导航方法
CN111401286B (zh) 一种基于部件权重生成网络的行人检索方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20914171

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20914171

Country of ref document: EP

Kind code of ref document: A1