CN110705623B - Sea-sky-line on-line detection method based on full convolution neural network - Google Patents


Publication number
CN110705623B
CN110705623B · Application CN201910915844.1A
Authority
CN
China
Prior art keywords
sea
sky
image
line
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910915844.1A
Other languages
Chinese (zh)
Other versions
CN110705623A (en)
Inventor
姜文
陈启贤
廖煜雷
王博
沈海龙
李志晨
李姿科
成昌盛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN201910915844.1A priority Critical patent/CN110705623B/en
Publication of CN110705623A publication Critical patent/CN110705623A/en
Application granted granted Critical
Publication of CN110705623B publication Critical patent/CN110705623B/en
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Abstract

The invention discloses a sea-sky-line on-line detection method based on a full convolution neural network, used to complete the sea-sky-line detection task of an unmanned surface vehicle in the water surface environment: the sea-sky line under the current sensor view angle is predicted from the image information returned by the optical vision sensor carried by the unmanned surface vehicle and from its hull attitude information. Aiming at complex water surface environments and based on a classification-and-fitting principle, a full convolution neural network and a fully connected network model are designed according to the general category information of the water surface image. The method solves the image stabilization problem of the sensor and the correction problem of the hull pose in complex water surface environments, reduces the search area and accelerates the search, and provides accurate sensor information for the environment perception, motion planning and control of the unmanned surface vehicle.

Description

Sea-sky-line on-line detection method based on full convolution neural network
Technical Field
The invention relates to a sea-sky-line detection method, in particular to a sea-sky-line online detection method for an unmanned surface vehicle based on a full convolution neural network, and belongs to the field of image recognition.
Background
The sea-sky line is the dividing line or region separating the sea surface from the sky; it may be approximated by a straight line or fitted as a region. Sea-sky-line detection is an important part of the traditional sea surface information acquisition process: the line can be used to segment images and restrict target search to the obtained sea surface area, greatly increasing detection speed; it can be used to monitor the sensor attitude of the unmanned ship and to assist obstacle-avoidance navigation and working-state judgment; and, most importantly, it can be used to correct the camera attitude, i.e. to stabilize the image, so as to acquire a high-quality data source.
In 2005, Zhao Renxia et al. of Harbin Engineering University decomposed the image histogram into wavelet coefficients at different levels, selected thresholds according to given segmentation criteria and the wavelet coefficients, and marked the image segmentation regions with those thresholds. On the thresholded result, exploiting the characteristic that edges are grey-level discontinuities with singularities, they applied a discrete wavelet transform to the image to detect the sea-sky line.
In 2009, Zhao Kuan et al. of the Naval Aeronautical Engineering Institute used edge detection to obtain edge points, then extracted those representing the boundary line to construct the sea-sky line and complete segmentation. The influence of water wakes was removed by exploiting the difference between the boundary line and the wakes under large-scale wavelet transform and whether they belong to vertically dense edges. However, the method only assumes the case where the sea-sky line is basically horizontal; its effectiveness when the line is inclined or continuously shaking remains to be verified.
In 2014, Hin et al. of the University of Tebessa used the random sample consensus (RANSAC) algorithm to detect the sea-sky line, addressing difficulties brought to detection by factors such as sunlight reflection, occlusion, poor lighting and cluttered boundary areas. However, the method is limited in scenes with heavy noise, such as coastal images containing many buildings, where it is not effective.
Disclosure of Invention
In view of the prior art, the technical problem to be solved by the invention is to provide a sea-sky-line on-line detection method based on a full convolution neural network, enabling an unmanned ship to detect the sea-sky line in real time and providing an important reference for attitude correction and target search of the unmanned ship.
In order to solve the technical problem, the sea-sky-line on-line detection method based on the full convolution neural network comprises the following steps:
S1: collecting image data from the real boat's sensors, selecting images under different unmanned boat viewing angles, and screening out the images containing the sea-sky line;
S2: designing a full convolution neural network structure, constructing a loss function from the mean square error of the pixel-point classification predictions, and training on the sea-sky-line detection data set until the model converges; if the model does not converge, adding an L2 regularization term to the loss function or adopting a batch regularization training method so that the model converges;
S3: evaluating the accuracy of the converged full convolution model with the mean pixel accuracy index, and executing S4 when the index is greater than a set value; when the index does not exceed the set value, returning to step S2;
S4: classifying the pixel points of the test image with the full convolution neural network model, and combining the category information of each pixel with its position in the image coordinate system to obtain the information tuple of each point: (category, x coordinate, y coordinate);
S5: designing the sea-sky-line equation y = ax + b in the pixel coordinate system together with a shallow fully connected neural network, fitting and correcting the sea-sky-line equation parameters with the information tuples of the image pixel points, and obtaining the final sea-sky line.
The invention also includes:
judging whether the image meets the machine reading habit or not by using the image evaluation index root mean square error of the image including the sea-sky line screened in the S1, wherein the expression is as follows:
Figure BDA0002216091450000021
wherein X is the average number, N is the number of samples, if RMSE < 60%, the image is discarded, and the training set data is screened again; if RMSE > 60%, the image is proved to be capable of further obtaining a production data set.
The invention has the beneficial effects that: based on the classification-and-fitting principle and aiming at complex water surface environments, a full convolution neural network and a fully connected network model are designed according to the general category information of the water surface image. The model has good nonlinear fitting and self-adaptation capability for the unmanned ship's image information; it solves the missed and false detections of the sea-sky line caused by unpredictable hull motion and by the interference of complex water surface environments with texture features, improves on the limitations of traditional detection methods, and provides a high-quality information source for unmanned ship environment perception and motion control. Compared with traditional sea-sky-line detection methods, this sea-sky-line detection method for unmanned surface vehicles offers a new detection idea: the classification information of the pixel points is obtained through image segmentation, and the sea-sky line is fitted by combining the pixel coordinates in the pixel coordinate system.
Drawings
FIG. 1 is a flow chart of a method for detecting a sea-sky-line for an unmanned ship based on a full convolution neural network according to the invention;
FIG. 2 is the full convolution network structure design of the present invention;
FIG. 3 is a fully connected network architecture design of the present invention;
FIG. 4 shows the structure of the detection body of the present invention.
Detailed Description
The following further describes the embodiments of the present invention with reference to the drawings.
The invention discloses a sea-sky-line detection method for an unmanned ship based on a full convolution neural network, which completes the sea-sky-line detection task of the unmanned ship in the water surface environment. The sea-sky line under the current sensor view angle is predicted from the image information returned by the optical vision sensor carried by the unmanned surface vehicle and from its hull attitude information. The purpose is to solve the image stabilization problem of the sensor and the correction problem of the hull attitude in complex water surface environments; at the same time, the search area can be reduced, the search speed increased, and accurate sensor information provided for the environment perception, motion planning and control of the unmanned surface vehicle. The method mainly comprises the following steps:
(1) First, water surface image data are collected while the unmanned boat is under way: a high-definition camera carried by the boat captures, at a certain frame rate, water surface images over an opening angle of about 80 degrees ahead of the boat, and a preliminary screening is performed by manually judging whether each image contains the sea-sky line, yielding about 1000 water surface images with the sea-sky line at different attitudes and angles.
(2) Then, the root mean square error (RMSE) is used to judge whether each image conforms to the machine reading habit; the expression is:

$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i-\bar{X}\right)^2}$$

wherein $\bar{X}$ is the average, $x_i$ the i-th sample, and N the number of samples. If RMSE < 60%, the image is discarded and the training-set data are screened again; if RMSE > 60%, the image can be used further to produce the data set; about 1000 water surface images are collected in this way. The images are screened off-line and then organized in the standard Pascal-VOC format, yielding three main directories, Annotations, Images and Info, which together constitute the sea-sky-line detection data set.
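The screening rule above can be sketched as follows. This is a minimal illustration, not the patent's code: it assumes the RMSE is taken about the mean pixel value, and since the patent does not specify how the 60% threshold is normalised, the threshold is left as a caller-supplied value.

```python
import numpy as np

def rmse(samples):
    """Root mean square error of the samples about their mean,
    matching the formula's definition (X-bar the average, N the count)."""
    x = np.asarray(samples, dtype=np.float64)
    return float(np.sqrt(np.mean((x - x.mean()) ** 2)))

def keep_image(pixels, threshold):
    """Screening rule from the patent: keep the image only when its
    RMSE exceeds the threshold (60% in the text); the normalisation
    of that percentage is an assumption left to the caller."""
    return rmse(pixels) > threshold
```

A flat image (`rmse == 0`) is always discarded, which matches the intent of rejecting images without enough contrast for machine reading.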
(3) A full convolution neural network structure is designed, a loss function is constructed from the mean square error of the pixel-point classification predictions, and the model is trained on the sea-sky-line detection data set until it converges; if it does not converge, an L2 regularization term (L2-Regularization) is added to the loss function or a batch regularization training method is adopted so that the model converges.
The structure of the first part, the full convolution neural network, adopts a 7+2 type structure, as shown in fig. 2. The first 6 layers can be regarded as a backbone network for extracting features; convolution from the 7th layer yields a heat map. Limited by its accuracy, the low-level heat map cannot restore the features in the image well, so forward iteration is needed and a jump structure is introduced to further improve the classification accuracy.
The jump structure is the process that produces heat map 2 and heat map 3 in the figure, where the connection output between up-sampling layers is a weighted sum of the two input layers: the result of the later layer is up-sampled to the dimensions of the earlier layer's result, and then the corresponding positions are added. For the weighting parameter, values from 0.5 to 2.0 are sampled at intervals of 0.1; these 15 models are trained and evaluated separately, and the parameter with the highest mean pixel accuracy (MPA) is taken as the weight of the jump structure.
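The weighted-sum merge of the jump structure can be sketched as below. Nearest-neighbour upsampling stands in for the learned upsampling layer of an FCN, and the names `upsample2` and `skip_merge` as well as the fixed factor-2 ratio are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def upsample2(x):
    """Nearest-neighbour 2x upsampling of a 2-D heat map (a simple
    stand-in for the learned up-sampling an FCN would use)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def skip_merge(shallow, deep, w):
    """Jump-structure merge described in the text: up-sample the deeper
    heat map to the shallower one's size, then add corresponding
    positions with weight w on the up-sampled branch."""
    return shallow + w * upsample2(deep)
```

The weight `w` would then be grid-searched over 0.5, 0.6, ..., 2.0, keeping the value with the highest mean pixel accuracy.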
(4) The structure and weighting parameters are determined. After the model trained on the sea-sky-line detection data set is obtained, it is constrained by the mean pixel accuracy. Because the water surface environment of the unmanned ship is complex and changeable, the environment perception system accumulates image material after each task; the original images together with the original model's predictions are added to the sea-sky-line detection data set, which is updated, and a new sea-sky-line detection model is trained. Only models whose mean pixel accuracy is greater than 0.6 are allowed to participate in prediction.
(5) After the category information of the image pixel points is predicted with the full convolution model, a data set for fitting the sea-sky line of a single image is constructed by combining the coordinates of the pixel points in the pixel coordinate system. Each data item has the form (X, Y, K), where X and Y are the pixel coordinates and K is the pixel's category: (1) water surface, (0) background, (-1) sky. The pixels whose category is sky or water surface are taken out and, considering the amount of computation, 50% of the points of each category are randomly retained as the final data set.
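Building the (X, Y, K) fitting set from a predicted label map can be sketched as below; `build_fit_set` is a hypothetical helper name, and the random generator seed is only for reproducibility of the illustration.

```python
import numpy as np

def build_fit_set(label_map, rng=None):
    """Build (x, y, k) tuples for line fitting: keep only sky (-1) and
    water-surface (1) pixels, drop background (0), and randomly retain
    about 50% of each class to reduce the computation."""
    rng = np.random.default_rng(rng)
    rows = []
    for k in (-1, 1):
        ys, xs = np.nonzero(label_map == k)   # row index = y, column = x
        keep = rng.random(xs.size) < 0.5
        rows += [(int(x), int(y), k) for x, y in zip(xs[keep], ys[keep])]
    return rows
```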
(6) A fully connected neural network is constructed for classification and sea-sky-line fitting: a trainable fully connected network with 2 hidden layers is established, where the number of nodes in each hidden layer is 1.5 times the number of input nodes and the output nodes are the parameters a and b of the sea-sky-line equation. This integrates nonlinear fitting capability while reducing part of the computation. About 400 data groups are taken per image; after about 2000 training iterations the fully connected network reaches a stable convergence state, and the predicted values of the sea-sky-line equation parameters a and b give the position of the sea-sky line in the pixel coordinate system, as shown in fig. 3.
Wherein, the total loss of the fully connected neural network can be expressed as the sum of the expected errors over the output nodes:

$$E=\sum_{k}\left(d_k-o_k\right)^2$$

where $d_k$ is the expected value and $o_k$ the network output of the k-th output node; the sum of each expected error needs to be calculated.
The predicted sea-sky-line equation is compared with the actual data, and the included angle between the two sea-sky lines is used as the deviation index to obtain the accuracy of the model in different water surface environments; the images detected in real time are merged into the sea-sky-line detection data set for off-line correction. The whole detection process is shown in fig. 4.
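The included-angle deviation index between a predicted line y = a1·x + b1 and the ground-truth line depends only on the two slopes; a small sketch (the function name is illustrative):

```python
import math

def line_angle_deg(a_pred, a_true):
    """Included angle (degrees) between y = a_pred*x + b and the
    ground-truth line, from tan(theta) = |a1 - a2| / |1 + a1*a2|.
    The intercepts do not affect the angle."""
    denom = 1.0 + a_pred * a_true
    if denom == 0.0:
        return 90.0  # perpendicular lines
    return math.degrees(math.atan(abs((a_pred - a_true) / denom)))
```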
The specific implementation mode of the invention also comprises:
the sea-sky-line on-line detection method based on the full convolution neural network comprises the following steps:
(1) Image data are collected from the real boat's sensors, images under different unmanned boat viewing angles are selected, and the images containing the sea-sky line are screened out; the roughly 1000 screened water surface images are labeled and made into the sea-sky-line detection data set.
(2) A full convolution neural network structure is designed, a loss function is constructed from the mean square error of the pixel-point classification predictions, and the model is trained on the sea-sky-line detection data set until it converges; if it does not converge, an L2 regularization term (L2-Regularization) is added to the loss function or a batch regularization training method is adopted so that the model converges.
(3) The accuracy of the converged full convolution model is evaluated with the mean pixel accuracy (MPA) index; if the index exceeds the set threshold, the next step is taken, otherwise the procedure returns to step (2) for retraining until the requirement is met.
(4) The full convolution neural network model is fixed and used to classify the pixel points of the test image; the category information of each pixel is combined with its position in the image coordinate system to obtain the information tuple (category, x coordinate, y coordinate) of each point.
(5) The sea-sky-line equation y = ax + b in the pixel coordinate system and a shallow fully connected neural network are designed; the sea-sky-line equation parameters are fitted and corrected with the information tuples of the image pixel points, and the final sea-sky line is obtained.
The invention also includes such features:
1. A full convolution neural network is adopted to segment the image. By reducing manual pre-processing and post-processing, the full convolution neural network turns image segmentation into an end-to-end image processing problem, giving the model more parameter-tuning space and increasing its overall degree of integration. Full convolution neural networks are widely applied in image segmentation: they accept input images of any size and avoid the repeated storage and convolution computation caused by using pixel blocks, meeting the practical requirements of unmanned boat sea-sky-line detection in both precision and speed.
2. Considering that the classification result contains some wrongly classified interference points, a fully connected neural network with a certain nonlinear fitting capacity is used in the fitting stage, effectively suppressing the influence of the interference points on fitting precision. The fully connected network realizes the mapping from the input category and position information to the output sea-sky-line equation parameters; it can approximate nonlinear continuous functions with high precision, has strong nonlinear mapping capability, and is suitable for sea-sky-line fitting in the presence of interfering pixel points.

Claims (2)

1. A sea-sky-line online detection method based on a full convolution neural network is characterized by comprising the following steps:
S1: collecting image data from the real boat's sensors, selecting images under different unmanned boat viewing angles, and screening out the images containing the sea-sky line;
S2: designing a full convolution neural network structure, constructing a loss function from the mean square error of the pixel-point classification predictions, and training on the sea-sky-line detection data set until the model converges; if the model does not converge, adding an L2 regularization term to the loss function or adopting a batch regularization training method so that the model converges;
S3: evaluating the accuracy of the converged full convolution model with the mean pixel accuracy index, and executing S4 when the index is greater than a set value; when the index does not exceed the set value, returning to step S2;
S4: classifying the pixel points of the test image with the full convolution neural network model, and combining the category information of each pixel with its position in the image coordinate system to obtain the information tuple of each point: (category, x coordinate, y coordinate);
S5: designing the sea-sky-line equation y = ax + b in the pixel coordinate system together with a shallow fully connected neural network, fitting and correcting the sea-sky-line equation parameters with the information tuples of the image pixel points, and obtaining the final sea-sky line.
2. The sea-sky-line online detection method based on the full convolution neural network as claimed in claim 1, characterized in that: the root mean square error (RMSE) image evaluation index is used to judge whether each image containing the sea-sky line screened out in S1 conforms to the machine reading habit; the expression is:

$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i-\bar{X}\right)^2}$$

wherein $\bar{X}$ is the average, $x_i$ the i-th sample, and N the number of samples; if RMSE < 60%, the image is discarded and the training-set data are screened again; if RMSE > 60%, the image can be used further to produce the data set.
CN201910915844.1A 2019-09-26 2019-09-26 Sea-sky-line on-line detection method based on full convolution neural network Active CN110705623B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910915844.1A CN110705623B (en) 2019-09-26 2019-09-26 Sea-sky-line on-line detection method based on full convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910915844.1A CN110705623B (en) 2019-09-26 2019-09-26 Sea-sky-line on-line detection method based on full convolution neural network

Publications (2)

Publication Number Publication Date
CN110705623A CN110705623A (en) 2020-01-17
CN110705623B true CN110705623B (en) 2022-08-02

Family

ID=69196434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910915844.1A Active CN110705623B (en) 2019-09-26 2019-09-26 Sea-sky-line on-line detection method based on full convolution neural network

Country Status (1)

Country Link
CN (1) CN110705623B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102279973A (en) * 2010-06-11 2011-12-14 中国兵器工业第二○五研究所 Sea-sky-line detection method based on high gradient key points
CN108765439A (en) * 2018-05-29 2018-11-06 上海大学 A kind of sea horizon detection method based on unmanned water surface ship
CN109472200A (en) * 2018-09-29 2019-03-15 深圳市锦润防务科技有限公司 A kind of intelligent sea rubbish detection method, system and storage medium
CN109766830A (en) * 2019-01-09 2019-05-17 深圳市芯鹏智能信息有限公司 A kind of ship seakeeping system and method based on artificial intelligence image procossing
CN109886336A (en) * 2019-02-21 2019-06-14 山东超越数控电子股份有限公司 A kind of object detection method and system based on warship basic image
CN109919026A (en) * 2019-01-30 2019-06-21 华南理工大学 A kind of unmanned surface vehicle local paths planning method
CN110188696A (en) * 2019-05-31 2019-08-30 华南理工大学 A kind of water surface is unmanned to equip multi-source cognitive method and system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jeong et al. A novel approach for detecting the horizon using a convolutional neural network and multi-scale edge detection. Multidim Syst Sign Process. 2018, (30). *
Mei Shengyang et al. Ship target detection based on sea-sky-line guidance and convolutional neural networks. Computer and Digital Engineering. 2018, vol. 46 (04). *

Also Published As

Publication number Publication date
CN110705623A (en) 2020-01-17

Similar Documents

Publication Publication Date Title
CN113065558B (en) Lightweight small target detection method combined with attention mechanism
CN110232350B (en) Real-time water surface multi-moving-object detection and tracking method based on online learning
CN107680054B (en) Multi-source image fusion method in haze environment
CN111222574B (en) Ship and civil ship target detection and classification method based on multi-model decision-level fusion
CN108648161B (en) Binocular vision obstacle detection system and method of asymmetric kernel convolution neural network
CN111899172A (en) Vehicle target detection method oriented to remote sensing application scene
CN110796009A (en) Method and system for detecting marine vessel based on multi-scale convolution neural network model
CN108805906A (en) A kind of moving obstacle detection and localization method based on depth map
CN112598713A (en) Offshore submarine fish detection and tracking statistical method based on deep learning
CN107392929B (en) Intelligent target detection and size measurement method based on human eye vision model
CN114266977B (en) Multi-AUV underwater target identification method based on super-resolution selectable network
Liang et al. Horizon detection from electro-optical sensors under maritime environment
CN113799124B (en) Robot flexible grabbing detection method in unstructured environment
CN111145145B (en) Image surface defect detection method based on MobileNet
CN113888461A (en) Method, system and equipment for detecting defects of hardware parts based on deep learning
CN110991257A (en) Polarization SAR oil spill detection method based on feature fusion and SVM
CN111539422A (en) Flight target cooperative identification method based on fast RCNN
CN111199245A (en) Rape pest identification method
CN112396655A (en) Point cloud data-based ship target 6D pose estimation method
CN116486287A (en) Target detection method and system based on environment self-adaptive robot vision system
CN112053385B (en) Remote sensing video shielding target tracking method based on deep reinforcement learning
CN113933828A (en) Unmanned ship environment self-adaptive multi-scale target detection method and system
Li et al. Vision-based target detection and positioning approach for underwater robots
CN110826575A (en) Underwater target identification method based on machine learning
CN116777956A (en) Moving target screening method based on multi-scale track management

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant