CN110298271A - Seawater region detection method based on a keypoint detection network and a spatially constrained mixture model - Google Patents

Seawater region detection method based on a keypoint detection network and a spatially constrained mixture model

Info

Publication number
CN110298271A
Authority
CN
China
Prior art keywords
sea
sea horizon
image
horizon
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910519569.1A
Other languages
Chinese (zh)
Inventor
刘靖逸
李恒宇
沈斐玲
罗均
谢少荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN201910519569.1A priority Critical patent/CN110298271A/en
Publication of CN110298271A publication Critical patent/CN110298271A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a seawater region detection method based on a keypoint detection network and a spatially constrained mixture model, comprising the following steps: (1) acquire sea-surface image samples and construct sea horizon keypoint training samples; (2) train the keypoint detection network with the sea horizon keypoint training samples; (3) take the sea-surface image to be detected as input and predict the sea horizon keypoints with the trained keypoint detection network; (4) connect the sea horizon keypoints to obtain the corresponding sea horizon; (5) establish the spatially constrained mixture model and initialize its parameters and prior distributions with the sea horizon as a reference; (6) perform semantic segmentation of the sea-surface image with the expectation-maximization algorithm to detect the seawater region. The method of the invention can effectively detect the seawater region under complex backgrounds, and it is fast and robust.

Description

Seawater region detection method based on a keypoint detection network and a spatially constrained mixture model
Technical field
The present invention relates to the technical field of image processing, and in particular to a seawater region detection method based on a keypoint detection network and a spatially constrained mixture model.
Background technique
Seawater region detection provides an unmanned surface vehicle (USV) with a traversable region and thereby supports path planning for autonomous navigation; it is a key component of the vehicle's environment perception. With the rise of ocean development and the progress of unmanned technology, seawater region detection, an indispensable core technology for USVs, also needs a breakthrough. While under way, a USV's enhanced perception of the surrounding scene improves navigation safety. Studying seawater region detection therefore has great practical significance.
At present, the main technologies with which a USV perceives the surrounding scene are lidar and vision imaging systems in the visible and infrared bands. Because optical imagery carries rich detail about the target area, an intelligent system based on visible-light vision can more easily perceive a complex ocean scene and judge positions accurately; it also plays an important auxiliary role for humans in analysis, judgment, monitoring and reconnaissance. In recent years there has not been much research on seawater region detection based on visible-light vision. In 2016, Kristan et al. proposed a semantic segmentation method based on a probabilistic graphical model in "Fast Image-Based Obstacle Detection From Unmanned Surface Vehicles". The method describes the sky, the haze/coast mixed zone, the seawater region and potential obstacle regions (outlier regions) with three Gaussian distributions and one uniform distribution, and performs semantic segmentation of sea-surface images with the expectation-maximization (EM) algorithm. The method detects well and is fairly fast, but its parameter initialization is too simple: the image is simply divided vertically, in fixed proportions, into three regions taken as observations of the sky, the middle mixed zone and the seawater region, from which the corresponding initial Gaussian parameters are computed. When the image shakes strongly, the proportionally divided seawater region may contain many pixels of the middle mixed zone, the Gaussian parameters computed for the seawater class are then not accurate enough, and the semantic segmentation of the sea-surface image ends up with large errors. In 2017, to remedy this deficiency of the method of Kristan et al., Bovcon et al. proposed a semantic segmentation method combined with an inertial sensor in "Improving vision-based obstacle detection on USV using inertial sensor". The method first computes a rough sea horizon from the inertial data and uses it as a reference to initialize the Gaussian parameters of the sky, middle mixed zone and seawater classes. It improves on the performance of the method of Kristan et al., but when the USV is close to the shore, the sea horizon estimated from the inertial data may lie inside the middle mixed zone rather than at the boundary between the middle mixed zone and the seawater region, so the initial model parameters are still not accurate enough. Moreover, the method needs an inertial sensor, which increases the cost of the USV's visual perception system.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a seawater region detection method based on a keypoint detection network and a spatially constrained mixture model. The method can effectively detect the seawater region under complex backgrounds, and it is fast and robust.
In order to achieve the above objectives, the present invention adopts the following technical scheme:
A seawater region detection method based on a keypoint detection network and a spatially constrained mixture model, comprising the following steps:
(1) acquire sea-surface image samples and construct sea horizon keypoint training samples;
(2) train the keypoint detection network with the sea horizon keypoint training samples;
(3) take the sea-surface image to be detected as input and predict the sea horizon keypoints with the trained keypoint detection network;
(4) connect the sea horizon keypoints to obtain the corresponding sea horizon;
(5) establish the spatially constrained mixture model and initialize its parameters and prior distributions with the sea horizon as a reference;
(6) perform semantic segmentation of the sea-surface image with the expectation-maximization (EM) algorithm to detect the seawater region.
Further, in step (1), N sea-surface color images are acquired with the camera carried by the unmanned surface vehicle and annotated manually with an annotation tool, forming the sea horizon keypoint training set T = {t_1, t_2, ..., t_N}. Each training sample t_i consists of a sea-surface image I_i and the corresponding sea horizon keypoints O_i, i.e. t_i = {I_i, O_i}. The sea horizon keypoints O_i are annotated as follows: n equidistant vertical lines are drawn along the vertical direction of the image, and the points where these n vertical lines intersect the sea horizon are marked manually with the annotation tool, thereby obtaining the n sea horizon keypoint coordinates, i.e. O_i = {(r_j, c_j)}_{j=1,...,n}.
Further, in step (2), the keypoint detection network consists mainly of 3 consecutive 3×3 convolutional layers (block1), 3 consecutive factorized convolution modules (block2), 3 consecutive dilated convolution modules (block3), one 3×3 convolutional layer (S1), one 8×8 convolutional layer (S2) and one fully connected layer (S3). Both the factorized convolution module and the dilated convolution module are based on the residual module; the difference is that the factorized convolution module replaces the 3×3 convolutional layer of the residual module with a 3×1 convolutional layer and a 1×3 convolutional layer, while the dilated convolution module replaces the 3×3 convolution kernel of the residual module with a 3×3 dilated convolution kernel. A multiscale fusion scheme is also used: the output feature maps of the block3 module, the S1 layer and the S2 layer are fused and then fed into the fully connected layer. The input of the keypoint detection network is a sea-surface image and the output is the sea horizon keypoint coordinates.
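As an illustration, the two residual-based modules described above can be sketched in PyTorch as follows; the channel width, the normalization and the activation placement are assumptions, since they are not specified in the text.

```python
import torch
import torch.nn as nn

class FactorizedBlock(nn.Module):
    """Residual block whose 3x3 convolution is factorized into 3x1 and 1x3 (cf. Fig. 4)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0)),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1)),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # residual connection around the factorized convolution
        return self.relu(x + self.conv(x))

class DilatedBlock(nn.Module):
    """Residual block whose 3x3 convolution uses a dilated 3x3 kernel (cf. Fig. 5)."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.conv(x))
```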
Further, in step (5), the mixture model is assumed to consist of 3 Gaussian distributions and 1 uniform distribution, where the 3 Gaussian distributions describe the sky, the haze/coast mixed zone and the seawater region respectively, and the uniform distribution describes potential obstacle regions (outlier regions). The probability of the feature vector y_i of the i-th pixel in the image can then be expressed as:

p(y_i | θ, π_i) = Σ_{k=1}^{3} π_ik N(y_i | m_k, C_k) + π_i4 U(y_i)

In the above formula, N(· | m, C) denotes the Gaussian distribution function with mean m and covariance C, and U(·) = ε denotes the uniform distribution function (where ε is a very small positive hyperparameter); y_i denotes the feature vector of the i-th pixel in the image (also called the observed datum), composed mainly of the pixel's color features (c1, c2, c3) and coordinates (r, c); θ denotes the parameters of all Gaussian distributions in the model, i.e. θ = {m_k, C_k}_{k=1,2,3}; π denotes the class prior distributions of all pixels in the image, i.e. π = {π_i}_{i=1:M} (where M is the number of pixels in the image), and π_i denotes the class prior distribution of the i-th pixel, i.e. π_i = [π_i1, ..., π_ik, ..., π_i4], where π_ik = p(x_i = k) denotes the probability that the class x_i of the i-th pixel equals k (k = 1 denotes the sky class, k = 2 the middle coast/haze mixed class, k = 3 the seawater class and k = 4 the obstacle class); h denotes the sea horizon predicted by the keypoint detection network, and p(x_i = k | h) denotes the conditional prior distribution of the class of the i-th pixel when the sea horizon is known.
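For illustration, a minimal NumPy/SciPy sketch of this mixture density is given below. The feature layout [c1, c2, c3, r, c] follows the description above; the function names and the per-pixel evaluation are assumptions made for clarity, not part of the invention.

```python
import numpy as np
from scipy.stats import multivariate_normal

def pixel_likelihoods(Y, theta, eps=1e-15):
    """Per-class likelihoods p(y_i | k): 3 Gaussians plus the uniform (outlier) class.

    Y     : (M, 5) array of pixel features [c1, c2, c3, r, c]
    theta : list of 3 (mean, covariance) pairs for sky, mixed zone, sea water
    eps   : the small constant used for the uniform class U(.) = eps
    Returns an (M, 4) array whose column k holds p(y_i | k).
    """
    M = Y.shape[0]
    lik = np.empty((M, 4))
    for k, (m, C) in enumerate(theta):
        lik[:, k] = multivariate_normal.pdf(Y, mean=m, cov=C)
    lik[:, 3] = eps                       # uniform distribution for the obstacle class
    return lik

def mixture_density(Y, theta, pi, eps=1e-15):
    """p(y_i) = sum_k pi_ik * p(y_i | k), with pi an (M, 4) array of class priors."""
    return np.sum(pi * pixel_likelihoods(Y, theta, eps), axis=1)
```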
To overcome the adverse effect of local noise on the segmentation of the sea-surface image, a Markov random field (MRF) can be introduced to impose a spatial constraint on the mixture model, i.e. the class prior distributions π = {π_i}_{i=1:M} and the posterior distributions P = {p_i}_{i=1:M} of all pixels in the image are assumed to form an MRF with respect to a neighborhood system. Following Besag's method, the joint probability distribution of the prior distribution π can be approximated as:
In the above formula, N_i is the neighborhood of pixel i and π̄_i is the class prior distribution of the neighborhood N_i:

π̄_i = Σ_{j∈N_i} λ_ij π_j

where λ_ij is a fixed positive weight; the smaller the distance between pixel j and pixel i, the larger λ_ij, and Σ_j λ_ij = 1.
In addition, the potential-energy function in the MRF can be defined as:
In the above formula, D(π_i || π̄_i) is the KL-divergence term and H(π_i) is the entropy term.
The joint probability distribution of the posterior distributions P = {p_i}_{i=1:M} is:
where the posterior distribution p_i = {p_ik}_{k=1:4} of pixel i is computed as follows:
Combining formulas (1), (2), (4) and (5) gives the joint probability density function of the semantic segmentation model based on the Gaussian-uniform mixture distribution:
In the above formula, the neighborhood prior and posterior terms are coupled, so it is difficult to estimate the model parameters directly. To solve this problem, an auxiliary class prior distribution set s = {s_i}_{i=1:M} and an auxiliary posterior distribution set q = {q_i}_{i=1:M} can be introduced into the above formula, and taking the logarithm of both sides yields the penalized log-likelihood function of the spatially constrained mixture model:
In the above formula, ∘ denotes the Hadamard product operation; when s_i ≡ π_i and q_i ≡ p_i, the formula reduces to formula (9). In addition, according to the maximum a posteriori criterion, the above formula can be maximized with the EM algorithm, thereby optimizing the mixture model.
Initialization of the mixture model parameters:
In fact, when the sea horizon h is known, pixels in the region below the sea horizon h can only belong to the seawater or obstacle class; therefore, when pixel i lies in the region below the sea horizon h, the conditional prior probability p(x_i = k | h) is assumed to be:
And when pixel i lies in the region above the sea horizon h, the conditional prior probability p(x_i = k | h) is assumed to be:
In addition, the initial Gaussian parameters θ of the mixture model are also determined with reference to the position of the sea horizon h. Specifically, the initial Gaussian parameters {m_3, C_3} of the seawater class are obtained by computing the feature mean and covariance matrix of all pixels in the region below the sea horizon h; for the initial Gaussian parameters {m_k, C_k}_{k=1,2} of the sky class and of the middle coast/haze mixed class, the region above the sea horizon h is first divided into an upper and a lower region, and the pixel feature means and covariance matrices of these two regions are taken as the initial Gaussian parameters of the sky class and of the middle coast/haze mixed class respectively. The initial class prior distribution π_i is initialized by the formula:
Wherein, ε is a very small positive hyperparameter.
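A minimal sketch of this sea-horizon-based initialization is given below, assuming per-pixel features [c1, c2, c3, r, c]. Because the exact prior values of the omitted formulas are not reproduced in this text, the prior assignments below and above the line are assumptions and are marked as such in the comments.

```python
import numpy as np

def init_mixture_from_horizon(features, horizon_rows, eps=1e-15):
    """Initialize Gaussian parameters and class priors from the predicted sea horizon.

    features     : (H, W, 5) per-pixel features [c1, c2, c3, r, c]
    horizon_rows : (W,) row index of the sea horizon in every column
    Returns (theta, pi): theta holds (mean, cov) for sky, mixed zone, sea water;
    pi is the (H, W, 4) initial class prior.
    """
    H, W, F = features.shape
    rows = np.arange(H)[:, None]                        # (H, 1) row index grid
    below = rows > horizon_rows[None, :]                # region below the line
    upper = rows <= (horizon_rows[None, :] // 2)        # upper half above the line -> sky
    middle = (~below) & (~upper)                        # lower half above the line -> mixed zone

    def gauss(mask):
        X = features[mask]                              # (N, F) pixels of one region
        return X.mean(axis=0), np.cov(X, rowvar=False)

    theta = [gauss(upper), gauss(middle), gauss(below)]  # sky, mixed zone, sea water

    pi = np.full((H, W, 4), eps)
    pi[below, 2:] = 0.5        # below the line: sea water or obstacle only
    pi[~below, :] = 0.25       # above the line: assumed near-uniform (exact values not given here)
    pi /= pi.sum(axis=2, keepdims=True)
    return theta, pi
```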
Further, in step (6), the specific steps of the expectation-maximization (EM) algorithm are as follows:
E-step:
① Substitute θ, π and p(x_i = k | h) into formula (6) and compute the posterior distributions P = {p_i}_{i=1:M} of all pixels.
② Compute the auxiliary class prior distribution set s = {s_i}_{i=1:M} according to the following formula (12):
In the above formula, ∘ denotes the Hadamard product operation, * denotes the convolution operation, and the remaining factor is a normalization constant.
③ Compute the auxiliary posterior distribution set q = {q_i}_{i=1:M}; the calculation formula is as follows:
In the above formula, the remaining factor is a normalization constant.
M-step:
④ Update the conditional class prior distribution set; the calculation formula is:
⑤ Update the Gaussian distribution parameter set; the calculation formula is as follows:
⑥ Judge whether the EM algorithm has reached the iteration stopping criterion; if it has, stop iterating, otherwise continue with steps ① to ⑥. The stopping criterion is as follows:
The seawater region detection result is then obtained from the posterior distribution set q = {q_i}_{i=1:M} optimized by the EM algorithm, using formula (18).
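The following simplified sketch illustrates the overall E-step/M-step alternation described above. The box-filter smoothing of the posteriors and priors stands in for the MRF convolution of formulas (12) and (13), and the prior update is likewise an assumption, not the exact update of the invention.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import multivariate_normal

def em_segment(features, theta, pi, cond_prior, eps=1e-15, n_iter=20, tol=1e-3):
    """Simplified EM loop for the spatially constrained mixture model (a sketch).

    features   : (H, W, 5) pixel features; theta: list of 3 (mean, cov) pairs
    pi         : (H, W, 4) class priors; cond_prior: (H, W, 4) values of p(x_i = k | h)
    """
    H, W, F = features.shape
    Y = features.reshape(-1, F)
    q_prev = None
    for _ in range(n_iter):
        # E-step: class likelihoods (3 Gaussians + uniform outlier class)
        lik = np.empty((H * W, 4))
        for k, (m, C) in enumerate(theta):
            lik[:, k] = multivariate_normal.pdf(Y, mean=m, cov=C)
        lik[:, 3] = eps
        lik = lik.reshape(H, W, 4)

        # posterior responsibilities combining priors, horizon-conditioned priors and
        # likelihoods (assumed form of formula (6))
        post = pi * cond_prior * lik
        post /= post.sum(axis=2, keepdims=True) + eps

        # spatial smoothing of posteriors and priors (surrogate for the MRF convolution)
        q = uniform_filter(post, size=(5, 5, 1))
        q /= q.sum(axis=2, keepdims=True) + eps
        s = uniform_filter(pi, size=(5, 5, 1))
        s /= s.sum(axis=2, keepdims=True) + eps

        # M-step: update priors (assumed averaging rule) and Gaussian parameters
        pi = (q + s) / 2.0
        pi /= pi.sum(axis=2, keepdims=True)
        for k in range(3):
            w = q[..., k].reshape(-1)
            wsum = w.sum() + eps
            m = (w[:, None] * Y).sum(axis=0) / wsum
            d = Y - m
            C = (w[:, None] * d).T @ d / wsum
            theta[k] = (m, C)

        # stop when the posteriors no longer change noticeably
        if q_prev is not None and np.abs(q - q_prev).mean() < tol:
            break
        q_prev = q
    return q, theta, pi        # seawater region: q.argmax(axis=2) == 2
```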
Compared with the prior art, the present invention has the following beneficial effects:
First, the present invention predicts the sea horizon keypoints with the keypoint detection network and uses them as a reference to initialize the parameters of the spatially constrained mixture model, which largely prevents the mixture model from falling into a local optimum during EM optimization and thus improves the accuracy of seawater region detection. Second, the seawater region detection method designed by the present invention has a simple structure and runs fast, making it easy to deploy in practical engineering.
Detailed description of the invention
Fig. 1 is the flow chart of the method for the present invention.
Fig. 2 is the structure diagram of the keypoint detection network of the present invention.
Fig. 3 is the configuration diagram of the keypoint detection network of the present invention.
Fig. 4 is a schematic diagram of the factorized convolution module (Factorized Block) used in the keypoint detection network of the present invention.
Fig. 5 is a schematic diagram of the dilated convolution module (Dilated Block) used in the keypoint detection network of the present invention.
Fig. 6 is a schematic diagram of one embodiment of the method of the present invention, where (a) is the image to be detected in the embodiment, (b) is the sea horizon keypoint detection result, (c) is the semantic segmentation result of the embodiment, and (d) is the seawater region detection result.
Specific embodiment
To make the objects, technical solutions and advantages of the present invention clearer, the specific embodiments of the present invention are described in detail below with reference to the accompanying drawings. Unless otherwise stated, the methods or steps involved in the following embodiments are conventional methods or steps in this technical field, and those skilled in the art can make conventional choices or adaptive adjustments according to the concrete application scenario. The following embodiments are implemented with the Python programming language and the PyTorch framework.
As shown in Fig. 1, the seawater region detection method based on the keypoint detection network and the spatially constrained mixture model is implemented with the following steps:
(1) Acquire sea-surface image samples and construct the sea horizon keypoint training samples;
3000 sea-surface color images are acquired with the camera carried by the unmanned surface vehicle and annotated manually with an annotation tool, forming a sea horizon keypoint training set (2000 samples), a validation set (500 samples) and a test set (500 samples). Each sample consists of a sea-surface image I_i and the corresponding sea horizon keypoints O_i, i.e. t_i = {I_i, O_i}. The sea horizon keypoints O_i are annotated as follows: the acquired sea-surface image is scaled to a resolution of 512 × 512, 17 equidistant vertical lines are drawn along the vertical direction of the image, and the points where these 17 vertical lines intersect the sea horizon are marked manually with the annotation tool, which yields 17 sea horizon keypoint coordinates. To make the algorithm run in real time, the resolution of every image in the training samples is scaled down to 64 × 64, and the sea horizon keypoint coordinates are adjusted proportionally into the 64 × 64 range.
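A small sketch of this sample-preparation step is given below; the annotation file layout and the (row, col) keypoint convention are assumptions made for illustration.

```python
import numpy as np
import cv2

def prepare_sample(image_path, keypoints_512):
    """Rescale one training sample: image to 64x64 and the 17 sea horizon keypoints
    (annotated on the 512x512 image) in proportion.

    keypoints_512 : (17, 2) array of (row, col) coordinates on the 512x512 image.
    """
    img = cv2.imread(image_path)
    img = cv2.resize(img, (512, 512))        # annotation resolution
    img_small = cv2.resize(img, (64, 64))    # network input resolution
    kp_small = np.asarray(keypoints_512, dtype=np.float32) * (64.0 / 512.0)
    return img_small, kp_small
```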
(2) Train the keypoint detection network with the sea horizon keypoint training samples;
As shown in Figs. 2 and 3, the keypoint detection network consists mainly of 3 consecutive 3×3 convolutional layers (block1), 3 consecutive factorized convolution modules (block2), 3 consecutive dilated convolution modules (block3, with dilation factors 2, 3 and 5 respectively), one 3×3 convolutional layer (S1), one 8×8 convolutional layer (S2) and one fully connected layer (S3). In addition, a multiscale fusion scheme is used: the output feature maps of the block3 module, the S1 layer and the S2 layer are fused and then fed into the fully connected layer. Both the factorized convolution module and the dilated convolution module are based on the residual module; the difference is that the factorized convolution module replaces the 3×3 convolutional layer of the residual module with a 3×1 convolutional layer and a 1×3 convolutional layer, as shown in Fig. 4, while the dilated convolution module replaces the 3×3 convolution kernel of the residual module with a 3×3 dilated convolution kernel, as shown in Fig. 5. The input of the keypoint detection network is a sea-surface image and the output is the sea horizon keypoint coordinates. The loss function is the mean-square error (MSE) function.
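The network assembly and the MSE training setup can be sketched as follows, reusing the FactorizedBlock and DilatedBlock classes from the earlier sketch. The channel widths, strides, the concrete fusion operation and the one-coordinate-per-keypoint output are assumptions, since the text only states that the three feature maps are fused before the fully connected layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointNet(nn.Module):
    """Sketch of the keypoint detection network of Figs. 2-3 (assumed hyperparameters)."""
    def __init__(self, n_keypoints=17, ch=32):
        super().__init__()
        self.block1 = nn.Sequential(                      # 3 plain 3x3 convolutions
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.block2 = nn.Sequential(*[FactorizedBlock(ch) for _ in range(3)])
        self.block3 = nn.Sequential(*[DilatedBlock(ch, d) for d in (2, 3, 5)])
        self.s1 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)     # S1
        self.s2 = nn.Conv2d(ch, ch, kernel_size=8, stride=8)      # S2
        # S3: one row coordinate per keypoint (columns are fixed by the vertical lines)
        self.fc = nn.Linear(ch * 3, n_keypoints)

    def forward(self, x):                                 # x: (B, 3, 64, 64)
        f3 = self.block3(self.block2(self.block1(x)))
        f_s1 = self.s1(f3)
        f_s2 = self.s2(f3)
        # multiscale fusion: pool each branch globally and concatenate (assumed fusion)
        feats = [F.adaptive_avg_pool2d(f, 1).flatten(1) for f in (f3, f_s1, f_s2)]
        return self.fc(torch.cat(feats, dim=1))

# training skeleton with the mean-square-error loss mentioned above
model = KeypointNet()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```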
(3) Take the sea-surface image to be detected as input and predict the sea horizon keypoints with the trained keypoint detection network;
Fig. 6a shows the sea-surface image to be detected in this embodiment; the picture mainly contains the coast, an obstructing buoy, wave clutter, etc. Fig. 6b shows the prediction result of the keypoint detection network for this embodiment.
(4) Connect the sea horizon keypoints to obtain the corresponding sea horizon;
According to the coordinates of the sea horizon keypoints, the keypoints are connected end to end from left to right with straight line segments, thereby obtaining the corresponding sea horizon h.
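A minimal sketch of this step is the piecewise-linear interpolation below, assuming the keypoints are given as (row, col) pairs sorted by column.

```python
import numpy as np

def horizon_from_keypoints(keypoints, width):
    """Connect the predicted keypoints from left to right with straight segments and
    return the sea horizon row h for every image column.

    keypoints : (n, 2) array of (row, col) keypoints, assumed sorted by column.
    width     : image width in pixels.
    """
    rows, cols = keypoints[:, 0], keypoints[:, 1]
    all_cols = np.arange(width)
    h = np.interp(all_cols, cols, rows)    # piecewise-linear sea horizon
    return h
```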
(5) Establish the spatially constrained mixture model and initialize its parameters and prior distributions with the sea horizon as a reference;
The mixture model is assumed to consist of 3 Gaussian distributions and 1 uniform distribution, where the 3 Gaussian distributions describe the sky, the haze/coast mixed zone and the seawater region respectively, and the uniform distribution describes potential obstacle regions (outlier regions). In addition, a Markov random field (MRF) is introduced to impose a spatial constraint on the mixture model, i.e. the class prior distributions π = {π_i}_{i=1:M} and the posterior distributions P = {p_i}_{i=1:M} of all pixels in the image are assumed to form an MRF with respect to a neighborhood system. By derivation, the penalized log-likelihood function of the spatially constrained mixture model is finally obtained:
Initialization of the mixture model:
The position h of the sea horizon has been determined by steps (3) and (4), and pixels in the region below the sea horizon h can only belong to the seawater or obstacle class; therefore, when pixel i lies in the region below the sea horizon h, the conditional prior probability p(x_i = k | h) is assumed to be:
And when pixel i lies in the region above the sea horizon h, the conditional prior probability p(x_i = k | h) is assumed to be:
In addition, the initial Gaussian parameters θ of the mixture model are also determined with reference to the position of the sea horizon h. Specifically, the initial Gaussian parameters {m_3, C_3} of the seawater class are obtained by computing the feature mean and covariance matrix of all pixels in the region below the sea horizon h; for the initial Gaussian parameters {m_k, C_k}_{k=1,2} of the sky class and of the middle coast/haze mixed class, the region above the sea horizon h is first divided into an upper and a lower region, and the pixel feature means and covariance matrices of these two regions are taken as the initial Gaussian parameters of the sky class and of the middle coast/haze mixed class respectively. The initial class prior distribution π_i is initialized by the formula:
Wherein, ε is a very small positive hyperparameter; in this embodiment, ε = 1 × 10⁻¹⁵.
In this embodiment, the uniform distribution function U(·) = ε = 1 × 10⁻¹⁵.
(6) Perform semantic segmentation of the sea-surface image with the expectation-maximization (EM) algorithm to detect the seawater region.
After optimization of the spatially constrained mixture model with the EM algorithm, the posterior distribution set q = {q_i}_{i=1:M} is obtained, and the semantic segmentation result of the embodiment shown in Fig. 6c is obtained from it using formula (18). Fig. 6d shows the seawater region detection result of this embodiment.
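A minimal sketch of extracting the seawater region from the optimized posteriors (assigning each pixel to its most probable class and keeping the seawater class) is given below; the zero-based class index used here is an assumption that follows the k = 3 numbering above.

```python
import numpy as np

def seawater_mask(q, sea_class=2):
    """Pixel-wise labeling by the maximum of the optimized posterior q (H, W, 4);
    the seawater region is the set of pixels assigned to the sea-water class
    (index 2 here, corresponding to k = 3 in the 1-based class numbering)."""
    labels = q.argmax(axis=2)
    return labels == sea_class
```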

Claims (4)

1. A seawater region detection method based on a keypoint detection network and a spatially constrained mixture model, characterized by comprising the following steps:
(1) acquiring sea-surface image samples and constructing sea horizon keypoint training samples;
(2) training the keypoint detection network with the sea horizon keypoint training samples;
(3) taking the sea-surface image to be detected as input and predicting the sea horizon keypoints with the trained keypoint detection network;
(4) connecting the sea horizon keypoints to obtain the corresponding sea horizon;
(5) establishing the spatially constrained mixture model and initializing its parameters and prior distributions with the sea horizon as a reference;
(6) performing semantic segmentation of the sea-surface image with the expectation-maximization algorithm to detect the seawater region.
2. The seawater region detection method based on a keypoint detection network and a spatially constrained mixture model according to claim 1, characterized in that in step (1), N sea-surface color images are acquired with the camera carried by the unmanned surface vehicle and annotated manually with an annotation tool, forming the sea horizon keypoint training set T = {t_1, t_2, ..., t_N}; each training sample t_i consists of a sea-surface image I_i and the corresponding sea horizon keypoints O_i, i.e. t_i = {I_i, O_i}; the sea horizon keypoints O_i are annotated as follows: n equidistant vertical lines are drawn along the vertical direction of the image, and the points where these n vertical lines intersect the sea horizon are marked manually with the annotation tool, thereby obtaining the n sea horizon keypoint coordinates, i.e. O_i = {(r_j, c_j)}_{j=1,...,n}.
3. The seawater region detection method based on a keypoint detection network and a spatially constrained mixture model according to claim 1, characterized in that in step (2), the keypoint detection network consists mainly of 3 consecutive 3×3 convolutional layers block1, 3 consecutive factorized convolution modules block2, 3 consecutive dilated convolution modules block3, one 3×3 convolutional layer S1, one 8×8 convolutional layer S2 and one fully connected layer S3; both the factorized convolution module and the dilated convolution module are based on the residual module, the difference being that the factorized convolution module replaces the 3×3 convolutional layer of the residual module with a 3×1 convolutional layer and a 1×3 convolutional layer, while the dilated convolution module replaces the 3×3 convolution kernel of the residual module with a 3×3 dilated convolution kernel; a multiscale fusion scheme is also used, in which the output feature maps of the block3 module, the S1 layer and the S2 layer are fused and then fed into the fully connected layer; the input of the keypoint detection network is a sea-surface image and the output is the sea horizon keypoint coordinates.
4. The seawater region detection method based on a keypoint detection network and a spatially constrained mixture model according to claim 1, characterized in that in step (5), the mixture model is assumed to consist of 3 Gaussian distributions and 1 uniform distribution, where the 3 Gaussian distributions describe the sky, the haze/coast mixed zone and the seawater region respectively, and the uniform distribution describes potential obstacle regions, i.e. outlier regions; the probability of the feature vector y_i of the i-th pixel in the image is then expressed as:

p(y_i | θ, π_i) = Σ_{k=1}^{3} π_ik N(y_i | m_k, C_k) + π_i4 U(y_i)

In the above formula, N(· | m, C) denotes the Gaussian distribution function with mean m and covariance C, and U(·) = ε denotes the uniform distribution function, where ε is a very small positive hyperparameter; y_i denotes the feature vector of the i-th pixel in the image, also called the observed datum, composed mainly of the pixel's color features (c1, c2, c3) and coordinates (r, c); θ denotes the parameters of all Gaussian distributions in the model, i.e. θ = {m_k, C_k}_{k=1,2,3}; π denotes the class prior distributions of all pixels in the image, i.e. π = {π_i}_{i=1:M}, where M is the number of pixels in the image, and π_i denotes the class prior distribution of the i-th pixel, i.e. π_i = [π_i1, ..., π_ik, ..., π_i4], where π_ik = p(x_i = k) denotes the probability that the class x_i of the i-th pixel equals k, with k = 1 denoting the sky class, k = 2 the middle coast/haze mixed class, k = 3 the seawater class and k = 4 the obstacle class; h denotes the sea horizon predicted by the keypoint detection network, and p(x_i = k | h) denotes the conditional prior distribution of the class of the i-th pixel given the sea horizon, i.e. the probability that the class x_i of pixel i equals k when the sea horizon h is known;
To overcome the adverse effect of local noise on the segmentation of the sea-surface image, a Markov random field MRF is introduced to impose a spatial constraint on the mixture model, i.e. the class prior distributions π = {π_i}_{i=1:M} and the posterior distributions P = {p_i}_{i=1:M} of all pixels in the image are assumed to form an MRF with respect to a neighborhood system; by derivation, the penalized log-likelihood function of the spatially constrained mixture model is finally obtained:
In the above formula, ∘ denotes the Hadamard product operation, D(·||·) denotes the KL divergence, s = {s_i}_{i=1:M} is the auxiliary class prior distribution set, q = {q_i}_{i=1:M} is the auxiliary posterior distribution set, π̄_i is the class prior distribution of the neighborhood N_i of pixel i, and p̄_i is the posterior distribution of the neighborhood N_i of pixel i;
Initialization of the mixture model parameters:
When the sea horizon h is known, pixels in the region below the sea horizon h can only belong to the seawater or obstacle class; therefore, when pixel i lies in the region below the sea horizon h, the conditional prior probability p(x_i = k | h) is assumed to be:
And when pixel i lies in the region above the sea horizon h, the conditional prior probability p(x_i = k | h) is assumed to be:
In addition, the initial Gaussian parameters θ of the mixture model are also determined with reference to the position of the sea horizon h; specifically, the initial Gaussian parameters {m_3, C_3} of the seawater class are obtained by computing the feature mean and covariance matrix of all pixels in the region below the sea horizon h; for the initial Gaussian parameters {m_k, C_k}_{k=1,2} of the sky class and of the middle coast/haze mixed class, the region above the sea horizon h is first divided into an upper and a lower region, and the pixel feature means and covariance matrices of these two regions are taken as the initial Gaussian parameters of the sky class and of the middle coast/haze mixed class respectively; the initial class prior distribution π_i is initialized by the formula:
Wherein, ε is a very small positive hyperparameter.
CN201910519569.1A 2019-06-17 2019-06-17 Seawater region detection method based on a keypoint detection network and a spatially constrained mixture model Pending CN110298271A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910519569.1A CN110298271A (en) Seawater region detection method based on a keypoint detection network and a spatially constrained mixture model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910519569.1A CN110298271A (en) Seawater region detection method based on a keypoint detection network and a spatially constrained mixture model

Publications (1)

Publication Number Publication Date
CN110298271A true CN110298271A (en) 2019-10-01

Family

ID=68028079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910519569.1A Pending CN110298271A (en) Seawater region detection method based on a keypoint detection network and a spatially constrained mixture model

Country Status (1)

Country Link
CN (1) CN110298271A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583325A (en) * 2020-05-10 2020-08-25 上海大学 Image processing-based method for detecting sea waves by unmanned ship
CN112327265A (en) * 2020-10-23 2021-02-05 北京理工大学 Division and treatment detection method based on semantic segmentation network
CN112731521A (en) * 2019-10-14 2021-04-30 中国石油化工股份有限公司 Marine seismic wave simulation method and system based on Gaussian random distribution
CN113538425A (en) * 2021-09-16 2021-10-22 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Passable water area segmentation equipment, image segmentation model training and image segmentation method


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102279973A (en) * 2010-06-11 2011-12-14 中国兵器工业第二○五研究所 Sea-sky-line detection method based on high gradient key points
CN107808386A (en) * 2017-09-26 2018-03-16 上海大学 A kind of sea horizon detection method based on image, semantic segmentation
CN108764027A (en) * 2018-04-13 2018-11-06 上海大学 A kind of sea-surface target detection method calculated based on improved RBD conspicuousnesses
CN109376591A (en) * 2018-09-10 2019-02-22 武汉大学 The ship object detection method of deep learning feature and visual signature joint training
CN109886336A (en) * 2019-02-21 2019-06-14 山东超越数控电子股份有限公司 A kind of object detection method and system based on warship basic image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Matej Kristan et al.: "Fast Image-Based Obstacle Detection From Unmanned Surface Vehicles", IEEE Transactions on Cybernetics *
Tu Bing et al.: "Research on a skyline detection algorithm fusing multiple features and edge correction", Computer Engineering and Applications *
Hu Yaohui et al.: "Detection of weak and small ship targets based on the sea-sky line", Journal of Northwestern Polytechnical University *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112731521A (en) * 2019-10-14 2021-04-30 中国石油化工股份有限公司 Marine seismic wave simulation method and system based on Gaussian random distribution
CN111583325A (en) * 2020-05-10 2020-08-25 上海大学 Image processing-based method for detecting sea waves by unmanned ship
CN112327265A (en) * 2020-10-23 2021-02-05 北京理工大学 Division and treatment detection method based on semantic segmentation network
CN113538425A (en) * 2021-09-16 2021-10-22 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Passable water area segmentation equipment, image segmentation model training and image segmentation method
CN113538425B (en) * 2021-09-16 2021-12-24 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Passable water area segmentation equipment, image segmentation model training and image segmentation method

Similar Documents

Publication Publication Date Title
CN108596101B (en) Remote sensing image multi-target detection method based on convolutional neural network
CN107818326B (en) A kind of ship detection method and system based on scene multidimensional characteristic
CN109615611B (en) Inspection image-based insulator self-explosion defect detection method
CN110298271A (en) Seawater region detection method based on a keypoint detection network and a spatially constrained mixture model
Xia et al. A fast edge extraction method for mobile LiDAR point clouds
CN108805906A (en) A kind of moving obstacle detection and localization method based on depth map
CN109255317B (en) Aerial image difference detection method based on double networks
Schoenberg et al. Segmentation of dense range information in complex urban scenes
CN111429514A (en) Laser radar 3D real-time target detection method fusing multi-frame time sequence point clouds
CN102982555B (en) Guidance Tracking Method of IR Small Target based on self adaptation manifold particle filter
CN110287837A (en) Sea obstacle detection method based on prior estimate network and space constraint mixed model
CN106960442A (en) Based on the infrared night robot vision wide view-field three-D construction method of monocular
CN110009010A (en) Wide area optical remote sensing target detection method based on the re-detection of interest region
CN109919026A (en) A kind of unmanned surface vehicle local paths planning method
CN112017243A (en) Medium visibility identification method
Shi et al. Obstacle type recognition in visual images via dilated convolutional neural network for unmanned surface vehicles
Li et al. CSF-Net: Color spectrum fusion network for semantic labeling of airborne laser scanning point cloud
CN103688289A (en) Method and system for estimating a similarity between two binary images
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
CN115187959B (en) Method and system for landing flying vehicle in mountainous region based on binocular vision
CN113920254B (en) Monocular RGB (Red Green blue) -based indoor three-dimensional reconstruction method and system thereof
CN103236053B (en) A kind of MOF method of moving object detection under mobile platform
CN112069997B (en) Unmanned aerial vehicle autonomous landing target extraction method and device based on DenseHR-Net
CN104851090A (en) Image change detection method and image change detection device
CN114066795A (en) DF-SAS high-low frequency sonar image fine registration fusion method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20191001