CN106651886A - Cloud image segmentation method based on superpixel clustering optimization CNN - Google Patents
- Publication number
- CN106651886A CN106651886A CN201710000627.0A CN201710000627A CN106651886A CN 106651886 A CN106651886 A CN 106651886A CN 201710000627 A CN201710000627 A CN 201710000627A CN 106651886 A CN106651886 A CN 106651886A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
- G06T2207/30192—Weather; Meteorology
Abstract
The invention provides a cloud image segmentation method based on a CNN optimized by superpixel clustering (SP-CNN). Using the mean shift algorithm, the original pixel-level image is partitioned into region-level superpixels, within each of which the extractable information is effectively identical. For each region, a sub-image centred on a core point representative of the region as a whole is fed into a trained CNN to obtain the label of that core point; this label is taken as the label of the whole superpixel, and the results of the different regions are combined to yield the final segmentation. By introducing superpixels, the method guarantees the consistency of the pixels within each region, achieves a cloud-image segmentation accuracy of 99.55%, and greatly increases segmentation speed without sacrificing accuracy.
Description
Technical field
The present invention belongs to the field of image segmentation, and in particular to image feature extraction and recognition. Taking as its object cloud evolution maps of horizontal-vertical structure and high spatio-temporal resolution, drawn by pseudo-colour display from the echo intensity of millimetre waves emitted skyward by a radar, it proposes a fast cloud image segmentation method using a convolutional neural network optimized by superpixel clustering (SP-CNN).
Background technology
A cloud is a visible aggregate of vast numbers of tiny water droplets or ice crystals suspended in the atmosphere, and is a common weather phenomenon. Sunlight striking the Earth's surface produces water vapour; when the vapour becomes supersaturated, water molecules gather around dust particles, and the resulting droplets or ice crystals scatter sunlight in all directions, giving clouds their appearance. Clouds not only reflect the current motion, stability and moisture conditions of the atmosphere, but are also important signs of coming weather changes, so cloud observation supports flight safety, artificial precipitation operations, and so on. The present invention uses cloud evolution maps such as the one shown in Fig. 1. The horizontal axis of the image is the time axis: each large division represents 2 hours and contains ten small divisions, so each small division represents 12 minutes. The vertical axis is the altitude axis: each large division represents 3 km and contains ten small divisions, so each small division represents 300 m. The figure is drawn from the radar echo intensity reflected from millimetre waves (wavelength 8.6 mm, peak power 4 W) emitted skyward by a radar, rendered by pseudo-colour display with a specific colour lookup table into a cloud evolution map whose horizontal-vertical structure reflects the high spatio-temporal resolution of the cloud. The cloud radar is a Ka-band all-solid-state fully coherent quasi-continuous-wave Doppler radar developed jointly by the China Meteorological Administration Meteorological Observation Centre, Xi'an Huateng Microwave Co., Ltd. and Chengdu University of Information Technology; its basic principle is the scattering of electromagnetic waves by cloud particles. The radar echo intensity, expressed in dBZ, reflects the water content of the cloud layer and the probability and intensity of rainfall or snowfall; it is a scientific quantity for estimating rainfall and snowfall intensity and for predicting disastrous weather such as hail and strong winds. When the echo intensity reaches 40 dBZ or more, thunderstorms become likely; at 45 dBZ or more, severe convective weather such as heavy rain, hail and strong winds becomes likely, and monitoring and forecasting personnel must issue the corresponding forecasts and take precautionary measures.
High in the sky and at low altitude there also exist very weak radar echoes: moisture-bearing aerosols, water-bearing particulates similar to PM2.5, and low-altitude noise (the black-edged part shown in Fig. 1). These particulates interfere with the observation and prediction of clouds and prevent subsequent observation and forecasting from being supplied with sufficiently accurate data. Recognizing and retaining the "cloud" body while recognizing and rejecting the "non-cloud" part has therefore become a necessary preprocessing step in cloud observation.
Early cloud identification was performed by meteorologists through naked-eye observation and empirical judgement, but as cloud image data grow day by day, segmenting cloud images purely by eye and by hand has become increasingly difficult, and observers have turned to computer vision for preprocessing. In 2013, Fan Yawen et al. of the China Meteorological Administration Key Open Laboratory of Aerosols and Cloud Precipitation carried out a preliminary analysis of the macro- and micro-scale features of cloud types by combining vertical profiles of echo intensity. Also in 2013, Jin et al. of Ningbo University manually extracted multi-channel spectral features and TPLBP texture features for a number of cloud image sample points to obtain a first sub-region segmentation of the cloud image, then extracted grey-mean and DI features after a first clustering, proposing "a two-stage clustering segmentation method for satellite cloud images". Deep learning, with its brain-like hierarchical model that builds mappings from low-level signals to high-level semantic features of the input data, has attracted wide attention and achieved good performance in image classification, image recognition, image segmentation and related fields. In 2014, Huang Yongzhen et al. of the Institute of Automation, Chinese Academy of Sciences, proposed using convolutional neural networks to separate foreground from background: the person and the complex background are fed into the network as two labels for training, and the resulting classifier then classifies and segments every pixel of the whole image. In 2016, He Jiaoyu et al. of Beijing University of Technology addressed automatic cloud image segmentation with deep learning and proposed "a millimetre-wave radar cloud image segmentation method based on multi-resolution CNN": pictures with cloud-region and non-cloud-region features are trained at three different resolutions, from local to global; the classifier obtained from CNN training then classifies every pixel of the whole picture, and the picture is finally segmented according to the classification results. Because this method must classify every pixel of the whole picture one by one, its segmentation efficiency is very low.
Images are mostly represented pixel by pixel as two-dimensional matrices, ignoring the organization among pixels, which makes algorithms inefficient. In 2003, Ren and Malik proposed the concept of the superpixel: a small region composed of pixels that are adjacent in position and similar in colour, brightness and texture. These small regions mostly retain the information needed for further image segmentation, generally do not destroy the boundary information of objects in the image, and greatly reduce the complexity of subsequent image-processing tasks.
In summary, traditional cloud image segmentation algorithms require complex preprocessing, need manually extracted features, and segment by threshold decisions or similar means; these methods only work on small-scale cloud images, and their segmentation accuracy is not high enough. Extracting millimetre-wave radar cloud image features with a multi-resolution CNN (MR-CNN), in turn, is extremely inefficient. To address these problems, the present invention applies superpixel clustering optimization to deep-learning-based cloud image segmentation: superpixel clustering yields, for each homogeneous region of the millimetre-wave radar cloud image, the features of its key pixels; a convolutional neural network classifies the sub-images of those key pixels; and the recognition results of the key pixels are finally extended to the whole homogeneous superpixel region to perform the segmentation. The segmentation accuracy of the cloud image reaches the same level as a plain CNN, while the efficiency is greatly improved over CNN.

Summary of the invention
The object of the present invention is to provide a millimetre-wave radar cloud image segmentation method based on a convolutional neural network optimized by superpixel clustering (SP-CNN). The framework of the segmentation method is shown in Fig. 2, and its flow in Fig. 3.
The method first uses the mean shift algorithm: the shifted mean of the current point is computed and taken as the new starting point, and the point keeps moving until the threshold condition no longer matches the features of that class of points, so that all pixel regions with the same gradient features are clustered together; we call such a region a superpixel. In this way an image that was originally pixel-level is divided into an image composed of region-level superpixels, and the information extractable within each superpixel is effectively identical. For each region, the sub-image centred on a core point sufficiently representative of the region as a whole is fed into the network we trained with a CNN (the training method is shown in Fig. 2) to obtain the label of that core point, which then characterizes the label of the superpixel. Finally the results of the different regions are combined to obtain the optimal segmentation result; the overall flow is shown in Fig. 3.
The concrete steps of this cloud image segmentation method are introduced below:
1. Superpixel clustering and core point selection

This method aims at simplification: instead of processing every pixel in the image one by one as before, it processes only the key points that represent a region and lets them reflect the situation of the whole region. The idea of the superpixel meets exactly this need.
A superpixel image is simply an image that was originally pixel-level, divided into region level. In this method superpixels are realized with the mean shift algorithm (MeanShift). Mean shift is widely used in clustering, image smoothing, tracking and related tasks; it was proposed by Fukunaga in 1975 and refers to an iterative step: first compute the shifted mean of the current point, then take it as the new starting point and continue moving until a certain termination condition is met.
The algorithm covers two low-level vision tasks: segmentation smoothing and image segmentation. For every pixel of the image, mean shift performs the following operation: compute the mean of the pixels in the neighbourhood of the point, then move the pixel to the mean position of the grey values of all pixels in its neighbourhood, and repeat this over every pixel until the pixels share the same visual features, such as colour, texture and gradient. Strictly speaking, no pixel is really moved; rather, the pixel and the pixel at its convergence position are labelled as the same class. With this algorithm an image can be divided into a number of different superpixels, as shown in Fig. 4.
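The mean shift iteration described above can be sketched as follows. This is a minimal illustration with a flat kernel over point coordinates, not the implementation of the invention; the function names, the bandwidth and the mode-merging distance are illustrative assumptions.

```python
import numpy as np

def mean_shift_modes(points, bandwidth=1.0, tol=1e-3, max_iter=50):
    """Shift every point towards the mode of its local density.

    Each iteration replaces a point by the mean of all input points
    within `bandwidth` of it (a flat kernel), until the largest shift
    is smaller than `tol`. Points converging to the same mode belong
    to the same cluster / superpixel."""
    points = np.asarray(points, dtype=float)
    shifted = points.copy()
    for _ in range(max_iter):
        moved = 0.0
        for i, p in enumerate(shifted):
            # mean of the original points inside the bandwidth window
            mask = np.linalg.norm(points - p, axis=1) < bandwidth
            new_p = points[mask].mean(axis=0)
            moved = max(moved, float(np.linalg.norm(new_p - p)))
            shifted[i] = new_p
        if moved < tol:
            break
    return shifted

def label_modes(modes, merge_dist=0.5):
    """Give the same label to points whose modes nearly coincide."""
    labels = -np.ones(len(modes), dtype=int)
    centers = []
    for i, m in enumerate(modes):
        for k, c in enumerate(centers):
            if np.linalg.norm(m - c) < merge_dist:
                labels[i] = k
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels
```

In the invention the feature vector would also carry colour/gradient components per pixel, so that regions with identical gradient features collapse into one superpixel; the sketch shows only the mode-seeking mechanism.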
For each superpixel, this method vectorizes its pixels, taking the coordinate value of each pixel as an element of a vector, and samples 5 pixels from this vector at equal intervals as the key points of the superpixel.
During sampling, since the superpixels on either side of a boundary may belong to different classes, this method applies an erosion treatment to the superpixel. Its principle is: taking each pixel of the superpixel as a centre, compare it one by one with the pixels of its 4-neighbourhood; if all 5 pixels belong to this superpixel, the centre point is retained, otherwise the point is regarded as superpixel boundary and removed. The purpose is to shrink the range of the equal-interval sampling so that it avoids, as far as possible, falling on the edge of the superpixel, thereby preventing boundary effects from disturbing the recognition of the superpixel.

Through superpixel clustering we convert what were originally pixels on the order of 100,000 into a few thousand superpixels, laying the foundation for the subsequent method.
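The erosion-then-sample step can be sketched as below, assuming the superpixels are given as an integer label map; the function name, the fallback for tiny superpixels, and the scan-order vectorization are illustrative assumptions.

```python
import numpy as np

def erode_and_sample(label_map, sp_id, n_keys=5):
    """Pick key points of one superpixel as described in the text.

    A pixel survives erosion only if it and its four 4-neighbours all
    carry the superpixel's label; the survivors are vectorised in scan
    order and sampled at equal intervals."""
    h, w = label_map.shape
    interior = []
    for y in range(h):
        for x in range(w):
            if label_map[y, x] != sp_id:
                continue
            # centre plus 4-neighbourhood must all lie in the superpixel
            nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if all(0 <= ny < h and 0 <= nx < w and label_map[ny, nx] == sp_id
                   for ny, nx in nbrs):
                interior.append((y, x))
    if not interior:  # tiny superpixel: fall back to all of its pixels
        interior = [(y, x) for y in range(h) for x in range(w)
                    if label_map[y, x] == sp_id]
    idx = np.linspace(0, len(interior) - 1, n_keys).round().astype(int)
    return [interior[i] for i in idx]
```

The erosion keeps the 5 sampled key points away from the superpixel edge, as the text requires.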
2. Construction of the training, validation and test sample sets

The data of the present invention consist of three parts: a training sample set, a validation sample set and a test sample set. The three are produced on the same principle and differ only in the data ranges chosen, so only one of the acquisition procedures is described in detail:
For millimetre-wave radar cloud evolution maps there is no public data set in the field of cloud image processing, so we need to make groundtruth images ourselves to serve as the supervisory signal when training the CNN. The concrete preprocessing operations are as follows:

(1) Groundtruth generation. As shown in Fig. 5, (a) is an original cloud map and (c) is the groundtruth corresponding to the cloud map, obtained by manually marking the "cloud" and "non-cloud" regions in the cloud image and distinguishing them in black and white with picture-editing software such as Photoshop. We randomly select several cloud images from the cloud image set, together with their corresponding groundtruth images, for the CNN network training of the next step and the generation of the test sample set.
(2) picture size adjustment.In order to ensure when training sample set, checking sample set and test sample collection are sampled, energy
Enough each pixels for completely gathering whole image, we are adjusted first to the size of cloud evolution, as size
Cloud atlas picture for W*H increased the background image border of D/2 pixel, and now image is changed into (W+D) * (H+D).
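A minimal sketch of this padding step, using NumPy and the embodiment's numbers (719*490 images, D = 56, white background); the function name is illustrative.

```python
import numpy as np

def pad_for_patches(img, D=56):
    """Pad an H x W x 3 image with a D/2-pixel border on every side so
    that a D x D patch can be cut around every original pixel. The
    embodiment pads with white background (value 255)."""
    r = D // 2
    return np.pad(img, ((r, r), (r, r), (0, 0)),
                  mode="constant", constant_values=255)
```

For a 719*490 cloud image and D = 56, the padded image is 775*546, matching (W+D)*(H+D).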
(3) Collection and generation of the sample sets. The training and validation sample sets are produced from a small number of W*H cloud images with groundtruth; the test sample set is produced from a larger number. Since the acquisition of the three sample sets is almost identical, it is not repeated here. The concrete operations are as follows:
a. Cut out a number of images C1 centred on pixels p, for training and validating the CNN. Taking a certain pixel p in a W*H cloud image as centre, cut out an image C1 of size D*D with side length D. C1 is the sub-image centred on pixel p and contains the pixel features around that pixel.
b. For each pixel p in a, we can find the corresponding pixel p' in its groundtruth image. According to the label attribute of the pixel in the groundtruth image, a training label text file is made in the form of a list, each line of the form "absolute path/image name  label attribute", where the label attribute of each pixel, "cloud" or "non-cloud", is represented by 1 or 0. For all images of the training set, we keep the label text file as the supervisory signal when training the CNN; for all images of the validation set, the validation samples are passed through the network model generated from the training set to obtain its judgements, and the label text file is used to check the accuracy of our network model; and for the test set we need not generate labels, since we compare its groundtruth images with the segmentation result images to evaluate the network both subjectively and objectively. Note that, to verify the accuracy of the network objectively, the three sample sets should be disjoint.
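The label text file described above can be sketched as follows; the file name and example paths are hypothetical, and only the "path label" line format is taken from the text.

```python
def write_label_file(entries, path):
    """Write one 'image_path label' line per training patch, with the
    label attribute 1 for 'cloud' and 0 for 'non-cloud', read off the
    groundtruth pixel p' as described in the text."""
    with open(path, "w") as f:
        for img_path, is_cloud in entries:
            f.write(f"{img_path} {1 if is_cloud else 0}\n")
```

Such a list file is the common input format for CNN training frameworks that take image paths with class labels.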
c. One point needs explaining here. In building the training and validation sets, the usual approach is equiprobable random sampling; the present invention instead adopts a content-guided sampling method. As can be seen from Fig. 1, the image contains extended background regions that hold no "cloud" or "non-cloud" information; their image features are too few to contribute to the specificity, diversity or flexibility of our training and test sets, so the training-set sampling density there is low. The "cloud" and "non-cloud" regions, by contrast, contain a large amount of effective information, and we sample them at high density. For the test set, we sample all pixels.
3. Training of the convolutional neural network model

The convolutional neural network structure adopted by the present invention, shown in Fig. 2, is obtained by fine-tuning an AlexNet network pretrained on the images of the ImageNet database. ImageNet is an image database of the order of millions of images; owing to limited data and workload, it is hard for us to build a database of the same order and retrain a network from scratch, and the selection of parameters, the curation of data and the adjustment of the network structure are likewise hard to complete in a short time. Fine-tuning a pretrained network is therefore the more satisfactory choice.
The network consists of 5 convolutional layers and 3 fully connected layers, with pooling layers added only after convolutional layers C1, C2 and C5. F1 to F3 are fully connected layers, equivalent to a three-layer fully connected neural network classifier on top of the five convolutional layers. Note that we change the number of neurons of F3 from the 1000 of AlexNet to 2, in order to realize the 2-class classification "cloud" versus "non-cloud". The concrete fine-tuning process is as follows:

First, whatever the size of the input picture, it is rescaled to 227*227 for convenience of training. The input here is the image of size D*D, fed into the network in the three colour dimensions red, green and blue, so the data volume is 227*227*3.
As shown in Fig. 2 C1 to C5 is convolutional layer, by taking convolutional layer C1 as an example, the size of its convolution filter is 11*11,
Convolution stride is 4, the layer totally 96 convolution filter, therefore output is the picture of 96 55*55 sizes.After C1 convolutional filterings,
Add linearity rectification function ReLU to accelerate convergence, prevent its excessive concussion.Core size is 3, and step-length is 2 maximum pond sample level
Introducing so that by convolution obtain feature there is space-invariance, the invariable rotary shape of feature is solved, while to convolution
Feature carries out dimensionality reduction, greatly reduces amount of calculation, obtains the image of 96 27*27 sizes.
In the same manner, by the size of convolution kernel be 5, be filled to 2, convolution stride is 1, has the volume of 256 convolution filters
Lamination C2, obtains the image of 256 27*27 sizes, image of the dimensionality reduction to 13*13 after maximum pond sample level.By convolution
Core size is 3, be filled to 1, convolution stride is 1, has the convolutional layer C3 of 384 wave filters, obtains 384 27*27 sizes
Image.The image of 384 13*13 sizes is obtained by convolutional layer C4.The figure of 256 6*6 sizes is then obtained by convolutional layer C5
Picture.
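The feature-map sizes quoted above follow from the standard convolution/pooling size formula; the following sketch traces them (the helper name is illustrative, the kernel/stride/padding values are those stated in the text).

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Trace the AlexNet-style dimensions quoted in the text:
c1 = conv_out(227, 11, stride=4)        # C1: 55
p1 = conv_out(c1, 3, stride=2)          # max pool after C1: 27
c2 = conv_out(p1, 5, stride=1, pad=2)   # C2: 27
p2 = conv_out(c2, 3, stride=2)          # max pool after C2: 13
c3 = conv_out(p2, 3, stride=1, pad=1)   # C3: 13 (C4 and C5 also keep 13)
p5 = conv_out(c3, 3, stride=2)          # max pool after C5: 6
```

Each value matches the feature-map sizes (55, 27, 27, 13, 13, 6) given in the description.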
As shown in Fig. 2 full articulamentum F1 to F3, is the full connection along with three layers on the basis of five layers of convolutional layer
Layer neural network classifier.Full articulamentum is made up of linear segment and non-linear partial two parts:Linear segment is to being input into number
According to the analysis for doing different angles, draw under the angle to the judgement of overall input data;The effect of non-linear partial is exactly to break
Linear mapping relation before, makees the normalization of data, no matter what kind of work linear segment above has done, has arrived non-linear
Here, all of numerical value will be limited within the scope of one, if the Internet behind so will be based on front layer data after
Continuous to calculate, this numerical value is just relatively controllable.This two parts is combined, its objective is that just huge and mixed and disorderly data are entered
Row dimensionality reduction, finally obtains an effective range that can reach segmentation purpose.Full articulamentum F1 and full articulamentum F2 enters to data
Row linear change and nonlinear change, the dimensionality reduction that 6*6*256 is tieed up to 4096.Finally, full articulamentum F3 by Data Dimensionality Reduction into 2
" cloud " and " non-cloud " two class in dimension, that is, the present invention.Using two classification, we realize the segmentation of cloud evolution.
4. Region-content-guided image segmentation

The present invention converts image segmentation into the recognition of regions of the image that share edge and texture features, instead of the pixel-by-pixel recognition and classification of the whole picture performed by traditional convolutional neural networks. The invention chooses 5 representative key points of a region, runs image recognition on each of the 5, and weights the five results to obtain the final segmentation result of the superpixel region; this is applied to all superpixel regions of the whole image, and the image is segmented according to the classification results. For a given superpixel, its classification result satisfies the following formula:
R = r1*ω1 + r2*ω2 + r3*ω3 + r4*ω4 + r5*ω5    (1)
where R denotes the recognition result of the superpixel, r1, r2, r3, r4 and r5 are the recognition results of the sub-images centred on the five points, and ω1, ω2, ω3, ω4 and ω5 are the corresponding weights.
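Formula (1) can be sketched as a weighted vote over the five key-point CNN outputs. The text does not fix the weights ωi or the decision threshold, so the uniform weights and the 0.5 threshold below are assumptions for illustration.

```python
def fuse_keypoint_results(results, weights=None, threshold=0.5):
    """Weighted fusion R = sum(r_i * w_i) of the key-point CNN outputs
    (formula (1)); the superpixel is labelled 'cloud' (1) when R meets
    the threshold, else 'non-cloud' (0). Uniform weights by default."""
    if weights is None:
        weights = [1.0 / len(results)] * len(results)
    R = sum(r * w for r, w in zip(results, weights))
    return 1 if R >= threshold else 0
```

For example, if three of the five key points are classified "cloud", the uniform-weight fusion labels the whole superpixel "cloud".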
With this method, the recognition of a superpixel region that originally contains hundreds or thousands of pixels is converted into the recognition of 5 of its points, greatly improving the running efficiency of the algorithm. As shown in Fig. 5, (a) is the original cloud map, (b) the result of superpixel clustering of (a), (c) the groundtruth corresponding to (a), (d) the segmentation result of the threshold method, (e) the segmentation result of MR-CNN, and (f) the segmentation result of (a) by this method.
5. Segmentation evaluation

Segmentation evaluation methods can generally be divided into two classes: direct analysis and indirect experiment. Direct analysis examines the principle, properties, features and performance of the algorithm itself, studying mainly the algorithm used for image segmentation; its drawback is that it ignores the environment in which the algorithm is applied, so its evaluation is inflexible across different fields of application. Indirect experiment starts from the quality of the output segmentation map, or from the difference between the output map and a segmentation reference map, and induces the performance of the segmentation algorithm from it.
Since convolutional neural networks reach very high precision in image segmentation, the present invention, besides the pixel-error evaluation method, also introduces the program running time as a criterion for weighing segmentation results.

The pixel error reflects the pixel similarity between the segmented picture and the original labels; it is computed as the Hamming distance, over every pixel, between the segmentation labels L to be measured and the true data labels L':
E_pixel = ||L - L'||²    (2)
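A minimal sketch of this pixel-error computation; the normalization by the pixel count (so that accuracy = 1 - error) is an assumption for illustration, since equation (2) gives only the unnormalized distance.

```python
import numpy as np

def pixel_error(L, L_true):
    """Fraction of pixels whose predicted label differs from the
    groundtruth label, i.e. the per-pixel Hamming distance of eq. (2)
    normalised by the number of pixels."""
    L, L_true = np.asarray(L), np.asarray(L_true)
    return float(np.mean(L != L_true))
```

For binary labels the squared norm of the difference and the Hamming distance coincide, since each mismatched pixel contributes exactly 1.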
Following this method, the present invention was tested on 150 cloud evolution maps; the final segmentation accuracy of the cloud images reaches 99.55%, a further improvement over the 99.36% and 96.7% obtained by segmenting the same data set with MR-CNN and with the threshold method respectively.

Meanwhile, in processing speed, SP-CNN needs on average only 33.12 seconds to segment one image, whereas MR-CNN, whose precision is close, needs 17,615 seconds, which proves that this method is both effective and fast.
In summary, the advantages of the method are embodied in the following three points:

1) Cloud image segmentation is a binary classification that distinguishes the boundary between "cloud" and "non-cloud"; the processing object of the CNN optimized by superpixel clustering is the superpixel that represents a whole class of pixels.

2) The introduction of superpixels guarantees the consistency of the pixels and avoids ambiguous situations.

3) The proposed CNN optimized by superpixel clustering brings the segmentation accuracy of the cloud image to 99.55%, and greatly improves the segmentation speed while maintaining segmentation precision.
Description of the drawings
Fig. 1 is the cloud map used as an example in the present invention;
Fig. 2 is the segmentation framework designed by the present invention;
Fig. 3 is the algorithm flow of the present invention;
Fig. 4 is the superpixel map obtained by superpixel clustering;
Fig. 5 compares the results of the present invention with other segmentation methods:
(a) original image, (b) superpixel clustering result,
(c) label reference map, (d) threshold-method segmentation result,
(e) CNN segmentation result, (f) segmentation result of this method;
Fig. 6 compares different key-point selection methods in pixel error and processing time;
Fig. 7 compares different segmentation methods in pixel error and processing time.
Specific embodiment
The present invention combines the superpixel principle with convolutional neural networks to provide a cloud image segmentation method using a convolutional neural network optimized by superpixel clustering (SP-CNN). The implementation steps of the invention are as follows:

1. Picture preprocessing

The data of the present invention consist of three parts: a training sample set, a validation sample set and a test sample set. The training and validation sample sets both require labels obtained from manually marked groundtruth images: the training-set labels serve as the supervisory signal when training the CNN, and the validation-set labels serve to check the accuracy of the network. The labels of the training and validation sets are generated as follows:
(1) Groundtruth generation. As shown in Fig. 5, (a) is an original cloud map and (c) is the groundtruth corresponding to the cloud map, obtained by manually marking the "cloud" and "non-cloud" regions in the cloud image and distinguishing them in black and white with picture-editing software such as Photoshop. The present invention has 120 cloud evolution maps and their corresponding groundtruth images; each cloud map and groundtruth image is of size 719*490. Because the problem to be solved is segmentation in the presence of severe mixing of "cloud" and "non-cloud", and in order that the trained model has sufficient specificity, diversity and flexibility, we selected the 20 images in which the "cloud" and "non-cloud" regions are mixed most severely, together with their corresponding groundtruth images, for generating the training and validation sample sets of the CNN training of the next step.
(2) Image size adjustment. To ensure that every pixel of the whole image can be sampled when building the training, validation and test sample sets, the size of each cloud image is first adjusted: a 28-pixel white background border is added on each of the four sides of the original 719*490 cloud image, so that the image becomes 775*546. The ground-truth images are adjusted in the same way.
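For illustration, the border-padding step above can be sketched as follows (a minimal sketch using NumPy; the function name `pad_cloud_image` is our own and not part of the disclosure):

```python
import numpy as np

def pad_cloud_image(img, d=56):
    """Add d/2 pixels of white background on every side, so that a d*d patch
    centred on any original pixel stays inside the padded image."""
    half = d // 2
    return np.pad(img, ((half, half), (half, half), (0, 0)),
                  mode="constant", constant_values=255)

# a 719*490 cloud image (height 490, width 719) becomes 775*546 after padding
cloud = np.zeros((490, 719, 3), dtype=np.uint8)
padded = pad_cloud_image(cloud)   # shape (546, 775, 3)
```

With a patch side of 56, padding 28 pixels per side reproduces exactly the 775*546 size stated in the text.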
(3) Sample collection and label generation. Of the 120 cloud images of size 775*546 and their corresponding ground-truth images, we select 15 for training, 5 for validating the quality of the trained network, and the remaining 100 for the comparative experiments that test the actual segmentation algorithm. The concrete operations are as follows:
a. From the 775*546 cloud images, 350 sub-images C1 of size 56*56 are cropped, each centred on a pixel p with 56 as the side length; each sub-image contains the pixel features around its centre pixel, and these images are used to train the CNN. Likewise, 150 sub-images are cropped for validating the CNN. Note that, when constructing the training and validation sets, the present invention does not use the usual equiprobable random sampling but a content-guided sampling method: background regions that contain no "cloud" or "non-cloud" information carry too few image features to contribute to the specificity, diversity and flexibility of the training and validation sets, so their sampling density is kept low, whereas regions where "cloud" or "non-cloud" is concentrated contain a large amount of useful information and are sampled at high density.
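The 56*56 patch extraction centred on a chosen pixel can be sketched as follows (illustrative only; coordinates are given in the unpadded image and shifted by d/2 into the padded one):

```python
import numpy as np

def crop_patch(padded, row, col, d=56):
    """Cut the d*d sub-image C1 centred on original pixel (row, col);
    the border padding guarantees the slice stays inside the image."""
    half = d // 2
    r, c = row + half, col + half   # shift into padded coordinates
    return padded[r - half:r + half, c - half:c + half]

padded = np.zeros((546, 775, 3), dtype=np.uint8)  # a padded 719*490 image
corner = crop_patch(padded, 0, 0)                 # even a corner pixel yields a full patch
</n```

Because of the padding, every one of the original 719*490 pixel positions yields a complete 56*56 patch.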
b. For each pixel p of step a, the corresponding pixel p' is found in the corresponding ground-truth image. According to the label attribute of that pixel in the ground truth, a training label text file is produced as a list whose lines have the form "absolute path/image name label", where the label attribute of each pixel, "cloud" or "non-cloud", is represented by 1 or 0. For the images of the training set, the label text file is retained as the supervisory signal when training the CNN; for the images of the validation set, the network model generated from the training set produces a predicted result for each validation sample, and the label text file is used to check the accuracy of the network model; for the test set, no labels are generated, since its ground-truth images are compared with the segmentation result images to evaluate the network both subjectively and objectively. Note that, in order to verify the accuracy of the network objectively, the three sample sets must be disjoint.
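The label text file of step b can be produced as in the following sketch (the helper name and the exact line layout are assumptions based only on the stated format "absolute path/image name label"):

```python
def write_label_file(entries, path):
    """Write one 'absolute-path/image-name label' line per patch;
    label 1 stands for "cloud", 0 for "non-cloud"."""
    with open(path, "w") as f:
        for image_path, label in entries:
            f.write(f"{image_path} {label}\n")

# hypothetical patch paths, for illustration only
write_label_file([("/data/train/patch_0001.png", 1),
                  ("/data/train/patch_0002.png", 0)], "train_labels.txt")
```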
2. Training of the convolutional neural network model
The convolutional neural network structure adopted by the present invention, shown in Fig. 2, is obtained by fine-tuning an AlexNet network pre-trained on the images of the ImageNet database. ImageNet is an image database containing millions of images; owing to limited data and workload, it is impractical to build a database of the same order of magnitude and retrain a network from scratch, and the selection of parameters, the curation of data and the adjustment of the network structure are likewise difficult to accomplish in a short time. Fine-tuning a pre-trained network is therefore an ideal choice.
The network consists of 5 convolutional layers and 3 fully connected layers, with pooling layers added only after convolutional layers C1, C2 and C5. F1 to F3 are fully connected layers, equivalent to a three-layer fully connected neural network classifier on top of the five convolutional layers. Note that we adjust the number of neurons in F3 from AlexNet's 1000 down to 2, in order to realize the two-class "cloud"/"non-cloud" classification. The fine-tuning process is as follows:
First, whatever the size of the input picture, for convenience of training it is resized and rescaled to 227*227. The input here is the D*D sub-image described above, fed into the network with its red, green and blue colour channels, so the input volume is 227*227*3.
As shown in Fig. 2 C1 to C5 is convolutional layer, by taking convolutional layer C1 as an example, the size of its convolution filter is 11*11,
Convolution stride is 4, the layer totally 96 convolution filter, therefore output is the picture of 96 55*55 sizes.After C1 convolutional filterings,
Add linearity rectification function ReLU to accelerate convergence, prevent its excessive concussion.Core size is 3, and step-length is 2 maximum pond sample level
Introducing so that by convolution obtain feature there is space-invariance, the invariable rotary shape of feature is solved, while to convolution
Feature carries out dimensionality reduction, greatly reduces amount of calculation, obtains the image of 96 27*27 sizes.
Similarly, convolutional layer C2, with kernel size 5, padding 2, stride 1 and 256 convolution filters, yields 256 feature maps of size 27*27, reduced to 13*13 by a max-pooling layer. Convolutional layer C3, with kernel size 3, padding 1, stride 1 and 384 filters, yields 384 feature maps of size 13*13. Convolutional layer C4 yields 384 feature maps of size 13*13, and convolutional layer C5, followed by its pooling layer, finally yields 256 feature maps of size 6*6.
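The feature-map sizes quoted above follow from the standard convolution/pooling output-size formula; the following sketch reproduces the chain for the AlexNet-style layers with the parameters stated in the text:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

sizes = []
s = 227
s = conv_out(s, 11, stride=4); sizes.append(s)  # C1:    96 maps, 55*55
s = conv_out(s, 3, stride=2);  sizes.append(s)  # pool1: 27*27
s = conv_out(s, 5, pad=2);     sizes.append(s)  # C2:    256 maps, 27*27
s = conv_out(s, 3, stride=2);  sizes.append(s)  # pool2: 13*13
s = conv_out(s, 3, pad=1);     sizes.append(s)  # C3:    384 maps, 13*13
s = conv_out(s, 3, pad=1);     sizes.append(s)  # C4:    384 maps, 13*13
s = conv_out(s, 3, pad=1);     sizes.append(s)  # C5:    256 maps, 13*13
s = conv_out(s, 3, stride=2);  sizes.append(s)  # pool5: 6*6 -> 6*6*256 into F1
```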
As shown in Fig. 2 full articulamentum F1 to F3, is the full connection along with three layers on the basis of five layers of convolutional layer
Layer neural network classifier.Full articulamentum is made up of linear segment and non-linear partial two parts:Linear segment is to being input into number
According to the analysis for doing different angles, draw under the angle to the judgement of overall input data;The effect of non-linear partial is exactly to break
Linear mapping relation before, makees the normalization of data, no matter what kind of work linear segment above has done, has arrived non-linear
Here, all of numerical value will be limited within the scope of one, if the Internet behind so will be based on front layer data after
Continuous to calculate, this numerical value is just relatively controllable.This two parts is combined, its objective is that just huge and mixed and disorderly data are entered
Row dimensionality reduction, finally obtains an effective range that can reach segmentation purpose.Full articulamentum F1 and full articulamentum F2 enters to data
Row linear change and nonlinear change, the dimensionality reduction that 6*6*256 is tieed up to 4096.Finally, full articulamentum F3 by Data Dimensionality Reduction into 2
" cloud " and " non-cloud " two class in dimension, that is, the present invention.Using two classification, we realize the segmentation of cloud evolution.
3. Pixel clustering and keypoint selection
A superpixel representation turns what was originally a pixel-level image into a region-level image. In this method, superpixels are produced with the mean shift algorithm (MeanShift). Mean shift, proposed by Fukunaga in 1975, is widely used in clustering, image smoothing and tracking; it refers to an iterative step that first computes the mean shift of the current point, then moves to that mean as the new starting point, and continues moving until a termination condition is met.
The algorithm covers two low-level vision tasks, segmentation smoothing and image segmentation. For every pixel of the image, mean shift performs the following operation: it computes the mean of the pixel's neighbourhood, then moves the pixel to the mean position of the grey values of all pixels in its neighbourhood, and repeats this on every pixel until the pixels share the same visual features, such as colour, texture and gradient. Strictly speaking, the pixels are not actually moved; rather, each pixel is labelled with the same class as the pixel at its convergence position. With this algorithm, an image can be divided into several different superpixels, as shown in Fig. 4.
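The iterative mean shift step can be illustrated on one-dimensional data (a toy sketch of the idea, not the image-domain implementation used by the method):

```python
def mean_shift_1d(points, bandwidth=2.0, tol=1e-4, max_iter=200):
    """Shift every point toward the mean of its bandwidth neighbourhood
    until no point moves more than tol."""
    shifted = list(points)
    for _ in range(max_iter):
        moved = 0.0
        for i, x in enumerate(shifted):
            neighbours = [y for y in shifted if abs(y - x) <= bandwidth]
            new_x = sum(neighbours) / len(neighbours)
            moved = max(moved, abs(new_x - x))
            shifted[i] = new_x
        if moved < tol:
            break
    return shifted

# two well-separated groups collapse onto two modes
modes = mean_shift_1d([1.0, 1.2, 0.8, 10.0, 10.3, 9.7])
```

Points that converge to the same mode are labelled as one cluster; in the image case, pixels converging to the same position form one superpixel.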
For each superpixel, this method vectorizes its pixels, taking the coordinate value of each pixel as an element of a vector, and samples 5 pixels at equal intervals from this vector as the keypoints of the superpixel.
To further demonstrate that this method is accurate and efficient, over the 100 test images of the whole experiment we compared random sampling of 1, 3, 5 and 7 points with equal-interval sampling of 1, 3, 5 and 7 points; the results are shown in Fig. 6. The comparison shows that random sampling is subject to uncertainty and its accuracy drops relative to equal-interval sampling; and of the two configurations with similar accuracy, equal-interval sampling of 7 points and equal-interval sampling of 5 points, sampling 5 points is more efficient. This method therefore finally adopts equal-interval sampling of 5 points as the basis for keypoint selection.
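The equal-interval selection of 5 keypoints from a vectorized superpixel can be sketched as follows (pure Python; the helper name is illustrative):

```python
def equal_interval_keypoints(pixels, k=5):
    """Pick k pixels at equal intervals along the vectorised superpixel;
    `pixels` is the list of coordinate values forming the vector."""
    if len(pixels) <= k:
        return list(pixels)
    step = (len(pixels) - 1) / (k - 1)      # spacing between sampled indices
    return [pixels[round(i * step)] for i in range(k)]

coords = [(0, i) for i in range(100)]       # a 100-pixel superpixel
keys = equal_interval_keypoints(coords)     # 5 evenly spaced coordinates
```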
During sampling, because superpixels on either side of a boundary may belong to different classes, this method applies an erosion treatment to each superpixel: taking each pixel of the superpixel as the centre point, the pixel is compared one by one with the pixels of its 4-neighbourhood; if all 5 pixels belong to this superpixel, the centre point is retained, otherwise the point is regarded as lying on the superpixel border and is removed. The purpose is to shrink the range of equal-interval sampling so that it falls as little as possible on the edge of the superpixel, avoiding the influence of boundary factors on superpixel identification.
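The 4-neighbourhood erosion described above may be sketched as follows (a pure-Python illustration over a 2-D label grid):

```python
def erode_superpixel(labels, sp_id):
    """Keep only interior pixels of superpixel sp_id: a pixel survives when its
    four neighbours (up, down, left, right) all carry the same label."""
    rows, cols = len(labels), len(labels[0])
    core = []
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] != sp_id:
                continue
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if all(0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == sp_id
                   for nr, nc in neighbours):
                core.append((r, c))
    return core

grid = [[0] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 4):
        grid[r][c] = 1                      # a 3*3 superpixel labelled 1
interior = erode_superpixel(grid, 1)        # only the centre pixel survives
```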
Through superpixel clustering, pixels originally numbering on the order of 100,000 are converted into a few thousand superpixels, laying the foundation for the subsequent steps of the method.
4. Image segmentation guided by region content
The present invention converts the image segmentation algorithm into the identification of image regions that share the same edge and texture features, instead of classifying every pixel of the whole picture one by one as in the traditional use of convolutional neural networks. The invention selects 5 representative keypoints in a region, performs image recognition classification on these 5 points one by one, and obtains the final segmentation result of the superpixel region by weighting the five results; the method is applied to all superpixel regions of the whole image, and the image is segmented according to the classification results. For a given superpixel, its classification result satisfies the following formula:
R=r1*ω1+r2*ω2+r3*ω3+r4*ω4+r5*ω5 (5)
where R is the recognition result of the superpixel, r1, r2, r3, r4 and r5 are the recognition results of the sub-images centred on the five points, and ω1, ω2, ω3, ω4 and ω5 are the corresponding weights. In the present invention, because the five points are taken by equal-interval sampling, they are given identical weights, i.e. ω1 = ω2 = ω3 = ω4 = ω5 = 0.2.
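The weighted fusion of formula (5) reduces, with equal weights of 0.2, to the following sketch (the 0.5 decision threshold for the final cloud/non-cloud label is our assumption):

```python
def fuse_superpixel(results, weights=None):
    """Weighted combination R = sum(r_i * w_i) of the keypoint
    classifications (1 = cloud, 0 = non-cloud)."""
    if weights is None:
        weights = [1.0 / len(results)] * len(results)  # equal weights: 0.2 for 5 points
    score = sum(r * w for r, w in zip(results, weights))
    return (1 if score >= 0.5 else 0), score

label, score = fuse_superpixel([1, 1, 1, 0, 0])  # three of five keypoints say "cloud"
```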
With this method, the image recognition of a superpixel region originally containing hundreds or thousands of pixels is converted into the image recognition of 5 points within it, greatly improving the running efficiency of the algorithm. As shown in Fig. 5, (a) is the cloud image, (b) is the superpixel clustering result of (a), (c) is the ground truth corresponding to (a), (d) is the segmentation result of the threshold method, (e) is the segmentation result of MR-CNN, and (f) is the segmentation result of this method on (a).
5. Segmentation evaluation
Because convolutional neural networks achieve very high precision in the field of image segmentation, the present invention adopts not only the pixel-error evaluation but also the program running time as standards for weighing the segmentation results.
The pixel error reflects the pixel similarity between the segmented picture and the original label; it is computed by accumulating, over each pixel, the Hamming distance between the given segmentation label L and its real data label L':
Epixel = ||L - L'||² (6)
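The pixel-error measure of formula (6) can be sketched as follows (for binary labels the squared distance equals the count of differing pixels; reporting it as a fraction of the image size is our normalization):

```python
def pixel_error(pred, gt):
    """Fraction of pixels where segmentation label L and ground truth L' differ
    (the Hamming distance of formula (6), normalised by image size)."""
    total, wrong = 0, 0
    for row_p, row_g in zip(pred, gt):
        for p, g in zip(row_p, row_g):
            total += 1
            wrong += (p != g)
    return wrong / total

pred = [[1, 1, 0], [0, 1, 0]]
gt   = [[1, 0, 0], [0, 1, 0]]
err = pixel_error(pred, gt)   # 1 differing pixel out of 6
```

Segmentation accuracy is then 1 minus this error.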
Following this method, the present invention was tested on the 120 cloud images and finally reached a cloud image segmentation accuracy of 99.55%, a further improvement over the 99.36% and 96.7% obtained by applying MR-CNN and the threshold method to the same data set; the segmentation results and comparison are shown in Fig. 7.
Meanwhile, in terms of processing speed, MR-CNN, whose precision is close, needed 17615 seconds, while SP-CNN needs on average only 33.12 seconds to segment one image, an improvement of about 530 times in efficiency, which proves that this method is both effective and fast.
Claims (2)
1. A cloud image segmentation method based on superpixel clustering optimizing a CNN, characterized in that the steps are as follows:
1) Superpixel clustering and keypoint selection:
A complete image is divided into several different superpixels; for each superpixel, its pixels are vectorized, taking the coordinate value of each pixel as an element of a vector, and 5 pixels are sampled at equal intervals from the vector as the keypoints of the superpixel; an erosion treatment is applied to the superpixel during sampling;
2) Production of the training sample set, validation sample set and test sample set
The data set comprises three parts: a training sample set, a validation sample set and a test sample set; the production principle of the three parts is exactly the same, only the chosen data ranges differ, so only the acquisition of one of them is described in detail below:
For millimetre-wave radar cloud images, since there is no public data set in the field of cloud image processing, ground-truth images must be produced to serve as the supervisory signals when training the CNN; the concrete preprocessing operations are as follows:
(1) Ground-truth generation: the ground truth corresponding to an original cloud image is obtained by marking the "cloud" and "non-cloud" regions of the cloud image in black and white; several cloud images are randomly selected from the cloud image set, together with their corresponding ground-truth images, for the CNN network training and test sample set generation of the next step;
(2) Image size adjustment: to ensure that every pixel of the whole image can be sampled when building the training, validation and test sample sets, the size of each cloud image is first adjusted: a background image border of D/2 pixels is added to a cloud image of size W*H, so that the image becomes (W+D)*(H+D);
(3) The concrete operations of sample collection and generation are as follows:
a. Several images C1 centred on pixels p are cropped for training and validating the CNN: centred on a certain pixel p of the W*H cloud image, an image C1 of size D*D is cropped with D as the side length; C1 is the sub-image centred on pixel p and contains the pixel features around that pixel;
b. For each pixel p of step a, the corresponding pixel p' is found in the corresponding ground-truth image; according to the label attribute of that pixel in the ground truth, a training label text file is produced as a list whose lines have the form "absolute path/image name label", where the label attribute of each pixel, "cloud" or "non-cloud", is represented by 1 or 0; for the images of the training set, the label text file is retained as the supervisory signal when training the CNN; for the images of the validation set, the network model generated from the training set produces a predicted result for each validation sample, and the label text file is used to check the accuracy of the network model; for the test set, no labels need to be generated, its ground-truth images being compared with the segmentation result images to evaluate the network subjectively and objectively; the three sample sets must be disjoint;
3) Training of the convolutional neural network model
The network consists of 5 convolutional layers and 3 fully connected layers, with pooling layers added only after convolutional layers C1, C2 and C5; F1 to F3 are fully connected layers, equivalent to a three-layer fully connected neural network classifier on top of the five convolutional layers; the number of neurons in F3 is adjusted from AlexNet's 1000 to 2;
4) Image segmentation guided by region content
Image recognition classification is performed one by one on the 5 points of step 1), and the final segmentation result of the superpixel region is obtained by weighting the five results; the method is applied to all superpixel regions of the whole image, and the image is segmented according to the classification results; for a given superpixel, its classification result satisfies the following formula:
R=r1*ω1+r2*ω2+r3*ω3+r4*ω4+r5*ω5 (1)
where R is the recognition result of the superpixel, r1, r2, r3, r4 and r5 are the recognition results of the sub-images centred on the five points, and ω1, ω2, ω3, ω4 and ω5 are each 0.2.
2. The method according to claim 1, characterized in that the training of the convolutional neural network model proceeds in detail as follows:
First, whatever the size of the input picture, it is resized and rescaled to 227*227 and fed into the network with its red, green and blue colour channels, so the input volume is 227*227*3;
C1 to C5 are convolutional layers; taking convolutional layer C1 as an example, its convolution filters are 11*11 with stride 4, and the layer has 96 filters in total, so its output is 96 feature maps of size 55*55; after the C1 convolution, a rectified linear unit (ReLU) is applied to accelerate convergence; a max-pooling layer with kernel size 3 and stride 2 is introduced while reducing the dimensionality of the convolved features, yielding 96 feature maps of size 27*27;
similarly, convolutional layer C2, with kernel size 5, padding 2, stride 1 and 256 convolution filters, yields 256 feature maps of size 27*27, reduced to 13*13 by a max-pooling layer; convolutional layer C3, with kernel size 3, padding 1, stride 1 and 384 filters, yields 384 feature maps of size 13*13; convolutional layer C4 yields 384 feature maps of size 13*13; convolutional layer C5, followed by its pooling layer, then yields 256 feature maps of size 6*6;
the fully connected layers F1 to F3 form a three-layer fully connected neural network classifier on top of the five convolutional layers; fully connected layers F1 and F2 apply linear and non-linear transformations to the data, reducing the 6*6*256 volume to 4096 dimensions; finally, fully connected layer F3 reduces the data to 2 dimensions, i.e. the two classes "cloud" and "non-cloud"; through this two-class classification the segmentation of the cloud image is realized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710000627.0A CN106651886A (en) | 2017-01-03 | 2017-01-03 | Cloud image segmentation method based on superpixel clustering optimization CNN |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106651886A true CN106651886A (en) | 2017-05-10 |
Family
ID=58838253
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710000627.0A Pending CN106651886A (en) | 2017-01-03 | 2017-01-03 | Cloud image segmentation method based on superpixel clustering optimization CNN |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106651886A (en) |
Cited By (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107316289A (en) * | 2017-06-08 | 2017-11-03 | 华中农业大学 | Crop field spike of rice dividing method based on deep learning and super-pixel segmentation |
CN107392925A (en) * | 2017-08-01 | 2017-11-24 | 西安电子科技大学 | Remote sensing image terrain classification method based on super-pixel coding and convolutional neural networks |
CN107977969A (en) * | 2017-12-11 | 2018-05-01 | 北京数字精准医疗科技有限公司 | A kind of dividing method, device and the storage medium of endoscope fluorescence image |
CN108537250A (en) * | 2018-03-16 | 2018-09-14 | 新智认知数据服务有限公司 | A kind of target following model building method and device |
CN108549832A (en) * | 2018-01-21 | 2018-09-18 | 西安电子科技大学 | LPI radar signal sorting technique based on full Connection Neural Network |
CN108615010A (en) * | 2018-04-24 | 2018-10-02 | 重庆邮电大学 | Facial expression recognizing method based on the fusion of parallel convolutional neural networks characteristic pattern |
CN108734357A (en) * | 2018-05-29 | 2018-11-02 | 北京佳格天地科技有限公司 | Weather prognosis system and method |
CN108776823A (en) * | 2018-07-06 | 2018-11-09 | 武汉兰丁医学高科技有限公司 | Cervical carcinoma lesion analysis method based on cell image recognition |
CN108876789A (en) * | 2018-06-15 | 2018-11-23 | 南方医科大学 | A kind of sequential chart segmentation method combined based on super-pixel and neighborhood block feature |
CN108985247A (en) * | 2018-07-26 | 2018-12-11 | 北方工业大学 | Multispectral image urban road identification method |
CN109086777A (en) * | 2018-07-09 | 2018-12-25 | 南京师范大学 | A kind of notable figure fining method based on global pixel characteristic |
CN109146885A (en) * | 2018-08-17 | 2019-01-04 | 深圳蓝胖子机器人有限公司 | Image partition method, equipment and computer readable storage medium |
CN109166133A (en) * | 2018-07-14 | 2019-01-08 | 西北大学 | Soft tissue organs image partition method based on critical point detection and deep learning |
CN109427049A (en) * | 2017-08-22 | 2019-03-05 | 成都飞机工业(集团)有限责任公司 | A kind of detection method of holiday |
CN109558806A (en) * | 2018-11-07 | 2019-04-02 | 北京科技大学 | The detection method and system of high score Remote Sensing Imagery Change |
CN109816670A (en) * | 2019-01-31 | 2019-05-28 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating Image Segmentation Model |
CN109886205A (en) * | 2019-02-25 | 2019-06-14 | 苏州清研微视电子科技有限公司 | Safety belt method of real-time and system |
CN109948708A (en) * | 2019-03-21 | 2019-06-28 | 西安电子科技大学 | Multispectral image feature level information fusion method when more based on the implicit canonical of iteration |
CN110503696A (en) * | 2019-07-09 | 2019-11-26 | 浙江浩腾电子科技股份有限公司 | A kind of vehicle face color characteristic detection method based on super-pixel sampling |
CN110764090A (en) * | 2019-10-22 | 2020-02-07 | 上海眼控科技股份有限公司 | Image processing method, image processing device, computer equipment and readable storage medium |
CN110796673A (en) * | 2019-10-31 | 2020-02-14 | Oppo广东移动通信有限公司 | Image segmentation method and related product |
CN110874841A (en) * | 2018-09-04 | 2020-03-10 | 斯特拉德视觉公司 | Object detection method and device with reference to edge image |
TWI689894B (en) * | 2017-06-02 | 2020-04-01 | 宏達國際電子股份有限公司 | Image segmentation method and apparatus |
CN111126187A (en) * | 2019-12-09 | 2020-05-08 | 上海眼控科技股份有限公司 | Fire detection method, system, electronic device and storage medium |
CN111160529A (en) * | 2019-12-28 | 2020-05-15 | 天津大学 | Convolutional neural network-based training sample generation method in target pose measurement |
CN111598001A (en) * | 2020-05-18 | 2020-08-28 | 哈尔滨理工大学 | Apple tree pest and disease identification method based on image processing |
CN111695640A (en) * | 2020-06-18 | 2020-09-22 | 南京信息职业技术学院 | Foundation cloud picture recognition model training method and foundation cloud picture recognition method |
CN111742553A (en) * | 2017-12-14 | 2020-10-02 | 交互数字Vc控股公司 | Deep learning based image partitioning for video compression |
CN111882527A (en) * | 2020-07-14 | 2020-11-03 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN112132842A (en) * | 2020-09-28 | 2020-12-25 | 华东师范大学 | Brain image segmentation method based on SEEDS algorithm and GRU network |
CN112561925A (en) * | 2020-12-02 | 2021-03-26 | 中国联合网络通信集团有限公司 | Image segmentation method, system, computer device and storage medium |
CN112639830A (en) * | 2018-08-30 | 2021-04-09 | 华为技术有限公司 | Apparatus and method for separating picture into foreground and background using deep learning |
WO2021233021A1 (en) * | 2020-05-18 | 2021-11-25 | 腾讯科技(深圳)有限公司 | Method for training image region segmentation model, and segmentation method and apparatus |
CN113792653A (en) * | 2021-09-13 | 2021-12-14 | 山东交通学院 | Method, system, equipment and storage medium for cloud detection of remote sensing image |
CN114648711A (en) * | 2022-04-11 | 2022-06-21 | 成都信息工程大学 | Clustering-based cloud particle sub-image false target filtering method |
CN114663790A (en) * | 2022-05-24 | 2022-06-24 | 济宁德信测绘有限公司 | Intelligent remote sensing mapping method and system |
CN114677499A (en) * | 2022-04-11 | 2022-06-28 | 成都信息工程大学 | Cloud microparticle image particle region positioning method |
US11989886B2 (en) | 2019-02-12 | 2024-05-21 | Tata Consultancy Services Limited | Automated unsupervised localization of context sensitive events in crops and computing extent thereof |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103914841A (en) * | 2014-04-03 | 2014-07-09 | 深圳大学 | Bacterium division and classification method based on superpixels and in-depth learning and application thereof |
CN105118049A (en) * | 2015-07-22 | 2015-12-02 | 东南大学 | Image segmentation method based on super pixel clustering |
CN105184772A (en) * | 2015-08-12 | 2015-12-23 | 陕西师范大学 | Adaptive color image segmentation method based on super pixels |
CN105321176A (en) * | 2015-09-30 | 2016-02-10 | 西安交通大学 | Image segmentation method based on hierarchical higher order conditional random field |
CN105844228A (en) * | 2016-03-21 | 2016-08-10 | 北京航空航天大学 | Remote sensing image cloud detection method based on convolution nerve network |
Cited By (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI689894B (en) * | 2017-06-02 | 2020-04-01 | 宏達國際電子股份有限公司 | Image segmentation method and apparatus |
CN107316289A (en) * | 2017-06-08 | 2017-11-03 | 华中农业大学 | Crop field spike of rice dividing method based on deep learning and super-pixel segmentation |
CN107392925A (en) * | 2017-08-01 | 2017-11-24 | 西安电子科技大学 | Remote sensing image terrain classification method based on super-pixel coding and convolutional neural networks |
CN107392925B (en) * | 2017-08-01 | 2020-07-07 | 西安电子科技大学 | Remote sensing image ground object classification method based on super-pixel coding and convolutional neural network |
CN109427049A (en) * | 2017-08-22 | 2019-03-05 | 成都飞机工业(集团)有限责任公司 | A kind of detection method of holiday |
CN107977969A (en) * | 2017-12-11 | 2018-05-01 | 北京数字精准医疗科技有限公司 | A kind of dividing method, device and the storage medium of endoscope fluorescence image |
CN107977969B (en) * | 2017-12-11 | 2020-07-21 | 北京数字精准医疗科技有限公司 | Endoscope fluorescence image segmentation method, device and storage medium |
CN111742553A (en) * | 2017-12-14 | 2020-10-02 | 交互数字Vc控股公司 | Deep learning based image partitioning for video compression |
CN108549832A (en) * | 2018-01-21 | 2018-09-18 | 西安电子科技大学 | LPI radar signal sorting technique based on full Connection Neural Network |
CN108549832B (en) * | 2018-01-21 | 2021-11-30 | 西安电子科技大学 | Low-interception radar signal classification method based on full-connection neural network |
CN108537250A (en) * | 2018-03-16 | 2018-09-14 | 新智认知数据服务有限公司 | A kind of target following model building method and device |
CN108615010B (en) * | 2018-04-24 | 2022-02-11 | 重庆邮电大学 | Facial expression recognition method based on parallel convolution neural network feature map fusion |
CN108615010A (en) * | 2018-04-24 | 2018-10-02 | 重庆邮电大学 | Facial expression recognizing method based on the fusion of parallel convolutional neural networks characteristic pattern |
CN108734357A (en) * | 2018-05-29 | 2018-11-02 | 北京佳格天地科技有限公司 | Weather prognosis system and method |
CN108876789A (en) * | 2018-06-15 | 2018-11-23 | 南方医科大学 | A kind of sequential chart segmentation method combined based on super-pixel and neighborhood block feature |
CN108776823A (en) * | 2018-07-06 | 2018-11-09 | 武汉兰丁医学高科技有限公司 | Cervical carcinoma lesion analysis method based on cell image recognition |
CN109086777A (en) * | 2018-07-09 | 2018-12-25 | 南京师范大学 | A kind of notable figure fining method based on global pixel characteristic |
CN109086777B (en) * | 2018-07-09 | 2021-09-28 | 南京师范大学 | Saliency map refining method based on global pixel characteristics |
CN109166133A (en) * | 2018-07-14 | 2019-01-08 | 西北大学 | Soft tissue organs image partition method based on critical point detection and deep learning |
CN109166133B (en) * | 2018-07-14 | 2021-11-23 | 西北大学 | Soft tissue organ image segmentation method based on key point detection and deep learning |
CN108985247A (en) * | 2018-07-26 | 2018-12-11 | 北方工业大学 | Multispectral image urban road identification method |
CN108985247B (en) * | 2018-07-26 | 2021-12-21 | 北方工业大学 | Multispectral image urban road identification method |
CN109146885A (en) * | 2018-08-17 | 2019-01-04 | 深圳蓝胖子机器人有限公司 | Image partition method, equipment and computer readable storage medium |
CN112639830A (en) * | 2018-08-30 | 2021-04-09 | 华为技术有限公司 | Apparatus and method for separating picture into foreground and background using deep learning |
CN110874841A (en) * | 2018-09-04 | 2020-03-10 | 斯特拉德视觉公司 | Object detection method and device with reference to edge image |
CN110874841B (en) * | 2018-09-04 | 2023-08-29 | 斯特拉德视觉公司 | Object detection method and device with reference to edge image |
CN109558806A (en) * | 2018-11-07 | 2019-04-02 | 北京科技大学 | The detection method and system of high score Remote Sensing Imagery Change |
CN109816670A (en) * | 2019-01-31 | 2019-05-28 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating Image Segmentation Model |
US11989886B2 (en) | 2019-02-12 | 2024-05-21 | Tata Consultancy Services Limited | Automated unsupervised localization of context sensitive events in crops and computing extent thereof |
CN109886205A (en) * | 2019-02-25 | 2019-06-14 | 苏州清研微视电子科技有限公司 | Real-time safety belt monitoring method and system |
CN109886205B (en) * | 2019-02-25 | 2023-08-08 | 苏州清研微视电子科技有限公司 | Real-time safety belt monitoring method and system |
CN109948708A (en) * | 2019-03-21 | 2019-06-28 | 西安电子科技大学 | Multi-temporal multispectral image feature-level information fusion method based on iterative implicit regularization |
CN110503696B (en) * | 2019-07-09 | 2021-09-21 | 浙江浩腾电子科技股份有限公司 | Vehicle face color feature detection method based on super-pixel sampling |
CN110503696A (en) * | 2019-07-09 | 2019-11-26 | 浙江浩腾电子科技股份有限公司 | Vehicle face color feature detection method based on super-pixel sampling |
CN110764090A (en) * | 2019-10-22 | 2020-02-07 | 上海眼控科技股份有限公司 | Image processing method, image processing device, computer equipment and readable storage medium |
CN110796673B (en) * | 2019-10-31 | 2023-02-24 | Oppo广东移动通信有限公司 | Image segmentation method and related product |
CN110796673A (en) * | 2019-10-31 | 2020-02-14 | Oppo广东移动通信有限公司 | Image segmentation method and related product |
CN111126187A (en) * | 2019-12-09 | 2020-05-08 | 上海眼控科技股份有限公司 | Fire detection method, system, electronic device and storage medium |
CN111160529B (en) * | 2019-12-28 | 2023-06-20 | 天津大学 | Training sample generation method in target pose measurement based on convolutional neural network |
CN111160529A (en) * | 2019-12-28 | 2020-05-15 | 天津大学 | Convolutional neural network-based training sample generation method in target pose measurement |
CN111598001A (en) * | 2020-05-18 | 2020-08-28 | 哈尔滨理工大学 | Apple tree pest and disease identification method based on image processing |
WO2021233021A1 (en) * | 2020-05-18 | 2021-11-25 | 腾讯科技(深圳)有限公司 | Method for training image region segmentation model, and segmentation method and apparatus |
CN111695640A (en) * | 2020-06-18 | 2020-09-22 | 南京信息职业技术学院 | Ground-based cloud image recognition model training method and ground-based cloud image recognition method |
CN111695640B (en) * | 2020-06-18 | 2024-04-09 | 南京信息职业技术学院 | Ground-based cloud image recognition model training method and ground-based cloud image recognition method |
CN111882527A (en) * | 2020-07-14 | 2020-11-03 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN112132842A (en) * | 2020-09-28 | 2020-12-25 | 华东师范大学 | Brain image segmentation method based on SEEDS algorithm and GRU network |
CN112561925A (en) * | 2020-12-02 | 2021-03-26 | 中国联合网络通信集团有限公司 | Image segmentation method, system, computer device and storage medium |
CN113792653A (en) * | 2021-09-13 | 2021-12-14 | 山东交通学院 | Remote sensing image cloud detection method, system, device, and storage medium |
CN113792653B (en) * | 2021-09-13 | 2023-10-20 | 山东交通学院 | Remote sensing image cloud detection method, system, device, and storage medium |
CN114677499A (en) * | 2022-04-11 | 2022-06-28 | 成都信息工程大学 | Cloud microparticle image particle region positioning method |
CN114677499B (en) * | 2022-04-11 | 2023-04-18 | 成都信息工程大学 | Cloud microparticle image particle region positioning method |
CN114648711B (en) * | 2022-04-11 | 2023-03-10 | 成都信息工程大学 | Clustering-based cloud particle sub-image false target filtering method |
CN114648711A (en) * | 2022-04-11 | 2022-06-21 | 成都信息工程大学 | Clustering-based cloud particle sub-image false target filtering method |
CN114663790A (en) * | 2022-05-24 | 2022-06-24 | 济宁德信测绘有限公司 | Intelligent remote sensing mapping method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106651886A (en) | Cloud image segmentation method based on superpixel clustering optimization CNN | |
CN106127725B (en) | Millimeter-wave radar cloud image segmentation method based on multi-resolution CNN | |
CN112561146A (en) | Large-scale real-time traffic flow prediction method based on fuzzy logic and deep LSTM | |
CN103049763B (en) | Context-constraint-based target identification method | |
CN102902956B (en) | Ground-based visible cloud image recognition and processing method | |
CN106356757A (en) | Method for inspecting power lines with an unmanned aerial vehicle based on human visual characteristics | |
CN107742093A (en) | Infrared image power equipment component real-time detection method, server, and system | |
CN107016677A (en) | Cloud image segmentation method based on FCN and CNN | |
CN107346420A (en) | Text detection and localization method in natural scenes based on deep learning | |
CN108647602B (en) | Aerial remote sensing image scene classification method based on image complexity determination | |
CN111126287B (en) | Remote sensing image dense target deep learning detection method | |
CN107563412A (en) | Infrared image power equipment real-time detection method based on deep learning | |
CN103218832B (en) | Visual saliency algorithm based on global color contrast and spatial distribution in images | |
CN108564115A (en) | Semi-supervised polarimetric SAR terrain classification method based on fully convolutional GAN | |
CN112819830A (en) | Individual tree crown segmentation method based on deep learning and airborne laser point cloud | |
CN107730515A (en) | Panoramic image saliency detection method based on region growing and an eye movement model | |
CN108416353A (en) | Fast rice spike segmentation method in crop fields based on deep fully convolutional neural networks | |
CN105046259B (en) | Coronal mass ejection detection method based on multi-feature fusion | |
CN108254750B (en) | Intelligent downburst identification and early warning method based on radar data | |
CN109712127A (en) | Transmission line fault detection method for machine-patrol video streams | |
CN113239722B (en) | Multi-scale strong convection extrapolation method and system based on deep learning | |
CN108229589A (en) | Ground-based cloud image classification method based on transfer learning | |
CN110222586A (en) | Method for calculating building depth and establishing an urban morphology parameter database | |
CN108399424A (en) | Point cloud classification method, intelligent terminal, and storage medium | |
CN110390673A (en) | Automatic cigarette detection method in surveillance scenes based on deep learning | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||