CN108776805A - Method and device for establishing an image classification model and classifying image features - Google Patents

Method and device for establishing an image classification model and classifying image features

Info

Publication number
CN108776805A
CN108776805A (application CN201810415090.9A)
Authority
CN
China
Prior art keywords
pixel
image
training sample
classified
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810415090.9A
Other languages
Chinese (zh)
Inventor
王宁 (Wang Ning)
曹红杰 (Cao Hongjie)
肖计划 (Xiao Jihua)
刘军 (Liu Jun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bd Navigation & Lbs Beijing Co Ltd
Original Assignee
Bd Navigation & Lbs Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bd Navigation & Lbs Beijing Co Ltd filed Critical Bd Navigation & Lbs Beijing Co Ltd
Priority to CN201810415090.9A
Publication of CN108776805A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and device for establishing an image classification model and classifying image features, to improve the accuracy and robustness of image feature classification. The method for establishing an image classification model includes: resampling an existing training sample set using the bagging algorithm to obtain T training sample sets, where T is a preset total number of training sample sets, and a training sample set contains pixels and their corresponding class labels; and inputting the pixels and corresponding class labels of each training sample set, group by group, into an initial fully convolutional network model to obtain T trained fully convolutional network models, where one group of pixels comprises all pixels within the image size defined by the convolution kernel.

Description

Method and device for establishing an image classification model and classifying image features
Technical field
The present invention relates to the fields of computer and communication technology, and in particular to a method and device for establishing an image classification model and classifying image features.
Background
With the development of remote sensing, pattern recognition, and artificial intelligence, research on remote sensing image classification has advanced rapidly. Scholars at home and abroad have proposed a variety of remote sensing image classification methods, such as the maximum likelihood method based on statistical theory, and machine learning methods including decision trees, random forests, support vector machines, and BP (back-propagation) neural networks. Compared with traditional statistical methods, machine learning methods achieve better classification results. However, both support vector machines and BP neural networks are shallow learning methods; as sample sizes grow and sample diversity increases, shallow models gradually become inadequate for complex remote sensing image classification. The development of deep learning has opened a new path for remote sensing image classification. As an evolution of neural networks, deep learning transforms the original remote sensing image into higher-level, more abstract representations by building deep models, improving classification precision and showing clear superiority over the aforementioned methods. In theory, the more layers a model has, the more complex its structure and the more accurate its classification results, but this comes at the cost of a longer model training time.
Summary of the invention
The present invention provides a method and device for establishing an image classification model and classifying image features, to improve the accuracy and robustness of image feature classification.
The present invention provides a method for establishing an image classification model, including:
resampling an existing training sample set using the bagging algorithm to obtain T training sample sets, where T is a preset total number of training sample sets, and a training sample set contains pixels and their corresponding class labels; and
inputting the pixels and corresponding class labels of each training sample set, group by group, into an initial fully convolutional network model to obtain T trained fully convolutional network models, where one group of pixels comprises all pixels within the image size defined by the convolution kernel.
The technical solution provided by this embodiment of the present invention may have the following beneficial effects: the bagging algorithm increases the weight of a portion of the pixels in each training sample set, and the emphasized pixels differ across the T training sample sets, so training yields T fully convolutional network models for image classification. Obtaining multiple fully convolutional network models helps make subsequent classification more accurate.
Optionally, inputting the pixels and corresponding class labels of each training sample set, group by group, into the initial fully convolutional network model to obtain the T trained fully convolutional network models includes:
inputting the pixels and corresponding class labels of each training sample set, group by group, into the initial fully convolutional network model; and
performing, by the fully convolutional network model, at least two rounds of convolution and pooling on each group of pixels, followed by deconvolution, to obtain the T trained fully convolutional network models.
The technical solution provided by this embodiment of the present invention may have the following beneficial effects: performing multiple rounds of convolution and pooling iterates and aggregates the classification results, improving the accuracy of the fully convolutional network model.
Optionally, the method further includes:
before resampling the existing training sample set using the bagging algorithm, preprocessing the training sample set, where the preprocessing includes at least one of the following: radiometric correction, geometric correction, orthorectification, image mosaicking, and cropping.
The technical solution provided by this embodiment of the present invention may have the following beneficial effects: preprocessing the training sample set in advance reduces image deformation and the influence of other disturbing factors, bringing the images closer to a standardized form and helping improve the accuracy of the trained fully convolutional network model.
The present invention provides a method for classifying image features, including:
inputting the pixels of an image to be classified, group by group, into T pre-established fully convolutional network models to obtain T groups of first-level classification results, where one group of pixels comprises all pixels within the image size defined by the convolution kernel; and
aggregating the T groups of first-level classification results by voting, to determine the final classification result of each pixel in the image to be classified.
The technical solution provided by this embodiment of the present invention may have the following beneficial effects: this embodiment classifies the same image to be classified using multiple fully convolutional network models, so that the multiple classification results corroborate one another. Classification accuracy can be basically guaranteed while fewer convolutions are used, improving the efficiency of image feature classification.
Optionally, the method further includes:
before inputting the pixels of the image to be classified, group by group, into the T pre-established fully convolutional network models, preprocessing the image to be classified, where the preprocessing includes at least one of the following: radiometric correction, geometric correction, orthorectification, image mosaicking, and cropping.
The technical solution provided by this embodiment of the present invention may have the following beneficial effects: preprocessing the image to be classified in advance reduces image deformation and the influence of other disturbing factors, bringing the image closer to a standardized form and aiding subsequent processing.
Optionally, aggregating the T groups of first-level classification results by voting to determine the final classification result of each pixel in the image to be classified includes:
aggregating the T groups of first-level classification results by voting, to determine the second-level classification result of each pixel in the image to be classified; and
filtering the second-level classification result by nonlinear filtering to obtain the final classification result of each pixel in the image to be classified.
The technical solution provided by this embodiment of the present invention may have the following beneficial effects: through nonlinear filtering, the heterogeneous small patches and salt-and-pepper noise in the classification results can be filtered out.
Optionally, filtering the second-level classification result by nonlinear filtering to obtain the final classification result of each pixel in the image to be classified includes:
performing dilation and erosion operations on the second-level classification result to obtain the final classification result of each pixel in the image to be classified.
The technical solution provided by this embodiment of the present invention may have the following beneficial effects: using dilation and erosion operations not only achieves filtering but can also connect adjacent pixels.
The present invention provides a device for establishing an image classification model, including:
a resampling module, configured to resample an existing training sample set using the bagging algorithm to obtain T training sample sets, where T is a preset total number of training sample sets, and a training sample set contains pixels and their corresponding class labels; and
a training module, configured to input the pixels and corresponding class labels of each training sample set, group by group, into an initial fully convolutional network model to obtain T trained fully convolutional network models, where one group of pixels comprises all pixels within the image size defined by the convolution kernel.
Optionally, the training module includes:
an input submodule, configured to input the pixels and corresponding class labels of each training sample set, group by group, into the initial fully convolutional network model; and
a training submodule, configured to perform, by the fully convolutional network model, at least two rounds of convolution and pooling on each group of pixels, followed by deconvolution, to obtain the T trained fully convolutional network models.
Optionally, the device further includes:
a preprocessing module, configured to preprocess the training sample set before the existing training sample set is resampled using the bagging algorithm, where the preprocessing includes at least one of the following: radiometric correction, geometric correction, orthorectification, image mosaicking, and cropping.
The present invention provides a device for classifying image features, including:
a convolution module, configured to input the pixels of an image to be classified, group by group, into T pre-established fully convolutional network models to obtain T groups of first-level classification results, where one group of pixels comprises all pixels within the image size defined by the convolution kernel; and
a voting module, configured to aggregate the T groups of first-level classification results by voting, to determine the final classification result of each pixel in the image to be classified.
Optionally, the device further includes:
a preprocessing module, configured to preprocess the image to be classified before its pixels are input, group by group, into the T pre-established fully convolutional network models, where the preprocessing includes at least one of the following: radiometric correction, geometric correction, orthorectification, image mosaicking, and cropping.
Optionally, the voting module includes:
a voting submodule, configured to aggregate the T groups of first-level classification results by voting, to determine the second-level classification result of each pixel in the image to be classified; and
a filtering submodule, configured to filter the second-level classification result by nonlinear filtering to obtain the final classification result of each pixel in the image to be classified.
Optionally, the filtering submodule performs dilation and erosion operations on the second-level classification result to obtain the final classification result of each pixel in the image to be classified.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the invention. The objectives and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the written description, claims, and accompanying drawings.
The technical solution of the present invention is described in further detail below with reference to the drawings and embodiments.
Description of the drawings
The accompanying drawings are provided for further understanding of the present invention and constitute a part of the specification; together with the embodiments of the present invention, they serve to explain the invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a flowchart of a method for establishing an image classification model in an embodiment of the present invention;
Fig. 2 is a flowchart of a method for establishing an image classification model in an embodiment of the present invention;
Fig. 3 is a flowchart of a method for classifying image features in an embodiment of the present invention;
Fig. 4 is a flowchart of a method for classifying image features in an embodiment of the present invention;
Fig. 5 is a structural diagram of a device for establishing an image classification model in an embodiment of the present invention;
Fig. 6 is a structural diagram of the training module in an embodiment of the present invention;
Fig. 7 is a structural diagram of a device for establishing an image classification model in an embodiment of the present invention;
Fig. 8 is a structural diagram of a device for classifying image features in an embodiment of the present invention;
Fig. 9 is a structural diagram of a device for classifying image features in an embodiment of the present invention;
Fig. 10 is a structural diagram of the voting module in an embodiment of the present invention.
Detailed description of the embodiments
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here are intended only to illustrate and explain the present invention, not to limit it.
In the related art, deep learning methods are mostly used to classify image features. However, the higher the required classification precision, the more complex the deep learning model and the longer the computation time.
To solve this problem, this embodiment combines the bagging algorithm with a convolutional network algorithm: the bagging algorithm increases the weight of some features in each sample set, and the convolutional network algorithm performs the feature classification. Because the weight of some features is increased in each sample set, the convolutional network algorithm can obtain classification results with fewer convolutions. This embodiment thus reduces the number of convolutions while basically guaranteeing classification accuracy, improving classification efficiency.
Referring to Fig. 1, the method for establishing an image classification model in this embodiment includes:
Step 101: Resample an existing training sample set using the bagging algorithm to obtain T training sample sets, where T is a preset total number of training sample sets, and a training sample set contains pixels and their corresponding class labels.
Step 102: Input the pixels and corresponding class labels of each training sample set, group by group, into an initial fully convolutional network model to obtain T trained fully convolutional network models; one group of pixels comprises all pixels within the image size defined by the convolution kernel.
In this embodiment, the bagging algorithm resamples the training sample set, deriving T training sample sets from one original set. Each of the T training sample sets weights a different portion of the training samples, so the bias of each of the T trained fully convolutional network models is different. The T fully convolutional network models are later used to classify the features of the same image to be classified. By establishing multiple fully convolutional network models, this embodiment helps make subsequent classification more accurate.
The pixel is the basic unit of feature classification in this embodiment; a pixel may be a single image pixel. Each pixel in the training sample set is treated as a sample, and resampling is performed using the bagging (bootstrap aggregating) algorithm.
Regarding the bagging algorithm: suppose a training sample set S contains n distinct training samples {x1, x2, …, xn}. If one training sample is drawn at random from S with replacement each time, and n draws are made in total to form a new training sample set S*, then the probability p that S* does not contain a particular training sample xi (i = 1, 2, …, n) is:
p = (1 - 1/n)^n, which tends to 1/e ≈ 0.368 as n grows (Formula 1)
Therefore, although the new training sample set S* has the same total number of training samples as the original set S (both n), S* may contain repeated training samples because sampling is done with replacement.
Resampling with the bagging algorithm means S* may contain repeated training samples, which effectively increases the weight of that portion of the samples. When the fully convolutional network model is subsequently trained, the weighted training samples participate in training more often, so the trained model classifies the image features resembling those emphasized samples more accurately.
This embodiment repeats the above process to obtain T training sample sets S*, where T is the preset total number of training sample sets, e.g. T = 10. The T training sample sets S* further increase the number of iterations over the training samples and raise the weight of the repeated samples.
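The resampling step can be sketched in a few lines of Python. This is a minimal illustration under the assumption that each sample is a (pixel, class label) pair; the function and variable names are illustrative, not from the patent:

```python
import random

def bagging_resample(samples, T, seed=0):
    """Draw T bootstrap training sets, each the same size as `samples`.

    Sampling is done with replacement, so each new set may contain
    duplicates, which effectively increases the weight of those samples.
    """
    rng = random.Random(seed)
    n = len(samples)
    return [[rng.choice(samples) for _ in range(n)] for _ in range(T)]

# Each sample is a (pixel_value, class_label) pair (illustrative data).
training_set = [((i, i), i % 3) for i in range(100)]
resampled = bagging_resample(training_set, T=10)
```

Each of the T resampled sets has the same size n as the original; by Formula (1), roughly 36.8% of the original samples are expected to be absent from any one bootstrap set, while others appear multiple times.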
The T training sample sets S* are input into the initial fully convolutional network (FCN) model. After the training samples of each set S* are input into the FCN model, one trained model F(X) is obtained; the T training sample sets S* thus yield T trained models F1(X), F2(X), …, FT(X). Because the input training samples differ, the biases of F1(X), F2(X), …, FT(X) differ, and each model F(X) is better at classifying some portion of the image features. All models F(X) working together allow the features in each image to be classified relatively accurately.
This embodiment increases the number of fully convolutional network models, so the number of convolutions in each model can be reduced. The multiple fully convolutional network models can be run in parallel.
The FCN model is first trained in advance on a large number of known, labeled training samples. Taking remote sensing images as an example, different ground objects in the remote sensing image are labeled by manual vectorization, collecting label maps for vegetation, buildings, water bodies, roads, and other ground objects. The training sample set contains the training samples and their classification results (i.e. class labels). The selected samples should satisfy two conditions: first, the actual ground-object class represented by each pixel in the training sample should be consistent with its class label; second, the selected sample pixels should be representative.
Optionally, step 102 includes steps A1 and A2.
Step A1: Input the pixels and corresponding class labels of each training sample set, group by group, into the initial fully convolutional network model.
Step A2: Perform, by the fully convolutional network model, at least two rounds of convolution and pooling on each group of pixels, followed by deconvolution, to obtain the T trained fully convolutional network models.
The fully convolutional network model in this embodiment can use any of several architectures, such as U-Net ("U-Net: Convolutional Networks for Biomedical Image Segmentation"), SegNet ("SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation"), and DeepLab ("Semantic Image Segmentation with Deep Convolutional Nets").
The convolution formula of the FCN model is shown in Formula (2):
x_m = f( Σ_i conv(x_i, K_mi) + b_m ) (Formula 2)
where x_m denotes the m-th output map; x_i denotes the i-th band of a group of training samples (e.g. RGB contains 3 bands, while a remote sensing image contains more); one group of training samples comprises the pixels contained in the image size defined by the convolution kernel (if the convolution kernel size is 5, the defined image size is 5×5, and pixels are selected from the image in 5×5 blocks and input into the fully convolutional network model; because of the resampling, the selected region may contain repeated pixels); i indexes the input feature map layers; K_mi denotes the i-th component of the convolution kernel corresponding to the m-th output; b_m denotes the bias corresponding to the m-th output; f(·) denotes the activation function; and conv(·) denotes the convolution operation. K_mi and b_m are preset parameters.
All training samples in the T training sample sets S* are input, group by group, into Formula (2) to complete one round of convolution. Pooling is then performed on the convolution output.
The pooling in this embodiment may use max pooling, as shown in Formula (3):
x_(i,j) = max( x_(i·s+k, j·s+k) ), k = 0, 1, …, K (Formula 3)
where x_(i,j) denotes the value of the pooling-layer output at image coordinate (i, j), K denotes the side length of the square region selected for down-sampling (e.g. K = 4), and s denotes the sliding step of the local region during computation (e.g. s = 1).
Through pooling, the maximum x_m within a region (the x_m of Formula (2) at some position (i, j)) is chosen as the output for the region spanned by i·s+k, j·s+k. Pooling is a down-sampling operation: it reduces the amount of sample data while improving the robustness of the features to translations and other variations of the input.
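The max pooling of Formula (3) can be illustrated with a pure-Python sketch over a single 2-D feature map. Names and the example data are illustrative; an actual FCN would use a deep-learning framework:

```python
def max_pool(fmap, K=2, s=2):
    """Max pooling: each output value is the maximum over a K-by-K window
    slid with stride s over the input feature map (a 2-D list)."""
    rows, cols = len(fmap), len(fmap[0])
    out = []
    for i in range(0, rows - K + 1, s):
        row = []
        for j in range(0, cols - K + 1, s):
            row.append(max(fmap[i + di][j + dj]
                           for di in range(K) for dj in range(K)))
        out.append(row)
    return out

fmap = [[1, 3, 2, 4],
        [5, 6, 1, 2],
        [7, 2, 9, 0],
        [1, 4, 3, 8]]
print(max_pool(fmap))  # [[6, 4], [7, 9]]
```

Note how the 4×4 input shrinks to 2×2: this is the down-sampling effect described above, and small translations of the input often leave the pooled output unchanged.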
Down-sampling operates on fixed-size rectangular regions of the input and does not change the number of input layers. Convolution can therefore be performed again on the pooling result, taking the pooling result as the input of the next convolution.
Convolution and pooling are repeated several times (for example, at least two rounds of convolution and pooling are performed), and then one deconvolution is carried out to recover the size of the original image to be classified.
Because the bagging algorithm has already increased the weight and quantity of the training samples in advance, the number of convolution and pooling rounds can be reduced while the preset convolution termination condition is still met, and training yields the T fully convolutional network models.
Deconvolution mainly restores the feature map, through up-sampling, to the same size as the input image (i.e. the image to be classified), preserving the spatial information of the original input. The size of the deconvolution kernel therefore needs to be determined first, as shown in Formula (4):
O = S(W - 1) + k (Formula 4)
where O is the size of the original image, S is the convolution stride, W is the size of the feature map after the last pooling, and k is the size of the deconvolution kernel. The value of k can be derived from Formula (4) and substituted into the deconvolution formula, yielding the T fully convolutional network models F1(X), F2(X), …, FT(X).
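Formula (4) rearranges to k = O - S(W - 1). A small helper makes the computation concrete; the function name and example numbers are illustrative, not from the patent:

```python
def deconv_kernel_size(O, S, W):
    """Solve O = S*(W - 1) + k for the deconvolution kernel size k, where
    O is the original image size, S the stride, and W the size of the
    feature map after the last pooling layer."""
    k = O - S * (W - 1)
    if k <= 0:
        raise ValueError("stride/feature-map size inconsistent with output size")
    return k

# Example: recover a 224-pixel-wide image from a 7-wide feature map with stride 32.
print(deconv_kernel_size(224, 32, 7))  # 32
```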
Through the T training sample sets S*, each set S* trains one group of K_mi and b_m, i.e. one fully convolutional network model. The K_mi and b_m of the T fully convolutional network models differ, and each model is better at classifying the image features of certain classes.
In addition, enhancement processing is applied to the training sample images in advance. The enhancement processing includes rotation, blurring, illumination adjustment, and the addition of Gaussian noise and salt-and-pepper noise. Each type of enhancement includes different degrees of processing; for example, rotation includes rotations by multiple angles, and blurring includes multiple degrees of blur. The samples obtained from this processing are used as training samples to train the FCN model. Such enhancement increases the diversity of the training samples, so that training yields a more accurate FCN model.
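Two of the enhancement operations mentioned, rotation and salt-and-pepper noise, can be sketched in pure Python. This is a simplified illustration under the assumption of a single-band image stored as a 2-D list, not the patent's actual processing pipeline:

```python
import random

def rotate90(img):
    """Rotate a 2-D image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def salt_pepper(img, prob=0.05, lo=0, hi=255, seed=0):
    """Randomly replace each pixel, with probability `prob`, by the
    minimum (pepper) or maximum (salt) intensity value."""
    rng = random.Random(seed)
    return [[(lo if rng.random() < 0.5 else hi) if rng.random() < prob else px
             for px in row] for row in img]

img = [[1, 2],
       [3, 4]]
print(rotate90(img))  # [[3, 1], [4, 2]]
```

Applying several rotation angles and several noise levels to each labeled image multiplies the number of training samples, which is the diversity effect described above.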
Optionally, the method further includes step B.
Step B: Before resampling the existing training sample set using the bagging algorithm, preprocess the training sample set; the preprocessing includes at least one of the following: radiometric correction, geometric correction, orthorectification, image mosaicking, and cropping.
This embodiment is applicable to training on the image features in remote sensing images. The above preprocessing brings the training samples closer to a standardized form and reduces the influence of image deformation and the atmosphere on the images.
The implementation process is described in detail below through embodiments.
Referring to Fig. 2, the method for establishing an image classification model in this embodiment includes:
Step 201: Preprocess the training sample set; the preprocessing includes at least one of the following: radiometric correction, geometric correction, orthorectification, image mosaicking, and cropping.
Step 202: Resample the existing training sample set using the bagging algorithm to obtain T training sample sets, where T is the preset total number of training sample sets, and a training sample set contains pixels and their corresponding class labels.
Step 203: Input the pixels and corresponding class labels of each training sample set, group by group, into an initial fully convolutional network model.
Step 204: Perform, by the fully convolutional network model, at least two rounds of convolution and pooling on each group of pixels, followed by deconvolution, to obtain T trained fully convolutional network models.
After the image classification model is trained, image features can be classified with it; the implementation process is described in the following embodiments.
Referring to Fig. 3, the method that characteristics of image is classified in the present embodiment includes:
Step 301:By the pixel in image to be classified as unit of group, it is input to the T full convolutional networks pre-established Model obtains T group first-level class results;One group of pixel includes all pixels in picture size defined in convolution kernel.
Step 302:It polymerize T groups first-level class as a result, according to voting method, determines each picture in the image to be classified The final classification result of member.
Image to be classified is separately input to the full convolutional network models of T by the present embodiment, and T full convolutional network models can be with Parallel operation.Since the biasing of each full convolutional network model is different, characteristics of image can be carried out from different sides respectively Classification, obtains T group first-level class results.Then, using voting method, the final classification result of each pixel is determined so that point Class result is more accurate.
For example, T full convolutional network model F1(X),F2(X),…,FT(X) a1 is respectively obtained ..., aT first-level class knot Fruit.Due to T sample set S*Include the sample of multiple repetitions, then first-level class results set a1 ..., aT also contain repetition Classification results.Therefore, it polymerize the first-level class of all pixels of T sample set as a result, according to voting method, is waited for described in determination The final classification result of each pixel in classification image.
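The per-pixel vote described above can be sketched in NumPy. This is a minimal illustration; the function and variable names are ours, not the patent's:

```python
import numpy as np

def majority_vote(label_maps):
    """Aggregate T per-pixel label maps (shape (T, H, W) of integer
    class ids) into one map by taking, for each pixel, the class
    with the most votes."""
    preds = np.asarray(label_maps)
    n_classes = int(preds.max()) + 1
    # votes[c, i, j] = how many of the T models assigned class c to pixel (i, j)
    votes = np.stack([(preds == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# three 2x2 first-level results from three hypothetical models
a1 = np.array([[0, 1], [1, 1]])
a2 = np.array([[0, 0], [1, 0]])
a3 = np.array([[1, 0], [1, 1]])
print(majority_vote([a1, a2, a3]))   # [[0 0]
                                     #  [1 1]]
```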
Optionally, the method further includes step C.
Step C: Before the pixels of the image to be classified are input, group by group, into the T pre-established fully convolutional network models, pre-process the image to be classified. The pre-processing includes at least one of the following: radiometric correction, geometric correction, orthorectification, image mosaicking, and cropping.
This embodiment is applicable to classifying image features in remote-sensing images. The above pre-processing brings the image to be classified closer to a standard form and reduces the influence of geometric distortion and the atmosphere on the image.
Optionally, step 302 includes steps D1 and D2.
Step D1: Aggregate the T groups of first-level classification results and determine the second-level classification result of each pixel in the image to be classified by voting.
Step D2: Filter the second-level classification result with a non-linear filtering method to obtain the final classification result of each pixel in the image to be classified.
This embodiment uses mathematical morphology (a non-linear filtering method) to remove the spurious patches and salt-and-pepper noise produced by classification, improving the accuracy of the results.
Optionally, step D2 includes step D21.
Step D21: Apply dilation and erosion operations to the second-level classification result to obtain the final classification result of each pixel in the image to be classified.
The main morphological operations are opening and closing. Erosion followed by dilation is an opening, which removes isolated points and burrs and smooths the image; dilation followed by erosion is a closing, which, with a suitably chosen structuring element, can join two neighbouring targets.
The dilation operation is given by formula (5):

X ⊕ S = { x | (Ŝ)ₓ ∩ X ≠ ∅ }  (5)

The erosion operation is given by formula (6):

X ⊖ S = { x | Sₓ ⊆ X }  (6)

where X is the image to be processed, i.e. the image after deconvolution, S is the structuring element, Sₓ denotes S translated to position x, and Ŝ denotes the reflection of S.
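For a binary mask, formulas (5) and (6) can be written out directly from their set definitions. The following is a pure-NumPy sketch that assumes an odd-sized, symmetric structuring element, for which dilation with S and with its reflection coincide:

```python
import numpy as np

def dilate(x, s):
    """X dilated by S: a pixel is set if S, centred there, overlaps X anywhere."""
    h, w = s.shape
    pad = np.pad(x, ((h // 2, h // 2), (w // 2, w // 2)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.any(pad[i:i + h, j:j + w][s.astype(bool)])
    return out

def erode(x, s):
    """X eroded by S: a pixel is kept only if S, centred there, fits entirely inside X."""
    h, w = s.shape
    pad = np.pad(x, ((h // 2, h // 2), (w // 2, w // 2)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.all(pad[i:i + h, j:j + w][s.astype(bool)])
    return out

x = np.zeros((5, 5), dtype=np.uint8)
x[1:4, 1:4] = 1                      # a solid 3x3 block
s = np.ones((3, 3), dtype=np.uint8)  # structuring element S
print(erode(x, s).sum())             # 1: only the centre pixel survives
print(dilate(x, s).sum())            # 25: the block grows to fill the 5x5 image
```

Applying `erode` then `dilate` to `x` (an opening) recovers the original 3x3 block, which is how isolated noise pixels smaller than S are removed.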
The implementation is described in detail below through an embodiment.
Referring to Fig. 4, the image-feature classification method of this embodiment includes:
Step 401: Pre-process the image to be classified. The pre-processing includes at least one of the following: radiometric correction, geometric correction, orthorectification, image mosaicking, and cropping.
Step 402: Input the pixels of the image to be classified, group by group, into the T pre-established fully convolutional network models to obtain T groups of first-level classification results. A group of pixels comprises all pixels within the image region defined by the convolution kernel.
Step 403: Aggregate the T groups of first-level classification results and determine the second-level classification result of each pixel in the image to be classified by voting.
Step 404: Apply dilation and erosion operations to the second-level classification result to obtain the final classification result of each pixel in the image to be classified.
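The smoothing effect of step 404 can be illustrated with SciPy's ready-made binary morphology. The patent specifies only dilation and erosion; the closing used here (a dilation followed by an erosion) is one common combination of the two, and the mask below is a made-up example rather than real classification output:

```python
import numpy as np
from scipy.ndimage import binary_closing

# a voted class mask with a one-pixel "pepper" hole in it
mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True
mask[3, 3] = False                          # the noise pixel

s = np.ones((3, 3), dtype=bool)             # structuring element S
clean = binary_closing(mask, structure=s)   # dilation, then erosion
print(clean[3, 3])                          # True: the hole has been filled
```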
The above embodiments can be freely combined according to actual needs.
The process of classifying image features has been described above; it can also be realized by a device, whose internal structure and functions are introduced below.
Referring to Fig. 5, the device for establishing an image classification model in this embodiment includes a resampling module 501 and a training module 502.
The resampling module 501 is configured to resample the existing training sample set using the bagging algorithm to obtain T training sample sets, where T is the preset total number of training sample sets; each training sample set contains pixels and corresponding class identifiers.
The training module 502 is configured to input the pixels and corresponding class identifiers of each training sample set, group by group, into an initial fully convolutional network model to obtain T trained fully convolutional network models; a group of pixels comprises all pixels within the image region defined by the convolution kernel.
Optionally, as shown in Fig. 6, the training module 502 includes an input submodule 601 and a training submodule 602.
The input submodule 601 is configured to input the pixels and corresponding class identifiers of each training sample set, group by group, into an initial fully convolutional network model.
The training submodule 602 is configured to apply, in the fully convolutional network model, at least two rounds of convolution and pooling to every group of pixels, followed by deconvolution, to obtain T trained fully convolutional network models.
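The pass performed by the training submodule (at least two rounds of convolution and pooling, then deconvolution) can be sketched shape-wise in plain NumPy. Nearest-neighbour upsampling stands in for the learned deconvolution, and the averaging kernel is a fixed placeholder rather than trained weights:

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2-D convolution of a single-channel image with kernel k."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool2(x):
    """2x2 max pooling with stride 2 (odd trailing rows/cols dropped)."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour upsampling by 2, a stand-in for deconvolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.arange(256, dtype=float).reshape(16, 16)   # one group of pixels
k = np.ones((3, 3)) / 9.0                         # placeholder kernel
f = maxpool2(conv2d(x, k))     # first convolution + pooling: 16x16 -> 7x7
f = maxpool2(conv2d(f, k))     # second round: 7x7 -> 2x2
f = upsample2(upsample2(f))    # "deconvolution" back up: 2x2 -> 8x8
print(f.shape)                 # (8, 8)
```

In a real fully convolutional network the kernels are learned, pooling and upsampling are chosen so the output grid matches the input, and each output position carries per-class scores; the sketch only shows how the spatial resolution shrinks and is then restored.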
Optionally, as shown in Fig. 7, the device further includes:
a pre-processing module 701, configured to pre-process the training sample set before it is resampled using the bagging algorithm; the pre-processing includes at least one of the following: radiometric correction, geometric correction, orthorectification, image mosaicking, and cropping.
Referring to Fig. 8, the device for classifying image features in this embodiment includes a convolution module 801 and a voting module 802.
The convolution module 801 is configured to input the pixels of the image to be classified, group by group, into the T pre-established fully convolutional network models to obtain T groups of first-level classification results; a group of pixels comprises all pixels within the image region defined by the convolution kernel.
The voting module 802 is configured to aggregate the T groups of first-level classification results and determine the final classification result of each pixel in the image to be classified by voting.
Optionally, as shown in Fig. 9, the device further includes a pre-processing module 901.
The pre-processing module 901 is configured to pre-process the image to be classified before its pixels are input, group by group, into the T pre-established fully convolutional network models; the pre-processing includes at least one of the following: radiometric correction, geometric correction, orthorectification, image mosaicking, and cropping.
Optionally, as shown in Fig. 10, the voting module 802 includes a voting submodule 1001 and a filtering submodule 1002.
The voting submodule 1001 is configured to aggregate the T groups of first-level classification results and determine the second-level classification result of each pixel in the image to be classified by voting.
The filtering submodule 1002 is configured to filter the second-level classification result with a non-linear filtering method to obtain the final classification result of each pixel in the image to be classified.
Optionally, the filtering submodule 1002 applies dilation and erosion operations to the second-level classification result to obtain the final classification result of each pixel in the image to be classified.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data-processing device to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data-processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data-processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If such modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to include them as well.

Claims (14)

1. A method for establishing an image classification model, characterized by comprising:
resampling an existing training sample set using a bagging algorithm to obtain T training sample sets, wherein T is a preset total number of training sample sets, and each training sample set comprises pixels and corresponding class identifiers; and
inputting the pixels and corresponding class identifiers of each training sample set, group by group, into an initial fully convolutional network model to obtain T trained fully convolutional network models, wherein a group of pixels comprises all pixels within an image region defined by a convolution kernel.
2. The method of claim 1, characterized in that inputting the pixels and corresponding class identifiers of each training sample set, group by group, into an initial fully convolutional network model to obtain T trained fully convolutional network models comprises:
inputting the pixels and corresponding class identifiers of each training sample set, group by group, into the initial fully convolutional network model; and
applying, in the fully convolutional network model, at least two rounds of convolution and pooling to every group of pixels, followed by one deconvolution, to obtain the T trained fully convolutional network models.
3. The method of claim 1, characterized in that the method further comprises:
pre-processing the training sample set before it is resampled using the bagging algorithm, the pre-processing comprising at least one of the following: radiometric correction, geometric correction, orthorectification, image mosaicking, and cropping.
4. A method for classifying image features, characterized by comprising:
inputting pixels of an image to be classified, group by group, into T pre-established fully convolutional network models to obtain T groups of first-level classification results, wherein a group of pixels comprises all pixels within an image region defined by a convolution kernel; and
aggregating the T groups of first-level classification results and determining a final classification result of each pixel in the image to be classified by voting.
5. The method of claim 4, characterized in that the method further comprises:
pre-processing the image to be classified before its pixels are input, group by group, into the T pre-established fully convolutional network models, the pre-processing comprising at least one of the following: radiometric correction, geometric correction, orthorectification, image mosaicking, and cropping.
6. The method of claim 4, characterized in that aggregating the T groups of first-level classification results and determining the final classification result of each pixel in the image to be classified by voting comprises:
aggregating the T groups of first-level classification results and determining a second-level classification result of each pixel in the image to be classified by voting; and
filtering the second-level classification result with a non-linear filtering method to obtain the final classification result of each pixel in the image to be classified.
7. The method of claim 6, characterized in that filtering the second-level classification result with the non-linear filtering method to obtain the final classification result of each pixel in the image to be classified comprises:
applying dilation and erosion operations to the second-level classification result to obtain the final classification result of each pixel in the image to be classified.
8. A device for establishing an image classification model, characterized by comprising:
a resampling module, configured to resample an existing training sample set using a bagging algorithm to obtain T training sample sets, wherein T is a preset total number of training sample sets, and each training sample set comprises pixels and corresponding class identifiers; and
a training module, configured to input the pixels and corresponding class identifiers of each training sample set, group by group, into an initial fully convolutional network model to obtain T trained fully convolutional network models, wherein a group of pixels comprises all pixels within an image region defined by a convolution kernel.
9. The device of claim 8, characterized in that the training module comprises:
an input submodule, configured to input the pixels and corresponding class identifiers of each training sample set, group by group, into the initial fully convolutional network model; and
a training submodule, configured to apply, in the fully convolutional network model, at least two rounds of convolution and pooling to every group of pixels, followed by deconvolution, to obtain the T trained fully convolutional network models.
10. The device of claim 8, characterized in that the device further comprises:
a pre-processing module, configured to pre-process the training sample set before it is resampled using the bagging algorithm, the pre-processing comprising at least one of the following: radiometric correction, geometric correction, orthorectification, image mosaicking, and cropping.
11. A device for classifying image features, characterized by comprising:
a convolution module, configured to input pixels of an image to be classified, group by group, into T pre-established fully convolutional network models to obtain T groups of first-level classification results, wherein a group of pixels comprises all pixels within an image region defined by a convolution kernel; and
a voting module, configured to aggregate the T groups of first-level classification results and determine a final classification result of each pixel in the image to be classified by voting.
12. The device of claim 11, characterized in that the device further comprises:
a pre-processing module, configured to pre-process the image to be classified before its pixels are input, group by group, into the T pre-established fully convolutional network models, the pre-processing comprising at least one of the following: radiometric correction, geometric correction, orthorectification, image mosaicking, and cropping.
13. The device of claim 11, characterized in that the voting module comprises:
a voting submodule, configured to aggregate the T groups of first-level classification results and determine a second-level classification result of each pixel in the image to be classified by voting; and
a filtering submodule, configured to filter the second-level classification result with a non-linear filtering method to obtain the final classification result of each pixel in the image to be classified.
14. The device of claim 13, characterized in that the filtering submodule applies dilation and erosion operations to the second-level classification result to obtain the final classification result of each pixel in the image to be classified.
CN201810415090.9A (priority and filing date 2018-05-03): Method and device for establishing an image classification model and classifying image features; published as CN108776805A (status: Pending).


Publication: CN108776805A, published 2018-11-09.

Family

ID=64026932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810415090.9A Pending CN108776805A (en) 2018-05-03 2018-05-03 It is a kind of establish image classification model, characteristics of image classification method and device

Country Status (1)

Country Link
CN (1) CN108776805A (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009058915A1 (en) * 2007-10-29 2009-05-07 The Trustees Of The University Of Pennsylvania Computer assisted diagnosis (cad) of cancer using multi-functional, multi-modal in-vivo magnetic resonance spectroscopy (mrs) and imaging (mri)
CN102663374A (en) * 2012-04-28 2012-09-12 北京工业大学 Multi-class Bagging gait recognition method based on multi-characteristic attribute
CN103106265A (en) * 2013-01-30 2013-05-15 北京工商大学 Method and system of classifying similar images
CN106096561A (en) * 2016-06-16 2016-11-09 重庆邮电大学 Infrared pedestrian detection method based on image block degree of depth learning characteristic
CN106248559A (en) * 2016-07-14 2016-12-21 中国计量大学 A kind of leukocyte five sorting technique based on degree of depth study
CN106845406A (en) * 2017-01-20 2017-06-13 深圳英飞拓科技股份有限公司 Head and shoulder detection method and device based on multitask concatenated convolutional neutral net
CN106897681A (en) * 2017-02-15 2017-06-27 武汉喜恩卓科技有限责任公司 A kind of remote sensing images comparative analysis method and system
CN106991374A (en) * 2017-03-07 2017-07-28 中国矿业大学 Handwritten Digit Recognition method based on convolutional neural networks and random forest
CN107066553A (en) * 2017-03-24 2017-08-18 北京工业大学 A kind of short text classification method based on convolutional neural networks and random forest
CN107103338A (en) * 2017-05-19 2017-08-29 杭州电子科技大学 Merge the SAR target identification methods of convolution feature and the integrated learning machine that transfinites
CN107220980A (en) * 2017-05-25 2017-09-29 重庆理工大学 A kind of MRI image brain tumor automatic division method based on full convolutional network
CN107301221A (en) * 2017-06-16 2017-10-27 华南理工大学 A kind of data digging method of multiple features dimension heap fusion
CN107527352A (en) * 2017-08-09 2017-12-29 中国电子科技集团公司第五十四研究所 Remote sensing Ship Target contours segmentation and detection method based on deep learning FCN networks
CN107679110A (en) * 2017-09-15 2018-02-09 广州唯品会研究院有限公司 The method and device of knowledge mapping is improved with reference to text classification and picture attribute extraction
CN107832718A (en) * 2017-11-13 2018-03-23 重庆工商大学 Finger vena anti false authentication method and system based on self-encoding encoder
CN107909566A (en) * 2017-10-28 2018-04-13 杭州电子科技大学 A kind of image-recognizing method of the cutaneum carcinoma melanoma based on deep learning
CN107958257A (en) * 2017-10-11 2018-04-24 华南理工大学 A kind of Chinese traditional medicinal materials recognition method based on deep neural network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杜培军 (Du Peijun): 《遥感原理与应用》 (Principles and Applications of Remote Sensing), 31 July 2006 *


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20181109)