CN111652317B - Super-parameter image segmentation method based on Bayes deep learning - Google Patents


Info

Publication number
CN111652317B
CN111652317B (granted publication of application CN202010501892.9A)
Authority
CN
China
Prior art keywords
target
image
feature
edge
super
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010501892.9A
Other languages
Chinese (zh)
Other versions
CN111652317A (en)
Inventor
齐仁龙
张庆辉
杨绪华
朱小会
李大海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou University of Science and Technology
Original Assignee
Zhengzhou University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou University of Science and Technology
Priority to CN202010501892.9A
Publication of CN111652317A
Application granted
Publication of CN111652317B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155Bayesian classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a super-parameter image segmentation method based on Bayesian deep learning, aimed at solving the technical problems of the large computation load and low precision of super-parameter extraction in existing image segmentation. The method comprises: selecting an image training set; applying a Gaussian process to the image information; preprocessing the data set with an L2 regularization operator to obtain image contour edge features; constructing a target edge feature recognition training set; classifying the data set according to Bayes' theorem; setting image target edge segmentation labels based on semantic recognition; further extracting the target edge feature data set with the Gaussian process; and calculating the Gaussian super-parameter set of the target-set edge features. The beneficial effect of the invention is improved efficiency and accuracy of target recognition.

Description

Super-parameter image segmentation method based on Bayes deep learning
Technical Field
The invention relates to the technical field of machine learning, in particular to a super-parameter image segmentation method based on Bayesian deep learning.
Background
In the field of computer vision, image segmentation refers to the task of assigning a label to each pixel in an image, which can also be regarded as a classification of pixels. Unlike object detection using rectangular candidate boxes, image segmentation needs to be accurate to pixel level locations, and thus it plays a very important role in tasks such as medical analysis, satellite image object detection, iris recognition, and automatic driving of automobiles.
Humans recognize targets largely by relying on experience to distinguish them, whereas deep learning extracts target features by constructing and training a convolutional neural network and then recognizes targets from those features. The result of a conventional target recognition method is usually restricted to a set of predefined object classes, such as faces or vehicles. Yet the content of an image is far richer than a few mutually independent objects: it also contains multiple objects together with their attributes, spatial relations, logical relations, and other information that cannot be described by class labels alone and must instead be described in natural language. No single mathematical model can cover all target recognition, so many condition-specific classifiers arise, and the cross-domain fusion recognition efficiency of deep learning remains low.
Pixel-level image segmentation is a research hotspot in artificial intelligence and a mathematical modeling problem spanning multiple disciplines such as image processing, pattern recognition, visual perception, and cognitive psychology. Through long-term evolution and learning, humans identify targets by experience with great ease, but automatically identifying targets against complex backgrounds by machine requires complex mathematical modeling, in which the choice of recognition model and the optimization of super-parameters are critical. Deep learning super-parameters are hard to select and show no regularity; different super-parameters interact unpredictably, so tuning is very time-consuming, and each super-parameter combination requires a large amount of iterative computation to evaluate. Classical optimization algorithms such as particle swarm optimization, simulated annealing, and local search are no longer applicable to such problems. Researchers have proposed surrogate-model methods that reduce the evaluation cost of such problems by simulating an estimate of the objective function. However, whether gatekeeper-style algorithms or the adaptive simulation algorithms proposed in reinforcement learning are used, the learning process remains time-consuming and restricted to particular conditions or fields; precision is hard to guarantee, and cross-domain fusion is hard to achieve.
Disclosure of Invention
The invention provides a super-parameter image segmentation method based on Bayesian deep learning, aimed at solving the technical problems of the large computation load and low precision of super-parameter extraction in existing image segmentation.
In order to solve the technical problems, the invention adopts the following technical scheme:
a super-parameter image segmentation method based on Bayes deep learning is designed, which comprises the following steps:
step 1: data preprocessing, namely regularizing data elements in an image to generate an image segmentation class data set;
step 2: extracting target edge features from the image by using Gaussian masks;
step 3: extracting a boundary box and a target mask of the image through the edge features of the target by using Bayesian estimation;
step 4: the boundary frame and the target mask are put into a feature dictionary for comparison, so that the category of each target in the image can be obtained;
the construction method of the feature dictionary comprises the following steps:
(1) Establishing a training set and a testing set of images;
(2) Performing the operations of steps 1-3 on each image in the training set to obtain its bounding box and target mask;
(3) Collecting the bounding box and the target mask in the step (2) to obtain a feature dictionary composed of the bounding box and the target mask;
(4) Inputting the test set into the feature dictionary in the step (3), checking the accuracy of the obtained feature dictionary, and if the accuracy does not meet the requirement, adjusting the parameters of the model to retrain until the accuracy of the feature dictionary meets the requirement.
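Steps 1-4 above, together with the dictionary comparison, can be sketched as a minimal Python pipeline. All function names, and the simple thresholding used here as a stand-in for the Bayesian estimation step, are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np
from scipy import ndimage

def preprocess(image):
    """Step 1 (illustrative): L2-normalize the pixel data."""
    img = image.astype(float)
    norm = np.sqrt((img ** 2).sum())
    return img / norm if norm > 0 else img

def edge_features(image, sigma=1.0):
    """Step 2: Gaussian-mask edge features (Gaussian-derivative gradient
    magnitude; sigma is the Gaussian super-parameter)."""
    gx = ndimage.gaussian_filter(image.astype(float), sigma, order=(0, 1))
    gy = ndimage.gaussian_filter(image.astype(float), sigma, order=(1, 0))
    return np.hypot(gx, gy)

def bounding_box_and_mask(edges, threshold=None):
    """Step 3 (crude stand-in for Bayesian estimation): threshold the edge
    map into a target mask and take the mask's bounding box."""
    if threshold is None:
        threshold = edges.mean() + edges.std()
    mask = edges > threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None, mask
    return (xs.min(), ys.min(), xs.max(), ys.max()), mask

def classify(mask, feature_dict):
    """Step 4: compare the target mask against dictionary entries by L2
    distance; the nearest entry gives the category."""
    best, best_d = None, np.inf
    for label, ref in feature_dict.items():
        d = np.linalg.norm(mask.astype(float) - ref.astype(float))
        if d < best_d:
            best, best_d = label, d
    return best
```

A square on a blank background passes through the four steps end to end: its edges light up in step 2, step 3 recovers a box and mask, and step 4 matches the mask against a (here trivial) dictionary.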
Further, in step 1, the image segmentation class data set includes N target segmentation class attributes and M data attributes for each target class; when the probabilities of the N class attributes and the M data attributes are largest, a Bayesian classification matcher is adopted to select the target and segment the image. The software used is Python and the framework is TensorFlow.
Further, in step 2, the specific steps of extracting the target edge features are:
The first step: assuming that the edge probability of the image pixel f(x, y) satisfies a Gaussian distribution, the two-dimensional Gaussian function is:
G(x, y) = (1/(2πσ²))·exp(−(x² + y²)/(2σ²))
The second step: obtain the gradient functions in the x and y directions of the image:
G_x(x, y) = ∂G/∂x = −(x/σ²)·G(x, y),  G_y(x, y) = ∂G/∂y = −(y/σ²)·G(x, y)
The third step: convolve the image data set:
E_x = f(x, y) * G_x,  E_y = f(x, y) * G_y
The fourth step: calculate the probability density distribution of the image target edge, namely the target edge features:
P(x, y) = sqrt(E_x² + E_y²)
further, in step 2, prior to extracting the target class label, the prior probability of the target occurrence is calculated:
wherein C is i For class C target set (C 1 、C 2 、C 3 ...C n ) Any one of the elements, N i Representing the number of occurrences of the target, and N represents the total amount of the target set.
Further, in step 2, in the process of extracting the target class label, the conditional probability of the target occurrence is calculated:
Pr(x | C_i) = P(x_a)·P(y_a)
where x_a is the abscissa of the target point and y_a its ordinate, and P(x_a), P(y_a) are the target edge feature probabilities.
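As a small numeric illustration of these two probabilities (the occurrence counts and per-axis probabilities below are hypothetical values, not from the patent):

```python
from collections import Counter

# Hypothetical target occurrences in a training set.
detections = ["car"] * 6 + ["person"] * 3 + ["tree"] * 1
counts = Counter(detections)
N = sum(counts.values())

# Prior probability of each class: Pr(C_i) = N_i / N.
priors = {c: n / N for c, n in counts.items()}
print(priors["car"])   # 0.6

# Conditional probability of a target point, assuming the edge-feature
# probabilities along the two axes are independent: Pr(x | C_i) = P(x_a) * P(y_a)
p_xa, p_ya = 0.8, 0.5  # hypothetical per-axis edge-feature probabilities
print(p_xa * p_ya)     # 0.4
```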
Further, in step 3, the specific steps of extracting the bounding box of the image and the target mask are:
(1) Obtaining a target region of the image and a classification weight of each pixel in the region by learning the extracted target boundary characteristics;
(2) After the target areas of the images are obtained, the internal and external feature images of each target area are combined into two complete feature images, and the two branch tasks of image segmentation and image classification are then carried out synchronously on data sets D1 and D2;
(3) In image segmentation, classifying internal and external feature images of a target area by using a Bayesian classifier to distinguish a foreground and a background in an image and generate a mask;
(4) In the image classification, the maximum value is taken according to the pixel probability distribution in the two types of feature images to obtain a new feature image, and then the maximum likelihood estimation classifier is used to obtain the class of the object in the target area.
Further, in step 4, the method for comparing the bounding box and target mask with the feature dictionary is as follows: first, the similarity weights between the bounding box and target mask and the entries of the feature dictionary are calculated with the L2 regularization operator; then the target edge feature data set is extracted by the similarity Gaussian process; and the semantic segmentation result is obtained through Bayesian classification matching.
Furthermore, before the semantic segmentation result is output, the edge Gaussian super-parameter function is calculated and the target matching degree is computed from its value; the better the super-parameter set, the higher the precision of the semantic segmentation.
Further, the training set of images includes the Open Images V4 detection set, which contains 1.9 million pictures and 15.4 million bounding boxes for 600 categories.
Compared with the prior art, the invention has the beneficial technical effects that:
according to the invention, a Bayesian formula is mainly utilized, a python language and tensorsurface framework is utilized, the image is subjected to pixel preprocessing according to the steep principle of the edge feature protruding position after the image edge feature Gaussian process, the whole image is subjected to the Gaussian process to obtain an edge feature data set, then the data set is preprocessed by utilizing an L2 criterion, a Bayesian estimation model is utilized for image target edge feature recognition according to the image target segmentation requirement based on semantic recognition, an image target edge feature data dictionary based on semantic recognition is constructed by combining deep learning, and the trained model is applied to a complex target recognition system, so that the target recognition efficiency and accuracy are improved.
Drawings
FIG. 1 is a flow chart of a Bayesian deep learning-based hyper-parametric image segmentation method of the present invention.
Detailed Description
The following examples are given to illustrate the invention in detail, but are not intended to limit the scope of the invention in any way.
The procedures involved or relied on in the following embodiments are conventional procedures or simple procedures in the technical field, and those skilled in the art can make routine selections or adaptation according to specific application scenarios.
Example 1: referring to FIG. 1, the overall steps of the super-parameter image segmentation method based on Bayesian deep learning are as follows: select an image training set and apply a Gaussian process to the image information; preprocess the data set with the L2 regularization operator to obtain image contour edge features; construct a target edge feature recognition training set; classify the data set according to Bayes' rule; set image target edge segmentation labels based on semantic recognition; further extract the target edge feature data set with the Gaussian process and calculate the Gaussian super-parameter set of the target edge features; calculate the target edge posterior probability from the data set, taking the maximum posterior probability as the semantic target image segmentation and recognition probability; and perform Bayesian score matching. If the score is higher than 90, the recognition is judged correct; otherwise deep learning with the 0.618 coefficient is used to adjust the Gaussian super-parameters and the Gaussian process training is repeated until optimal super-parameters are obtained. The final result of the invention: an image and a target label are input to the model for image target segmentation, and the target is separated from the corresponding background. That is, for the trained model, given an image and the target information to be queried, the corresponding target can be detected in the image.
The construction method of the feature dictionary is as follows:
(1) Data preprocessing
Regularize the data elements in the image to generate an image segmentation class data set. The data set comprises N target segmentation class attributes and M data attributes representing each target class; when the probabilities of the N class attributes and the M data attributes are largest, the model segments the image along the target edge contour. The concrete implementation uses the Python language under the TensorFlow artificial intelligence framework.
(2) Extracting edge features of a target
The target edge super-parameter features are extracted mainly through the Gaussian transform; the main features extracted are:
1. target image edge pixel gray transitions;
2. the pixel boundary between different materials, textures, colors and brightness of the target generates jump;
3. the target contour line and the background have different reflection characteristics, and pixel value jump is formed;
4. the object is shaded by the illumination, which also creates inter-pixel grey value transitions.
The attribute calculation comprises the following steps: a Gaussian-mask edge feature extraction algorithm is applied to the image f(x, y). It is characterized in that a suitable Gaussian mask is selected according to the large change in probability density at image target edge transitions, and the positions of high probability density in the data set — the edge pixels — are obtained by computing the maximum within each pixel's neighborhood.
The specific process is as follows:
the first step: assuming that the probability of the edge of the image pixel f (x, y) satisfies the gaussian distribution, the two-dimensional gaussian function is:
and a second step of: for the x, y direction, the gradient function is calculated:
and a third step of: convolving the image dataset:
fourth step: calculating the probability density distribution of the image target edge, namely the target edge characteristics:
The calculation method of the Gaussian edge super-parameter feature extraction module is characterized in that a Gaussian kernel function is first calculated and then convolved, as a mask, with the target pixels; this extracts the target edge features and is, at the same time, a super-parameter estimation process.
After the image pixel edge features are obtained, the feature distribution parameters are calculated by a Gaussian process, and the minimum (optimum) of the Gaussian feature loss function is then found with an L2 regularization operator; a penalty term is added to the algorithm to prevent the model from over-fitting. The L2 regularization operator is:
Loss = E_in + λ·Σ_j w_j²
where Loss is the loss function, E_in is the training sample error without the regularization term, and λ is the regularization parameter (penalty coefficient). To further constrain the model, the regularization condition is defined as:
Σ_j w_j² ≤ C
i.e. the sum of squares of all weights w must not exceed the threshold C, which ensures that minimizing the training sample error E_in also minimizes the loss function value.
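A direct transcription of the regularized loss and its constraint (a sketch; taking E_in as a mean squared error is an assumption, since the patent does not specify the error form):

```python
import numpy as np

def l2_regularized_loss(errors, weights, lam):
    """Loss = E_in + lam * sum_j(w_j^2): the training-sample error plus the
    L2 penalty term, with lam the regularization (penalty) parameter."""
    e_in = float(np.mean(np.square(errors)))           # E_in, no regularization
    penalty = lam * float(np.sum(np.square(weights)))  # lam * sum of w^2
    return e_in + penalty

def satisfies_constraint(weights, C):
    """Constrained form: the sum of squares of all weights w must not
    exceed the threshold C."""
    return float(np.sum(np.square(weights))) <= C
```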
(3) Boundary box and target mask for extracting image
The Bayesian estimation algorithm is mainly adopted. The Bayesian estimation model recognizes the objective world on the principle of prior-probability recognition. Its basic idea is that, on the basis of a large accumulation of samples, the maximum-probability estimate of the recognized object is made from the prior and conditional probabilities, and the maximum probability is the recognition result; when the analyzed sample approaches the size of the population, the probability of an event in the sample approaches its probability in the population, so minimum-error prediction can be achieved. The core of Bayesian estimation is super-parameter selection. To reduce trapping in local optima during image segmentation, a super-parameter image data classification model based on the Gaussian distribution is constructed and combined with deep learning; the 0.618 estimation method (golden-section method) is adopted to select super-parameters, effectively balancing super-parameter error against weight, and achieving an efficient pixel segmentation method based on semantic understanding.
The Bayes classifier is a classification method based on statistical theory. For a sample set C = {c_1, c_2, c_3, …, c_n} containing n classes of samples, the classifier first computes, for an N-dimensional feature vector x = [x_1, x_2, …, x_N], the likelihood of the label of each class, then sorts these and takes the maximum to determine the class label C_i to which x belongs. The Bayes formula is:
Pr(c_i | x) = Pr(x | c_i)·Pr(c_i) / Pr(x)    (10)
where Pr(c_i | x) is the posterior probability, Pr(x | c_i) the conditional probability, and Pr(c_i) the prior probability. The classification problem thus reduces to maximizing the probability that x belongs to class C_i:
C_i = argmax Pr(x | c_i)·Pr(c_i)    (11)
Experiments show that the precision of the naive Bayes classifier is considerably higher than that of other classes of classifiers.
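The argmax rule (11) can be sketched as a minimal Gaussian naive Bayes classifier — a generic illustration of the technique, not the patent's trained model:

```python
import numpy as np

class GaussianNaiveBayes:
    """C_i = argmax Pr(x | c_i) Pr(c_i), with Gaussian class-conditionals
    and the naive (feature-independence) assumption."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.priors, self.means, self.vars_ = {}, {}, {}
        for c in self.classes:
            Xc = X[y == c]
            self.priors[c] = len(Xc) / len(X)      # Pr(c_i)
            self.means[c] = Xc.mean(axis=0)
            self.vars_[c] = Xc.var(axis=0) + 1e-9  # avoid division by zero
        return self

    def predict(self, X):
        out = []
        for x in X:
            best, best_score = None, -np.inf
            for c in self.classes:
                # log Pr(x | c_i) + log Pr(c_i), features taken independent
                log_lik = -0.5 * np.sum(
                    np.log(2 * np.pi * self.vars_[c])
                    + (x - self.means[c]) ** 2 / self.vars_[c]
                )
                score = log_lik + np.log(self.priors[c])
                if score > best_score:
                    best, best_score = c, score
            out.append(best)
        return np.array(out)
```

Working in log space keeps the products of small probabilities numerically stable while leaving the argmax unchanged.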
Depending on the gray levels of the image, there is generally a sharp edge transition at an image boundary, along which the image can be segmented. The kernel is a filter with Gaussian super-parameters that denoises and smooths the image while strengthening its edge feature attributes. The calculation proceeds in four steps: first, smooth the image with a Gaussian filter; second, compute the gradient magnitude and direction from the finite differences of the first-order partial derivatives; third, apply non-maximum suppression to the gradient magnitude; fourth, detect and connect edges with a double-threshold algorithm.
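The four steps above are the classic Canny-style pipeline; a simplified NumPy sketch (the threshold fractions `low` and `high` of the maximum gradient magnitude are assumptions chosen for illustration):

```python
import numpy as np
from scipy import ndimage

def canny_like(image, sigma=1.4, low=0.1, high=0.3):
    """Four-step edge detection: Gaussian smoothing, first-difference
    gradients, non-maximum suppression, double threshold with connection."""
    # Step 1: smooth the image with a Gaussian filter.
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    # Step 2: gradient magnitude and direction from first differences.
    gy, gx = np.gradient(smoothed)
    mag = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)
    # Step 3: non-maximum suppression along the quantized gradient direction.
    nms = np.zeros_like(mag)
    q = (np.round(angle / (np.pi / 4)) % 4).astype(int)
    offsets = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
    h, w = mag.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            di, dj = offsets[q[i, j]]
            if mag[i, j] >= mag[i + di, j + dj] and mag[i, j] >= mag[i - di, j - dj]:
                nms[i, j] = mag[i, j]
    # Step 4: double threshold, then keep weak edges connected to strong ones.
    hi_t, lo_t = high * mag.max(), low * mag.max()
    strong = nms >= hi_t
    weak = (nms >= lo_t) & ~strong
    labels, n = ndimage.label(strong | weak)
    keep = np.zeros_like(strong)
    for lab in range(1, n + 1):
        component = labels == lab
        if (component & strong).any():
            keep |= component
    return keep
```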
The image segmentation process is as follows:
(1) Obtaining a target region of the image and a classification weight of each pixel in the region by learning the extracted target boundary characteristics;
(2) After the target areas of the images are obtained, the internal and external feature images of each target area are combined into two complete feature images, and the two branch tasks of image segmentation and image classification are then carried out synchronously on data sets D1 and D2;
(3) In image segmentation, classifying internal and external feature images of a target area by using a Bayesian classifier to distinguish a foreground and a background in an image and generate a mask;
(4) In the image classification, the maximum value is taken according to the pixel probability distribution in the two types of feature images to obtain a new feature image, and then the maximum likelihood estimation classifier is used to obtain the class of the object in the target area.
After the image segmentation is completed, an image target contour edge data set needs to be established. Specifically, the image target edge segmentation task comprises five basic subtasks: generation of the target candidate set, extraction of candidate-target edge features, Bayesian classification of the candidate targets, super-parameter correction of the candidate targets, and construction of the candidate-target edge feature dictionary. A candidate target data set is the Open Images V4 detection set, which contains 1.9 million pictures and 15.4 million bounding boxes for 600 categories. The image data is preprocessed with the pixel L2 regularization operator to form an image classification set and subsets of classified image pixels; the target edge feature kernel mask is calculated with a multidimensional Gaussian distribution probability model; the target edge feature data set is obtained by convolution; and the prior probability of the target edge feature classification set is obtained by Bayesian evidence learning.
(4) Implementation process
Mainly comprises three parts: (1) image target pixel preprocessing: adopting an L2 regularization algorithm and setting a threshold function to generate the initial point set (x_1, y_1), (x_2, y_2), …, (x_t, y_t); (2) using a Gaussian kernel model, constructing the data set D = {(x_1, y_1), …, (x_t, y_t)}; (3) entering Bayesian maximum posterior probability estimation. The specific operations are as follows:
A. setting labels according to target classification;
B. counting image target classification;
C. Calculating the prior probabilities of the various targets: Pr(C_i);
D. According to the target classification, selecting all data in the corresponding data set D, constructing a Gaussian process model, and extracting the target edge point sets X_i and Y_i;
E. Calculating the Gaussian-distribution super-parameter functions (μ_i, σ_i);
F. Using the acquisition function u(μ_i, σ_i) to compute the next evaluation point x_i (i from 1 to t), x_i = argmax u(x | D), and calculating the response y_i;
G. Adding the new data point to the set D: D ← D ∪ {(x_i, y_i)}, i ← i + 1;
H. Calculating the posterior probability with the Bayes posterior estimation formula: Pr(C_i | X) = Pr(X | C_i)·Pr(C_i) / Pr(X);
I. Calculating the posterior probability distribution: R = max(Pr(C_1 | X), Pr(C_2 | X), …, Pr(C_t | X));
J. Checking the calculation result: the calculated posterior probability is matched against the target edge features — the higher the matching score, the closer the identified target is to the actual target; the model sets the acceptance condition score > 90;
K. Correcting: if the score is smaller than 90, deep learning is used to correct the Gaussian super-parameters μ_i, σ_i with a correction coefficient of 0.618;
L. The target edge probability parameters whose scores are greater than 90 are placed into the data dictionary, forming the super-parameter data dictionary set.
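Steps D-K above can be sketched as a small Bayesian-optimization loop with a Gaussian-process surrogate. The score function, the candidate grid, and the 0-100 score scale are illustrative assumptions; shrinking the exploration weight by 0.618 once a score passes 90 is one plausible reading of steps J-K:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """RBF kernel for the Gaussian-process surrogate."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(X, y, Xs, noise=1e-5):
    """Posterior mean/std (mu_i, sigma_i) of the Gaussian process at Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)                      # shape (len(X), len(Xs))
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.clip(1.0 - np.sum(Ks * (Kinv @ Ks), axis=0), 0.0, None)
    return mu, np.sqrt(var)

def bayes_opt(score_fn, candidates, n_iter=10, kappa=2.0):
    """Fit a GP over observed (hyper-parameter, score) pairs, pick the next
    point by the acquisition u = mu + kappa * sigma (step F), append it to
    D (step G), and apply the 0.618 golden-section correction once a score
    exceeds 90 (steps J-K)."""
    X = np.array([candidates[0]])
    y = np.array([float(score_fn(candidates[0]))])
    step = 1.0
    for _ in range(n_iter):
        ys = (y - y.mean()) / (y.std() if y.std() > 0 else 1.0)
        mu, sigma = gp_posterior(X, ys, candidates)
        nxt = candidates[np.argmax(mu + kappa * step * sigma)]
        if nxt in X:              # already evaluated: stop refining
            break
        X = np.append(X, nxt)
        y = np.append(y, float(score_fn(nxt)))
        if y[-1] > 90:            # step J: acceptance threshold
            step *= 0.618         # step K: golden-section correction
    best = np.argmax(y)
    return X[best], y[best]
```

Standardizing the observed scores before fitting the GP keeps the mu and sigma terms of the acquisition on comparable scales.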
(5) Training and testing
The 1.9 million pictures of the Open Images V4 detection set are divided into a training set and a test set. The model is trained by supervised learning, and the test set is input to verify whether the trained model meets the requirements. This completes the establishment of the feature dictionary.
(6) Image detection
The image to be detected is input to the detection model; its bounding box and target mask are extracted and then compared against the feature dictionary, from which the category of each target in the image is obtained. The method is as follows: first, the similarity weights between the bounding box and target mask and the feature dictionary are calculated with the L2 regularization operator; then the target edge feature data set is extracted by the similarity Gaussian process, and the semantic segmentation result is obtained through Bayesian classification matching. Before the semantic segmentation result is output, the edge Gaussian super-parameter function is calculated and the target matching degree is computed from its score; the better the super-parameter set, the higher the precision score of the semantic segmentation.
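The comparison step can be sketched as follows. The dictionary layout (class label mapped to a reference mask) and the Gaussian similarity weight exp(−d²/2σ²) applied to the L2 distance are illustrative assumptions:

```python
import numpy as np

def match_against_dictionary(mask, feature_dict, sigma=4.0):
    """Compute the L2 distance between the query mask and each dictionary
    entry (the 'L2 regular operator'), turn it into a Gaussian similarity
    weight, and return the best-matching category with all the weights."""
    q = mask.astype(float).ravel()
    weights = {}
    for label, ref in feature_dict.items():
        d = np.linalg.norm(q - ref.astype(float).ravel())
        weights[label] = float(np.exp(-d ** 2 / (2.0 * sigma ** 2)))
    best = max(weights, key=weights.get)
    return best, weights
```

An exact match gets weight 1.0 and any other entry strictly less, so the argmax over weights plays the role of the classification match described above.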
Through the above steps, the image segmentation method based on super-parameter optimization with Bayesian deep learning is completed, and the target edge segmentation super-parameter dictionary is obtained by training on an image set. The super-parameter dictionary enables target identification and segmentation: after training, given input images and speech, the system separates and recognizes targets and backgrounds through model computation. The method effectively overcomes the long runtime, large performance fluctuation, and heavy resource consumption of traditional deep learning optimization algorithms; after training, the model can be deployed as a plug-in in image semantic recognition software on a smartphone.
While the present invention has been described in detail with reference to the drawings and the embodiments, those skilled in the art will understand that various specific parameters in the above embodiments may be changed without departing from the spirit of the invention; such variations constitute common embodiments of the present invention and will not be described in detail here.

Claims (4)

1. A super-parameter image segmentation method based on Bayes deep learning is characterized by comprising the following steps:
step 1: data preprocessing, namely regularizing data elements in an image to generate an image segmentation class data set;
step 2: extracting target edge features from the image by using Gaussian masks;
step 3: extracting a boundary box and a target mask of the image through the target edge characteristics by using Bayesian estimation;
step 4: the boundary frame and the target mask are put into a feature dictionary for comparison, so that the category of each target in the image can be obtained;
in step 2, the specific steps of extracting the edge feature of the target are as follows:
the first step: assuming that the edge probability of the image pixel f (x, y) satisfies the gaussian distribution, the two-dimensional gaussian function is:
and a second step of: gradient functions are obtained for the x and y directions of the image:
and a third step of: convolving the image dataset:
fourth step: calculating the probability density distribution of the image target edge, namely the target edge characteristics:
before the class label of the target is acquired in step 3, the prior probability of the target's occurrence needs to be calculated:
P(C_i) = N_i / N
wherein C_i is any element of the target class set (C_1, C_2, C_3, ..., C_n), N_i represents the number of occurrences of the target, and N represents the total size of the target set,
in step 3, in the process of obtaining the category label of the target, the conditional probability of the target's occurrence is calculated:
wherein x_a represents the abscissa of the target point, y_a represents the ordinate of the target point, and P(x_a), P(y_a) represent the probabilities of the target's edge features,
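The prior follows directly from the claim's definitions of N_i and N; combining it with the edge-feature probabilities as a product P(x_a)·P(y_a) is an independence assumption made here for illustration, not stated by the patent.

```python
from collections import Counter

def class_priors(labels):
    """Prior probability of each target class: P(C_i) = N_i / N."""
    counts = Counter(labels)
    n = len(labels)
    return {c: k / n for c, k in counts.items()}

def posterior_score(prior, p_xa, p_ya):
    """Unnormalised score for one class, combining the prior with the
    edge-feature probabilities P(x_a) and P(y_a) (independence assumed)."""
    return prior * p_xa * p_ya

# Toy label counts: "car" occurs 3 times out of 5, so P(car) = 0.6.
priors = class_priors(["car", "car", "person", "car", "dog"])
```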
in step 3, the specific steps of extracting the bounding box of the image and the target mask are as follows:
(1) Obtaining the target regions of the image and the classification weight of each pixel in a region by learning the extracted target boundary features;
(2) After the target regions are obtained, combining the internal and external feature maps of each target region into two complete feature maps, which then feed the two branch data sets D1 and D2 for image segmentation and image classification in parallel;
(3) In image segmentation, classifying the internal and external feature maps of the target region with a Bayesian classifier to distinguish foreground from background in the image and generate a mask;
(4) In image classification, taking the per-pixel maximum over the probability distributions of the two feature maps to obtain a new feature map, and then applying a maximum-likelihood classifier to obtain the class of the object in the target region;
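Sub-steps (3) and (4) above can be sketched as small functions, assuming per-pixel probability maps for the inside and outside of a region are already available (producing them is the learning step, not shown here).

```python
import math

def foreground_mask(p_in, p_out):
    """Sub-step (3): label a pixel as foreground when its inside-region
    probability beats its outside-region probability."""
    return [[1 if a > b else 0 for a, b in zip(r1, r2)]
            for r1, r2 in zip(p_in, p_out)]

def fuse_max(p_in, p_out):
    """Sub-step (4), first half: per-pixel maximum of the two feature maps."""
    return [[max(a, b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(p_in, p_out)]

def ml_class(fused, class_models):
    """Sub-step (4), second half: maximum-likelihood pick. class_models maps
    a label to a function giving the per-pixel likelihood of a value."""
    def loglik(model):
        return sum(math.log(max(model(v), 1e-12)) for row in fused for v in row)
    return max(class_models, key=lambda c: loglik(class_models[c]))
```

The class models here are placeholders; any per-pixel likelihood function (e.g. a fitted Gaussian per class) fits the same interface.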
in step 4, the method for constructing the feature dictionary is as follows:
(1) Establishing a training set and a test set of images;
(2) Performing the operations of steps 1 to 3 on each image in the training set to obtain its bounding box and target mask;
(3) Collecting the bounding boxes and target masks from step (2) to obtain a feature dictionary composed of bounding boxes and target masks;
(4) Inputting the test set into the feature dictionary of step (3) and checking its accuracy; if the accuracy does not meet the requirement, adjusting the model parameters and retraining until the accuracy of the feature dictionary meets the requirement.
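The dictionary-building loop above can be sketched as follows; extract_features stands in for steps 1-3 and match for the comparison of claim 3, both supplied by the caller rather than taken from the patent.

```python
def build_feature_dictionary(train_set, extract_features):
    """Sub-steps (2)-(3): run the feature extraction on every training image
    and collect the resulting features (bounding box, mask) per label."""
    return {label: extract_features(img) for label, img in train_set}

def dictionary_accuracy(test_set, dictionary, match):
    """Sub-step (4): fraction of test images whose best-matching dictionary
    entry carries the true label; retrain while this is below the requirement."""
    hits = 0
    for label, img in test_set:
        best = max(dictionary, key=lambda c: match(img, dictionary[c]))
        hits += (best == label)
    return hits / len(test_set)
```

With a good dictionary the accuracy approaches 1.0; sub-step (4)'s loop would re-run training with adjusted parameters whenever it falls short.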
2. The method for super-parameter image segmentation based on Bayesian deep learning according to claim 1, wherein in step 1 the image segmentation class data set comprises N target segmentation class attributes and M data attributes for each target class, and when the joint probability of the N class attributes with the M data attributes is maximal, a Bayesian classification matcher is adopted to select the target and segment the image.
3. The method for super-parameter image segmentation based on Bayesian deep learning according to claim 1, wherein in step 4 the bounding box and the target mask are compared with the feature dictionary as follows: first, similarity weights between the bounding box and target mask and the feature dictionary are calculated with an L2 regular operator; then the target edge feature data set is extracted in the similarity Gaussian process, and the semantic segmentation result is obtained through Bayesian classification matching.
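One plausible reading of claim 3's "L2 regular operator" comparison is an L2 (Euclidean) distance converted into a Gaussian similarity weight; the Gaussian form and σ below are illustrative assumptions, not the patent's formula.

```python
import math

def l2_similarity(feat, entry, sigma=1.0):
    """Similarity weight exp(-||feat - entry||^2 / (2*sigma^2)): equal to 1
    for identical features and decaying with the L2 distance between them."""
    d2 = sum((a - b) ** 2 for a, b in zip(feat, entry))
    return math.exp(-d2 / (2 * sigma ** 2))

def best_match(feat, dictionary):
    """Return the dictionary label carrying the highest similarity weight."""
    return max(dictionary, key=lambda c: l2_similarity(feat, dictionary[c]))
```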
4. The super-parameter image segmentation method based on Bayesian deep learning according to claim 3, wherein before the semantic segmentation result is output, an edge Gaussian super-parameter function is calculated and the target matching degree is then computed from its score value; the better the super-parameter set, the higher the precision score of the semantic segmentation.
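Claim 4's rule, read as "the better the super-parameter set, the higher the precision score", reduces to an argmax over candidate sets; the score function and candidate grid below are placeholders, not the patent's.

```python
def select_hyperparams(candidates, precision_score):
    """Return (best_set, best_score): keep the super-parameter set whose
    precision score is highest."""
    best, best_score = None, float("-inf")
    for hp in candidates:
        s = precision_score(hp)
        if s > best_score:
            best, best_score = hp, s
    return best, best_score
```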
CN202010501892.9A 2020-06-04 2020-06-04 Super-parameter image segmentation method based on Bayes deep learning Active CN111652317B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010501892.9A CN111652317B (en) 2020-06-04 2020-06-04 Super-parameter image segmentation method based on Bayes deep learning

Publications (2)

Publication Number Publication Date
CN111652317A CN111652317A (en) 2020-09-11
CN111652317B true CN111652317B (en) 2023-08-25

Family

ID=72347345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010501892.9A Active CN111652317B (en) 2020-06-04 2020-06-04 Super-parameter image segmentation method based on Bayes deep learning

Country Status (1)

Country Link
CN (1) CN111652317B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112464948A (en) * 2020-11-11 2021-03-09 常州码库数据科技有限公司 Natural scene target contour extraction method and system based on bionics
CN113468939A (en) * 2020-11-30 2021-10-01 电子科技大学 SAR target recognition method based on supervised minimization deep learning model
CN112906704A (en) * 2021-03-09 2021-06-04 深圳海翼智新科技有限公司 Method and apparatus for cross-domain target detection
CN113111928B (en) * 2021-04-01 2023-12-29 中国地质大学(北京) Semi-supervised learning mineral resource quantitative prediction method based on geometrics database
CN113269782B (en) * 2021-04-21 2023-01-03 青岛小鸟看看科技有限公司 Data generation method and device and electronic equipment
CN113158960A (en) * 2021-05-06 2021-07-23 吴国军 Medical image recognition model construction and recognition method and device
CN113450268A (en) * 2021-05-24 2021-09-28 南京中医药大学 Image noise reduction method based on posterior probability
CN113392933B (en) * 2021-07-06 2022-04-15 湖南大学 Self-adaptive cross-domain target detection method based on uncertainty guidance

Citations (3)

Publication number Priority date Publication date Assignee Title
WO2010129711A1 (en) * 2009-05-05 2010-11-11 The Trustees Of Columbia University In The City Of New York Devices, systems, and methods for evaluating vision and diagnosing and compensating impairment of vision
US7979363B1 (en) * 2008-03-06 2011-07-12 Thomas Cecil Minter Priori probability and probability of error estimation for adaptive bayes pattern recognition
CN110245587A (en) * 2019-05-29 2019-09-17 西安交通大学 A kind of remote sensing image object detection method based on Bayes's transfer learning

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7039239B2 (en) * 2002-02-07 2006-05-02 Eastman Kodak Company Method for image region classification using unsupervised and supervised learning

Non-Patent Citations (1)

Title
Bao Xiaomin; Wang Yaming. Fabric image segmentation based on minimum-risk Bayes decision. Journal of Textile Research, 2006, (02), full text. *

Similar Documents

Publication Publication Date Title
CN111652317B (en) Super-parameter image segmentation method based on Bayes deep learning
Pan et al. Object detection based on saturation of visual perception
Lopes et al. Automatic histogram threshold using fuzzy measures
CN109033978B (en) Error correction strategy-based CNN-SVM hybrid model gesture recognition method
US20070065003A1 (en) Real-time recognition of mixed source text
CN104504366A (en) System and method for smiling face recognition based on optical flow features
CN110929593A (en) Real-time significance pedestrian detection method based on detail distinguishing and distinguishing
CN108734200B (en) Human target visual detection method and device based on BING (building information network) features
JP2009069996A (en) Image processing device and image processing method, recognition device and recognition method, and program
CN108230330B (en) Method for quickly segmenting highway pavement and positioning camera
CN113221956B (en) Target identification method and device based on improved multi-scale depth model
Xiang et al. Moving object detection and shadow removing under changing illumination condition
Stenroos Object detection from images using convolutional neural networks
CN110633727A (en) Deep neural network ship target fine-grained identification method based on selective search
CN111368845B (en) Feature dictionary construction and image segmentation method based on deep learning
CN116012291A (en) Industrial part image defect detection method and system, electronic equipment and storage medium
CN111126155B (en) Pedestrian re-identification method for generating countermeasure network based on semantic constraint
CN113743426A (en) Training method, device, equipment and computer readable storage medium
CN112183237A (en) Automatic white blood cell classification method based on color space adaptive threshold segmentation
Yang et al. An improved algorithm for the detection of fastening targets based on machine vision
CN105844299B (en) A kind of image classification method based on bag of words
CN115661860A (en) Method, device and system for dog behavior and action recognition technology and storage medium
CN114927236A (en) Detection method and system for multiple target images
Tang et al. Research of color image segmentation algorithm based on asymmetric kernel density estimation
KR20230046818A (en) Data learning device and method for semantic image segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant