CN117314763A - Oral hygiene management method and system based on machine learning - Google Patents

Oral hygiene management method and system based on machine learning

Info

Publication number
CN117314763A
Authority
CN
China
Prior art keywords
model
image
pixel
parameter
value
Prior art date
Legal status
Pending
Application number
CN202311038730.6A
Other languages
Chinese (zh)
Inventor
吴姗
董银银
钟锟
Current Assignee
Affiliated Stomatological Hospital Of Guizhou Medical University
Original Assignee
Affiliated Stomatological Hospital Of Guizhou Medical University
Priority date
Filing date
Publication date
Application filed by Affiliated Stomatological Hospital Of Guizhou Medical University
Priority to CN202311038730.6A
Publication of CN117314763A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/0985Hyperparameter optimisation; Meta-learning; Learning-to-learn
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20216Image averaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30036Dental; Teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an oral hygiene management method and system based on machine learning. The invention belongs to the technical field of computer vision, and particularly relates to an oral hygiene management method and system based on machine learning.

Description

Oral hygiene management method and system based on machine learning
Technical Field
The invention relates to the technical field of computer vision, in particular to an oral hygiene management method and system based on machine learning.
Background
Traditional personal oral hygiene management relies mainly on an individual's subjective judgment and experience, and therefore cannot provide scientific, accurate oral health assessment or personalized management advice; in addition, patients often lack knowledge of the importance of oral hygiene and of correct methods, leading to a decline in oral health. Traditional image denoising algorithms suffer from slow processing caused by high computational complexity, lack the adaptability to handle different similar images and noise conditions, and have limited effect on complex noise and images. The original DBN model suffers from excessive complexity, overfitting, and poor parameter-convergence adjustment, which degrade model performance. General search algorithms face a contradiction: too many search parameters make the search slow, while too few prevent reaching the global optimum.
Disclosure of Invention
Aiming at the problems that traditional image denoising algorithms process slowly owing to high complexity, lack adaptability to different similar images and noise conditions, and have limited effect on complex noise and images, this scheme calculates the sum of noise intensities to accurately evaluate the image's noise level, extracts target pixels based on an edge segmentation technique, comprehensively considers pixel values, and finally denoises the marked pixel values, achieving accurate and rapid denoising; adjusting the denoising effect based on an angle threshold improves the algorithm's adjustability and enhances its adaptability. Aiming at the problems that the original DBN model's excessive complexity, overfitting, and poor parameter-convergence adjustment degrade its performance, this scheme adjusts the learning rate adaptively based on exponential decay, applies a matrix norm constraint to the model's weight matrix, and adds regularization and sparsity penalties, improving the model's fitness, enhancing robustness, and preventing overfitting so that the model learns sparser and flatter representations and its performance is optimized. Aiming at the contradiction that general search algorithms search slowly when there are too many search parameters yet reach only a local rather than the global optimum when there are too few, this scheme adopts parallelized training, calculates a reduction coefficient and social strength, and judges the global optimum based on fitness, increasing search efficiency and making the search process more balanced and comprehensive so as to improve the performance and accuracy of the overall search algorithm.
The technical scheme adopted by the invention is as follows: the invention provides an oral hygiene management method based on machine learning, which comprises the following steps:
step S1: collecting data;
step S2: establishing an edge-oriented denoising model, segmenting an image by a multi-scale morphological method, extracting target pixels based on an edge segmentation technology, comprehensively considering the pixels extracted from each segment, calculating a standard average pixel value, identifying and marking similar points and dissimilar points in the standard pixel set segments, and finally denoising the marked pixel set;
step S3: initializing the DBN model based on the matrix norm constraint: defining an energy function, a joint distribution, an independent probability distribution and a conditional probability distribution, calculating parameter gradient values, updating the learning rate, recombining the parameter set θ using the contrastive divergence model, and finally applying a matrix norm constraint to the weight matrix of the model and adding regularization and sparsity penalties to complete the DBN model initialization based on the matrix norm constraint;
step S4: based on the super-parameter optimization of the parallelization training, the searched parameter space is divided into a plurality of subspaces, the optimal parameters in the subspaces are searched respectively by using the parallelization training and a unique search algorithm, and finally, the optimal parameters are combined and the model performance is further verified, so that a final model is determined.
Further, in step S1, the data acquisition is to acquire an oral image dataset, the oral image dataset including oral images and corresponding labels, the labels being oral hygiene assessment grades;
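As a minimal illustration of the step-S1 data layout, the sketch below pairs each oral image with its hygiene assessment grade; the file names and the three-grade scale are assumptions for the example, since the text above only specifies "oral images and corresponding labels":

```python
from collections import Counter

# Hypothetical samples: (image_path, oral-hygiene assessment grade).
# The file names and the grade scale ("good"/"fair"/"poor") are assumed,
# not taken from the patent.
dataset = [
    ("oral_0001.png", "good"),
    ("oral_0002.png", "poor"),
    ("oral_0003.png", "fair"),
    ("oral_0004.png", "good"),
]

def grade_distribution(samples):
    """Count how many images fall into each assessment grade."""
    return Counter(label for _, label in samples)

print(grade_distribution(dataset))  # Counter({'good': 2, 'poor': 1, 'fair': 1})
```

A class count like this is a natural first sanity check before training, since a heavily imbalanced grade distribution would bias the classifier of the later steps.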
further, in step S2, the establishing an edge-guided denoising model specifically includes the following steps:
step S21: the sum of the noise image intensities is obtained by the following formula:
wherein DNIDS is the sum of the acquired noise image intensities, M is the M-th segment image, gI is the acquired image, gIts is the acquired image intensity, and Th is the intensity threshold used to enhance quality;
step S22: image segmentation, based on edge segmentation technology, extracting target pixels in each segment, wherein the steps are as follows:
wherein IS is the target pixel value of each pixel point after image segmentation, I(X, Y) is the pixel value of the original image, (X, Y) are the coordinates, M is the M-th segment, δ is the change rate, size(IS(I)) is the number of pixels of IS(I), IS(I) is the segmentation value of pixel point I, PixS[M] is the M-th segment pixel set, gIsr(I) is the segmentation value of the acquired pixel point, FPixS[M] is the M-th segment maximum pixel value, ISgr(I(I)) is the average intensity value of the I-th segmentation region, R is the total number of pixels in the M-th segment, η is an angle, G is a function of the damaged pixel set, maxIts is the maximum image intensity, and minIts is the minimum image intensity;
step S23: calculating a standard average pixel value, comprehensively considering the pixels extracted from each segment, wherein the following formula is adopted:
where StdPixS is the standard average pixel value, abs is the absolute value, IS(X) is the segmentation value of pixel point X after image segmentation, and maxIts(FPixS(I)) is the maximum intensity value of the pixel points in FPixS(I);
step S24: marking a pixel set, identifying dissimilar points in a standard pixel set fragment, and marking the similar points and the dissimilar point pixel sets, wherein the formula is as follows:
wherein LabPixS is the set of labeled pixels;
step S25: denoising, namely denoising the marked pixel set, wherein the formula is as follows:
where DnIS is the denoised result, LabS(X, Y) is the pixel of the original image at coordinates (X, Y), gIts() is a function of the pixel intensities, LabPixS(I) is the I-th set of related pixels, and η_i is an angle threshold.
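The marking-and-denoising flow of steps S23–S25 can be sketched as follows, with the segment mean standing in for the standard average pixel value StdPixS and a deviation multiple standing in for the angle threshold η_i; both substitutions are simplifying assumptions, not the patent's exact formulas:

```python
import numpy as np

def denoise_segment(pixels, dev_multiple=2.0):
    """Hedged sketch of steps S23-S25 for one image segment:
    compute a standard average pixel value (segment mean here, an
    assumption), mark pixels deviating beyond a threshold as
    dissimilar, then replace the marked set with the average."""
    pixels = np.asarray(pixels, dtype=float)
    std_avg = pixels.mean()                  # stand-in for StdPixS (step S23)
    # Mark dissimilar points: deviation beyond a multiple of the
    # segment's spread (stand-in for the angle-threshold test, step S24).
    dissimilar = np.abs(pixels - std_avg) > dev_multiple * pixels.std()
    out = pixels.copy()
    out[dissimilar] = std_avg                # denoise the marked set (step S25)
    return out, dissimilar
```

Pixels flagged as dissimilar are replaced by the segment average, mirroring how the marked pixel set LabPixS is denoised while similar points pass through unchanged.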
Further, in step S3, the DBN is a probabilistic generative model composed of a stack of restricted Boltzmann machines (RBMs) and a back-propagation neural network; it comprises a visible layer, n hidden layers, and an output layer. The visible layer is placed at the end of the model, features are transferred through the hidden layers during learning, and appropriate class labels are finally assigned at the output layer. The initialization of the DBN model based on the matrix norm constraint specifically includes the following steps:
step S31: defining the energy function E of the RBM, with the input-layer vector v = {v1, v2, …, vm} and the hidden-layer vector h = {h1, h2, …, hn}, using the following formula:
E(v, h; θ) = −Σ_i a_i·v_i − Σ_j b_j·h_j − Σ_i Σ_j v_i·ω_{i,j}·h_j
where v is the state of the visible layer, h is the state of the hidden layer, θ is the parameter set of the RBM, a_i is the unit bias of the input layer, b_j is the unit bias of the hidden layer, i and j are the node indices of the visible layer and the hidden layer respectively, and ω_{i,j} is the connection weight between nodes of the input layer and the hidden layer;
step S32: the joint distribution p(v, h) is defined using the following formula:
p(v, h; θ) = exp(−E(v, h; θ)) / R(θ), with R(θ) = Σ_{v,h} exp(−E(v, h; θ))
wherein R(θ) is the normalization factor;
step S33: the independent probability distribution of the input layer is defined as follows:
p(v; θ) = Σ_h exp(−E(v, h; θ)) / R(θ);
step S34: the conditional probability distributions of all layers are defined using the following formulas:
p(h_j = 1 | v) = σ(b_j + Σ_i v_i·ω_{i,j}), p(v_i = 1 | h) = σ(a_i + Σ_j ω_{i,j}·h_j)
where σ() is the sigmoid function;
step S35: the parameter gradient values are calculated using the following formulas:
Δω_{i,j} = ⟨v_i·h_j⟩_data − ⟨v_i·h_j⟩_model, Δa_i = ⟨v_i⟩_data − ⟨v_i⟩_model, Δb_j = ⟨h_j⟩_data − ⟨h_j⟩_model
where ⟨·⟩_data is the expectation derived by the RBM from the data and ⟨·⟩_model is the expectation provided by the reconstructed RBM;
step S36: updating the learning rate and recombining the parameter set θ using the contrastive divergence model; when the initial training of an RBM is completed, its current hidden layer becomes the visible layer of the subsequent RBM, and the depth features are classified each time an RBM finishes training, using the following formula:
wherein α and β denote the learning rate and the batch size respectively, t is the iteration number, dr is the learning decay rate, and dqb is the current training step;
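The exponential-decay learning-rate update of step S36 is only named, not written out, above; a plausible shape, treated purely as an assumption, is:

```python
def decayed_lr(alpha0: float, dr: float, t: int, dqb: int) -> float:
    """Hypothetical exponential-decay schedule for step S36:
    alpha_t = alpha0 * dr ** (t / dqb). The functional form is an
    assumption; the patent only names alpha, dr, t and dqb."""
    return alpha0 * dr ** (t / dqb)
```

Any schedule in which the rate shrinks geometrically with the iteration count would match the "exponential decay" description equally well.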
step S37: performing matrix norm constraint on the weight matrix of the model, adding regularization and sparsity penalties, limiting the value range of the parameters, and controlling the complexity of the model, using the following formula:
Loss = Loss′ + λ·‖w‖₂² + β1·KL(p1 ‖ q1)
where Loss′ is the original loss function, ‖w‖₂ is the L2 norm of the weight matrix w, λ and β1 are the weight parameters of the regularization and sparsity penalties, p1 is the desired sparsity probability, and q1 is the actual activation probability.
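Putting steps S31–S37 together, a minimal single-RBM sketch with mean-field CD-1 updates, L2 weight decay and a sparsity penalty might look as follows; the names lr, lam, beta1 and p1 are illustrative stand-ins for α, λ, β1 and p1, and using probabilities instead of binary samples is a simplification for determinism:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Sketch of one RBM layer; not the patent's full DBN."""
    def __init__(self, m, n):
        self.w = 0.01 * rng.standard_normal((m, n))  # weights omega_{i,j}
        self.a = np.zeros(m)   # visible-unit biases a_i
        self.b = np.zeros(n)   # hidden-unit biases b_j

    def cd1_step(self, v0, lr=0.1, lam=1e-4, beta1=1e-3, p1=0.05):
        h0 = sigmoid(v0 @ self.w + self.b)      # p(h|v0), step S34
        v1 = sigmoid(h0 @ self.w.T + self.a)    # reconstruction (CD-1)
        h1 = sigmoid(v1 @ self.w + self.b)
        # <v h>_data - <v h>_model, step S35
        grad_w = np.outer(v0, h0) - np.outer(v1, h1)
        # L2 weight decay stands in for the matrix-norm constraint (step S37)
        self.w += lr * (grad_w - lam * self.w)
        self.a += lr * (v0 - v1)
        # Sparsity pressure on hidden biases: push mean activation toward p1
        q1 = h0.mean()
        self.b += lr * (h0 - h1) - beta1 * (q1 - p1)
```

A DBN in the sense of step S36 would stack several such layers, feeding each trained layer's hidden activations in as the next layer's visible input.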
Further, in step S4, the parallelization training-based super-parameter optimization specifically includes the following steps:
step S41: dividing the subspace, dividing the parameter space used in the step S3 into a plurality of subspaces, and initializing an initial solution set X= { X1, X2, …, xn } of the parameters for each subspace;
step S42: calculating fitness values, classifying the test data set by using the DBN model established by the parameters corresponding to each solution, and taking the classification accuracy as the fitness value of the solution;
step S43: the reduction coefficient is calculated, the maximum iteration number is preset, and the formula is as follows:
wherein c is the reduction coefficient, c_max and c_min are equal to 1 and 0.00001 respectively, t represents the current iteration number, and t_max represents the maximum number of iterations;
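The step-S43 formula itself is only described in words above; assuming the linearly decreasing schedule used by grasshopper-style swarm optimisers (which matches the stated c_max = 1 and c_min = 0.00001), a sketch is:

```python
def reduction_coefficient(t: int, t_max: int,
                          c_max: float = 1.0, c_min: float = 1e-5) -> float:
    """Sketch of the step-S43 reduction coefficient, assuming the
    linearly decreasing schedule c = c_max - t * (c_max - c_min) / t_max
    common in grasshopper-style swarm optimisers."""
    return c_max - t * (c_max - c_min) / t_max
```

The coefficient shrinks from c_max toward c_min over the run, so early iterations explore widely while late iterations fine-tune around the current best solutions.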
step S44: the social strength is calculated, the attraction strength f is preset, and the formula is as follows:
where s is the social strength, y is the distance between solutions, and l is the upper bound of the parameter space;
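Likewise, step S44's social strength can be sketched under the assumption that it takes the classic swarm-interaction form s(y) = f·e^(−y/l) − e^(−y); the concrete values of f and l below are illustrative only:

```python
import math

def social_strength(y: float, f: float = 0.5, l: float = 1.5) -> float:
    """Sketch of the step-S44 social strength, assuming the classic
    swarm-interaction form s(y) = f * exp(-y / l) - exp(-y).
    f (attraction strength) and l are assumed example values."""
    return f * math.exp(-y / l) - math.exp(-y)
```

Under this form, nearby solutions repel each other (s < 0 at small distances y), moderately distant solutions attract (s > 0), and the interaction vanishes at large distances, which is what keeps the search both spread out and cohesive.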
step S45: the solution position is adjusted using the following formula:
where i and j are the indices of different solutions, n is the number of solutions in the set, u is the lower bound of the parameter space, and the last term denotes the maximum fitness value;
step S46: evaluation and judgment: preset an evaluation threshold, use a gradient-ascent algorithm to search each parameter space's solution set for the parameters corresponding to the maximum of f as the optimal parameters, and merge the optimal parameters of the respective parameter spaces to establish a DBN model; if the DBN's classification accuracy on the test sample dataset is lower than the evaluation threshold, re-divide the dataset and go to step S41; otherwise, go to step S47;
step S47: running in real time, collecting oral cavity images in real time, inputting the oral cavity images into the DBN model, and giving sanitary advice based on oral cavity sanitary grade output by the model.
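Steps S41–S46 amount to: split the parameter space into subspaces, search each one concurrently, and keep the fittest merged result. The sketch below shows only that control flow; the toy one-dimensional objective stands in for the DBN classification accuracy of step S42, and plain random search stands in for the reduction-coefficient/social-strength update:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(params):
    """Stand-in for the classification accuracy of a DBN built from
    params (the real objective of step S42); a toy quadratic peak."""
    lr, = params
    return 1.0 - (lr - 0.3) ** 2

def search_subspace(bounds, n_trials=200, seed=0):
    """Random search within one subspace; a stand-in for the patent's
    swarm-style update of steps S43-S45."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        cand = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        if best is None or fitness(cand) > fitness(best):
            best = cand
    return best

def parallel_search(space, n_sub=4):
    """Sketch of steps S41-S46: split the (here one-dimensional) range
    into subspaces, search them concurrently, keep the fittest result."""
    lo, hi = space[0]
    step = (hi - lo) / n_sub
    subspaces = [[(lo + i * step, lo + (i + 1) * step)] for i in range(n_sub)]
    with ThreadPoolExecutor() as ex:
        results = list(ex.map(search_subspace, subspaces))
    return max(results, key=fitness)
```

Splitting the range before searching is what resolves the stated contradiction: each worker handles a small, fast-to-search slice, yet the merged result still covers the whole space.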
The invention provides an oral hygiene management system based on machine learning, which comprises a data acquisition module, an edge guide denoising module, a DBN model initialization module and a super-parameter optimization module, wherein the data acquisition module is used for acquiring data of the oral hygiene management system;
the data acquisition module acquires an oral cavity image data set, wherein the oral cavity image data set comprises an oral cavity image and a corresponding label, the label is an oral cavity health evaluation grade, and the acquired data is sent to the edge guide denoising module;
the edge-oriented denoising module receives the data sent by the data acquisition module, segments the image through a multi-scale morphological method, extracts target pixels based on an edge segmentation technology, comprehensively considers the pixels extracted from each segment, calculates a standard average pixel value, identifies and marks similar points and dissimilar points in the standard pixel set segments, performs denoising processing on the marked pixel set, and sends the data to the DBN model initialization module;
the DBN model initialization module receives the data sent by the edge-oriented denoising module, defines an energy function, a joint distribution, an independent probability distribution and a conditional probability distribution, calculates parameter gradient values, recombines the parameter set θ using the contrastive divergence model, and finally applies a matrix norm constraint to the model's weight matrix and adds regularization and sparsity penalties to complete the DBN model initialization based on the matrix norm constraint; it then sends the data to the super-parameter optimization module;
the super-parameter optimization module receives the data sent by the DBN model initialization module, divides the searched parameter space into a plurality of subspaces, searches the optimal parameters in the subspaces respectively by using parallelization training and a unique search algorithm, combines the optimal parameters and further verifies the model performance, determines a final model, evaluates the oral hygiene grade on the oral cavity image acquired in real time and gives hygiene advice.
By adopting the scheme, the beneficial effects obtained by the invention are as follows:
(1) Aiming at the problems that traditional image denoising algorithms process slowly owing to high complexity, lack the adaptability to handle different similar images and noise conditions, and have limited effect on complex noise and images, this scheme calculates the sum of noise intensities to accurately evaluate the image's noise level, extracts target pixels based on the edge segmentation technique, comprehensively considers pixel values, and finally denoises the marked pixel values, achieving accurate and rapid denoising; adjusting the denoising effect based on the angle threshold improves the algorithm's adjustability and enhances its adaptability.
(2) Aiming at the problems that the original DBN model's excessive complexity, overfitting, and poor parameter-convergence adjustment degrade model performance, this scheme adjusts the learning rate adaptively based on exponential decay, applies a matrix norm constraint to the model's weight matrix, and adds regularization and sparsity penalties, improving the model's fitness, enhancing robustness, and preventing overfitting so that the model learns sparser and flatter representations and its performance is optimized.
(3) Aiming at the contradiction that general search algorithms search slowly when there are too many search parameters yet reach only a local rather than the global optimum when there are too few, this scheme adopts parallelized training, calculates the reduction coefficient and social strength, and judges the global optimum based on fitness, increasing search efficiency and making the search process more balanced and comprehensive so as to improve the performance and accuracy of the overall search algorithm.
Drawings
Fig. 1 is a schematic flow chart of an oral hygiene management method based on machine learning according to the present invention;
fig. 2 is a schematic diagram of an oral hygiene management system based on machine learning according to the present invention;
FIG. 3 is a flow chart of step S3;
fig. 4 is a flow chart of step S4;
FIG. 5 is a flow chart of step S5;
FIG. 6 is a schematic diagram of an optimization parameter search location;
fig. 7 is a graph of an optimization parameter search.
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention; all other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present invention, it should be understood that the terms "upper," "lower," "front," "rear," "left," "right," "top," "bottom," "inner," "outer," and the like indicate orientation or positional relationships based on those shown in the drawings, merely to facilitate description of the invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the invention.
Referring to fig. 1, the present invention provides a machine learning-based oral hygiene management method, which includes the following steps:
step S1: collecting data;
step S2: establishing an edge-oriented denoising model, segmenting an image by a multi-scale morphological method, extracting target pixels based on an edge segmentation technology, comprehensively considering the pixels extracted from each segment, calculating a standard average pixel value, identifying and marking similar points and dissimilar points in the standard pixel set segments, and finally denoising the marked pixel set;
step S3: initializing the DBN model based on the matrix norm constraint: defining an energy function, a joint distribution, an independent probability distribution and a conditional probability distribution, calculating parameter gradient values, updating the learning rate, recombining the parameter set θ using the contrastive divergence model, and finally applying a matrix norm constraint to the weight matrix of the model and adding regularization and sparsity penalties to complete the DBN model initialization based on the matrix norm constraint;
step S4: based on the super-parameter optimization of the parallelization training, the searched parameter space is divided into a plurality of subspaces, the optimal parameters in the subspaces are searched respectively by using the parallelization training and a unique search algorithm, and finally, the optimal parameters are combined and the model performance is further verified, so that a final model is determined.
In a second embodiment, referring to fig. 1 and based on the above embodiment, in step S1 the data acquisition acquires an oral image dataset, which includes oral images and corresponding labels, the labels being oral hygiene assessment grades.
In a third embodiment, referring to fig. 1 and 3, the method for establishing an edge-guided denoising model in step S2 specifically includes the following steps:
step S21: the sum of the noise image intensities is obtained by the following formula:
wherein DNIDS is the sum of the acquired noise image intensities, M is the M-th segment image, gI is the acquired image, gIts is the acquired image intensity, and Th is the intensity threshold used to enhance quality;
step S22: image segmentation, based on edge segmentation technology, extracting target pixels in each segment, wherein the steps are as follows:
wherein IS is the target pixel value of each pixel point after image segmentation, I(X, Y) is the pixel value of the original image, (X, Y) are the coordinates, M is the M-th segment, δ is the change rate, size(IS(I)) is the number of pixels of IS(I), IS(I) is the segmentation value of pixel point I, PixS[M] is the M-th segment pixel set, gIsr(I) is the segmentation value of the acquired pixel point, FPixS[M] is the M-th segment maximum pixel value, ISgr(I(I)) is the average intensity value of the I-th segmentation region, R is the total number of pixels in the M-th segment, η is an angle, G is a function of the damaged pixel set, maxIts is the maximum image intensity, and minIts is the minimum image intensity;
the sum of the noise image intensities is the basis of the image segmentation, and in step S21, the total intensity of noise in the image is obtained by calculating the sum of the noise image intensities; in step S22, the image is segmented into different regions by an image segmentation technique, the pixels in each region having similar characteristics; the characteristics comprise a target pixel value, a segmentation value of a pixel point and the like, which are all calculated based on the sum of the intensity of the noise image; thus, the noise image intensity sum provides a basis and foundation for image segmentation;
according to the given formula in step S22, the basis of image segmentation is to extract the target pixels in each segment by edge segmentation technique; specifically, calculating and comparing parameters such as a target pixel value of each pixel point, a pixel value and coordinates of an original image according to the parameters to obtain a pixel set and a maximum pixel value of each section; dividing the image into different regions, the pixels in each region having similar characteristics for subsequent denoising;
δ is measured by calculating the difference between the maximum and minimum intensity values for each pixel point; the higher the change rate, the larger the intensity variation of the pixel points in the image;
η is measured by calculating the relationship between the coordinates (X, Y) of each pixel and the size of the entire image; the larger the angle, the wider the spatial distribution of the pixel points in the image;
the purpose of the two parameters δ and η is to help identify edges and target pixels in the image; the edge segmentation technique extracts target pixels according to pixel intensity and position, and these two parameters provide information about intensity variation and spatial distribution, thereby aiding image segmentation;
step S23: calculating a standard average pixel value, comprehensively considering the pixels extracted from each segment, wherein the following formula is adopted:
where StdPixS is the standard average pixel value, abs is the absolute value, IS(X) is the segmentation value of pixel point X after image segmentation, and maxIts(FPixS(I)) is the maximum intensity value of the pixel points in FPixS(I);
step S24: marking a pixel set, identifying dissimilar points in a standard pixel set fragment, and marking the similar points and the dissimilar point pixel sets, wherein the formula is as follows:
wherein LabPixS is the set of labeled pixels;
step S25: denoising, namely denoising the marked pixel set, wherein the formula is as follows:
where DnIS is the denoised result, LabS(X, Y) is the pixel of the original image at coordinates (X, Y), gIts() is a function of the pixel intensities, LabPixS(I) is the I-th set of related pixels, and η_i is an angle threshold;
according to the formula in step S25, the denoising process is implemented by processing the marked pixel set, and by such denoising process, the image can be denoised according to the characteristic and intensity value of the marked pixel set, so as to improve the quality and definition of the image.
By performing the above operations, aiming at the problems that traditional image denoising algorithms process slowly owing to high complexity, lack adaptability to different similar images and noise conditions, and have limited effect on complex noise and images, this scheme calculates the sum of noise intensities to accurately evaluate the image's noise level, extracts target pixels based on the edge segmentation technique, comprehensively considers pixel values, and finally denoises the marked pixel values, achieving accurate and rapid denoising; adjusting the denoising effect based on the angle threshold improves the algorithm's adjustability and enhances its adaptability.
Fourth embodiment: referring to fig. 1 and 4, based on the above embodiments, in step S3, the initialization of the DBN model based on the matrix norm constraint specifically includes the following steps:
step S31: define the energy function E of the RBM, where the input layer carries the vector v = {v1, v2, …, vm} and the hidden layer the vector h = {h1, h2, …, hn}, with the following (standard RBM) formula:
E(v, h; θ) = −Σi ai·vi − Σj bj·hj − Σi Σj vi·ωi,j·hj
where v is the state of the visible layer, h is the state of the hidden layer, θ is the parameter set of the RBM, a_i is the unit bias of the input layer, b_j is the unit bias of the hidden layer, i and j are the node indices of the visible layer and the hidden layer respectively, and ω_i,j is the link weight between the nodes of the input layer and the hidden layer;
step S32: define the joint distribution p(v, h) using the following formula:
p(v, h) = e^(−E(v, h; θ)) / R(θ)
wherein R (θ) is a normalization factor;
step S33: define the independent probability distribution of the input layer as follows:
p(v) = Σh p(v, h) = (1/R(θ)) · Σh e^(−E(v, h; θ));
step S34: define the conditional probability distributions of the layers using the following formulas:
p(hj = 1 | v) = σ(bj + Σi vi·ωi,j),  p(vi = 1 | h) = σ(ai + Σj ωi,j·hj)
where σ () is a sigmoid function;
step S35: calculate the parameter gradient values by the following formula:
Δωi,j = ⟨vi·hj⟩data − ⟨vi·hj⟩model
where ⟨·⟩data is the expectation the RBM derives from the data, and ⟨·⟩model is the expectation provided by the reconstructed RBM;
step S36: update the learning rate and recombine the parameter set θ using the contrastive divergence model; when the initial training of an RBM is complete, its current hidden layer becomes the visible layer of the subsequent RBM, and depth features are classified each time an RBM finishes training, using the following formula:
where α and β denote the learning rate and the batch size respectively, t is the iteration number, dr is the learning-rate decay, and dqb is the current training step;
step S37: apply a matrix norm constraint to the weight matrix of the model, add regularization and sparsity penalties, limit the value range of the parameters, and control the complexity of the model, with the following formula:
Loss = Loss' + λ·‖w‖2² + β1·Σ[ p1·log(p1/q1) + (1 − p1)·log((1 − p1)/(1 − q1)) ]
where Loss' is the original loss function, ‖w‖2 is the L2 norm of the weight matrix w, λ and β1 are the weight parameters of the regularization and sparsity penalties, p1 is the desired sparsity probability, and q1 is the actual activation probability.
Aiming at the problems that the original DBN model is overly complex, prone to overfitting, and poor at parameter convergence, which degrade model performance, this scheme adaptively adjusts the learning rate based on exponential decay, applies a matrix norm constraint to the weight matrix of the model, and adds regularization and sparsity penalties, thereby improving model fitness, enhancing robustness, preventing overfitting, keeping the learned representation sparse and smooth, and optimizing performance.
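Steps S31 to S37 follow the standard restricted Boltzmann machine formulation, which can be sketched as below. The CD-1 update and the regularized loss are textbook forms; the learning-rate schedule of step S36 and all hyperparameter values (`lr`, `lam`, `beta1`, `p1`) are illustrative assumptions rather than the patent's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal Bernoulli RBM trained with one step of contrastive
    divergence (CD-1); a sketch of steps S31-S36, not the patented code."""
    def __init__(self, m, n):
        self.w = rng.normal(0, 0.01, (m, n))   # link weights ω_{i,j}
        self.a = np.zeros(m)                   # visible biases a_i
        self.b = np.zeros(n)                   # hidden biases b_j

    def energy(self, v, h):
        # E(v,h;θ) = -Σ a_i v_i - Σ b_j h_j - Σ v_i ω_{i,j} h_j  (step S31)
        return -(v @ self.a) - (h @ self.b) - v @ self.w @ h

    def cd1(self, v0, lr=0.1):
        ph0 = sigmoid(v0 @ self.w + self.b)             # p(h_j=1|v)  (step S34)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        v1 = sigmoid(h0 @ self.w.T + self.a)            # reconstruction
        ph1 = sigmoid(v1 @ self.w + self.b)
        # gradient ≈ <v h>_data - <v h>_model  (step S35)
        self.w += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        self.a += lr * (v0 - v1)
        self.b += lr * (ph0 - ph1)

def regularized_loss(loss, w, q1, lam=1e-3, beta1=1e-2, p1=0.05):
    """Step S37: Loss = Loss' + λ·||w||² + β1·Σ KL(p1 || q1)."""
    kl = p1 * np.log(p1 / q1) + (1 - p1) * np.log((1 - p1) / (1 - q1))
    return loss + lam * np.sum(w ** 2) + beta1 * np.sum(kl)
```

When the actual activation probability q1 equals the target p1, the KL term vanishes and the penalty reduces to plain weight decay, which is the intended "sparse and smooth" behavior.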
Fifth embodiment, referring to fig. 1 and 5, based on the above embodiment, in step S4, the super parameter optimization based on parallelization training specifically includes the following steps:
step S41: dividing the subspace, dividing the parameter space used in the step S3 into a plurality of subspaces, and initializing an initial solution set X= { X1, X2, …, xn } of the parameters for each subspace;
step S42: calculate fitness values. Using the DBN model built from the parameters corresponding to each solution, classify a randomly selected 30% of the data denoised in step S2; the classification accuracy serves as the fitness value of the solution;
step S43: calculate the reduction coefficient, with a preset maximum number of iterations, using the following formula:
c = cmax − t·(cmax − cmin)/tmax
where c is the reduction coefficient, cmax and cmin are equal to 1 and 0.00001 respectively, t is the current iteration number, and tmax is the maximum number of iterations;
step S44: the social strength is calculated, the attraction strength f is preset, and the formula is as follows:
where s is the social strength, y is the distance between solutions, and l is the upper bound of the parameter space;
step S45: the solution position is adjusted using the following formula:
where i and j are indices of different solutions, n is the number of solutions in the set, u is the lower bound of the parameter space, and the remaining symbol in the formula is the maximum fitness value;
step S46: evaluate and judge. With a preset evaluation threshold, use a gradient ascent algorithm to find, in each parameter subspace, the solution parameters that maximize f, and take these as the optimal parameters. Merge the optimal parameters of the subspaces to build a DBN model. If the classification accuracy of the DBN on the test sample set is below the evaluation threshold, re-divide the data set and return to step S41; otherwise go to step S47;
step S47: running in real time, collecting oral cavity images in real time, inputting the oral cavity images into the DBN model, and giving sanitary advice based on oral cavity sanitary grade output by the model.
By executing the above operations, the method adopts parallelized training, calculates the reduction coefficient and social strength, and judges the global optimum based on fitness, which increases search efficiency and makes the search process more balanced and comprehensive, improving the performance and accuracy of the overall search algorithm.
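The reduction coefficient of step S43 is stated explicitly in the text, but the social-strength and position-update formulas appear only in the figures, so the sketch below substitutes a grasshopper-style attraction term for step S44 and a simple drift-toward-best update for step S45 as stand-ins. Function names, bounds, and all constants other than c_max and c_min are assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

C_MAX, C_MIN = 1.0, 1e-5   # c_max, c_min from step S43

def reduction_coefficient(t, t_max):
    # step S43: c = c_max - t * (c_max - c_min) / t_max
    return C_MAX - t * (C_MAX - C_MIN) / t_max

def social_strength(y, f=0.5, l=1.5):
    # step S44 (assumed grasshopper-style form): s(y) = f*e^(-y/l) - e^(-y)
    return f * np.exp(-y / l) - np.exp(-y)

def search_subspace(fitness, lo, hi, n=20, t_max=50, seed=0):
    """Search one parameter subspace [lo, hi]; the update is a simplified
    stand-in for step S45: each solution drifts toward the current best,
    scaled by the shrinking coefficient c, plus shrinking noise."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n)
    best = x[np.argmax([fitness(v) for v in x])]
    for t in range(t_max):
        c = reduction_coefficient(t, t_max)
        x = np.clip(x + c * (best - x) + c * rng.normal(0, 0.1 * (hi - lo), n), lo, hi)
        cand = x[np.argmax([fitness(v) for v in x])]
        if fitness(cand) > fitness(best):
            best = cand
    return best

def parallel_search(fitness, bounds):
    # steps S41/S46: split the space, search each subspace in parallel, merge
    with ThreadPoolExecutor() as ex:
        bests = list(ex.map(lambda b: search_subspace(fitness, *b), bounds))
    return max(bests, key=fitness)
```

Because each subspace is searched independently, the merge step simply keeps the candidate with the highest fitness, mirroring the evaluation of step S46.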
Sixth embodiment: referring to fig. 6 and 7, based on the above embodiments, fig. 6 shows the process of continuously updating the positions of the optimization parameters until the optimal position is found, where the coordinate axes represent parameter values; in fig. 7 the ordinate is the position of the optimal solution of the optimization parameters and the abscissa is the number of iterations, showing how the positions of the optimization parameters tend toward the optimal solution as the iterations proceed, bringing the optimization parameters close to a better search region.
Seventh embodiment: referring to fig. 2, based on the above embodiments, the machine-learning-based oral hygiene management system provided by the invention includes a data acquisition module, an edge guide denoising module, a DBN model initialization module and a super parameter optimization module;
the data acquisition module acquires an oral cavity image data set, wherein the oral cavity image data set comprises an oral cavity image and a corresponding label, the label is an oral cavity health evaluation grade, and the acquired data is sent to the edge guide denoising module;
the edge-oriented denoising module receives the data sent by the data acquisition module, segments the image through a multi-scale morphological method, extracts target pixels based on an edge segmentation technology, comprehensively considers the pixels extracted from each segment, calculates a standard average pixel value, identifies and marks similar points and dissimilar points in the standard pixel set segments, performs denoising processing on the marked pixel set, and sends the data to the DBN model initialization module;
the DBN model initialization module receives data sent by the edge-oriented denoising module, and by defining an energy function, joint distribution, independent probability distribution and conditional probability distribution, and calculating a parameter gradient value, a parameter set theta is recombined by using a contrast divergence model, finally, matrix norm constraint is carried out on a weight matrix of the model, regularization and sparsity penalty are added to complete DBN model initialization based on the matrix norm constraint, and the data is sent to the super-parameter optimization module;
the super-parameter optimization module receives the data sent by the DBN model initialization module, divides the searched parameter space into a plurality of subspaces, searches the optimal parameters in the subspaces respectively by using parallelization training and a unique search algorithm, combines the optimal parameters and further verifies the model performance, determines a final model, evaluates the oral hygiene grade on the oral cavity image acquired in real time and gives hygiene advice.
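The four modules can be wired together as a simple pipeline. The sketch below is purely illustrative: the class name and callable signatures are invented here, and it shows only the data flow from acquisition through denoising and classification to hygiene advice.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class OralHygienePipeline:
    """Hypothetical wiring of the four modules described above; each
    stage is injected as a callable so the sketch stays independent of
    any concrete model."""
    acquire: Callable[[], list]          # data acquisition module
    denoise: Callable[[object], object]  # edge guide denoising module
    classify: Callable[[object], int]    # initialized and tuned DBN model
    advise: Callable[[int], str]         # hygiene advice per grade

    def run(self) -> List[str]:
        # image -> denoised image -> hygiene grade -> advice, per image
        return [self.advise(self.classify(self.denoise(img)))
                for img in self.acquire()]
```

Any concrete denoiser, DBN classifier, and advice table can be dropped in without changing the data flow.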
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
The invention and its embodiments have been described above without limitation, and the actual construction is not limited to the embodiments shown in the drawings. In summary, if a person of ordinary skill in the art, informed by this disclosure, devises structures and embodiments similar to this technical solution without creative effort, they shall fall within the protection scope of the present invention.

Claims (2)

1. An oral hygiene management method based on machine learning is characterized in that: the method comprises the following steps:
step S1: collecting data;
step S2: establishing an edge-oriented denoising model, segmenting an image by a multi-scale morphological method, extracting target pixels based on an edge segmentation technology, comprehensively considering the pixels extracted from each segment, calculating a standard average pixel value, identifying and marking similar points and dissimilar points in the standard pixel set segments, and finally denoising the marked pixel set;
step S3: the DBN model is initialized based on matrix norm constraint, an energy function, joint distribution, independent probability distribution and conditional probability distribution are defined, parameter gradient values are calculated, learning rate is updated, a contrast divergence model is utilized to reorganize a parameter set theta, and finally matrix norm constraint is carried out on a weight matrix of the model, regularization and sparsity penalty are added to complete the DBN model initialization based on the matrix norm constraint;
step S4: based on super-parameter optimization of parallelization training, dividing the searched parameter space into a plurality of subspaces, respectively searching the optimal parameters in the subspaces by using parallelization training and a unique search algorithm, and finally merging the optimal parameters and further verifying the model performance to determine a final model;
in step S2, the establishing an edge-guided denoising model specifically includes the following steps:
step S21: the sum of the noise image intensities is obtained by the following formula:
wherein DNIDS is the sum of the acquired noise image intensities, M is the M-th segment image, gI is the acquired image, gIts is the acquired image intensity, and Th is the intensity threshold for enhancing quality;
step S22: image segmentation, based on edge segmentation technology, extracting target pixels in each segment, wherein the steps are as follows:
wherein IS is the target pixel value of each pixel after image segmentation, I(X, Y) is the pixel value of the original image at coordinates (X, Y), M is the M-th segment, δ is the rate of change, Size(IS(I)) is the number of pixels of IS(I), IS(I) is the segmentation value of pixel I, PixS[M] is the M-th segment pixel set, gIsr(I) is the segmentation value of the acquired pixel, FPixS[M] is the M-th segment maximum pixel value, ISgr(I(I)) is the average intensity value of the I-th segmented region, R is the total number of pixels of the M-th segment, η is an angle, G is a function of the damaged pixel set, maxIts is the maximum image intensity, and minIts is the minimum image intensity;
step S23: calculating a standard average pixel value, comprehensively considering the pixels extracted from each segment, wherein the following formula is adopted:
where StdPixS is the standard average pixel value, Abs is the absolute value, IS(X) is the division point of pixel X after image segmentation, and maxIts(FPixS(I)) is the maximum intensity value of the pixels in FPixS(I);
step S24: marking a pixel set, identifying dissimilar points in a standard pixel set fragment, and marking the similar points and the dissimilar point pixel sets, wherein the formula is as follows:
wherein LabPixS is the set of labeled pixels;
step S25: denoising, namely denoising the marked pixel set, wherein the formula is as follows:
where DnIS is the denoised result, LabS(X, Y) is the pixel of the original image at coordinates (X, Y), gIts() is a function of the pixel intensities, LabPixS(I) is the I-th set of related pixels, and η_i is an angle threshold;
in step S4, the above-mentioned super-parameter optimization based on parallelization training specifically includes the following steps:
step S41: dividing the subspace, dividing the parameter space used in the step S3 into a plurality of subspaces, and initializing an initial solution set X= { X1, X2, …, xn } of the parameters for each subspace;
step S42: calculating fitness values, classifying the test data set by using the DBN model established by the parameters corresponding to each solution, and taking the classification accuracy as the fitness value of the solution;
step S43: the reduction coefficient is calculated, the maximum iteration number is preset, and the formula is as follows:
where c is the reduction coefficient, cmax and cmin are equal to 1 and 0.00001 respectively, t is the current iteration number, and tmax is the maximum number of iterations;
step S44: the social strength is calculated, the attraction strength f is preset, and the formula is as follows:
where s is the social strength, y is the distance between solutions, and l is the upper bound of the parameter space;
step S45: the solution position is adjusted using the following formula:
where i and j are indices of different solutions, n is the number of solutions in the set, u is the lower bound of the parameter space, and the remaining symbol in the formula is the maximum fitness value;
step S46: evaluate and judge. With a preset evaluation threshold, use a gradient ascent algorithm to find, in each parameter subspace, the solution parameters that maximize f, and take these as the optimal parameters. Merge the optimal parameters of the subspaces to build a DBN model. If the classification accuracy of the DBN on the test sample set is below the evaluation threshold, re-divide the data set and return to step S41; otherwise go to step S47;
step S47: the method comprises the steps of running in real time, collecting oral cavity images in real time, inputting the oral cavity images into a DBN model, and giving sanitary advice based on oral cavity sanitary grade output by the model;
in step S3, the DBN is a probability generation model composed of a set of restricted Boltzmann machines (RBMs) and a back-propagation neural network, and includes a visible layer, n hidden layers and an output layer; the visible layer sits at the input end of the model, features are transferred through the hidden layers during learning, and appropriate class labels are finally assigned at the output layer. The initialization of the DBN model based on the matrix norm constraint specifically includes the following steps:
step S31: defining an energy function E of RBM, the input layer is provided with vectors v= { v1, v2, …, vm }, and the hidden layer is provided with vectors h= { h1, h2, …, hn }, with the following formula:
where v is the state of the visible layer, h is the state of the hidden layer, θ is the parameter set of the RBM, a_i is the unit bias of the input layer, b_j is the unit bias of the hidden layer, i and j are the node indices of the visible layer and the hidden layer respectively, and ω_i,j is the link weight between the nodes of the input layer and the hidden layer;
step S32: the joint distribution p (v, h) is defined using the following formula:
wherein R (θ) is a normalization factor;
step S33: the independent probability distribution of the input layer is defined as follows:
step S34: the conditional probability distribution of all layers is defined using the following formula:
where σ () is a sigmoid function;
step S35: the parameter gradient value is calculated by the following formula:
where ⟨·⟩data is the expectation the RBM derives from the data, and ⟨·⟩model is the expectation provided by the reconstructed RBM;
step S36: update the learning rate and recombine the parameter set θ using the contrastive divergence model; when the initial training of an RBM is complete, its current hidden layer becomes the visible layer of the subsequent RBM, and depth features are classified each time an RBM finishes training, using the following formula:
where α and β denote the learning rate and the batch size respectively, t is the iteration number, dr is the learning-rate decay, and dqb is the current training step;
step S37: and (3) performing matrix norm constraint on a weight matrix of the model, and adding regularization and sparsity penalty, wherein the formula is as follows:
where Loss' is the original loss function, ‖w‖2 is the L2 norm of the weight matrix w, λ and β1 are the weight parameters of the regularization and sparsity penalties, p1 is the desired sparsity probability, and q1 is the actual activation probability;
in step S1, the data acquisition is an acquisition of an oral image dataset comprising oral images and corresponding labels, the labels being oral hygiene assessment grades.
2. A machine learning based oral hygiene management system for implementing a machine learning based oral hygiene management method as claimed in claim 1, characterized by: the system comprises a data acquisition module, an edge guide denoising module, a DBN model initialization module and a super-parameter optimization module;
the data acquisition module acquires an oral cavity image data set, wherein the oral cavity image data set comprises an oral cavity image and a corresponding label, the label is an oral cavity health evaluation grade, and the acquired data is sent to the edge guide denoising module;
the edge-oriented denoising module receives the data sent by the data acquisition module, segments the image through a multi-scale morphological method, extracts target pixels based on an edge segmentation technology, comprehensively considers the pixels extracted from each segment, calculates a standard average pixel value, identifies and marks similar points and dissimilar points in the standard pixel set segments, performs denoising processing on the marked pixel set, and sends the data to the DBN model initialization module;
the DBN model initialization module receives data sent by the edge-oriented denoising module, and by defining an energy function, joint distribution, independent probability distribution and conditional probability distribution, and calculating a parameter gradient value, a parameter set theta is recombined by using a contrast divergence model, finally, matrix norm constraint is carried out on a weight matrix of the model, regularization and sparsity penalty are added to complete DBN model initialization based on the matrix norm constraint, and the data is sent to the super-parameter optimization module;
the super-parameter optimization module receives the data sent by the DBN model initialization module, divides the searched parameter space into a plurality of subspaces, searches the optimal parameters in the subspaces respectively by using parallelization training and a unique search algorithm, combines the optimal parameters and further verifies the model performance, determines a final model, evaluates the oral hygiene grade on the oral cavity image acquired in real time and gives hygiene advice.
CN202311038730.6A 2023-08-17 2023-08-17 Oral hygiene management method and system based on machine learning Pending CN117314763A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311038730.6A CN117314763A (en) 2023-08-17 2023-08-17 Oral hygiene management method and system based on machine learning


Publications (1)

Publication Number Publication Date
CN117314763A true CN117314763A (en) 2023-12-29

Family

ID=89254287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311038730.6A Pending CN117314763A (en) 2023-08-17 2023-08-17 Oral hygiene management method and system based on machine learning

Country Status (1)

Country Link
CN (1) CN117314763A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118230988A (en) * 2024-05-23 2024-06-21 佛山市口腔医院(佛山市牙病防治指导中心) Child oral health management system based on big data
CN118230988B (en) * 2024-05-23 2024-09-20 佛山市口腔医院(佛山市牙病防治指导中心) Child oral health management system based on big data

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104077595A (en) * 2014-06-15 2014-10-01 北京工业大学 Deep belief network image recognition method based on Bayesian regularization
US20180114113A1 (en) * 2016-10-20 2018-04-26 Uber Technologies, Inc. Intelligent regularization of neural network architectures
WO2018099321A1 (en) * 2016-11-30 2018-06-07 华南理工大学 Generalized tree sparse-based weighted nuclear norm magnetic resonance imaging reconstruction method
US20180240219A1 (en) * 2017-02-22 2018-08-23 Siemens Healthcare Gmbh Denoising medical images by learning sparse image representations with a deep unfolding approach
CN109064406A (en) * 2018-08-26 2018-12-21 东南大学 A kind of rarefaction representation image rebuilding method that regularization parameter is adaptive
CN109507655A (en) * 2018-12-11 2019-03-22 西北工业大学 SAR Target Recognition Algorithms based on guiding reconstruct and norm constraint DBN
CN109948783A (en) * 2019-03-29 2019-06-28 中国石油大学(华东) A kind of Topological expansion method based on attention mechanism
US20190205746A1 (en) * 2017-12-29 2019-07-04 Intel Corporation Machine learning sparse computation mechanism for arbitrary neural networks, arithmetic compute microarchitecture, and sparsity for training mechanism
US20190223725A1 (en) * 2018-01-25 2019-07-25 Siemens Healthcare Gmbh Machine Learning-based Segmentation for Cardiac Medical Imaging
CN110288526A (en) * 2019-06-14 2019-09-27 中国科学院光电技术研究所 A kind of image reconstruction algorithm based on deep learning promotes the optimization method of single pixel camera imaging quality
US20200311878A1 (en) * 2019-04-01 2020-10-01 Canon Medical Systems Corporation Apparatus and method for image reconstruction using feature-aware deep learning
CN112700389A (en) * 2021-01-13 2021-04-23 安徽工业大学 Active sludge microorganism color microscopic image denoising method
CN113313175A (en) * 2021-05-28 2021-08-27 北京大学 Image classification method of sparse regularization neural network based on multivariate activation function
US20210295175A1 (en) * 2020-03-18 2021-09-23 Fair Isaac Corporation Training artificial neural networks with constraints
CN113724164A (en) * 2021-08-31 2021-11-30 南京邮电大学 Visible light image noise removing method based on fusion reconstruction guidance filtering
KR20220096903A (en) * 2020-12-31 2022-07-07 서울과학기술대학교 산학협력단 Filter pruning method for deep learning networks
CN114743058A (en) * 2022-05-18 2022-07-12 河南工业大学 Width learning image classification method and device based on mixed norm regular constraint
CN115147710A (en) * 2022-07-15 2022-10-04 杭州电子科技大学 Sonar image target processing method based on heterogeneous filtering detection and level set segmentation
CN115599844A (en) * 2022-11-10 2023-01-13 西安交通大学(Cn) Visual detection method for misloading and neglected loading of airplane airfoil connecting piece


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
GONGMING WANG: "An Adaptive Deep Belief Network With Sparse Restricted Boltzmann Machines", IEEE Transactions on Neural Networks and Learning Systems, 24 December 2019 *
刘芳; 路丽霞; 王洪娟; 王鑫: "Image Classification Based on Sparse Autoencoder and Support Vector Machine", Journal of System Simulation, no. 08, 8 August 2018 *
孙洋; 叶庆卫; 王晓东; 周宇: "An Improved LLE Algorithm Based on Sparsity Constraints", Computer Engineering, no. 05, 15 May 2013 *
宋坤骏; 林建辉; 丁建明: "Sparse Autoencoder Modified by Extreme Learning and Its Application in Fault Diagnosis", Shanghai Railway Science & Technology, no. 01, 25 March 2017 *
李蓓蓓: "Research on Feature Extraction Algorithms Based on Deep Neural Networks and Their Applications", China Master's Theses Full-text Database, 15 January 2019 *
薛朋: "Research on DBN-Based Fault Diagnosis and Remaining Life Prediction of Rotating Machinery", China Master's Theses Full-text Database, 15 March 2022 *
高强; 阳武; 李倩: "A Fast Training Model for DBN Image Classification Based on Spatial Information", Journal of System Simulation, no. 03, 8 March 2015 *


Similar Documents

Publication Publication Date Title
CN107506761B (en) Brain image segmentation method and system based on significance learning convolutional neural network
CN107610087B (en) Tongue coating automatic segmentation method based on deep learning
CN108648191B (en) Pest image recognition method based on Bayesian width residual error neural network
CN107633522B (en) Brain image segmentation method and system based on local similarity active contour model
CN109101938B (en) Multi-label age estimation method based on convolutional neural network
CN102855633B (en) A kind of Fast Fuzzy Cluster Digital Image Segmentation method with noise immunity
CN105931226A (en) Automatic cell detection and segmentation method based on deep learning and using adaptive ellipse fitting
Liang et al. Comparison detector for cervical cell/clumps detection in the limited data scenario
CN114692732B (en) Method, system, device and storage medium for updating online label
CN117557579A (en) Method and system for assisting non-supervision super-pixel segmentation by using cavity pyramid collaborative attention mechanism
CN117314763A (en) Oral hygiene management method and system based on machine learning
CN115100406B (en) Weight information entropy fuzzy C-means clustering method based on superpixel processing
Liu et al. Memory consistent unsupervised off-the-shelf model adaptation for source-relaxed medical image segmentation
CN110245620A (en) A kind of non-maximization suppressing method based on attention
CN113743474A (en) Digital picture classification method and system based on cooperative semi-supervised convolutional neural network
CN112529063A (en) Depth domain adaptive classification method suitable for Parkinson voice data set
Lei et al. Robust deep kernel-based fuzzy clustering with spatial information for image segmentation
CN117274750B (en) Knowledge distillation semi-automatic visual labeling method and system
Dong et al. An active contour model based on shadow image and reflection edge for image segmentation
CN113989256A (en) Detection model optimization method, detection method and detection device for remote sensing image building
Horng et al. Parametric active contour model by using the honey bee mating optimization
CN108304546B (en) Medical image retrieval method based on content similarity and Softmax classifier
CN116433679A (en) Inner ear labyrinth multi-level labeling pseudo tag generation and segmentation method based on spatial position structure priori
CN106709921B (en) Color image segmentation method based on space Dirichlet mixed model
Zhao et al. Overlapping region reconstruction in nuclei image segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination