CN112613557A - Method and system for classifying tongue proper and tongue coating based on deep learning - Google Patents


Info

Publication number
CN112613557A
Authority
CN
China
Prior art keywords
tongue
image
coating
rectangular
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011536981.3A
Other languages
Chinese (zh)
Other versions
CN112613557B (en)
Inventor
魏春雨
宋臣
汤青
王东卫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ennova Health Technology Co ltd
Original Assignee
Ennova Health Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ennova Health Technology Co ltd filed Critical Ennova Health Technology Co ltd
Priority to CN202011536981.3A priority Critical patent/CN112613557B/en
Publication of CN112613557A publication Critical patent/CN112613557A/en
Application granted granted Critical
Publication of CN112613557B publication Critical patent/CN112613557B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method and a system for classifying tongue proper and tongue coating based on deep learning. The method comprises the following steps: segmenting the tongue image into a tongue quality image and a tongue coating image; converting the tongue quality image into a rectangular tongue quality image and the tongue coating image into a rectangular tongue coating image; determining representative tongue quality images as typical tongue quality images, and representative tongue coating images as typical tongue coating images; determining the Euclidean distance between the RGB mean of each unlabeled rectangular tongue quality image and the RGB mean of each typical rectangular tongue quality image, and the Euclidean distance between the RGB mean of each unlabeled rectangular tongue coating image and the RGB mean of each typical rectangular tongue coating image; creating a tongue quality color sample library according to the tongue quality Euclidean distances and a tongue coating color sample library according to the tongue coating Euclidean distances; and training on the tongue quality color samples to obtain a tongue quality color classification model, and on the tongue coating color samples to obtain a tongue coating color classification model.

Description

Method and system for classifying tongue proper and tongue coating based on deep learning
Technical Field
The application relates to the technical field of image processing, and in particular to a method and a system for classifying tongue proper and tongue coating based on deep learning.
Background
With the gradual development of image processing technology and the continuing maturation of artificial intelligence techniques such as machine learning and deep learning, deep convolutional neural networks have begun to be applied to tongue diagnosis in traditional Chinese medicine, and a variety of methods have emerged. However, existing machine learning and deep learning methods for traditional Chinese medicine tongue diagnosis do not weaken shape, texture, and similar features in the tongue image, cannot highlight color features, and therefore cannot classify the tongue quality and tongue coating well.
No effective solution has yet been proposed for these technical problems: existing machine learning and deep learning methods for traditional Chinese medicine tongue diagnosis do not weaken shape and texture features in the tongue image, cannot highlight color features, and cannot classify the tongue coating well.
Disclosure of Invention
The embodiments of the disclosure provide a method and a system for classifying tongue proper and tongue coating based on deep learning, which at least solve the technical problems that existing machine learning and deep learning methods for traditional Chinese medicine tongue diagnosis do not weaken shape, texture, and similar features in the tongue image, cannot highlight color features, and cannot classify the tongue quality and tongue coating well.
According to one aspect of the embodiments of the present disclosure, there is provided a method for classifying tongue proper and tongue coating based on deep learning, including: segmenting the tongue image into a tongue quality image and a tongue coating image by using a quality-coating separation algorithm; converting the tongue quality image into a rectangular tongue quality image by using a tongue quality image conversion algorithm, and converting the tongue coating image into a rectangular tongue coating image by using a tongue coating image conversion algorithm; determining representative tongue quality images labeled by a number of experienced clinical practitioners of traditional Chinese medicine as typical tongue quality images, and representative tongue coating images labeled by the same practitioners as typical tongue coating images; determining the tongue quality Euclidean distance between the RGB mean of each unlabeled rectangular tongue quality image and the RGB mean of each typical rectangular tongue quality image, and the tongue coating Euclidean distance between the RGB mean of each unlabeled rectangular tongue coating image and the RGB mean of each typical rectangular tongue coating image; sorting the unlabeled rectangular tongue quality images by tongue quality Euclidean distance, determining tongue quality color samples, and creating a tongue quality color sample library, and sorting the unlabeled rectangular tongue coating images by tongue coating Euclidean distance, determining tongue coating color samples, and creating a tongue coating color sample library; and training on the tongue quality color samples to obtain a tongue quality color classification model, training on the tongue coating color samples to obtain a tongue coating color classification model, performing tongue quality color classification with the tongue quality color classification model, and performing tongue coating color classification with the tongue coating color classification model.
According to another aspect of the disclosed embodiments, there is also provided a system for classifying tongue proper and tongue coating based on deep learning, including: a tongue quality and tongue coating image segmentation module, configured to segment the tongue image into a tongue quality image and a tongue coating image by using a quality-coating separation algorithm; a rectangular image conversion module, configured to convert the tongue quality image into a rectangular tongue quality image by using a tongue quality image conversion algorithm and to convert the tongue coating image into a rectangular tongue coating image by using a tongue coating image conversion algorithm; a typical tongue quality and tongue coating determination module, configured to determine representative tongue quality images labeled by a number of experienced clinical practitioners of traditional Chinese medicine as typical tongue quality images, and representative tongue coating images labeled by the same practitioners as typical tongue coating images; a Euclidean distance determination module, configured to determine the tongue quality Euclidean distance between the RGB mean of each unlabeled rectangular tongue quality image and the RGB mean of each typical rectangular tongue quality image, and the tongue coating Euclidean distance between the RGB mean of each unlabeled rectangular tongue coating image and the RGB mean of each typical rectangular tongue coating image; a color sample library creation module, configured to sort the unlabeled rectangular tongue quality images by tongue quality Euclidean distance, determine tongue quality color samples, and create a tongue quality color sample library, and to sort the unlabeled rectangular tongue coating images by tongue coating Euclidean distance, determine tongue coating color samples, and create a tongue coating color sample library; and a training and classification module, configured to train on the tongue quality color samples to obtain a tongue quality color classification model, train on the tongue coating color samples to obtain a tongue coating color classification model, and perform tongue quality color classification and tongue coating color classification with the respective models.
In the invention, the tongue quality image is converted into a rectangular tongue quality image and the tongue coating image into a rectangular tongue coating image, eliminating negative factors such as background and edges in both images, making the tongue quality color and tongue coating color more prominent, and facilitating the training of the subsequent classification algorithm. Based on deep learning, calculating Euclidean distances reduces the physicians' workload, improves working efficiency, and lowers labor cost. This further solves the technical problems that existing machine learning and deep learning methods for traditional Chinese medicine tongue diagnosis do not weaken shape, texture, and similar features in the tongue image, cannot highlight color features, and cannot classify the tongue coating well.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the disclosure and together with the description serve to explain the disclosure and not to limit the disclosure. In the drawings:
fig. 1 is a schematic diagram of a method for classifying tongue coating based on deep learning according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a method for classifying tongue coating based on deep learning according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a segmentation of a tongue image into a tongue quality image and a tongue coating image according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of converting a tongue image into a rectangular tongue image and converting a tongue coating image into a rectangular tongue coating image according to an embodiment of the disclosure;
FIG. 5 is a schematic view of an exemplary tongue image according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of an exemplary rectangular tongue image corresponding to an exemplary tongue image according to an embodiment of the present disclosure;
FIG. 7 is a schematic view of an exemplary image of a tongue coating according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a typical rectangular tongue coating image corresponding to a typical tongue coating image according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of sorting a batch of tongue images in each category by euclidean distance according to an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings; however, the present invention may be embodied in many different forms and is not limited to the embodiments described herein, which are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. The terminology used in the exemplary embodiments illustrated in the accompanying drawings is not intended to limit the invention. In the drawings, the same units/elements are denoted by the same reference numerals.
Unless otherwise defined, terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Further, it will be understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense.
According to a first aspect of the present embodiment, a method 100 of classifying tongue quality and tongue coating based on deep learning is provided. Fig. 1 shows a schematic flow diagram of the method, and referring to fig. 1, the method 100 includes:
s102: segmenting the tongue image into a tongue image and a tongue fur image by utilizing a tongue fur separation algorithm;
s104: converting the tongue image into a rectangular tongue image by using a tongue image conversion algorithm, and converting the tongue fur image into a rectangular tongue fur image by using a tongue fur image algorithm;
s106: determining representative tongue quality images marked by a plurality of doctor doctors in clinical practice with abundant experience as typical tongue quality images, and determining representative tongue fur images marked by a plurality of doctor doctors in clinical practice with abundant experience as typical tongue fur images;
s108: determining the Euclidean distance of the RGB mean value of the rectangular tongue quality image which is not marked and the RGB mean value of the typical rectangular tongue quality image, and determining the Euclidean distance of the RGB mean value of the rectangular tongue coating image which is not marked and the RGB mean value of the typical rectangular tongue coating image;
s110: sorting the rectangular tongue images which are not marked according to the Euclidean distance of the tongue, determining a tongue color sample, creating a tongue color sample library, sorting the rectangular tongue fur images which are not marked according to the Euclidean distance of the tongue fur, determining a tongue fur color sample, and creating a tongue fur color sample library;
s112: training the tongue color samples to obtain a tongue color classification model, training the tongue fur color samples to obtain a tongue fur color classification model, performing tongue color classification by using the tongue color classification model, and performing tongue fur color classification by using the tongue fur color classification model.
Specifically, referring to fig. 2, the tongue image is segmented into a tongue quality image and a tongue coating image by using a quality-coating separation algorithm; fig. 3 illustrates this segmentation. Referring to fig. 4, the tongue quality image is converted into a rectangular tongue quality image by the tongue quality image conversion algorithm, and the tongue coating image into a rectangular tongue coating image by the tongue coating image conversion algorithm. Because the tongue quality image comprises a tongue part and a background part, the background consists of pure black pixels that are unhelpful for training a deep convolutional neural network model, and the shape of the edge pixels between the tongue and the background can also affect the modeling. To eliminate these adverse factors, the tongue quality image conversion algorithm arranges the pixel points of the tongue region line by line into a rectangular image, forming the rectangular tongue quality image.
The conversion process is as follows:
first, count the total number m of tongue quality pixel points in the tongue quality image;
then compute k = sqrt(m) and take the integer part l of k as the side length of the square;
finally, fill the tongue quality pixels line by line into a square of side length l.
At this point the resulting image contains some noise points, so it is smoothed with median filtering to weaken the influence of salt-and-pepper noise, yielding the final rectangular tongue quality image. The median filtering can be performed by directly calling the OpenCV function:
CV_EXPORTS_W void medianBlur( InputArray src, OutputArray dst, int ksize );
the tongue coating image also comprises a tongue coating part and a background part, and the required rectangular tongue coating image can be obtained through the conversion algorithm. The link mainly eliminates negative factors such as background, edges and the like in the tongue quality image and the tongue coating image, highlights the tongue quality color and the tongue coating color and is beneficial to the training of a subsequent classification algorithm.
Further, referring to fig. 5, a number of experienced clinical practitioners of traditional Chinese medicine label the tongue quality images with the six tongue body colors, such as pale white tongue, pale purple tongue, pale red tongue, dark red tongue, and deep red tongue; for each tongue color class, one tongue quality image that best represents that color is selected as the typical tongue quality image. Referring to fig. 6, the rectangular tongue quality image corresponding to the typical tongue quality image in fig. 5 is determined. Similarly, referring to fig. 7, typical tongue coating images are selected for the four coating colors: white coating, yellow coating, combined yellow-and-white coating, and gray-black coating. Referring to fig. 8, the rectangular tongue coating image corresponding to the typical tongue coating image in fig. 7 is determined.
The tongue quality Euclidean distance between the RGB mean of each unlabeled rectangular tongue quality image and the RGB mean of each typical rectangular tongue quality image is then determined, as is the tongue coating Euclidean distance between the RGB mean of each unlabeled rectangular tongue coating image and the RGB mean of each typical rectangular tongue coating image.
Taking the tongue quality images as an example: for the large number of unlabeled rectangular tongue quality images, this patent takes the RGB mean of each typical rectangular tongue quality image as that class's feature value, and computes the distance from every unlabeled rectangular tongue quality image to each typical image, i.e. the Euclidean distance between the RGB mean of the current rectangular tongue quality image and the feature values of all typical rectangular tongue quality images.
A. Compute the RGB mean of each typical tongue quality image: C_i = (R_i, G_i, B_i), i = 1, 2, ..., 6, corresponding to the six tongue body colors such as pale white tongue, pale purple tongue, pale red tongue, dark red tongue, and deep red tongue;
B. Compute the RGB mean of every unlabeled tongue quality image: c_j = (r_j, g_j, b_j), j = 1, 2, ..., n;
C. Compute the distance between the RGB mean of each unlabeled tongue quality image and the RGB mean of each typical tongue quality image:

d_ij = sqrt( (r_j - R_i)^2 + (g_j - G_i)^2 + (b_j - B_i)^2 )
The tongue quality images in each category are sorted by Euclidean distance; referring to fig. 9, which shows pale white, red, and purple tongue images, the number prefixed to each file name is the distance between that tongue quality image and the typical tongue quality image.
Similarly, the Euclidean distances for the tongue coating images can be calculated in the same way, which is not repeated here.
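Steps A through C above amount to averaging RGB channels and applying the distance formula. A minimal sketch (illustrative names and data, not the patented code):

```python
import math

def rgb_mean(image):
    """Mean (R, G, B) over a list of RGB pixel tuples."""
    n = len(image)
    return tuple(sum(px[c] for px in image) / n for c in range(3))

def rgb_distance(mean_a, mean_b):
    """Euclidean distance between two RGB mean vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(mean_a, mean_b)))

# Hypothetical example: the mean of one typical rectangular image vs.
# the mean of one unlabeled rectangular image.
typical_mean = (100.0, 100.0, 100.0)
sample = [(103, 104, 112)] * 4        # a tiny stand-in "rectangular image"
d = rgb_distance(rgb_mean(sample), typical_mean)
# diff = (3, 4, 12), so d = sqrt(9 + 16 + 144) = sqrt(169) = 13.0
```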
Further, the unlabeled rectangular tongue quality images are sorted by tongue quality Euclidean distance to determine the tongue quality color samples and create the tongue quality color sample library, and the unlabeled rectangular tongue coating images are sorted by tongue coating Euclidean distance to determine the tongue coating color samples and create the tongue coating color sample library. For example, referring to fig. 9, a batch of tongue quality images in each category is sorted by Euclidean distance and finally reviewed and confirmed by experienced practitioners of traditional Chinese medicine to form the tongue quality color sample library; the tongue coating color sample library is formed in the same way.
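The assign-and-sort step can be sketched like this (hypothetical class names and RGB means; the final confirmation is still done by the physicians, as noted above):

```python
import math

def rgb_distance(a, b):
    """Euclidean distance between two RGB mean vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical RGB means of three typical tongue quality images.
typical = {
    "pale_white": (210, 180, 180),
    "pale_red": (200, 130, 130),
    "dark_red": (150, 60, 70),
}

# Unlabeled rectangular images, reduced here to their RGB mean vectors.
unlabeled = {
    "img_01": (205, 175, 178),
    "img_02": (148, 64, 72),
    "img_03": (198, 128, 133),
}

# Assign each image to its nearest typical class...
assigned = {name: min(typical, key=lambda c: rgb_distance(mean, typical[c]))
            for name, mean in unlabeled.items()}

# ...then sort the candidates within one class by distance, nearest first,
# which is the ordering shown in fig. 9 (distance prefixed to the file name).
pale_white = sorted((n for n, c in assigned.items() if c == "pale_white"),
                    key=lambda n: rgb_distance(unlabeled[n], typical["pale_white"]))
```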
Finally, the tongue quality color samples are trained to obtain a tongue quality color classification model and the tongue coating color samples to obtain a tongue coating color classification model, which are then used for tongue quality color classification and tongue coating color classification respectively. The deep learning framework used in this patent is Caffe, and the deep convolutional neural network is a simplified SqueezeNet; the simplification reduces training time and yields a faster classification speed. The two sample libraries are trained separately to obtain the two models. These models are used in the subsequent inference phase, where the dnn module of OpenCV loads both models to perform tongue quality color classification and tongue coating color classification.
The network structure of the deep convolutional neural network is: input image -> conv1 -> maxpool1 -> fire2 -> fire5 -> fire6 -> maxpool8 -> fire9 -> conv10 -> avgpool10; the fire3, fire4, maxpool4, fire7, and fire8 parts of the original structure are removed, simplifying the network. Of course, other lightweight deep convolutional neural networks may be used instead, such as MobileNet, ShuffleNet, or Xception.
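For readers unfamiliar with SqueezeNet, the fire module that the simplified network keeps is a 1x1 "squeeze" convolution followed by parallel 1x1 and 3x3 "expand" convolutions whose outputs are concatenated. A plain-numpy sketch with random weights (an illustration of the module's topology, not the trained Caffe model):

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution: x is (C_in, H, W), w is (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', w, x)

def conv3x3(x, w):
    """Naive 3x3 convolution with zero padding; w is (C_out, C_in, 3, 3)."""
    c, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.empty((w.shape[0], h, wd))
    for i in range(h):
        for j in range(wd):
            patch = xp[:, i:i + 3, j:j + 3]
            out[:, i, j] = np.einsum('ocij,cij->o', w, patch)
    return out

def fire(x, s1, e1, e3):
    """SqueezeNet fire module: squeeze to s1 channels, expand to e1 + e3."""
    c = x.shape[0]
    rng = np.random.default_rng(0)  # random weights, illustration only
    squeezed = np.maximum(conv1x1(x, rng.standard_normal((s1, c))), 0)  # ReLU
    left = np.maximum(conv1x1(squeezed, rng.standard_normal((e1, s1))), 0)
    right = np.maximum(conv3x3(squeezed, rng.standard_normal((e3, s1, 3, 3))), 0)
    return np.concatenate([left, right], axis=0)  # channels = e1 + e3

# An 8-channel 16x16 input expands to e1 + e3 = 16 channels, same spatial size.
y = fire(np.random.default_rng(1).standard_normal((8, 16, 16)), s1=4, e1=8, e3=8)
```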
Therefore, converting the tongue quality image into a rectangular tongue quality image and the tongue coating image into a rectangular tongue coating image eliminates negative factors such as background and edges in both images, makes the tongue quality color and tongue coating color more prominent, and facilitates the training of the subsequent classification algorithm. Based on deep learning, calculating Euclidean distances reduces the physicians' workload, improves working efficiency, and lowers labor cost. This further solves the technical problems that existing machine learning and deep learning methods for traditional Chinese medicine tongue diagnosis do not weaken shape, texture, and similar features in the tongue image, cannot highlight color features, and cannot classify the tongue coating well.
Optionally, converting the tongue quality image into a rectangular tongue quality image by using a tongue quality image conversion algorithm and the tongue coating image into a rectangular tongue coating image by using a tongue coating image conversion algorithm includes: determining the total number of tongue quality pixel points in the tongue quality image and the total number of tongue coating pixel points in the tongue coating image; determining the side length of the rectangular tongue quality image from the total number of tongue quality pixel points and the side length of the rectangular tongue coating image from the total number of tongue coating pixel points; and filling the tongue quality pixel points line by line into the rectangular tongue quality image and the tongue coating pixel points line by line into the rectangular tongue coating image.
Optionally, the tongue quality images comprise six tongue quality colors, such as pale white tongue, pale purple tongue, pale red tongue, dark red tongue, and deep red tongue, and for each tongue quality color one tongue quality image that best represents that color is selected as the typical tongue quality image; the tongue coating images comprise four tongue coating colors, namely white coating, yellow coating, combined yellow-and-white coating, and gray-black coating, and for each tongue coating color one tongue coating image that best represents that color is selected as the typical tongue coating image.
Optionally, determining the tongue quality Euclidean distance between the RGB mean of the unlabeled rectangular tongue quality image and the RGB mean of the typical rectangular tongue quality image comprises: determining the RGB mean of the unlabeled rectangular tongue quality image and the RGB mean of the typical rectangular tongue quality image; and determining the tongue quality Euclidean distance from those two RGB means.
Optionally, determining the tongue coating Euclidean distance between the RGB mean of the unlabeled rectangular tongue coating image and the RGB mean of the typical rectangular tongue coating image further comprises: determining the RGB mean of the unlabeled rectangular tongue coating image and the RGB mean of the typical rectangular tongue coating image; and determining the tongue coating Euclidean distance from those two RGB means.
Therefore, converting the tongue quality image into a rectangular tongue quality image and the tongue coating image into a rectangular tongue coating image eliminates negative factors such as background and edges in both images, makes the tongue quality color and tongue coating color more prominent, and facilitates the training of the subsequent classification algorithm. Based on deep learning, calculating Euclidean distances reduces the physicians' workload, improves working efficiency, and lowers labor cost. This further solves the technical problems that existing machine learning and deep learning methods for traditional Chinese medicine tongue diagnosis do not weaken shape, texture, and similar features in the tongue image, cannot highlight color features, and cannot classify the tongue coating well.
According to another aspect of the present embodiment, a system for classifying tongue proper and tongue coating based on deep learning is provided. The system comprises: a tongue quality and tongue coating image segmentation module, configured to segment the tongue image into a tongue quality image and a tongue coating image by using a quality-coating separation algorithm; a rectangular image conversion module, configured to convert the tongue quality image into a rectangular tongue quality image by using a tongue quality image conversion algorithm and to convert the tongue coating image into a rectangular tongue coating image by using a tongue coating image conversion algorithm; a typical tongue quality and tongue coating determination module, configured to determine representative tongue quality images labeled by a number of experienced clinical practitioners of traditional Chinese medicine as typical tongue quality images, and representative tongue coating images labeled by the same practitioners as typical tongue coating images;
a Euclidean distance determination module, configured to determine the tongue quality Euclidean distance between the RGB mean of each unlabeled rectangular tongue quality image and the RGB mean of each typical rectangular tongue quality image, and the tongue coating Euclidean distance between the RGB mean of each unlabeled rectangular tongue coating image and the RGB mean of each typical rectangular tongue coating image; a color sample library creation module, configured to sort the unlabeled rectangular tongue quality images by tongue quality Euclidean distance, determine tongue quality color samples, and create a tongue quality color sample library, and to sort the unlabeled rectangular tongue coating images by tongue coating Euclidean distance, determine tongue coating color samples, and create a tongue coating color sample library; and a training and classification module, configured to train on the tongue quality color samples to obtain a tongue quality color classification model, train on the tongue coating color samples to obtain a tongue coating color classification model, and perform tongue quality color classification and tongue coating color classification with the respective models.
Optionally, the rectangular image conversion module comprises: a total pixel count determination submodule, configured to determine the total number of tongue proper pixels in the tongue proper image and the total number of tongue coating pixels in the tongue coating image; a rectangular image side length determination submodule, configured to determine the side length of the rectangular tongue proper image from the total number of tongue proper pixels, and to determine the side length of the rectangular tongue coating image from the total number of tongue coating pixels; and a rectangular image filling submodule, configured to fill the tongue proper pixels into the rectangular tongue proper image row by row, and to fill the tongue coating pixels into the rectangular tongue coating image row by row.
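The patent does not give the side-length formula, but a natural reading of "determine the side length from the total number of pixels" and "fill row by row" is the square packing sketched below. The function name `to_rectangular`, the use of `ceil(sqrt(N))` as the side length, and the zero padding of the unfilled tail are assumptions for illustration only.

```python
import math
import numpy as np

def to_rectangular(pixels):
    """Pack a flat (N, 3) array of foreground RGB pixels into a square image.

    `pixels` holds only the tongue proper (or tongue coating) pixels kept by
    the segmentation step; background pixels are already discarded.
    """
    pixels = np.asarray(pixels, dtype=np.uint8)
    n = len(pixels)
    side = math.ceil(math.sqrt(n))            # smallest square that fits N pixels
    rect = np.zeros((side, side, 3), dtype=np.uint8)
    flat = rect.reshape(-1, 3)                # row-major view of the square image
    flat[:n] = pixels                         # fill row by row; the tail stays zero
    return rect
```

Packing the irregular tongue region into a fixed rectangle gives the downstream color statistics and classifier a regular input shape.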
Optionally, the tongue proper images cover six tongue proper colors, including pale white tongue, pale purple tongue, pale red tongue, red tongue, and dark red tongue; for each tongue proper color, one tongue proper image that is representative of that color is selected as the typical tongue proper image. The tongue coating images cover four tongue coating colors: white coating, yellow coating, combined yellow and white coating, and gray-black coating; for each tongue coating color, one tongue coating image that is representative of that color is selected as the typical tongue coating image.
Optionally, the Euclidean distance determination module comprises: a tongue proper image RGB mean determination submodule, configured to determine the RGB mean of the unlabeled rectangular tongue proper image and the RGB mean of the typical rectangular tongue proper image; and a tongue proper Euclidean distance determination submodule, configured to determine the tongue proper Euclidean distance from the RGB mean of the unlabeled rectangular tongue proper image and the RGB mean of the typical rectangular tongue proper image.
Optionally, the Euclidean distance determination module further comprises: a tongue coating image RGB mean determination submodule, configured to determine the RGB mean of the unlabeled rectangular tongue coating image and the RGB mean of the typical rectangular tongue coating image; and a tongue coating Euclidean distance determination submodule, configured to determine the tongue coating Euclidean distance from the RGB mean of the unlabeled rectangular tongue coating image and the RGB mean of the typical rectangular tongue coating image.
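A minimal sketch of the two kinds of submodules above: the RGB mean of a rectangular image, and the Euclidean distance between two such means. The function names are illustrative; the patent does not say whether padding pixels are excluded from the mean, so here the mean is simply taken over all pixels.

```python
import numpy as np

def rgb_mean(rect_img):
    """Mean R, G, B over all pixels of a rectangular tongue image."""
    return np.asarray(rect_img, dtype=float).reshape(-1, 3).mean(axis=0)

def rgb_distance(img_a, img_b):
    """Euclidean distance between the RGB means of two rectangular images."""
    return float(np.linalg.norm(rgb_mean(img_a) - rgb_mean(img_b)))
```

This scalar distance is what the sorting step uses to rank unlabeled images by similarity to a typical color image.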
The system for classifying tongue proper and tongue coating based on deep learning according to this embodiment of the present invention corresponds to the method for classifying tongue proper and tongue coating based on deep learning according to another embodiment of the present invention, and is therefore not described again here.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein. The solutions in the embodiments of the present application may be implemented in various computer languages, for example the object-oriented programming language Java and the scripting language JavaScript.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for classifying tongue proper and tongue coating based on deep learning is characterized by comprising the following steps:
segmenting a tongue image into a tongue proper image and a tongue coating image by using a tongue proper and tongue coating separation algorithm;
converting the tongue proper image into a rectangular tongue proper image by using a tongue proper image conversion algorithm, and converting the tongue coating image into a rectangular tongue coating image by using a tongue coating image conversion algorithm;
determining representative tongue proper images labeled by a plurality of experienced clinical physicians as typical tongue proper images, and determining representative tongue coating images labeled by the plurality of experienced clinical physicians as typical tongue coating images;
determining a tongue proper Euclidean distance between the RGB mean of an unlabeled rectangular tongue proper image and the RGB mean of a typical rectangular tongue proper image, and determining a tongue coating Euclidean distance between the RGB mean of an unlabeled rectangular tongue coating image and the RGB mean of a typical rectangular tongue coating image;
sorting the unlabeled rectangular tongue proper images by tongue proper Euclidean distance, determining tongue proper color samples, and creating a tongue proper color sample library; and sorting the unlabeled rectangular tongue coating images by tongue coating Euclidean distance, determining tongue coating color samples, and creating a tongue coating color sample library; and
training on the tongue proper color samples to obtain a tongue proper color classification model, training on the tongue coating color samples to obtain a tongue coating color classification model, performing tongue proper color classification with the tongue proper color classification model, and performing tongue coating color classification with the tongue coating color classification model.
2. The method of claim 1, wherein converting the tongue proper image into a rectangular tongue proper image by using a tongue proper image conversion algorithm, and converting the tongue coating image into a rectangular tongue coating image by using a tongue coating image conversion algorithm, comprises:
determining the total number of tongue proper pixels in the tongue proper image, and determining the total number of tongue coating pixels in the tongue coating image;
determining the side length of the rectangular tongue proper image from the total number of tongue proper pixels, and determining the side length of the rectangular tongue coating image from the total number of tongue coating pixels; and
filling the tongue proper pixels into the rectangular tongue proper image row by row, and filling the tongue coating pixels into the rectangular tongue coating image row by row.
3. The method of claim 1, wherein:
the tongue proper images cover six tongue proper colors, including pale white tongue, pale purple tongue, pale red tongue, red tongue, and dark red tongue, and for each tongue proper color, one tongue proper image that is representative of that color is selected as the typical tongue proper image; and
the tongue coating images cover four tongue coating colors: white coating, yellow coating, combined yellow and white coating, and gray-black coating, and for each tongue coating color, one tongue coating image that is representative of that color is selected as the typical tongue coating image.
4. The method of claim 3, wherein determining the tongue proper Euclidean distance between the RGB mean of the unlabeled rectangular tongue proper image and the RGB mean of the typical rectangular tongue proper image comprises:
determining the RGB mean of the unlabeled rectangular tongue proper image, and determining the RGB mean of the typical rectangular tongue proper image; and
determining the tongue proper Euclidean distance from the RGB mean of the unlabeled rectangular tongue proper image and the RGB mean of the typical rectangular tongue proper image.
5. The method of claim 4, wherein determining the tongue coating Euclidean distance between the RGB mean of the unlabeled rectangular tongue coating image and the RGB mean of the typical rectangular tongue coating image further comprises:
determining the RGB mean of the unlabeled rectangular tongue coating image, and determining the RGB mean of the typical rectangular tongue coating image; and
determining the tongue coating Euclidean distance from the RGB mean of the unlabeled rectangular tongue coating image and the RGB mean of the typical rectangular tongue coating image.
6. A system for classifying tongue proper and tongue coating based on deep learning, comprising:
a tongue proper and tongue coating image segmentation module, configured to segment a tongue image into a tongue proper image and a tongue coating image by using a tongue proper and tongue coating separation algorithm;
a rectangular image conversion module, configured to convert the tongue proper image into a rectangular tongue proper image by using a tongue proper image conversion algorithm, and to convert the tongue coating image into a rectangular tongue coating image by using a tongue coating image conversion algorithm;
a typical tongue proper and tongue coating determination module, configured to determine representative tongue proper images labeled by a plurality of experienced clinical physicians as typical tongue proper images, and to determine representative tongue coating images labeled by the plurality of experienced clinical physicians as typical tongue coating images;
a Euclidean distance determination module, configured to determine a tongue proper Euclidean distance between the RGB mean of an unlabeled rectangular tongue proper image and the RGB mean of a typical rectangular tongue proper image, and to determine a tongue coating Euclidean distance between the RGB mean of an unlabeled rectangular tongue coating image and the RGB mean of a typical rectangular tongue coating image;
a color sample library creation module, configured to sort the unlabeled rectangular tongue proper images by tongue proper Euclidean distance, determine tongue proper color samples, and create a tongue proper color sample library, and to sort the unlabeled rectangular tongue coating images by tongue coating Euclidean distance, determine tongue coating color samples, and create a tongue coating color sample library; and
a training and classification module, configured to train on the tongue proper color samples to obtain a tongue proper color classification model, train on the tongue coating color samples to obtain a tongue coating color classification model, perform tongue proper color classification with the tongue proper color classification model, and perform tongue coating color classification with the tongue coating color classification model.
7. The system of claim 6, wherein the rectangular image conversion module comprises:
a total pixel count determination submodule, configured to determine the total number of tongue proper pixels in the tongue proper image and the total number of tongue coating pixels in the tongue coating image;
a rectangular image side length determination submodule, configured to determine the side length of the rectangular tongue proper image from the total number of tongue proper pixels, and to determine the side length of the rectangular tongue coating image from the total number of tongue coating pixels; and
a rectangular image filling submodule, configured to fill the tongue proper pixels into the rectangular tongue proper image row by row, and to fill the tongue coating pixels into the rectangular tongue coating image row by row.
8. The system of claim 6, wherein:
the tongue proper images cover six tongue proper colors, including pale white tongue, pale purple tongue, pale red tongue, red tongue, and dark red tongue, and for each tongue proper color, one tongue proper image that is representative of that color is selected as the typical tongue proper image; and
the tongue coating images cover four tongue coating colors: white coating, yellow coating, combined yellow and white coating, and gray-black coating, and for each tongue coating color, one tongue coating image that is representative of that color is selected as the typical tongue coating image.
9. The system of claim 8, wherein the Euclidean distance determination module comprises:
a tongue proper image RGB mean determination submodule, configured to determine the RGB mean of the unlabeled rectangular tongue proper image and the RGB mean of the typical rectangular tongue proper image; and
a tongue proper Euclidean distance determination submodule, configured to determine the tongue proper Euclidean distance from the RGB mean of the unlabeled rectangular tongue proper image and the RGB mean of the typical rectangular tongue proper image.
10. The system of claim 9, wherein the Euclidean distance determination module further comprises:
a tongue coating image RGB mean determination submodule, configured to determine the RGB mean of the unlabeled rectangular tongue coating image and the RGB mean of the typical rectangular tongue coating image; and
a tongue coating Euclidean distance determination submodule, configured to determine the tongue coating Euclidean distance from the RGB mean of the unlabeled rectangular tongue coating image and the RGB mean of the typical rectangular tongue coating image.
CN202011536981.3A 2020-12-23 2020-12-23 Method and system for classifying tongue proper and tongue coating based on deep learning Active CN112613557B (en)
