CN110648336A - Method and device for segmenting tongue proper and tongue coating - Google Patents

Method and device for segmenting tongue proper and tongue coating

Info

Publication number
CN110648336A
CN110648336A (application CN201910900014.1A)
Authority
CN
China
Prior art keywords
tongue
region
image
area
face image
Prior art date
Legal status
Granted
Application number
CN201910900014.1A
Other languages
Chinese (zh)
Other versions
CN110648336B (en)
Inventor
肖俊勇
马方励
胡明华
Current Assignee
Infinitus China Co Ltd
Original Assignee
Infinitus China Co Ltd
Priority date
Filing date
Publication date
Application filed by Infinitus China Co Ltd filed Critical Infinitus China Co Ltd
Priority to CN201910900014.1A
Publication of CN110648336A
Application granted
Publication of CN110648336B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/90: Determination of colour characteristics
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The application provides a method and a device for segmenting the tongue proper and the tongue coating. A target region (comprising an oral cavity region and a tongue region) in a face image is determined based on feature points in the face image; the tongue region is segmented from the target region based on the color components of the target region in HSV (hue, saturation, value) space; and the tongue proper region and the tongue coating region are segmented from the tongue region based on the color components of the tongue region in CIELAB space.

Description

Method and device for segmenting tongue proper and tongue coating
Technical Field
The present application relates to the field of image processing, and in particular to a method and an apparatus for segmenting the tongue proper and the tongue coating.
Background
In the medical field, the tongue is an important basis for disease diagnosis. Medically, the tongue can be divided into the tongue proper and the tongue coating; the color and form of the tongue proper, as well as the thickness, distribution and color of the tongue coating, can all serve as diagnostic bases.
At present, tongue diagnosis is performed manually and has not been automated. However, automated tongue diagnosis is a clear trend, and within it, how to segment the tongue proper and the tongue coating from an image is one of the key research problems.
Disclosure of Invention
The application provides a method and a device for segmenting the tongue proper and the tongue coating, aiming to solve the problem of how to segment the tongue proper and the tongue coating from an image.
In order to achieve the above object, the present application provides the following technical solutions:
a method for dividing tongue proper and tongue coating comprises the following steps:
acquiring a face image, wherein the face image comprises an oral cavity area and a tongue area;
determining a target region in the face image based on feature points in the face image, the target region including the oral cavity region and the tongue region;
segmenting the tongue region from the target region based on color components of the target region in HSV space;
and segmenting a tongue proper region and a tongue coating region from the tongue region based on the color components of the tongue region in the CIELAB space.
Optionally, the determining a target region in the face image based on the feature points in the face image includes:
inputting the face image into a preset neural network model to obtain the position information of the feature points of the face image output by the neural network model; the neural network model is used for extracting convolution characteristics of the face image and outputting position information of the characteristic points of the face image based on the convolution characteristics;
and determining the position information of the target area in the face image based on the position information of the feature points and a preset operation rule.
Optionally, the segmenting the tongue region from the target region based on the color component of the target region in the HSV space includes:
calculating an H component and a V component of the target region in an HSV space;
carrying out binarization processing on the H component and the V component;
fusing the binarized H component and V component, and thresholding the fused result to obtain a binary image;
taking the maximum connected domain in the binary image as a mask image;
and performing AND operation on the mask image and the target region to obtain the tongue region.
Optionally, the face image and the target area are both RGB space images;
before the calculating the H component and the V component of the target region in the HSV space, the method further comprises the following steps:
converting the target region from the RGB space to the HSV space.
Optionally, before the step of using the maximum connected component in the binarized image as a mask image, the method further includes:
performing morphological opening operation and closing operation on the binary image to obtain a processed binary image;
the step of taking the maximum connected domain in the binarized image as a mask image comprises the following steps:
and taking the maximum connected domain in the processed binary image as the mask image.
Optionally, the segmenting a tongue proper region and a tongue coating region from the tongue region based on the color components of the tongue region in the CIELAB space includes:
determining a threshold value of Otsu's algorithm based on the a component of the tongue region in the CIELAB space;
obtaining seed points according to the threshold value;
clustering by using the seed points to obtain a tongue proper region mask image and a tongue coating region mask image;
and performing an AND operation on the tongue proper region mask image and the tongue region to obtain the tongue proper region, and performing an AND operation on the tongue coating region mask image and the tongue region to obtain the tongue coating region.
Optionally, the face image, the target region and the tongue region are all RGB space images;
prior to the calculating the a component of the tongue region in the CIELAB space, further comprising:
converting the tongue region from the RGB space to the CIELAB space.
Optionally, after the tongue proper region and the tongue coating region are obtained by segmentation from the tongue region, the method further includes:
performing morphological opening and closing operations on the segmented tongue proper region and tongue coating region.
A device for segmenting the tongue proper and the tongue coating, comprising:
the image acquisition module is used for acquiring a face image, wherein the face image comprises an oral cavity area and a tongue area;
a first segmentation module to determine a target region in the face image based on feature points in the face image, the target region including the oral cavity region and the tongue region;
the second segmentation module is used for segmenting the tongue region from the target region based on the color components of the target region in the HSV space;
and the third segmentation module is used for segmenting the tongue proper region and the tongue coating region from the tongue region based on the color components of the tongue region in the CIELAB space.
A device for segmenting the tongue proper and the tongue coating, comprising:
a memory and a processor;
the memory is used for storing programs;
the processor is used for executing the program to implement the above tongue proper and tongue coating segmentation method.
A storage medium having a program stored thereon which, when executed by a processor, implements the above tongue proper and tongue coating segmentation method.
The method and the device for segmenting the tongue proper and the tongue coating determine a target region (including an oral cavity region and a tongue region) in a face image based on feature points in the face image, segment the tongue region from the target region based on the color components of the target region in HSV (hue, saturation, value) space, and segment the tongue proper region and the tongue coating region from the tongue region based on the color components of the tongue region in CIELAB space. The tongue proper region and the tongue coating region are thus segmented from the face image in a layered, progressive manner based on feature points and on the color components of different color spaces, laying a foundation for automated tongue diagnosis.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flowchart of a method for segmenting the tongue proper and the tongue coating according to an embodiment of the present application;
fig. 2 is a specific flowchart for identifying feature points from a face image according to the embodiment of the present disclosure;
fig. 3 is a specific flowchart for segmenting a tongue region from a target region based on color components of the target region in HSV space, disclosed in an embodiment of the present application;
FIG. 4 is a flowchart of segmenting the tongue proper region and the tongue coating region from the tongue region based on the color components of the tongue region in the CIELAB space, as disclosed in an embodiment of the present application;
FIG. 5 is an exemplary diagram of a target area;
FIG. 6 is an exemplary view of the tongue region sectioned from FIG. 5;
FIG. 7(a) is an exemplary view of the tongue proper region segmented from the tongue region;
FIG. 7(b) is an exemplary view of the tongue coating region segmented from the tongue region;
fig. 8 is a schematic structural diagram of a tongue proper and tongue coating segmentation device disclosed in an embodiment of the present application.
Detailed Description
The method for segmenting the tongue proper and the tongue coating disclosed in the embodiments of the application aims to automatically segment the tongue proper region and the tongue coating region from an image. It should be noted that the tongue proper and the tongue coating are not limited to human tissue and can also be animal tissue, so the image used for segmentation is not limited to a human face image and can also be an animal face image. A human tongue and a human face image will be used as the running example below.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 shows a method for segmenting the tongue proper and the tongue coating, disclosed in an embodiment of the present application, which comprises the following steps:
s101: and acquiring a human face image.
The face image comprises an oral cavity area and a tongue area.
Specifically, the face image is acquired as follows: the patient opens the mouth and extends the tongue, and an image acquisition device, such as a camera, captures an image of the patient's facial region.
It should be noted that this embodiment does not directly acquire a tongue image but a face image. Because the tongue is located in the oral cavity, even an image of only the tongue region inevitably includes the oral cavity region and part of the face, such as the corners of the mouth. These regions in fact help locate the tongue, and a face image is easier to capture, so this embodiment uses the face image.
Optionally, to preserve the authenticity of the tongue color in the image as far as possible, i.e., to ensure the fidelity of the tongue image, there are certain requirements on the quality of the camera: color correction with a standard color chart should be performed before shooting, and a uniform white light source may additionally be used. Specific corrective measures can be found in the prior art.
In this embodiment, examples of the parameters of the face image acquired by the camera may be: the image is stored as a 24-bit RGB image in format bmp or TIFF, the size of the image is no less than 1280 x 1024 pixels, and the oral area occupies no less than 320 x 240 pixels in the image.
S102: and determining a target area in the face image based on the feature points in the face image.
Wherein the target area includes an oral area and a tongue area.
As described above, the oral cavity region and the tongue region are included in the face image, and therefore, the step S102 is directed to extracting the oral cavity region and the tongue region from the face image.
Because the positions and distances of certain facial points, such as the corners of the mouth and the eyes, are fixed relative to the mouth and tongue, these points can be used as feature points to locate the oral cavity region.
In this step, the feature points may include, but are not limited to, the eye corners, mouth, nose, eyebrows and cheeks; for example, 68 points may be selected from these regions as feature points. The position information of the feature points is identified from the face image, and the approximate positions of the oral cavity and tongue regions are then determined from the positional and distance relationships between the feature points and those regions. These relationships can be estimated in advance from the corresponding values of the feature points and the oral cavity and tongue regions in a large number of samples; for example, the distance and angle between the eye corner and the mouth corner, measured over many samples, can serve as one of the preset relationships.
An example of a target area is shown in fig. 5. A specific process of identifying feature points from a face image can be seen in fig. 2.
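As a concrete sketch of the positioning rule above, the snippet below derives a target-region bounding box from mouth-region landmarks by expanding their extent with a fixed margin. The function name, the landmark coordinates and the margin factor of 0.5 are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: derive a target-region bounding box from facial
# landmarks (e.g. mouth-corner points of a 68-point landmark set).
# The 0.5 margin factor is an illustrative assumption.

def mouth_bounding_box(landmarks, margin=0.5):
    """landmarks: iterable of (x, y) mouth-region points -> (x0, y0, x1, y1)."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return (min(xs) - margin * w, min(ys) - margin * h,
            max(xs) + margin * w, max(ys) + margin * h)

# Example with made-up landmark coordinates:
box = mouth_bounding_box([(100, 200), (160, 195), (130, 220)])
```

In practice the preset operation rule would be fitted from many samples, as the text describes; the fixed margin here merely stands in for that learned relationship.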
S103: and segmenting the tongue region from the target region based on the color components of the target region in the HSV space.
The main purpose of this step is to extract the tongue region from the target region, while discarding useless regions such as corners of the mouth, teeth, etc.
HSV is a relatively intuitive color model that includes the following color components: hue (Hue, H), Saturation (S) and lightness (Value, V).
During research, the applicant found that because the lips, oral cavity and tongue are similar in color, threshold segmentation on the RGB components cannot separate the tongue from the lips and oral cavity. After the image is converted to the HSV model, however, the histogram of each channel (i.e. H, S and V) shows obvious peaks, so the tongue region can be separated from the oral cavity region. Therefore, in this step the tongue region is segmented from the target region based on the color components of the target region in HSV space.
Fig. 6 is a view of the tongue region sectioned from fig. 5. The specific implementation process of S103 can be seen in fig. 3.
S104: the tongue texture region and the tongue coating region are segmented from the tongue region based on the color components of the tongue region in CIELAB space.
The primary purpose of this step is to separate the tongue proper region from the tongue coating region. Since the tongue proper and the tongue coating each carry distinct medical meaning, their diagnostic significance can only be determined once they are separated.
The CIELAB space represents the position of a color using three coordinate axes: L, a and b. The L axis represents lightness, with black and white at its bottom and top respectively; +a denotes red and -a denotes green; +b denotes yellow and -b denotes blue. The gradation of any color in nature can be represented by L, a and b.
During research, the applicant found that the tongue proper region and the tongue coating region have clearly distinguishing characteristics in the CIELAB space, so segmenting them in the CIELAB space gives a better result.
The specific implementation process of S104 can be seen in fig. 4.
S105: and performing morphological opening and closing operation on the tongue texture area and the tongue fur area obtained by segmentation.
The opening and closing operations of mathematical morphology have the effect of smoothing the edges of the image. In this embodiment, the number of the opening operation and the closing operation may be set according to the requirement, and is not limited herein.
Fig. 7(a) shows the tongue proper region segmented from the tongue region, and fig. 7(b) shows the tongue coating region segmented from the tongue region.
S105 is an optional step and may not be performed.
The flow shown in fig. 1 has the following beneficial effects:
1. It realizes automatic segmentation of the tongue proper and the tongue coating, providing an effective basic means for automated medical examination.
2. The object processed is a face image rather than a tongue image, which is easier to acquire.
3. Based on the feature points, the oral cavity region and the tongue region are first segmented from the face image; the tongue region is then segmented based on the color components in HSV space; and the tongue proper region and the tongue coating region are finally segmented based on the color components in CIELAB space, in a layered, progressive manner.
Fig. 2 is a specific process for identifying feature points from a face image, which includes the following steps:
s201: and training the neural network model.
An example of the neural network model is a CNN. Specifically, the neural network model includes convolutional layers, pooling layers, activation layers, fully connected layers and a classification layer. Convolution is a linear, translation-invariant operation consisting of locally weighted combinations of the input signal; through the convolutional layers, many new image features can be obtained. The pooling layers rapidly reduce the dimensionality of the features; the activation layers apply a nonlinear mapping to the features; and the fully connected and classification layers classify and discriminate the features.
In this embodiment, a certain number of face samples are collected from a face-feature-detection database as training samples, and label images of the sample images are obtained. Each label image marks 68 feature points of the face, including pixel points of the eye corners, mouth, nose, eyebrows and both cheeks, against a pure black or white background.
The neural network model is trained with the sample images and the label images. Specifically, a sample image is fed through the neural network model to obtain the feature points output by the model; a preset error function then measures the difference between the model's output feature points and the feature points in the label image, and this difference is minimized with algorithms such as stochastic gradient descent to obtain the model parameters.
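The training principle just described, minimizing a preset error between predicted and labeled feature points by gradient descent, can be sketched in miniature. A linear model stands in for the CNN here; all shapes, data and the learning rate are made-up illustrative values, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))        # 100 "images" as 8-dim feature vectors
W_true = rng.normal(size=(8, 4))     # 4 outputs ~ (x, y) of two landmarks
Y = X @ W_true                       # "label images": ground-truth landmarks

W = np.zeros((8, 4))                 # model parameters to be learned
for _ in range(1000):
    err = X @ W - Y                  # difference measured by the error function
    W -= 0.05 * X.T @ err / len(X)   # gradient step minimizing mean-squared error
```

After enough steps the learned parameters approach the generating ones, which is the same convergence behavior the stochastic-gradient training of the landmark network relies on.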
S202: and inputting the face image obtained in the step S201 into the trained neural network model to obtain the position information of the feature points of the face image output by the neural network model.
Obtaining the feature points of the face image with a neural network model gives high accuracy. It should be noted that this is only one implementation; besides the neural network model, other methods, such as existing feature extraction methods, may also be used to extract the feature points.
Fig. 3 is a specific process of segmenting the tongue region from the target region based on the color component of the target region in the HSV space, which includes the following steps:
s301: the target area is converted from the RGB space to the HSV space.
As described above, the face image captured by the camera is an RGB image, so the target region is also an RGB image; the RGB image therefore needs to be converted into an HSV image before the HSV components can be extracted.
RGB is designed based on the principle of additive color mixing, commonly described as a mixture of three primaries: Red (R), Green (G) and Blue (B).
The conversion from the RGB space to the HSV space is as follows:

R' = R / 255
G' = G / 255
B' = B / 255
Cmax = max(R', G', B'), Cmin = min(R', G', B'), D = Cmax - Cmin

H = 0                            if D = 0
H = 60 x (((G' - B') / D) mod 6) if Cmax = R'
H = 60 x ((B' - R') / D + 2)     if Cmax = G'
H = 60 x ((R' - G') / D + 4)     if Cmax = B'

S = 0 if Cmax = 0, otherwise S = D / Cmax

V = Cmax
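For illustration, the piecewise conversion can be implemented directly. This is a generic sketch of the standard RGB-to-HSV formulas, not code from the patent; H is returned in degrees, S and V in [0, 1].

```python
def rgb_to_hsv(r, g, b):
    """Convert 8-bit R, G, B to (H in degrees, S, V)."""
    rp, gp, bp = r / 255.0, g / 255.0, b / 255.0
    cmax, cmin = max(rp, gp, bp), min(rp, gp, bp)
    delta = cmax - cmin
    if delta == 0:                             # achromatic: hue undefined, use 0
        h = 0.0
    elif cmax == rp:
        h = (60 * ((gp - bp) / delta)) % 360
    elif cmax == gp:
        h = 60 * ((bp - rp) / delta) + 120
    else:
        h = 60 * ((rp - gp) / delta) + 240
    s = 0.0 if cmax == 0 else delta / cmax
    return h, s, cmax                          # V is simply Cmax
```

The result agrees with the standard-library `colorsys.rgb_to_hsv` up to the different hue scale (degrees versus a [0, 1] fraction).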
s302: and calculating the H component and the V component of the target region in the HSV space.
S303: and performing binarization processing on the H component and the V component.
S304: and fusing the H component and the V component after binarization, and performing thresholding (namely binarization) to obtain a binarization image.
The fusion is to calculate, for example, add the values of the H and V components at a position of any one pixel to obtain a new value as a pixel value at the position after the fusion. Thresholding is typically based on an Otsu algorithm. .
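A minimal sketch of the fusion-then-threshold step on two tiny binarized masks. The simple greater-than-zero rule used to re-binarize here is an assumption standing in for the Otsu thresholding the text describes.

```python
import numpy as np

h_bin = np.array([[0, 255], [255, 0]], dtype=np.uint16)   # binarized H component
v_bin = np.array([[0, 255], [0, 255]], dtype=np.uint16)   # binarized V component

fused = h_bin + v_bin                                 # pixel-wise sum (fusion)
mask = np.where(fused > 0, 255, 0).astype(np.uint8)   # re-binarize the fused map
```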
S305: and performing morphological opening operation and closing operation on the binary image to obtain a processed binary image.
Morphological opening and closing operations break small connections in the image, fill elongated cracks and smooth the image. This is an optional step and may be skipped.
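As an illustration, opening and closing with a 3x3 structuring element can be built from basic erosion and dilation. This is a generic sketch, not the patent's implementation; a real pipeline would typically use a library routine such as OpenCV's `cv2.morphologyEx`.

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 3x3 structuring element."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask):
    """Binary erosion with a 3x3 structuring element."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def opening(mask):   # erosion then dilation: removes small specks
    return dilate(erode(mask))

def closing(mask):   # dilation then erosion: fills small holes and cracks
    return erode(dilate(mask))
```

Opening deletes an isolated pixel but leaves a solid 3x3 block intact, while closing fills a one-pixel hole, which is exactly the "break small connections, fill cracks" behavior described above.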
S306: and taking the maximum connected domain in the processed binary image as a mask image.
The maximum connected region may be searched based on the 8-neighborhood principle; ways of finding the maximum connected region in an image can be found in the prior art and are not repeated here.
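A self-contained sketch of extracting the maximum 8-connected component with a breadth-first flood fill (it assumes the mask has at least one foreground pixel); production code would more likely use a library routine such as OpenCV's `cv2.connectedComponents`.

```python
import numpy as np
from collections import deque

def largest_component(mask):
    """Keep only the largest 8-connected foreground region of a binary mask."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    best_label, best_size, next_label = 0, 0, 1
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not labels[sy, sx]:
                size, q = 0, deque([(sy, sx)])
                labels[sy, sx] = next_label
                while q:                      # BFS flood fill over 8-neighbors
                    y, x = q.popleft()
                    size += 1
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny, nx] and not labels[ny, nx]):
                                labels[ny, nx] = next_label
                                q.append((ny, nx))
                if size > best_size:
                    best_label, best_size = next_label, size
                next_label += 1
    return (labels == best_label).astype(np.uint8)
```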
S307: and performing AND operation on the mask image and the target area to obtain a tongue area.
During research, the applicant found that the H component displays the contours of the tongue tip and the two sides of the tongue body well, while the V component displays the contour of the tongue root well; fusing the two binarized components yields a rough contour of the tongue. Based on this finding, segmenting the tongue region with the flow shown in fig. 3 achieves high accuracy.
Fig. 4 is a flowchart of segmenting the tongue proper region and the tongue coating region from the tongue region based on the color components of the tongue region in CIELAB space; it includes the following steps:
s401: the tongue area is converted from RGB space to CIELAB space.
As described above, the tongue region is obtained by performing an AND operation on the mask image and the target region; since the target region is an RGB image, the tongue region is also an RGB image.
Therefore, to acquire the CIELAB component, the tongue region needs to be converted from the RGB space to the CIELAB space first. The conversion relationship between the RGB space and the CIELAB space is as follows:
First, the gamma-corrected RGB values are linearized and converted to XYZ space (D65 reference white):

X = 0.4124 R' + 0.3576 G' + 0.1805 B'
Y = 0.2126 R' + 0.7152 G' + 0.0722 B'
Z = 0.0193 R' + 0.1192 G' + 0.9505 B'

where R', G', B' are the linearized RGB components. XYZ is then converted to CIELAB:

f(t) = t^(1/3)                   if t > (6/29)^3
f(t) = t / (3 (6/29)^2) + 4/29   otherwise

L = 116 f(Y/Yn) - 16
a = 500 (f(X/Xn) - f(Y/Yn))
b = 200 (f(Y/Yn) - f(Z/Zn))

where (Xn, Yn, Zn) is the reference white point.
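The two-stage conversion can be implemented directly. This is a generic sketch of the standard sRGB-to-CIELAB conversion under a D65 white point; the matrix coefficients are the usual sRGB values, not numbers taken from the patent.

```python
def rgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (D65 reference white)."""
    def lin(c):                      # sRGB gamma expansion
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl    # linear RGB -> XYZ
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    xn, yn, zn = 0.95047, 1.0, 1.08883             # D65 white point
    def f(t):                        # piecewise cube root from the formulas above
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

As a sanity check, white maps to L near 100 with a and b near 0, and a pure red pixel gets a clearly positive a component, which is the property the tongue proper/coating split relies on.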
s402: the threshold of the Otsu algorithm is determined based on the a component of the tongue region in the CIELAB space.
The tongue coating is a thin layer spread on the back of the tongue, usually white or yellow and essentially never red or green, while the tongue proper is usually red, magenta or purple, with little white. During research, the applicant found that although the tongue proper and the tongue coating differ in color, the coating is attached to the tongue proper, so the two are difficult to separate; they do, however, differ in the a component. Segmentation is therefore based on the a component.
The Otsu algorithm divides an image into a background part and a target part according to its gray-level characteristics. The larger the between-class variance of background and target, the larger the difference between the two parts of the image; when part of the target is mistaken for background, or vice versa, this difference shrinks. Thus, a segmentation that maximizes the between-class variance minimizes the probability of misclassification.
The threshold of the algorithm divides the pixels into two parts: pixels above the threshold form one class (target or background) and pixels below it form the other.
The specific way of determining the threshold of the algorithm can be seen in the prior art, and is not described herein.
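For reference, a minimal histogram-based implementation of Otsu's threshold (maximizing the between-class variance) on a one-dimensional array of channel values; this is a generic sketch, not the patent's exact procedure.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold that maximizes the between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    total = hist.sum()
    sum_all = float((hist * np.arange(bins)).sum())
    best_t, best_var, w0, sum0 = 0, 0.0, 0, 0.0
    for t in range(bins):
        w0 += hist[t]                         # pixels in bins 0..t
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                        # mean of the lower class
        m1 = (sum_all - sum0) / (total - w0)  # mean of the upper class
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return edges[best_t + 1]                  # map bin index back to value range
```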
S403: and obtaining a seed point according to the threshold value.
For this step in this embodiment, the threshold is a value of the a component: pixels whose a value is above the threshold serve as tongue proper seeds, and pixels whose a value is below it serve as tongue coating seeds.
Since segmentation by the threshold alone is not sufficiently accurate, in this embodiment the threshold is used only to determine the seed points.
S404: and clustering by using the seed points to obtain a tongue texture region mask image and a tongue fur region mask image.
Namely, the tongue texture region mask image is obtained by clustering tongue texture seed points, and the tongue coating region mask image is obtained by clustering tongue coating seed points.
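The seed-then-cluster idea can be sketched as a single nearest-mean assignment pass over the a channel. The margin of 10 that turns only confident pixels into seeds, and the function name, are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def seeded_two_class(a_channel, threshold, margin=10):
    """Split an a-channel image into tongue-proper (1) and coating (0) pixels.

    The Otsu threshold is used only to pick confident seed pixels; every
    pixel is then assigned to the nearer of the two seed means.
    """
    body_seeds = a_channel[a_channel > threshold + margin]   # redder pixels
    coat_seeds = a_channel[a_channel < threshold - margin]   # paler pixels
    m_body, m_coat = body_seeds.mean(), coat_seeds.mean()
    body_mask = np.abs(a_channel - m_body) < np.abs(a_channel - m_coat)
    return body_mask.astype(np.uint8)
```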
S405: and performing AND operation on the tongue texture region mask image and the tongue region to obtain a tongue texture region, and performing AND operation on the tongue coating region mask image and the tongue region to obtain a tongue coating region.
As can be seen from the flow of fig. 4, the tongue proper region and the tongue coating region are segmented based on an improvement of the Otsu algorithm, which yields a more accurate segmentation result.
Fig. 8 shows a device for segmenting the tongue proper and the tongue coating according to an embodiment of the present application, comprising: an image acquisition module, a first segmentation module, a second segmentation module and a third segmentation module.
The image acquisition module is used for acquiring a face image, the face image comprising an oral cavity region and a tongue region. The first segmentation module is used for determining a target region in the face image based on the feature points in the face image, the target region including the oral cavity region and the tongue region. The second segmentation module is used for segmenting the tongue region from the target region based on the color components of the target region in HSV space. The third segmentation module is used for segmenting the tongue proper region and the tongue coating region from the tongue region based on the color components of the tongue region in CIELAB space.
Specifically, the first segmentation module determines a specific implementation manner of the target area in the face image based on the feature points in the face image as follows: inputting the face image into a preset neural network model to obtain position information of feature points of the face image output by the neural network model; the neural network model is used for extracting convolution characteristics of the face image, outputting position information of characteristic points of the face image based on the convolution characteristics, and determining position information of a target area in the face image based on the position information of the characteristic points and a preset operation rule.
The second segmentation module segments the tongue region from the target region based on the color components of the target region in the HSV space as follows: calculating the H component and the V component of the target region in HSV space; binarizing the H component and the V component; fusing the binarized components and thresholding the result to obtain a binary image; taking the maximum connected domain in the binary image as a mask image; and performing an AND operation on the mask image and the target region to obtain the tongue region. Further, the face image and the target region are both RGB space images; before calculating the H and V components, the second segmentation module also converts the target region from RGB space to HSV space, and before taking the maximum connected domain as the mask image, it performs morphological opening and closing operations on the binary image and uses the maximum connected domain of the processed binary image as the mask image.
The third segmentation module segments the tongue proper region and the tongue coating region from the tongue region based on the color components of the tongue region in the CIELAB space as follows: determining a threshold of the Otsu algorithm based on the a component of the tongue region in the CIELAB space; obtaining seed points according to the threshold; clustering with the seed points to obtain a tongue proper region mask image and a tongue coating region mask image; and performing an AND operation on the tongue proper region mask image and the tongue region to obtain the tongue proper region, and on the tongue coating region mask image and the tongue region to obtain the tongue coating region. Further, the face image, the target region and the tongue region are all RGB space images; the third segmentation module is further configured to convert the tongue region from RGB space to CIELAB space before calculating its a component.
Optionally, the apparatus may further include a morphology processing module for performing morphological opening and closing operations on the segmented tongue proper region and tongue coating region.
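A minimal sketch of this post-processing step, assuming scipy's default 3x3 cross structuring element (the embodiment does not specify one):

```python
import numpy as np
from scipy import ndimage

def refine_mask(mask, iterations=1):
    """Opening removes small speckle outside the region; closing fills
    small holes inside it. Structuring element and iteration count are
    assumptions, not taken from the patent."""
    opened = ndimage.binary_opening(mask, iterations=iterations)
    return ndimage.binary_closing(opened, iterations=iterations)
```

For example, an isolated stray pixel disappears after opening, while a one-pixel hole inside the region is filled by closing.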
The tongue proper and tongue coating segmentation device shown in fig. 8 progressively segments the tongue proper region and the tongue coating region from the face image, based on feature point extraction and on the color components of different color spaces. It achieves high accuracy and lays a foundation for automatic tongue diagnosis.
The embodiment of the application also discloses a device for segmenting the tongue proper and the tongue coating, comprising a memory and a processor. The memory is used for storing a program, and the processor is used for executing the program to implement the tongue proper and tongue coating segmentation method described above.
The embodiment of the application also discloses a storage medium on which a program is stored; when the program is executed by a processor, the tongue proper and tongue coating segmentation method described above is implemented.
If implemented in the form of software functional units and sold or used as independent products, the functions described in the methods of the embodiments of the present application may be stored in a storage medium readable by a computing device. Based on such understanding, the part of the embodiments of the present application that contributes to the prior art, or part of the technical solution, may be embodied as a software product stored in a storage medium and including several instructions for causing a computing device (which may be a personal computer, a server, a mobile computing device or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that are the same or similar, the embodiments may refer to one another.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

1. A method for segmenting the tongue proper and tongue coating, comprising:
acquiring a face image, wherein the face image comprises an oral cavity region and a tongue region;
determining a target region in the face image based on feature points in the face image, the target region including the oral cavity region and the tongue region;
segmenting the tongue region from the target region based on color components of the target region in HSV space;
and segmenting a tongue proper region and a tongue coating region from the tongue region based on color components of the tongue region in CIELAB space.
2. The method of claim 1, wherein determining the target region in the face image based on the feature points in the face image comprises:
inputting the face image into a preset neural network model to obtain the position information of the feature points of the face image output by the neural network model; the neural network model is used for extracting convolution characteristics of the face image and outputting position information of the characteristic points of the face image based on the convolution characteristics;
and determining the position information of the target area in the face image based on the position information of the feature points and a preset operation rule.
3. The method of claim 1, wherein the segmenting the tongue region from the target region based on color components of the target region in HSV space comprises:
calculating an H component and a V component of the target region in an HSV space;
carrying out binarization processing on the H component and the V component;
fusing the binarized H component and V component to obtain a binarized image;
taking the maximum connected domain in the binarized image as a mask image;
and performing an AND operation on the mask image and the target region to obtain the tongue region.
4. The method of claim 3, wherein the face image and the target area are both images of RGB space;
before the calculating the H component and the V component of the target region in the HSV space, the method further comprises the following steps:
converting the target region from the RGB space to the HSV space.
5. The method according to claim 3, further comprising, before the maximum connected domain in the binarized image is taken as a mask image:
performing morphological opening and closing operations on the binarized image to obtain a processed binarized image;
wherein the taking the maximum connected domain in the binarized image as a mask image comprises:
taking the maximum connected domain in the processed binarized image as the mask image.
6. The method of claim 1, wherein the segmenting the tongue proper region and the tongue coating region from the tongue region based on the color components of the tongue region in CIELAB space comprises:
determining a threshold of Otsu's algorithm based on the a component of the tongue region in the CIELAB space;
obtaining a seed point according to the threshold value;
clustering by using the seed points to obtain a tongue proper region mask image and a tongue coating region mask image;
and performing an AND operation on the tongue proper region mask image and the tongue region to obtain the tongue proper region, and performing an AND operation on the tongue coating region mask image and the tongue region to obtain the tongue coating region.
7. The method of claim 6, wherein the face image, the target region and the tongue region are images of RGB space;
prior to the calculating of the a component of the tongue region in the CIELAB space, further comprising:
converting the tongue region from the RGB space to the CIELAB space.
8. The method according to any one of claims 1-7, further comprising, after the tongue proper region and the tongue coating region are obtained by segmentation from the tongue region:
performing morphological opening and closing operations on the tongue proper region and the tongue coating region obtained by the segmentation.
9. A device for segmenting the tongue proper and tongue coating, comprising:
an image acquisition module used for acquiring a face image, wherein the face image comprises an oral cavity region and a tongue region;
a first segmentation module to determine a target region in the face image based on feature points in the face image, the target region including the oral cavity region and the tongue region;
the second segmentation module is used for segmenting the tongue region from the target region based on the color components of the target region in the HSV space;
and the third segmentation module is used for segmenting the tongue texture region and the tongue fur region from the tongue region based on the color components of the tongue region in the CIELAB space.
10. A device for segmenting the tongue proper and tongue coating, comprising:
a memory and a processor;
the memory is used for storing programs;
the processor is configured to execute the program to implement the tongue proper and tongue coating segmentation method according to any one of claims 1 to 8.
11. A storage medium having a program stored thereon, wherein the program, when executed by a processor, implements the tongue proper and coating segmentation method according to any one of claims 1-8.
CN201910900014.1A 2019-09-23 2019-09-23 Method and device for dividing tongue texture and tongue coating Active CN110648336B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910900014.1A CN110648336B (en) 2019-09-23 2019-09-23 Method and device for dividing tongue texture and tongue coating


Publications (2)

Publication Number Publication Date
CN110648336A true CN110648336A (en) 2020-01-03
CN110648336B CN110648336B (en) 2022-07-08

Family

ID=69011008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910900014.1A Active CN110648336B (en) 2019-09-23 2019-09-23 Method and device for dividing tongue texture and tongue coating

Country Status (1)

Country Link
CN (1) CN110648336B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115035573A (en) * 2022-05-27 2022-09-09 哈尔滨工程大学 Lip segmentation method based on fusion strategy
CN115147372A (en) * 2022-07-04 2022-10-04 海南榕树家信息科技有限公司 Traditional Chinese medicine tongue image intelligent identification and treatment method and system based on medical image segmentation
CN116777930A (en) * 2023-05-24 2023-09-19 深圳汇医必达医疗科技有限公司 Image segmentation method, device, equipment and medium applied to tongue image extraction

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1908984A (en) * 2006-08-18 2007-02-07 清华大学 Coated tongue division and extracting method for colored digital photo of tongue
CN102194121A (en) * 2010-03-04 2011-09-21 天津市天堰医教科技开发有限公司 Application of improved maximum between-class variance method in tongue crack recognition
CN103745217A (en) * 2013-12-31 2014-04-23 北京工业大学 Automatic analysis method of tongue color and coating color in traditional Chinese medicine based on image retrieval
CN104574405A (en) * 2015-01-15 2015-04-29 北京天航华创科技股份有限公司 Color image threshold segmentation method based on Lab space
CN104658003A (en) * 2015-03-16 2015-05-27 北京理工大学 Tongue image segmentation method and device
CN109859229A (en) * 2018-12-14 2019-06-07 上海源庐加佳信息科技有限公司 A kind of Chinese medicine tongue nature coating nature separation method
CN109872299A (en) * 2018-12-14 2019-06-11 上海源庐加佳信息科技有限公司 A kind of Chinese medicine tongue color coating colour recognition methods

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
蒋振豪 (Jiang Zhenhao), "Basic research on tongue coating diagnosis based on tongue image feature analysis," China Masters' Theses Full-text Database, Information Science and Technology, no. 5, 15 May 2019, p. 36 *
黄志标, 姚宇 (Huang Zhibiao, Yao Yu), "Ultrasound image segmentation based on pixel clustering," Journal of Computer Applications, vol. 37, no. 2, 28 February 2017, pp. 570-573 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115035573A (en) * 2022-05-27 2022-09-09 哈尔滨工程大学 Lip segmentation method based on fusion strategy
CN115147372A (en) * 2022-07-04 2022-10-04 海南榕树家信息科技有限公司 Traditional Chinese medicine tongue image intelligent identification and treatment method and system based on medical image segmentation
CN115147372B (en) * 2022-07-04 2024-05-03 海南榕树家信息科技有限公司 Intelligent Chinese medicine tongue image identification and treatment method and system based on medical image segmentation
CN116777930A (en) * 2023-05-24 2023-09-19 深圳汇医必达医疗科技有限公司 Image segmentation method, device, equipment and medium applied to tongue image extraction
CN116777930B (en) * 2023-05-24 2024-01-09 深圳汇医必达医疗科技有限公司 Image segmentation method, device, equipment and medium applied to tongue image extraction

Also Published As

Publication number Publication date
CN110648336B (en) 2022-07-08

Similar Documents

Publication Publication Date Title
Olugbara et al. Segmentation of melanoma skin lesion using perceptual color difference saliency with morphological analysis
EP1596573B1 (en) Image correction apparatus
WO2019100282A1 (en) Face skin color recognition method, device and intelligent terminal
CN110648336B (en) Method and device for dividing tongue texture and tongue coating
JP2020522807A (en) System and method for guiding a user to take a selfie
CN103914699A (en) Automatic lip gloss image enhancement method based on color space
WO2020038312A1 (en) Multi-channel tongue body edge detection device and method, and storage medium
CN108615239B (en) Tongue image segmentation method based on threshold technology and gray level projection
CN110807775A (en) Traditional Chinese medicine tongue image segmentation device and method based on artificial intelligence and storage medium
CN111860538A (en) Tongue color identification method and device based on image processing
CN105844242A (en) Method for detecting skin color in image
US20180228426A1 (en) Image Processing System and Method
CN111476849B (en) Object color recognition method, device, electronic equipment and storage medium
CN113436734B (en) Tooth health assessment method, equipment and storage medium based on face structure positioning
JP2007272435A (en) Face feature extraction device and face feature extraction method
CN109948461B (en) Sign language image segmentation method based on centroid positioning and distance transformation
WO2021016896A1 (en) Image processing method, system and device, and movable platform and storage medium
US20240020843A1 (en) Method for detecting and segmenting the lip region
CN112712054B (en) Face wrinkle detection method
CN113643281A (en) Tongue image segmentation method
CN109934152B (en) Improved small-bent-arm image segmentation method for sign language image
CN114511567B (en) Tongue body and tongue coating image identification and separation method
CN108629780B (en) Tongue image segmentation method based on color decomposition and threshold technology
CN113706515B (en) Tongue image anomaly determination method, tongue image anomaly determination device, computer equipment and storage medium
CN113781330A (en) Image processing method, device and electronic system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant