CN113610753A - Method, device and storage medium for extracting Gabor texture features of tongue image - Google Patents
Method, device and storage medium for extracting Gabor texture features of tongue image
- Publication number: CN113610753A (application CN202110688079.1A)
- Authority: CN (China)
- Prior art keywords: gabor, pixel, texture, value, data
- Legal status: Granted
Classifications
- G06T 7/0012 — Biomedical image inspection
- G06F 18/22 — Pattern recognition: matching criteria, e.g. proximity measures
- G06F 18/25 — Pattern recognition: fusion techniques
- G06N 3/045 — Neural networks: combinations of networks
- G06N 3/08 — Neural networks: learning methods
- G06T 5/90 — Dynamic range modification of images or parts thereof
- G06T 7/136 — Segmentation; edge detection involving thresholding
- G06T 7/41 — Analysis of texture based on statistical description of texture
- G06T 2207/10004 — Still image; photographic image
- G06T 2207/20081 — Training; learning
- G06T 2207/20084 — Artificial neural networks [ANN]
- G06T 2207/30004 — Biomedical image processing
Abstract
The invention discloses a method, a device and a storage medium for extracting Gabor texture features of a tongue image. Because the texture enhancement filter used for preprocessing is an improved wide-line detection filter whose response data do not depend on brightness information, the resulting novel Gabor texture feature value is not disturbed by brightness information, which effectively solves the problem that the prior-art method of extracting Gabor texture feature values is easily disturbed by brightness drift and reflective spots.
Description
Technical Field
The invention relates to the field of medical image processing, in particular to a method, a device and a storage medium for extracting Gabor texture features of a tongue image.
Background
In recent years, compared with traditional diagnostic methods, tongue diagnosis has received increasing attention in medical research because of its convenience, non-invasiveness, and accuracy. Tongue diagnosis is a simple and effective method of observing changes in the color and form of the tongue to assist diagnosis and identification. Currently, within tongue image analysis there are many studies of tongue image texture features for medical diagnosis. For example, Gabor filters have been successfully used for tongue surface texture feature extraction. However, the conventional method of extracting the Gabor texture feature value as the tongue surface texture feature with a Gabor filter is easily disturbed by brightness drift and reflective spots in the image, which reduces the reliability of the extracted tongue surface texture features.
Thus, there is still a need for improvement and development of the prior art.
Disclosure of Invention
The present invention is directed to a method, a device, and a storage medium for extracting Gabor texture features of a tongue image, aiming to overcome the above drawback of the prior art: extracting the Gabor texture feature value as a tongue surface texture feature with a Gabor filter is easily disturbed by brightness drift and reflective spots in the image, which reduces the reliability of the extracted tongue surface texture features.
The technical scheme adopted by the invention for solving the problems is as follows:
in a first aspect, an embodiment of the present invention provides a method for extracting a Gabor texture feature of a tongue image, where the method includes:
acquiring gray image data corresponding to original tongue image data, and calculating similarity data of each pixel on the gray image and the region where the pixel is located;
calculating texture enhancement filter response data of the gray level image according to the similarity data;
and segmenting the response data of the texture enhancement filter of the gray level image, calculating the response data of the Gabor filter, and obtaining the Gabor texture characteristic value corresponding to the original tongue image data through the response data of the Gabor filter.
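For orientation, the following is a minimal Python sketch of this three-step pipeline, assuming NumPy. The helper names texture_enhancement_filter, extract_target_blocks, and gabor_fusion_feature are hypothetical and are sketched in the detailed description below; the intensity channel is approximated here by the per-pixel mean of the three color channels.

import numpy as np

def extract_p_gabor_features(rgb_tongue_image: np.ndarray) -> list:
    # Step 1: grayscale image data (HSI intensity approximated by the
    # per-pixel mean of the R, G, B channels).
    gray = rgb_tongue_image.astype(np.float64).mean(axis=2)
    # Step 2: brightness-independent texture enhancement filter response.
    tef = texture_enhancement_filter(gray, radius=5)
    # Step 3: segment target blocks and compute one Gabor texture feature
    # value per block from the fused Gabor filter responses. The image
    # center is a placeholder for the tongue-center localization.
    center = (tef.shape[0] // 2, tef.shape[1] // 2)
    blocks = extract_target_blocks(tef, center)
    return [gabor_fusion_feature(block) for block in blocks]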
In one embodiment, the acquiring grayscale image data corresponding to the original tongue image data, and the calculating similarity data between each pixel on the grayscale image and the region where the pixel is located includes:
acquiring original tongue image data, and converting the original tongue image data into gray level image data;
acquiring the difference value of the gray value between each pixel in the gray image and the pixel in the area where the pixel is located;
and comparing the difference value of the gray values with a preset threshold parameter, and generating similarity data between each pixel and the pixel in the area where the pixel is located according to the comparison result.
In an embodiment, the comparing the difference between the gray values with a preset threshold parameter, and generating similarity data between each pixel and the pixel in the region where the pixel is located according to the comparison result includes:
comparing the difference value of the gray values with a preset threshold parameter;
when the difference value of the gray values is smaller than or equal to a preset threshold parameter, determining that the similarity between the pixel and the pixel in the area where the pixel is located is a first value;
and when the difference value of the gray values is larger than a preset threshold parameter, determining that the similarity value of the pixel and the pixel in the area where the pixel is located is a second value.
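A minimal sketch of this comparison in Python (NumPy assumed), taking 0 for the first value and 1 for the second value as in the example given in the detailed description below:

import numpy as np

def similarity(neighbor_gray: np.ndarray, center_gray: float, t: float) -> np.ndarray:
    # 0 where the gray-value difference is at most the threshold t (first value),
    # 1 where the neighbor exceeds the center by more than t (second value).
    return (neighbor_gray - center_gray > t).astype(np.float64)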
In one embodiment, said calculating texture enhancement filter response data for said grayscale image from said similarity data comprises:
performing weighted summation on the similarity data, and taking the result of the weighted summation as the texture enhancement filter response data of the pixel; and
obtaining the texture enhancement filter response data of the grayscale image by determining the texture enhancement filter response data of each pixel.
In one embodiment, the segmenting the texture enhancement filter response data of the grayscale image and calculating the Gabor filter response data, and obtaining the Gabor texture feature value corresponding to the original tongue image data from the Gabor filter response data, includes:
segmenting a plurality of target blocks with preset quantity and size on the texture enhancement filter response data of the gray level image;
calculating a Gabor filtering fusion response value of each target block;
and outputting Gabor texture characteristic values corresponding to each target block through Gabor filtering fusion response values of each target block.
In one embodiment, the calculating the Gabor filter fusion response value of each target block includes:
convolving the target block with Gabor filters of different scales and directions respectively to obtain a Gabor filter response value generated by each filter based on the target block;
and fusing all Gabor filter response values of the target block obtained under Gabor filters with different scales and directions, and taking the Gabor filter response value obtained after fusion as the Gabor filter fusion response value of the target block.
In one embodiment, the fusing all Gabor filter response values obtained by the target block under Gabor filters of different scales and directions, and using the fused Gabor filter response value as the Gabor filter fusion response value of the target block, includes:
and selecting the maximum value of the Gabor filter response value of each pixel in different scales and directions pixel by pixel to realize the fusion of all Gabor filter response values of the target block, and taking the Gabor filter response value obtained after the fusion as the Gabor filter fusion response value of the target block.
In one embodiment, the outputting the Gabor texture feature value corresponding to each target block through the Gabor filter fusion response value of each target block includes:
and calculating the average value of the Gabor filter fusion response value of the target block, and taking the calculated average value as the Gabor texture characteristic value of the corresponding area of the target block in the original tongue image data.
In a second aspect, an embodiment of the present invention further provides an apparatus for extracting a gabor texture feature of a tongue image, where the apparatus includes:
the calculation module is used for acquiring gray image data corresponding to the original tongue image data and calculating the similarity data of each pixel on the gray image and the region where the pixel is located;
a response module, configured to calculate texture enhancement filter response data of the grayscale image according to the similarity data;
and the texture module is used for segmenting the response data of the texture enhancement filter of the gray level image, calculating the response data of the Gabor filter, and obtaining the Gabor texture characteristic value corresponding to the original tongue image data through the response data of the Gabor filter.
In a third aspect, the present invention further provides a computer-readable storage medium on which a plurality of instructions are stored, where the instructions are adapted to be loaded and executed by a processor to implement the steps of any of the above methods for extracting Gabor texture features of a tongue image.
The invention has the following beneficial effects: in the embodiment of the invention, the grayscale image data corresponding to the original tongue image data is obtained and the similarity data between each pixel on the grayscale image and its surrounding region is calculated; the texture enhancement filter response data of the grayscale image is then computed from the similarity data; the texture enhancement filter response data is segmented, the Gabor filter response data is calculated, and the Gabor texture feature value corresponding to the original tongue image data is obtained from the Gabor filter response data. Because the texture enhancement filter is an improved wide-line detection filter whose response data do not depend on brightness information, the resulting novel Gabor texture feature value is not disturbed by brightness information, which effectively solves the problem that the prior-art method of extracting Gabor texture feature values is easily disturbed by brightness drift and reflective spots.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart illustrating the steps of a method for extracting Gabor texture features of a tongue image according to an embodiment of the present invention.
Fig. 2 is a distribution diagram of the target blocks on the tongue surface according to an embodiment of the present invention.
Fig. 3 is a relationship diagram of an original tongue image and a gray scale image according to an embodiment of the present invention.
Fig. 4 is a reference diagram of global luminance deviation and local texture pattern provided by an embodiment of the present invention.
FIG. 5 shows a set of sample blocks in descending order of Gabor texture feature value, according to an embodiment of the present invention.
Fig. 6 is a graph illustrating the linear correlation between Gabor texture features and brightness according to an embodiment of the present invention.
FIG. 7 is a first set of sample images demonstrating the interference of reflective spots with Gabor texture features, provided by an embodiment of the present invention.
FIG. 8 is a second set of sample images demonstrating the interference of reflective spots with Gabor texture features, provided by an embodiment of the present invention.
FIG. 9 is a flow block diagram of a method for extracting Gabor texture features of a tongue image according to an embodiment of the present invention.
Fig. 10 is an original tongue image provided by an embodiment of the present invention.
Fig. 11 is a response diagram of a wide line detection filter corresponding to an original tongue image according to an embodiment of the present invention.
FIG. 12 is a graph of the response of the texture enhancement filter corresponding to the original tongue image according to an embodiment of the present invention.
FIG. 13 is a graph comparing a wide line detection filter response to a texture enhancement filter response graph provided by an embodiment of the present invention.
FIG. 14 is a graph comparing texture enhancement filter responses under threshold and hyperbolic secant similarity metric provided by embodiments of the present invention.
FIG. 15 is an error bar graph of Gabor texture features and novel Gabor texture features for 8 target blocks according to an embodiment of the present invention.
FIG. 16 is a graph of a comparison of common Gabor texture features for healthy and diabetic populations, provided by an embodiment of the present invention.
FIG. 17 is a graph of a comparison of the novel Gabor texture features of healthy and diabetic populations provided by an embodiment of the present invention.
FIG. 18 is a sample graph of a novel Gabor texture feature for a healthy population provided by an embodiment of the present invention.
FIG. 19 is a sample graph of a novel Gabor texture feature for a diabetic population provided by an embodiment of the present invention.
Fig. 20 is a graph of a response using a symmetric similarity metric, provided by an embodiment of the present invention.
Fig. 21 is a graph of a response using a semi-symmetric similarity metric provided by an embodiment of the present invention.
FIG. 22 is a graph of the texture enhancement filter response for the sample in FIG. 18 provided by an embodiment of the present invention.
FIG. 23 is a graph of the texture enhancement filter response for the sample in FIG. 19 provided by an embodiment of the present invention.
Fig. 24 is a distribution diagram of the novel Gabor texture features on the brightness plane according to an embodiment of the present invention.
Fig. 25 is a sample diagram showing the novel Gabor texture features in descending order of their values according to an embodiment of the present invention.
Fig. 26 is a block diagram of a device for extracting Gabor texture features of a tongue image according to an embodiment of the present invention.
Fig. 27 is a functional block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that, if directional indications (such as up, down, left, right, front, and back) are involved in the embodiments of the present invention, the directional indications are only used to explain the relative positional relationship, movement, and the like of the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
In recent years, compared with traditional diagnostic methods, tongue diagnosis has received increasing attention in medical research because of its convenience, non-invasiveness, and accuracy. Tongue diagnosis is a simple and effective method of observing changes in the color and form of the tongue to assist diagnosis and identification. Currently, within tongue image analysis there are many studies of tongue image texture features for medical diagnosis. For example, Gabor filters have been successfully used for texture feature extraction from tongue images.
With regard to the application of Gabor filters in tongue image texture feature extraction, the article "Computerized tongue diagnosis based on Bayesian networks" (Pang, Bo, et al., IEEE Transactions on Biomedical Engineering 51.10 (2004): 1803-1810) proposes a computerized tongue diagnosis system using tongue surface color and texture, in which the tongue texture is described by two statistics of the underlying tongue image gray-level co-occurrence matrix. The paper "Detecting diabetes mellitus and nonproliferative diabetic retinopathy using tongue color, texture, and geometry features" (Zhang, Bob, BVK Vijaya Kumar, and David Zhang, IEEE Transactions on Biomedical Engineering 61.2 (2014): 491-498) extracts Gabor texture feature values as tongue surface texture features. However, the existing methods for extracting Gabor texture feature values are all easily affected by several interference factors.
First, the extraction of Gabor texture feature values is susceptible to brightness drift of the image; this embodiment provides figs. 3 and 4 as a reference. In practical applications, the Gabor texture feature value is calculated directly on a tongue grayscale image, which generally comprises two components: an overall brightness deviation and a local texture pattern. In fig. 3 the grayscale image is shown as a surface embedded in three-dimensional space, and fig. 4 shows the overall brightness deviation (smooth curve) and the local texture pattern (zigzag broken line) corresponding to that surface. Only the local texture pattern is crucial for texture feature representation; the overall brightness deviation is in fact a disturbing factor. In the convolution that computes the Gabor response, however, both the overall brightness deviation and the local texture pattern participate. If the overall brightness deviation were approximately the same across original tongue images, directly convolving the corresponding grayscale images would cause little trouble; in practice, though, the overall brightness deviation drifts over time, so the Gabor texture feature value obtained after convolution is disturbed by brightness drift. To verify this conclusion, the inventors performed experiments on a large database containing 4,989 tongue image samples and 39,912 small blocks, computing the Gabor texture feature value and brightness of each block. Fig. 5 shows a set of samples in descending order of Gabor texture feature value. As shown in fig. 6, the inventors plotted the distribution of the database in a plane with the Gabor texture feature value and the brightness as coordinates: the dots represent samples in the database and the dashed line the linear regression of the data. The Gabor texture feature values correlate significantly with brightness, i.e., the two have a strong linear correlation; in fact, the Pearson correlation coefficient between brightness and the Gabor texture feature values is 0.8452, so the Gabor texture feature value is largely brightness-dependent. Therefore, if the brightness drifts over time, the output Gabor texture feature value is affected accordingly.
Secondly, the Gabor texture feature value is also easily affected by reflective spots on the tongue surface; that is, in addition to brightness drift, the reflective spots cause brightness disturbance. To demonstrate this phenomenon, the inventors show two sets of samples in figs. 7 and 8: the first set (fig. 7) is substantially smooth but contains many reflective spots, while the second set (fig. 8) consists of relatively coarse samples without reflective spots. Theoretically, the Gabor texture feature value describes the roughness of the image texture: the coarser the texture, the greater the value. In the results, the upper limit of the Gabor texture features of the second set is below 2.8, while all Gabor texture features of the first set are no lower than 3.7; the measured values of the first set are thus much higher than those of the second set, even though the samples of the second set are in fact coarser than those of the first set. The higher values of the first set are due primarily to the brightness of the reflective spots. Therefore, the existing Gabor texture feature value extraction method is also seriously disturbed by reflective spots.
Aiming at the problems of brightness drift and reflective-spot interference in the prior art, the invention provides a method for extracting Gabor texture features of a tongue image, which preprocesses the image with a nonlinear Texture Enhancement Filter (TEF) and then takes the response data of the texture enhancement filter as the input data of the Gabor filter to obtain a novel Gabor texture feature value. Because the texture enhancement filter is an improved Wide Line Detector (WLD) whose response data do not depend on brightness information, the resulting novel Gabor texture feature value (P-Gabor, pre-conditioned Gabor texture feature) is not disturbed by brightness information, effectively solving the problem that the prior-art extraction of Gabor texture feature values is easily disturbed by brightness drift and reflective spots.
As shown in fig. 1, the method for extracting the gabor texture features of the tongue image includes the following steps:
and S100, acquiring gray image data corresponding to the original tongue image data, and calculating the similarity data of each pixel on the gray image and the region where the pixel is located.
Specifically, in this embodiment the Gabor texture feature value is likewise computed on the basis of a tongue grayscale image, so the grayscale image data corresponding to the original tongue image data must be obtained first. Grayscale image data refers to an image with only one sample value per pixel. For a given grayscale image I(x, y) and a pixel (x0, y0), the texture enhancement filter response at (x0, y0) is computed from the similarity between (x0, y0) and the pixels of its surrounding region; therefore, after the grayscale image data is acquired, the similarity data between each pixel on the grayscale image and its region must be calculated.
In one implementation, the step S100 specifically includes the following steps:
step S110, acquiring original tongue image data, and converting the original tongue image data into gray level image data;
step S120, obtaining the difference value of the gray value between each pixel in the gray image and the pixel in the area where the pixel is located;
step S130, comparing the difference value of the gray values with a preset threshold parameter, and generating similarity data between each pixel and the pixel in the area where the pixel is located according to the comparison result.
Specifically, after the original tongue image data is acquired, it needs to be converted into grayscale image data. In one implementation, a complete image is typically composed of three channels: red, green, and blue. The scaled views of the three channels are displayed in grayscale, with different gray levels representing the proportions of red, green, and blue in the image; the channels are the basis of the displayed image, and changing the color amounts to adjusting the gray-level map of the corresponding channel. The original tongue image data can therefore be converted from the RGB color representation to the HSI color representation, and the I channel of the tongue image in the HSI representation is taken as the grayscale image I(x, y). Then, to calculate the similarity data between each pixel and the pixels of its surrounding region, the difference in gray value between each pixel of the grayscale image and the pixels of its region is obtained; this difference is compared with a preset threshold parameter, and the similarity data is generated from the comparison result. The specific implementation is as follows: after the gray-value difference is obtained, it is compared with a preset threshold parameter, which may for example be set to the standard deviation of the image gray-level variation. When the gray-value difference is less than or equal to the preset threshold parameter, the similarity between the pixel and the pixel of its region is determined to be a first value, for example 0; when the gray-value difference is greater than the preset threshold parameter, the similarity is determined to be a second value, for example 1. The formula is as follows: given the grayscale image I(x, y), a center pixel (x0, y0), and each pixel (x, y) in the neighborhood of (x0, y0), the similarity s(x, y, x0, y0, t) takes the value 1 when the gray-value difference exceeds the threshold t, and 0 otherwise:

s(x, y, x0, y0, t) = 1, if I(x, y) − I(x0, y0) > t
s(x, y, x0, y0, t) = 0, if I(x, y) − I(x0, y0) ≤ t

where the threshold parameter t is set to the standard deviation of the gray-level variation of the image I, and the variables x and y traverse a circular neighborhood of radius r centered at (x0, y0).
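A sketch of this similarity computation over the whole image, assuming NumPy: each neighborhood offset is handled by shifting the image once, and wrap-around at the borders is a simplification.

import numpy as np

def similarity_maps(I: np.ndarray, r: int):
    # Yields one binary similarity map s(., ., x0, y0, t) per offset (dx, dy)
    # inside the circular neighborhood of radius r (center offset excluded).
    t = I.std()  # threshold t: standard deviation of the gray-level variation
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    inside = (xs**2 + ys**2 <= r**2) & ((xs != 0) | (ys != 0))
    for dy, dx in zip(ys[inside], xs[inside]):
        neighbor = np.roll(I, (-dy, -dx), axis=(0, 1))  # I(x0 + dx, y0 + dy)
        yield (neighbor - I > t).astype(np.float64)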
It should be noted that the texture enhancement filter used in this embodiment is an improved wide-line detection filter, which describes the image texture through the gray-level variation between neighboring pixels. In the original wide-line detection filter, the similarity between a pixel and its neighborhood is measured differently from the measure adopted in the present invention. One of the basic similarity measures of the original wide-line detection filter is the symmetric thresholded form

s(x, y, x0, y0, t) = 0, if |I(x, y) − I(x0, y0)| > t, and 1 otherwise.

Another metric form, considered more stable, replaces the hard threshold with a smooth function of the gray-level difference, of the type

s(x, y, x0, y0, t) = sech((I(x, y) − I(x0, y0)) / t),

where the function sech denotes the hyperbolic secant. However, computing the sech function is very time-consuming, so the present invention uses the basic similarity measure and modifies it into the asymmetric form given above: the similarity measure of the present invention responds only when a neighbor is brighter than the center pixel by more than t.
Fig. 14 compares the two types of similarity measures of the embodiment and shows that the response signals of the two methods are very close to each other. Therefore, by simplifying to the basic thresholded similarity measure, this embodiment loses very little precision while saving a large amount of computation time. The inventors calculated the Gabor texture feature values under the two similarity measures on a database of 130 healthy and 293 diabetes samples; the average computation time per sample is listed in Table 1:

TABLE 1

|              | Threshold | Hyperbolic secant function |
| CPU time (s) | 1.2722    | 6.1183                     |

It can be seen that with the thresholded similarity measure, the time cost of feature extraction is approximately one fifth of that with the hyperbolic secant function. Therefore, the thresholded similarity measure adopted in this embodiment is more suitable for practical applications.
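For illustration, both measures can be evaluated on a vector of gray-value differences as below (NumPy); the hyperbolic-secant expression is an assumed form, since the original formula is not reproduced in this text.

import numpy as np

diff = np.array([-30.0, -5.0, 0.0, 5.0, 30.0])  # I(x, y) - I(x0, y0)
t = 10.0                                        # illustrative threshold value

s_threshold = (diff > t).astype(np.float64)  # measure used by the invention
s_sech = 1.0 / np.cosh(diff / t)             # smooth hyperbolic-secant measure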
After determining the similarity data between each pixel and the pixel in the region, as shown in fig. 1, in order to calculate the texture enhancement filter response data of the gray-scale image, the method further includes the following steps:
and step S200, calculating texture enhancement filter response data of the gray level image according to the similarity data.
Specifically, since the texture enhancement filter is an improved wide-line detection filter, and the response information thereof is not interfered by the brightness, the embodiment needs to calculate the response data of the texture enhancement filter according to the similarity data, and use the calculated response data of the texture enhancement filter as the input data of the subsequent Gabor filter, so as to obtain the tongue texture features robust to the brightness drift.
In one implementation, the step S200 specifically includes the following steps:
step S210, carrying out weighted summation on the similarity data, and taking the result of the weighted summation as the response data of the texture enhancement filter of the pixel
Step S220, obtaining texture enhancement filter response data of the grayscale image by determining texture enhancement filter response data of each pixel.
Specifically, in this embodiment, the texture enhancement filter response data of each pixel is defined as a weighted sum of the similarity between the pixel and its surrounding region, with the following calculation formula:

m(x0, y0) = ∬_C w(x0 − x, y0 − y) s(x, y) dx dy

where the integration region C is a circular neighborhood of radius r centered at (x0, y0), and w(x0 − x, y0 − y) is the weighting kernel supported on that neighborhood (in the simplest case a uniform weight, equal to 1 inside the circle and 0 outside). By determining the texture enhancement filter response data of each pixel, the texture enhancement filter response data of the whole grayscale image is obtained.
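A minimal sketch of the whole-image response, reusing similarity_maps from the sketch above and assuming the uniform weight w = 1:

def texture_enhancement_filter(I, radius=5):
    # m(x0, y0): sum of the similarity maps over the circular neighborhood,
    # i.e., the weighted sum of the similarities with uniform weight w = 1.
    return sum(similarity_maps(I, radius))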
After the texture enhancement filter response data of the grayscale image is obtained, this embodiment segments the response data and uses the segments as input data of the Gabor filter, so as to obtain the Gabor texture feature value corresponding to the original tongue image data. As shown in fig. 1, the method further includes the following steps:
and step S300, segmenting the response data of the texture enhancement filter of the gray level image, calculating the response data of the Gabor filter, and obtaining the Gabor texture characteristic value corresponding to the original tongue image data through the response data of the Gabor filter.
Specifically, this embodiment segments the texture enhancement filter response and computes the Gabor texture feature value of each segmented block separately, so as to output the tongue texture feature vector corresponding to each block.
In one implementation, the step S300 specifically includes the following steps:
step S310, segmenting a plurality of target blocks with preset quantity and size on the texture enhancement filter response data of the gray level image;
step S320, calculating a Gabor filtering fusion response value of each target block;
and step S330, outputting Gabor texture characteristic values corresponding to each target block through Gabor filtering fusion response values of each target block.
Specifically, in order to better represent the texture of the tongue, this embodiment covers the tongue surface strategically with a preset number of target blocks of preset size, so the target blocks need to be segmented from the texture enhancement filter response data of the grayscale image. In one implementation, the number of target blocks may be set to eight, as shown in fig. 2. Since larger target blocks would cover areas outside the tongue boundary and overlap other blocks, the block size may be set to 64 × 64 pixels so that the eight target blocks cover the tongue surface area with minimal overlap between regions. For example, as shown in fig. 2, on the texture enhancement filter response data m(x0, y0), the tongue foreground image is first used to locate the tongue center, and the eight identical 64 × 64 target blocks are then determined and segmented based on the relative position from the located tongue center to the tongue edge. As can be seen from fig. 2, the first target block is located at the front end (tip) of the tongue; the second and third target blocks and the fourth and fifth target blocks are located on either side of the first block; the sixth and seventh target blocks are located at the root of the tongue; and the eighth target block is located at the center of the tongue. A minimal sketch of this block extraction is given below.
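In the sketch (NumPy assumed), the relative offsets are hypothetical placeholders: the patent derives the block positions from the located tongue center and the tongue edge, which is not reproduced here.

import numpy as np

def extract_target_blocks(tef: np.ndarray, center, block: int = 64):
    # Cuts eight block x block target blocks around the located tongue center.
    cy, cx = center
    h = block // 2
    offsets = [(-96, 0),               # block 1: tongue tip (front end)
               (-64, -72), (-64, 72),  # blocks 2-3: either side of block 1
               (0, -88), (0, 88),      # blocks 4-5: lateral regions
               (96, -48), (96, 48),    # blocks 6-7: tongue root
               (0, 0)]                 # block 8: tongue center
    return [tef[cy + dy - h: cy + dy + h, cx + dx - h: cx + dx + h]
            for dy, dx in offsets]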
Then, for each of the target blocks, the corresponding Gabor texture feature value is calculated. Specifically, to obtain the Gabor texture feature value of each target block, this embodiment first calculates the Gabor filtering fusion response value of each block. In one embodiment, the Gabor filter response generated by each filter is obtained by convolving the target block with Gabor filters of different scales and directions, respectively. For example, to compute the 2D Gabor filter response of the texture enhancement filter response data m(x0, y0), a 2D Gabor filter of the form

G(x, y) = exp(−(x'^2 + γ^2 y'^2) / (2σ^2)) · cos(2π x' / λ)

is used, where x' = x cos θ + y sin θ, y' = −x sin θ + y cos θ, σ is the scale, λ is the wavelength, γ is the aspect ratio of the sinusoidal function, and θ is the direction. Each target block is then convolved with Gabor filters Gk of various scales and directions, each of which produces a response Rk(x0, y0):

Rk(x0, y0) = ∬_C m(x0 − x, y0 − y) Gk(x, y) dx dy

where the integration region C is a circular neighborhood of radius r centered at (x0, y0).
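A sketch of the filter-bank convolution using OpenCV's real Gabor kernels; the kernel size, scales, wavelengths, and orientation count below are illustrative values, not the patent's.

import cv2
import numpy as np

def gabor_responses(block: np.ndarray, scales=(2.0, 4.0), n_orient: int = 4):
    # Convolves one target block with Gabor filters G_k of several scales
    # (sigma) and directions (theta), returning one response R_k per filter.
    responses = []
    for sigma in scales:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            kern = cv2.getGaborKernel((31, 31), sigma, theta,
                                      lambd=4 * sigma, gamma=0.5, psi=0.0)
            responses.append(cv2.filter2D(block.astype(np.float32), -1, kern))
    return responses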
All the Gabor filter response values obtained for the target block under Gabor filters of different scales and directions are then fused. Specifically, as shown in fig. 9, to fuse the Gabor filter response values of all pixels in each target block, the maximum Gabor filter response value of each pixel over the different scales and directions may be selected pixel by pixel, and the response obtained after this fusion is used as the Gabor filter fusion response value of the target block.
Finally, the Gabor texture feature value corresponding to each target block is output through the Gabor filtering fusion response value of that block. In one implementation, the average of the Gabor filter fusion response values of the target block is calculated, and the calculated average is used as the Gabor texture feature value of the corresponding region of the target block in the original tongue image data. The process of assigning a single Gabor texture feature value to each target block is as follows: first, traverse each pixel of the target block and fuse all responses of the block by selecting the maximum Gabor filter response value pixel by pixel:

FR(x, y) = max(R1(x, y), ..., Rn(x, y))

Then the fused response is averaged over the target block, and the resulting mean is used as the Gabor texture feature value of the block:

P-Gabor = (1 / |B|) Σ_{(x, y) ∈ B} FR(x, y)

where B is the set of pixels of the target block and |B| is its size (64 × 64).
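A minimal sketch of this fusion and averaging, reusing gabor_responses from the previous sketch:

import numpy as np

def gabor_fusion_feature(block: np.ndarray) -> float:
    # FR(x, y) = max_k R_k(x, y): pixel-wise maximum over all filter responses.
    responses = gabor_responses(block)
    fused = np.max(np.stack(responses), axis=0)
    # P-Gabor value of the block: mean of the fused response over all pixels.
    return float(fused.mean())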
the Gabor texture feature value describes the roughness and the smoothness of the tongue image, is a novel Gabor texture feature, and keeps robustness to different brightness and tongue surface reflection points.
It should be noted that the texture enhancement filter used in this embodiment is an improved wide-line detection filter and is not identical to the existing wide-line detection filter. The two differ mainly in two aspects. 1. The response of the wide-line detection filter is a truncated signal of the dissimilarity measure, whereas the texture enhancement filter of the present invention uses the dissimilarity signal directly, without truncation. 2. For the similarity/dissimilarity measure, the wide-line detection filter uses a hyperbolic secant function, while the texture enhancement filter of the present invention uses simple thresholding to improve time efficiency; for the present invention, the dissimilarity can be obtained directly by measuring the difference within the neighborhood. In addition, the original wide-line detection filter was designed to detect pronounced crack features, so tiny features had to be filtered out by thresholding; the main object of the present invention is texture enhancement and description, so those tiny features need to be preserved. Because the texture enhancement filter of the present invention serves a different purpose from the existing wide-line detection filter, their operating principles also differ.
In order to discuss the response difference between the texture enhancement filter used in the present invention and the existing wide line detection filter, the inventors made the following experiment:
Fig. 10 is a sample original tongue image; the inventors compared the wide-line detection filter response and the texture enhancement filter response for this sample. Fig. 11 shows the response of the conventional wide-line detection filter, and fig. 12 shows the response of the texture enhancement filter of the present invention. As can be seen in figs. 11 and 12, both responses detect tongue texture features, but the texture enhancement filter response detects more tongue papillae than the wide-line detection filter response, in which the papillae are truncated by the thresholding operation. For a more detailed comparison, a row of pixels in the image is selected: the solid line in fig. 13 is the wide-line detection filter response of the selected row and the dashed line is the texture enhancement filter response. The shapes of the two curves agree over some intervals, especially where the signal values are relatively high. In the intervals with lower signal values, the wide-line detection filter response is truncated, i.e., tiny features are filtered out (the wide-line detector was originally proposed to detect tongue cracks, so the papillae on the tongue surface are removed by truncation). This experimental result shows that the texture enhancement filter of the present invention provides a more accurate representation of the tongue surface texture features.
It should be noted that the novel Gabor texture feature value obtained by the method of the present invention differs markedly from the common Gabor texture feature value. The novel Gabor texture feature value contains important information that can serve as a reference index for diabetes with quite high reliability. Thus, in one implementation, the method further comprises: using the Gabor texture feature value obtained by the method for extracting Gabor texture features of a tongue image as a reference index for diabetes. The inventors performed the following experiments for this purpose:
The inventors calculated the common and novel Gabor texture features on a database of 130 healthy and 293 diabetic samples. They found that, of the eight segmented target blocks, the 8th contained the richest pathological information, and the experiments showed that the healthy and diabetic populations differ significantly in the texture values of the 8th target block. Error bars of the common Gabor texture features and the novel Gabor texture features for the eight target blocks are shown in fig. 15. The eight distributions of the common Gabor texture features (dashed error bars) show no significant difference from one another, and no obvious distribution pattern can be found among them. The distributions of the novel Gabor texture features (solid error bars), however, show clear regularity: the texture feature range of the 8th target block is much higher and varies much more than those of the other blocks. Specifically, the average texture feature of the 8th target block can be as high as 1.5, while the maximum of the other target blocks does not exceed 1.5. On the other hand, the 1st target block has the lowest texture feature of all blocks; in fact, its average novel Gabor texture feature is about 0.2, indicating that the tongue tip is the smoothest of all target blocks. Therefore, compared with the common Gabor texture feature, the novel Gabor texture feature effectively reveals the pathological significance of the 8th target block.
For the 8th target block, the inventors performed an analysis of variance on the novel Gabor texture features and the common Gabor texture features. The distribution of the common Gabor texture features for the healthy and diabetic populations is shown in fig. 16, and that of the novel Gabor texture features in fig. 17. As can be seen from figs. 16 and 17, the distribution range of the common Gabor texture features is much narrower than that of the novel ones: the common Gabor texture features vary only within the interval [1, 4], while the novel Gabor texture features cover the interval [0, 6.5]. More importantly, the novel Gabor texture features reveal important pathological information. In fig. 16, the distributions of the healthy and diabetic populations are roughly similar, both concentrated around 2.5. In fig. 17, however, the difference is quite significant: the novel Gabor texture feature of the healthy population follows a long-tailed distribution, whereas that of the diabetic population does not. Most healthy subjects lie in the lower interval, while most diabetic subjects are concentrated near 1.8. All of this indicates a significant difference between the healthy and diabetic populations in the novel Gabor texture feature.
Table 2 compares several indicators from the analysis of variance of the common Gabor texture features and the novel Gabor texture features.
TABLE 2

|         | Common Gabor texture feature value | Novel Gabor texture feature value |
| SSb     | 0.3811   | 95.5301    |
| SSw     | 168.4815 | 1270.7918  |
| SSt     | 168.8625 | 1366.3219  |
| MSb     | 0.3811   | 95.5301    |
| MSw     | 0.1741   | 1.3128     |
| F-Value | 2.1894   | 72.7681    |
| p-Value | 0.1393   | 5.5613e-17 |
In table 2, the novel Gabor texture features greatly increase SSb and MSb compared with the common Gabor texture features, and the F-value improves from 2.1894 to 72.7681, a very important reference index for diabetes. Furthermore, the p-value of the novel Gabor texture feature is well below the 5% significance level, which means that the healthy and diabetic populations differ significantly in the novel Gabor texture feature; for the common Gabor texture feature, with a p-value of about 13%, no significant difference between the two populations is found. Some representative samples of both populations are shown in figs. 18 and 19: fig. 18 shows samples of the healthy population and fig. 19 samples of the diabetic population, with samples near the median of the novel Gabor texture feature taken as representatives. As shown in figs. 18 and 19, the difference in the novel Gabor texture feature between the two populations is pronounced, the tongue texture of the diabetic group being visibly coarser than that of the healthy group. Therefore, compared with the common Gabor texture feature, the novel Gabor texture feature obtained by the invention is a more reliable reference index for diabetes.
In addition, in order to evaluate the performance of the technical scheme of the invention, the inventor carries out corresponding experiments and mainly verifies that the method provided by the invention can effectively solve the problems of brightness drift and glistening interference and can be reliably used as a reference index of diabetes.
At present, the Gabor texture features obtained by the prior art are easily disturbed by reflective spots on the tongue surface. To solve this problem, the present invention detects only the low, dark areas of the tongue surface, while the bright areas are filtered out as reflective spots; the filtering is achieved by the semi-symmetric similarity measure corresponding to step S130 of the invention. The inventors specifically compared the responses of the symmetric similarity (fig. 20) and the semi-symmetric similarity (fig. 21). In the original tongue image shown in fig. 10, there are reflective spots at the tongue tip and the lateral edges; this interference persists in fig. 20, whereas in fig. 21 it is greatly reduced while the responses to tongue cracks and tongue papillae remain fairly intact, so the semi-symmetric similarity measure used in the present invention filters out the reflective spots. In terms of texture feature representation, the texture enhancement filter response of the images in fig. 7 is shown in fig. 22, and that of the images in fig. 8 in fig. 23, where the numbers correspond to the novel Gabor texture features. The results in figs. 22 and 23 show that the tongue surface texture of group 2 is well preserved while the reflective spots of group 1 are effectively removed. Accordingly, the novel Gabor texture features of group 1 are much lower than those of group 2: the maximum novel Gabor texture feature of group 1 is about 2.0 and is zero for many samples, while the novel Gabor texture features of group 2 are no less than 5.0. These numerical results are consistent with the fact that group 2 is coarser than group 1. Therefore, the method provided by the invention eliminates the interference of reflective spots, and the resulting novel Gabor texture feature describes the tongue surface texture more accurately.
To address the brightness drift problem, the inventors obtained a set of common Gabor texture features and the corresponding brightness on a tongue image dataset of 39,912 patches. On the same dataset, the inventors extracted the novel Gabor texture features and analyzed their relationship to image brightness. The image brightness and the novel Gabor texture feature of all samples are plotted in fig. 24. The distribution in fig. 24 shows that the novel Gabor texture features have no significant statistical linear dependence on image brightness, which indicates that the present invention effectively solves the problem of brightness drift interference. More specifically, the correlation coefficient between the novel Gabor texture features and the brightness is 0.188646, much lower than that of the common Gabor texture features (see table 3). Further, fig. 25 shows some samples in order of decreasing novel texture feature value; the samples become smoother and are little disturbed by brightness drift. Therefore, the results show that the method provided by the invention effectively solves the problem that brightness drift interferes with the extraction of the Gabor texture feature value.
TABLE 3

|         | Correlation coefficient |
| Gabor   | 0.8452 |
| P-Gabor | 0.1907 |
Based on the above embodiment, as shown in fig. 26, the present invention further provides a device for extracting Gabor texture features of a tongue image, the device comprising:
the calculation module 01 is used for acquiring gray image data corresponding to the original tongue image data and calculating similarity data between each pixel on the gray image and the region where the pixel is located;
a response module 02, configured to calculate texture enhancement filter response data of the grayscale image according to the similarity data;
and the texture module 03 is configured to segment the texture enhancement filter response data of the grayscale image, calculate the gabor filter response data, and obtain a gabor texture feature value corresponding to the original tongue image data through the gabor filter response data.
Based on the above embodiments, the present invention further provides an intelligent terminal, a schematic block diagram of which may be as shown in fig. 27. The intelligent terminal comprises a processor, a memory, a network interface and a display screen connected through a system bus. The processor of the intelligent terminal provides computing and control capability. The memory of the intelligent terminal comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the intelligent terminal is used to connect and communicate with external terminals through a network. The computer program, when executed by the processor, implements the method of extracting Gabor texture features of a tongue image. The display screen of the intelligent terminal may be a liquid crystal display or an electronic ink display.
It will be understood by those skilled in the art that the block diagram of fig. 27 shows only part of the structure associated with the solution of the present invention and does not limit the intelligent terminal to which the solution is applied; a specific intelligent terminal may include more or fewer components than shown, combine some components, or arrange the components differently.
In one implementation, one or more programs are stored in the memory of the intelligent terminal and configured to be executed by one or more processors, the one or more programs including instructions for performing the method of extracting Gabor texture features of a tongue image.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM) and direct Rambus dynamic RAM (DRDRAM).
In summary, the present invention discloses a method for extracting Gabor texture features of a tongue image, which preprocesses the image with a nonlinear texture enhancement filter and then uses the response data of that filter as the input of the Gabor filter to obtain novel Gabor texture feature values. Because the texture enhancement filter is an improved wide-line detection filter, its response data do not depend on brightness information; the resulting novel Gabor texture feature values are therefore not disturbed by brightness information, which effectively solves the problem that prior-art extraction of Gabor texture feature values is easily disturbed by brightness drift and reflective points.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.
Claims (10)
1. A method of extracting Gabor texture features of a tongue image, the method comprising:
acquiring grayscale image data corresponding to original tongue image data, and calculating similarity data between each pixel of the grayscale image and the region where the pixel is located;
calculating texture enhancement filter response data of the grayscale image according to the similarity data;
and segmenting the texture enhancement filter response data of the grayscale image, calculating Gabor filter response data, and obtaining the Gabor texture feature value corresponding to the original tongue image data from the Gabor filter response data.
2. The method of claim 1, wherein the acquiring grayscale image data corresponding to the original tongue image data and calculating similarity data between each pixel of the grayscale image and the region where the pixel is located comprises:
acquiring original tongue image data, and converting the original tongue image data into grayscale image data;
acquiring the difference of gray values between each pixel of the grayscale image and the pixels in the region where the pixel is located;
and comparing the difference of gray values with a preset threshold parameter, and generating similarity data between each pixel and the pixels in its region according to the comparison result.
3. The method of claim 2, wherein the comparing the difference of gray values with a preset threshold parameter and generating similarity data between each pixel and the pixels in its region according to the comparison result comprises:
comparing the difference of gray values with the preset threshold parameter;
when the difference of gray values is smaller than or equal to the preset threshold parameter, determining that the similarity value between the pixel and the pixel in its region is a first value;
and when the difference of gray values is larger than the preset threshold parameter, determining that the similarity value between the pixel and the pixel in its region is a second value.
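Expressed as code, the rule of claim 3 reduces to a single comparison; taking the first value as 1 and the second value as 0 is an assumption made here for illustration:

```python
def pixel_similarity(gray_diff, threshold, first=1.0, second=0.0):
    """Similarity of one neighbour: the first value when the gray-value
    difference does not exceed the preset threshold, otherwise the second."""
    return first if gray_diff <= threshold else second
```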
4. The method of claim 1, wherein the calculating texture enhancement filter response data of the grayscale image according to the similarity data comprises:
performing weighted summation on the similarity data, and taking the result of the weighted summation as the texture enhancement filter response data of the pixel;
and obtaining the texture enhancement filter response data of the grayscale image by determining the texture enhancement filter response data of each pixel.
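A minimal sketch of the weighted summation of claim 4, with uniform weights assumed since the claim does not fix them:

```python
import numpy as np

def enhancement_response(similarities, weights=None):
    """similarities is an (n_neighbours, H, W) stack of per-neighbour
    similarity values; the response of each pixel is the weighted sum
    over its neighbourhood."""
    s = np.asarray(similarities, dtype=float)
    if weights is None:
        weights = np.full(s.shape[0], 1.0 / s.shape[0])  # uniform weights
    return np.tensordot(weights, s, axes=1)  # (H, W) response map
```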
5. The method of claim 1, wherein the segmenting the texture enhancement filter response data of the grayscale image, calculating the Gabor filter response data, and obtaining the Gabor texture feature value corresponding to the original tongue image data from the Gabor filter response data comprises:
segmenting the texture enhancement filter response data of the grayscale image into a preset number of target blocks of preset size;
calculating a Gabor filter fusion response value of each target block;
and outputting the Gabor texture feature value corresponding to each target block from the Gabor filter fusion response value of each target block.
6. The method of claim 5, wherein the calculating the Gabor filter fusion response value of each target block comprises:
convolving the target block with Gabor filters of different scales and directions respectively to obtain a Gabor filter response value generated by each filter based on the target block;
and fusing all Gabor filter response values of the target block obtained under Gabor filters with different scales and directions, and taking the Gabor filter response value obtained after fusion as the Gabor filter fusion response value of the target block.
7. The method as claimed in claim 6, wherein the fusing all the Gabor filter response values obtained by the target block under Gabor filters with different scales and directions, and using the Gabor filter response value obtained after fusing as the Gabor filter fusion response value of the target block comprises:
and selecting the maximum value of the Gabor filter response value of each pixel in different scales and directions pixel by pixel to realize the fusion of all Gabor filter response values of the target block, and taking the Gabor filter response value obtained after the fusion as the Gabor filter fusion response value of the target block.
8. The method of claim 5, wherein the outputting the Gabor texture feature value corresponding to each target block from the Gabor filter fusion response value of each target block comprises:
calculating the average value of the Gabor filter fusion response values of the target block, and taking the calculated average value as the Gabor texture feature value of the region of the original tongue image data corresponding to the target block.
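Claims 5 to 8 together define one block-level computation, which may be sketched as follows using OpenCV's Gabor kernels; the kernel sizes, Gabor parameters and block size are chosen only for illustration and are not prescribed by the claims:

```python
import numpy as np
import cv2  # OpenCV, assumed available for the Gabor kernels

def block_feature(block, ksizes=(5, 9, 13), n_orient=4):
    """One target block -> one Gabor texture feature value."""
    responses = []
    for ksize in ksizes:                       # several scales (claim 6)
        for k in range(n_orient):              # several orientations (claim 6)
            theta = k * np.pi / n_orient
            kern = cv2.getGaborKernel((ksize, ksize), ksize / 3.0, theta,
                                      ksize / 2.0, 0.5, 0.0)
            responses.append(cv2.filter2D(block.astype(np.float32),
                                          cv2.CV_32F, kern))
    fused = np.maximum.reduce(responses)       # pixel-wise maximum (claim 7)
    return float(fused.mean())                 # mean as feature value (claim 8)

def gabor_texture_features(response_map, block=32):
    """Claim 5: split the enhancement response into fixed-size target
    blocks and compute one feature value per block."""
    h, w = response_map.shape
    return np.array([block_feature(response_map[y:y + block, x:x + block])
                     for y in range(0, h - block + 1, block)
                     for x in range(0, w - block + 1, block)])
```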
9. An apparatus for extracting Gabor texture features of a tongue image, the apparatus comprising:
the calculation module, configured to acquire grayscale image data corresponding to the original tongue image data and to calculate similarity data between each pixel of the grayscale image and the region where the pixel is located;
the response module, configured to calculate texture enhancement filter response data of the grayscale image according to the similarity data;
and the texture module, configured to segment the texture enhancement filter response data of the grayscale image, calculate the Gabor filter response data, and obtain the Gabor texture feature value corresponding to the original tongue image data from the Gabor filter response data.
10. A computer-readable storage medium having stored thereon instructions adapted to be loaded and executed by a processor to perform the steps of the method of extracting Gabor texture features of a tongue image according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110688079.1A CN113610753B (en) | 2021-06-21 | 2021-06-21 | Method, device and storage medium for extracting Gabor texture features of tongue image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110688079.1A CN113610753B (en) | 2021-06-21 | 2021-06-21 | Method, device and storage medium for extracting Gabor texture features of tongue image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113610753A true CN113610753A (en) | 2021-11-05 |
CN113610753B CN113610753B (en) | 2024-08-02 |
Family
ID=78336689
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110688079.1A Active CN113610753B (en) | 2021-06-21 | 2021-06-21 | Method, device and storage medium for extracting Gabor texture features of tongue image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113610753B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115082461A (en) * | 2022-08-19 | 2022-09-20 | 成都中医药大学 | Edge calculation-based pre-judgment filtering method and device |
CN117237478A (en) * | 2023-11-09 | 2023-12-15 | 北京航空航天大学 | Sketch-to-color image generation method, sketch-to-color image generation system, storage medium and processing terminal |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1931087A (en) * | 2006-10-11 | 2007-03-21 | 哈尔滨工业大学 | Automatic tongue picture grain analysis method |
US20070297673A1 (en) * | 2006-06-21 | 2007-12-27 | Jonathan Yen | Nonhuman animal integument pixel classification |
US20130015946A1 (en) * | 2011-07-12 | 2013-01-17 | Microsoft Corporation | Using facial data for device authentication or subject identification |
CN105930798A (en) * | 2016-04-21 | 2016-09-07 | 厦门快商通科技股份有限公司 | Tongue image quick detection and segmentation method based on learning and oriented to handset application |
CN110443128A (en) * | 2019-06-28 | 2019-11-12 | 广州中国科学院先进技术研究所 | One kind being based on SURF characteristic point accurately matched finger vein identification method |
CN111223063A (en) * | 2020-01-12 | 2020-06-02 | 杭州电子科技大学 | Finger vein image NLM denoising method based on texture features and binuclear function |
CN111242864A (en) * | 2020-01-12 | 2020-06-05 | 杭州电子科技大学 | Finger vein image restoration method based on Gabor texture constraint |
US20210097682A1 (en) * | 2019-09-30 | 2021-04-01 | Case Western Reserve University | Disease characterization and response estimation through spatially-invoked radiomics and deep learning fusion |
WO2021102039A1 (en) * | 2019-11-21 | 2021-05-27 | 10X Genomics, Inc. | Spatial analysis of analytes |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070297673A1 (en) * | 2006-06-21 | 2007-12-27 | Jonathan Yen | Nonhuman animal integument pixel classification |
CN1931087A (en) * | 2006-10-11 | 2007-03-21 | 哈尔滨工业大学 | Automatic tongue picture grain analysis method |
US20130015946A1 (en) * | 2011-07-12 | 2013-01-17 | Microsoft Corporation | Using facial data for device authentication or subject identification |
CN105930798A (en) * | 2016-04-21 | 2016-09-07 | 厦门快商通科技股份有限公司 | Tongue image quick detection and segmentation method based on learning and oriented to handset application |
CN110443128A (en) * | 2019-06-28 | 2019-11-12 | 广州中国科学院先进技术研究所 | One kind being based on SURF characteristic point accurately matched finger vein identification method |
US20210097682A1 (en) * | 2019-09-30 | 2021-04-01 | Case Western Reserve University | Disease characterization and response estimation through spatially-invoked radiomics and deep learning fusion |
WO2021102039A1 (en) * | 2019-11-21 | 2021-05-27 | 10X Genomics, Inc. | Spatial analysis of analytes |
CN111223063A (en) * | 2020-01-12 | 2020-06-02 | 杭州电子科技大学 | Finger vein image NLM denoising method based on texture features and binuclear function |
CN111242864A (en) * | 2020-01-12 | 2020-06-05 | 杭州电子科技大学 | Finger vein image restoration method based on Gabor texture constraint |
Non-Patent Citations (6)
Title |
---|
DAGOBERTO PORRAS: "DNN-based Acoustic-to-Articulatory Inversion using Ultrasound Tongue Imaging", IJCNN 2019 *
JIAN WU: "Illuminance Compensation and Texture Enhancement via the Hodge Decomposition", IEEE Transactions on Circuits and Systems for Video Technology *
JIAN WU: "Tongue Image Alignment via Conformal Mapping for Disease Detection", IEEE Access *
QIAO Xinxin: "Research on Color Image Segmentation Based on Clustering of Color and Texture Features", China Master's Theses Full-text Database *
ZHOU Shuren; YIN Jianping: "LBP Texture Features Based on Haar Characteristics", Journal of Software, no. 08 *
YANG Zhaohui: "Research on Diagnostic Classification of Cracked Tongue Images in Computerized Tongue Diagnosis", China Doctoral Dissertations Full-text Database *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115082461A (en) * | 2022-08-19 | 2022-09-20 | 成都中医药大学 | Edge calculation-based pre-judgment filtering method and device |
CN115082461B (en) * | 2022-08-19 | 2022-11-04 | 成都中医药大学 | Edge calculation-based pre-judgment filtering method and device |
CN117237478A (en) * | 2023-11-09 | 2023-12-15 | 北京航空航天大学 | Sketch-to-color image generation method, sketch-to-color image generation system, storage medium and processing terminal |
CN117237478B (en) * | 2023-11-09 | 2024-02-09 | 北京航空航天大学 | Sketch-to-color image generation method, sketch-to-color image generation system, storage medium and processing terminal |
Also Published As
Publication number | Publication date |
---|---|
CN113610753B (en) | 2024-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021238030A1 (en) | Water level monitoring method for performing scale recognition on the basis of partitioning by clustering | |
Kovács et al. | A self-calibrating approach for the segmentation of retinal vessels by template matching and contour reconstruction | |
Nguyen et al. | An effective retinal blood vessel segmentation method using multi-scale line detection | |
CN115311292A (en) | Strip steel surface defect detection method and system based on image processing | |
CN113781402A (en) | Method and device for detecting chip surface scratch defects and computer equipment | |
CN109145921A (en) | A kind of image partition method based on improved intuitionistic fuzzy C mean cluster | |
CN113610753B (en) | Method, device and storage medium for extracting Gabor texture features of tongue image | |
CN110781885A (en) | Text detection method, device, medium and electronic equipment based on image processing | |
CN106529559A (en) | Pointer-type circular multi-dashboard real-time reading identification method | |
CN112464829B (en) | Pupil positioning method, pupil positioning equipment, storage medium and sight tracking system | |
CN111899237B (en) | Scale precision measuring method, apparatus, computer device and storage medium | |
CN108734108B (en) | Crack tongue identification method based on SSD network | |
CN108764119B (en) | SAR image change detection method based on iteration maximum between-class variance | |
US20170178341A1 (en) | Single Parameter Segmentation of Images | |
CN114399502A (en) | Appearance defect detection method and system suitable for LED chip and storage medium | |
CN115082466B (en) | PCB surface welding spot defect detection method and system | |
CN116912255B (en) | Follicular region segmentation method for ovarian tissue analysis | |
CN115359053A (en) | Intelligent detection method and system for defects of metal plate | |
CN111899247A (en) | Method, device, equipment and medium for identifying lumen region of choroidal blood vessel | |
Kosarevych et al. | Image segmentation based on the evaluation of the tendency of image elements to form clusters with the help of point field characteristics | |
CN115908363A (en) | Tumor cell counting method, device, equipment and storage medium | |
CN117705815B (en) | Printing defect detection method based on machine vision | |
CN111429487B (en) | Method and device for segmenting adhesion foreground of depth image | |
CN114742849B (en) | Leveling instrument distance measuring method based on image enhancement | |
CN108805186B (en) | SAR image circular oil depot detection method based on multi-dimensional significant feature clustering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||