WO2020108437A1 - Sublingual vein feature extraction device and method - Google Patents


Info

Publication number
WO2020108437A1
Authority
WO
WIPO (PCT)
Prior art keywords
tongue
vein
sublingual
image
ventral
Prior art date
Application number
PCT/CN2019/120645
Other languages
English (en)
French (fr)
Inventor
张贯京
葛新科
高伟明
吕超
Original Assignee
深圳市前海安测信息技术有限公司
深圳市易特科信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市前海安测信息技术有限公司 and 深圳市易特科信息技术有限公司
Publication of WO2020108437A1


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06F 18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/23213 — Non-hierarchical clustering using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06V 10/26 — Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/267 — Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 40/14 — Vascular patterns

Definitions

  • The invention relates to the technical field of tongue diagnosis in traditional Chinese medicine (TCM), and in particular to a device and method for extracting sublingual vein features.
  • Sublingual vein inspection serves as a valuable auxiliary diagnostic method for a variety of diseases.
  • Sublingual vein inspection is chiefly indicative of stasis syndromes, especially blood stasis syndrome, and is of particular diagnostic value for malignant tumors, cardiopulmonary disease, liver disease, and blood disorders. Although it is non-specific, it reflects the patient's overall condition with respect to the cause of disease: whether qi and blood are balanced, whether the meridians are unobstructed, and whether phlegm and stasis are obstructing internally. A large body of TCM research indicates that it mainly reflects changes in the systemic circulation, the microcirculation, and the blood. For an experienced doctor who, through careful inspection, excludes the effects of age, individual differences, and climate, the results can make up for the shortcomings of traditional tongue diagnosis and provide additional important information for clinical syndrome differentiation.
  • The main purpose of the present invention is to provide a sublingual vein feature extraction device and method for TCM tongue diagnosis, aiming to solve the technical problem of low accuracy of sublingual vein feature extraction in the prior art.
  • The present invention provides a sublingual vein feature extraction device, including a processor suitable for executing computer program instructions and a memory suitable for storing them. The computer program instructions are loaded by the processor to perform the following steps: inputting different tongue sample images through the input unit to construct multiple positive and negative samples; processing the positive and negative samples with the opencv_createsamples program of the OpenCV open source library to generate a training data set; training on the training data set with the opencv_traincascade program of the OpenCV open source library to generate a tongue ventral surface detector; using the detector to detect the ventral surface of the tongue, including the lips, from the tongue image to be detected; determining the position information of the ventral surface in the tongue image and cropping the ventral surface, including the lips, according to that position information; performing threshold segmentation on the cropped ventral surface to obtain its shadow area and tooth area; using the shadow area and tooth area to create a tongue ventral segmentation template; inputting the tongue image to be detected and the segmentation template into the grabCut function of the OpenCV library to segment the tongue ventral image; classifying the tongue ventral image with the kmeans clustering algorithm of the OpenCV library to obtain a classification result image; segmenting the sublingual left and right veins from the classification result image and morphologically processing them to obtain sublingual left and right vein templates; and computing the sublingual vein image from these templates and extracting the sublingual vein features from it.
  • The step of creating the tongue ventral segmentation template from the shadow area and tooth area includes: extracting the contour line of the shadow area with the Canny edge detection algorithm; extracting all n coordinate points of the contour line and pairing them to form n×(n-1)/2 candidate edges; for each edge, checking whether the remaining (n-2) points lie on the same side of it and, if they do, adding the edge to the convex hull set, until all edges have been traversed; taking the convex hull set as the outline of the shadow area of the ventral surface of the tongue; creating a single-channel template image of the same size as the tongue sample image and mapping the outline of the shadow area to the corresponding position of the template image; and setting all pixel values of the non-white area inside the outline to 1, all pixel values of the white area inside the outline to 3, and all pixel values of the area outside the outline to 0.
  • The step of classifying the tongue ventral image with the kmeans clustering algorithm of the OpenCV library to obtain a classification result image includes: converting the RGB tongue ventral image to the Lab color space, and classifying the image with the kmeans clustering algorithm based on the pixel values of its a and b color channels in Lab space to obtain the classification result image.
  • The step of computing the sublingual vein image using the sublingual left and right vein templates includes: combining the sublingual left and right vein templates with each color channel of the tongue ventral image separately, and merging the results to obtain the sublingual vein image.
  • The sublingual vein features include the R, G and B color values and the H, S and V color values of the sublingual veins, the length and width of the left vein, the length and width of the right vein, and the tongue-length ratios of the left and right veins, where:
  • the R color value of the sublingual veins is extracted by dividing the sum of all pixel values of the R channel of the sublingual vein image by the number of sublingual vein pixels in the R channel; the G, B, H, S and V color values are calculated in the same way as the R color value;
  • the length and width of the left vein are extracted by calculating the minimum circumscribed rectangle of the left vein in the sublingual vein image; the length and width of that rectangle are the length and width of the sublingual left vein, respectively;
  • the rectangle calculation steps include:
  • the calculation method for extracting the length and width of the right vein is the same as the calculation method for extracting the length and width of the left vein;
  • the tongue-length ratio of the left vein is extracted by dividing the input tongue length by the length of the sublingual left vein, and the result is recorded as the tongue-length ratio of the left vein;
  • the tongue-length ratio of the right vein is extracted by dividing the input tongue length by the length of the sublingual right vein, and the result is recorded as the tongue-length ratio of the right vein.
  • the present invention also provides a method for extracting sublingual vein features.
  • The method includes the following steps: inputting different tongue sample images through an input unit to construct multiple positive and negative samples; processing the positive and negative samples with the opencv_createsamples program of the OpenCV open source library to generate a training data set; training on the training data set with the opencv_traincascade program of the OpenCV open source library to generate a tongue ventral surface detector; using the detector to detect the ventral surface of the tongue, including the lips, from the tongue image to be detected; determining the position information of the ventral surface in the tongue image and cropping the ventral surface, including the lips, according to that position information; performing threshold segmentation on the cropped ventral surface to obtain its shadow area and tooth area; using the shadow area and tooth area to create a tongue ventral segmentation template; inputting the tongue image to be detected and the segmentation template into the grabCut function of the OpenCV library to segment the tongue ventral image; classifying the tongue ventral image with the kmeans clustering algorithm of the OpenCV library to obtain a classification result image; segmenting the sublingual left and right veins from the classification result image and morphologically processing them to obtain sublingual left and right vein templates; and computing the sublingual vein image from these templates and extracting the sublingual vein features from it.
  • The step of creating the tongue ventral segmentation template from the shadow area and tooth area includes: extracting the contour line of the shadow area with the Canny edge detection algorithm; extracting all n coordinate points of the contour line and pairing them to form n×(n-1)/2 candidate edges; for each edge, checking whether the remaining (n-2) points lie on the same side of it and, if they do, adding the edge to the convex hull set, until all edges have been traversed; taking the convex hull set as the outline of the shadow area of the ventral surface of the tongue; creating a single-channel template image of the same size as the tongue sample image and mapping the outline of the shadow area to the corresponding position of the template image; and setting all pixel values of the non-white area inside the outline to 1, all pixel values of the white area inside the outline to 3, and all pixel values of the area outside the outline to 0.
  • The step of classifying the tongue ventral image with the kmeans clustering algorithm of the OpenCV library to obtain a classification result image includes: converting the RGB tongue ventral image to the Lab color space, and classifying the image with the kmeans clustering algorithm based on the pixel values of its a and b color channels in Lab space to obtain the classification result image.
  • The step of computing the sublingual vein image using the sublingual left and right vein templates includes: combining the sublingual left and right vein templates with each color channel of the tongue ventral image separately, and merging the results to obtain the sublingual vein image.
  • The sublingual vein features include the R, G and B color values and the H, S and V color values of the sublingual veins, the length and width of the left vein, the length and width of the right vein, and the tongue-length ratios of the left and right veins, where:
  • the R color value of the sublingual veins is extracted by dividing the sum of all pixel values of the R channel of the sublingual vein image by the number of sublingual vein pixels in the R channel; the G, B, H, S and V color values are calculated in the same way as the R color value;
  • the length and width of the left vein are extracted by calculating the minimum circumscribed rectangle of the left vein in the sublingual vein image; the length and width of that rectangle are the length and width of the sublingual left vein, respectively;
  • the rectangle calculation steps include:
  • the calculation method for extracting the length and width of the right vein is the same as the calculation method for extracting the length and width of the left vein;
  • the tongue-length ratio of the left vein is extracted by dividing the input tongue length by the length of the sublingual left vein, and the result is recorded as the tongue-length ratio of the left vein;
  • the tongue-length ratio of the right vein is extracted by dividing the input tongue length by the length of the sublingual right vein, and the result is recorded as the tongue-length ratio of the right vein.
  • The sublingual vein feature extraction device and method of the present invention train a tongue ventral surface detector on a large number of tongue sample images, so that the ventral surface of the tongue, including the lips, is detected effectively and the accuracy of tongue ventral segmentation is improved;
  • by color-classifying the tongue ventral image, the sublingual left and right veins are segmented effectively and more completely, which improves the accuracy of sublingual vein feature extraction and provides TCM doctors with a reference for sublingual collateral vessel diagnosis, thereby helping to improve its accuracy.
  • FIG. 1 is a schematic diagram of functional modules of a preferred embodiment of a sublingual vein feature extraction device of the present invention
  • FIG. 2 is a method flowchart of a preferred embodiment of a method for extracting features of sublingual veins of the present invention
  • FIG. 3 is a schematic diagram of segmenting the tongue and ventral image from the original tongue and facial image
  • FIG. 4 is a schematic diagram of segmenting the sublingual vein image from the ventral tongue image
  • FIG. 5 is a schematic diagram of extracting sublingual vein features from a sublingual vein image.
  • FIG. 1 is a schematic diagram of a functional module of a preferred embodiment of a sublingual vein feature extraction device of the present invention.
  • The sublingual vein feature extraction device 1 may be a personal computer, a workstation, a TCM tongue-image instrument, a TCM four-diagnostic instrument, or any other computer device with data processing and image processing capabilities that is equipped with the sublingual vein feature extraction system 10.
  • The sublingual vein feature extraction device 1 includes, but is not limited to, the sublingual vein feature extraction system 10, an input unit 11, a memory 12 adapted to store multiple computer program instructions, a processor 13 adapted to execute those instructions, and an output unit 14.
  • The input unit 11 may be an image capture device such as a high-definition camera, used to capture a tongue image and input it into the sublingual vein feature extraction device 1; it may also be an image reading device, used to read tongue images from a database and input them into the device 1.
  • The memory 12 may be a read-only memory (ROM), a random access memory (RAM), an electrically erasable programmable memory (EEPROM), a flash memory, a magnetic disk, or an optical disc.
  • The processor 13 may be a central processing unit (CPU), a microcontroller unit (MCU), a data processing chip, or any information processing unit with data processing capability.
  • The output unit 14 may be an output device such as a display or a printer; it outputs the extracted sublingual vein features for display or printing, providing the TCM doctor with a clinical reference that assists in improving the accuracy of tongue diagnosis.
  • The sublingual vein feature extraction system 10 is composed of program modules built from multiple computer program instructions, including but not limited to a tongue ventral surface training module 101, a tongue ventral surface detection module 102, a tongue ventral surface segmentation module 103, a sublingual vein segmentation module 104, and a sublingual vein feature extraction module 105.
  • A module in the present invention refers to a series of computer program instruction segments stored in the memory 12 that can be executed by the processor 13 of the sublingual vein feature extraction device 1 to complete a fixed function; the specific function of each module is described in detail below with reference to FIG. 2.
  • FIG. 2 is a flowchart of a preferred embodiment of the sublingual vein feature extraction method of the present invention.
  • The various steps of the sublingual vein feature extraction method are implemented by a computer software program, which is stored in a computer-readable storage medium (e.g., the memory 12) in the form of computer program instructions.
  • the computer-readable storage medium may include: a read-only memory, a random access memory, a magnetic disk, or an optical disk.
  • the computer program instructions can be loaded by a processor (for example, the processor 13) and execute the following steps S21 to S32.
  • Step S21: input different tongue sample images to construct multiple positive and negative samples. In this embodiment, the input unit 11 captures a large number of different tongue sample images with a high-definition camera, or reads them from an external database, and inputs them into the sublingual vein feature extraction system 10.
  • The tongue ventral surface training module 101 constructs multiple positive and negative samples from the input tongue sample images, for example 200 positive samples and 300 negative samples. A positive sample contains the image data of the tongue ventral surface region of a tongue sample image; a negative sample contains image data of a non-ventral-surface region.
  • The tongue ventral surface training module 101 is trained on the tongue sample images input by the input unit 11 to obtain a tongue ventral surface detector. Because the invention trains the detector on a large number of different tongue sample images, a doctor need only input the tongue image to be detected and the detector will detect the ventral surface of the tongue, including the lips.
  • Step S22: the opencv_createsamples program of the OpenCV open source library is used to process the multiple positive and negative samples and generate a training data set. In this embodiment, the tongue ventral surface training module 101 uses opencv_createsamples, a general-purpose sample-creation program in the OpenCV library; those skilled in the art can process the positive and negative samples with the existing opencv_createsamples program to generate a training data set.
  • Step S23: the opencv_traincascade program of the OpenCV open source library is used to train on the training data set and generate the tongue ventral surface detector. In this embodiment, the tongue ventral surface training module 101 uses opencv_traincascade, a general-purpose classifier-training program in the OpenCV library; those skilled in the art can train on the training data set with the existing opencv_traincascade program to generate the detector.
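The two training steps above can be sketched as command-line invocations. This is a hedged illustration only: the file names (positives.txt, negatives.txt, tongue_ventral.vec), sample counts, and window sizes are assumptions, not values from the patent; the flags are the standard opencv_createsamples/opencv_traincascade options.

```shell
# Assumed layout: positives.txt lists annotated tongue images
# ("img.jpg 1 x y w h" per line); negatives.txt lists background images.

# Pack the positive samples into a .vec training file.
opencv_createsamples -info positives.txt -num 200 \
    -w 24 -h 24 -vec tongue_ventral.vec

# Train the cascade classifier (the "tongue ventral surface detector").
opencv_traincascade -data detector/ -vec tongue_ventral.vec \
    -bg negatives.txt -numPos 180 -numNeg 300 \
    -numStages 15 -w 24 -h 24
```

The trained cascade is written to detector/cascade.xml and can then be loaded with cv2.CascadeClassifier.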
  • Step S24: the tongue ventral surface detector is used to detect the ventral surface of the tongue, including the lips, from the tongue image to be detected. Specifically, when performing detection and recognition, the tongue ventral surface detection module 102 first receives the tongue image to be detected from the input unit 11, for example (a) of FIG. 3; the trained detector then detects the ventral surface of the tongue, including the lips, shown in (b) of FIG. 3.
  • Step S25: determine the position information of the tongue ventral surface in the tongue image to be detected, and crop the ventral surface, including the lips, according to that position information. Specifically, the tongue ventral surface detection module 102 determines the position information rect(x, y, l, w) from the position of the ventral surface in the image, where x and y are the coordinates of the upper-left vertex of the box in FIG. 3(a) and l and w are the box's length and width, and then crops the ventral surface containing the lips accordingly.
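Cropping by rect(x, y, l, w) amounts to array slicing. A minimal sketch, with the interpretation that l spans columns and w spans rows (an assumption consistent with the box description above):

```python
import numpy as np

def crop_ventral_surface(image, rect):
    """Crop the tongue ventral surface (including lips) using the
    detector's position information rect = (x, y, l, w), where (x, y)
    is the upper-left vertex and l, w are the box length and width."""
    x, y, l, w = rect
    return image[y:y + w, x:x + l]  # rows span the width, columns the length

img = np.arange(10 * 10 * 3, dtype=np.uint8).reshape(10, 10, 3)
patch = crop_ventral_surface(img, (2, 3, 5, 4))
print(patch.shape)  # (4, 5, 3)
```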
  • Step S26: threshold segmentation is performed on the cropped tongue ventral surface to obtain its shadow area and tooth area. In this embodiment, the tongue ventral surface segmentation module 103 applies threshold segmentation to the cropped ventral surface (including the lips) and then applies a morphological transformation to the segmentation result to remove small speckle impurities. The white area in FIG. 3(c) represents the shadow area of the ventral surface (the dark area within the lips); the white area in FIG. 3(d) represents the tooth area.
  • Step S27: the tongue ventral segmentation template is created from the shadow area and the tooth area. In this embodiment, the tongue ventral surface segmentation module 103 performs a convex hull calculation on the shadow area to generate its contour, shown as the white outline in FIG. 3(e).
  • Specifically, the tongue ventral surface segmentation module 103 uses the Canny edge detection algorithm to extract the contour line of the shadow area, extracts all of its coordinate points (assume n points in total), and pairs them to form n×(n-1)/2 candidate edges. For each edge, it checks whether the remaining (n-2) points lie on the same side of the edge; if they do, the edge is added to the convex hull set, until all edges have been traversed, and the convex hull set is taken as the contour of the shadow area. The module then creates a single-channel template image of the same size as the tongue sample image, maps the contour of the shadow area to the corresponding position of the template image, sets all pixel values of the non-white area inside the contour to 1, all pixel values of the white area inside the contour to 3, and all pixel values outside the contour to 0. In the resulting template:
  • 0 represents a definite background pixel (not the tongue ventral surface)
  • 1 represents a definite foreground pixel (the tongue ventral surface)
  • 2 represents a possible background pixel (probably not the ventral surface)
  • 3 represents a possible foreground pixel (probably the ventral surface).
  • Step S28: the tongue image to be detected and the tongue ventral segmentation template are input into the grabCut function of the OpenCV open source library to segment the tongue ventral image. The grabCut function is an existing image segmentation function; a person skilled in the art can segment the tongue ventral image by inputting the tongue image and the segmentation template into it. Here, the tongue image to be detected (a) and the segmentation template (e) are input into grabCut to obtain the segmented tongue ventral image (f).
  • Step S29: the kmeans clustering algorithm of the OpenCV open source library is used to classify the tongue ventral image and obtain a classification result image. The tongue ventral surface segmentation module 103 first converts the RGB tongue ventral image (image (f) in FIG. 4) to the Lab color space, in which the a channel represents the magenta-green axis and the b channel the yellow-blue axis, and then classifies the image with kmeans clustering based on the pixel values of the a and b channels, for example into 3 classes. The classification result image is shown as (g) in FIG. 4, where each color, e.g. red, green or blue, represents one class.
  • Step S30: the sublingual left and right veins are segmented from the classification result image and morphologically processed to obtain sublingual left and right vein templates. The sublingual vein segmentation module 104 selects the two positions of the left and right veins in the tongue ventral image, with coordinate points left(x, y) and right(x, y), and maps them into the classification result image as seed points. The left and right veins are then segmented in turn with a region growing algorithm and morphologically processed to obtain the templates: (h) in FIG. 4 is the left sublingual vein template and (i) is the right sublingual vein template.
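A seed-based region growing pass over the classification result can be sketched with a breadth-first flood over same-label neighbors. This is a minimal illustration, not the patent's exact algorithm: 4-connectivity and a same-class membership test are assumed.

```python
from collections import deque
import numpy as np

def region_grow(label_map, seed):
    """Grow a region from a seed point over 4-connected pixels that
    share the seed's class label in the classification result image."""
    h, w = label_map.shape
    target = label_map[seed]
    region = np.zeros((h, w), np.uint8)
    queue = deque([seed])
    region[seed] = 1
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not region[nr, nc] \
                    and label_map[nr, nc] == target:
                region[nr, nc] = 1
                queue.append((nr, nc))
    return region

labels = np.zeros((8, 8), np.int32)
labels[2:6, 1:3] = 1   # "left vein" cluster
labels[2:6, 5:7] = 1   # "right vein" cluster, not connected to the left
left = region_grow(labels, (3, 1))
print(int(left.sum()))  # 8: only the connected left blob is grown
```

Growing from the right seed would yield the other blob; a morphological closing on each grown mask would then produce the vein templates.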
  • Step S31: the sublingual vein image is computed using the sublingual left and right vein templates. In this embodiment, the sublingual vein feature extraction module 105 combines the left and right vein templates with each color channel of the tongue ventral image separately and merges the results to obtain the sublingual vein image, shown as (j) in FIG. 4.
  • Step S32: the sublingual vein features are extracted from the sublingual vein image. In this embodiment, the sublingual vein feature extraction module 105 extracts a total of 12 features from the sublingual vein image: the R, G and B color values and the H, S and V color values of the sublingual veins, the length and width of the left vein, the length and width of the right vein, and the tongue-length ratios of the left and right veins, where:
  • the sublingual vein feature extraction module 105 extracts the R color value of the sublingual veins by dividing the sum of all pixel values of the R channel of the sublingual vein image by the number of sublingual vein pixels in the R channel; the G, B, H, S and V color values are calculated in the same way as the R color value;
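The color-value calculation above reduces to a masked mean. A small sketch, assuming non-zero pixels of a channel are the vein pixels (since the templates zero out everything else):

```python
import numpy as np

def mean_channel_value(vein_image, channel):
    """Mean color value of one channel over the sublingual vein pixels:
    the sum of the channel's pixel values divided by the number of
    non-zero (vein) pixels in that channel, as for the R value above."""
    ch = vein_image[:, :, channel].astype(np.float64)
    count = np.count_nonzero(ch)
    return ch.sum() / count if count else 0.0

vein = np.zeros((4, 4, 3), np.uint8)
vein[1:3, 1:3, 2] = 100          # four R-channel vein pixels (BGR order)
print(mean_channel_value(vein, 2))  # 100.0
```

The H, S, V values would apply the same function after converting the vein image to HSV.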
  • the sublingual vein feature extraction module 105 extracts the length and width of the left vein from the sublingual vein image by calculating the minimum circumscribed rectangle of the left vein; the length and width of that rectangle are the length and width of the sublingual left vein, respectively. The calculation steps of the minimum circumscribed rectangle of the left vein include:
  • the sublingual vein feature extraction module 105 extracts the length and width of the right vein from the sublingual vein image in the same way as the length and width of the left vein;
  • the sublingual vein feature extraction module 105 extracts the tongue-length ratio of the left vein from the sublingual vein image by dividing the input tongue-body length by the length of the left sublingual vein and recording the quotient as the left-vein tongue-length ratio;
  • the sublingual vein feature extraction module 105 extracts the tongue-length ratio of the right vein from the sublingual vein image by dividing the input tongue-body length by the length of the right sublingual vein and recording the quotient as the right-vein tongue-length ratio.
  • the sublingual vein feature extraction module 105 outputs the 12 extracted sublingual vein features through an output unit 14 to an externally connected display for display, or to a printer for printing, providing TCM physicians with a reference for sublingual collateral vessel diagnosis and thereby assisting the accuracy of their diagnostic judgment.
  • the invention also provides a computer-readable storage medium storing a plurality of computer program instructions, which are loaded by a processor of a computer device to execute the method steps of the sublingual vein feature extraction method of the invention.
  • the sublingual vein feature extraction device and method of the present invention train a ventral-tongue detector on a large number of tongue sample images to effectively detect the ventral tongue surface including the lips, improving the accuracy of ventral-tongue segmentation; by color-classifying the ventral tongue image to create left and right sublingual vein templates, the left and right sublingual veins are segmented effectively and more completely, improving the accuracy of sublingual vein feature extraction and providing TCM physicians with a reference for sublingual collateral vessel diagnosis, thereby assisting the accuracy of their diagnostic judgment.


Abstract

The present invention provides a sublingual vein feature extraction device and method. The method comprises the steps of: training with different tongue sample images to generate a ventral-tongue detector; detecting, with the detector, the ventral tongue surface including the lips from a tongue image; cropping the ventral tongue surface according to its position information; performing threshold segmentation on the ventral tongue surface to obtain its shadow region and tooth region; creating a ventral-tongue segmentation template from the shadow region and the tooth region; segmenting the ventral tongue image from the tongue image with the template; classifying the ventral tongue image to obtain a classification-result image; segmenting the left and right sublingual veins from the classification-result image and morphologically processing them to obtain left and right sublingual vein templates; and computing the sublingual vein image from the templates and extracting sublingual vein features from it. The sublingual vein feature extraction method of the present invention improves the accuracy of sublingual vein feature extraction.

Description

Sublingual Vein Feature Extraction Device and Method

Technical Field
The present invention relates to the technical field of tongue-image processing in traditional Chinese medicine (TCM), and in particular to a sublingual vein feature extraction device and method.
Background Art
Like tongue inspection, sublingual collateral vessel diagnosis has a certain auxiliary diagnostic value for various syndromes. It is of particular diagnostic value for stasis syndromes, especially blood stasis, and for diseases such as malignant tumors, cardiopulmonary diseases, liver diseases, and blood disorders. Although not a specific diagnostic, it reflects well the patient's overall response to pathogenic factors, and is especially significant for judging whether qi and blood are balanced, whether the meridians are unobstructed, and whether phlegm or stasis is present internally. According to extensive research in Chinese medicine, it mainly reflects the state of the systemic circulation and microcirculation and related changes in the blood. With sufficient experience, a practitioner who inspects carefully and excludes the influence of age, individual differences, and climate can use the result to compensate for the shortcomings of traditional tongue inspection and provide more, and more important, information for clinical syndrome differentiation.
However, the diagnostic result obtained by the traditional TCM tongue-inspection method is often affected by the physician's accumulated experience and by the patient's environment at the time; it is highly subjective and lacks an objective, quantitative basis. The further development of sublingual collateral vessel diagnosis still depends on careful basic research and the objectification of clinical observation. It is therefore particularly important to apply high-tech means to create new instruments and methods, to form unified, standardized objective indicators and observation methods, to find regularities, to explain mechanisms, and to make them practical for the clinic.
Research on computerized TCM sublingual collateral vessel diagnosis will further promote the integration of modern information science with traditional Chinese medicine. It has important theoretical and practical significance for the standardization of TCM syndrome differentiation, for the modernization of TCM clinical practice, teaching, and research, for solving the fundamental problems that constrain the advantages of TCM, and for realizing the modernization of TCM; it is an important link in the modernization of TCM tongue diagnosis. At present, computer methods such as image processing and pattern recognition provide reference bases for TCM diagnostic techniques. However, the prior art is not sufficiently accurate in ventral-tongue segmentation, and the sublingual veins it segments are incomplete, so the extracted sublingual vein features are inaccurate, which in turn affects the accuracy of the physician's sublingual collateral vessel diagnosis.
Technical Problem
The main purpose of the present invention is to provide a sublingual vein feature extraction device and method for use in TCM inspection, aiming to solve the technical problem that the prior art extracts sublingual vein features with insufficient accuracy.
Technical Solution
To achieve the above purpose, the present invention provides a sublingual vein feature extraction device comprising a processor adapted to execute various computer program instructions and a memory adapted to store a plurality of computer program instructions, the computer program instructions being loaded by the processor to perform the following steps: constructing a plurality of positive and negative samples from different tongue sample images input through an input unit; processing the plurality of positive and negative samples with the opencv_createsamples program of the OpenCV open-source library to generate a training data set; training the training data set with the opencv_traincascade program of the OpenCV open-source library to generate a ventral-tongue detector; detecting, with the ventral-tongue detector, the ventral tongue surface including the lips from a tongue image to be examined; determining position information of the ventral tongue surface from the tongue image to be examined, and cropping the ventral tongue surface including the lips according to the position information; performing threshold segmentation on the cropped ventral tongue surface to obtain a shadow region and a tooth region of the ventral tongue surface; creating a ventral-tongue segmentation template from the shadow region and the tooth region; inputting the tongue image to be examined and the ventral-tongue segmentation template into the grabCut function of the OpenCV open-source library to segment the ventral tongue image; classifying the ventral tongue image with the kmeans clustering algorithm of the OpenCV open-source library to obtain a classification-result image; segmenting the left and right sublingual veins from the classification-result image, and morphologically processing them to obtain left and right sublingual vein templates; and computing the sublingual vein image from the left and right sublingual vein templates, and extracting sublingual vein features from the sublingual vein image.
Preferably, the step of creating the ventral-tongue segmentation template from the shadow region and the tooth region comprises: extracting the contour of the shadow region with the Canny edge detection algorithm, extracting all n coordinate points of the contour, and pairing the n points to form n×(n-1)/2 edges; for each edge, checking whether the remaining (n-2) points lie on the same side of that edge; if all points lie on one side of the edge, adding the edge to a convex-hull set until every edge has been traversed, and taking the convex-hull set as the contour of the shadow region; creating a single-channel template image of the same size as the tongue sample image, and mapping the contour of the shadow region to the corresponding position of the single-channel template image; setting all pixel values of the non-white area inside the contour to 1, all pixel values of the white area inside the contour to 3, and all pixel values outside the contour to 0, and, according to the tooth region of the ventral tongue surface, setting the pixel values at the corresponding positions of the single-channel template image to 0, to obtain the ventral-tongue segmentation template.
Preferably, the step of classifying the ventral tongue image with the kmeans clustering algorithm of the OpenCV open-source library to obtain the classification-result image comprises: converting the RGB ventral tongue image to the Lab color space, and classifying the ventral tongue image with the kmeans clustering algorithm according to the pixel values of its a and b color channels in Lab space, to obtain the classification-result image.
Preferably, the step of computing the sublingual vein image from the left and right sublingual vein templates comprises: AND-ing the left and right sublingual vein templates with each color channel of the ventral tongue image separately, and merging the results to obtain the sublingual vein image.
Preferably, the sublingual vein features include the R, G, B color values and H, S, V color values of the sublingual veins, the length and width of the left vein, the length and width of the right vein, the tongue-length ratio of the left vein, and the tongue-length ratio of the right vein, where:
the R color value of the sublingual veins is computed by dividing the sum of all pixel values of the R channel of the sublingual vein image by the number of sublingual vein pixels in the R channel; the G, B, H, S, and V color values are computed in the same way as the R color value;
the length and width of the left vein are computed as the length and width of the minimum circumscribed rectangle of the left vein in the sublingual vein image, the rectangle being computed as follows:
(1) extract the contour of the left vein and compute the convex hull of the left vein from the contour;
(2) randomly select an edge AB of the convex hull as the starting edge, A and B being its two endpoints, and rotate AB about endpoint A by an angle θ so that AB is parallel to the x-axis, whereby all points of the convex hull are rotated about A by θ;
(3) take AB as the upper or lower boundary of a circumscribed rectangle; find the hull point with the minimum or maximum y value and draw a line through it parallel to the x-axis to determine the opposite boundary; find the leftmost hull point with minimum x and the rightmost hull point with maximum x and draw lines through them perpendicular to the x-axis to determine the left and right boundaries, yielding a circumscribed rectangle; compute and store the endpoint coordinates of AB and the rectangle's length, width, and area;
(4) select the next hull edge BC in order and repeat steps (2) to (3) to find the next circumscribed rectangle, until every edge of the hull has been traversed;
(5) compare the areas of all circumscribed rectangles and take the one with the smallest area as the minimum circumscribed rectangle of the left vein;
the length and width of the right vein are computed in the same way as the length and width of the left vein;
the tongue-length ratio of the left vein is computed by dividing the input tongue-body length by the length of the left sublingual vein;
the tongue-length ratio of the right vein is computed by dividing the input tongue-body length by the length of the right sublingual vein.
In another aspect, the present invention further provides a sublingual vein feature extraction method comprising the following steps: constructing a plurality of positive and negative samples from different tongue sample images input through an input unit; processing the plurality of positive and negative samples with the opencv_createsamples program of the OpenCV open-source library to generate a training data set; training the training data set with the opencv_traincascade program of the OpenCV open-source library to generate a ventral-tongue detector; detecting, with the ventral-tongue detector, the ventral tongue surface including the lips from a tongue image to be examined; determining position information of the ventral tongue surface from the tongue image to be examined, and cropping the ventral tongue surface including the lips according to the position information; performing threshold segmentation on the cropped ventral tongue surface to obtain a shadow region and a tooth region of the ventral tongue surface; creating a ventral-tongue segmentation template from the shadow region and the tooth region; inputting the tongue image to be examined and the ventral-tongue segmentation template into the grabCut function of the OpenCV open-source library to segment the ventral tongue image; classifying the ventral tongue image with the kmeans clustering algorithm of the OpenCV open-source library to obtain a classification-result image; segmenting the left and right sublingual veins from the classification-result image, and morphologically processing them to obtain left and right sublingual vein templates; and computing the sublingual vein image from the left and right sublingual vein templates, and extracting sublingual vein features from the sublingual vein image.
Preferably, the step of creating the ventral-tongue segmentation template from the shadow region and the tooth region comprises: extracting the contour of the shadow region with the Canny edge detection algorithm, extracting all n coordinate points of the contour, and pairing the n points to form n×(n-1)/2 edges; for each edge, checking whether the remaining (n-2) points lie on the same side of that edge; if all points lie on one side of the edge, adding the edge to a convex-hull set until every edge has been traversed, and taking the convex-hull set as the contour of the shadow region; creating a single-channel template image of the same size as the tongue sample image, and mapping the contour of the shadow region to the corresponding position of the single-channel template image; setting all pixel values of the non-white area inside the contour to 1, all pixel values of the white area inside the contour to 3, and all pixel values outside the contour to 0, and, according to the tooth region of the ventral tongue surface, setting the pixel values at the corresponding positions of the single-channel template image to 0, to obtain the ventral-tongue segmentation template.
Preferably, the step of classifying the ventral tongue image with the kmeans clustering algorithm of the OpenCV open-source library to obtain the classification-result image comprises: converting the RGB ventral tongue image to the Lab color space, and classifying the ventral tongue image with the kmeans clustering algorithm according to the pixel values of its a and b color channels in Lab space, to obtain the classification-result image.
Preferably, the step of computing the sublingual vein image from the left and right sublingual vein templates comprises: AND-ing the left and right sublingual vein templates with each color channel of the ventral tongue image separately, and merging the results to obtain the sublingual vein image.
Preferably, the sublingual vein features include the R, G, B color values and H, S, V color values of the sublingual veins, the length and width of the left vein, the length and width of the right vein, the tongue-length ratio of the left vein, and the tongue-length ratio of the right vein, where:
the R color value of the sublingual veins is computed by dividing the sum of all pixel values of the R channel of the sublingual vein image by the number of sublingual vein pixels in the R channel; the G, B, H, S, and V color values are computed in the same way as the R color value;
the length and width of the left vein are computed as the length and width of the minimum circumscribed rectangle of the left vein in the sublingual vein image, the rectangle being computed as follows:
(1) extract the contour of the left vein and compute the convex hull of the left vein from the contour;
(2) randomly select an edge AB of the convex hull as the starting edge, A and B being its two endpoints, and rotate AB about endpoint A by an angle θ so that AB is parallel to the x-axis, whereby all points of the convex hull are rotated about A by θ;
(3) take AB as the upper or lower boundary of a circumscribed rectangle; find the hull point with the minimum or maximum y value and draw a line through it parallel to the x-axis to determine the opposite boundary; find the leftmost hull point with minimum x and the rightmost hull point with maximum x and draw lines through them perpendicular to the x-axis to determine the left and right boundaries, yielding a circumscribed rectangle; compute and store the endpoint coordinates of AB and the rectangle's length, width, and area;
(4) select the next hull edge BC in order and repeat steps (2) to (3) to find the next circumscribed rectangle, until every edge of the hull has been traversed;
(5) compare the areas of all circumscribed rectangles and take the one with the smallest area as the minimum circumscribed rectangle of the left vein;
the length and width of the right vein are computed in the same way as the length and width of the left vein;
the tongue-length ratio of the left vein is computed by dividing the input tongue-body length by the length of the left sublingual vein;
the tongue-length ratio of the right vein is computed by dividing the input tongue-body length by the length of the right sublingual vein.
Beneficial Effects
Compared with the prior art, the sublingual vein feature extraction device and method of the present invention train a ventral-tongue detector on a large number of tongue sample images to effectively detect the ventral tongue surface including the lips, improving the accuracy of ventral-tongue segmentation; by color-classifying the ventral tongue image to create left and right sublingual vein templates, the left and right sublingual veins are segmented effectively and more completely, improving the accuracy of sublingual vein feature extraction and providing TCM physicians with a reference for sublingual collateral vessel diagnosis, thereby assisting the accuracy of their diagnostic judgment.
Brief Description of the Drawings
FIG. 1 is a functional block diagram of a preferred embodiment of the sublingual vein feature extraction device of the present invention;
FIG. 2 is a flowchart of a preferred embodiment of the sublingual vein feature extraction method of the present invention;
FIG. 3 is a schematic diagram of segmenting the ventral tongue image from the original tongue image;
FIG. 4 is a schematic diagram of segmenting the sublingual vein image from the ventral tongue image;
FIG. 5 is a schematic diagram of extracting sublingual vein features from the sublingual vein image.
The realization of the purpose of the present invention, its functional features, and its advantages are further described below with reference to the accompanying drawings in conjunction with the embodiments.
Embodiments of the Invention
To further explain the technical means adopted by the present invention to achieve its intended purpose and their effects, the specific embodiments, structures, features, and effects of the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention and are not intended to limit it.
Referring to FIG. 1, FIG. 1 is a functional block diagram of a preferred embodiment of the sublingual vein feature extraction device of the present invention. In this embodiment, the sublingual vein feature extraction device 1 may be a computer device with data- and image-processing capabilities on which a sublingual vein feature extraction system 10 is installed, such as a personal computer, a workstation, a TCM facial-image instrument, or a TCM four-diagnosis instrument. In this embodiment, the sublingual vein feature extraction device 1 includes, but is not limited to, the sublingual vein feature extraction system 10, an input unit 11, a memory 12 adapted to store a plurality of computer program instructions, a processor 13 that executes various computer program instructions, and an output unit 14. The input unit 11 may be an image input device such as a high-definition camera that captures tongue images and inputs them into the device 1, or an image-reading device that reads tongue images from a database storing them and inputs them into the device 1. The memory 12 may be a read-only memory (ROM), random-access memory (RAM), electrically erasable memory (EEPROM), flash memory (FLASH), magnetic disk, or optical disk. The processor 13 is a central processing unit (CPU), microcontroller (MCU), data-processing chip, or other information-processing unit with data-processing capability. The output unit 14 may be an output device such as a display or a printer capable of outputting the extracted sublingual vein features for display or printing, providing TCM physicians with a clinical reference for tongue diagnosis and thereby assisting the accuracy of their diagnostic judgment.
In this embodiment, the sublingual vein feature extraction system 10 consists of program modules composed of computer program instructions, including but not limited to a ventral-tongue training module 101, a ventral-tongue detection module 102, a ventral-tongue segmentation module 103, a sublingual vein segmentation module 104, and a sublingual vein feature extraction module 105. A module in the present invention is a series of computer program instruction segments, stored in the memory 12, that can be executed by the processor 13 of the device 1 and accomplish a fixed function. The specific function of each module is described below with reference to FIG. 2.
Referring to FIG. 2, FIG. 2 is a flowchart of a preferred embodiment of the sublingual vein feature extraction method of the present invention. In this embodiment, the method steps of the sublingual vein feature extraction method are implemented by a computer software program stored, in the form of computer program instructions, in a computer-readable storage medium (e.g., the memory 12), which may include a read-only memory, a random-access memory, a magnetic disk, or an optical disk; the computer program instructions can be loaded by a processor (e.g., the processor 13) to perform steps S21 to S32 below.
Step S21: different tongue sample images are input to construct a plurality of positive and negative samples. In this embodiment, the input unit 11 captures a large number of different tongue sample images with a high-definition camera, or reads them from an external database, and inputs them into the sublingual vein feature extraction system 10. The ventral-tongue training module 101 constructs a plurality of positive and negative samples from the input tongue sample images, for example 200 positive samples and 300 negative samples; a positive sample contains the image data of the ventral-tongue region of a tongue sample image, and a negative sample contains the image data of a non-ventral-tongue region of a tongue sample image. The tongue sample images input by the input unit 11 are passed to the ventral-tongue training module 101 for training to obtain a ventral-tongue detector. The present invention trains, on a large number of different tongue sample images, a ventral-tongue detector for recognizing tongue images, so that as soon as a physician inputs a tongue image to be examined, the detector can detect the ventral tongue surface including the lips.
Step S22: the plurality of positive and negative samples are processed with the opencv_createsamples program of the OpenCV open-source library to generate a training data set. In this embodiment, the ventral-tongue training module 101 processes the positive and negative samples with opencv_createsamples to generate a training data set. The opencv_createsamples program is a general-purpose sample-creation program in the OpenCV library; a person skilled in the art can use the existing opencv_createsamples program to process the positive and negative samples into a training data set.
Step S23: the training data set is trained with the opencv_traincascade program of the OpenCV open-source library to generate the ventral-tongue detector. In this embodiment, the ventral-tongue training module 101 trains the data set with opencv_traincascade to generate the ventral-tongue detector. The opencv_traincascade program is a general-purpose classifier-training program in the OpenCV library; a person skilled in the art can use the existing opencv_traincascade program to train the data set into a ventral-tongue detector.
Step S24: the ventral-tongue detector detects the ventral tongue surface including the lips from the tongue image to be examined. Specifically, when performing ventral-tongue detection and recognition, the ventral-tongue detection module 102 first receives the tongue image to be examined from the input unit 11, for example the image shown in FIG. 3(a), and then uses the trained ventral-tongue detector to detect the ventral tongue surface including the lips, for example as shown in FIG. 3(b).
Step S25: position information of the ventral tongue surface is determined from the tongue image to be examined, and the ventral tongue surface including the lips is cropped according to the position information. Specifically, the ventral-tongue detection module 102 determines the position information rect(x, y, l, w) of the ventral tongue surface from its position in the tongue image and crops the ventral tongue surface including the lips accordingly, where x and y denote the coordinates of the top-left corner of the box in FIG. 3(a), and l and w denote the length and width of the box.
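The cropping in step S25 can be sketched with plain NumPy slicing. This is a minimal sketch, assuming the rect(x, y, l, w) convention described above (x, y the top-left corner, l the horizontal extent, w the vertical extent); `crop_rect` is an illustrative helper, not part of the patent's code.

```python
import numpy as np

def crop_rect(image, x, y, l, w):
    """Crop the detected region given rect(x, y, l, w): (x, y) is the
    top-left corner of the box, l its length (horizontal extent) and
    w its width (vertical extent)."""
    return image[y:y + w, x:x + l]

img = np.arange(80).reshape(8, 10)          # toy 8x10 "tongue image"
patch = crop_rect(img, x=2, y=1, l=4, w=3)  # 3 rows x 4 columns
```

With a real image the array would be H×W×3; the same slice applies unchanged to the leading two axes.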
Step S26: threshold segmentation is performed on the cropped ventral tongue surface to obtain its shadow region and tooth region. In this embodiment, the ventral-tongue segmentation module 103 performs threshold segmentation on the cropped ventral tongue surface (including the lips) and applies morphological transformations to the segmentation result to remove small speckle impurities from the ventral tongue image, obtaining the shadow region and the tooth region of the ventral tongue surface (the shadow and tooth regions inside the lips); the white area in FIG. 3(c) denotes the shadow region, and the white area in FIG. 3(d) denotes the tooth region.
Step S27: a ventral-tongue segmentation template is created from the shadow region and the tooth region. In this embodiment, the ventral-tongue segmentation module 103 performs a convex-hull computation on the shadow region to generate its contour, shown as the white contour in FIG. 3(e). Specifically: the module 103 extracts the contour of the shadow region with the Canny edge detection algorithm, then extracts all coordinate points of the contour (say n points in total) and pairs them to form n×(n-1)/2 edges; for each edge it checks whether the remaining (n-2) points lie on the same side of that edge; if all points lie on one side of the edge, the edge is added to the convex-hull set, taken as the contour of the shadow region, until every formed edge has been traversed. It then creates a single-channel template image of the same size as the tongue sample image, maps the contour of the shadow region to the corresponding position of the template image, sets all pixel values of the non-white area inside the contour to 1, all pixel values of the white area inside the contour to 3, and all pixel values outside the contour to 0, and, according to the tooth region, sets the pixel values at the corresponding positions of the template image to 0, thereby obtaining the ventral-tongue segmentation template. Here 0 denotes a definite background pixel (not the ventral tongue), 1 a definite foreground pixel (the ventral tongue), 2 an uncertain background pixel (possibly not the ventral tongue), and 3 an uncertain foreground pixel (possibly the ventral tongue).
Step S28: the tongue image to be examined and the ventral-tongue segmentation template are input into the grabCut function of the OpenCV open-source library to segment the ventral tongue image. In this embodiment, the ventral-tongue segmentation module 103 inputs the tongue image and the segmentation template into grabCut. The grabCut function of the OpenCV library is an existing segmentation function capable of image segmentation; a person skilled in the art can input the tongue image and the segmentation template into grabCut and use it to segment the ventral tongue image. As shown in FIG. 3, for example, the tongue image to be examined (a) and the segmentation template (e) are input into grabCut to segment the ventral tongue image (f).
Step S29: the ventral tongue image is classified with the kmeans clustering algorithm of the OpenCV open-source library to obtain a classification-result image. In this embodiment, the ventral-tongue segmentation module 103 first converts the RGB ventral tongue image (image (f) in FIG. 4) to the Lab color space and then, based on the a and b color channels of Lab space (the a channel spans magenta to green, the b channel spans yellow to blue), classifies the ventral tongue image with the kmeans clustering algorithm, for example into 3 classes; the classification-result image is shown in FIG. 4(g), with each color representing one class, e.g., red, green, and blue.
Step S30: the left and right sublingual veins are segmented from the classification-result image and morphologically processed to obtain the left and right sublingual vein templates. In this embodiment, the sublingual vein segmentation module 104 selects two positions of the left and right sublingual veins in the ventral tongue image, with position coordinates left(x, y) and right(x, y), maps the two position coordinates onto the classification-result image as seed points, then segments the left and right sublingual veins in turn from the classification-result image with a region-growing algorithm, and morphologically processes the left and right sublingual veins to obtain the templates; (h) in FIG. 4 denotes the left sublingual vein template and (i) in FIG. 4 denotes the right sublingual vein template.
Step S31: the sublingual vein image is computed using the left and right sublingual vein templates. In this embodiment, the sublingual vein feature extraction module 105 computes the sublingual vein image by AND-ing the left and right sublingual vein templates with each color channel of the ventral tongue image separately and merging the results; (j) in FIG. 4 denotes the sublingual vein image.
Step S32: sublingual vein features are extracted from the sublingual vein image. In this embodiment, the sublingual vein feature extraction module 105 extracts a total of 12 sublingual vein features from the sublingual vein image: the R, G, B color values and H, S, V color values of the sublingual veins, the length and width of the left vein, the length and width of the right vein, the tongue-length ratio of the left vein, and the tongue-length ratio of the right vein, where:
the sublingual vein feature extraction module 105 computes the R color value of the sublingual veins by dividing the sum of all pixel values of the R channel of the sublingual vein image by the number of sublingual vein pixels in the R channel; the G, B, H, S, and V color values are computed in the same way as the R color value;
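The color-value rule of step S32 is simply the mean of a channel over the vein mask. A small sketch, with `mean_channel_value` as an illustrative helper name:

```python
import numpy as np

def mean_channel_value(channel, vein_mask):
    """Sum of the channel's vein-pixel values divided by the number of
    vein pixels, i.e. the mean over the mask; applied identically to
    the R, G, B, H, S and V channels in step S32."""
    return channel[vein_mask].sum() / vein_mask.sum()

channel = np.array([[10.0, 0.0], [30.0, 50.0]])
vein_mask = np.array([[True, False], [True, True]])
r_value = mean_channel_value(channel, vein_mask)  # (10 + 30 + 50) / 3 = 30.0
```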
Referring to FIG. 5, the sublingual vein feature extraction module 105 extracts the length and width of the left vein from the sublingual vein image by computing the minimum circumscribed rectangle of the left vein in the sublingual vein image; the length and width of that rectangle are the length and width of the left sublingual vein, respectively. The minimum circumscribed rectangle of the left vein is computed as follows:
(1) extract the contour of the left vein and compute the convex hull of the left vein from the contour;
(2) randomly select an edge AB of the convex hull as the starting edge, A and B being its two endpoints, and rotate AB about endpoint A by an angle θ so that AB is parallel to the x-axis, whereby all points of the convex hull are rotated about A by θ;
(3) take AB as the upper or lower boundary of a circumscribed rectangle; find the hull point with the minimum or maximum y value and draw a line through it parallel to the x-axis to determine the opposite boundary; find the leftmost hull point with minimum x and the rightmost hull point with maximum x and draw lines through them perpendicular to the x-axis to determine the left and right boundaries, yielding a circumscribed rectangle; compute and store the endpoint coordinates of AB and the rectangle's length, width, and area;
(4) select the next hull edge BC in order and repeat steps (2) to (3) to find the next circumscribed rectangle, until every edge of the hull has been traversed;
(5) compare the areas of all circumscribed rectangles and take the one with the smallest area as the minimum circumscribed rectangle of the left vein;
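Steps (1)-(5) above can be sketched as a rotating search over hull edges: rotate the hull so each edge lies along the x-axis, take the axis-aligned bounding box, and keep the smallest. The sketch below assumes the hull vertices are already ordered and omits the endpoint bookkeeping of step (3); `min_area_rect` is illustrative, not the patent's code:

```python
import numpy as np

def min_area_rect(hull):
    """For each hull edge AB, rotate the hull so AB is parallel to the
    x-axis, take the axis-aligned extents as a circumscribed rectangle,
    and keep the smallest one (steps (2)-(5)). `hull` is an (n, 2)
    array of ordered convex-hull vertices."""
    best = None
    n = len(hull)
    for i in range(n):
        a, b = hull[i], hull[(i + 1) % n]
        theta = -np.arctan2(b[1] - a[1], b[0] - a[0])  # angle mapping AB onto the x-axis
        c, s = np.cos(theta), np.sin(theta)
        rot = (hull - a) @ np.array([[c, -s], [s, c]]).T
        length = rot[:, 0].max() - rot[:, 0].min()     # extent along AB
        width = rot[:, 1].max() - rot[:, 1].min()      # extent across AB
        if best is None or length * width < best[0]:
            best = (length * width, length, width)
    return best  # (area, length, width)

# An axis-aligned 2x2 square and a 45-degree "diamond" of area 2
square = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], dtype=float)
diamond = np.array([[0, 0], [1, 1], [0, 2], [-1, 1]], dtype=float)
area_sq = min_area_rect(square)[0]   # 4.0
area_di = min_area_rect(diamond)[0]  # 2.0 (smaller than its axis-aligned bbox, 4.0)
```

The diamond case shows why the rotation matters: the minimum-area rectangle aligns with a hull edge, not with the coordinate axes.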
The sublingual vein feature extraction module 105 extracts the length and width of the right vein from the sublingual vein image in the same way as the length and width of the left vein.
The sublingual vein feature extraction module 105 extracts the tongue-length ratio of the left vein from the sublingual vein image by dividing the input tongue-body length by the length of the left sublingual vein and recording the quotient as the left-vein tongue-length ratio.
The sublingual vein feature extraction module 105 extracts the tongue-length ratio of the right vein from the sublingual vein image by dividing the input tongue-body length by the length of the right sublingual vein and recording the quotient as the right-vein tongue-length ratio.
As a preferred embodiment, the sublingual vein feature extraction module 105 outputs the 12 extracted sublingual vein features through the output unit 14 to an externally connected display for display, or to a printer for printing, providing TCM physicians with a reference for sublingual collateral vessel diagnosis and thereby assisting the accuracy of their diagnostic judgment.
The present invention also provides a computer-readable storage medium storing a plurality of computer program instructions, which are loaded by a processor of a computer device to execute the method steps of the sublingual vein feature extraction method of the present invention. A person skilled in the art will understand that all or part of the steps of the various methods in the above embodiments can be accomplished by program instructions; the program may be stored in a computer-readable storage medium, which may include a read-only memory, a random-access memory, a magnetic disk, an optical disk, and the like.
The sublingual vein feature extraction device and method of the present invention train a ventral-tongue detector on a large number of tongue sample images to effectively detect the ventral tongue surface including the lips, improving the accuracy of ventral-tongue segmentation; by color-classifying the ventral tongue image to create left and right sublingual vein templates, the left and right sublingual veins are segmented effectively and more completely, improving the accuracy of sublingual vein feature extraction.
The above are only preferred embodiments of the present invention and do not thereby limit its patent scope; any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Industrial Applicability
Compared with the prior art, the sublingual vein feature extraction device and method of the present invention train a ventral-tongue detector on a large number of tongue sample images to effectively detect the ventral tongue surface including the lips, improving the accuracy of ventral-tongue segmentation; by color-classifying the ventral tongue image to create left and right sublingual vein templates, the left and right sublingual veins are segmented effectively and more completely, improving the accuracy of sublingual vein feature extraction and providing TCM physicians with a reference for sublingual collateral vessel diagnosis, thereby assisting the accuracy of their diagnostic judgment.

Claims (10)

  1. A sublingual vein feature extraction device, comprising a processor adapted to execute various computer program instructions and a memory adapted to store a plurality of computer program instructions, wherein the computer program instructions are loaded by the processor to perform the following steps:
    constructing a plurality of positive and negative samples from different tongue sample images input through an input unit;
    processing the plurality of positive and negative samples with the opencv_createsamples program of the OpenCV open-source library to generate a training data set;
    training the training data set with the opencv_traincascade program of the OpenCV open-source library to generate a ventral-tongue detector;
    detecting, with the ventral-tongue detector, the ventral tongue surface including the lips from a tongue image to be examined;
    determining position information of the ventral tongue surface from the tongue image to be examined, and cropping the ventral tongue surface including the lips according to the position information;
    performing threshold segmentation on the cropped ventral tongue surface to obtain a shadow region and a tooth region of the ventral tongue surface;
    creating a ventral-tongue segmentation template from the shadow region and the tooth region;
    inputting the tongue image to be examined and the ventral-tongue segmentation template into the grabCut function of the OpenCV open-source library to segment the ventral tongue image;
    classifying the ventral tongue image with the kmeans clustering algorithm of the OpenCV open-source library to obtain a classification-result image;
    segmenting left and right sublingual veins from the classification-result image, and morphologically processing the left and right sublingual veins to obtain left and right sublingual vein templates;
    computing a sublingual vein image from the left and right sublingual vein templates, and extracting sublingual vein features from the sublingual vein image.
  2. The sublingual vein feature extraction device of claim 1, wherein the step of creating the ventral-tongue segmentation template from the shadow region and the tooth region comprises:
    extracting the contour of the shadow region with the Canny edge detection algorithm, extracting all n coordinate points of the contour, and pairing the n points to form n×(n-1)/2 edges;
    for each edge, checking whether the remaining (n-2) points lie on the same side of that edge;
    if all points lie on one side of the edge, adding the edge to a convex-hull set until every edge has been traversed, and taking the convex-hull set as the contour of the shadow region;
    creating a single-channel template image of the same size as the tongue sample image, and mapping the contour of the shadow region to the corresponding position of the single-channel template image;
    setting all pixel values of the non-white area inside the contour to 1, all pixel values of the white area inside the contour to 3, and all pixel values outside the contour to 0, and, according to the tooth region of the ventral tongue surface, setting the pixel values at the corresponding positions of the single-channel template image to 0, to obtain the ventral-tongue segmentation template.
  3. The sublingual vein feature extraction device of claim 1, wherein the step of classifying the ventral tongue image with the kmeans clustering algorithm of the OpenCV open-source library to obtain the classification-result image comprises:
    converting the RGB ventral tongue image to the Lab color space, and classifying the ventral tongue image with the kmeans clustering algorithm according to the pixel values of its a and b color channels in Lab space, to obtain the classification-result image.
  4. The sublingual vein feature extraction device of claim 1, wherein the step of computing the sublingual vein image from the left and right sublingual vein templates comprises:
    AND-ing the left and right sublingual vein templates with each color channel of the ventral tongue image separately, and merging the results to obtain the sublingual vein image.
  5. The sublingual vein feature extraction device of claim 1, wherein the sublingual vein features include the R, G, B color values and H, S, V color values of the sublingual veins, the length and width of the left vein, the length and width of the right vein, the tongue-length ratio of the left vein, and the tongue-length ratio of the right vein, wherein:
    the R color value of the sublingual veins is computed by dividing the sum of all pixel values of the R channel of the sublingual vein image by the number of sublingual vein pixels in the R channel; the G, B, H, S, and V color values are computed in the same way as the R color value;
    the length and width of the left vein are computed as the length and width of the minimum circumscribed rectangle of the left vein in the sublingual vein image, the rectangle being computed as follows:
    (1) extract the contour of the left vein and compute the convex hull of the left vein from the contour;
    (2) randomly select an edge AB of the convex hull as the starting edge, A and B being its two endpoints, and rotate AB about endpoint A by an angle θ so that AB is parallel to the x-axis, whereby all points of the convex hull are rotated about A by θ;
    (3) take AB as the upper or lower boundary of a circumscribed rectangle; find the hull point with the minimum or maximum y value and draw a line through it parallel to the x-axis to determine the opposite boundary; find the leftmost hull point with minimum x and the rightmost hull point with maximum x and draw lines through them perpendicular to the x-axis to determine the left and right boundaries, yielding a circumscribed rectangle; compute and store the endpoint coordinates of AB and the rectangle's length, width, and area;
    (4) select the next hull edge BC in order and repeat steps (2) to (3) to find the next circumscribed rectangle, until every edge of the hull has been traversed;
    (5) compare the areas of all circumscribed rectangles and take the one with the smallest area as the minimum circumscribed rectangle of the left vein;
    the length and width of the right vein are computed in the same way as the length and width of the left vein;
    the tongue-length ratio of the left vein is computed by dividing the input tongue-body length by the length of the left sublingual vein;
    the tongue-length ratio of the right vein is computed by dividing the input tongue-body length by the length of the right sublingual vein.
  6. A sublingual vein feature extraction method, comprising the following steps:
    constructing a plurality of positive and negative samples from different tongue sample images input through an input unit;
    processing the plurality of positive and negative samples with the opencv_createsamples program of the OpenCV open-source library to generate a training data set;
    training the training data set with the opencv_traincascade program of the OpenCV open-source library to generate a ventral-tongue detector;
    detecting, with the ventral-tongue detector, the ventral tongue surface including the lips from a tongue image to be examined;
    determining position information of the ventral tongue surface from the tongue image to be examined, and cropping the ventral tongue surface including the lips according to the position information;
    performing threshold segmentation on the cropped ventral tongue surface to obtain a shadow region and a tooth region of the ventral tongue surface;
    creating a ventral-tongue segmentation template from the shadow region and the tooth region;
    inputting the tongue image to be examined and the ventral-tongue segmentation template into the grabCut function of the OpenCV open-source library to segment the ventral tongue image;
    classifying the ventral tongue image with the kmeans clustering algorithm of the OpenCV open-source library to obtain a classification-result image;
    segmenting left and right sublingual veins from the classification-result image, and morphologically processing the left and right sublingual veins to obtain left and right sublingual vein templates;
    computing a sublingual vein image from the left and right sublingual vein templates, and extracting sublingual vein features from the sublingual vein image.
  7. The sublingual vein feature extraction method of claim 6, wherein the step of creating the ventral-tongue segmentation template from the shadow region and the tooth region comprises:
    extracting the contour of the shadow region with the Canny edge detection algorithm, extracting all n coordinate points of the contour, and pairing the n points to form n×(n-1)/2 edges;
    for each edge, checking whether the remaining (n-2) points lie on the same side of that edge;
    if all points lie on one side of the edge, adding the edge to a convex-hull set until every edge has been traversed, and taking the convex-hull set as the contour of the shadow region;
    creating a single-channel template image of the same size as the tongue sample image, and mapping the contour of the shadow region to the corresponding position of the single-channel template image;
    setting all pixel values of the non-white area inside the contour to 1, all pixel values of the white area inside the contour to 3, and all pixel values outside the contour to 0, and, according to the tooth region of the ventral tongue surface, setting the pixel values at the corresponding positions of the single-channel template image to 0, to obtain the ventral-tongue segmentation template.
  8. The sublingual vein feature extraction method of claim 6, wherein the step of classifying the ventral tongue image with the kmeans clustering algorithm of the OpenCV open-source library to obtain the classification-result image comprises:
    converting the RGB ventral tongue image to the Lab color space, and classifying the ventral tongue image with the kmeans clustering algorithm according to the pixel values of its a and b color channels in Lab space, to obtain the classification-result image.
  9. The sublingual vein feature extraction method of claim 6, wherein the step of computing the sublingual vein image from the left and right sublingual vein templates comprises:
    AND-ing the left and right sublingual vein templates with each color channel of the ventral tongue image separately, and merging the results to obtain the sublingual vein image.
  10. The sublingual vein feature extraction method of claim 6, wherein the sublingual vein features include the R, G, B color values and H, S, V color values of the sublingual veins, the length and width of the left vein, the length and width of the right vein, the tongue-length ratio of the left vein, and the tongue-length ratio of the right vein, wherein:
    the R color value of the sublingual veins is computed by dividing the sum of all pixel values of the R channel of the sublingual vein image by the number of sublingual vein pixels in the R channel; the G, B, H, S, and V color values are computed in the same way as the R color value;
    the length and width of the left vein are computed as the length and width of the minimum circumscribed rectangle of the left vein in the sublingual vein image, the rectangle being computed as follows:
    (1) extract the contour of the left vein and compute the convex hull of the left vein from the contour;
    (2) randomly select an edge AB of the convex hull as the starting edge, A and B being its two endpoints, and rotate AB about endpoint A by an angle θ so that AB is parallel to the x-axis, whereby all points of the convex hull are rotated about A by θ;
    (3) take AB as the upper or lower boundary of a circumscribed rectangle; find the hull point with the minimum or maximum y value and draw a line through it parallel to the x-axis to determine the opposite boundary; find the leftmost hull point with minimum x and the rightmost hull point with maximum x and draw lines through them perpendicular to the x-axis to determine the left and right boundaries, yielding a circumscribed rectangle; compute and store the endpoint coordinates of AB and the rectangle's length, width, and area;
    (4) select the next hull edge BC in order and repeat steps (2) to (3) to find the next circumscribed rectangle, until every edge of the hull has been traversed;
    (5) compare the areas of all circumscribed rectangles and take the one with the smallest area as the minimum circumscribed rectangle of the left vein;
    the length and width of the right vein are computed in the same way as the length and width of the left vein;
    the tongue-length ratio of the left vein is computed by dividing the input tongue-body length by the length of the left sublingual vein;
    the tongue-length ratio of the right vein is computed by dividing the input tongue-body length by the length of the right sublingual vein.
PCT/CN2019/120645 2018-11-26 2019-11-25 Sublingual vein feature extraction device and method WO2020108437A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811419050.8A CN111222371A (zh) 2018-11-26 2018-11-26 Sublingual vein feature extraction device and method
CN201811419050.8 2018-11-26

Publications (1)

Publication Number Publication Date
WO2020108437A1 true WO2020108437A1 (zh) 2020-06-04

Family

ID=70825673

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/120645 WO2020108437A1 (zh) 2018-11-26 2019-11-25 Sublingual vein feature extraction device and method

Country Status (2)

Country Link
CN (1) CN111222371A (zh)
WO (1) WO2020108437A1 (zh)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329587B (zh) * 2020-10-30 2024-05-24 苏州中科先进技术研究院有限公司 Beverage bottle classification method and apparatus, and electronic device
CN113409304B (zh) * 2021-07-15 2022-05-20 深圳市圆道妙医科技有限公司 Holography-based multi-dimensional tongue image analysis method, system, device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101576913A (zh) * 2009-06-12 2009-11-11 University of Science and Technology of China Automatic tongue-image clustering, visualization, and retrieval system based on a self-organizing map neural network
CN104298983A (zh) * 2013-07-15 2015-01-21 Tsinghua University Tongue-coating image acquisition and analysis system with distributed user terminals
CN104537379A (zh) * 2014-12-26 2015-04-22 Shanghai University High-precision automatic tongue-body segmentation method


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112163452A (zh) * 2020-08-25 2021-01-01 Tongji University Three-dimensional reconstruction method for binocular near-infrared limb vein images based on deep learning
CN112163452B (zh) * 2020-08-25 2022-11-18 Tongji University Three-dimensional reconstruction method for binocular near-infrared limb vein images based on deep learning

Also Published As

Publication number Publication date
CN111222371A (zh) 2020-06-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19889529; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 19889529; Country of ref document: EP; Kind code of ref document: A1