CN112200792A - Tongue picture prick detecting and counting method - Google Patents

Tongue picture prick detecting and counting method

Info

Publication number
CN112200792A
CN112200792A (application CN202011127825.1A)
Authority
CN
China
Prior art keywords
tongue
prick
pricks
area
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011127825.1A
Other languages
Chinese (zh)
Inventor
周鹏
徐雯
杨佳欣
侯攀登
余辉
何峰
明东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202011127825.1A priority Critical patent/CN112200792A/en
Publication of CN112200792A publication Critical patent/CN112200792A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30242Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for detecting and counting pricks in a tongue image. The method crops and selects a region of the captured original tongue image, determines a mask image from the color information and position information of the original image, and crops out the tongue body segmentation region; resizes the segmentation region, extracts an RGB tongue image according to the RGB color components, and uses the B component as the grayscale map; extracts prick candidates from the color difference between a prick and its surroundings; and filters out false pricks by prick circularity and prick area, finally counting the remaining connected domains. The invention effectively distinguishes pricks from cracks and other confounding factors, so that prick localization and counting are more accurate, which aids early warning of prick-related diseases.

Description

Tongue picture prick detecting and counting method
Technical Field
The invention relates to the technical field of image processing, in particular to a method for detecting and counting pricks in tongue images.
Background
Tongue diagnosis is an important component of the four diagnostic methods of traditional Chinese medicine (TCM). Modern research shows that the tongue is related to various disease factors, such as systemic nutritional status and acute inflammatory stimulation, and therefore has diagnostic value. For thousands of years, however, TCM has relied on visual observation of the tongue, which is subjective, poorly repeatable, and lacks a unified standard for syndrome-differentiation-based treatment. In tongue diagnosis, points are red or purple-red dots protruding from the tongue surface, and pricks are thorn-like protrusions of the tongue papillae; the two are similar and are mostly seen at the tip of the tongue. A pointed or prickly tongue indicates extreme heat in the zang-fu organs or in the blood system. Points and pricks are formed by hyperplasia, proliferation, and congested swelling of the fungiform papillae; in general, the more points and pricks, the more intense the pathogenic heat. Prick detection is therefore an important component of tongue texture analysis.
At present, most prick-detection methods rely on edge detection, exploiting the large difference between a prick and its surrounding environment. Such methods distinguish poorly between true pricks and false pricks caused by noise on the tongue, and place high demands on the shooting light source.
Disclosure of Invention
In view of the above, the present invention provides a method for detecting and counting tongue pricks.
In order to solve the above technical problems, the invention adopts the following technical scheme: a method for detecting and counting tongue pricks, comprising the following steps:
s1: cropping and selecting a region of the captured original tongue image, determining a mask image from the color information and position information of the original tongue image, and cropping out the tongue body segmentation region;
s2: adjusting the size of the tongue body segmentation region, extracting an RGB tongue image according to the RGB color components, and using the B component of the RGB tongue image as the grayscale map;
s3: extracting prick candidates by using the color difference between a prick and its surroundings;
s4: filtering out false pricks according to prick circularity and prick area, and finally counting the connected domains.
In the present invention, preferably, the step S3 includes the steps of:
s31: traversing the RGB tongue image and obtaining the positions of all candidate pixel points according to the prick-pixel judgment condition;
s32: taking each such pixel as a center point and obtaining the first prick set according to the first prick judgment condition;
s33: obtaining the second prick set according to the second prick judgment condition.
In the present invention, preferably, the prick-pixel judgment condition is that the R component of the RGB tongue image lies in the range 130-190, the G component in 70-130, and the B component in 60-120.
In the present invention, preferably, the first prick judgment condition is that the mean gray value of the 9×9 matrix around the pixel exceeds that of the 21×21 matrix by more than 4.5, and the second prick judgment condition is that the mean gray value of the 23×23 matrix exceeds that of the 35×35 matrix by more than 5.
In the present invention, it is preferable that the matrix size in steps S32 and S33 is determined based on the areas of the first prick and the second prick and the distance between the pricks, the first prick set reflects the position distribution of the small pricks, and the second prick set reflects the position distribution of the large pricks.
In the present invention, preferably, the step S4 includes the steps of:
s41: marking points around the first and second prick sets that satisfy the filling-matrix condition as filling points to obtain a filling-point set, wherein the first prick set, the second prick set, and the filling-point set together form the filling area;
s42: mapping the first prick set, the second prick set, and the filling-point set of step S41 onto the original tongue image, marking them white and the remainder black;
s43: detecting all contours with the cvFindContours function and extracting the contours of the connected domains;
s44: excluding, according to the circularity screening strategy, connected domains whose area is less than 50 or greater than 400, or whose circularity is less than 0.5;
s45: counting the remaining connected domains and mapping them back onto the original tongue image, marked as prick areas.
In the present invention, preferably, the circularity screening strategy extracts the number of pixel points enclosed by the contour line and the area of the contour's minimum circumscribed circle, then obtains the circularity value from the area-ratio formula e = S/C, where S is the number of pixel points enclosed by the contour line and C is the area of the contour's minimum circumscribed circle.
In the present invention, preferably, the process of extracting the minimum circumcircle area of the contour specifically includes the following steps:
t1: selecting two points on the contour and constructing a circle whose diameter is the segment between them, so that both points lie on the circle;
t2: judging whether the next contour point Q lies inside or on the circle; if so, repeating this step with the following point; if not, proceeding to step T3;
t3: constructing a new circle through point Q and the point of the current circle farthest from Q, and returning to step T2, until all contour points have been traversed;
t4: calculating the area of the final circle from its diameter.
In the present invention, preferably, the contour extraction of step S43 sets the mode parameter of the cvFindContours function to CV_RETR_EXTERNAL and the method parameter to CV_CHAIN_APPROX_NONE, and extracts the contours of all pricks, where CV_RETR_EXTERNAL extracts only the outer contours, ignoring holes inside a contour, and CV_CHAIN_APPROX_NONE stores every contour point rather than a compressed chain code.
In the present invention, preferably, the step S1 includes the steps of:
s11: carrying out down-sampling and Gaussian filtering processing on the original tongue picture image;
s12: marking a mask image by using the color information and the position information of the original tongue picture image;
s13: and performing iterative learning on the mask image by establishing a training model to obtain an iteratively-learned mask image, and cutting the mask image in the original tongue image to obtain a tongue body segmentation area.
The invention has the following advantages and positive effects. The design idea is: after the tongue body is segmented, each tongue pixel is taken as a center point, and prick candidates are extracted from the difference between the mean gray values of two matrices of different sizes around that center; the shape features of pricks are then used to remove false pricks, and finally all pricks are obtained and counted. The algorithm is simple, is implemented with C++ and the OpenCV library, is highly portable, and can be called from various platforms. It effectively distinguishes pricks from cracks and other confounding factors, so that prick localization and counting are more accurate, which greatly aids early warning of prick-related diseases. The algorithm is little affected by the environment and has strong stability and repeatability.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of an original tongue image after being cropped and selected according to a tongue prick detection and counting method of the present invention;
FIG. 2 is a schematic diagram of an H channel obtained by using color information and position information of an original tongue image according to a tongue prick detecting and counting method of the present invention;
FIG. 3 is a mask image obtained after iterative learning of a tongue prick detection and counting method according to the present invention;
FIG. 4 is the grayscale map of the tongue segmentation area after resizing and B-component extraction, according to the tongue prick detection and counting method of the present invention;
FIG. 5 is a schematic diagram of a filling area obtained by performing prick extraction according to the tongue prick detecting and counting method of the present invention;
FIG. 6 is a schematic diagram of a tongue prick detection and counting method according to the present invention after performing a circularity screening strategy;
FIG. 7 is a flowchart illustrating the step S1 of the method for detecting and counting the number of tongue pricks according to the present invention;
FIG. 8 is a flowchart illustrating steps S2, S3, and S4 of the method for detecting and counting tongue pricks according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present. When a component is referred to as being "disposed on" another component, it can be directly on the other component or intervening components may also be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
As shown in fig. 1 to 8, the present invention provides a method for detecting and counting the tongue pricks, comprising the following steps:
s1: cropping and selecting a region of the captured original tongue image, determining a mask image from the color information and position information of the original tongue image, and cropping out the tongue body segmentation region;
s2: adjusting the size of the tongue body segmentation region: specifically, keeping the aspect ratio unchanged, the image width is adjusted to 600; an RGB tongue image is then extracted according to the RGB color components, and its B component is used as the grayscale map, because among the RGB, HSV, and Lab color spaces the pricks are most conspicuous in the B component of RGB;
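The resize-and-extract step above can be sketched as follows. This is a minimal pure-Python illustration (the patent's own implementation uses C++ and OpenCV); the function names and the nested-list image representation are illustrative assumptions, not the patent's code.

```python
def resize_width_600(img):
    """Nearest-neighbour resize of an RGB image (img[y][x] = (r, g, b))
    to width 600, keeping the aspect ratio unchanged."""
    h, w = len(img), len(img[0])
    new_w = 600
    new_h = max(1, round(h * new_w / w))
    return [[img[y * h // new_h][x * w // new_w] for x in range(new_w)]
            for y in range(new_h)]

def b_channel_gray(img):
    """Use the B component of the RGB image as the grayscale map
    (index 2 assumes RGB channel order, as in the description)."""
    return [[px[2] for px in row] for row in img]
```

A 900-pixel-wide image, for instance, comes out 600 wide with height scaled by the same 2/3 factor.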
s3: extracting prick candidates by using the color difference between a prick and its surroundings;
s4: since a prick is generally nearly circular, it can be distinguished from cracks by shape; false pricks are filtered out according to prick circularity and prick area, and the remaining connected domains are counted.
In this embodiment, further, the step S3 includes the following steps:
s31: traversing the RGB tongue image and obtaining the positions of all candidate pixel points according to the prick-pixel judgment condition;
s32: taking each such pixel as a center point and obtaining the first prick set according to the first prick judgment condition;
s33: obtaining the second prick set according to the second prick judgment condition.
In this embodiment, further, the prick-pixel judgment condition is that the R component of the RGB tongue image lies in the range 130-190, the G component in 70-130, and the B component in 60-120. Here, traversing the tongue image means examining only pixels whose row and column numbers are multiples of 3; since a prick is larger than 3×3, this does not affect the final judgment, while greatly speeding up subsequent computation.
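The stride-3 traversal with the RGB range test can be sketched as follows (a pure-Python illustration; the patent implements this in C++/OpenCV, and the list-of-tuples image layout here is an assumption):

```python
def candidate_prick_pixels(rgb):
    """Scan only pixels whose row and column are multiples of 3 (a prick
    is larger than 3x3, so none is missed) and keep those whose components
    fall in the stated ranges: R 130-190, G 70-130, B 60-120.
    rgb[y][x] = (r, g, b); returns (row, col) coordinates."""
    hits = []
    for y in range(0, len(rgb), 3):
        for x in range(0, len(rgb[0]), 3):
            r, g, b = rgb[y][x]
            if 130 <= r <= 190 and 70 <= g <= 130 and 60 <= b <= 120:
                hits.append((y, x))
    return hits
```

The stride cuts the number of examined pixels by roughly a factor of nine, which is the speed-up the text refers to.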
In this embodiment, the first prick judgment condition is that the mean gray value of the 9×9 matrix around the pixel exceeds that of the 21×21 matrix by more than 4.5, and the second prick judgment condition is that the mean gray value of the 23×23 matrix exceeds that of the 35×35 matrix by more than 5.
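The two neighbourhood-mean judgments might look like this in a pure-Python sketch (function names are illustrative; a production implementation would likely use integral images or OpenCV box filters for speed):

```python
def neighborhood_mean(gray, y, x, k):
    """Mean gray value of the k x k window centred on (y, x), clipped at
    the image border (gray is a nested list of numbers)."""
    h = k // 2
    vals = [gray[r][c]
            for r in range(max(0, y - h), min(len(gray), y + h + 1))
            for c in range(max(0, x - h), min(len(gray[0]), x + h + 1))]
    return sum(vals) / len(vals)

def is_small_prick(gray, y, x):
    """First judgment condition: mean(9x9) - mean(21x21) > 4.5."""
    return neighborhood_mean(gray, y, x, 9) - neighborhood_mean(gray, y, x, 21) > 4.5

def is_large_prick(gray, y, x):
    """Second judgment condition: mean(23x23) - mean(35x35) > 5."""
    return neighborhood_mean(gray, y, x, 23) - neighborhood_mean(gray, y, x, 35) > 5
```

A small bright spot raises the 9×9 mean much more than the 21×21 mean, so the difference spikes at prick centres while staying near zero on uniform tongue surface.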
In the present embodiment, further, the matrix size in steps S32 and S33 is determined based on the areas of the first prick and the second prick and the distance between the pricks, the first prick set reflects the position distribution of the small pricks, and the second prick set reflects the position distribution of the large pricks.
In this embodiment, further, the step S4 includes the following steps:
s41: marking points around the first and second prick sets that satisfy the filling-matrix condition as filling points to obtain a filling-point set, wherein the first prick set, the second prick set, and the filling-point set together form the filling area; this step further improves the computation speed;
s42: mapping the first prick set, the second prick set, and the filling-point set of step S41 onto the original tongue image, marking them white and the remainder black;
s43: detecting all contours with the cvFindContours function and extracting the contours of the connected domains;
s44: excluding, according to the circularity screening strategy, connected domains whose area is less than 50 or greater than 400, or whose circularity is less than 0.5;
s45: counting the remaining connected domains: the edge points of each qualifying connected domain are placed into a data store, whose size finally gives the number of pricks; the remaining domains are then mapped back onto the original tongue image and marked as prick areas.
In this embodiment, further, the circularity screening strategy extracts the number of pixel points enclosed by the contour line and the area of the contour's minimum circumscribed circle, then obtains the circularity value from the area-ratio formula e = S/C, where S is the number of pixel points enclosed by the contour line and C is the area of the contour's minimum circumscribed circle.
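The screening of step S44 can be sketched as follows (function names are illustrative; the area and circularity cut-offs are the ones stated above):

```python
def circularity(num_pixels, circumcircle_area):
    """e = S / C: pixels enclosed by the contour over the area of its
    minimum circumscribed circle (close to 1.0 for a disc-like region)."""
    return num_pixels / circumcircle_area

def keep_as_prick(num_pixels, circumcircle_area):
    """Screening of step S44: reject connected domains whose area is
    below 50 or above 400, or whose circularity is below 0.5."""
    if num_pixels < 50 or num_pixels > 400:
        return False
    return circularity(num_pixels, circumcircle_area) >= 0.5
```

An elongated crack fills only a small fraction of its circumscribed circle, so its ratio falls well below 0.5 and it is rejected even when its area is in range.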
In this embodiment, further, the process of extracting the minimum circumcircle area of the contour specifically includes the following steps:
t1: selecting two points on the contour and constructing a circle whose diameter is the segment between them, so that both points lie on the circle;
t2: judging whether the next contour point Q lies inside or on the circle; if so, repeating this step with the following point; if not, proceeding to step T3;
t3: constructing a new circle through point Q and the point of the current circle farthest from Q, and returning to step T2, until all contour points have been traversed;
t4: calculating the area of the final circle from its diameter.
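Steps T1-T4 describe an incremental, Ritter-style bounding circle rather than an exact minimum enclosing circle. A pure-Python sketch of exactly that procedure (the function name is illustrative):

```python
import math

def bounding_circle_area(points):
    """Incremental circle per steps T1-T4: start with the circle whose
    diameter joins the first two contour points; whenever a point Q falls
    outside, grow the circle so its new diameter joins Q and the point of
    the old circle farthest from Q."""
    (x0, y0), (x1, y1) = points[0], points[1]
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2          # T1: diameter circle
    r = math.dist((x0, y0), (x1, y1)) / 2
    for qx, qy in points[2:]:                       # T2: test each next point
        d = math.dist((qx, qy), (cx, cy))
        if d <= r:                                  # inside or on the circle
            continue
        r_new = (d + r) / 2                         # T3: Q and farthest point
        cx = qx + r_new * (cx - qx) / d             # new centre on the segment
        cy = qy + r_new * (cy - qy) / d
        r = r_new
    return math.pi * r * r                          # T4: area of final circle
```

The result always encloses all points seen so far, which is sufficient for the S/C circularity ratio, though it can slightly overestimate the true minimum circle.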
In this embodiment, further, the contour extraction of step S43 sets the mode parameter of the cvFindContours function to CV_RETR_EXTERNAL and the method parameter to CV_CHAIN_APPROX_NONE, and extracts the contours of all pricks, where CV_RETR_EXTERNAL extracts only the outer contours, ignoring holes inside a contour, and CV_CHAIN_APPROX_NONE stores every contour point rather than a compressed chain code.
In this embodiment, further, the step S1 includes the following steps:
s11: down-sampling and Gaussian filtering the original tongue image, with a Gaussian kernel of size 3×3;
s12: marking a mask image by using the color information and the position information of the original tongue picture image;
s13: and performing iterative learning on the mask image by establishing a training model to obtain an iteratively-learned mask image, and cutting the mask image in the original tongue image to obtain a tongue body segmentation area.
In step S12, using the grabCut function, the image is divided into four parts: GC_BGD (=0, background), GC_FGD (=1, foreground), GC_PR_BGD (=2, probable background), and GC_PR_FGD (=3, probable foreground). The invention marks the mask image according to the color information and position information of the image, specifically as follows:
s121: according to the color characteristics and position of teeth, pixels in the HSV color space with H>35, S<0.3, and V>150, whose height is less than 1/3 of the image height, are taken as the tooth area and set to GC_PR_BGD.
s122: according to the color characteristics and position of the dark oral region, pixels with V<75 in the HSV color space, or whose minimum among the R, G, and B components is less than 45 in the RGB color space, and whose height is less than 1/4 of the image height, are taken as the dark oral area and set to GC_PR_BGD.
S123: an adaptive threshold is determined for the H component based on the color characteristics of the face. For image V<75 or S<In the oral black region of 0.3, H is set to 50 or more, and the subsequent histogram rendering is not performed. Calculating a color histogram of H-component pixel values in the range of 0 to 50, setting a certain pixel value x, the pixel value having a number p (x), calculating a pixel value which can maximally distinguish a face from a tongue according to the following formula, wherein x is traversed from 0 to 50, and sigma is set2The maximum x is the adaptive threshold for the face and tongue.
Figure BDA0002733659950000091
The face region, having H greater than the adaptive threshold, or V<45, or S<0.12, is set to GC_PR_BGD.
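Assuming the criterion is the standard between-class variance (the Otsu criterion; the source gives the formula only as an image, so this is a reading of the surrounding text), the threshold search over H values 0 to 50 can be sketched as:

```python
def adaptive_h_threshold(hist):
    """Given hist[x] = number of pixels with H value x (x = 0..50),
    return the x maximizing the between-class variance
        sigma^2(x) = w0 * w1 * (mu0 - mu1)^2
    where class 0 holds values <= x and class 1 holds values > x."""
    total = sum(hist)                 # assumed non-zero
    weighted_total = sum(i * h for i, h in enumerate(hist))
    best_x, best_var = 0, -1.0
    w0 = mu0_sum = 0.0
    for x, h in enumerate(hist):
        w0 += h
        mu0_sum += x * h
        w1 = total - w0
        if w0 == 0 or w1 == 0:       # one class empty: variance undefined
            continue
        mu0 = mu0_sum / w0
        mu1 = (weighted_total - mu0_sum) / w1
        var = (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2
        if var > best_var:
            best_x, best_var = x, var
    return best_x
```

For a cleanly bimodal histogram the returned threshold lands between the two modes, separating the face and tongue H values.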
S124: the picture outer frame, i.e., the region having a height less than the picture total height 1/6, greater than the picture total height 5/6, and a width less than the picture total width 1/5, and greater than the picture total width 4/5, is set as GC _ PR _ BGD. Except that the picture region is set to GC _ PR _ FGD.
S125: the region inside the picture, i.e., having a height greater than the picture total height 2/5, less than the picture total height 3/5, and a width greater than the picture total width 2/5, less than the picture total width 3/5, is set to GC _ FGD. An area where the picture is significantly blackish, that is, an area where the minimum value of the RGB three components is less than 10 or the V component is less than 30 is set as GC _ BGD.
The embodiments of the present invention have been described in detail above, but the description covers only preferred embodiments and should not be construed as limiting the scope of the invention. All equivalent changes and modifications made within the scope of the present invention shall be covered by this patent.

Claims (10)

1. A method for detecting and counting tongue pricks, characterized by comprising the following steps:
s1: cropping and selecting a region of the captured original tongue image, determining a mask image from the color information and position information of the original tongue image, and cropping out the tongue body segmentation region;
s2: adjusting the size of the tongue body segmentation region, extracting an RGB tongue image according to the RGB color components, and using the B component of the RGB tongue image as the grayscale map;
s3: extracting prick candidates by using the color difference between a prick and its surroundings;
s4: filtering out false pricks according to prick circularity and prick area, and finally counting the connected domains.
2. The method for detecting and counting tongue pricks according to claim 1, wherein said step S3 comprises the steps of:
s31: traversing the RGB tongue image and obtaining the positions of all candidate pixel points according to the prick-pixel judgment condition;
s32: taking each such pixel as a center point and obtaining the first prick set according to the first prick judgment condition;
s33: obtaining the second prick set according to the second prick judgment condition.
3. The method as claimed in claim 2, wherein the prick-pixel judgment condition is that the R component of the RGB tongue image lies in the range 130-190, the G component in 70-130, and the B component in 60-120.
4. The method as claimed in claim 2, wherein the first prick judgment condition is that the mean gray value of the 9×9 matrix around the pixel exceeds that of the 21×21 matrix by more than 4.5, and the second prick judgment condition is that the mean gray value of the 23×23 matrix exceeds that of the 35×35 matrix by more than 5.
5. The method for detecting and counting tongue pricks according to claim 1, wherein said step S4 comprises the steps of:
s41: marking points around the first and second prick sets that satisfy the filling-matrix condition as filling points to obtain a filling-point set, wherein the first prick set, the second prick set, and the filling-point set together form the filling area;
s42: mapping the first prick set, the second prick set, and the filling-point set of step S41 onto the original tongue image, marking them white and the remainder black;
s43: detecting all contours with the cvFindContours function and extracting the contours of the connected domains;
s44: excluding, according to the circularity screening strategy, connected domains whose area is less than 50 or greater than 400, or whose circularity is less than 0.5;
s45: counting the remaining connected domains and mapping them back onto the original tongue image, marked as prick areas.
6. The method as claimed in claim 5, wherein the circularity screening strategy extracts the number of pixel points enclosed by the contour line and the area of the contour's minimum circumscribed circle, then obtains the circularity value from the area-ratio formula e = S/C, where S is the number of pixel points enclosed by the contour line and C is the area of the contour's minimum circumscribed circle.
7. The method for detecting and counting the tongue pricks according to claim 6, wherein the process of extracting the minimum circumcircle area of the outline specifically comprises the following steps:
t1: selecting two points on the contour and constructing a circle whose diameter is the segment between them, so that both points lie on the circle;
t2: judging whether the next contour point Q lies inside or on the circle; if so, repeating this step with the following point; if not, proceeding to step T3;
t3: constructing a new circle through point Q and the point of the current circle farthest from Q, and returning to step T2, until all contour points have been traversed;
t4: calculating the area of the final circle from its diameter.
8. The method as claimed in claim 5, wherein the contour extraction of step S43 sets the mode parameter of the cvFindContours function to CV_RETR_EXTERNAL and the method parameter to CV_CHAIN_APPROX_NONE, and extracts the contours of all connected domains, wherein CV_RETR_EXTERNAL extracts only the outer contours, ignoring holes inside a contour, and CV_CHAIN_APPROX_NONE stores every contour point rather than a compressed chain code.
9. The method for detecting and counting tongue pricks according to claim 1, wherein said step S1 comprises the steps of:
s11: carrying out down-sampling and Gaussian filtering processing on the original tongue picture image;
s12: marking a mask image by using the color information and the position information of the original tongue picture image;
s13: and performing iterative learning on the mask image by establishing a training model to obtain an iteratively-learned mask image, and cutting the mask image in the original tongue image to obtain a tongue body segmentation area.
10. The method for detecting and counting tongue pricks according to claim 9, wherein step S12 specifically determines the adaptive H-component threshold separating face and tongue in the HSV color space.
CN202011127825.1A 2020-10-20 2020-10-20 Tongue picture prick detecting and counting method Pending CN112200792A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011127825.1A CN112200792A (en) 2020-10-20 2020-10-20 Tongue picture prick detecting and counting method


Publications (1)

Publication Number Publication Date
CN112200792A true CN112200792A (en) 2021-01-08

Family

ID=74009626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011127825.1A Pending CN112200792A (en) 2020-10-20 2020-10-20 Tongue picture prick detecting and counting method

Country Status (1)

Country Link
CN (1) CN112200792A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839238A * 2014-02-28 2014-06-04 Xidian University SAR image super-resolution method based on marginal information and deconvolution
CN110495909A * 2019-09-20 2019-11-26 Zhangjiagang First People's Hospital Allergen prick-test detection system
CN111325713A * 2020-01-21 2020-06-23 Advanced Institute of Information Technology, Peking University (Zhejiang) Wood defect detection method, system and storage medium based on neural network


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Ji Dunfu et al., "Analysis of Tongue Manifestations in Traditional Chinese Medicine", 31 May 2008 *
Lü Yuanting, "Research on the Objectification of Tongue Diagnosis Based on an Auxiliary Light Source", China Master's Theses Full-text Database, Information Science and Technology *
Wang Xuemin et al., "Research on a Tongue Prick Recognition Method Based on an Auxiliary Light Source", Chinese Journal of Sensors and Actuators *
Xu Jiatuo et al., "Recognition of Prick and Petechia Features in Tongue Image Analysis", Academic Journal of Shanghai University of Traditional Chinese Medicine *
Zhao Fei et al., "Cardiac CT Image Segmentation Based on Convolutional Neural Networks and Image Saliency", Beijing Biomedical Engineering *

Similar Documents

Publication Publication Date Title
CN111292338B (en) Method and system for segmenting choroidal neovascularization from fundus OCT image
CN106651888B (en) Colour eye fundus image optic cup dividing method based on multi-feature fusion
CN109472781B (en) Diabetic retinopathy detection system based on serial structure segmentation
CN110276356A (en) Eye fundus image aneurysms recognition methods based on R-CNN
EP2188779B1 (en) Extraction method of tongue region using graph-based approach and geometric properties
CN104794721B (en) A kind of quick optic disk localization method based on multiple dimensioned spot detection
CN106683080B (en) A kind of retinal fundus images preprocess method
CN107346545A (en) Improved confinement growing method for the segmentation of optic cup image
CN111507932B (en) High-specificity diabetic retinopathy characteristic detection method and storage device
CN106846293B (en) Image processing method and device
CN106558031A (en) A kind of image enchancing method of the colored optical fundus figure based on imaging model
CN107545550A (en) Cell image color cast correction
CN109872337B (en) Eye fundus image optic disc segmentation method based on rapid mean shift
CN106530316B (en) The optic disk dividing method of comprehensive eye fundus image marginal information and luminance information
CN106529420B (en) The optic disk center positioning method of comprehensive eye fundus image marginal information and luminance information
Kumar et al. Automatic optic disc segmentation using maximum intensity variation
CN109829931A (en) A kind of Segmentation Method of Retinal Blood Vessels increasing PCNN based on region
CN112200792A (en) Tongue picture prick detecting and counting method
CN111292285B (en) Automatic screening method for diabetes mellitus based on naive Bayes and support vector machine
CN117197064A (en) Automatic non-contact eye red degree analysis method
CN116012594A (en) Fundus image feature extraction method, fundus image feature extraction device and diagnosis system
CN114359104B (en) Cataract fundus image enhancement method based on hierarchical generation
CN113012184B (en) Micro-hemangioma detection method based on Radon transformation and multi-type image joint analysis
CN113362346B (en) Video disc and video cup segmentation method based on machine learning double-region contour evolution model
CN112927242B (en) Fast optic disc positioning method based on region positioning and group intelligent search algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2021-01-08