WO2020029915A1 - Artificial intelligence-based traditional Chinese medicine tongue image segmentation device, method and storage medium - Google Patents

Artificial intelligence-based traditional Chinese medicine tongue image segmentation device, method and storage medium

Info

Publication number
WO2020029915A1
WO2020029915A1 PCT/CN2019/099242 CN2019099242W WO2020029915A1 WO 2020029915 A1 WO2020029915 A1 WO 2020029915A1 CN 2019099242 W CN2019099242 W CN 2019099242W WO 2020029915 A1 WO2020029915 A1 WO 2020029915A1
Authority
WO
WIPO (PCT)
Prior art keywords
tongue
image
feature
superpixel
region
Prior art date
Application number
PCT/CN2019/099242
Other languages
English (en)
French (fr)
Inventor
张贯京
葛新科
高伟明
吕超
王海荣
Original Assignee
深圳市前海安测信息技术有限公司
深圳市易特科信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市前海安测信息技术有限公司 and 深圳市易特科信息技术有限公司
Publication of WO2020029915A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059 - Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/48 - Other medical applications
    • A61B 5/4854 - Diagnosis based on concepts of traditional oriental medicine
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques

Definitions

  • The present invention relates to the technical field of traditional Chinese medicine tongue image processing, and in particular to an artificial intelligence-based traditional Chinese medicine tongue image segmentation device and method, and a computer storage medium.
  • As one of the four diagnostic methods of traditional Chinese medicine, "inspection" takes the tongue image as its main object of study.
  • Information such as the size, shape, color, cracks and coating texture of the tongue, and the presence and location of tooth marks, all reflect the patient's state of health.
  • However, TCM diagnosis relies mainly on the experience of practicing TCM physicians, and the lack of effective quantitative standards has greatly hindered the modernization of traditional Chinese medicine.
  • As one of its important diagnostic methods, tongue diagnosis suffers from the same shortcoming.
  • An automated tongue diagnosis system based on image analysis, pattern recognition and artificial intelligence is one way to remedy this shortcoming; the purpose of such a system is to establish the relationship between tongue image features and tongue image types so as to automate tongue diagnosis.
  • Current tongue image segmentation techniques are based mainly on color segmentation or on the Snakes algorithm.
  • Color-based segmentation is easily affected by lighting and performs poorly when the tongue and the skin are close in color, and segmentation based on the Snakes algorithm is unsatisfactory when the contrast between the tongue and the surrounding skin is insufficient. Existing tongue image segmentation techniques therefore segment the tongue with low accuracy and efficiency, which compromises the accuracy of TCM tongue diagnosis and limits their practicality in real applications.
  • The main purpose of the present invention is to provide an artificial intelligence-based traditional Chinese medicine tongue image segmentation device, method and computer storage medium, aiming to solve the prior-art problem of low accuracy and efficiency in tongue segmentation.
  • To this end, the present invention provides an artificial intelligence-based traditional Chinese medicine tongue image segmentation device.
  • The traditional Chinese medicine tongue image segmentation device includes an image acquisition device and an output unit.
  • The traditional Chinese medicine tongue image segmentation device further includes a processor adapted to execute computer program instructions and a memory adapted to store a plurality of computer program instructions, the computer program instructions being loaded by the processor to perform the following steps: capturing a tongue-surface image containing the tongue body through the image acquisition device; processing the RGB pixels of the tongue-surface image with the SLIC algorithm to generate N superpixel regions; extracting a feature group for each superpixel region; classifying the feature group of each superpixel region with a pre-trained tongue classifier and identifying tongue regions and non-tongue regions; and removing the non-tongue regions from the tongue-surface image while retaining the tongue region to obtain the tongue image, which is output through the output unit.
  • Further, a tongue feature sample is created in advance and defined as follows: if a superpixel region belongs to the tongue region, the label value corresponding to the feature group of that superpixel region is set to 1; if a superpixel region does not belong to the tongue region, the label value corresponding to the feature group of that superpixel region is set to 0. The adaboost algorithm then trains
  • the feature group of each superpixel region Ki of the tongue feature sample together with its corresponding label value to produce a tongue classifier, which is stored in the memory.
  • Capturing the tongue-surface image containing the tongue body through the image acquisition device includes the steps of: capturing a digital image containing the tongue body from the patient's mouth and analyzing its sharpness and centering parameter; generating a first control signal according to the sharpness of the digital image and a second control signal according to its centering parameter; driving the stepping motor of the image acquisition device according to the first control signal to adjust the relative position of the camera
  • and the tongue body; and driving the lens axis of the image acquisition device according to the second control signal so that it is parallel to the tongue-surface normal and passes through the center of the tongue surface, so as to capture a clear tongue-surface image.
  • The feature group of a superpixel region includes the color feature, position feature and maximum-gradient feature of the superpixel region.
  • The computer program instructions are loaded by the processor to further perform the following steps: computing the average of the three color channels R, G and B over all pixels in each superpixel region as the color feature of that region, denoted R', G', B'; comparing all gradient magnitudes M of each superpixel region and selecting the largest gradient as the maximum-gradient feature of each superpixel region, denoted M'; and accumulating the coordinates (x, y) of all pixels in each superpixel region and dividing the accumulated sums by the width and the length of the face, respectively, as the position features (x', y') of each superpixel region.
  • The color feature, position feature and maximum-gradient feature of each superpixel region are combined into a six-dimensional feature group (R', G', B', M', x', y') as the feature group of that superpixel region.
  • In another aspect, the present invention also provides an artificial intelligence-based tongue image segmentation method for a traditional Chinese medicine tongue image segmentation device.
  • The traditional Chinese medicine tongue image segmentation device includes an image acquisition device and an output unit.
  • The method includes the following steps: capturing a tongue-surface image containing the tongue body through the image acquisition device; processing the RGB pixels of the tongue-surface image with the SLIC algorithm to generate N superpixel regions; extracting a feature group for each superpixel region; classifying the feature group of each superpixel region with a pre-trained tongue classifier and identifying tongue regions and non-tongue regions; and removing the non-tongue regions from the tongue-surface image while retaining the tongue region to obtain
  • the tongue image, which is output through the output unit.
  • Further, the artificial intelligence-based method for segmenting a traditional Chinese medicine tongue image includes the following steps: a tongue feature sample is created in advance and defined as follows: if a superpixel region belongs to the tongue region, the label value corresponding to the feature group of that superpixel region is set to 1; if a superpixel region does not belong to the tongue region, the label value corresponding to the feature group of that superpixel region is set to 0; the adaboost algorithm then trains the feature group of each superpixel region Ki of the tongue feature sample together with its corresponding label value to produce a tongue classifier, which is stored in a memory.
  • The rules for identifying tongue regions and non-tongue regions are as follows: if the label value of a superpixel region is 1, that superpixel region is identified as a tongue region; if the label value of a superpixel region is 0, that superpixel region is identified as a non-tongue region.
  • Capturing the tongue-surface image containing the tongue body through the image acquisition device includes the steps of: capturing a digital image containing the tongue body from the patient's mouth and analyzing its sharpness and centering parameter; generating a first control signal according to the sharpness of the digital image and a second control signal according to its centering parameter; driving the stepping motor of the image acquisition device according to the first control signal to adjust the relative position of the camera
  • and the tongue body; and driving the lens axis of the image acquisition device according to the second control signal so that it is parallel to the tongue-surface normal and passes through the center of the tongue surface, so as to capture a clear tongue-surface image.
  • The feature group of a superpixel region includes the color feature, position feature and maximum-gradient feature of the superpixel region.
  • The step of extracting the feature group of each superpixel region includes the following steps: computing the average of the three color channels R, G and B over all pixels in each superpixel
  • region as the color feature of that region, denoted R', G', B'; comparing all gradient magnitudes M of each superpixel region and selecting
  • the largest gradient as the maximum-gradient feature of each superpixel region, denoted M'; accumulating the coordinates (x, y) of all pixels in each superpixel region and dividing the accumulated sums by the
  • width and the length of the face, respectively, as the position features (x', y') of each superpixel region; and combining the color feature, position feature and maximum-gradient feature of each superpixel region into a six-dimensional feature group (R', G', B', M', x', y').
  • In a further aspect, the present invention provides a computer-readable storage medium that stores a plurality of computer program instructions, wherein the computer program instructions are loaded by a processor of a computer device to execute the artificial intelligence-based traditional Chinese medicine tongue image segmentation method.
  • Because a tongue-surface image contains too many pixels, segmenting it with the existing Snakes algorithm is time-consuming and not very accurate. The present invention processes all the elemental pixels (i.e. the original pixels) of the tongue-surface image into a series of superpixels with the SLIC algorithm and replaces the elemental pixels with the RGB mean of each superpixel. Because there are far fewer superpixels than elemental pixels, and because a tongue classifier trained with the adaboost algorithm is then used to segment the tongue region from the non-tongue region, the invention reduces the amount of computation on tongue-surface image data, speeds up tongue segmentation, and effectively improves segmentation accuracy.
  • It therefore has broad application prospects in the field of traditional Chinese medicine tongue diagnosis.
  • FIG. 1 is a block diagram of a preferred embodiment of a traditional Chinese medicine tongue image segmentation device based on artificial intelligence according to the present invention
  • FIG. 2 is a flowchart of a preferred embodiment of a traditional Chinese medicine tongue image segmentation method based on artificial intelligence according to the present invention
  • FIG. 3 is a schematic diagram of processing RGB pixel points of a tongue image into a super pixel region according to the present invention.
  • FIG. 1 is a schematic block diagram of a preferred embodiment of the artificial intelligence-based traditional Chinese medicine tongue image segmentation device of the present invention.
  • The TCM tongue image segmentation device 1 is installed with an artificial intelligence-based TCM tongue image segmentation system 10.
  • The TCM tongue image segmentation device 1 may be a personal computer, mainframe computer, workstation, server, cloud platform server or other computing device with data-processing and image-processing capabilities.
  • The TCM tongue image segmentation device 1 includes, but is not limited to, the artificial intelligence-based TCM tongue image segmentation system 10, an image acquisition device 11, a memory 12 adapted to store a plurality of computer program instructions, a processor 13 for executing the computer program instructions, and an output unit 14.
  • The image acquisition device 11 is a high-definition camera device including at least a stepping motor and a lens, such as a high-definition video camera, for capturing a tongue-surface image containing the tongue body from the patient's tongue.
  • The memory 12 may be a read-only memory (ROM), a random access memory (RAM), an electrically erasable memory (EEPROM), a flash memory (FLASH), a magnetic disk or an optical disk.
  • The processor 13 is a central processing unit (CPU), a microcontroller (MCU), a data-processing chip, or an information-processing unit with data-processing capability.
  • The output unit 14 may be a display screen for displaying the tongue image, or a printer for printing the tongue image.
  • The artificial intelligence-based TCM tongue image segmentation system 10 consists of program modules composed of computer program instructions, including, but not limited to, a classification model creation module 101, a tongue image acquisition module 102, a tongue feature extraction module 103 and a tongue segmentation module 104.
  • A module in the sense of the present invention is a series of computer program instruction segments that can be executed by the processor 13 of the TCM tongue image segmentation device 1 and perform a fixed function, and that are stored in the memory 12 of the TCM tongue image segmentation device 1. The specific functions of each module are described in detail below with reference to FIG. 2.
  • FIG. 2 is a flowchart of a preferred embodiment of the artificial intelligence-based traditional Chinese medicine tongue image segmentation method of the present invention.
  • The method steps of the artificial intelligence-based traditional Chinese medicine tongue image segmentation method are implemented by a computer software program, which is stored in the form of computer program instructions in a computer-readable storage medium (for example, the memory 12).
  • The computer-readable storage medium may include a read-only memory, a random access memory, a magnetic disk or an optical disk, and the computer program instructions can be loaded by a processor (for example, the processor 13) to perform the following steps S21 to S25.
  • In step S21, a tongue-surface image containing the tongue body is captured by the image acquisition device.
  • The tongue image acquisition module 102 captures a digital image containing the tongue body from the patient's mouth through the image acquisition device 11 and, based on the digital image, controls
  • the image acquisition device 11 to capture a clear tongue-surface image.
  • The tongue image acquisition module 102 analyzes the sharpness and the centering parameter of the digital image, generates a first control signal according to the sharpness and a second control signal according to the centering parameter, and drives the stepping motor of the image acquisition device 11 according to the first control signal to adjust the relative position of the camera and the tongue body.
  • The lens axis of the image acquisition device 11 is driven according to the second control signal so that it is parallel to the tongue-surface normal and passes through the center of the tongue surface.
  • A clear tongue-surface image is then re-captured, so that the image acquisition device 11 is controlled to adapt to the patient's tongue-extending motion and to capture, from the patient's tongue, a tongue-surface image containing the tongue body.
  • In step S22, the RGB pixels of the tongue-surface image are processed by the SLIC algorithm to generate N superpixel regions Ki.
  • The tongue feature extraction module 103 processes the RGB pixels of the tongue-surface image with the SLIC algorithm to obtain a tongue-surface image composed of N superpixel regions Ki.
  • The tongue-surface image consists of a series of pixels, each made up of the three color channels R, G and B.
  • The tongue feature extraction module 103 applies simple linear iterative clustering (SLIC) to the tongue-surface image to generate N superpixel regions Ki, where i = 1, ..., N and each superpixel region Ki is a regular or irregular region made up of several pixels with similar features.
  • The tongue feature extraction module 103 uses the SLIC algorithm to convert the tongue-surface image from the RGB color space to the CIE-Lab color space; the (L, a, b) color values and (x, y)
  • coordinates of each RGB pixel form a 5-dimensional vector V[l, a, b, x, y].
  • The similarity of two pixels can be measured by their vector distance: the greater the vector distance, the smaller the similarity between the two pixels.
  • The SLIC algorithm first generates N seed points, then searches the space around each seed point for the pixels closest to it and assigns them to the same class as that seed point, until all pixels have been classified; the cluster centers are then recomputed from the mean vectors of the superpixels and the assignment is iterated until convergence.
  • In step S23, the feature group of each superpixel region Ki is extracted, including the color feature, position feature and maximum-gradient feature.
  • The tongue feature extraction module 103 extracts the feature group of each superpixel region Ki.
  • The features include the color feature, position feature and maximum-gradient feature of each superpixel region Ki.
  • The tongue feature extraction module 103 computes the average of the three color channels R, G and B over all pixels in each superpixel region Ki as the color feature of that region, denoted R', G', B'; it compares all gradient magnitudes M of each superpixel region Ki and selects the largest as the maximum-gradient feature, denoted M'; it accumulates the coordinates (x, y) of all pixels in each superpixel region Ki and divides the accumulated sums by the width and the length of the face, respectively, as the position features (x', y') of each superpixel region Ki; finally, it combines the color feature, position feature and maximum-gradient feature of each superpixel region Ki into a six-dimensional feature group (R', G', B', M', x', y').
  • In step S24, the feature group of each superpixel region Ki is classified by the pre-trained tongue classifier to identify tongue regions and non-tongue regions.
  • The tongue segmentation module 104 feeds the feature group (R', G', B', M', x', y') of each superpixel region Ki into the trained tongue classifier, and the tongue classifier classifies these feature groups to identify the tongue regions and the non-tongue regions.
  • Specifically, the tongue segmentation module 104 inputs the feature group (R', G', B', M', x', y') of each superpixel region Ki of the tongue-surface image, uses the tongue classifier to assign a label value to each superpixel region Ki from its feature group, and then, according to the label value corresponding to each superpixel region Ki,
  • identifies the tongue regions and non-tongue regions in the tongue-surface image.
  • A tongue classifier (a tongue classification model) is trained in advance and stored in the memory 12.
  • If the label value corresponding to a superpixel region Ki is 1, the tongue segmentation module 104 identifies that superpixel region Ki as a tongue region; if the label value corresponding to a superpixel region Ki is 0, the tongue segmentation module 104 identifies that superpixel region Ki as a non-tongue region.
  • In step S25, the non-tongue regions are removed from the tongue-surface image and the tongue region is retained to obtain the tongue image, which is output through the output unit. In this embodiment, the tongue segmentation module 104 removes all the non-tongue regions from the tongue-surface image, leaving the tongue region, so that a tongue image containing only the patient's tongue is obtained.
  • The tongue segmentation module 104 displays the segmented tongue image on the display screen of the output unit 14, prints it with a printer, or sends it over a network to the doctor's terminal, so that the doctor can use the patient's
  • tongue image to assess the size, shape, color, cracks and coating texture of the tongue and the presence and location of tooth marks, helping the doctor perform a TCM tongue diagnosis and assess the patient's state of health.
  • Before tongue segmentation is performed, a tongue classifier needs to be trained in advance through machine learning and training and stored in the memory 12.
  • Before step S21, the following specific steps are further included:
  • Step 1: a tongue feature sample is created in advance and its features are defined.
  • The classification model creation module 101 creates a tongue feature sample in advance. Specifically, a tongue-surface image is obtained for training, and according to steps S22 and S23 the feature groups (R', G', B', M', x', y') of all superpixel regions Ki of that tongue-surface image are obtained as the tongue feature sample, which is defined as follows: if a superpixel region Ki belongs to the tongue region, the label value corresponding to the feature group of that superpixel region Ki is set to 1; if a superpixel region Ki does not belong to the tongue region, the label value corresponding to the feature group of that superpixel region Ki is set to 0.
  • Step 2: the tongue feature sample is trained with the adaboost algorithm to produce a tongue classifier, and the tongue classifier is stored in the memory. Specifically, the classification model creation module 101 trains the tongue feature sample with the adaboost algorithm to produce a tongue classifier and stores it in the memory 12 for subsequent fast and accurate tongue segmentation of input tongue-surface images.
  • The Adaboost algorithm is an existing binary classification algorithm and is not described in detail in the present invention.
  • The classification model creation module 101 performs machine learning and training on the feature group of each superpixel region Ki of the tongue feature sample together with the corresponding label value using the adaboost algorithm to obtain the tongue classifier.
  • the present invention also provides a computer-readable storage medium that stores a plurality of computer program instructions.
  • the computer program instructions are loaded by a processor of a computer device and execute the artificial intelligence-based method of segmenting a tongue image of traditional Chinese medicine.
  • the program may be stored in a computer-readable storage medium.
  • The storage medium may include a read-only memory, a random access memory, a magnetic disk or an optical disk.
  • Because a tongue-surface image contains too many pixels, segmenting it with the existing Snakes algorithm is time-consuming and not very accurate.
  • The present invention processes all the elemental pixels (i.e. the original pixels) of the tongue-surface image into a series of superpixels with the SLIC algorithm and replaces the elemental pixels with the RGB mean of each superpixel; because there are far fewer superpixels than elemental pixels, and because a tongue classifier trained with the adaboost algorithm is then used to segment the tongue region from the non-tongue region, the invention reduces the amount of computation on tongue-surface image data, speeds up tongue segmentation, effectively improves segmentation accuracy, and has broad application prospects in the field of traditional Chinese medicine tongue diagnosis.

Abstract

The present invention provides an artificial intelligence-based traditional Chinese medicine tongue image segmentation device, method and storage medium. The method comprises the steps of: capturing a tongue-surface image containing the tongue body with an image acquisition device; processing the RGB pixels of the tongue-surface image with the SLIC algorithm to generate N superpixel regions; extracting a feature group for each superpixel region; classifying the feature group of each superpixel region with a pre-trained tongue classifier to identify tongue regions and non-tongue regions; and removing the non-tongue regions from the tongue-surface image while retaining the tongue region to obtain the tongue image, which is output through an output unit. The present invention reduces the amount of computation on tongue-surface image data, speeds up tongue segmentation, and effectively improves segmentation accuracy, and therefore has broad application prospects in the field of traditional Chinese medicine tongue diagnosis.

Description

Artificial intelligence-based traditional Chinese medicine tongue image segmentation device, method and storage medium
Technical Field
The present invention relates to the technical field of traditional Chinese medicine tongue image processing, and in particular to an artificial intelligence-based traditional Chinese medicine tongue image segmentation device and method, and a computer storage medium.
Background Art
As one of the four diagnostic methods of traditional Chinese medicine, "inspection" takes the tongue image as its main object of study. Information such as the size, shape, color, cracks and coating texture of the tongue body, and the presence and location of tooth marks, all reflect the patient's state of health, which makes tongue diagnosis a very important and widely used diagnostic method in traditional Chinese medicine. However, TCM diagnosis relies mainly on the experience of practicing TCM physicians and lacks effective quantitative standards, which has greatly hindered the modernization of traditional Chinese medicine. As one of its important diagnostic methods, tongue diagnosis suffers from the same shortcoming. An automated tongue diagnosis system based on image analysis, pattern recognition and artificial intelligence is one way to remedy this shortcoming; the purpose of such a system is to establish the relationship between tongue image features and tongue image types so as to automate tongue diagnosis. Because the extraction of tongue image features depends entirely on accurately segmenting the tongue body in the image, many segmentation methods for tongue diagnosis images have been proposed. Current tongue image segmentation techniques are based mainly on color segmentation or on the Snakes algorithm. However, color-based segmentation is easily affected by lighting and performs poorly when the tongue and the skin are close in color, while segmentation based on the Snakes algorithm is also unsatisfactory when the contrast between the tongue and the surrounding skin is insufficient. Existing tongue image segmentation techniques therefore offer low accuracy and efficiency in tongue segmentation, which compromises the accuracy of TCM tongue diagnosis and limits their practicality in real applications.
Technical Problem
The main purpose of the present invention is to provide an artificial intelligence-based traditional Chinese medicine tongue image segmentation device, method and computer storage medium, aiming to solve the prior-art problem of low accuracy and efficiency in tongue segmentation.
Technical Solution
To achieve the above purpose, the present invention provides an artificial intelligence-based traditional Chinese medicine tongue image segmentation device. The device includes an image acquisition device and an output unit, and further includes a processor adapted to execute computer program instructions and a memory adapted to store a plurality of computer program instructions. The computer program instructions are loaded by the processor to perform the following steps: capturing a tongue-surface image containing the tongue body with the image acquisition device; processing the RGB pixels of the tongue-surface image with the SLIC algorithm to generate N superpixel regions; extracting a feature group for each superpixel region; classifying the feature group of each superpixel region with a pre-trained tongue classifier and identifying tongue regions and non-tongue regions; and removing the non-tongue regions from the tongue-surface image while retaining the tongue region to obtain the tongue image, which is output through the output unit.
Further, the computer program instructions are loaded by the processor to also perform the following steps: creating a tongue feature sample in advance and defining its features as follows: if a superpixel region belongs to the tongue region, the label value corresponding to the feature group of that superpixel region is set to 1; if a superpixel region does not belong to the tongue region, the label value corresponding to the feature group of that superpixel region is set to 0; and training, with the adaboost algorithm, the feature group of each superpixel region Ki of the tongue feature sample together with its corresponding label value to produce a tongue classifier, which is stored in the memory.
Further, capturing the tongue-surface image containing the tongue body with the image acquisition device includes the following steps: capturing, with the image acquisition device, a digital image containing the tongue body from the patient's mouth, and analyzing the sharpness and centering parameter of the digital image from it; generating a first control signal according to the sharpness of the digital image, and generating a second control signal according to its centering parameter; driving the stepping motor of the image acquisition device according to the first control signal to adjust the relative position of the camera and the tongue body; and driving the lens axis of the image acquisition device according to the second control signal so that it is parallel to the tongue-surface normal and passes through the center of the tongue surface, so as to capture a clear tongue-surface image.
Further, the feature group of a superpixel region includes the color feature, position feature and maximum-gradient feature of the superpixel region, and the computer program instructions are loaded by the processor to also perform the following steps: computing the average of the three color channels R, G and B over all pixels in each superpixel region as the color feature of that superpixel region, denoted R', G', B'; comparing all gradient magnitudes M in each superpixel region and selecting the largest gradient as the maximum-gradient feature of that superpixel region, denoted M'; accumulating the coordinates (x, y) of all pixels in each superpixel region and dividing the accumulated sums by the width and the length of the face, respectively, to obtain the position features (x', y') of each superpixel region; and combining the color feature, position feature and maximum-gradient feature of each superpixel region into a six-dimensional feature group (R', G', B', M', x', y') as the feature group of that superpixel region.
In another aspect, the present invention further provides an artificial intelligence-based traditional Chinese medicine tongue image segmentation method applied to a traditional Chinese medicine tongue image segmentation device that includes an image acquisition device and an output unit. The method includes the following steps: capturing a tongue-surface image containing the tongue body with the image acquisition device; processing the RGB pixels of the tongue-surface image with the SLIC algorithm to generate N superpixel regions; extracting a feature group for each superpixel region; classifying the feature group of each superpixel region with a pre-trained tongue classifier and identifying tongue regions and non-tongue regions; and removing the non-tongue regions from the tongue-surface image while retaining the tongue region to obtain the tongue image, which is output through the output unit.
Further, the artificial intelligence-based traditional Chinese medicine tongue image segmentation method also includes the following steps: creating a tongue feature sample in advance and defining its features as follows: if a superpixel region belongs to the tongue region, the label value corresponding to the feature group of that superpixel region is set to 1; if a superpixel region does not belong to the tongue region, the label value corresponding to the feature group of that superpixel region is set to 0; and training, with the adaboost algorithm, the feature group of each superpixel region Ki of the tongue feature sample together with its corresponding label value to produce a tongue classifier, which is stored in the memory.
Further, the rules for identifying tongue regions and non-tongue regions are as follows: if the label value corresponding to a superpixel region is 1, that superpixel region is identified as a tongue region; if the label value corresponding to a superpixel region is 0, that superpixel region is identified as a non-tongue region.
Further, capturing the tongue-surface image containing the tongue body with the image acquisition device includes the following steps: capturing, with the image acquisition device, a digital image containing the tongue body from the patient's mouth, and analyzing the sharpness and centering parameter of the digital image from it; generating a first control signal according to the sharpness of the digital image, and generating a second control signal according to its centering parameter; driving the stepping motor of the image acquisition device according to the first control signal to adjust the relative position of the camera and the tongue body; and driving the lens axis of the image acquisition device according to the second control signal so that it is parallel to the tongue-surface normal and passes through the center of the tongue surface, so as to capture a clear tongue-surface image.
Further, the feature group of a superpixel region includes the color feature, position feature and maximum-gradient feature of the superpixel region, and the step of extracting the feature group of each superpixel region includes the following steps: computing the average of the three color channels R, G and B over all pixels in each superpixel region as the color feature of that superpixel region, denoted R', G', B'; comparing all gradient magnitudes M in each superpixel region and selecting the largest gradient as the maximum-gradient feature of that superpixel region, denoted M'; accumulating the coordinates (x, y) of all pixels in each superpixel region and dividing the accumulated sums by the width and the length of the face, respectively, as the position features (x', y') of each superpixel region; and combining the color feature, position feature and maximum-gradient feature of each superpixel region into a six-dimensional feature group (R', G', B', M', x', y') as the feature group of that superpixel region.
In a further aspect, the present invention provides a computer-readable storage medium storing a plurality of computer program instructions, wherein the computer program instructions are loaded by the processor of a computer device to execute the artificial intelligence-based traditional Chinese medicine tongue image segmentation method described above.
Beneficial Effects
Compared with the prior art, because a tongue-surface image contains too many pixels, segmenting it with the existing Snakes algorithm is time-consuming and not very accurate. The present invention processes all the elemental pixels (i.e. the original pixels) of the tongue-surface image into a series of superpixels with the SLIC algorithm and replaces the elemental pixels with the RGB mean of each superpixel. Because there are far fewer superpixels than elemental pixels, and because a tongue classifier trained with the adaboost algorithm is then used to segment the tongue region from the non-tongue region, the invention reduces the amount of computation on tongue-surface image data, speeds up tongue segmentation, and effectively improves segmentation accuracy, giving it broad application prospects in the field of traditional Chinese medicine tongue diagnosis.
Brief Description of the Drawings
FIG. 1 is a schematic block diagram of a preferred embodiment of the artificial intelligence-based traditional Chinese medicine tongue image segmentation device of the present invention;
FIG. 2 is a flowchart of a preferred embodiment of the artificial intelligence-based traditional Chinese medicine tongue image segmentation method of the present invention;
FIG. 3 is a schematic diagram of processing the RGB pixels of a tongue-surface image into superpixel regions according to the present invention.
The realization of the purposes, the functional features and the advantages of the present invention are further described below with reference to the embodiments and the accompanying drawings.
Embodiments of the Invention
To further explain the technical means adopted by the present invention to achieve the intended purposes and their effects, specific embodiments, structures, features and effects of the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
Referring to FIG. 1, FIG. 1 is a schematic block diagram of a preferred embodiment of the artificial intelligence-based traditional Chinese medicine tongue image segmentation device of the present invention. In this embodiment, the TCM tongue image segmentation device 1 is installed with an artificial intelligence-based TCM tongue image segmentation system 10, and may be a personal computer, mainframe computer, workstation, server, cloud platform server or other computing device with data-processing and image-processing capabilities on which the TCM tongue image segmentation system 10 is installed.
In this embodiment, the TCM tongue image segmentation device 1 includes, but is not limited to, the artificial intelligence-based TCM tongue image segmentation system 10, an image acquisition device 11, a memory 12 adapted to store a plurality of computer program instructions, a processor 13 for executing the computer program instructions, and an output unit 14. The image acquisition device 11 is a high-definition camera device that includes at least a stepping motor and a lens, for example a high-definition video camera, and is used to capture a tongue-surface image containing the tongue body from the patient's tongue. The memory 12 may be a read-only memory (ROM), a random access memory (RAM), an electrically erasable memory (EEPROM), a flash memory (FLASH), a magnetic disk or an optical disk. The processor 13 is a central processing unit (CPU), a microcontroller (MCU), a data-processing chip, or an information-processing unit with data-processing capability. The output unit 14 may be a display screen for displaying the tongue image or a printer for printing the tongue image.
In this embodiment, the artificial intelligence-based TCM tongue image segmentation system 10 consists of program modules composed of computer program instructions, including, but not limited to, a classification model creation module 101, a tongue image acquisition module 102, a tongue feature extraction module 103 and a tongue segmentation module 104. A module in the sense of the present invention is a series of computer program instruction segments that can be executed by the processor 13 of the TCM tongue image segmentation device 1 and perform a fixed function, and that are stored in the memory 12 of the device. The specific function of each module is described below with reference to FIG. 2.
Referring to FIG. 2, FIG. 2 is a flowchart of a preferred embodiment of the artificial intelligence-based traditional Chinese medicine tongue image segmentation method of the present invention. In this embodiment, the method steps of the artificial intelligence-based TCM tongue image segmentation method are implemented by a computer software program stored in the form of computer program instructions in a computer-readable storage medium (for example the memory 12). The computer-readable storage medium may include a read-only memory, a random access memory, a magnetic disk or an optical disk, and the computer program instructions can be loaded by a processor (for example the processor 13) to perform the following steps S21 to S25.
Step S21: capture a tongue-surface image containing the tongue body with the image acquisition device. In this embodiment, the tongue image acquisition module 102 captures a digital image containing the tongue body from the patient's mouth through the image acquisition device 11 and, based on this digital image, controls the image acquisition device 11 to capture a clear tongue-surface image. The tongue image acquisition module 102 analyzes the sharpness and the centering parameter of the digital image, generates a first control signal according to the sharpness and a second control signal according to the centering parameter, drives the stepping motor of the image acquisition device 11 according to the first control signal to adjust the relative position of the camera and the tongue body, and drives the lens axis of the image acquisition device 11 according to the second control signal so that it is parallel to the tongue-surface normal and passes through the center of the tongue surface, in order to re-capture a clear tongue-surface image. In this way the image acquisition device 11 is controlled to adapt to the patient's tongue-extending motion and to capture, from the patient's tongue, a tongue-surface image containing the tongue body.
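The patent does not specify how the sharpness or the centering parameter is computed, so the sketch below is only an illustration of how the two control quantities of step S21 could be derived; the variance-of-Laplacian focus score, the brightness-weighted centroid and the function name are assumptions, not part of the disclosure.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import laplace

def sharpness_and_centering(frame_rgb):
    """Derive an illustrative focus score and centering offset for a captured frame."""
    gray = rgb2gray(frame_rgb)                 # float image in [0, 1]
    sharpness = laplace(gray).var()            # variance of Laplacian: higher = sharper (assumed metric)

    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    weights = gray / (gray.sum() + 1e-9)       # crude brightness proxy for the tongue region
    cy = float((ys * weights).sum())
    cx = float((xs * weights).sum())
    centering = (cx - w / 2.0, cy - h / 2.0)   # offset of the weighted centroid from the image centre
    return sharpness, centering

# The first control signal could be driven by `sharpness` (stepper motor / camera position)
# and the second by `centering` (lens-axis alignment), until both are within tolerance.
```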
Step S22: process the RGB pixels of the tongue-surface image with the SLIC algorithm to generate N superpixel regions Ki. In this embodiment, the tongue feature extraction module 103 processes the RGB pixels of the tongue-surface image with the SLIC algorithm to obtain a tongue-surface image composed of N superpixel regions Ki. As shown in FIG. 3, the tongue-surface image consists of a series of pixels, each made up of the three color channels R, G and B. The tongue feature extraction module 103 applies the SLIC (simple linear iterative clustering) algorithm to the tongue-surface image to generate N superpixel regions Ki, where i = 1, ..., N (N is a natural number) and each superpixel region Ki is a regular or irregular region made up of several pixels with similar features. Specifically, the tongue feature extraction module 103 uses the SLIC algorithm to convert the tongue-surface image from the RGB color space to the CIE-Lab color space; the (L, a, b) color values and (x, y) coordinates of each RGB pixel form a 5-dimensional vector V[l, a, b, x, y], and the similarity of two pixels can be measured by their vector distance: the larger the distance, the smaller the similarity. The SLIC algorithm first generates N seed points, then searches the space around each seed point for the pixels closest to it and assigns them to the same class as that seed point, until all pixels have been classified. The mean vector of all pixels within each of the N superpixels is then computed to obtain N new cluster centers, which are in turn used to search their surroundings for the most similar pixels; once all pixels have been reassigned, N superpixels are obtained again and the cluster centers are updated. This is iterated until convergence, yielding a tongue-surface image of N superpixel regions Ki.
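A minimal sketch of step S22, assuming scikit-image's slic implementation as a stand-in for the SLIC procedure described above (the Lab conversion, seeding and iterative clustering are handled internally); the choice of n_segments and compactness is illustrative.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_regions(tongue_rgb, n_segments=400, compactness=10):
    """Split the tongue-surface image into roughly N superpixel regions Ki."""
    labels = slic(tongue_rgb, n_segments=n_segments, compactness=compactness, start_label=0)
    # Replace every pixel with the mean RGB of its superpixel, as the patent describes.
    mean_rgb = np.zeros_like(tongue_rgb, dtype=float)
    for k in np.unique(labels):
        mask = labels == k
        mean_rgb[mask] = tongue_rgb[mask].mean(axis=0)
    return labels, mean_rgb
```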
Step S23: extract the feature group of each superpixel region Ki, including the color feature, position feature and maximum-gradient feature. In this embodiment, the tongue feature extraction module 103 extracts the feature group of each superpixel region Ki, the features comprising the color feature, position feature and maximum-gradient feature of each superpixel region Ki. The tongue feature extraction module 103 computes the average of the three color channels R, G and B over all pixels in each superpixel region Ki as the color feature of that region, denoted R', G', B'; it compares all gradient magnitudes M in each superpixel region Ki and selects the largest as the maximum-gradient feature of that region, denoted M'; it accumulates the coordinates (x, y) of all pixels in each superpixel region Ki and divides the accumulated sums by the width and the length of the face, respectively, as the position features (x', y') of that region; finally, it combines the color feature, position feature and maximum-gradient feature of each superpixel region Ki into a six-dimensional feature group (R', G', B', M', x', y') as the feature group of that region.
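The six-dimensional feature group (R', G', B', M', x', y') of step S23 could be assembled roughly as follows; using a Sobel filter for the gradient magnitude M is an assumption, since the patent only speaks of "gradient magnitudes M", and face_width / face_height stand in for the face dimensions used to normalize the position features.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import sobel

def feature_groups(tongue_rgb, labels, face_width, face_height):
    """Build one (R', G', B', M', x', y') feature group per superpixel region Ki."""
    grad = sobel(rgb2gray(tongue_rgb))        # per-pixel gradient magnitude (Sobel is an assumption)
    feats = []
    for k in np.unique(labels):               # regions visited in ascending label order
        mask = labels == k
        r_, g_, b_ = tongue_rgb[mask].mean(axis=0)   # colour feature R', G', B'
        m_ = grad[mask].max()                        # maximum-gradient feature M'
        ys, xs = np.nonzero(mask)
        x_ = xs.sum() / face_width                   # position features: accumulated coordinates
        y_ = ys.sum() / face_height                  # divided by the face width and length
        feats.append([r_, g_, b_, m_, x_, y_])
    return np.asarray(feats)
```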
Step S24: classify the feature group of each superpixel region Ki with the pre-trained tongue classifier to identify tongue regions and non-tongue regions. In this embodiment, the tongue segmentation module 104 feeds the feature group (R', G', B', M', x', y') of each superpixel region Ki into the trained tongue classifier, which classifies these feature groups to identify tongue regions and non-tongue regions. Specifically, the tongue segmentation module 104 inputs the feature group (R', G', B', M', x', y') of each superpixel region Ki of the tongue-surface image, uses the tongue classifier to assign a label value to each superpixel region Ki from its feature group, and identifies the tongue regions and non-tongue regions in the tongue-surface image according to the label values. In this embodiment, a tongue classifier (a tongue classification model) is trained in advance and stored in the memory 12; if the label value corresponding to a superpixel region Ki is 1, the tongue segmentation module 104 identifies that superpixel region Ki as a tongue region, and if the label value is 0, it identifies that superpixel region Ki as a non-tongue region.
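Applying the stored tongue classifier in step S24 then reduces to a single predict call per image; the joblib file name below is hypothetical and simply mirrors the "stored in the memory 12" wording.

```python
import joblib

def classify_superpixels(feats, model_path="tongue_classifier.joblib"):
    """Return one label value per superpixel region: 1 for tongue, 0 for non-tongue."""
    tongue_classifier = joblib.load(model_path)   # hypothetical file written by the training step
    return tongue_classifier.predict(feats)
```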
Step S25: remove the non-tongue regions from the tongue-surface image while retaining the tongue region to obtain the tongue image, and output the tongue image through the output unit. In this embodiment, the tongue segmentation module 104 removes all the non-tongue regions from the tongue-surface image, leaving the tongue region, and thereby obtains a tongue image containing only the patient's tongue, clearly. The tongue segmentation module 104 then displays the segmented tongue image on the display screen of the output unit 14, prints it with a printer, or sends it over a network to a doctor's terminal, so that the doctor can use the patient's tongue image to assess the size, shape, color, cracks and coating texture of the tongue and the presence and location of tooth marks, assisting the doctor in performing a TCM tongue diagnosis and assessing the patient's state of health.
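Step S25 keeps only the superpixels labelled 1 and blanks out the rest; a minimal sketch, assuming the labels array from the SLIC step and the per-superpixel label values from the classifier:

```python
import numpy as np

def extract_tongue(tongue_rgb, labels, label_values):
    """Blank out non-tongue superpixels and keep the tongue region."""
    region_ids = np.unique(labels)                        # same order as the feature groups above
    tongue_ids = region_ids[np.asarray(label_values) == 1]
    mask = np.isin(labels, tongue_ids)
    tongue_image = np.zeros_like(tongue_rgb)
    tongue_image[mask] = tongue_rgb[mask]                 # non-tongue regions removed, tongue retained
    return tongue_image
```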
Before tongue image segmentation is performed with the artificial intelligence-based TCM tongue image segmentation method of the present invention, a tongue classifier needs to be trained in advance through machine learning and training and stored in the memory 12. In this example, the following steps are further included before step S21:
Step 1: create a tongue feature sample in advance and define its features. In this embodiment, the classification model creation module 101 creates a tongue feature sample in advance. Specifically, a tongue-surface image is obtained for training, and the feature groups (R', G', B', M', x', y') of all superpixel regions Ki of that image are obtained according to steps S22 and S23 as the tongue feature sample, which is defined as follows: if a superpixel region Ki belongs to the tongue region, the label value corresponding to the feature group of that superpixel region Ki is set to 1; if a superpixel region Ki does not belong to the tongue region, the label value corresponding to the feature group of that superpixel region Ki is set to 0.
Step 2: train the tongue feature sample with the adaboost algorithm to produce a tongue classifier, and store the tongue classifier in the memory. Specifically, the classification model creation module 101 trains the tongue feature sample with the adaboost algorithm to produce a tongue classifier and stores it in the memory 12 for subsequent fast and accurate tongue segmentation of input tongue-surface images. In this embodiment, the Adaboost algorithm is an existing binary classification algorithm and is not described in detail here; the classification model creation module 101 performs machine learning and training on the feature group of each superpixel region Ki of the tongue feature sample together with its corresponding label value using the adaboost algorithm to obtain the tongue classifier.
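Steps 1 and 2 of the training phase map naturally onto scikit-learn's AdaBoostClassifier, which implements the adaboost algorithm the patent names; the hyper-parameters and the on-disk storage below are illustrative assumptions.

```python
import joblib
from sklearn.ensemble import AdaBoostClassifier

def train_tongue_classifier(sample_feats, sample_labels, model_path="tongue_classifier.joblib"):
    """Train the binary tongue/non-tongue classifier from one labelled training image.

    sample_feats : (N, 6) array of feature groups (R', G', B', M', x', y')
    sample_labels: N label values, 1 for tongue superpixels and 0 otherwise
    """
    clf = AdaBoostClassifier(n_estimators=100)    # illustrative hyper-parameter choice
    clf.fit(sample_feats, sample_labels)
    joblib.dump(clf, model_path)                  # stand-in for "stored in the memory 12"
    return clf
```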
The present invention further provides a computer-readable storage medium storing a plurality of computer program instructions, wherein the computer program instructions are loaded by the processor of a computer device to execute the artificial intelligence-based traditional Chinese medicine tongue image segmentation method described above. Those skilled in the art will understand that all or part of the steps of the various methods in the above embodiments can be completed by means of program instructions, and the program may be stored in a computer-readable storage medium; the storage medium may include a read-only memory, a random access memory, a magnetic disk or an optical disk.
In this embodiment, because a tongue-surface image contains too many pixels, segmenting it with the existing Snakes algorithm is time-consuming and not very accurate. The present invention processes all the elemental pixels (i.e. the original pixels) of the tongue-surface image into a series of superpixels with the SLIC algorithm and replaces the elemental pixels with the RGB mean of each superpixel. Because there are far fewer superpixels than elemental pixels, and because a tongue classifier trained with the adaboost algorithm is then used to segment the tongue region from the non-tongue region, the invention reduces the amount of computation on tongue-surface image data, speeds up tongue segmentation, and effectively improves segmentation accuracy, giving it broad application prospects in the field of traditional Chinese medicine tongue diagnosis.
The above are only preferred embodiments of the present invention and do not limit its patent scope; any equivalent structural or flow transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, falls likewise within the scope of patent protection of the present invention.
Industrial Applicability
Compared with the prior art, because a tongue-surface image contains too many pixels, segmenting it with the existing Snakes algorithm is time-consuming and not very accurate. The present invention processes all the elemental pixels (i.e. the original pixels) of the tongue-surface image into a series of superpixels with the SLIC algorithm and replaces the elemental pixels with the RGB mean of each superpixel. Because there are far fewer superpixels than elemental pixels, and because a tongue classifier trained with the adaboost algorithm is then used to segment the tongue region from the non-tongue region, the invention reduces the amount of computation on tongue-surface image data, speeds up tongue segmentation, and effectively improves segmentation accuracy, giving it broad application prospects in the field of traditional Chinese medicine tongue diagnosis.

Claims (10)

  1. An artificial intelligence-based traditional Chinese medicine tongue image segmentation device, the traditional Chinese medicine tongue image segmentation device comprising an image acquisition device and an output unit, characterized in that the traditional Chinese medicine tongue image segmentation device comprises a processor adapted to implement various computer program instructions and a memory adapted to store a plurality of computer program instructions, the computer program instructions being loaded by the processor to perform the following steps:
    capturing a tongue-surface image containing the tongue body through the image acquisition device;
    processing the RGB pixels of the tongue-surface image with the SLIC algorithm to generate N superpixel regions;
    extracting a feature group for each superpixel region;
    classifying the feature group of each superpixel region with a pre-trained tongue classifier, and identifying tongue regions and non-tongue regions;
    removing the non-tongue regions from the tongue-surface image and retaining the tongue region to obtain the tongue image, and outputting the tongue image through the output unit.
  2. The artificial intelligence-based traditional Chinese medicine tongue image segmentation device according to claim 1, characterized in that the computer program instructions are loaded by the processor to further perform the following steps:
    creating a tongue feature sample in advance and defining its features as follows: if a superpixel region belongs to the tongue region, setting the label value corresponding to the feature group of that superpixel region to 1; if a superpixel region does not belong to the tongue region, setting the label value corresponding to the feature group of that superpixel region to 0;
    training, with the adaboost algorithm, the feature group of each superpixel region Ki of the tongue feature sample together with the corresponding label value to produce a tongue classifier, and storing the tongue classifier in the memory.
  3. The artificial intelligence-based traditional Chinese medicine tongue image segmentation device according to claim 1, characterized in that capturing the tongue-surface image containing the tongue body through the image acquisition device comprises the following steps:
    capturing a digital image containing the tongue body from the patient's mouth through the image acquisition device, and analyzing the sharpness and the centering parameter of the digital image from the digital image;
    generating a first control signal according to the sharpness of the digital image, and generating a second control signal according to the centering parameter of the digital image;
    driving the stepping motor of the image acquisition device according to the first control signal to adjust the relative position of the camera and the tongue body;
    driving the lens axis of the image acquisition device according to the second control signal so that it is parallel to the tongue-surface normal and passes through the center of the tongue surface, so as to capture a clear tongue-surface image.
  4. The artificial intelligence-based traditional Chinese medicine tongue image segmentation device according to claim 1, characterized in that the feature group of a superpixel region comprises the color feature, position feature and maximum-gradient feature of the superpixel region, wherein the computer program instructions are loaded by the processor to further perform the following steps:
    computing the average of the three color channels R, G and B over all pixels in each superpixel region as the color feature of the superpixel region, denoted R', G', B';
    comparing all gradient magnitudes M of each superpixel region and selecting the largest gradient as the maximum-gradient feature of each superpixel region, denoted M';
    accumulating the coordinates (x, y) of all pixels in each superpixel region and dividing the accumulated sums by the width and the length of the face, respectively, as the position features (x', y') of each superpixel region;
    combining the color feature, position feature and maximum-gradient feature of each superpixel region into a six-dimensional feature group (R', G', B', M', x', y') as the feature group of each superpixel region.
  5. An artificial intelligence-based traditional Chinese medicine tongue image segmentation method, applied to a traditional Chinese medicine tongue image segmentation device, the traditional Chinese medicine tongue image segmentation device comprising an image acquisition device and an output unit, characterized in that the method comprises the following steps:
    capturing a tongue-surface image containing the tongue body through the image acquisition device;
    processing the RGB pixels of the tongue-surface image with the SLIC algorithm to generate N superpixel regions;
    extracting a feature group for each superpixel region;
    classifying the feature group of each superpixel region with a pre-trained tongue classifier, and identifying tongue regions and non-tongue regions;
    removing the non-tongue regions from the tongue-surface image and retaining the tongue region to obtain the tongue image, and outputting the tongue image through the output unit.
  6. The artificial intelligence-based traditional Chinese medicine tongue image segmentation method according to claim 5, characterized in that the method further comprises the following steps:
    creating a tongue feature sample in advance and defining its features as follows: if a superpixel region belongs to the tongue region, setting the label value corresponding to the feature group of that superpixel region to 1; if a superpixel region does not belong to the tongue region, setting the label value corresponding to the feature group of that superpixel region to 0;
    training, with the adaboost algorithm, the feature group of each superpixel region Ki of the tongue feature sample together with the corresponding label value to produce a tongue classifier, and storing the tongue classifier in the memory.
  7. The artificial intelligence-based traditional Chinese medicine tongue image segmentation method according to claim 6, characterized in that the rules for identifying tongue regions and non-tongue regions are as follows:
    if the label value corresponding to a superpixel region is 1, identifying the superpixel region as a tongue region;
    if the label value corresponding to a superpixel region is 0, identifying the superpixel region as a non-tongue region.
  8. The artificial intelligence-based traditional Chinese medicine tongue image segmentation method according to claim 5, characterized in that capturing the tongue-surface image containing the tongue body through the image acquisition device comprises the following steps:
    capturing a digital image containing the tongue body from the patient's mouth through the image acquisition device, and analyzing the sharpness and the centering parameter of the digital image from the digital image;
    generating a first control signal according to the sharpness of the digital image, and generating a second control signal according to the centering parameter of the digital image;
    driving the stepping motor of the image acquisition device according to the first control signal to adjust the relative position of the camera and the tongue body;
    driving the lens axis of the image acquisition device according to the second control signal so that it is parallel to the tongue-surface normal and passes through the center of the tongue surface, so as to capture a clear tongue-surface image.
  9. The artificial intelligence-based traditional Chinese medicine tongue image segmentation method according to claim 5, characterized in that the feature group of a superpixel region comprises the color feature, position feature and maximum-gradient feature of the superpixel region, wherein the step of extracting the feature group of each superpixel region comprises the following steps:
    computing the average of the three color channels R, G and B over all pixels in each superpixel region as the color feature of the superpixel region, denoted R', G', B';
    comparing all gradient magnitudes M of each superpixel region and selecting the largest gradient as the maximum-gradient feature of each superpixel region, denoted M';
    accumulating the coordinates (x, y) of all pixels in each superpixel region and dividing the accumulated sums by the width and the length of the face, respectively, as the position features (x', y') of each superpixel region;
    combining the color feature, position feature and maximum-gradient feature of each superpixel region into a six-dimensional feature group (R', G', B', M', x', y') as the feature group of each superpixel region.
  10. A computer-readable storage medium storing a plurality of computer program instructions, characterized in that the computer program instructions are loaded by the processor of a computer device to execute the artificial intelligence-based traditional Chinese medicine tongue image segmentation method according to any one of claims 5 to 9.
PCT/CN2019/099242 2018-08-06 2019-08-05 Artificial intelligence-based traditional Chinese medicine tongue image segmentation device, method and storage medium WO2020029915A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810884613.4 2018-08-06
CN201810884613.4A CN110807775A (zh) 2018-08-06 2018-08-06 基于人工智能的中医舌像分割装置、方法及存储介质

Publications (1)

Publication Number Publication Date
WO2020029915A1 true WO2020029915A1 (zh) 2020-02-13

Family

ID=69413409

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/099242 WO2020029915A1 (zh) 2018-08-06 2019-08-05 基于人工智能的中医舌像分割装置、方法及存储介质

Country Status (2)

Country Link
CN (1) CN110807775A (zh)
WO (1) WO2020029915A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362334A (zh) * 2020-03-04 2021-09-07 北京悦熙兴中科技有限公司 一种舌照处理方法及装置
CN113837987A (zh) * 2020-12-31 2021-12-24 京东科技控股股份有限公司 舌部图像采集方法、装置及计算机设备
CN113989269A (zh) * 2021-11-14 2022-01-28 北京工业大学 一种基于卷积神经网络多尺度特征融合的中医舌图像齿痕自动检测方法
CN114627136A (zh) * 2022-01-28 2022-06-14 河南科技大学 一种基于特征金字塔网络的舌象分割与对齐方法
CN116030352A (zh) * 2023-03-29 2023-04-28 山东锋士信息技术有限公司 融合多尺度分割和超像素分割的长时序土地利用分类方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538398B (zh) * 2021-07-28 2023-11-24 平安科技(深圳)有限公司 基于特征匹配的舌苔分类方法、装置、设备及介质
CN113706515B (zh) * 2021-08-31 2023-07-18 平安科技(深圳)有限公司 舌像异常确定方法、装置、计算机设备和存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013106842A2 (en) * 2012-01-13 2013-07-18 The Charles Stark Draper Laboratory, Inc. Stem cell bioinformatics
CN105930798A (zh) * 2016-04-21 2016-09-07 厦门快商通科技股份有限公司 基于学习的面向手机应用的舌像快速检测分割方法
CN105930815A (zh) * 2016-05-04 2016-09-07 中国农业大学 一种水下生物检测方法和系统
CN106510636A (zh) * 2016-11-29 2017-03-22 深圳市易特科信息技术有限公司 中医舌像自动检测系统及方法


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362334A (zh) * 2020-03-04 2021-09-07 北京悦熙兴中科技有限公司 一种舌照处理方法及装置
CN113837987A (zh) * 2020-12-31 2021-12-24 京东科技控股股份有限公司 舌部图像采集方法、装置及计算机设备
CN113837987B (zh) * 2020-12-31 2023-11-03 京东科技控股股份有限公司 舌部图像采集方法、装置及计算机设备
CN113989269A (zh) * 2021-11-14 2022-01-28 北京工业大学 一种基于卷积神经网络多尺度特征融合的中医舌图像齿痕自动检测方法
CN113989269B (zh) * 2021-11-14 2024-04-02 北京工业大学 一种基于卷积神经网络多尺度特征融合的中医舌图像齿痕自动检测方法
CN114627136A (zh) * 2022-01-28 2022-06-14 河南科技大学 一种基于特征金字塔网络的舌象分割与对齐方法
CN114627136B (zh) * 2022-01-28 2024-02-27 河南科技大学 一种基于特征金字塔网络的舌象分割与对齐方法
CN116030352A (zh) * 2023-03-29 2023-04-28 山东锋士信息技术有限公司 融合多尺度分割和超像素分割的长时序土地利用分类方法

Also Published As

Publication number Publication date
CN110807775A (zh) 2020-02-18

Similar Documents

Publication Publication Date Title
WO2020029915A1 (zh) 基于人工智能的中医舌像分割装置、方法及存储介质
Zhang et al. Research on face detection technology based on MTCNN
WO2022012110A1 (zh) 胚胎光镜图像中细胞的识别方法及系统、设备及存储介质
WO2022001571A1 (zh) 一种基于超像素图像相似度的计算方法
CN109711268B (zh) 一种人脸图像筛选方法及设备
CN110827312A (zh) 一种基于协同视觉注意力神经网络的学习方法
CN112750106A (zh) 一种基于非完备标记的深度学习的核染色细胞计数方法、计算机设备、存储介质
CN116012721B (zh) 一种基于深度学习的水稻叶片病斑检测方法
CN114821014A (zh) 基于多模态与对抗学习的多任务目标检测识别方法及装置
CN114549557A (zh) 一种人像分割网络训练方法、装置、设备及介质
CN114333062B (zh) 基于异构双网络和特征一致性的行人重识别模型训练方法
CN115270184A (zh) 视频脱敏、车辆的视频脱敏方法、车载处理系统
CN112446417B (zh) 基于多层超像素分割的纺锤形果实图像分割方法及系统
CN114372962A (zh) 基于双粒度时间卷积的腹腔镜手术阶段识别方法与系统
CN112418033B (zh) 基于mask rcnn神经网络的滑坡坡面分割识别方法
CN111311602A (zh) 中医面诊的嘴唇图像分割装置及方法
CN117437691A (zh) 一种基于轻量化网络的实时多人异常行为识别方法及系统
CN112308827A (zh) 基于深度卷积神经网络的毛囊检测方法
CN112991281A (zh) 视觉检测方法、系统、电子设备及介质
CN110363240B (zh) 一种医学影像分类方法与系统
CN111914796A (zh) 基于深度图和骨骼点的人体行为识别方法
CN115719497A (zh) 一种学生专注度识别方法及系统
CN114882372A (zh) 一种目标检测的方法及设备
Qi et al. Deep Learning Based Image Recognition In Animal Husbandry
CN109165565A (zh) 一种基于耦合动态马尔科夫网络的视频目标发现与分割方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19846603

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19846603

Country of ref document: EP

Kind code of ref document: A1