CN113129277A - Tongue coating detection system based on convolutional neural network - Google Patents

Tongue coating detection system based on convolutional neural network

Info

Publication number
CN113129277A
CN113129277A (application CN202110357271.2A)
Authority
CN
China
Prior art keywords
tongue
image
neural network
convolutional neural
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110357271.2A
Other languages
Chinese (zh)
Inventor
曾云辉
张子怡
廖梓钧
李嘉明
罗坤亭
景毅
戴源志
王榕
陈世帆
娄越
孔锐
郭洪飞
袁博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Shenzhen International Graduate School of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen International Graduate School of Tsinghua University filed Critical Shenzhen International Graduate School of Tsinghua University
Priority to CN202110357271.2A priority Critical patent/CN113129277A/en
Publication of CN113129277A publication Critical patent/CN113129277A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/49Analysis of texture based on structural texture description, e.g. using primitives or placement rules
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H15/00ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/20ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00IoT characterised by the purpose of the information processing
    • G16Y40/10Detection; Monitoring
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Primary Health Care (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Epidemiology (AREA)
  • Biophysics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The tongue coating detection system based on the convolutional neural network comprises a hardware platform and a remote service data terminal. The hardware platform is provided with physical operation keys, a power supply module, an image acquisition module, a data transmission module and a display screen. Driven by the physical operation keys, the power module supplies power to the whole hardware platform. The image acquisition module acquires and stores the tongue image picture and transmits it to the data transmission module. The data transmission module processes the tongue image picture and transmits it to the remote service data terminal through the narrow-band Internet of Things. The remote service data terminal classifies the processed tongue image picture based on the convolutional neural network, finds the matching tongue diagnosis report and medical guidance suggestion, and returns them through the narrow-band Internet of Things to the display screen for display. The tongue image detection system, which combines the convolutional neural network, narrow-band Internet of Things technology and image acquisition and processing technology, is simple to operate, gives reliable detection results, and provides a convenient way for people to learn about their own physical health.

Description

Tongue coating detection system based on convolutional neural network
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a tongue coating detection system based on a Convolutional Neural Network (CNN).
Background
The treasure house of Chinese civilization contains profound philosophical thought, cultural knowledge and economic and social resources, gathers the essence of rich traditional Chinese culture, and is regarded as the lifeblood and soul of the Chinese nation. The formation and development of traditional Chinese medicine (TCM) theory drew on the advanced concepts of Chinese culture, combined them organically with the understanding of human life and of the occurrence and development of disease, effectively safeguarded the life and prosperity of the Chinese nation, and plays an important role in carrying forward and developing excellent traditional Chinese culture. However, with the development of modern science and technology, Western medicine has become prevalent while the status of traditional Chinese medicine has weakened and its protection has been insufficient, so that many TCM theories and prescriptions are gradually being lost, which is an irreparable loss.
Tongue diagnosis is one of the most common diagnostic methods in TCM: by observing the tongue, a doctor can perceive the physiological and pathological changes of the human body and thereby understand the patient's physical condition. A healthy person's tongue should be soft and moist, pale red, with a white coating of moderate dryness and wetness, which is described as "pale red tongue with thin white coating". The tongue is connected with the viscera through the meridians and collaterals; the five zang organs all receive qi from the stomach, and stomach qi ascends to the tongue, so pathological changes of the viscera, whether cold or heat, deficiency or excess, can be reflected on the tongue. Therefore, by frequently observing the condition of the tongue, one can understand one's state of health and carry out self-care and conditioning in time. However, tongue diagnosis has its limitations: a doctor needs long-accumulated experience and systematic study before a correct diagnosis can be made through tongue diagnosis, and the diagnosis result is affected by the richness of the doctor's experience, the doctor's skill level, the external environmental conditions and the like, so subjectivity is strong and repeatability is poor, which restricts the development of tongue diagnosis in traditional Chinese medicine. The correctness of the diagnosis affects the result of the treatment, and the accuracy of treatment in turn affects the nation's cultural confidence in Chinese medicine. Moreover, patients sometimes cannot find a doctor for diagnosis and treatment in time, which may delay the illness. Therefore, making tongue diagnosis objective and widely accessible is urgent.
Among existing related designs, a tongue diagnosis auxiliary medical system based on comprehensive analysis of the tongue surface and the sublingual region (patent application number 201710126305.0) and a micro-cloud intelligent tongue diagnosis instrument (patent application number 201720341765.0) have been disclosed. These schemes show that existing tongue diagnosis instruments mainly assist doctors in diagnosis, and each instrument requires considerable operation by a doctor before the result can be analysed. Their disadvantages are as follows: the audience of such instruments is small, since they can only assist doctors in diagnosis and cannot be used by ordinary users; the tongue image cannot be photographed by a single person; and the instrument system cannot be updated over a network. For these reasons, the two kinds of tongue diagnosis instruments that mainly exist today are difficult to bring to market.
Disclosure of Invention
The invention aims to provide a tongue coating detection system based on a convolutional neural network which allows a single person to photograph the tongue without the participation of a doctor, is convenient and simple to operate, gives objective and reliable detection results, and is suitable for widespread use.
In order to realize the purpose, the invention adopts the following technical scheme:
tongue image detection system based on convolutional neural network, characterized by including:
a hardware platform and a remote service data terminal;
the hardware platform comprises a physical operation key, a power supply module, an image acquisition module, a data transmission module and a display screen which are connected in sequence;
the image acquisition module provides an instruction for guiding the patient to shoot by himself and acquires and stores tongue image pictures;
the data transmission module is used for processing the tongue image picture;
and the remote service data terminal is used for classifying tongue image pictures processed by the data transmission module (004) based on an algorithm of a convolutional neural network and matching the tongue image pictures to corresponding tongue diagnosis reports and medical guidance suggestions.
Further:
the physical operation key is used for receiving the start/stop instruction of the patient, controlling the power supply module and starting/stopping the power supply of the hardware platform.
Further:
the image acquisition module comprises a light source unit, a camera unit and a tongue image storage unit which are connected in sequence;
The light source unit is used for providing a light source required for acquiring a tongue image;
the camera shooting unit provides an instruction for guiding a patient to shoot by himself and shoots the tongue image picture;
the tongue image storage unit stores and transmits the tongue image picture to the data transmission module.
Further:
the data transmission module (004) comprises an image processing unit (103) and a first narrow-band Internet of things terminal module (104);
the remote service data terminal (001) comprises a second narrow-band Internet of things terminal module (102), a data processing toolkit (101) and a database which are connected in sequence;
the image processing unit (103), the first narrow-band Internet of things terminal module (104) and the display screen (105) are sequentially connected in a serial port coupling mode;
the second narrow-band Internet of things terminal module (102) is coupled and connected with the data processing tool kit (101) through a serial port, and is in communication connection with the first narrow-band Internet of things terminal module (104) through a narrow-band Internet of things;
the image processing unit (103) adjusts the format and size of the tongue image picture, and transmits the adjusted tongue image picture to the data processing toolkit (101) through a first narrow-band Internet of things terminal module (104) and a second narrow-band Internet of things terminal module (102) in sequence;
the data processing toolkit (101) classifies the tongue image picture adjusted by the image processing unit (103) based on the algorithm of a convolutional neural network and matches the tongue image picture with a corresponding tongue diagnosis report and a corresponding medical guidance suggestion in the database;
the display screen is used for displaying tongue picture, tongue diagnosis report and medical guidance suggestion.
Further:
the data processing toolkit comprises a deep learning model and a full connection network;
extracting effective characteristic vectors from the tongue image picture adjusted by the image processing unit by the deep learning model;
and the full-connection network classifies the effective characteristic vectors and outputs tongue image classification results.
Further:
the deep learning model adopts the deep convolutional Inception_v3 model.
Further:
the deep convolution inclusion _ v3 model was pre-trained with the ImageNet dataset.
Further:
the full-connection network adopts a single-hidden-layer feedforward neural network, which comprises an input layer, a hidden layer and an output layer;
the input layer has d input neurons, the hidden layer has q hidden units, and the output layer has l output units;
the threshold of the h-th hidden-layer neuron is γ_h, and the threshold of the j-th output-layer neuron is θ_j;
the connection weight between the i-th input neuron and the h-th hidden neuron is V_ih, and the connection weight between the h-th hidden neuron and the j-th output neuron is W_hj;
the hidden layer uses the ReLU activation function f1(x) = max(0, x), and the output layer uses the Softmax function f2(z_j) = exp(z_j) / Σ_k exp(z_k); the two layers are written f1(V^T x + γ) and f2(W^T B + θ) respectively, where V is a weight matrix of size d × q, x is a sample input vector of size d × 1, γ is a threshold vector of size q × 1, W is a weight matrix of size q × l, B = f1(V^T x + γ) is the hidden-layer output vector of size q × 1, and θ is a threshold vector of size l × 1;
the forward-propagation output of the convolutional neural network is y = f2(W^T B + θ), the vector of class probabilities predicted by the network, of size l × 1, while Y is the one-hot label vector of the sample;
the loss function is the cross entropy Loss = −[Y_1 log(y_1) + … + Y_i log(y_i) + … + Y_l log(y_l)]; a back propagation (BP) algorithm with gradient descent is used to minimize the loss function, the dropout technique from deep learning is used to reduce overfitting, and the optimal classification parameters of the model are found.
Further:
the tongue image classification results include normal tongue, cracked tongue, thick coating, prickled tongue, tooth-marked tongue, and peeling tongue coating.
The use method of the tongue image detection system based on the convolutional neural network comprises the following steps:
s1, turning on a power supply through a physical operation key to enable a hardware platform to enter a working state;
s2, selecting proper light source conditions, extending out the tongue according to the prompt of the image acquisition module, adjusting the tongue up and down and left and right, and selecting an optimal angle for shooting;
s3, storing the photographed tongue picture by using the image acquisition module and transmitting the tongue picture to the data transmission module;
s4, the image processing unit processes the tongue picture and transmits the processed tongue picture to a remote service data end through a narrow-band Internet of things;
s5, the remote service data end classifies the tongue picture processed by the data transmission module and matches the tongue picture with a corresponding tongue diagnosis report and a corresponding medical guidance suggestion in a database;
and S6, the remote service data end transmits the tongue diagnosis report and the medical guidance suggestion to the display screen for display through the narrow-band Internet of things.
The invention has the beneficial effects that:
Firstly, the tongue coating detection system sends the tongue image picture data to the remote service data terminal through the narrow-band Internet of Things (NB-IoT) module; the remote service data terminal classifies the picture and matches the corresponding tongue diagnosis report and medical guidance suggestion based on a convolutional neural network (CNN) algorithm, and the NB-IoT module then has the display screen show the user's tongue image picture together with the corresponding tongue diagnosis report and medical guidance suggestion, so that tongue coating diagnosis becomes objective and reliable. Secondly, with the tongue coating detection system of the invention a single person can photograph the tongue image and learn about his or her health from the corresponding tongue diagnosis report without the assistance of a doctor or anyone else, which greatly saves the time spent finding or waiting for a doctor, spares users who might otherwise avoid seeking medical help for fear of others learning of their condition, and helps users discover abnormal physical conditions in time. Thirdly, the tongue coating detection system of the invention returns corresponding medical guidance suggestions, which helps users understand their physical condition in time and take correct measures to restore their health.
Drawings
Fig. 1 is a schematic structural diagram of a tongue coating detection system based on a convolutional neural network in an embodiment of the present invention.
FIG. 2 is a flow chart of data processing of the data processing toolkit according to the embodiment of the present invention.
Fig. 3A to 3F are schematic diagrams of tongue images acquired in the embodiment of the present invention.
Fig. 4A to 4F are examples of tongue image effective feature maps extracted by the convolutional neural network-based algorithm in the embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a single hidden layer feedforward neural network according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a fully connected network according to an embodiment of the present invention.
FIG. 7 is a flowchart illustrating tongue picture processing according to an embodiment of the present invention.
FIG. 8 is a flowchart illustrating the operation of the tongue coating detection system based on the convolutional neural network according to the embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described in detail below. It should be emphasized that the following description is merely exemplary in nature and is not intended to limit the scope of the invention or its application.
It should be noted that the terms "first", "second", "left" and "right" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first," "second," "left," and "right" may explicitly or implicitly include one or more of the features.
With the continuous growth in the data throughput and computing power of computers and researchers' continued in-depth study of human visual patterns, it has become possible, through efficient algorithm design, to learn and train on large amounts of data and to combine machine learning algorithms with pathological images; at the same time, the emergence of deep learning algorithms has brought new vitality to image recognition. Using technologies such as convolutional neural networks and deep learning makes tongue diagnosis more objective and its results more accurate, and attaching relevant traditional Chinese medicine knowledge about tongue diagnosis means the form of tongue diagnosis is no longer limited to traditional TCM consultation. Applying a convolutional neural network to tongue image recognition allows automatic recognition through machine learning, thereby producing a more objective analysis result.
As shown in fig. 1, an embodiment of the present invention provides a tongue coating detection system based on a Convolutional Neural Network (CNN), the detection system includes a hardware platform 002 and a remote service data terminal 001, wherein the hardware platform 002 includes a physical operation key 005, a power module 006, an image acquisition module 003, a data transmission module 004 and a display screen 105, which are connected in sequence; the image acquisition module (003) comprises a light source unit 007, a camera unit 008 and a tongue image storage unit 009 which are connected in sequence; the data transmission module 004 comprises an image processing unit 103 and a first narrowband internet of things (NB-IOT) terminal module 104 which are connected in sequence; the remote service data terminal 001 includes a second narrowband internet of things (NB-IOT) terminal module 102, a data processing tool kit (SDK)101, and a database, which are connected in sequence. The image processing unit 103 is coupled with the first narrowband internet of things (NB-IOT) terminal module 104 and the display screen 105 through serial ports, and the second narrowband internet of things (NB-IOT) terminal module 102 is coupled with the data processing kit (SDK)101 through serial ports and is in communication connection with the first narrowband internet of things (NB-IOT) terminal module 104 of the data transmission module 004, so that the interactive communication between the hardware platform 002 and the remote service data terminal 001 is realized.
The tongue image storage unit 009 may locally transmit the image information to the image processing unit 103 of the data transmission module 004. The physical operation button 005 is used for receiving a start/stop instruction of the patient, controlling the power module 006 to start/stop the power supply of the whole hardware platform 002, so that the hardware platform 002 starts/stops working. The power module 006 is used for supplying power to the image acquisition module 003, the data transmission module 004 and the display screen 105, so that each part of the hardware platform 002 normally works in an open state. The image acquisition module 003 is used for providing instructions for guiding the patient to take pictures by himself and acquiring, storing and transmitting the picture of the tongue image of the user to the image processing unit 103 of the data transmission module 004. The image processing unit 103 of the data transmission module 004 is configured to adjust the format and size of the tongue image picture, and transmit the adjusted tongue image picture to the second narrowband internet of things (NB-IOT) terminal module 102 of the remote service data terminal 001 through the first narrowband internet of things (NB-IOT) terminal module 104, and the second narrowband internet of things (NB-IOT) terminal module 102 transmits the adjusted tongue image picture to the data processing kit (SDK) 101. After receiving the adjusted tongue image picture transmitted by the second narrowband internet of things (NB-IOT) terminal module 102, the data processing kit (SDK)101 first classifies the adjusted tongue image picture by using a Convolutional Neural Network (CNN) -based algorithm to obtain a tongue image classification result; the tongue image classification result is then matched with the database, and the matched tongue diagnosis report and medical guidance suggestion are found and returned to the first narrowband internet of things (NB-IOT) terminal module 104 through the second narrowband internet of things (NB-IOT) terminal module 102. The display screen 105 is configured to receive a tongue diagnosis report and a medical guidance suggestion received by the first narrowband internet of things (NB-IOT) terminal module 104 from the second narrowband internet of things (NB-IOT) terminal module 102, and display a tongue image picture and a corresponding tongue diagnosis report and a corresponding medical guidance suggestion. Preferably, the system may employ databases of hospitals and related medical facilities.
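A minimal sketch, assuming Python with the Pillow library, of the kind of format and size adjustment the image processing unit 103 could apply before the picture is sent over the narrow-band Internet of Things; the 299 × 299 target size (the Inception_v3 input size) and the JPEG re-encoding are assumptions for illustration, not values fixed by this description.

```python
from io import BytesIO
from PIL import Image

def adjust_tongue_picture(raw_bytes: bytes, size=(299, 299)) -> bytes:
    """Decode a captured tongue photo, fix its colour mode and size, re-encode as JPEG."""
    img = Image.open(BytesIO(raw_bytes)).convert("RGB")   # unify the colour format
    img = img.resize(size, Image.BILINEAR)                # fixed input size for the CNN
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=85)              # compact payload for NB-IoT transfer
    return buf.getvalue()
```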
As shown in fig. 8, an embodiment of the present invention provides a tongue coating detection system based on a Convolutional Neural Network (CNN), and the working flow of the system is as follows:
Firstly, the patient presses the physical operation key to start the power supply module; the image acquisition module then acquires and stores the patient's tongue image picture; the data transmission module performs preliminary processing on the acquired tongue image picture and transmits it to the remote service data terminal through the narrow-band Internet of Things; the remote service data terminal classifies the preliminarily processed tongue image picture based on a convolutional neural network algorithm, matches the corresponding tongue diagnosis report and medical guidance suggestion according to the tongue image classification result, and finally transmits the tongue diagnosis report and medical guidance suggestion to the display screen through the narrow-band Internet of Things. The patient can then view his or her own tongue image picture, tongue diagnosis report and medical guidance advice on the display screen.
As shown in fig. 7, a specific processing flow of a tongue image picture in a tongue coating detection system based on a Convolutional Neural Network (CNN) provided by the embodiment of the present invention is as follows:
firstly, tongue image collection is carried out, collected tongue image pictures are input into a tongue image processing unit for preliminary processing, and then are transmitted to a data processing toolkit through a narrow-band Internet of things, the data processing toolkit classifies the tongue image pictures after the preliminary processing based on an algorithm of a convolutional neural network, and finally, a tongue image classification result is obtained.
In a preferred embodiment, as shown in fig. 6, the left side of fig. 6 is a trained deep convolutional Inception_v3 model; this network stores the fitted network parameters, has the capability of extracting strong features from an image, and is used as the depth model to extract features from the small-sample tongue images. The right side is a fully connected neural network; a 3-layer fully connected network is used, comprising an input layer, a hidden layer and an output layer. The overall model, Inception_v3 + 3NN, is the Inception_v3 model + dense1(2048, ReLU) + dense2(1024, ReLU) + dense3(6, Softmax); the fully connected network is fine-tuned on the extracted strong features for small-sample tongue image classification.
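A hedged sketch of the Inception_v3 + 3NN structure named above (Inception_v3 base + dense1(2048, ReLU) + dense2(1024, ReLU) + dense3(6, Softmax)), written against the Keras API; freezing the convolutional base, the dropout rate and the optimiser are assumptions rather than details fixed by this embodiment.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False                         # keep the pre-trained convolutional parameters fixed

model = models.Sequential([
    base,                                      # emits a 2048-dimensional feature vector per image
    layers.Dense(2048, activation="relu"),     # dense1
    layers.Dropout(0.5),                       # dropout against overfitting (assumed rate)
    layers.Dense(1024, activation="relu"),     # dense2
    layers.Dense(6, activation="softmax"),     # dense3: six tongue image classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",  # cross-entropy loss, as in the formulas below
              metrics=["accuracy"])
```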
In a preferred embodiment, the adjusted tongue picture is processed in the data processing kit (SDK) 101, as shown in fig. 2. Firstly, the adjusted tongue picture is input into the deep convolutional Inception_v3 model; trained on the ImageNet dataset, the Inception_v3 model is capable of extracting low-level features (such as points, lines, curves and edges) as well as abstract features of the image. Using the ideas of model transfer and model fine-tuning, the knowledge of the Inception_v3 model pre-trained on the ImageNet dataset (all stored convolutional-layer parameters) is transferred to the tongue image classification task, so that effective feature vectors are extracted from the tongue image, the required amount of tongue image training samples and the model training time are reduced, and the classification accuracy is improved. The data processed by the Inception_v3 model is then input into the fully connected network.
In a specific embodiment, as shown in figs. 3A to 3F, tongue images of a normal tongue, a cracked tongue, a thick coating, a prickled tongue, a tooth-marked tongue and a peeling tongue coating are obtained in sequence; each image is input into the trained Inception_v3 model to obtain a 2048-dimensional feature vector, which is visualized as an effective feature map with a pixel size of 32 × 64, as shown in figs. 4A to 4F. The feature maps in figs. 4A to 4F correspond one-to-one, in sequence, to the tongue image pictures in figs. 3A to 3F, and the corresponding tongue image classification results are output after small-sample tongue image classification by the fully connected neural network.
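A minimal sketch, assuming the Keras implementation of Inception_v3 and matplotlib, of obtaining the 2048-dimensional feature vector of one tongue image and rendering it as the 32 × 64 effective feature map of figs. 4A to 4F; the file name, preprocessing and colour map are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input
from tensorflow.keras.preprocessing import image

extractor = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

img = image.load_img("tongue_sample.jpg", target_size=(299, 299))   # hypothetical file name
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
features = extractor.predict(x)[0]                    # 2048-dimensional feature vector

plt.imshow(features.reshape(32, 64), cmap="viridis")  # 32 * 64 = 2048
plt.axis("off")
plt.title("Effective feature map of one tongue image")
plt.show()
```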
In a preferred embodiment, as shown in fig. 5, the fully connected network employs a multi-layer feedforward neural network, preferably a single-hidden-layer feedforward neural network. The network has d input neurons, q hidden-layer units and l output units. The threshold of the h-th hidden-layer neuron is γ_h, and the threshold of the j-th output-layer neuron is θ_j; the connection weight between the i-th input neuron and the h-th hidden neuron is V_ih, and the connection weight between the h-th hidden neuron and the j-th output neuron is W_hj. The hidden layer uses the ReLU activation function f1(x) = max(0, x), and the output layer uses the Softmax function f2(z_j) = exp(z_j) / Σ_k exp(z_k); the two layers are written f1(V^T x + γ) and f2(W^T B + θ) respectively, where V is a weight matrix of size d × q, x is a sample input vector of size d × 1, γ is a threshold vector of size q × 1, W is a weight matrix of size q × l, B = f1(V^T x + γ) is the hidden-layer output vector of size q × 1, and θ is a threshold vector of size l × 1. The forward-propagation output of the network is y = f2(W^T B + θ), the vector of class probabilities predicted by the network, of size l × 1, while Y is the one-hot label vector of the sample. The loss function is the cross entropy Loss = −[Y_1 log(y_1) + … + Y_i log(y_i) + … + Y_l log(y_l)]; a back propagation (BP) algorithm with gradient descent is used to minimize the loss function, the dropout technique from deep learning is used to reduce overfitting, and the optimal classification parameters of the model are found.
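A NumPy sketch of the single-hidden-layer network defined by the formulas above (ReLU hidden layer, Softmax output layer, cross-entropy loss). Only the forward pass and the loss are shown; the dimensions d, q, l, the random initialisation and the sample label are illustrative assumptions, and the BP/gradient-descent/dropout training loop is omitted.

```python
import numpy as np

d, q, l = 2048, 1024, 6                       # assumed input, hidden and output sizes
rng = np.random.default_rng(0)
V, gamma = rng.normal(0.0, 0.01, (d, q)), np.zeros(q)   # input->hidden weights and thresholds
W, theta = rng.normal(0.0, 0.01, (q, l)), np.zeros(l)   # hidden->output weights and thresholds

def f1(z):                                    # ReLU activation of the hidden layer
    return np.maximum(0.0, z)

def f2(z):                                    # Softmax of the output layer
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x):
    B = f1(V.T @ x + gamma)                   # hidden-layer output B = f1(V^T x + gamma)
    return f2(W.T @ B + theta)                # predicted probabilities y = f2(W^T B + theta)

def cross_entropy(Y, y):                      # Loss = -sum_i Y_i log(y_i)
    return -np.sum(Y * np.log(y + 1e-12))

x = rng.random(d)                             # e.g. one Inception_v3 feature vector
Y = np.eye(l)[2]                              # one-hot label of an assumed class
print(cross_entropy(Y, forward(x)))
```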
In a preferred embodiment, the light source unit 007 is used for providing a light source required for acquiring an image, and suitable light source conditions can be selected according to specific environmental conditions. The near-infrared standard light source is used for shooting sublingual images, and the visible light standard light source is used for shooting lingual images. Preferably, the light source unit 007 is a visible-near infrared light tungsten halogen lamp light source.
In a preferred embodiment, the camera unit 008 is used for a user to take a tongue image under appropriate light source conditions.
In a preferred embodiment, the tongue image storage unit 009 is used to store the user tongue image photographed by the camera unit 008 and transmit it to the image processing unit 103 of the data transmission module 004. Preferably, the tongue image storage unit 009 adopts the Cypress (CYPRESS) static random access memory (SRAM) model CY7C1482V33; the CY7C1482V33 has a storage capacity of 72 Mb, can be configured as 2 M × 36 bits, 4 M × 18 bits or 1 M × 72 bits, can store 2 images simultaneously, and supports read and write speeds up to 250 MHz.
As shown in fig. 1, an embodiment of the present invention further provides an operation method of a tongue coating detection system based on a Convolutional Neural Network (CNN), where the use method of the detection system includes the following steps:
s1, a power supply is turned on through a physical operation key 005, so that a hardware platform 002 enters a working state;
s2, selecting proper light source conditions, extending the tongue out according to the prompt of the image acquisition module 003, adjusting the tongue up and down and left and right, selecting an optimal angle and shooting through the image acquisition module 003;
s3, the image acquisition module 003 stores the photographed tongue picture and transmits the same to the data transmission module 004;
s4, the data transmission module 004 processes the tongue picture and transmits the processed tongue picture to a remote service data terminal 001 through a narrow-band Internet of things;
s5, the remote service data terminal 001 classifies the tongue image pictures processed by the data transmission module 004 based on the algorithm of the convolutional neural network and matches the tongue diagnosis report and the medical guidance suggestion corresponding to the tongue diagnosis report and the medical guidance suggestion in a database;
and S6, the remote service data terminal 001 transmits the tongue diagnosis report and the medical guidance suggestion to the display screen (105) through the narrow-band Internet of things for display.
The background of the present invention may contain background information related to the problem or environment of the present invention and does not necessarily describe the prior art. Accordingly, the inclusion in the background section is not an admission of prior art by the applicant.
The foregoing is a more detailed description of the invention in connection with specific/preferred embodiments and is not intended to limit the practice of the invention to those descriptions. It will be apparent to those skilled in the art that various substitutions and modifications can be made to the described embodiments without departing from the spirit of the invention, and these substitutions and modifications should be considered to fall within the scope of the invention. In the description herein, references to the description of the terms "embodiment," "preferred embodiment," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Various embodiments or examples and features of various embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction. Although embodiments of the present invention and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope of the claims.

Claims (10)

1. Tongue coating detection system based on convolutional neural network, characterized by comprising:
a hardware platform (002) and a remote service data terminal (001);
the hardware platform (002) comprises a physical operation key (005), a power module (006), an image acquisition module (003), a data transmission module (004) and a display screen (105) which are connected in sequence;
the image acquisition module (003) is used for providing an instruction for guiding a patient to shoot by himself and acquiring and storing tongue image pictures;
the data transmission module (004) is used for processing the tongue image picture;
the remote service data terminal (001) is used for classifying tongue image pictures processed by the data transmission module (004) based on an algorithm of a convolutional neural network and matching the tongue image pictures to corresponding tongue diagnosis reports and medical guidance suggestions.
2. The convolutional neural network based tongue coating detection system as claimed in claim 1, wherein the physical operation button (005) is used for receiving the patient activation/deactivation instruction, controlling the power module (006) to activate/deactivate the power supply of the hardware platform (002).
3. The convolutional neural network-based tongue coating detection system as claimed in claim 1,
the image acquisition module (003) comprises a light source unit (007), a camera unit (008) and a tongue image storage unit (009) which are connected in sequence;
the light source unit (007) provides a light source required for acquiring the tongue image picture;
the camera shooting unit (008) provides an instruction for guiding the patient to shoot by himself and shoots the tongue image picture;
the tongue image storage unit (009) stores and transmits the tongue image picture to the data transmission module (004).
4. The convolutional neural network-based tongue coating detection system as claimed in claim 1,
the data transmission module (004) comprises an image processing unit (103) and a first narrow-band Internet of things terminal module (104);
the remote service data terminal (001) comprises a second narrow-band Internet of things terminal module (102), a data processing toolkit (101) and a database which are connected in sequence;
the image processing unit (103), the first narrow-band Internet of things terminal module (104) and the display screen (105) are sequentially connected in a serial port coupling mode;
the second narrow-band Internet of things terminal module (102) is coupled and connected with the data processing tool kit (101) through a serial port, and is in communication connection with the first narrow-band Internet of things terminal module (104) through a narrow-band Internet of things;
the image processing unit (103) adjusts the format and size of the tongue image picture, and transmits the adjusted tongue image picture to the data processing toolkit (101) through a first narrow-band Internet of things terminal module (104) and a second narrow-band Internet of things terminal module (102) in sequence;
the data processing toolkit (101) classifies the tongue image picture adjusted by the image processing unit (103) based on the algorithm of a convolutional neural network and matches the tongue image picture with a corresponding tongue diagnosis report and a corresponding medical guidance suggestion in the database;
the display screen (105) is used for displaying the tongue image picture, the tongue diagnosis report and the medical guidance suggestion.
5. The convolutional neural network-based tongue coating detection system of claim 4, wherein the data processing toolkit (101) comprises a deep learning model and a fully connected network;
the deep learning model extracts effective characteristic vectors from the tongue image picture adjusted by the image processing unit (103);
and the full-connection network classifies the effective characteristic vectors and outputs tongue image classification results.
6. The convolutional neural network-based tongue coating detection system of claim 5, wherein the deep learning model adopts the deep convolutional Inception_v3 model.
7. The convolutional neural network-based tongue coating detection system of claim 6, wherein the deep convolutional Inception_v3 model is pre-trained on the ImageNet dataset.
8. The convolutional neural network-based tongue coating detection system of claim 5, wherein the fully connected network employs a single-hidden-layer feedforward neural network comprising an input layer, a hidden layer and an output layer;
the input layer has d input neurons, the hidden layer has q hidden units, and the output layer has l output units;
the threshold of the h-th hidden-layer neuron is γ_h, and the threshold of the j-th output-layer neuron is θ_j;
the connection weight between the i-th input neuron and the h-th hidden neuron is V_ih, and the connection weight between the h-th hidden neuron and the j-th output neuron is W_hj;
the hidden layer uses the ReLU activation function f1(x) = max(0, x), and the output layer uses the Softmax function f2(z_j) = exp(z_j) / Σ_k exp(z_k); the two layers are written f1(V^T x + γ) and f2(W^T B + θ) respectively, where V is a weight matrix of size d × q, x is a sample input vector of size d × 1, γ is a threshold vector of size q × 1, W is a weight matrix of size q × l, B = f1(V^T x + γ) is the hidden-layer output vector of size q × 1, and θ is a threshold vector of size l × 1;
the forward-propagation output of the convolutional neural network is y = f2(W^T B + θ), the vector of class probabilities predicted by the network, of size l × 1, while Y is the one-hot label vector of the sample;
the loss function is the cross entropy Loss = −[Y_1 log(y_1) + … + Y_i log(y_i) + … + Y_l log(y_l)], and a back propagation (BP) algorithm, gradient descent minimization of the loss function and the dropout technique from deep learning are adopted to reduce overfitting and find the optimal model classification parameters.
9. The convolutional neural network-based tongue coating detection system of any one of claims 5 to 7, wherein the tongue image classification result comprises normal, cracked tongue, thick coating, barbed tongue, tooth-mark tongue, and tongue coating flaking.
10. The use method of the tongue image detection system based on the convolutional neural network is characterized by comprising the following steps of:
s1, turning on a power supply through a physical operation key (005) to enable a hardware platform (002) to enter a working state;
s2, selecting proper light source conditions, extending the tongue out according to the prompt of the image acquisition module (003), adjusting the tongue up and down and left and right, selecting an optimal angle and shooting through the image acquisition module (003);
s3, the image acquisition module (003) stores the photographed tongue image picture and transmits the picture to the data transmission module (004);
s4, the data transmission module (004) processes the tongue image picture and transmits the processed tongue image picture to a remote service data end (001) through a narrow-band Internet of things;
s5, the remote service data terminal (001) classifies tongue image pictures processed by the data transmission module (004) based on an algorithm of a convolutional neural network and matches the tongue image pictures with corresponding tongue diagnosis reports and medical guidance suggestions in a database;
and S6, the remote service data terminal (001) transmits the tongue diagnosis report and the medical guidance suggestion to the display screen (105) through the narrow-band Internet of things for display.
CN202110357271.2A 2021-04-01 2021-04-01 Tongue coating detection system based on convolutional neural network Pending CN113129277A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110357271.2A CN113129277A (en) 2021-04-01 2021-04-01 Tongue coating detection system based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110357271.2A CN113129277A (en) 2021-04-01 2021-04-01 Tongue coating detection system based on convolutional neural network

Publications (1)

Publication Number Publication Date
CN113129277A true CN113129277A (en) 2021-07-16

Family

ID=76774685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110357271.2A Pending CN113129277A (en) 2021-04-01 2021-04-01 Tongue coating detection system based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN113129277A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106372390A (en) * 2016-08-25 2017-02-01 姹ゅ钩 Deep convolutional neural network-based lung cancer preventing self-service health cloud service system
CN207799406U (en) * 2018-01-16 2018-08-31 中铁四局集团有限公司 A kind of municipal mud-processing equipment remote monitoring system based on NB-IoT
CN109685088A (en) * 2017-10-18 2019-04-26 上海仪电(集团)有限公司中央研究院 Narrow band communication intelligent image analysis system based on cloud separation convolutional neural networks
CN109700433A (en) * 2018-12-28 2019-05-03 深圳铁盒子文化科技发展有限公司 A kind of tongue picture diagnostic system and lingual diagnosis mobile terminal
CN110097107A (en) * 2019-04-23 2019-08-06 安徽大学 Alternaria mali roberts disease recognition and classification method based on convolutional neural networks

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106372390A (en) * 2016-08-25 2017-02-01 姹ゅ钩 Deep convolutional neural network-based lung cancer preventing self-service health cloud service system
CN109685088A (en) * 2017-10-18 2019-04-26 上海仪电(集团)有限公司中央研究院 Narrow band communication intelligent image analysis system based on cloud separation convolutional neural networks
CN207799406U (en) * 2018-01-16 2018-08-31 中铁四局集团有限公司 A kind of municipal mud-processing equipment remote monitoring system based on NB-IoT
CN109700433A (en) * 2018-12-28 2019-05-03 深圳铁盒子文化科技发展有限公司 A kind of tongue picture diagnostic system and lingual diagnosis mobile terminal
CN110097107A (en) * 2019-04-23 2019-08-06 安徽大学 Alternaria mali roberts disease recognition and classification method based on convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨晶东 (Yang Jingdong) et al.: "Fully connected neural network tongue image classification method based on transfer learning", Academic Journal of Second Military Medical University (《第二军医大学学报》) *

Similar Documents

Publication Publication Date Title
CN111340819B (en) Image segmentation method, device and storage medium
US11610310B2 (en) Method, apparatus, system, and storage medium for recognizing medical image
CN110619962B (en) Doctor-patient sharing network medical service system
CN106295186B (en) A kind of system of the aided disease diagnosis based on intelligent inference
CN110390674B (en) Image processing method, device, storage medium, equipment and system
CN109948671B (en) Image classification method, device, storage medium and endoscopic imaging equipment
CN108416065A (en) Image based on level neural network-sentence description generates system and method
Huan et al. Deep convolutional neural networks for classifying body constitution based on face image
Zhang et al. Hybrid graph convolutional network for semi-supervised retinal image classification
CN116189884B (en) Multi-mode fusion traditional Chinese medicine physique judging method and system based on facial vision
CN108877923A (en) A method of the tongue fur based on deep learning generates prescriptions of traditional Chinese medicine
CN106725341A (en) A kind of enhanced lingual diagnosis system
CN115147376A (en) Skin lesion intelligent identification method based on deep Bayesian distillation network
CN117316369B (en) Chest image diagnosis report automatic generation method for balancing cross-mode information
Gavrilov et al. Deep learning based skin lesions diagnosis
CN113129277A (en) Tongue coating detection system based on convolutional neural network
CN107341189A (en) A kind of indirect labor carries out the method and system of examination, classification and storage to image
CN109711306B (en) Method and equipment for obtaining facial features based on deep convolutional neural network
CN108846327A (en) A kind of intelligent distinguishing system and method for mole and melanoma
Lopez-Tiro et al. On the in vivo recognition of kidney stones using machine learning
EP4344642A1 (en) Computer-implemented method for setting x-ray acquisition parameters
Li et al. GL-FusionNet: Fusing global and local features to classify deep and superficial partial thickness burn
CN117352133A (en) Multi-mode data-based multi-task combined learning traditional Chinese medicine virtual-actual functional state identification method
CN118039057B (en) Household health service robot based on multi-mode large model and intelligent interaction method
WO2023181417A1 (en) Imaging device, program, and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210716