CN109199334B - Tongue picture constitution identification method and device based on deep neural network - Google Patents


Publication number: CN109199334B (granted from application CN201811143472.7A)
Authority: CN (China)
Prior art keywords: tongue picture, tongue, picture, detected, probability
Legal status: Active
Application number: CN201811143472.7A
Other languages: Chinese (zh)
Other versions: CN109199334A
Inventors: 甘少敏, 伍梓境
Current assignee: Xiaowu Health Technology Shanghai Co., Ltd.
Original assignee: Xiaowu Health Technology Shanghai Co., Ltd.
Application CN201811143472.7A filed by Xiaowu Health Technology Shanghai Co., Ltd.
Publication of application CN109199334A
Grant and publication of CN109199334B

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/45: For evaluating or diagnosing the musculoskeletal system or teeth
    • A61B5/4538: Evaluating a particular part of the musculoskeletal system or a particular medical condition
    • A61B5/4542: Evaluating the mouth, e.g. the jaw
    • A61B5/4552: Evaluating soft tissue within the mouth, e.g. gums or tongue
    • A61B5/48: Other medical applications
    • A61B5/4854: Diagnosis based on concepts of traditional oriental medicine
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent

Abstract

The invention discloses a tongue picture constitution identification method and device based on a deep neural network. The method comprises: training 1 tongue picture recognition model, 1 tongue picture constitution model and 5 tongue picture characteristic models through a deep neural network algorithm; acquiring a tongue picture to be detected; identifying, through the tongue picture recognition model, whether the picture to be detected is a tongue picture; if so, calculating through the tongue picture constitution model the respective probabilities of the 11 constitutions represented by the picture; calculating through the 5 tongue picture characteristic models the respective probabilities of the 5 tongue picture characteristics represented by the picture; combining the sub-characteristic with the highest probability within each of the 5 characteristics into a characteristic combination and determining the several constitutions corresponding to that combination; and taking the constitution with the highest probability among those constitutions as the constitution represented by the tongue picture to be detected. With the method and device, the constitution type can be identified automatically from a tongue picture, making it convenient for users to check their own constitution.

Description

Tongue picture constitution identification method and device based on deep neural network
Technical Field
The invention relates to the technical field of machine vision applied to traditional Chinese medicine constitution identification, and in particular to a tongue picture constitution identification method and device based on a deep neural network.
Background
Constitution is an important manifestation of human life activity. It refers to the comprehensive, relatively stable set of inherent traits of morphological structure, physiological function and psychological state that forms over the course of a person's life on the basis of innate endowment and acquired influences. The discussion of constitution in traditional Chinese medicine (TCM) begins with the Huangdi Neijing (Yellow Emperor's Inner Canon) of the Western Han dynasty, and TCM constitution theory gradually took shape over the following two thousand years. It takes the living individual as its object of study and investigates the constitutional features, evolution rules, influencing factors and classification standards of different constitutions, so as to guide the prevention, diagnosis, treatment, rehabilitation and health preservation of disease. In 2009 the standard Classification and Determination of Constitution in TCM was issued, dividing constitutions into nine types: balanced constitution, qi deficiency, yang deficiency, yin deficiency, phlegm-dampness, damp-heat, blood stasis, qi stagnation and special constitution. Although this standard offers a degree of guidance, universality and common reference, identifying a constitution in actual clinical practice usually requires an experienced TCM practitioner, and an individual assessing his or her own constitution must answer a questionnaire of more than 60 items. Because the answers to those items are subjective, with respondents subconsciously tending to choose the answers they consider favourable, the final judgment is often incorrect.
Over its thousands of years of development, traditional Chinese medicine has accumulated many diagnostic methods, of which inspection is the most important part of TCM diagnostics. In recent years academic institutions have proposed using information technology to assist TCM constitution identification based on tongue diagnosis. A commonly used approach employs special photographic equipment, crops the captured image, extracts picture features one by one, and classifies them according to the nine-constitution scheme. Because that scheme was originally designed for questionnaires, distinguishing constitutions by the image features of the tongue often produces overlapping features. At the same time, the traditional machine-vision classification methods applied to this task, such as colour-mode recognition, RGB-model recognition and classical texture analysis, have not performed well.
With the development of hardware and big-data technology, artificial intelligence and neural networks have advanced greatly in recent years. By building a layered model structure similar to that of the human brain, deep learning extracts features from input data progressively from low level to high level, and can thus establish a good mapping from low-level signals to high-level semantics. Its essence is to learn more useful features by constructing machine-learning models with many hidden layers and training them on massive data, so as ultimately to improve the accuracy of classification or prediction. Deep learning is grounded in big data, using it to learn features and fully mine the rich information contained in massive amounts of data.
Disclosure of Invention
The invention provides a tongue picture constitution identification method and device based on a deep neural network, which can automatically identify the constitution type from a tongue picture and make it convenient for users to check their own constitution type.
According to one aspect of the invention, a tongue picture constitution identification method based on a deep neural network is provided, comprising the following steps: training 1 tongue picture recognition model, 1 tongue picture constitution model and 5 tongue picture characteristic models through a deep neural network algorithm, wherein the tongue constitutions comprise 11 constitutions; acquiring a tongue picture to be detected; identifying, through the tongue picture recognition model, whether the picture to be detected is a tongue picture; if so, calculating through the tongue picture constitution model the respective probabilities of the 11 constitutions represented by the tongue picture to be detected; calculating through the 5 tongue picture characteristic models the respective probabilities of the 5 tongue picture characteristics represented by the picture; according to those probabilities, combining the sub-characteristic with the highest probability within each of the 5 characteristics into a characteristic combination and determining the several constitutions corresponding to that combination; and, according to the probabilities of the 11 constitutions, taking the constitution with the highest probability among those constitutions as the constitution represented by the tongue picture to be detected.
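The claimed sequence of steps can be sketched as follows. This is a minimal illustration, not the patent's implementation: the model objects, their `is_tongue()`/`predict()` interfaces, and the `combo_table` mapping a characteristic combination to its candidate constitutions are all hypothetical names introduced here.

```python
from typing import Dict, List, Optional, Sequence, Tuple

def identify_constitution(image,
                          recognition_model,
                          constitution_model,
                          feature_models: Sequence,
                          combo_table: Dict[Tuple[int, ...], List[int]]) -> Optional[int]:
    """Return the index (0-10) of the constitution represented by the tongue
    picture, or None when the picture is not recognised as a tongue picture."""
    # Identify whether the picture to be detected is a tongue picture at all.
    if not recognition_model.is_tongue(image):
        return None
    # Probabilities of the 11 constitutions represented by the picture.
    p_const = constitution_model.predict(image)          # length-11 sequence
    # Probabilities of the sub-characteristics of each of the 5 characteristics.
    feature_probs = [m.predict(image) for m in feature_models]
    # Characteristic combination: the most probable sub-characteristic of each.
    combo = tuple(max(range(len(p)), key=p.__getitem__) for p in feature_probs)
    # The combination corresponds to several candidate constitutions.
    candidates = combo_table[combo]
    # Among the candidates, pick the constitution with the highest probability.
    return max(candidates, key=lambda c: p_const[c])
```

In a real device the lookup table would cover every reachable combination; here a single entry suffices to exercise the control flow.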
Preferably, after the tongue picture to be detected is obtained and before it is submitted to the tongue picture recognition model, the method further comprises the following steps: judging whether the tongue picture to be detected contains a complete face; if not, identifying whether the pixel size of the tongue picture to be detected is larger than 244×244; and if so, adjusting the pixel size of the tongue picture to be detected to 244×244.
Preferably, identifying through the tongue picture recognition model whether the tongue picture to be detected is a tongue picture comprises the following steps: calculating, through the tongue picture recognition model using a CNN algorithm, the probability that the picture to be detected is a tongue picture and the probability that it is not; comparing the two probabilities; if the probability that it is a tongue picture is greater, determining that the tongue picture to be detected is a tongue picture; if not, determining that it is not a tongue picture.
Preferably, calculating through the tongue picture constitution model the probabilities of the 11 constitutions represented by the tongue picture to be detected comprises the following steps: adjusting the tongue picture to be detected into a square; shifting the square picture by 5-10 pixels to the left, right, up and down respectively to obtain a left-shifted, right-shifted, up-shifted and down-shifted tongue picture; after initializing the tongue picture constitution model parameters, calculating through the model using a CNN algorithm the probabilities of the 11 constitutions represented by each of the original and four shifted tongue pictures; and obtaining, from the resulting 5 groups of probability values for each of the 11 constitutions, the probability of each constitution represented by the tongue picture to be detected.
Preferably, calculating through the 5 tongue picture characteristic models the probabilities of the 5 tongue picture characteristics represented by the tongue picture to be detected comprises the following steps: adjusting the tongue picture to be detected into a square; shifting the square picture by 5-10 pixels to the left, right, up and down respectively to obtain a left-shifted, right-shifted, up-shifted and down-shifted tongue picture; after initializing the tongue picture characteristic model parameters, calculating through each characteristic model using a CNN algorithm the probabilities of the 5 tongue picture characteristics represented by each of the original and four shifted tongue pictures; and obtaining, from the resulting 5 groups of probability values for each of the 5 characteristics, the probabilities of the 5 tongue picture characteristics represented by the tongue picture to be detected.
According to another aspect of the present invention, there is also provided a tongue picture constitution identification device based on a deep neural network, comprising: a model acquisition unit, configured to train 1 tongue picture recognition model, 1 tongue picture constitution model and 5 tongue picture characteristic models through a deep neural network algorithm, wherein the tongue constitutions comprise 11 constitutions; a picture acquisition unit, configured to acquire a tongue picture to be detected; a picture identification unit, configured to identify through the tongue picture recognition model whether the tongue picture to be detected is a tongue picture; a first calculation unit, configured to calculate through the tongue picture constitution model the probabilities of the 11 constitutions represented by the tongue picture to be detected when the picture identification unit identifies that it is a tongue picture; a second calculation unit, configured to calculate through the 5 tongue picture characteristic models the probabilities of the 5 tongue picture characteristics represented by the tongue picture to be detected when the picture identification unit identifies that it is a tongue picture; a characteristic combination determining unit, configured to combine, according to the probabilities of the 5 tongue picture characteristics, the sub-characteristic with the highest probability within each characteristic into a characteristic combination and to determine the several constitutions corresponding to that combination; and a constitution determining unit, configured to take, according to the probabilities of the 11 constitutions, the constitution with the highest probability among those constitutions as the constitution represented by the tongue picture to be detected.
Preferably, the tongue picture constitution identification device based on a deep neural network further comprises: a judging unit, configured to judge, after the picture acquisition unit acquires the tongue picture to be detected and before the picture identification unit identifies it through the tongue picture recognition model, whether the tongue picture to be detected contains a complete face; a pixel identification unit, configured to identify whether the pixel size of the tongue picture to be detected is larger than 244×244 when the judging unit judges that it does not contain a complete face; and a pixel adjusting unit, configured to adjust the pixel size of the tongue picture to be detected to 244×244 when the pixel identification unit identifies that it is larger than 244×244.
Preferably, the picture identification unit comprises: a calculation subunit, configured to calculate, through the tongue picture recognition model using a CNN algorithm, the probability that the tongue picture to be detected is a tongue picture and the probability that it is not; a comparison subunit, configured to compare the two probabilities; a first determination subunit, configured to determine that the tongue picture to be detected is a tongue picture when the comparison subunit finds the former probability to be greater; and a second determination subunit, configured to determine that the tongue picture to be detected is not a tongue picture when the comparison subunit finds the former probability to be smaller.
Preferably, the first calculation unit comprises: a first adjusting subunit, configured to adjust the tongue picture to be detected into a square; a first cutting subunit, configured to shift the square picture by 5-10 pixels to the left, right, up and down respectively to obtain a left-shifted, right-shifted, up-shifted and down-shifted tongue picture; a first calculation subunit, configured to calculate, after the tongue picture constitution model parameters are initialized, through the model using a CNN algorithm the probabilities of the 11 constitutions represented by each of the original and four shifted tongue pictures; and a first probability obtaining subunit, configured to obtain, from the resulting 5 groups of probability values for each of the 11 constitutions, the probability of each constitution represented by the tongue picture to be detected.
Preferably, the second calculation unit comprises: a second adjusting subunit, configured to adjust the tongue picture to be detected into a square; a second cutting subunit, configured to shift the square picture by 5-10 pixels to the left, right, up and down respectively to obtain a left-shifted, right-shifted, up-shifted and down-shifted tongue picture; a second calculation subunit, configured to calculate, after the tongue picture characteristic model parameters are initialized, through each characteristic model using a CNN algorithm the probabilities of the 5 tongue picture characteristics represented by each of the original and four shifted tongue pictures; and a second probability obtaining subunit, configured to obtain, from the resulting 5 groups of probability values for each of the 5 characteristics, the probabilities of the 5 tongue picture characteristics represented by the tongue picture to be detected.
Compared with the prior art, the invention has the following beneficial effects:
According to the invention, 1 tongue picture recognition model, 1 tongue picture constitution model and 5 tongue picture characteristic models are trained on a large number of tongue picture samples. When a constitution is to be identified, the tongue picture to be detected is transmitted to the identification device. The device first uses the tongue picture recognition model to check that the picture is indeed a tongue picture. If it is, the device calculates, with the tongue picture constitution model and the 5 tongue picture characteristic models, the probabilities of the 11 constitution types and of the 5 tongue picture characteristics represented by the picture, and finally selects, from among the constitution types corresponding to the combination of highest-probability sub-characteristics, the type with the highest probability as the constitution of the tongue picture to be detected. This approach improves both the accuracy and the efficiency of constitution identification. Moreover, the identification device can be a mobile phone, so a user can detect his or her own constitution type simply by photographing his or her tongue with the phone, which makes the method convenient to use and suitable for large-scale adoption.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a flowchart of a tongue picture constitution identification method based on a deep neural network according to an embodiment of the present invention;
fig. 2 is a block diagram illustrating a tongue constitution identification apparatus based on a deep neural network according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating another tongue constitution identification method based on a deep neural network according to an embodiment of the present invention;
fig. 4 is a flowchart of another tongue picture constitution identification method based on a deep neural network according to a second embodiment of the present invention.
Detailed Description
The technical solution of the present invention will be described below with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention; all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present invention.
An embodiment of the invention provides a tongue picture constitution identification method based on a deep neural network. Fig. 1 is a flowchart of this method; as shown in fig. 1, it comprises the following steps:
step S101: training 1 tongue picture recognition model, 1 tongue picture constitution model and 5 tongue picture characteristic models through a deep neural network algorithm; wherein, the tongue constitution comprises 11 constitutions;
step S102: acquiring a tongue picture to be detected;
step S103: identifying whether the tongue picture to be detected is a tongue picture or not through a tongue picture identification model, and if so, executing the step S104-step S107; if not, ending the process;
step S104: respectively calculating the probabilities of the 11 constitutions represented by the tongue picture to be detected through the tongue picture constitution model;
step S105: respectively calculating the probability of 5 tongue picture characteristics represented by the tongue picture to be detected through 5 tongue picture characteristic models;
step S106: according to the probabilities of the 5 tongue picture characteristics represented by the tongue picture to be detected, combining the sub-characteristics with the highest probability in each of the 5 tongue picture characteristics into a characteristic combination, and determining a plurality of constitutions corresponding to the characteristic combination;
step S107: and according to the probabilities of the 11 constitutions represented by the tongue picture to be detected, taking the constitution with the highest probability in the plurality of constitutions as the constitution represented by the tongue picture to be detected.
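Step S101 trains the models with a deep neural network algorithm, but the patent does not specify an architecture or training procedure. As a stand-in, the sketch below fits a single logistic unit by the gradient-descent/backpropagation principle cited in the classification (G06N3/084); a real implementation would train multi-layer CNNs on large tongue picture datasets, and every name here is illustrative.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(samples, labels, lr=0.5, epochs=200):
    """Fit the weights of one logistic unit by stochastic gradient descent,
    standing in for the deep-network training of step S101."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss with respect to the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that sample x belongs to the positive class."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

The same gradient signal, propagated backwards layer by layer, is what trains the convolutional models in practice.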
In the implementation process, after step S102 and before step S103, it is further necessary to determine whether the tongue picture to be detected contains a complete face. If it does not, the device identifies whether the pixel size of the picture is larger than 244×244; if it is, the pixel size of the tongue picture to be detected is adjusted to 244×244.
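The preprocessing rule just described can be expressed as a small guard function. Here `contains_full_face` stands in for a face-detection step the patent does not detail, and the 244×244 target size is taken directly from the text.

```python
TARGET = 244  # target side length in pixels, per the patent

def needs_resize(width: int, height: int, contains_full_face: bool) -> bool:
    """A picture is resized only when it does not contain a complete face
    and is larger than the 244x244 target."""
    if contains_full_face:
        return False  # full-face pictures are handled by a different branch
    return width > TARGET or height > TARGET

def preprocessed_size(width: int, height: int, contains_full_face: bool):
    """Dimensions of the picture after the preprocessing step."""
    if needs_resize(width, height, contains_full_face):
        return (TARGET, TARGET)
    return (width, height)
```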
In step S103, the probability that the tongue picture to be detected is a tongue picture and the probability that it is not are first calculated through the tongue picture recognition model using a CNN algorithm. The two probabilities are then compared: if the probability that it is a tongue picture is greater, the picture to be detected is determined to be a tongue picture; otherwise it is determined not to be one.
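A sketch of the step S103 decision, assuming the recognition model ends in a two-way softmax over "tongue" and "not tongue" logits. That head is an assumption for illustration; the patent only states that a CNN algorithm computes both probabilities.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def is_tongue(logit_tongue: float, logit_not_tongue: float) -> bool:
    """Accept the picture as a tongue picture when P(tongue) exceeds
    P(not tongue); with two classes this reduces to comparing the logits."""
    p_tongue, p_not = softmax([logit_tongue, logit_not_tongue])
    return p_tongue > p_not
```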
In step S104, the tongue picture to be detected is first adjusted into a square. The square picture is then shifted by 5-10 pixels to the left, right, up and down respectively to obtain a left-shifted, right-shifted, up-shifted and down-shifted tongue picture. After the tongue picture constitution model parameters are initialized, the probabilities of the 11 constitutions represented by each of the original and four shifted tongue pictures are calculated through the model using a CNN algorithm. Finally, from the resulting 5 groups of probability values for each of the 11 constitutions, the probability of each constitution represented by the tongue picture to be detected is obtained.
In step S105, the tongue picture to be detected is likewise adjusted into a square, and the square picture is shifted by 5-10 pixels to the left, right, up and down respectively to obtain a left-shifted, right-shifted, up-shifted and down-shifted tongue picture. After the tongue picture characteristic model parameters are initialized, the probabilities of the 5 tongue picture characteristics represented by each of the original and four shifted tongue pictures are calculated through each characteristic model using a CNN algorithm. From the resulting 5 groups of probability values for each of the 5 characteristics, the probabilities of the 5 tongue picture characteristics represented by the tongue picture to be detected are obtained.
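Steps S104 and S105 both score the original square picture plus four shifted copies and then combine the resulting 5 groups of probability values. The patent does not say how the groups are combined; simple per-class averaging is assumed in this sketch, and `predict` is a hypothetical callable returning a probability vector for a view shifted by `(dx, dy)` pixels.

```python
SHIFT = 5  # pixels; the patent allows any value in the 5-10 range

def five_view_offsets(shift: int = SHIFT):
    """Offsets (dx, dy) of the centre view and the four shifted views."""
    return [(0, 0), (-shift, 0), (shift, 0), (0, -shift), (0, shift)]

def averaged_probabilities(predict, image, shift: int = SHIFT):
    """Score all five views and average the probabilities per class.
    (Averaging is an assumption; the patent only says 5 groups of
    probability values are used to obtain each class probability.)"""
    views = [predict(image, dx, dy) for dx, dy in five_view_offsets(shift)]
    n_classes = len(views[0])
    return [sum(v[c] for v in views) / len(views) for c in range(n_classes)]
```

This kind of multi-view scoring is a standard test-time augmentation: small translations make the prediction less sensitive to how the tongue is framed.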
Through the steps, the physique types can be automatically identified according to the tongue picture, and a user can conveniently detect the physique types of the user.
The embodiment of the invention also provides a tongue picture constitution identification device 20 based on a deep neural network, which implements the tongue picture constitution identification method described above.
Fig. 2 is a block diagram of a tongue picture constitution identification device 20 based on a deep neural network according to an embodiment of the present invention. As shown in fig. 2, the device 20 comprises: a model acquisition unit 201, configured to train 1 tongue picture recognition model, 1 tongue picture constitution model and 5 tongue picture characteristic models through a deep neural network algorithm, wherein the tongue constitutions comprise 11 constitutions; a picture acquisition unit 202, configured to acquire a tongue picture to be detected; a picture identification unit 203, configured to identify through the tongue picture recognition model whether the tongue picture to be detected is a tongue picture; a first calculation unit 204, configured to calculate through the tongue picture constitution model the probabilities of the 11 constitutions represented by the tongue picture to be detected when the picture identification unit 203 identifies that it is a tongue picture; a second calculation unit 205, configured to calculate through the 5 tongue picture characteristic models the probabilities of the 5 tongue picture characteristics represented by the tongue picture to be detected when the picture identification unit 203 identifies that it is a tongue picture; a characteristic combination determining unit 206, configured to combine, according to the probabilities of the 5 tongue picture characteristics, the sub-characteristic with the highest probability within each characteristic into a characteristic combination and to determine the several constitutions corresponding to that combination; and a constitution determining unit 207, configured to take, according to the probabilities of the 11 constitutions, the constitution with the highest probability among those constitutions as the constitution represented by the tongue picture to be detected.
The tongue picture constitution identification device 20 based on a deep neural network further comprises: a judging unit 208, configured to judge, after the picture acquisition unit 202 acquires the tongue picture to be detected and before the picture identification unit 203 identifies it through the tongue picture recognition model, whether the tongue picture to be detected contains a complete face; a pixel identification unit 209, configured to identify whether the pixel size of the tongue picture to be detected is larger than 244×244 when the judging unit 208 judges that it does not contain a complete face; and a pixel adjusting unit 210, configured to adjust the pixel size of the tongue picture to be detected to 244×244 when the pixel identification unit 209 identifies that it is larger than 244×244.
In the tongue picture constitution identification device 20 based on a deep neural network, the picture identification unit 203 includes: a calculating subunit 2031, configured to calculate, through the tongue picture recognition model and using a CNN algorithm, the probabilities that the tongue picture to be detected is, and is not, a tongue picture; a comparison subunit 2032, configured to compare whether the probability that the tongue picture to be detected is a tongue picture is greater than the probability that it is not; a first determining subunit 2033, configured to determine that the tongue picture to be detected is a tongue picture when the comparison subunit 2032 finds that the former probability is greater; and a second determining subunit 2034, configured to determine that the tongue picture to be detected is not a tongue picture when the comparison subunit 2032 finds that the former probability is smaller.
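The decision made by subunits 2032 to 2034 amounts to a two-class comparison over the recognition model's output. A minimal sketch, assuming the model emits two raw scores (the logit values below are invented for illustration and are not from the patent):

```python
import math

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def is_tongue_picture(logits):
    """True when P(tongue) exceeds P(not tongue), mirroring subunits 2032-2034."""
    p_tongue, p_not = softmax(logits)
    return p_tongue > p_not

# Illustrative logits from a hypothetical recognition model.
print(is_tongue_picture([2.1, -0.5]))   # P(tongue) dominates here
print(is_tongue_picture([-1.0, 3.0]))   # P(not tongue) dominates here
```

The comparison needs no fixed threshold: with two classes, comparing the two probabilities is equivalent to checking whether P(tongue) exceeds 0.5.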
In the tongue picture constitution identification device 20 based on a deep neural network, the first calculating unit 204 includes: a first adjusting subunit 2041, configured to adjust the tongue picture to be detected into a square; a first cutting subunit 2042, configured to crop the squared tongue picture by 5-10 pixels in each of the four directions (left, right, up and down) to obtain a left-shifted, a right-shifted, an up-shifted and a down-shifted tongue picture; a first calculating subunit 2043, configured to calculate, through the tongue picture constitution model after initializing its parameters and using a CNN algorithm, the probabilities of the 11 constitutions represented by each of the tongue picture to be detected and the four shifted tongue pictures; and a first probability obtaining subunit 2044, configured to process the 5 groups of probability values of each of the 11 constitutions to obtain the probability of each of the 11 constitutions represented by the tongue picture to be detected.
In the tongue picture constitution identification device 20 based on a deep neural network, the second calculating unit 205 includes: a second adjusting subunit 2051, configured to adjust the tongue picture to be detected into a square; a second cutting subunit 2052, configured to crop the squared tongue picture by 5-10 pixels in each of the four directions (left, right, up and down) to obtain a left-shifted, a right-shifted, an up-shifted and a down-shifted tongue picture; a second calculating subunit 2053, configured to calculate, through the tongue picture feature models after initializing their parameters and using a CNN algorithm, the probabilities of the 5 tongue picture features represented by each of the tongue picture to be detected and the four shifted tongue pictures; and a second probability obtaining subunit 2054, configured to process the 5 groups of probability values of each of the 5 tongue picture features to obtain the probabilities of the 5 tongue picture features represented by the tongue picture to be detected.
It should be noted that the tongue picture constitution identification device based on a deep neural network described in this apparatus embodiment corresponds to the method embodiment above; its specific implementation process has already been described in detail in the method embodiment and is not repeated here.
In order to make the technical solution and implementation method of the present invention clearer, the following describes the implementation process in detail with reference to the preferred embodiments.
Example one
In this embodiment, another tongue picture constitution identification method based on a deep neural network is provided. Fig. 3 is a flowchart of this method according to an embodiment of the present invention; it includes the following steps:
step S301: the constitution identification equipment trains 1 tongue picture recognition model, 1 tongue picture constitution model and 5 tongue picture characteristic models through a deep neural network algorithm;
in the embodiment of the present invention, the constitution identification equipment may select 4 deep neural network algorithms for the related model training, or may select 5. If 4 deep neural network algorithms are selected, the constitution identification equipment trains 1 tongue picture recognition model, 1 tongue picture constitution model and 5 tongue picture feature models with each of the 4 algorithms, that is, 1 × 4 tongue picture recognition models, 1 × 4 tongue picture constitution models and 5 × 4 tongue picture feature models in total. The specific steps for training 1 tongue picture constitution model and 5 tongue picture feature models with one deep neural network algorithm are as follows: label the sample tongue pictures with constitution and tongue feature annotations respectively; train on the labeled sample tongue pictures for the constitutions and for the features respectively; the training yields 1 tongue picture constitution model and 5 tongue picture feature models;
it should be noted that in the embodiment of the present invention, the constitutions are divided into 11 types: cold coagulation and blood stasis, cold dampness, spleen and stomach qi deficiency, qi and blood deficiency, excessive heat damaging body fluid, damp heat, excess heat, food accumulation and phlegm turbidity, stomach qi and yin deficiency, yin deficiency, and blood stasis. The tongue manifestations are divided into 5 feature types: tongue color, tongue shape, presence or absence of tongue coating, tongue coating thickness, and tongue coating color. Tongue color may be 'red', 'purple' or 'pale and tender'; tongue shape may be 'swollen', 'thin', 'teeth-marked' or 'normal'; presence or absence of coating may be 'no or little coating' or 'coated'; coating thickness may be 'no or little coating', 'thick' or 'thin'; coating color may be 'no or little coating', 'yellow', 'white', 'grey' or 'black'. The constitution and tongue picture features of a large number of sample tongue pictures need to be labeled manually;
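The label taxonomy above can be written down directly. The structure below is a transcription of the classes named in this paragraph; the English key names are chosen here for illustration only:

```python
# The 11 constitution types named in the embodiment.
CONSTITUTIONS = [
    "cold coagulation and blood stasis", "cold dampness",
    "spleen and stomach qi deficiency", "qi and blood deficiency",
    "excessive heat damaging body fluid", "damp heat", "excess heat",
    "food accumulation and phlegm turbidity", "stomach qi and yin deficiency",
    "yin deficiency", "blood stasis",
]

# The 5 tongue feature types and their sub-features.
TONGUE_FEATURES = {
    "tongue color": ["red", "purple", "pale and tender"],
    "tongue shape": ["swollen", "thin", "teeth-marked", "normal"],
    "coating presence": ["no or little coating", "coated"],
    "coating thickness": ["no or little coating", "thick", "thin"],
    "coating color": ["no or little coating", "yellow", "white", "grey", "black"],
}

print(len(CONSTITUTIONS), len(TONGUE_FEATURES))  # 11 5
```

Each of the 5 feature models is then a classifier over the sub-features of one feature type.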
as an optional implementation, the specific way of training the constitutions and the features respectively with the labeled sample tongue pictures is as follows: first, 4 or 5 deep convolutional neural network algorithms are selected; the sample tongue pictures are adjusted into squares, and the picture data are augmented, for example by up-down flips and left-right flips; the original and augmented tongue pictures are then trained with each of the selected algorithms. Training the original and augmented tongue pictures with one deep convolutional neural network algorithm yields 1 tongue picture constitution model and 5 tongue picture feature models. When a model is trained from its initial state, the ImageNet model weights are used as initialization data;
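The flip-based augmentation step can be sketched with plain array operations; nothing here is specific to the patent's networks, and the tiny stand-in image only illustrates the shape bookkeeping:

```python
import numpy as np

def augment(image):
    """Return the original image plus its up-down and left-right flips,
    the augmentation named in the embodiment."""
    return [image, np.flipud(image), np.fliplr(image)]

# A stand-in 4x4 RGB image.
img = np.arange(48).reshape(4, 4, 3)
augmented = augment(img)
print(len(augmented))  # 3 training samples produced from 1 image
```

Each flip preserves the image shape, so the augmented samples feed the same network input layer as the originals.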
in the embodiment of the invention, the constitution identification equipment trains the tongue picture recognition model with each of the 4 or 5 selected deep neural network algorithms. Specifically, the method for training 1 tongue picture recognition model with one deep neural network algorithm is to select a deep convolutional neural network algorithm and train it on non-tongue pictures and labeled tongue pictures to obtain a tongue picture recognition model; the selected deep convolutional neural network algorithm may be DenseNet-169;
step S302: the constitution identification equipment acquires a tongue picture to be detected;
as an optional implementation, the constitution identification equipment may be a handheld electronic device such as a mobile phone. When users want to know their own constitution, they can photograph their tongue with the phone's camera, and that photograph is then processed as the tongue picture to be detected;
step S303: the constitution identification equipment judges whether the tongue picture to be detected is a picture containing a complete human face; if not, step S304 is executed; if yes, the process ends;
in the embodiment of the invention, the constitution identification equipment only detects tongue pictures. When the equipment judges that the picture to be detected contains a complete human face, the picture is not a dedicated tongue picture; detecting it would lower the recognition rate, and the constitution corresponding to the tongue picture could not be obtained accurately. Therefore, when the equipment judges that the picture to be detected contains a complete human face, it rejects the detection until a qualified tongue picture to be detected is obtained;
step S304: the constitution identification equipment identifies whether the pixel value of the tongue picture to be detected is greater than 244 × 244; if yes, steps S305 to S307 are executed; if not, the process ends;
step S305: the constitution identification equipment adjusts the pixel value of the tongue picture to be detected to 244 × 244;
in the embodiment of the invention, adjusting the pixel value of the tongue picture to be detected to 244 × 244 makes the picture square, so that the deep convolutional neural network algorithm can be applied more effectively;
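Resizing an arbitrary photo to the 244 × 244 square input can be sketched with nearest-neighbour sampling; a real system would likely use a library resizer with proper interpolation, so the dependency-free version below is only an illustration of the step:

```python
import numpy as np

def resize_nearest(image, size=244):
    """Resize an HxW(xC) image to size x size by nearest-neighbour sampling."""
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return image[rows][:, cols]

photo = np.zeros((600, 800, 3), dtype=np.uint8)
square = resize_nearest(photo)
print(square.shape)  # (244, 244, 3)
```

Any aspect ratio is forced into the square, which matches the embodiment's choice of a fixed square network input.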
step S306: the constitution identification equipment calculates, through the tongue picture recognition model and using a CNN algorithm, the probabilities that the tongue picture to be detected is, and is not, a tongue picture;
step S307: the constitution identification equipment compares whether the probability that the tongue picture to be detected is a tongue picture is greater than the probability that it is not; if not, step S308 is executed; if yes, steps S309 to S313 are executed;
step S308: the constitution identification equipment determines that the tongue picture to be detected is not a tongue picture;
step S309: the constitution identification equipment determines that the tongue picture to be detected is a tongue picture;
step S310: the constitution identification equipment respectively calculates the probabilities of the 11 constitutions represented by the tongue picture to be detected through the tongue picture constitution model;
in the embodiment of the invention, the constitution identification equipment calculates the probabilities that the tongue picture to be detected represents each of the 11 constitution types: cold coagulation and blood stasis, cold dampness, spleen and stomach qi deficiency, qi and blood deficiency, excessive heat damaging body fluid, damp heat, excess heat, food accumulation and phlegm turbidity, stomach qi and yin deficiency, yin deficiency, and blood stasis;
step S311: the constitution identification equipment respectively calculates the probability of 5 tongue picture characteristics represented by the tongue picture to be detected through 5 tongue picture characteristic models;
in the embodiment of the invention, the constitution identification equipment calculates the probabilities of the 5 tongue picture features represented by the tongue picture to be detected. Specifically, it calculates the probabilities of 'red', 'purple' and 'pale and tender' within the tongue color type; the probabilities of 'swollen', 'thin', 'teeth-marked' and 'normal' within the tongue shape type; the probabilities of 'no or little coating' and 'coated' within the coating-presence type; the probabilities of 'no or little coating', 'thick' and 'thin' within the coating-thickness type; and the probabilities of 'no or little coating', 'yellow', 'white', 'grey' and 'black' within the coating-color type;
step S312: the constitution identification equipment combines the sub-characteristics with the highest probability in each tongue picture characteristic of the 5 tongue picture characteristics into a characteristic combination according to the probabilities of the 5 tongue picture characteristics represented by the tongue picture to be detected, and determines a plurality of constitutions corresponding to the characteristic combination;
in the embodiment of the present invention, after the probabilities of the 11 constitutions and of the 5 tongue picture features are obtained, the highest-probability sub-feature is selected within each feature type: from 'red', 'purple' and 'pale and tender' in the tongue color type; from 'swollen', 'thin', 'teeth-marked' and 'normal' in the tongue shape type; from 'no or little coating' and 'coated' in the coating-presence type; from 'no or little coating', 'thick' and 'thin' in the coating-thickness type; and from 'no or little coating', 'yellow', 'white', 'grey' and 'black' in the coating-color type. After the 5 sub-features are determined, they are combined into a feature combination, and the constitution types corresponding to that combination are determined; there may be one or more such constitutions;
for example, suppose the 5 sub-features selected by the constitution identification equipment are: tongue color pale and tender, tongue shape thin, tongue coated, coating thin, and coating yellow. The constitutions corresponding to this feature combination are damp heat, stomach qi and yin deficiency, qi and blood deficiency, and cold dampness. Comparing the probabilities of these four constitutions and selecting the highest, the highest-probability constitution is the constitution type represented by the tongue picture to be detected;
step S313: the constitution identification equipment takes the constitution with the highest probability in the plurality of constitutions as the constitution represented by the tongue picture to be detected according to the probabilities of the 11 constitutions represented by the tongue picture to be detected.
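The selection in steps S312 and S313 reduces to a table lookup followed by taking the maximum over the candidate constitutions. A sketch using the example combination above; the probability numbers are invented for illustration, and the lookup table entry is hypothetical (in practice it would be built from TCM domain knowledge):

```python
# Hypothetical lookup: feature combination -> candidate constitutions (step S312).
COMBINATION_TABLE = {
    ("pale and tender", "thin", "coated", "thin", "yellow"):
        ["damp heat", "stomach qi and yin deficiency",
         "qi and blood deficiency", "cold dampness"],
}

def pick_constitution(combination, constitution_probs):
    """Among the candidates for this feature combination, return the one
    with the highest constitution-model probability (step S313)."""
    candidates = COMBINATION_TABLE[combination]
    return max(candidates, key=constitution_probs.get)

# Invented probabilities over the constitutions (only the candidates shown).
probs = {"damp heat": 0.31, "stomach qi and yin deficiency": 0.12,
         "qi and blood deficiency": 0.22, "cold dampness": 0.08}
combo = ("pale and tender", "thin", "coated", "thin", "yellow")
print(pick_constitution(combo, probs))  # damp heat
```

The feature combination narrows the 11 constitutions down to a short candidate list, and the constitution model's probabilities break the tie among those candidates.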
Example two
Fig. 4 is a flowchart of another tongue constitution identification method based on a deep neural network according to a second embodiment of the present invention, which includes the following steps:
step S401: the constitution identification equipment trains 1 tongue picture recognition model, 1 tongue picture constitution model and 5 tongue picture characteristic models through a deep neural network algorithm;
in the embodiment of the present invention, the constitution identification equipment may select 4 deep neural network algorithms for the related model training, or may select 5. If 4 deep neural network algorithms are selected, the constitution identification equipment trains 1 tongue picture recognition model, 1 tongue picture constitution model and 5 tongue picture feature models with each of the 4 algorithms, that is, 1 × 4 tongue picture recognition models, 1 × 4 tongue picture constitution models and 5 × 4 tongue picture feature models in total. The specific steps for training 1 tongue picture constitution model and 5 tongue picture feature models with one deep neural network algorithm are as follows: label the sample tongue pictures with constitution and tongue feature annotations respectively; train on the labeled sample tongue pictures for the constitutions and for the features respectively; the training yields 1 tongue picture constitution model and 5 tongue picture feature models;
it should be noted that in the embodiment of the present invention, the constitutions are divided into 11 types: cold coagulation and blood stasis, cold dampness, spleen and stomach qi deficiency, qi and blood deficiency, excessive heat damaging body fluid, damp heat, excess heat, food accumulation and phlegm turbidity, stomach qi and yin deficiency, yin deficiency, and blood stasis. The tongue manifestations are divided into 5 feature types: tongue color, tongue shape, presence or absence of tongue coating, tongue coating thickness, and tongue coating color. Tongue color may be 'red', 'purple' or 'pale and tender'; tongue shape may be 'swollen', 'thin', 'teeth-marked' or 'normal'; presence or absence of coating may be 'no or little coating' or 'coated'; coating thickness may be 'no or little coating', 'thick' or 'thin'; coating color may be 'no or little coating', 'yellow', 'white', 'grey' or 'black'. The constitution and tongue picture features of a large number of sample tongue pictures need to be labeled manually;
as an optional implementation, the specific way of training the constitutions and the features respectively with the labeled sample tongue pictures is as follows: first, 4 or 5 deep convolutional neural network algorithms are selected; the sample tongue pictures are adjusted into squares, and the picture data are augmented, for example by up-down flips and left-right flips; the original and augmented tongue pictures are then trained with each of the selected algorithms. Training the original and augmented tongue pictures with one deep convolutional neural network algorithm yields 1 tongue picture constitution model and 5 tongue picture feature models. When a model is trained from its initial state, the ImageNet model weights are used as initialization data;
in the embodiment of the invention, the constitution identification equipment trains the tongue picture recognition model with each of the 4 or 5 selected deep neural network algorithms. Specifically, the method for training 1 tongue picture recognition model with one deep neural network algorithm is to select a deep convolutional neural network algorithm and train it on non-tongue pictures and labeled tongue pictures to obtain a tongue picture recognition model; the selected deep convolutional neural network algorithm may be DenseNet-169;
step S402: the constitution identification equipment acquires a tongue picture to be detected;
as an optional implementation, the constitution identification equipment may be a handheld electronic device such as a mobile phone. When users want to know their own constitution, they can photograph their tongue with the phone's camera, and that photograph is then processed as the tongue picture to be detected;
step S403: the constitution identification equipment judges whether the tongue picture to be detected is a picture containing a complete human face; if not, step S404 is executed; if yes, the process ends;
in the embodiment of the invention, the constitution identification equipment only detects tongue pictures. When the equipment judges that the picture to be detected contains a complete human face, the picture is not a dedicated tongue picture; detecting it would lower the recognition rate, and the constitution corresponding to the tongue picture could not be obtained accurately. Therefore, when the equipment judges that the picture to be detected contains a complete human face, it rejects the detection until a qualified tongue picture to be detected is obtained;
step S404: the constitution identification equipment identifies whether the pixel value of the tongue picture to be detected is greater than 244 × 244; if yes, steps S405 to S407 are executed; if not, the process ends;
step S405: the constitution identification equipment adjusts the pixel value of the tongue picture to be detected to 244 × 244;
step S406: the constitution identification equipment calculates, through the tongue picture recognition model and using a CNN algorithm, the probabilities that the tongue picture to be detected is, and is not, a tongue picture;
step S407: the constitution identification equipment compares whether the probability that the tongue picture to be detected is a tongue picture is greater than the probability that it is not; if not, step S408 is executed; if yes, steps S409 to S413 are executed;
step S408: the constitution identification equipment determines that the tongue picture to be detected is not a tongue picture;
step S409: the constitution identification equipment determines that the tongue picture to be detected is a tongue picture;
step S410: the physique distinguishing equipment adjusts the tongue picture to be detected into a square;
in the embodiment of the invention, the constitution identification equipment adjusts the tongue picture to be detected into a square so that the deep convolutional neural network algorithm can be applied more effectively;
step S411: the constitution identification equipment crops the squared tongue picture to be detected by 5-10 pixels in each of the four directions (left, right, up and down) to obtain a left-shifted, a right-shifted, an up-shifted and a down-shifted tongue picture; steps S412 to S413 and steps S414 to S417 are then executed respectively;
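One reading of step S411 is that each shifted variant drops a fixed offset of pixels from one side of the square image; that interpretation, and the 8-pixel offset (one value in the patent's 5-10 range), are assumptions of this sketch. Re-resizing each crop back to the model input size is also assumed:

```python
import numpy as np

def shifted_crops(square, offset=8):
    """Return left-, right-, up- and down-shifted crops of a square image,
    each dropping `offset` pixels from one side (one reading of step S411)."""
    return {
        "left":  square[:, offset:],   # drop columns on the left edge
        "right": square[:, :-offset],  # drop columns on the right edge
        "up":    square[offset:, :],   # drop rows at the top
        "down":  square[:-offset, :],  # drop rows at the bottom
    }

img = np.arange(244 * 244).reshape(244, 244)
crops = shifted_crops(img)
print(sorted(crops))  # the four shifted variants plus the original make 5 inputs
```

Together with the unshifted picture, this produces the 5 inputs whose per-class probabilities are later averaged.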
step S412: the constitution identification equipment calculates, through the tongue picture constitution model after initializing its parameters and using a CNN algorithm, the probabilities of the 11 constitutions represented by each of the tongue picture to be detected and the left-shifted, right-shifted, up-shifted and down-shifted tongue pictures;
step S413: the constitution identification equipment processes the 5 groups of probability values of each of the 11 constitutions to obtain the probability of each of the 11 constitutions represented by the tongue picture to be detected; steps S416 to S417 are then executed;
in the embodiment of the present invention, after calculating the 5 groups of probability values of each of the 11 constitutions, the constitution identification equipment averages the 5 groups of values for each constitution to obtain the mean probability of each of the 11 constitutions;
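The averaging in step S413, where the 5 crops' probability vectors over the 11 constitutions are combined per constitution, is a column-wise mean; the random vectors below merely stand in for model outputs:

```python
import numpy as np

# 5 probability vectors (original + 4 shifted crops) over 11 constitutions;
# the numbers are illustrative stand-ins for model outputs.
rng = np.random.default_rng(0)
raw = rng.random((5, 11))
group_probs = raw / raw.sum(axis=1, keepdims=True)  # each row sums to 1

mean_probs = group_probs.mean(axis=0)  # one averaged probability per constitution
print(mean_probs.shape)  # (11,)
```

Averaging valid probability distributions yields another valid distribution, so the result can be compared directly across the 11 constitutions in step S417.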
step S414: the constitution identification equipment calculates, through the tongue picture feature models after initializing their parameters and using a CNN algorithm, the probabilities of the 5 tongue picture features represented by each of the tongue picture to be detected and the left-shifted, right-shifted, up-shifted and down-shifted tongue pictures;
step S415: the constitution identification equipment processes the 5 groups of probability values of each of the 5 tongue picture features to obtain the probabilities of the 5 tongue picture features represented by the tongue picture to be detected;
in the embodiment of the invention, for each of the 5 tongue picture features, the constitution identification equipment averages the 5 groups of probability values of each sub-feature within that feature type, thereby obtaining the probability of every sub-feature of each of the 5 feature types represented by the tongue picture to be detected;
step S416: the constitution identification equipment combines the sub-characteristics with the highest probability in each tongue picture characteristic of the 5 tongue picture characteristics into a characteristic combination according to the probabilities of the 5 tongue picture characteristics represented by the tongue picture to be detected, and determines a plurality of constitutions corresponding to the characteristic combination;
in the embodiment of the present invention, after the probabilities of the 11 constitutions and of the 5 tongue picture features are obtained, the highest-probability sub-feature is selected within each feature type: from 'red', 'purple' and 'pale and tender' in the tongue color type; from 'swollen', 'thin', 'teeth-marked' and 'normal' in the tongue shape type; from 'no or little coating' and 'coated' in the coating-presence type; from 'no or little coating', 'thick' and 'thin' in the coating-thickness type; and from 'no or little coating', 'yellow', 'white', 'grey' and 'black' in the coating-color type. After the 5 sub-features are determined, they are combined into a feature combination, and the constitution types corresponding to that combination are determined; there may be one or more such constitutions;
for example, suppose the 5 sub-features selected by the constitution identification equipment are: tongue color pale and tender, tongue shape thin, tongue coated, coating thin, and coating yellow. The constitutions corresponding to this feature combination are damp heat, stomach qi and yin deficiency, qi and blood deficiency, and cold dampness. Comparing the probabilities of these four constitutions and selecting the highest, the highest-probability constitution is the constitution type represented by the tongue picture to be detected;
step S417: the constitution identification equipment takes the constitution with the highest probability in the plurality of constitutions as the constitution represented by the tongue picture to be detected according to the probabilities of the 11 constitutions represented by the tongue picture to be detected.
In summary, according to the above embodiments, 1 tongue picture recognition model, 1 tongue picture constitution model and 5 tongue picture feature models are trained on a large number of tongue picture samples. When a tongue picture constitution is to be detected, the tongue picture to be detected is transmitted to the constitution identification equipment, which first identifies, with the tongue picture recognition model, whether the picture is a tongue picture. If it is, the equipment calculates the probabilities of the 11 constitution types and of the 5 tongue picture features represented by the picture with the tongue picture constitution model and the 5 tongue picture feature models, and finally selects, from the constitution types corresponding to the feature combination composed of the highest-probability sub-features, the type with the highest probability as the constitution of the tongue picture to be detected. This method improves both the precision and the efficiency of constitution identification. Since the constitution identification equipment can be a mobile phone, users can detect their own constitution type simply by photographing their tongue with a phone, which is convenient and suitable for large-scale use.
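The inference path of the embodiments can be summarised in a few lines. The `*_model` callables and the combination table below are hypothetical stand-ins for the trained networks and the TCM lookup, not the patent's implementation:

```python
def identify_constitution(picture, recognition_model, constitution_model,
                          feature_models, combination_table):
    """End-to-end flow: reject non-tongue pictures, build the feature
    combination, then take the highest-probability candidate constitution."""
    p_tongue, p_not = recognition_model(picture)
    if p_tongue <= p_not:
        return None  # not a tongue picture; detection is rejected
    constitution_probs = constitution_model(picture)  # {constitution: probability}
    combo = []
    for model in feature_models:
        feature_probs = model(picture)
        combo.append(max(feature_probs, key=feature_probs.get))
    candidates = combination_table.get(tuple(combo), list(constitution_probs))
    return max(candidates, key=constitution_probs.get)

# Stub callables standing in for the trained networks (invented values).
recog = lambda p: (0.9, 0.1)
consts = lambda p: {"damp heat": 0.4, "cold dampness": 0.2}
feats = [lambda p: {"pale and tender": 0.8, "red": 0.2}]
table = {("pale and tender",): ["damp heat", "cold dampness"]}
print(identify_constitution(None, recog, consts, feats, table))  # damp heat
```

Falling back to all constitutions when a combination is missing from the table is a design choice of this sketch; the embodiments assume every combination maps to one or more candidate constitutions.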

Claims (8)

1. A tongue picture constitution distinguishing method based on a deep neural network is characterized by comprising the following steps:
training 1 tongue picture recognition model, 1 tongue picture constitution model and 5 tongue picture characteristic models through a deep neural network algorithm; wherein, the tongue manifestation constitutions comprise 11 constitutions;
acquiring a tongue picture to be detected;
identifying whether the tongue picture to be detected is a tongue picture or not through the tongue picture identification model;
if so, adjusting the tongue picture to be detected into a square;
cropping the squared tongue picture to be detected by 5-10 pixels in each of the four directions (left, right, up and down) to obtain a left-shifted tongue picture, a right-shifted tongue picture, an up-shifted tongue picture and a down-shifted tongue picture;
respectively calculating the probabilities of 11 constitutions represented by a tongue picture to be detected, a left tongue picture to be moved, a right tongue picture to be moved, an upper tongue picture to be moved and a lower tongue picture to be moved by adopting a CNN algorithm through the tongue picture constitutional model after initializing the tongue picture constitutional model parameters;
respectively calculating 5 groups of probability values of each constitution of the 11 constitutions to obtain the probability of each constitution of the 11 constitutions represented by the tongue picture to be detected;
calculating the probability of 5 tongue picture characteristics represented by the tongue picture to be detected through the 5 tongue picture characteristic models respectively; the 5 tongue images are respectively characterized by tongue body color, tongue image shape, tongue coating thickness and tongue coating color;
according to the probabilities of the 5 tongue picture characteristics represented by the tongue picture to be detected, combining the sub-characteristics with the highest probability in each of the 5 tongue picture characteristics into a characteristic combination, and determining a plurality of constitutions corresponding to the characteristic combination;
and according to the probabilities of the 11 constitutions represented by the tongue picture to be detected, taking the constitution with the highest probability in the plurality of constitutions as the constitution represented by the tongue picture to be detected.
2. The method according to claim 1, characterized in that, after the tongue picture to be detected is acquired and before it is identified through the tongue picture recognition model, the method further comprises:
judging whether the tongue picture to be detected is a picture containing a complete human face;
if it is not a picture containing a complete human face, identifying whether its pixel size is larger than 244 × 244;
and if its pixel size is larger than 244 × 244, adjusting the pixel size of the tongue picture to be detected to 244 × 244.
3. The method according to claim 2, characterized in that identifying whether the tongue picture to be detected is a tongue picture through the tongue picture recognition model comprises:
calculating, through the tongue picture recognition model using a CNN algorithm, the respective probabilities that the tongue picture to be detected is and is not a tongue picture;
comparing whether the probability that the tongue picture to be detected is a tongue picture is greater than the probability that it is not;
if so, determining that the tongue picture to be detected is a tongue picture;
and if not, determining that the tongue picture to be detected is not a tongue picture.
4. The method according to claim 3, characterized in that calculating the probabilities of the 5 tongue picture features represented by the tongue picture to be detected through the 5 tongue picture feature models respectively comprises:
adjusting the tongue picture to be detected into a square;
cropping the squared tongue picture by 5-10 pixels in each of the left, right, upper, and lower directions to obtain a left-shifted tongue picture, a right-shifted tongue picture, an up-shifted tongue picture, and a down-shifted tongue picture;
after initializing the parameters of each tongue picture feature model, calculating through that model, using a CNN algorithm, the probabilities of the 5 tongue picture features represented by each of the tongue picture to be detected and the four shifted tongue pictures;
and calculating, from the 5 groups of probability values obtained for each of the 5 tongue picture features, the probabilities of the 5 tongue picture features represented by the tongue picture to be detected.
5. A tongue picture constitution identification device based on a deep neural network, characterized by comprising:
a model acquisition unit, configured to train 1 tongue picture recognition model, 1 tongue picture constitution model, and 5 tongue picture feature models through a deep neural network algorithm, wherein the tongue picture constitutions comprise 11 constitution types;
a picture acquisition unit, configured to acquire a tongue picture to be detected;
a picture recognition unit, configured to identify, through the tongue picture recognition model, whether the tongue picture to be detected is a tongue picture;
a first calculation unit, configured to calculate, through the tongue picture constitution model, the probabilities of the 11 constitutions represented by the tongue picture to be detected when the picture recognition unit identifies it as a tongue picture;
the first calculation unit comprising:
a first adjusting subunit, configured to adjust the tongue picture to be detected into a square;
a first cropping subunit, configured to crop the squared tongue picture by 5-10 pixels in each of the left, right, upper, and lower directions to obtain a left-shifted tongue picture, a right-shifted tongue picture, an up-shifted tongue picture, and a down-shifted tongue picture;
a first calculating subunit, configured to calculate, through the tongue picture constitution model using a CNN algorithm after the model parameters are initialized, the probabilities of the 11 constitutions represented by each of the tongue picture to be detected and the four shifted tongue pictures;
a first probability obtaining subunit, configured to calculate, from the 5 groups of probability values obtained for each of the 11 constitutions, the probability of each of the 11 constitutions represented by the tongue picture to be detected;
a second calculation unit, configured to calculate, through the 5 tongue picture feature models respectively, the probabilities of the 5 tongue picture features represented by the tongue picture to be detected when the picture recognition unit identifies it as a tongue picture, the tongue picture features including tongue body color, tongue shape, tongue coating thickness, and tongue coating color;
a feature combination determining unit, configured to combine, according to the probabilities of the 5 tongue picture features represented by the tongue picture to be detected, the highest-probability sub-feature of each of the 5 features into a feature combination, and to determine a plurality of constitutions corresponding to the feature combination;
and a constitution determining unit, configured to take, according to the probabilities of the 11 constitutions represented by the tongue picture to be detected, the highest-probability constitution among the plurality of constitutions as the constitution represented by the tongue picture to be detected.
6. The device according to claim 5, characterized by further comprising:
a judging unit, configured to judge, after the picture acquisition unit acquires the tongue picture to be detected and before the picture recognition unit identifies whether it is a tongue picture through the tongue picture recognition model, whether the tongue picture to be detected is a picture containing a complete human face;
a pixel recognition unit, configured to identify whether the pixel size of the tongue picture to be detected is larger than 244 × 244 when the judging unit judges that it is not a picture containing a complete human face;
and a pixel adjusting unit, configured to adjust the pixel size of the tongue picture to be detected to 244 × 244 when the pixel recognition unit identifies that it is larger than 244 × 244.
7. The device according to claim 6, characterized in that the picture recognition unit comprises:
a calculating subunit, configured to calculate, through the tongue picture recognition model using a CNN algorithm, the respective probabilities that the tongue picture to be detected is and is not a tongue picture;
a comparing subunit, configured to compare whether the probability that the tongue picture to be detected is a tongue picture is greater than the probability that it is not;
a first determining subunit, configured to determine that the tongue picture to be detected is a tongue picture when the comparing subunit finds the former probability to be greater;
and a second determining subunit, configured to determine that the tongue picture to be detected is not a tongue picture when the comparing subunit finds the former probability to be not greater.
8. The device according to claim 7, characterized in that the second calculation unit comprises:
a second adjusting subunit, configured to adjust the tongue picture to be detected into a square;
a second cropping subunit, configured to crop the squared tongue picture by 5-10 pixels in each of the left, right, upper, and lower directions to obtain a left-shifted tongue picture, a right-shifted tongue picture, an up-shifted tongue picture, and a down-shifted tongue picture;
a second calculating subunit, configured to calculate, through each tongue picture feature model using a CNN algorithm after the model parameters are initialized, the probabilities of the 5 tongue picture features represented by each of the tongue picture to be detected and the four shifted tongue pictures;
and a second probability obtaining subunit, configured to calculate, from the 5 groups of probability values obtained for each of the 5 tongue picture features, the probabilities of the 5 tongue picture features represented by the tongue picture to be detected.
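Claims 1, 4, 5, and 8 describe a test-time augmentation scheme: the square input is shifted by 5-10 pixels in each of the four directions, the CNN scores all five pictures, and the 5 groups of probability values are reduced to one probability per class. A minimal NumPy sketch with a stand-in model follows; averaging is assumed as the reduction, since the claims say only "calculating 5 groups of probability values":

```python
import numpy as np

def shifted_pictures(img, shift=5):
    """Return the original picture plus four copies shifted by `shift`
    pixels left, right, up, and down (edge-padded back to the original
    size), approximating the claims' shifted tongue pictures."""
    out = [img]
    out.append(np.pad(img[:, shift:], ((0, 0), (0, shift), (0, 0)), mode="edge"))   # left
    out.append(np.pad(img[:, :-shift], ((0, 0), (shift, 0), (0, 0)), mode="edge"))  # right
    out.append(np.pad(img[shift:, :], ((0, shift), (0, 0), (0, 0)), mode="edge"))   # up
    out.append(np.pad(img[:-shift, :], ((shift, 0), (0, 0), (0, 0)), mode="edge"))  # down
    return out

def constitution_probabilities(model, img):
    """Score all five pictures and average the 5 groups of probability
    values into one probability per constitution (averaging assumed)."""
    probs = np.stack([model(p) for p in shifted_pictures(img)])
    return probs.mean(axis=0)

# Stand-in "CNN": always returns a uniform distribution over 11 constitutions.
dummy_model = lambda img: np.full(11, 1.0 / 11)
image = np.zeros((244, 244, 3))
avg = constitution_probabilities(dummy_model, image)
print(avg.shape)  # -> (11,)
```

The same five-crop averaging applies unchanged to the 5 feature models; only the model and the number of output classes differ.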
CN201811143472.7A 2018-09-28 2018-09-28 Tongue picture constitution identification method and device based on deep neural network Active CN109199334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811143472.7A CN109199334B (en) 2018-09-28 2018-09-28 Tongue picture constitution identification method and device based on deep neural network


Publications (2)

Publication Number Publication Date
CN109199334A CN109199334A (en) 2019-01-15
CN109199334B true CN109199334B (en) 2021-06-22

Family

ID=64982045


Country Status (1)

Country Link
CN (1) CN109199334B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109770875A (en) * 2019-03-27 2019-05-21 上海铀米机器人科技有限公司 A kind of human body constitution discrimination method and system based on neural network classifier
CN110675389A (en) * 2019-09-27 2020-01-10 珠海格力电器股份有限公司 Food recommendation method, storage medium and intelligent household equipment
CN111209801A (en) * 2019-12-24 2020-05-29 新绎健康科技有限公司 Traditional Chinese medicine fat tongue identification method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110057573A * 2009-11-24 2011-06-01 Korea Institute of Oriental Medicine Method for separating color information using tongue photo
CN103239206A (en) * 2012-02-10 2013-08-14 陈舒怡 Physical constitution instrument used in Chinese traditional medicine
CN106683087A (en) * 2016-12-26 2017-05-17 华南理工大学 Coated tongue constitution distinguishing method based on depth neural network
CN107397530A (en) * 2017-07-19 2017-11-28 广州华久信息科技有限公司 A kind of tcm constitution distinguishes conditioning system with emotion
CN107610087A (en) * 2017-05-15 2018-01-19 华南理工大学 A kind of tongue fur automatic division method based on deep learning




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant