CN115954101A - Health degree management system and management method based on AI tongue diagnosis image processing - Google Patents


Info

Publication number: CN115954101A
Application number: CN202310238957.9A
Authority: CN (China)
Legal status: Pending
Original language: Chinese (zh)
Inventors: 焦晶, 陈伟杰, 巩振坤, 曹军婷, 程斌
Applicant and current assignee: Nanjing Yinuo Technology Co., Ltd.

Classifications

    • Y — General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC
    • Y02 — Technologies or applications for mitigation or adaptation against climate change
    • Y02A — Technologies for adaptation to climate change
    • Y02A 90/00 — Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention belongs to the technical field of tongue diagnosis detection, and particularly relates to a health degree management system and management method based on AI tongue diagnosis image processing. The method can judge in advance, by image comparison, whether the patient's health degree is abnormal; on the premise that an abnormality is determined, it analyzes the abnormal lesion features, extracts the lesion features from the patient's tongue image information, evaluates them according to the area they occupy and their color depth, and outputs a health degree score for the patient according to the evaluation result, from which a doctor can then formulate a corresponding diagnosis and treatment plan.

Description

Health degree management system and management method based on AI tongue diagnosis image processing
Technical Field
The invention belongs to the technical field of tongue diagnosis detection, and particularly relates to a health degree management system and a health degree management method based on AI tongue diagnosis image processing.
Background
Tongue diagnosis is a simple and effective method of assisting diagnosis and identification by observing changes in the color and form of the tongue; common conditions show phenomena such as a reddened or cracked tongue coating. However, a patient who needs to see a doctor usually has to queue and register at a hospital and then find the corresponding doctor for a consultation, which wastes considerable time, and during the visit the doctor makes a diagnosis only from the patient's tongue coating and then issues a corresponding diagnosis report or informs the patient of the result directly.
In existing tongue diagnosis processes that adopt image processing, the whole tongue image is mostly uploaded to an identification terminal and then evaluated item by item by a doctor. However, tongue image information carries many features that may influence one another, so misleading factors can appear during the doctor's evaluation, and manual evaluation easily causes fatigue, which leads to deviations in the output tongue diagnosis result.
Disclosure of Invention
The invention aims to provide a health degree management system and management method based on AI tongue diagnosis image processing, which can extract lesion features from the patient's tongue image information, evaluate them separately according to the area they occupy and their color depth, and finally output a health degree score for the patient according to the evaluation result.
The technical scheme adopted by the invention is as follows:
a health degree management method based on AI tongue diagnosis image processing comprises the following steps:
acquiring identity information of the patient, wherein the identity information comprises name, sex, age, height, weight, basic diseases and mobile phone number;
acquiring tongue image information of the patient, wherein the tongue image information comprises tongue surface information and sublingual information;
acquiring a standard tongue image and comparing the standard tongue image with the tongue image information;
if the standard tongue image is consistent with the tongue image information, judging that the health degree of the patient is normal;
if the standard tongue image information is inconsistent with the tongue image information, judging that the health degree of the patient is abnormal, and calibrating the tongue image information into an abnormal tongue image;
acquiring an abnormal tongue image, and inputting the abnormal tongue image into a feature extraction model to obtain abnormal features and abnormal areas corresponding to the abnormal features;
calculating the ratio of the abnormal region in the tongue image, and calibrating the ratio as the disease parameter to be evaluated;
and constructing an evaluation interval, comparing the evaluation interval with the disease parameters to be evaluated, and outputting the health degree score of the patient according to the comparison result.
In a preferred embodiment, the step after acquiring the tongue image information of the patient comprises:
acquiring a focus characteristic information set;
acquiring the abnormal tongue image of the patient and calibrating it as the image to be diagnosed;
equally dividing the image to be diagnosed into a plurality of regions to be evaluated, and comparing the regions to be evaluated with the focus characteristic information set one by one;
and screening out, from the lesion feature information set, the lesion features corresponding to all the regions to be evaluated and marking them as abnormal foci, wherein the number of abnormal foci is n and n is a positive integer.
In a preferred scheme, after the abnormal focuses are screened out, the abnormal focuses are sorted according to the number of the abnormal focuses;
acquiring color information and boundary texture information of each abnormal focus;
defining a foreground image in a region containing color information and boundary texture information of an abnormal focus, and defining the rest part as a background image;
and realizing the segmentation of the foreground image and the background image by adopting an interactive segmentation algorithm to obtain a regionalized abnormal focus.
In a preferred embodiment, the step of obtaining the abnormal tongue image and inputting the abnormal tongue image into the feature extraction model to obtain the abnormal features includes:
acquiring edge feature points of all abnormal lesions;
constructing a virtual coordinate system, acquiring pixel coordinates of edge feature points of all abnormal focuses one by one, respectively summarizing the pixel coordinates into a sample set to be evaluated, and numbering the pixel coordinates one by one;
comparing the pixel coordinates in the sample sets to be evaluated under adjacent numbers one by one, screening out the pair of pixel coordinates that are closest to each other, and determining the distance between them as the offset;
and splicing the plurality of abnormal focuses one by one according to the offset, and calibrating the spliced image as the abnormal features.
In a preferred embodiment, the step of shifting the abnormal foci comprises:
respectively acquiring the highest point coordinate and the lowest point coordinate of the adjacent abnormal focus, and calibrating the coordinates into critical coordinates;
acquiring a longitudinal coordinate difference value in the critical coordinate;
if the difference value of the longitudinal coordinates is larger than zero, judging that the adjacent abnormal focus is longitudinally spliced;
and if the difference value of the longitudinal coordinates is less than or equal to zero, judging that the adjacent abnormal focus is transversely spliced.
In a preferred embodiment, after the abnormal region corresponding to the abnormal feature is determined, the abnormal region is input to the calculation module and the proportion of the abnormal region is calculated, and the specific process is as follows:
acquiring the edge pixel point coordinates of the abnormal area in the virtual coordinate system;
obtaining an objective function from the calculation module;
inputting the edge pixel point coordinates into a target function to obtain the area of an abnormal area;
and acquiring the total area of the tongue image, comparing the total area with the area of the abnormal area to obtain the ratio of the abnormal area in the tongue image, and calibrating the ratio as a disease parameter to be evaluated.
In a preferable scheme, when the abnormal region is determined, binarization processing is carried out on the abnormal region;
acquiring coordinates of each pixel point in the abnormal area;
and constructing a super-red algorithm, and inputting each pixel point into the super-red algorithm one by one to obtain a deepened gray value and a deepened image of the abnormal region.
In a preferred embodiment, the step of outputting the score of the health degree of the patient comprises:
acquiring an evaluation interval, wherein the evaluation interval comprises a first evaluation interval and a second evaluation interval, the first evaluation interval is used for evaluating deepened gray values, the second evaluation interval is used for evaluating the area of an abnormal area, the comparison priority of the first evaluation interval is higher than that of the second evaluation interval, a plurality of second evaluation intervals are arranged, each second evaluation interval corresponds to one health degree score, the value of the health degree score is 0-100, one first evaluation interval is arranged, and the corresponding health degree score is 0;
acquiring a first evaluation interval;
comparing the deepened gray values with a first evaluation interval one by one, and judging whether deepened gray values exceeding the first evaluation interval exist in the abnormal area or not;
if yes, judging the health degree score of the patient to be 0;
and if the abnormal area does not exist, acquiring the area of the abnormal area, and comparing the area with the second evaluation interval to obtain the health degree score of the patient.
The invention also provides a health degree management system based on AI tongue diagnosis image processing, which is applied to the health degree management method based on AI tongue diagnosis image processing and comprises the following steps:
the system comprises a front-end data acquisition module, a management module and a management module, wherein the front-end data acquisition module is used for acquiring the identity information of the patient, and the identity information of the patient comprises name, sex, age, height, weight, basic diseases and mobile phone number;
the image acquisition module is used for acquiring tongue image information of the patient, wherein the tongue image information comprises tongue surface information and sublingual information;
the judging module is used for acquiring a standard tongue image and comparing the standard tongue image with the tongue image information;
if the standard tongue image is consistent with the tongue image information, judging that the health degree of the patient is normal;
if the standard tongue image is inconsistent with the tongue image information, judging that the health degree of the patient is abnormal, and calibrating the tongue image information into an abnormal tongue image;
the characteristic extraction module is used for acquiring the abnormal tongue image and inputting the abnormal tongue image into the characteristic extraction model to obtain abnormal characteristics and an abnormal area corresponding to the abnormal characteristics;
the calculation module is used for calculating the proportion of the abnormal area in the tongue image and calibrating the proportion as the disease parameter to be evaluated;
and the evaluation module is used for constructing an evaluation interval, comparing the evaluation interval with the disease parameter to be evaluated, and outputting the health degree score of the patient according to the comparison result.
And, a health management terminal based on AI tongue diagnosis image processing, comprising:
at least one processor;
and a memory communicatively coupled to the at least one processor;
wherein the memory stores a computer program executable by the at least one processor, the computer program being executed by the at least one processor to enable the at least one processor to perform the above-mentioned health management method based on AI tongue diagnosis image processing.
The invention has the technical effects that:
the invention can judge whether the health degree of the doctor is abnormal in advance according to the image comparison mode, analyzes the abnormal focus characteristics under the premise of determining the abnormal health degree, extracts the focus characteristics according to the tongue image information of the doctor, evaluates the focus characteristics according to the area and the color depth of the occupied area of the focus characteristics, and outputs the health degree score of the doctor according to the evaluation result, and then the doctor can make a corresponding diagnosis and treatment scheme according to the health degree score.
Drawings
FIG. 1 is a schematic block diagram of a system provided by an embodiment of the present invention;
fig. 2 is a schematic flow chart of a method provided by an embodiment of the invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, embodiments accompanying figures of the present invention are described in detail below.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one preferred embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
Referring to fig. 1 and 2, the present invention provides a health management method based on AI tongue inspection image processing, including:
s1, acquiring identity information of a patient, wherein the identity information of the patient comprises name, sex, age, height, weight, basic diseases and mobile phone number;
s2, tongue image information of the patient is collected, wherein the tongue image information comprises tongue surface information and sublingual information;
s3, acquiring a standard tongue image, and comparing the standard tongue image with tongue image information;
if the standard tongue image is consistent with the tongue image information, judging that the health degree of the patient is normal;
if the standard tongue image is inconsistent with the tongue image information, judging that the health degree of the patient is abnormal, and calibrating the tongue image information into an abnormal tongue image;
s4, acquiring an abnormal tongue image, and inputting the abnormal tongue image into the feature extraction model to obtain abnormal features and abnormal areas corresponding to the abnormal features;
s5, calculating the ratio of the abnormal area in the tongue image, and calibrating the ratio as a disease parameter to be evaluated;
and S6, establishing an evaluation interval, comparing the evaluation interval with the disease parameter to be evaluated, and outputting the health degree score of the patient according to the comparison result.
As described in steps S1-S6 above, tongue diagnosis is a simple and effective method of assisting diagnosis by observing changes in the color and form of the tongue; common conditions include a reddened or cracked tongue coating. When a patient needs to see a doctor, he or she usually has to queue and register at a hospital and then find the corresponding doctor, which wastes considerable time, and during the visit the doctor diagnoses only from the tongue coating and then issues a diagnosis report or informs the patient of the result directly. Since the tongue coatings corresponding to different conditions differ, informatization technology can be combined with tongue diagnosis: the patient takes a picture by himself or herself, uploads it to the image acquisition module, and the picture is calibrated as tongue image information. In the tongue diagnosis process not only the tongue surface but also the area under the tongue needs to be observed, and during acquisition the user can follow the operation prompts of an interactive terminal, which increases the accuracy of tongue image acquisition and avoids repeated acquisition. The judging module then compares the uploaded tongue image information with a standard tongue image: if no abnormality exists, the patient's health degree is judged to be normal; if an abnormality exists, the health degree is judged to be abnormal, the abnormal lesion features are extracted, the proportion of the abnormal region in the tongue image is calculated, and finally the patient's health degree score is evaluated. Before the area of the abnormal region is calculated, it must first be evaluated whether the tongue image information shows an excessively serious symptom degree; if that phenomenon occurs, the patient's health degree score is judged to be 0 and the area of the abnormal region is not evaluated further.
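The comparison of step S3 can be sketched as follows; the patent does not specify a comparison metric, so a mean-absolute-difference test with an illustrative threshold (both assumptions, not part of the disclosure) is used here:

```python
def is_abnormal(tongue_img, standard_img, threshold=12.0):
    """Compare a captured tongue image with the standard tongue image.

    Both images are equally sized 2-D lists of gray values (0-255).
    Returns True when the mean absolute difference exceeds `threshold`,
    i.e. the capture is calibrated as an abnormal tongue image.  The
    metric and threshold are illustrative, not taken from the patent.
    """
    diffs = [
        abs(a - b)
        for row_a, row_b in zip(tongue_img, standard_img)
        for a, b in zip(row_a, row_b)
    ]
    return sum(diffs) / len(diffs) > threshold

# identical images -> health degree judged normal
standard = [[120, 125], [130, 128]]
assert not is_abnormal([[120, 125], [130, 128]], standard)
# strongly deviating image -> calibrated as an abnormal tongue image
assert is_abnormal([[200, 40], [250, 10]], standard)
```

Any stricter comparison (color histograms, learned features) can replace the mean-difference metric without changing the surrounding flow.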
In a preferred embodiment, the step after acquiring the tongue image information of the patient comprises:
s201, acquiring a focus characteristic information set;
s202, obtaining the abnormal tongue image of the patient, and calibrating the abnormal tongue image as an image to be diagnosed;
s203, equally dividing the image to be diagnosed into a plurality of regions to be evaluated, and comparing the regions one by one with a focus characteristic information set;
s204, focus characteristics corresponding to all the areas to be evaluated are screened out from the focus characteristic information set, and are marked as abnormal focuses, wherein the number of the abnormal focuses is n, and the value of n is a positive integer.
As described in steps S201-S204 above, when the patient's health degree is judged to be abnormal, the specific foci must be determined in advance. A lesion feature information set is preset, in which foci with different features are pre-recorded, and by comparison with the image to be diagnosed the specific foci can be determined.
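Steps S201-S204 can be sketched as follows; modelling a lesion feature as a (name, prototype gray value) pair and the matching tolerance are illustrative assumptions, since the patent leaves the feature representation open:

```python
def split_into_regions(img, rows, cols):
    """Equally divide an image (2-D list) into rows*cols regions to be
    evaluated (step S203), returned as a list of 2-D sub-lists."""
    h, w = len(img), len(img[0])
    rh, rw = h // rows, w // cols
    return [
        [r[c * rw:(c + 1) * rw] for r in img[row * rh:(row + 1) * rh]]
        for row in range(rows) for c in range(cols)
    ]

def screen_abnormal_foci(img, feature_set, rows=2, cols=2, tol=10):
    """Compare every region one by one with the lesion feature set and
    screen out the matching lesion features (step S204).  A feature is
    modelled here as (name, prototype mean gray value); `tol` is an
    illustrative matching tolerance, not taken from the patent."""
    matches = []
    for region in split_into_regions(img, rows, cols):
        flat = [v for r in region for v in r]
        mean = sum(flat) / len(flat)
        for name, proto in feature_set:
            if abs(mean - proto) <= tol and name not in matches:
                matches.append(name)
    return matches  # the n abnormal foci, n a positive integer
```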
Secondly, after abnormal focuses are screened out, sorting is carried out according to the number of the abnormal focuses;
acquiring color information and boundary texture information of each abnormal focus;
defining a region containing color information and boundary texture information of the abnormal focus as a foreground image, and defining the rest part as a background image;
and (3) realizing the segmentation of the foreground image and the background image by adopting an interactive segmentation algorithm to obtain a regionalized abnormal focus.
In the above, the distribution position of each focus on the tongue surface or under the tongue is uncertain: the same focus may be concentrated in a single region or spread over several regions of the tongue surface or the sublingual area. Combined with prior experience, an abnormal focus with a large distribution area has a relatively high probability of being severe, although the possibility that an abnormal focus with a small distribution area is severe is not excluded.
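The foreground/background separation above can be sketched as follows. The patent names only "an interactive segmentation algorithm" (GrabCut is a common choice); this dependency-free stand-in uses a simple threshold mask, with the threshold value as an illustrative assumption:

```python
def segment_lesion(img, threshold=150):
    """Toy stand-in for interactive foreground/background segmentation:
    pixels at or above `threshold` are treated as foreground (the
    region carrying the focus's color and boundary-texture
    information), everything else as background.  Returns
    (foreground_mask, background_mask) as 2-D lists of 0/1.  Threshold
    segmentation is a simplification of the interactive algorithm
    (e.g. GrabCut) the patent refers to."""
    fg = [[1 if v >= threshold else 0 for v in row] for row in img]
    bg = [[1 - v for v in row] for row in fg]
    return fg, bg
```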
In a preferred embodiment, the step of obtaining the abnormal tongue image and inputting the abnormal tongue image into the feature extraction model to obtain the abnormal features includes:
acquiring edge feature points of all abnormal focuses;
s401, constructing a virtual coordinate system, acquiring pixel coordinates of edge feature points of all abnormal lesions one by one, respectively summarizing the pixel coordinates into a sample set to be evaluated, and numbering the sample set one by one;
s402, comparing pixel coordinates in a sample set to be evaluated under adjacent numbers one by one, screening out the pixel coordinate closest to the adjacent numbers, and determining the distance between the pixel coordinates as an offset;
and S403, splicing the multiple abnormal focuses one by one according to the offset, and calibrating the spliced image as an abnormal feature.
As described in steps S401-S403 above, after the abnormal foci are segmented from the tongue image information, the abnormal foci located in different, unconnected regions are spliced together to facilitate combined analysis, so that they can subsequently be analyzed uniformly.
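The offset screening of steps S401-S403 can be sketched as follows (the helper name and the brute-force search are illustrative assumptions):

```python
from math import hypot

def splice_offset(edge_a, edge_b):
    """Given the edge pixel coordinates of two adjacent abnormal foci
    (lists of (x, y) tuples in the virtual coordinate system), compare
    the coordinates one by one, screen out the closest pair, and
    return (offset_distance, point_a, point_b).  Brute force is
    adequate for edge sets of modest size."""
    return min(
        ((hypot(ax - bx, ay - by), (ax, ay), (bx, by))
         for ax, ay in edge_a for bx, by in edge_b),
        key=lambda t: t[0],
    )
```

The returned distance is the offset by which one focus is translated toward the other before splicing.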
In a preferred embodiment, the method for shifting the lesion comprises the following steps:
stp1, respectively acquiring the highest point coordinate and the lowest point coordinate of adjacent abnormal focuses, and calibrating the highest point coordinate and the lowest point coordinate as critical coordinates;
stp2, acquiring a longitudinal coordinate difference value in the critical coordinate;
if the difference value of the longitudinal coordinates is larger than zero, judging that adjacent abnormal focuses are longitudinally spliced;
and if the difference value of the vertical coordinates is less than or equal to zero, judging that the adjacent abnormal focus is transversely spliced.
As described in steps Stp1-Stp2 above, although the shortest distance between pixel coordinates is determined during the shifting of the abnormal foci, adjacent abnormal foci may still intersect. To handle this, the highest-point and lowest-point coordinates of adjacent abnormal foci are acquired, sorted in order of their horizontal coordinates, and their vertical coordinates are compared; the specific calculation is:
$\Delta y = y_{\max}^{\,i+1} - y_{\min}^{\,i}$

where $\Delta y$ is the ordinate difference, $y_{\max}^{\,i+1}$ is the highest-point ordinate of the abnormal focus at position $i+1$, and $y_{\min}^{\,i}$ is the lowest-point ordinate of the abnormal focus at position $i$.
If the ordinate difference is greater than zero, the highest point of the abnormal focus at the next position is higher than the lowest point of the abnormal focus at the previous position, and a transverse splice would very likely make the adjacent abnormal foci overlap; a longitudinal splice is therefore required for this phenomenon, in which one abnormal focus is offset twice so that the highest and lowest points of the adjacent abnormal foci are joined together. Otherwise the abnormal foci are spliced transversely; the transverse splicing process is analogous to the longitudinal one and is not repeated here.
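The splice-direction rule above can be sketched as follows (the function name and the edge-coordinate representation are assumptions):

```python
def splice_direction(prev_focus, next_focus):
    """Decide how two adjacent abnormal foci are spliced (Stp1-Stp2).
    Each focus is a list of (x, y) edge coordinates; its critical
    coordinates are its highest and lowest points.  If the highest
    point of the next focus lies above the lowest point of the
    previous one (ordinate difference > 0), the foci would overlap in
    a transverse splice, so a longitudinal splice is chosen instead."""
    y_min_prev = min(y for _, y in prev_focus)
    y_max_next = max(y for _, y in next_focus)
    return "longitudinal" if y_max_next - y_min_prev > 0 else "transverse"
```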
In a preferred embodiment, after determining the abnormal area corresponding to the abnormal feature, the abnormal area is input to the calculation module and the percentage of the abnormal area is calculated, which specifically includes the following steps:
s501, obtaining edge pixel point coordinates of the abnormal area in a virtual coordinate system;
s502, acquiring a target function from a calculation module;
s503, inputting the edge pixel point coordinates into a target function to obtain the area of the abnormal area;
s504, the total area of the tongue image is obtained and compared with the area of the abnormal area, the proportion of the abnormal area in the tongue image is obtained, and the proportion is calibrated as a disease parameter to be evaluated.
As described in steps S501-S504 above, the calculation module is executed to calculate the area of the abnormal region, which provides data support for the subsequent evaluation of the patient's health degree. Specifically, a Canny algorithm can be used to obtain the edge feature points of the abnormal region, track its edges and synchronously generate a tracking curve, thereby obtaining a two-dimensionally closed abnormal region; then, combined with the objective function in the calculation module:
$S = \dfrac{1}{2}\left|\sum_{i=1}^{n}\left(x_i\,y_{i+1} - x_{i+1}\,y_i\right)\right|$

the area of the abnormal region can be calculated, where $S$ is the area of the abnormal region, $n$ is the number of pixel coordinates, and $x_i$ and $y_i$ are the abscissa and ordinate of the $i$-th of the pixel coordinates numbered $1$ to $n$ (the index $n+1$ wraps around to $1$). Combined with the formula

$P = S / S_0$

the proportion of the abnormal region in the tongue image can be calculated, where $P$ is the disease parameter to be evaluated and $S_0$ is the total area of the tongue image; subsequent evaluation can then be performed on the basis of this parameter to judge the patient's health degree.
In a preferred embodiment, when the abnormal region is determined, the abnormal region is subjected to binarization processing;
acquiring coordinates of each pixel point in the abnormal area;
and constructing a super-red algorithm, and inputting each pixel point into the super-red algorithm one by one to obtain a deepened gray value and a deepened image of the abnormal region.
In this embodiment, the color depth of the segmented abnormal region also matters for the evaluation, and a particularly deep color may correspond to a relatively serious condition. After the autonomous evaluation, subjective evaluation by a doctor is still needed; in order to highlight the lesion features, this embodiment adopts a super-red algorithm to emphasize the diseased area so that the doctor can evaluate the degree of the condition. In the tongue diagnosis process the tongue coating may appear whitish or yellowish, which remains hard for the doctor to distinguish even after the color depth is strengthened, so the feature color can be converted into red before being input into the super-red algorithm; this is a conventional and easily realized technical means that is not limited or described further here. After the conversion into red, the super-red algorithm is applied:
$g = 2R - G - B$

where $R$, $G$ and $B$ are the three-channel gray values of the abnormal region and $g$ is the gray value calculated by the super-red algorithm. Based on this formula, the color depth of the abnormal region can be strengthened to obtain a deepened gray value and a deepened image.
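The strengthening step can be sketched as follows; the patent's super-red formula is reproduced only as an image, so the common excess-red form g = 2R − G − B, clamped to the 0-255 gray range, is assumed here:

```python
def super_red(pixels):
    """Apply an excess-red ("super-red") transform pixel by pixel.
    The form g = 2R - G - B is an assumption (the patent's exact
    formula is not reproduced in the text); results are clamped to the
    0-255 gray range.  `pixels` is a list of (R, G, B) tuples; the
    result is the list of deepened gray values."""
    return [max(0, min(255, 2 * r - g - b)) for r, g, b in pixels]

# a reddish lesion pixel maps to a high (deepened) gray value,
# a neutral background pixel maps to 0
assert super_red([(200, 40, 30), (90, 90, 90)]) == [255, 0]
```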
In a preferred embodiment, the step of outputting the score of the health degree of the patient comprises:
s601, obtaining evaluation intervals, wherein the evaluation intervals comprise a first evaluation interval and a second evaluation interval, the first evaluation interval is used for evaluating deepened gray values, the second evaluation interval is used for evaluating the area of an abnormal area, the comparison priority of the first evaluation interval is higher than that of the second evaluation interval, the number of the second evaluation intervals is multiple, each second evaluation interval corresponds to one health degree score, the value of the health degree score is 0-100, the number of the first evaluation intervals is one, and the corresponding health degree score is 0;
s602, acquiring a first evaluation interval;
s603, comparing the deepened gray values with the first evaluation interval one by one, and judging whether deepened gray values exceeding the first evaluation interval exist in the abnormal area or not;
if yes, judging the health degree score of the patient to be 0;
if the abnormal area does not exist, the area of the abnormal area is obtained and compared with the second evaluation interval, and the health degree score of the patient is obtained.
As described in steps S601-S603 above, when the patient's health degree is evaluated, a scoring system is used to determine the output result: the health degree score ranges from 0 to 100, and the higher the score, the closer the patient's health degree is to normal. Before the area of the abnormal region is evaluated, the deepened gray values must be evaluated first: if the abnormal region contains a deepened gray value falling in the first evaluation interval, the patient's health degree score is directly output as 0, indicating that the patient needs timely treatment, and the area of the abnormal region need not be evaluated. If no deepened gray value exceeds the first evaluation interval, the disease parameter to be evaluated is compared with the second evaluation intervals, of which several are provided, each corresponding to a different health degree score; for example, the second evaluation intervals of the disease parameter may be set as (0, 20], (20, 40], (40, 60], (60, 80] and (80, 100], each mapped to its own health degree score. The doctor can then formulate a corresponding diagnosis and treatment plan according to the output health degree score.
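The two-stage scoring logic of steps S601-S603 can be sketched as follows; the interval bounds and the per-interval scores are illustrative assumptions, since the patent leaves the concrete values open:

```python
def health_score(deepened_grays, area_parameter,
                 first_interval=(240, 255),
                 second_intervals=((0, 20, 90), (20, 40, 70),
                                   (40, 60, 50), (60, 80, 30),
                                   (80, 100, 10))):
    """The first evaluation interval has priority: any deepened gray
    value falling in it means an overly serious symptom, so the health
    degree score is 0 and the area is not evaluated.  Otherwise the
    area-based disease parameter (given here as a percentage) is
    matched against the second evaluation intervals, each carrying its
    own health degree score.  All bounds/scores are illustrative."""
    lo, hi = first_interval
    if any(lo <= g <= hi for g in deepened_grays):
        return 0
    for low, high, score in second_intervals:
        if low < area_parameter <= high:
            return score
    return 100  # no abnormal area at all

assert health_score([250, 10], 5) == 0      # first interval triggered
assert health_score([100, 120], 15) == 90   # small abnormal area
assert health_score([100], 75) == 30        # large abnormal area
```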
The invention also provides a health degree management system based on AI tongue diagnosis image processing, which applies the above health degree management method based on AI tongue diagnosis image processing and comprises:
a front-end data acquisition module, which is used for acquiring the identity information of the patient, wherein the identity information of the patient comprises name, gender, age, height, weight, basic diseases and mobile phone number;
the image acquisition module is used for acquiring tongue image information of the patient, wherein the tongue image information comprises tongue surface information and sublingual information;
the judging module is used for acquiring a standard tongue image and comparing the standard tongue image with tongue image information;
if the standard tongue image is consistent with the tongue image information, judging that the health degree of the patient is normal;
if the standard tongue image is inconsistent with the tongue image information, judging that the health degree of the patient is abnormal, and calibrating the tongue image information as an abnormal tongue image;
the characteristic extraction module is used for acquiring the abnormal tongue image and inputting the abnormal tongue image into the characteristic extraction model to obtain abnormal characteristics and an abnormal area corresponding to the abnormal characteristics;
the calculation module is used for calculating the ratio of the abnormal area in the tongue image and calibrating the ratio as a disease parameter to be evaluated;
and the evaluation module is used for constructing an evaluation interval, comparing the evaluation interval with the disease parameter to be evaluated, and outputting the health degree score of the patient according to the comparison result.
As described above, in the tongue diagnosis process, not only the tongue surface but also the underside of the tongue needs to be observed. While the patient acquires the tongue image information by himself, the image acquisition module can improve the accuracy of acquisition through corresponding operation prompts and avoid wasting the patient's time on repeated acquisition; this is a conventional human-computer interaction technique (similar to portrait recognition) and is not described in detail here. Of course, the identity information of the patient needs to be collected beforehand, so that the analyzed tongue image information can be accurately matched to the patient. The tongue image information acquired by the data acquisition module is uploaded to the judging module, which uses the standard tongue image information to judge whether an abnormal condition exists in the uploaded tongue image information. If no abnormal condition exists, the health degree of the patient is judged to be normal; if an abnormal condition exists, the health degree of the patient is judged to be abnormal, the abnormal part is then extracted by the feature extraction module, the area of the abnormal region is calculated by the calculation module, and finally the health degree score of the patient is evaluated by the evaluation module.
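The data flow between the modules described above can be sketched as follows. The sketch is an assumption-laden simplification: the comparison with the standard tongue image is reduced to a precomputed 0/1 difference mask, feature extraction is elided, and the interval-to-score mapping is illustrative rather than the patent's.

```python
# Hypothetical sketch of the module pipeline (acquisition -> judging ->
# calculation -> evaluation). The 0/1 diff_mask stands in for the output
# of comparing the acquired tongue image with the standard tongue image.

def run_pipeline(identity, diff_mask):
    """Chain the judging, calculation and evaluation modules.

    identity:  dict produced by the front-end data acquisition module
    diff_mask: 2D 0/1 grid, 1 where the tongue image differs from the standard
    """
    total_pixels = sum(len(row) for row in diff_mask)
    abnormal_area = sum(sum(row) for row in diff_mask)       # calculation module
    if abnormal_area == 0:                                   # judging module: consistent
        return {**identity, "health": "normal", "score": 100}
    # disease parameter to be evaluated: abnormal-area ratio in percent
    ratio = 100.0 * abnormal_area / total_pixels
    # evaluation module: assumed step mapping, 20-point bands from 100 down to 0
    score = max(0, 100 - 20 * (int(ratio // 20) + 1))
    return {**identity, "health": "abnormal", "score": score}
```

A mask with no differing pixels yields a "normal" record with score 100; one differing pixel out of four (a 25% ratio) falls into the assumed second band and scores 60.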
The invention further provides a health degree management terminal based on AI tongue diagnosis image processing, comprising:
at least one processor;
and a memory communicatively coupled to the at least one processor;
wherein the memory stores a computer program executable by the at least one processor, and the computer program, when executed by the at least one processor, enables the at least one processor to perform the health degree management method based on AI tongue diagnosis image processing.
Those skilled in the art will appreciate that the health management terminal of the present invention may be specially designed and manufactured for the required purposes, or may comprise known general-purpose computing devices. These devices have stored therein computer programs or applications that are selectively activated or reconfigured. Such a computer program may be stored in a device (e.g., computer) readable medium, including, but not limited to, any type of disk including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks, ROMs (Read-Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read-Only Memories), EEPROMs (Electrically Erasable Programmable Read-Only Memories), flash memories, magnetic cards, or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a bus. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, apparatus, article, or method that comprises the element.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be construed as the protection scope of the present invention. Structures, devices, and methods of operation not specifically described or illustrated herein are generally practiced in the art without specific recitation or limitation.

Claims (10)

1. A health degree management method based on AI tongue diagnosis image processing is characterized in that: the method comprises the following steps:
acquiring the identity information of the patient, wherein the identity information of the patient comprises name, gender, age, height, weight, basic diseases and mobile phone number;
acquiring tongue image information of the patient, wherein the tongue image information comprises tongue surface information and sublingual information;
acquiring a standard tongue image and comparing the standard tongue image with the tongue image information;
if the standard tongue image is consistent with the tongue image information, judging that the health degree of the patient is normal;
if the standard tongue image information is inconsistent with the tongue image information, judging that the health degree of the patient is abnormal, and calibrating the tongue image information into an abnormal tongue image;
acquiring an abnormal tongue image, and inputting the abnormal tongue image into a feature extraction model to obtain abnormal features and abnormal areas corresponding to the abnormal features;
calculating the ratio of the abnormal region in the tongue image, and calibrating the abnormal region as a disease parameter to be evaluated;
and constructing an evaluation interval, comparing the evaluation interval with the disease parameter to be evaluated, and outputting a health degree score of the patient according to the comparison result.
2. The AI tongue diagnosis image processing-based health management method according to claim 1, wherein: the step after the tongue image information of the patient is collected comprises the following steps:
acquiring a focus characteristic information set;
acquiring the abnormal tongue image of the patient, and calibrating the abnormal tongue image as an image to be diagnosed;
equally dividing the image to be diagnosed into a plurality of regions to be evaluated, and comparing the regions to be evaluated with the focus characteristic information set one by one;
and focus characteristics corresponding to all the areas to be evaluated are screened out from the focus characteristic information set and are marked as abnormal focuses, wherein the number of the abnormal focuses is n, and the value of n is a positive integer.
3. The AI tongue diagnosis image processing-based health management method according to claim 2, wherein: after the abnormal focus is screened out, sorting according to the number of the abnormal focus;
acquiring color information and boundary texture information of each abnormal focus;
defining a foreground image in a region containing color information and boundary texture information of an abnormal focus, and defining the rest part as a background image;
and realizing the segmentation of the foreground image and the background image by adopting an interactive segmentation algorithm to obtain a regionalized abnormal focus.
4. The AI tongue diagnosis image processing-based health management method according to claim 3, wherein: the step of obtaining the abnormal tongue image and inputting the abnormal tongue image into the feature extraction model to obtain the abnormal features comprises the following steps:
acquiring edge feature points of all abnormal focuses;
constructing a virtual coordinate system, acquiring pixel coordinates of edge feature points of all abnormal focuses one by one, respectively summarizing the pixel coordinates into a sample set to be evaluated, and numbering the pixel coordinates one by one;
comparing pixel coordinates in a sample set to be evaluated under adjacent numbers one by one, screening out the pixel coordinate closest to the adjacent numbers, and determining the distance between the pixel coordinates as an offset;
and splicing a plurality of abnormal focuses one by one according to the offset, and calibrating the spliced image as an abnormal feature.
5. The AI tongue diagnosis image processing-based health management method according to claim 4, wherein: when the abnormal focus is shifted, the method comprises the following steps:
respectively acquiring the highest point coordinate and the lowest point coordinate of the adjacent abnormal focus, and calibrating the coordinates into critical coordinates;
acquiring a longitudinal coordinate difference value in the critical coordinate;
if the difference value of the longitudinal coordinates is larger than zero, judging that the adjacent abnormal focus is longitudinally spliced;
and if the difference value of the longitudinal coordinates is less than or equal to zero, judging that the adjacent abnormal focus is transversely spliced.
6. The AI tongue diagnosis image processing-based health management method according to claim 4, wherein: after the abnormal area corresponding to the abnormal feature is determined, the abnormal area is input to the calculation module and the occupation ratio of the abnormal area is calculated, and the specific process is as follows:
acquiring the edge pixel point coordinates of the abnormal area in the virtual coordinate system;
obtaining an objective function from the calculation module;
inputting the edge pixel point coordinates into a target function to obtain the area of an abnormal area;
and acquiring the total area of the tongue image, comparing the total area with the area of the abnormal area to obtain the ratio of the abnormal area in the tongue image, and calibrating the ratio as a disease parameter to be evaluated.
7. The AI tongue diagnosis image processing-based health management method according to claim 6, wherein: when the abnormal area is determined, carrying out binarization processing on the abnormal area;
acquiring coordinates of each pixel point in the abnormal area;
and constructing a super-red algorithm, and inputting each pixel point into the super-red algorithm one by one to obtain a deepened gray value and a deepened image of the abnormal region.
8. The AI tongue diagnosis image processing-based health management method according to claim 1, wherein: the step of outputting the health degree score of the patient comprises the following steps:
acquiring an evaluation interval, wherein the evaluation interval comprises a first evaluation interval and a second evaluation interval, the first evaluation interval is used for evaluating deepened gray values, the second evaluation interval is used for evaluating the area of an abnormal area, the comparison priority of the first evaluation interval is higher than that of the second evaluation interval, a plurality of second evaluation intervals are arranged, each second evaluation interval corresponds to one health degree score, the value of the health degree score is 0-100, one first evaluation interval is arranged, and the corresponding health degree score is 0;
acquiring a first evaluation interval;
comparing the deepened gray values with a first evaluation interval one by one, and judging whether deepened gray values exceeding the first evaluation interval exist in the abnormal area or not;
if yes, judging the health degree score of the patient to be 0;
and if no such deepened gray value exists, acquiring the area of the abnormal region, and comparing it with the second evaluation interval to obtain the health degree score of the patient.
9. A health degree management system based on AI tongue diagnosis image processing, applied to the health degree management method based on AI tongue diagnosis image processing according to any one of claims 1 to 8, characterized in that the system comprises:
a front-end data acquisition module, which is used for acquiring the identity information of the patient, wherein the identity information of the patient comprises name, gender, age, height, weight, basic diseases and mobile phone number;
the image acquisition module is used for acquiring tongue image information of the patient, wherein the tongue image information comprises tongue surface information and sublingual information;
the judging module is used for acquiring a standard tongue image and comparing the standard tongue image with the tongue image information;
if the standard tongue image is consistent with the tongue image information, judging that the health degree of the patient is normal;
if the standard tongue image information is inconsistent with the tongue image information, judging that the health degree of the patient is abnormal, and calibrating the tongue image information into an abnormal tongue image;
the characteristic extraction module is used for acquiring the abnormal tongue image and inputting the abnormal tongue image into the characteristic extraction model to obtain abnormal characteristics and an abnormal area corresponding to the abnormal characteristics;
the calculation module is used for calculating the ratio of the abnormal area in the tongue image and calibrating the ratio as a disease parameter to be evaluated;
and the evaluation module is used for constructing an evaluation interval, comparing the evaluation interval with the disease parameters to be evaluated and outputting the health degree score of the patient according to the comparison result.
10. A health degree management terminal based on AI tongue diagnosis image processing is characterized in that: the method comprises the following steps:
at least one processor;
and a memory communicatively coupled to the at least one processor;
wherein the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the method for health management based on AI tongue diagnosis image processing according to any one of claims 1 to 8.
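Claim 6 computes the abnormal-area ratio by feeding edge pixel coordinates into an unspecified "objective function". A common concrete choice for the area of a region given its ordered boundary points is the shoelace formula; the sketch below uses it as an assumed stand-in, not as the patent's actual function.

```python
# Assumed stand-in for claim 6's "objective function": the shoelace
# formula over the ordered edge pixel coordinates of the abnormal region.

def polygon_area(points):
    """Area of a simple polygon from its ordered (x, y) vertices."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def abnormal_ratio(edge_points, tongue_total_area):
    """Ratio of the abnormal region to the whole tongue image (claim 6)."""
    return polygon_area(edge_points) / tongue_total_area
```

For example, a rectangular abnormal region with corners (0, 0), (4, 0), (4, 2), (0, 2) has area 8.0, giving a ratio of 0.5 against a tongue image of total area 16.0.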
CN202310238957.9A 2023-03-14 2023-03-14 Health degree management system and management method based on AI tongue diagnosis image processing Pending CN115954101A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310238957.9A CN115954101A (en) 2023-03-14 2023-03-14 Health degree management system and management method based on AI tongue diagnosis image processing

Publications (1)

Publication Number Publication Date
CN115954101A true CN115954101A (en) 2023-04-11

Family

ID=87282824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310238957.9A Pending CN115954101A (en) 2023-03-14 2023-03-14 Health degree management system and management method based on AI tongue diagnosis image processing

Country Status (1)

Country Link
CN (1) CN115954101A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1367454A (en) * 2002-03-25 2002-09-04 北京工业大学 Chinese medicine tongue colour, fur colour and tongue fur thickness analysis method based on multiclass support vector machine
CN1973757A (en) * 2006-10-11 2007-06-06 哈尔滨工业大学 Computerized disease sign analysis system based on tongue picture characteristics
CN114947756A (en) * 2022-07-29 2022-08-30 杭州咏柳科技有限公司 Atopic dermatitis severity intelligent evaluation decision-making system based on skin image
CN115644799A (en) * 2022-09-07 2023-01-31 中国科学院微电子研究所 Tongue picture characteristic data processing method based on machine vision
CN115760858A (en) * 2023-01-10 2023-03-07 东南大学附属中大医院 Kidney pathological section cell identification method and system based on deep learning

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117172967A (en) * 2023-08-11 2023-12-05 广州市抖品品牌管理有限公司 Enterprise brand propaganda service management system
CN117172967B (en) * 2023-08-11 2024-06-14 宁远众创空间创业服务有限公司 Enterprise brand propaganda service management system
CN117522865A (en) * 2024-01-03 2024-02-06 长春中医药大学 Traditional Chinese medicine health monitoring system based on image recognition technology
CN117522865B (en) * 2024-01-03 2024-03-22 长春中医药大学 Traditional Chinese medicine health monitoring system based on image recognition technology

Similar Documents

Publication Publication Date Title
US11935644B2 (en) Deep learning automated dermatopathology
US11631175B2 (en) AI-based heat map generating system and methods for use therewith
US11922348B2 (en) Generating final abnormality data for medical scans based on utilizing a set of sub-models
US11468564B2 (en) Systems and methods for automatic detection and quantification of pathology using dynamic feature classification
CN111524137B (en) Cell identification counting method and device based on image identification and computer equipment
CN115954101A (en) Health degree management system and management method based on AI tongue diagnosis image processing
CN111402260A (en) Medical image segmentation method, system, terminal and storage medium based on deep learning
CN111047591A (en) Focal volume measuring method, system, terminal and storage medium based on deep learning
US20220037019A1 (en) Medical scan artifact detection system and methods for use therewith
CN111652300A (en) Spine curvature classification method, computer device and storage medium
US20240161035A1 (en) Multi-model medical scan analysis system and methods for use therewith
CN111652862B (en) Spinal column sequential classification method, computer device, and storage medium
CN114332132A (en) Image segmentation method and device and computer equipment
CN110570425B (en) Pulmonary nodule analysis method and device based on deep reinforcement learning algorithm
KR20180045473A (en) System, method and computer program for melanoma detection using image analysis
CN117274278B (en) Retina image focus part segmentation method and system based on simulated receptive field
CN112634221B (en) Cornea hierarchy identification and lesion positioning method and system based on images and depth
CN110718299B (en) Rapid prediction device for liver cancer risk level
CN109767468B (en) Visceral volume detection method and device
CN115690556A (en) Image recognition method and system based on multi-modal iconography characteristics
KR102384083B1 (en) Apparatus and Method for Diagnosing Sacroiliac Arthritis and Evaluating the Degree of Inflammation using Magnetic Resonance Imaging
CN114170415A (en) TMB classification method and system based on histopathology image depth domain adaptation
Lensink et al. Segmentation of pulmonary opacification in chest ct scans of covid-19 patients
CN116245881B (en) Renal interstitial fibrosis assessment method and system based on full-field recognition
CN118193770B (en) Medical image retrieval method and system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230411