CN116187470A - Tongue diagnosis image color correction model training method, color correction method and equipment - Google Patents
- Publication number
- CN116187470A (Application number CN202310071035.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- tongue
- training
- color
- color correction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0082—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes
- A61B5/0088—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes for oral or dental tissue
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/45—For evaluating or diagnosing the musculoskeletal system or teeth
- A61B5/4538—Evaluating a particular part of the muscoloskeletal system or a particular medical condition
- A61B5/4542—Evaluating the mouth, e.g. the jaw
- A61B5/4552—Evaluating soft tissue within the mouth, e.g. gums or tongue
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4854—Diagnosis based on concepts of traditional oriental medicine
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses a tongue diagnosis image color correction model training method, a color correction method and equipment. The training method comprises the following steps: acquiring tongue images shot under flashes of preset colors as initial images to obtain a plurality of initial images, wherein there are N preset colors and N is a positive integer greater than 3; for each initial image, identifying the tongue region with a semantic segmentation network, generating a first mark for the tongue region and a second mark for the region outside it, to obtain a training image containing both marks; determining the color cast information corresponding to each preset color according to the training image and its preset color; and training an initial color correction model based on the color cast information and the training images to obtain a trained tongue diagnosis image color correction model. The trained model can quickly identify the color cast that different lighting imposes on the true tongue color, which helps improve the accuracy of color correction.
Description
Technical Field
The present invention relates to the field of data processing, and in particular to a tongue diagnosis image color correction model training method, a color correction method, and related equipment.
Background
With the development of electronic information technology, online medical consultation has become increasingly convenient, and tongue diagnosis is widely applied in traditional Chinese medicine and accepted by most people. Professional tongue diagnosis instruments have been developed: a user extends the tongue in front of a professional machine, which takes pictures and produces a professional detection report. Some companies also provide tongue diagnosis apps that remotely assist the user with tongue examination. Professional tongue diagnosis instruments are accurate but expensive, and require registering at a hospital; for sub-health conditions and minor ailments, a hospital visit is often unnecessary.
A tongue diagnosis instrument emits standard white light onto the tongue before photographing, so the tongue color can be determined accurately. A tongue inspection app, however, relies on natural light and takes photos under whatever illumination is present: under the sun, under a fluorescent lamp, or indoors in daytime. Because natural lighting conditions vary so widely and there is no way to control them, the same tongue photographed in different environments can yield different diagnosis results; the color of the captured image carries errors, and the quality of the tongue inspection image cannot meet application requirements.
Disclosure of Invention
The embodiments of the invention provide a tongue diagnosis image color correction model training method, a color correction method, an apparatus, computer equipment and a storage medium, so as to improve the quality of tongue diagnosis images.
In order to solve the above technical problems, an embodiment of the present application provides a method for training a color correction model of a tongue diagnosis image, where the method for training a color correction model of a tongue diagnosis image includes:
acquiring tongue images shot under flashes of preset colors as initial images to obtain a plurality of initial images, wherein there are N preset colors and N is a positive integer greater than 3;
for each initial image, a semantic segmentation network is adopted, a tongue region in the initial image is identified, a first mark is generated for the tongue region, a second mark is generated for a region outside the tongue region, and a training image containing the first mark and the second mark is obtained;
determining color cast information corresponding to each preset color according to the training image and the preset color corresponding to the training image;
and training an initial color correction model based on the color cast information and the training image to obtain a trained tongue diagnosis image color correction model.
Optionally, the preset colors include white, red, green and blue, and the shooting parameters are the same when the initial image is acquired.
Optionally, determining the color cast information corresponding to each preset color according to the training image and the preset color corresponding to the training image includes:
shooting under a natural light state to obtain a reference image, wherein shooting parameters of the reference image are the same as those of the initial image;
taking an image of a first mark area in the training image as an identification area image, and acquiring an image of an area corresponding to the first mark area in the reference image as a reference area image;
and determining, by comparing the identification area image with the reference area image, the color cast information of the preset color corresponding to the training image.
Optionally, when the preset color is white, the acquiring, as the initial image, a captured tongue image in a flash state of the preset color includes: setting a white reference object in the shooting visual field range;
the determining color cast information corresponding to each preset color according to the training image and the preset color corresponding to the training image comprises:
and performing color cast calculation on the image of the first mark area in the training image and the white reference object to obtain the white color cast information.
Optionally, the initial color correction model is a machine learning model or a neural network model.
In order to solve the above technical problem, the embodiment of the present application further provides a color correction method for a tongue diagnosis image, including:
acquiring a tongue diagnosis image to be calibrated;
and calibrating the tongue diagnosis image to be calibrated by adopting a trained color correction model of the tongue diagnosis image to obtain a calibration image.
In order to solve the above technical problem, an embodiment of the present application further provides a color correction model training device for tongue diagnosis images, including:
the initial image acquisition module is used for acquiring tongue images shot under flashes of preset colors as initial images to obtain a plurality of initial images, wherein there are N preset colors and N is a positive integer greater than 3;
the training image generation module is used for identifying tongue regions in the initial images by adopting a semantic segmentation network aiming at each initial image, generating first marks for the tongue regions, and generating second marks for regions outside the tongue regions to obtain training images containing the first marks and the second marks;
the color cast information determining module is used for determining color cast information corresponding to each preset color according to the training image and the preset color corresponding to the training image;
and the correction model training module is used for training an initial color correction model based on the color cast information and the training image to obtain a color correction model of the trained tongue diagnosis image.
Optionally, the color cast information determining module includes:
a reference image acquisition unit, configured to acquire a reference image by shooting in a natural light state, where a shooting parameter of the reference image is the same as a shooting parameter of the initial image;
the identification area determining unit is used for taking an image of a first mark area in the training image as an identification area image and acquiring an image of an area corresponding to the first mark area in the reference image as a reference area image;
and the comparison calculation unit is used for determining the color cast information of the training image corresponding to the preset color by comparing the identification area image with the reference area image.
Optionally, when the preset color is white, the initial image acquisition module is further used for setting a white reference object within the shooting field of view;
the color cast information determining module comprises:
and the color cast calculation unit is used for carrying out color cast calculation on the image of the first mark area in the training image and the white reference object to obtain the white color cast information.
In order to solve the above technical problem, an embodiment of the present application further provides a color correction device for a tongue diagnosis image, which is characterized by comprising:
the image acquisition module to be calibrated is used for acquiring tongue diagnosis images to be calibrated;
and the image calibration module is used for calibrating the tongue diagnosis image to be calibrated by adopting a trained tongue diagnosis image color correction model to obtain a calibration image.
In order to solve the above technical problem, the embodiments of the present application further provide a computer device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the steps of the method for training the color correction model of the tongue diagnosis image when executing the computer program, or the processor implements the steps of the method for color correction of the tongue diagnosis image when executing the computer program.
In order to solve the above technical problem, the embodiments of the present application further provide a computer readable storage medium storing a computer program, where the computer program implements the steps of the method for training a color correction model of a tongue diagnosis image when executed by a processor, or implements the steps of the method for color correction of a tongue diagnosis image when executed by a processor.
According to the tongue diagnosis image color correction model training method, color correction method, apparatus, computer equipment and storage medium above: tongue images shot under flashes of preset colors are acquired as initial images to obtain a plurality of initial images, where there are N preset colors and N is a positive integer greater than 3; for each initial image, a semantic segmentation network identifies the tongue region, a first mark is generated for the tongue region and a second mark for the region outside it, yielding a training image containing both marks; the color cast information corresponding to each preset color is determined from the training image and its preset color; and an initial color correction model is trained on the color cast information and the training images to obtain the trained tongue diagnosis image color correction model. The trained model can quickly identify the color cast that different lighting imposes on the true tongue color, which helps improve the accuracy of color correction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a color correction model training method for tongue images of the present application;
FIG. 3 is a flow chart of one embodiment of a color correction method for a tongue diagnostic image of the present application;
FIG. 4 is a schematic diagram of one embodiment of a color correction model training apparatus for tongue images according to the present application;
FIG. 5 is a schematic structural diagram of one embodiment of a color correction device for tongue inspection images according to the present application;
FIG. 6 is a schematic structural diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the applications herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description and claims of the present application and in the description of the figures above are intended to cover non-exclusive inclusions. The terms first, second and the like in the description and in the claims or in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, as shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and so on.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that, the method for training the color correction model of the tongue diagnosis image provided in the embodiment of the present application is executed by the server, and accordingly, the device for training the color correction model of the tongue diagnosis image is set in the server.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. Any number of terminal devices, networks and servers may be provided according to implementation requirements, and the terminal devices 101, 102 and 103 in the embodiments of the present application may specifically correspond to application systems in actual production.
Referring to fig. 2, fig. 2 shows a training method for a color correction model of a tongue diagnosis image according to an embodiment of the present invention, and the method is applied to a server in fig. 1 for illustration, and is described in detail as follows:
s201: and acquiring a shot tongue image in a flash state of a preset color as an initial image to obtain a plurality of initial images, wherein the preset color is N types, and N is a positive integer larger than 3.
Optionally, the preset colors include white, red, green and blue, and the photographing parameters are the same when the initial image is acquired.
In one embodiment, when shooting tongue images, the mobile phone screen is controlled to emit flashes of various colors such as white, red, green and blue, and a tongue photo is shot under each flash. Note that every photo must be taken with the same camera parameters (such as exposure time and white balance), so that differences in shooting parameters do not affect the captured colors.
S202: and aiming at each initial image, adopting a semantic segmentation network, identifying a tongue region in the initial image, generating a first mark for the tongue region, and generating a second mark for a region outside the tongue region to obtain a training image containing the first mark and the second mark.
The semantic segmentation network performs semantic segmentation on the image; this method adopts UNet as the semantic segmentation network. Specifically: take a number of initial images containing the tongue, mark the tongue area to obtain the first mark and the non-tongue area (such as the lips) to obtain the second mark, and train a semantic segmentation model so that tongue mark (first mark) = Model(photo). The model thereby gains the ability to identify the tongue area; a new unmarked image can subsequently be given to the model, and the tongue region is identified by this semantic segmentation model.
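As an illustrative sketch (function and variable names are hypothetical, not from the patent), producing the first/second marks from a binary segmentation output is a one-line NumPy operation:

```python
import numpy as np

def mark_regions(seg_mask: np.ndarray, first_mark: int = 1, second_mark: int = 2) -> np.ndarray:
    """Turn a binary tongue segmentation mask (nonzero = tongue) into a marked image:
    tongue pixels receive `first_mark`, all other pixels receive `second_mark`."""
    return np.where(seg_mask > 0, first_mark, second_mark).astype(np.uint8)

# toy 2x3 mask: left column is tongue
mask = np.array([[1, 0, 0],
                 [1, 0, 0]])
marks = mark_regions(mask)
```

The marked image can then accompany the original photo as a training sample, with the first-mark area selecting tongue pixels in later steps.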
S203: and determining color cast information corresponding to each preset color according to the training image and the preset color corresponding to the training image.
Optionally, determining the color cast information corresponding to each preset color according to the training image and the preset color corresponding to the training image includes:
shooting under a natural light state to obtain a reference image, wherein shooting parameters of the reference image are the same as those of the initial image;
taking an image of a first mark area in the training image as an identification area image, and acquiring an image of an area corresponding to the first mark area in the reference image as a reference area image;
and determining the color cast information of the preset color corresponding to the training image by comparing the identification area image with the reference area image.
In a specific embodiment, when the mobile phone screen flashes red, the tongue image shot by the front camera is the "red photo"; when the screen does not flash, the image shot by the front camera serves as the reference image. After the tongue area is identified with the semantic segmentation model, the pixel values of the reference area image are subtracted from the pixel values of the tongue area (identification area image) of the red photo; the difference is the color cast information corresponding to red, denoted F_red.
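A minimal NumPy sketch of this subtraction (aggregating the tongue region by a per-channel mean is an illustrative assumption; the patent only specifies subtracting the reference pixel values from the red photo's tongue region):

```python
import numpy as np

def color_cast(flash_img: np.ndarray, ref_img: np.ndarray, tongue_mask: np.ndarray) -> np.ndarray:
    """Per-channel mean difference between the flash photo and the reference photo,
    restricted to the tongue region (first-mark area).
    flash_img, ref_img: (H, W, 3); tongue_mask: (H, W), nonzero = tongue."""
    sel = tongue_mask.astype(bool)
    return flash_img[sel].astype(float).mean(axis=0) - ref_img[sel].astype(float).mean(axis=0)

# toy example: the red flash raises the R channel by 40 inside the tongue region
ref = np.full((2, 2, 3), 100, dtype=np.uint8)
red_photo = ref.copy()
mask = np.array([[1, 1], [0, 0]])
red_photo[0, :, 0] += 40
f_red = color_cast(red_photo, ref, mask)
```

The same function applied to the green- and blue-flash photos yields F_green and F_blue.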
Optionally, when the preset color is white, acquiring a captured tongue image in a flash state of the preset color as the initial image includes: setting a white reference object in the shooting visual field range;
according to the training image and the preset colors corresponding to the training image, determining the color cast information corresponding to each preset color comprises:
and performing color cast calculation on the image of the first mark area in the training image and the white reference object to obtain white color cast information.
When the mobile phone flashes white light, the tongue photo is "Pic_white", and a color bias (such as a red or purple tint) may exist due to the ambient light. If a small piece of white paper is stuck near the corner of the photographed person's mouth, the true color cast can be computed accurately from the color of the photographed white paper after the photo is taken (for example, if the white paper appears greenish, the current ambient light is greenish). This true color cast information is denoted "Color_offset_true".
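A minimal sketch of the white-paper estimate (names hypothetical, and treating ideal white as (255, 255, 255) is an assumption): average the photographed paper pixels and subtract ideal white; the signed per-channel result is the tint the ambient light adds.

```python
import numpy as np

def white_paper_offset(paper_pixels: np.ndarray, ideal_white=(255.0, 255.0, 255.0)) -> np.ndarray:
    """Estimate the true color cast from the photographed white-paper patch:
    mean patch color minus ideal white, per channel."""
    return paper_pixels.reshape(-1, 3).astype(float).mean(axis=0) - np.asarray(ideal_white)

# a patch that photographed slightly greenish under the ambient light
patch = np.full((4, 4, 3), (245, 255, 245), dtype=np.uint8)
color_offset_true = white_paper_offset(patch)
```

Here the R and B channels come out negative while G is unchanged, matching the "greenish ambient light" reading described above.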
The tongue color is determined by the different reflectances of the tongue surface for red, green and blue light. If the tongue reflects red light strongly, the tongue looks redder and "F_red" is larger; that is, "F_red" is proportional to the tongue's reflectance for red light. We can therefore evaluate the true color of the tongue through "F_red, F_green, F_blue" without being affected by ambient light. Note, however, that the values of "F_red, F_green, F_blue" computed via screen flashing carry some error, because mobile device screen flashes are weak.
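Under a simple linear image-formation assumption (observed color = reflectance × illumination per channel — an illustrative model, not stated in the patent), the flash-minus-reference difference cancels the ambient term, which is why F_red is proportional to the tongue's red reflectance:

```python
import numpy as np

# Linear capture model (illustrative assumption): observed = reflectance * light.
reflectance = np.array([0.8, 0.3, 0.3])    # a reddish tongue
ambient = np.array([120.0, 130.0, 125.0])  # unknown ambient illumination
flash = np.array([50.0, 0.0, 0.0])         # red screen flash

with_flash = reflectance * (ambient + flash)  # photo taken during the flash
without_flash = reflectance * ambient         # reference photo, no flash
f_red = with_flash - without_flash            # = reflectance * flash: ambient cancels
```

Because the ambient term drops out of the difference, `f_red` depends only on the (known) flash color and the tongue's reflectance.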
S204: and training the initial color correction model based on the color cast information and the training image to obtain a color correction model of the trained tongue diagnosis image.
Optionally, the initial color correction model is a machine learning model or a neural network model. In this embodiment, the auxiliary flash emitted by the mobile phone screen is used to estimate the tongue's reflection intensity for light of different colors, but this estimate carries a certain error (it cannot fully equal the tongue color). Relying on the robustness of a deep learning model, the reflection intensity information and the real tongue photo are therefore fed into a model that predicts the real color cast of the tongue photo, and thus estimates the true tongue color.
In one embodiment, once "Pic_white", "F_red, F_green, F_blue" and "Color_offset_true" have been acquired, a model can be trained.
The specific model structure is: Color_offset_true = Model(Pic_white, F_red, F_green, F_blue)
Pic_white is the tongue photo shot by the front camera while the mobile phone screen emits white light, and F_red, F_green and F_blue are the color cast information for red, green and blue respectively. The model outputs the predicted natural-light color cast = Model(Pic_white, F_red, F_green, F_blue), and is adjusted so that the mean square error between the real natural-light color cast and the predicted natural-light color cast is minimized.
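The training step above can be sketched with a minimal linear stand-in for Model (an assumption — the patent allows any machine learning or neural network model, and the features here use only F_red, F_green, F_blue, omitting Pic_white for brevity). Fitting by least squares minimizes exactly the mean square error between the real and predicted color casts:

```python
import numpy as np

def fit_offset_model(features: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    """Least-squares fit of offsets ~ [features, 1] @ W, i.e. the MSE-minimizing
    linear map from flash-derived features to the true color cast.
    features: (n, d); offsets: (n, 3); returns W of shape (d + 1, 3)."""
    design = np.hstack([features, np.ones((features.shape[0], 1))])  # append bias column
    weights, *_ = np.linalg.lstsq(design, offsets, rcond=None)
    return weights

def predict_offset(weights: np.ndarray, feature_row: np.ndarray) -> np.ndarray:
    return np.concatenate([feature_row, [1.0]]) @ weights

# synthetic training data generated by a known linear rule, to check recovery
rng = np.random.default_rng(0)
feats = rng.uniform(0, 50, size=(200, 3))          # F_red, F_green, F_blue samples
true_w = np.array([[0.5, 0.0, 0.0],
                   [0.0, 0.4, 0.0],
                   [0.0, 0.0, 0.6],
                   [2.0, -1.0, 0.5]])              # last row = bias
targets = np.hstack([feats, np.ones((200, 1))]) @ true_w
w = fit_offset_model(feats, targets)
```

A neural network trained with an MSE loss plays the same role in the patent's setting, with Pic_white providing additional input features.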
In this embodiment, tongue images shot under flashes of preset colors are acquired as initial images to obtain a plurality of initial images, where there are N preset colors and N is a positive integer greater than 3; for each initial image, a semantic segmentation network identifies the tongue region, a first mark is generated for the tongue region and a second mark for the region outside it, yielding a training image containing both marks; the color cast information corresponding to each preset color is determined from the training image and its preset color; and the initial color correction model is trained on the color cast information and the training images to obtain the trained tongue diagnosis image color correction model. The trained model can quickly identify the color cast that different lighting imposes on the true tongue color, which helps improve the accuracy of color correction.
Referring to fig. 3, fig. 3 shows a color correction method for a tongue diagnosis image according to an embodiment of the present invention. The method is described, by way of illustration, as applied to the server in fig. 1, with details as follows:
S205: acquiring a tongue diagnosis image to be calibrated.
S206: calibrating the tongue diagnosis image to be calibrated by using the trained tongue diagnosis image color correction model to obtain a calibration image.
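One plausible form of the calibration in S206, assuming the model outputs a per-channel color offset that is subtracted from the image (the source does not fix this exact operation, so both the function and the values are hypothetical):

```python
import numpy as np

def calibrate(image, predicted_offset):
    # Subtract the predicted per-channel cast and clamp to the valid range.
    return np.clip(image - predicted_offset, 0.0, 1.0)

# Hypothetical tongue image (2x2, RGB in [0, 1]) and predicted offset.
img = np.full((2, 2, 3), 0.6)
offset = np.array([0.1, -0.05, 0.0])
out = calibrate(img, offset)   # R darkened, G brightened, B unchanged
```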
It should be understood that the sequence numbers of the steps in the foregoing embodiment do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
Fig. 4 shows a schematic block diagram of a tongue diagnosis image color correction model training apparatus in one-to-one correspondence with the tongue diagnosis image color correction model training method of the above embodiment. As shown in fig. 4, the tongue diagnosis image color correction model training apparatus includes an initial image acquisition module 31, a training image generation module 32, a color cast information determination module 33, and a correction model training module 34. The functional modules are described in detail as follows:
an initial image acquisition module 31, configured to acquire captured tongue images in a flash state of preset colors as initial images, obtaining a plurality of initial images, wherein there are N preset colors and N is a positive integer greater than 3;
the training image generating module 32 is configured to identify, for each initial image, a tongue region in the initial image by using a semantic segmentation network, generate a first mark for the tongue region, and generate a second mark for a region outside the tongue region, so as to obtain a training image including the first mark and the second mark;
the color cast information determining module 33 is configured to determine color cast information corresponding to each preset color according to the training image and the preset color corresponding to the training image;
the correction model training module 34 is configured to perform training of an initial color correction model based on the color cast information and the training image, and obtain a color correction model of the trained tongue diagnosis image.
Optionally, the color cast information determining module 33 includes:
the reference image acquisition unit is used for shooting under a natural light state to obtain a reference image, and the shooting parameters of the reference image are the same as those of the initial image;
the identification area determining unit is used for taking an image of a first mark area in the training image as an identification area image, and acquiring an image of an area corresponding to the first mark area in the reference image as a reference area image;
and the comparison calculation unit is used for determining the color cast information of the training image corresponding to the preset color by comparing the identification area image with the reference area image.
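The comparison performed by this unit could, for example, be a per-channel mean difference between the identification region image and the reference region image; this particular formulation is an assumption for illustration, not stated exactly in the source:

```python
import numpy as np

def color_cast(ident_region, ref_region):
    # Per-channel mean of the flash-lit tongue region minus the per-channel
    # mean of the same region under natural light.
    return (ident_region.reshape(-1, 3).mean(axis=0)
            - ref_region.reshape(-1, 3).mean(axis=0))

# Hypothetical region images: flash-lit region skews red, reference is neutral.
ident = np.full((5, 5, 3), [0.7, 0.5, 0.4])
ref = np.full((5, 5, 3), [0.6, 0.5, 0.5])
cast = color_cast(ident, ref)
```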
Optionally, when the preset color is white, the initial image acquisition module is further configured to set a white reference object within the shooting field of view; the color cast information determination module 33 includes:
and the color cast calculation unit is used for carrying out color cast calculation on the image of the first mark area in the training image and the white reference object to obtain white color cast information.
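A sketch of the white-reference computation, assuming (hypothetically) that the white color cast is measured as the deviation of the imaged reference object from neutral, and that per-channel correction gains are derived from it:

```python
import numpy as np

# Hypothetical white-reference patch as imaged under the white flash
# (mean RGB of the reference object's pixels).
white_patch = np.array([0.9, 0.8, 0.7])

# White color-cast information: deviation from a neutral (equal-channel) white.
white_cast = white_patch - white_patch.max()

# Per-channel gains that map the imaged reference back to neutral.
gains = white_patch.max() / white_patch
```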
For specific limitations of the tongue diagnosis image color correction model training device, reference may be made to the limitations of the tongue diagnosis image color correction model training method above; details are not repeated here. Each module in the tongue diagnosis image color correction model training device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
Fig. 5 shows a schematic block diagram of a color correction apparatus of a tongue diagnosis image in one-to-one correspondence with the color correction method of the tongue diagnosis image of the above-described embodiment. As shown in fig. 5, the color correction device of the tongue diagnosis image includes an image acquisition module 35 to be calibrated and an image calibration module 36. The functional modules are described in detail as follows:
the to-be-calibrated image acquisition module 35 is used for acquiring a tongue diagnosis image to be calibrated;
the image calibration module 36 is configured to calibrate the tongue diagnostic image to be calibrated by using the trained color correction model of the tongue diagnostic image, so as to obtain a calibration image.
For specific definition of the color correction device for the tongue diagnosis image, reference may be made to the definition of the color correction method for the tongue diagnosis image above; details are not repeated here. Each module in the color correction device for the tongue diagnosis image may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 6, fig. 6 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 4 comprises a memory 41, a processor 42, and a network interface 43 communicatively connected to each other via a system bus. It is noted that the figure only shows a computer device 4 having the components memory 41, processor 42, and network interface 43, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. As will be appreciated by those skilled in the art, the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), digital signal processors (Digital Signal Processor, DSP), embedded devices, and the like.
The computer equipment can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing equipment. The computer equipment can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 41 includes at least one type of readable storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the computer device 4. Of course, the memory 41 may also include both an internal storage unit of the computer device 4 and an external storage device thereof. In this embodiment, the memory 41 is typically used to store the operating system and various application software installed on the computer device 4, such as program codes for controlling electronic files. In addition, the memory 41 may be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute a program code stored in the memory 41 or process data, such as a program code for executing control of an electronic file.
The network interface 43 may comprise a wireless network interface or a wired network interface, which network interface 43 is typically used for establishing a communication connection between the computer device 4 and other electronic devices.
The present application further provides another embodiment, namely a computer-readable storage medium storing a computer program executable by at least one processor, so that the at least one processor performs the steps of the color correction model training method for the tongue diagnosis image as described above.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and of course may also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
It is apparent that the embodiments described above are only some, but not all, of the embodiments of the present application; the preferred embodiments are given in the drawings, but they do not limit the patent scope of the application. This application may be embodied in many different forms; these embodiments are provided so that the disclosure will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their features. Any equivalent structure made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, likewise falls within the protection scope of the present application.
Claims (10)
1. A color correction model training method for tongue diagnostic images, comprising:
acquiring captured tongue images in a flash state of preset colors as initial images, obtaining a plurality of initial images, wherein there are N preset colors, and N is a positive integer greater than 3;
for each initial image, a semantic segmentation network is adopted, a tongue region in the initial image is identified, a first mark is generated for the tongue region, a second mark is generated for a region outside the tongue region, and a training image containing the first mark and the second mark is obtained;
determining color cast information corresponding to each preset color according to the training image and the preset color corresponding to the training image;
and training an initial color correction model based on the color cast information and the training image to obtain a trained tongue diagnosis image color correction model.
2. The method for training a color correction model for a tongue diagnostic image according to claim 1, wherein the preset colors include white, red, green and blue, and the photographing parameters are the same when the initial image is obtained.
3. The method for training a color correction model of a tongue diagnosis image according to claim 2, wherein the determining color cast information corresponding to each preset color according to the training image and the preset color corresponding to the training image comprises:
shooting under a natural light state to obtain a reference image, wherein shooting parameters of the reference image are the same as those of the initial image;
taking an image of a first mark area in the training image as an identification area image, and acquiring an image of an area corresponding to the first mark area in the reference image as a reference area image;
and determining color cast information of the training image corresponding to a preset color by comparing the identification area image with the reference area image.
4. The method for training a color correction model for tongue diagnosis image according to claim 2, wherein when the preset color is white, the step of acquiring the photographed tongue image in a flash state of the preset color as an initial image comprises: setting a white reference object in the shooting visual field range;
the determining color cast information corresponding to each preset color according to the training image and the preset color corresponding to the training image comprises:
and performing color cast calculation on the image of the first mark area in the training image and the white reference object to obtain the white color cast information.
5. The method for training a color correction model for a tongue diagnostic image according to any one of claims 1 to 4, wherein the initial color correction model is a machine learning model or a neural network model.
6. A color correction method for a tongue diagnosis image, comprising:
acquiring a tongue diagnosis image to be calibrated;
calibrating the tongue diagnostic image to be calibrated by adopting a color correction model of the trained tongue diagnostic image to obtain a calibration image, wherein the color correction model of the trained tongue diagnostic image is trained according to the method of any one of claims 1 to 5.
7. A color correction model training device for a tongue diagnosis image, characterized in that the color correction model training device for a tongue diagnosis image comprises:
the initial image acquisition module is used for acquiring a shot tongue image in a flash state of a preset color as an initial image to obtain a plurality of initial images, wherein the preset colors are N types, and N is a positive integer larger than 3;
the training image generation module is used for identifying tongue regions in the initial images by adopting a semantic segmentation network aiming at each initial image, generating first marks for the tongue regions, and generating second marks for regions outside the tongue regions to obtain training images containing the first marks and the second marks;
the color cast information determining module is used for determining color cast information corresponding to each preset color according to the training image and the preset color corresponding to the training image;
and the correction model training module is used for training an initial color correction model based on the color cast information and the training image to obtain a color correction model of the trained tongue diagnosis image.
8. A color correction device for a tongue diagnosis image, comprising:
the image acquisition module to be calibrated is used for acquiring tongue diagnosis images to be calibrated;
the image calibration module is used for calibrating the tongue diagnosis image to be calibrated by adopting a color correction model of the trained tongue diagnosis image to obtain a calibration image, wherein the color correction model of the trained tongue diagnosis image is obtained by training according to the method of any one of claims 1 to 5.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method for training a color correction model of a tongue image according to any one of claims 1 to 5 when executing the computer program or the method for color correction of a tongue image according to claim 6 when executing the computer program.
10. A computer-readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the method for training a color correction model for a tongue diagnosis image according to any one of claims 1 to 5, or the computer program when executed by a processor implements the method for color correction for a tongue diagnosis image according to claim 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310071035.3A CN116187470A (en) | 2023-01-18 | 2023-01-18 | Tongue diagnosis image color correction model training method, color correction method and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116187470A true CN116187470A (en) | 2023-05-30 |
Family
ID=86432126
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310071035.3A Pending CN116187470A (en) | 2023-01-18 | 2023-01-18 | Tongue diagnosis image color correction model training method, color correction method and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116187470A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117094966A (en) * | 2023-08-21 | 2023-11-21 | 青岛美迪康数字工程有限公司 | Tongue image identification method and device based on image amplification and computer equipment |
CN117094966B (en) * | 2023-08-21 | 2024-04-05 | 青岛美迪康数字工程有限公司 | Tongue image identification method and device based on image amplification and computer equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114424253B (en) | Model training method and device, storage medium and electronic equipment | |
CN108012389B (en) | Light adjusting method, terminal device and computer readable storage medium | |
CN103929598B (en) | A kind of automatic explosion method and camera module detection method | |
CN111355941B (en) | Image color real-time correction method, device and system | |
CN112102402B (en) | Flash light spot position identification method and device, electronic equipment and storage medium | |
TWI732698B (en) | Anti-counterfeiting method and system for under-screen fingerprint identification | |
CN108090908B (en) | Image segmentation method, device, terminal and storage medium | |
CN116187470A (en) | Tongue diagnosis image color correction model training method, color correction method and equipment | |
CN108551552A (en) | Image processing method, device, storage medium and mobile terminal | |
CN105577982A (en) | Image processing method and terminal | |
CN111556219A (en) | Image scanning device and image scanning method | |
CN109871205B (en) | Interface code adjustment method, device, computer device and storage medium | |
CN109040729B (en) | Image white balance correction method and device, storage medium and terminal | |
CN114512085A (en) | Visual color calibration method of TFT (thin film transistor) display screen | |
KR20210008075A (en) | Time search method, device, computer device and storage medium (VIDEO SEARCH METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM) | |
CN110636222B (en) | Photographing control method and device, terminal equipment and storage medium | |
US10922569B2 (en) | Method and apparatus for detecting model reliability | |
WO2018028165A1 (en) | Terminal and manufacturing process thereof | |
CN110210401B (en) | Intelligent target detection method under weak light | |
CN110162362B (en) | Dynamic control position detection and test method, device, equipment and storage medium | |
CN116506737A (en) | Method, device, equipment and storage medium for determining exposure parameters | |
CN116456070A (en) | Camera calibration method and device based on digital twin and computer storage medium | |
CN112233194B (en) | Medical picture optimization method, device, equipment and computer readable storage medium | |
CN113034449A (en) | Target detection model training method and device and communication equipment | |
CN113850836A (en) | Employee behavior identification method, device, equipment and medium based on behavior track |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||