WO2022126622A1 - Jaundice analysis system and method thereof - Google Patents

Jaundice analysis system and method thereof

Info

Publication number
WO2022126622A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
image data
jaundice
processing module
training
Prior art date
Application number
PCT/CN2020/137680
Other languages
English (en)
French (fr)
Inventor
陈阶晓
陈郁媗
Original Assignee
陈阶晓
陈郁媗
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 陈阶晓, 陈郁媗
Priority to JP2023537919A (published as JP2023554145A)
Priority to PCT/CN2020/137680 (published as WO2022126622A1)
Priority to EP20965636.2A (published as EP4265181A1)
Priority to CN202080107974.3A (published as CN116801798A)
Priority to US18/267,909 (published as US20240054641A1)
Publication of WO2022126622A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 Details of waveform analysis
    • A61B 5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/1032 Determining colour for diagnostic purposes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/145 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B 5/1455 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue using optical sensors, e.g. spectral photometrical oximeters
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/42 Detecting, measuring or recording for evaluating the gastrointestinal, the endocrine or the exocrine systems
    • A61B 5/4222 Evaluating particular parts, e.g. particular organs
    • A61B 5/4244 Evaluating particular parts, e.g. particular organs liver
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/60 Rotation of a whole image or part thereof
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 2576/00 Medical imaging apparatus involving image processing or analysis
    • A61B 2576/02 Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic

Definitions

  • the present invention relates to a jaundice analysis system and method, in particular to a jaundice analysis system and method that can judge whether a target object has jaundice symptoms according to the eye white image of the target object.
  • jaundice is a symptom of many metabolism-related diseases, and its most common signs are yellowing of the skin and the whites of the eyes.
  • patients with jaundice may overlook their underlying disease or organ failure because their skin tone is naturally yellowish, or they may mistakenly attribute the yellowing to tanning or other factors. As a result, by the time jaundice or the underlying organ disease is detected and diagnosed, the condition may already have become severe. Accordingly, there is a need for a jaundice analysis system and method that can determine whether a target object has jaundice symptoms based on a white-of-eye image of the target object.
  • the concept of the present invention is to provide a jaundice analysis system and method which can judge whether the target object has jaundice symptoms according to the eye white image of the target object.
  • the present invention provides a jaundice analysis system, comprising: a database; and a processing device for accessing the database, the processing device comprising: a data processing module for generating first training data based on first image data, the data processing module associating the first training data with first category data and storing the first training data in the database; and a deep learning module for training a target convolutional neural network module with the first training data associated with the first category data to obtain a trained convolutional neural network module; wherein the first image data includes a first white-of-eye image; wherein the database is communicatively connected to the data processing module and/or the deep learning module; wherein the trained convolutional neural network module of the processing device obtains judgment data according to input image data, the input image data including a second white-of-eye image of a target object; and wherein the judgment data indicates the bilirubin concentration range of the target object.
  • the deep learning module obtains the trained convolutional neural network module by means of transfer learning. That is, the target convolutional neural network module is a convolutional neural network module that has been trained (the training is not limited to judging symptoms of jaundice or judging bilirubin concentration).
  • the data processing module performs a first cropping process on the first image data to generate first cropped image data, wherein the data processing module generates the first training data based on the first cropped image data.
  • the data processing module performs mirror processing on the first image data to generate mirror image data, wherein the data processing module generates the first training data based on the mirror image data.
  • the data processing module performs a second cropping process on the mirrored image data to generate second cropped image data, wherein the data processing module generates the first training data based on the second cropped image data; wherein the second cropped image data has a specific image shape.
  • the data processing module performs a third cropping process on the second cropped image data to generate third cropped image data, wherein the data processing module generates the first training data based on the third cropped image data.
  • the data processing module performs de-reflection processing on the first image data to generate de-reflection image data, wherein the data processing module generates the first training data based on the de-reflection image data.
  • the data processing module generates second image data based on the first image data and generates second training data based on the second image data; the data processing module associates the second training data with second category data and stores the second training data in the database; wherein the deep learning module trains the target convolutional neural network module with the first training data associated with the first category data and the second training data associated with the second category data, so as to obtain the trained convolutional neural network module.
  • the data processing module performs one of image translation processing, image rotation processing and image flip processing on the first image data to generate the second image data.
  • the jaundice analysis system further includes: a mobile device, which stores the input image data; wherein the processing device further includes: a communication module communicatively connected to the mobile device and to the trained convolutional neural network module of the processing device, the communication module receiving the input image data from the mobile device and transmitting the judgment data to the mobile device.
  • a jaundice analysis method is further provided, which is applied to a jaundice analysis system.
  • the jaundice analysis method comprises: generating, by a data processing module of the jaundice analysis system, first training data based on first image data, the data processing module associating the first training data with first category data; training, by a deep learning module of the jaundice analysis system, a target convolutional neural network module with the first training data associated with the first category data to obtain a trained convolutional neural network module; and obtaining, by the trained convolutional neural network module of the jaundice analysis system, judgment data according to input image data, the input image data including a second white-of-eye image of a target object; wherein the first image data includes a first white-of-eye image; and wherein the judgment data indicates the bilirubin concentration range of the target object.
  • the deep learning module obtains the trained convolutional neural network module by means of transfer learning. That is, the target convolutional neural network module is a convolutional neural network module that has already been trained (this prior training is not limited to judging jaundice symptoms or bilirubin concentration; it may, for example, be training to judge other things from images, but is not limited thereto).
  • generating the first training data based on the first image data further includes: performing a first cropping process on the first image data by the data processing module to generate a first cropped image data; wherein the data processing module generates the first training data based on the first cropped image data.
  • the generating the first training data based on the first image data further includes: performing mirror processing on the first image data by the data processing module to generate mirrored image data; wherein the The data processing module generates the first training data based on the mirrored image data.
  • the generating the first training data based on the first image data further includes: performing a second cropping process on the mirrored image data by the data processing module to generate a second cropped image data; wherein the data processing module generates the first training data based on the second cropped image data; wherein the second cropped image data has a specific image shape.
  • the generating the first training data based on the first image data further includes: performing a third cropping process on the second cropped image data by the data processing module to generate third cropped image data; wherein the data processing module generates the first training data based on the third cropped image data.
  • the generating the first training data based on the first image data further comprises: performing de-reflection processing on the first image data by the data processing module to generate de-reflection image data; wherein the The data processing module generates the first training data based on the de-reflected image data.
  • the jaundice analysis method further comprises: generating second image data by the data processing module based on the first image data; and generating second training data by the data processing module based on the second image data, the data processing module associating the second training data with second category data; wherein the deep learning module trains the target convolutional neural network module with the first training data associated with the first category data and the second training data associated with the second category data, so as to obtain the trained convolutional neural network module.
  • the data processing module performs one of image translation processing, image rotation processing and image flip processing on the first image data to generate the second image data.
  • the jaundice analysis method further comprises: receiving the input image data from a mobile device by a communication module of the jaundice analysis system; and transmitting the judgment data to the mobile device by the communication module.
  • FIG. 1 is a system architecture diagram of a specific embodiment of a jaundice analysis system according to the present invention.
  • FIG. 2A is a schematic diagram of a specific embodiment of the first image data.
  • FIG. 2B is a schematic diagram of a specific embodiment of the first image data.
  • FIG. 2C is a schematic diagram of a specific embodiment of the first image data.
  • FIG. 3 is a schematic diagram of a specific embodiment of mirrored image data.
  • FIG. 4 is a schematic diagram of a specific embodiment of the second cropped image data.
  • FIG. 5 is a schematic diagram of a specific embodiment of generating third cropped image data.
  • FIG. 6 is a schematic diagram of a specific embodiment of generating de-reflection image data.
  • FIG. 7 is a flow chart of a specific embodiment of the jaundice analysis method of the present invention.
  • the jaundice analysis system 100 includes a database 110 and a processing device 120 .
  • the processing device 120 can access the database 110 , or the processing device 120 can be said to be communicatively connected to the database 110 .
  • the processing device 120 includes a data processing module 122 and a deep learning module 124 .
  • the data processing module 122 is communicatively connected to the database 110
  • the deep learning module 124 is communicatively connected to the database 110 .
  • the data processing module 122 can generate the first training data based on the first image data, and the data processing module 122 can associate the first training data with the first type of data, and store the first training data to database 110.
  • the first image data includes a first white of the eye image
  • the first type of data indicates a bilirubin concentration value or a bilirubin concentration range. It should be understood that jaundice is caused by excessive bilirubin in the body's blood, which causes the skin, whites of the eyes, and mucous membranes to be yellowish in color.
  • the bilirubin concentration value or bilirubin concentration range can also be regarded as the degree of jaundice (i.e., the first category data can be regarded as indicating the degree of jaundice symptoms).
  • the first image data is stored in the database 110 .
  • the data processing module 122 may generate different first training data based on different first image data, wherein different first training data may be respectively associated with different first types of data.
  • different first training data may also be respectively associated with the same first category of data.
  • the data processing module 122 can generate a corresponding first set of first training data based on a first set of first image data, and each piece of first training data in the first set is associated with first category data indicating a first bilirubin concentration range.
  • the data processing module 122 can also generate a corresponding second set of first training data based on a second set of first image data, and each piece of first training data in the second set is associated with another first category data indicating a second bilirubin concentration range, where the first bilirubin concentration range is different from the second bilirubin concentration range.
  • when the processing device 120 or the data processing module 122 receives the first image data (for example, when the processing device 120 or the data processing module 122 obtains the first image data from the database 110), the first image data is already associated with first category data, and the data processing module 122 accordingly associates the first training data generated based on that first image data with the same first category data.
  • the deep learning module 124 may use the first training data associated with the first category data to train a target convolutional neural network module, so as to obtain (or, one may say, generate) a trained convolutional neural network module 128.
  • the deep learning module 124 obtains the trained convolutional neural network module 128 in a transfer learning manner. That is, the target convolutional neural network module is a trained convolutional neural network module.
  • the deep learning module 124 performs transfer learning based on EfficientNetB5. It should be appreciated that the trained convolutional neural network module 128 derived by the deep learning module 124 is also included in the processing device 120.
  • the trained convolutional neural network module 128 of the processing device 120 can obtain judgment data according to the input image data, wherein the input image data includes the second white-of-eye image of the target object, and the judgment data indicates the bilirubin concentration range of the target object.
  • in the process of training the target convolutional neural network module with the first training data associated with the first category data, the deep learning module 124 can generate various filters by itself to extract various feature values.
  • the filters may be, for example, histogram filters, CLAHE (contrast-limited adaptive histogram equalization) filters, Gaussian filters, and the like, but are not limited thereto.
  • the deep learning module 124 may be communicatively coupled to the target convolutional neural network module and the trained convolutional neural network module 128 .
  • the deep learning module 124 may include a target convolutional neural network module and a trained convolutional neural network module 128 .
  • the data processing module 122 may be communicatively coupled to the deep learning module 124 and/or the trained convolutional neural network module 128 .
  • in order to obtain more training data and thereby improve the accuracy with which the trained convolutional neural network module 128 determines the bilirubin concentration range or the degree of jaundice, the data processing module 122 can generate second image data based on the first image data and can generate second training data based on the second image data. Next, the data processing module 122 may associate the second training data with second category data and store the second training data in the database 110.
  • the second category data is the first category data associated with the first image data.
  • the deep learning module 124 trains the target convolutional neural network module with the first training data associated with the first category of data and the second training data associated with the second category of data, so as to obtain the trained convolutional neural network Network module 128 .
  • the data processing module 122 performs one of image translation processing (for example, horizontal translation, vertical translation, or other translation processing of the first image data, but not limited thereto), image rotation processing (for example, rotating the first image data by 0 to 180 degrees, but not limited thereto), image flipping processing (for example, horizontal flipping, vertical flipping, or other flipping of the first image data, but not limited thereto), and gap-compensation-constant processing on the first image data to generate the second image data.
  • the processing device 120 may further include a communication module 126 .
  • the data processing module 122 may receive data (e.g., input image data) from the device 900 through the communication module 126, or may transmit data (e.g., judgment data) to the device 900 through the communication module 126.
  • the communication module 126 is communicatively connected to the device 900 and the trained convolutional neural network module 128 of the processing device 120 .
  • the device 900 can be, for example, a computer, a mobile device (a computer can also be regarded as a mobile device), a remote server, etc., but not limited thereto.
  • the device 900 can also be regarded as a part of the jaundice analysis system 100 , wherein the input image data is stored in the device 900 .
  • the apparatus 900 includes an image capture device, and the apparatus 900 can capture an image and generate input image data by means of the image capture device.
  • the input image data includes first input image data and second input image data.
  • the first input image data includes a left-eye white image of the target object, and the second input image data includes a right-eye white image of the target object.
  • the communication module 126 can be communicatively connected to the data processing module 122 and/or the deep learning module 124 .
  • the jaundice analysis system 100 of the present invention includes one or more processors, and implements the database 110 and the processing device 120 in a cooperative manner of hardware and software.
  • the processing device 120 includes one or more processors, and implements the data processing module 122 , the deep learning module 124 , the communication module 126 and the trained convolutional neural network module 128 in a cooperative manner of hardware and software.
  • the apparatus 900 includes one or more processors and implements the image capture device in a cooperative manner of hardware and software.
  • the bilirubin concentration range indicated by the first category data associated with the first image data 201 and 202 in FIG. 2A is 0-1.2 mg/dL.
  • the bilirubin concentrations indicated by the first category of data associated with the first image data 203, 204 of Figure 2B range from 1.3 to 3.5 mg/dL.
  • the range of bilirubin concentrations indicated by the first category of data associated with the first image data 205, 206 of Figure 2C is greater than 3.6 mg/dL.
  • the data processing module of the jaundice analysis system of the present invention can perform a first cropping process on the first image data to generate the first cropped image data, whereby each first image data can be cropped or adjusted to the same image size.
  • the data processing module may generate the first training data based on the first cropped image data.
  • the mirrored image data 302 and 304 are mirrored image data generated by performing different mirroring processes on the first image data by the data processing module.
  • the short side of the image data 304 is padded by mirroring (i.e., mirroring is performed with the short side as the axis of symmetry), so that the length and width of the image data 304 become equal.
  • the first cropped image data can be used as a kind of first image data. That is, the data processing module performs mirroring processing on the first image data that has undergone the first cropping processing. In a specific embodiment, the data processing module generates the first training data based on the mirrored image data.
  • FIG. 4 illustrates a schematic diagram of a specific embodiment of the second cropped image data.
  • the data processing module may perform a second cropping process on the mirrored image data 304 in FIG. 3 to generate second cropped image data 402 .
  • the second cropping process takes one side of the mirrored image data 304 as a diameter to form an inscribed circle (the inscribed circle of the square), cuts away the image content outside that circle, and keeps only the image content inside the circle.
  • the data processing module can also crop different mirrored image data to the same image size in this way. It should be understood that FIG. 4 is merely illustrative; when performing the second cropping process, the data processing module is not limited to cropping the image into a circle and may crop the image into a specific image shape as required.
  • the data processing module generates the first training data based on the second cropped image data.
  • FIG. 5 illustrates a schematic diagram of a specific embodiment of generating the third cropped image data.
  • the data processing module may perform a third cropping process on the second cropped image data 502 to generate third cropped image data 504 .
  • the cutting method of the third cropping process is to take a square of a specific size from the center point of the second cropped image data 502, and only keep the image content in the square, so as to reduce unnecessary features.
  • the data processing module generates the first training data based on the third cropped image data.
  • FIG. 6 illustrates a schematic diagram of a specific embodiment of generating de-reflected image data.
  • the data processing module may perform de-reflection processing on the first image data 602 to generate de-reflection image data 604 .
  • the white part of the reflection of the eyeball in the image can be prevented from interfering with the deep learning (that is, the training of the target convolutional neural network module by the deep learning module).
  • the de-reflection processing is performed by using the clone function of the image-retouching software PhotoImpact to remove the reflective parts of the eyes.
  • the data processing module generates the first training data based on the de-reflected image data.
  • the first cropped image data, the mirrored image data, the second cropped image data, the third cropped image data, and the de-reflection image data can all be regarded as a kind of first image data, and the data processing module can apply the first cropping process and/or the mirroring process and/or the second cropping process and/or the third cropping process and/or the de-reflection process to such image data.
  • FIG. 7 illustrates a flow chart of a specific embodiment of a method for analyzing jaundice according to the present invention.
  • the jaundice analysis method 700 is applied to the jaundice analysis system.
  • the jaundice analysis method 700 starts at step 710 , the data processing module of the jaundice analysis system generates first training data based on the first image data, and the data processing module associates the first training data with the first category data.
  • the first image data includes a first white-of-eye image.
  • the deep learning module of the jaundice analysis system trains the target convolutional neural network module with the first training data associated with the first category of data to obtain the trained convolutional neural network module.
  • the deep learning module obtains the trained convolutional neural network module in a transfer learning manner.
  • the target convolutional neural network module is a trained convolutional neural network module, wherein the prior training of the target convolutional neural network module is not limited to judging jaundice symptoms or bilirubin concentration; it may, for example, be training to judge other things from images, but is not limited thereto.
  • step 730 is then performed, in which the input image data is received by the communication module of the jaundice analysis system from the mobile device (e.g., a mobile phone or tablet computer, but not limited thereto).
  • the input image data includes the second eye white image of the target object.
  • the input image data includes first input image data and second input image data.
  • the first input image data includes a left-eye white image of the target object
  • the second input image data includes a right-eye white image of the target object. It should be understood that step 730 can also be executed before step 710 or 720 as required.
  • step 740 is then performed, and judgment data is obtained by the trained convolutional neural network module of the jaundice analysis system according to the input image data.
  • the judgment data indicates the bilirubin concentration range of the target object (the bilirubin concentration range can also be regarded as the degree of jaundice).
  • the communication module of the jaundice analysis system transmits the judgment data to the mobile device.
  • the generating the first training data based on the first image data may further include: performing a first cropping process on the first image data by the data processing module to generate the first cropped image data.
  • the data processing module generates first training data based on the first cropped image data.
  • the data processing module performs a first cropping process on the first image data according to the first instruction.
  • the first instruction may be, for example, an image cropping operation performed by the user through a mouse, or may be, for example, a default image cropping instruction, but not limited thereto.
  • the generating the first training data based on the first image data may further include: performing mirror processing on the first image data by the data processing module to generate mirrored image data.
  • the data processing module generates the first training data based on the mirrored image data.
  • the data processing module performs mirror processing on the first image data according to the second instruction.
  • the second instruction may be, for example, an image mirroring operation performed by the user through a mouse, or may be, for example, a default image mirroring instruction, but not limited thereto.
  • the generating the first training data based on the first image data may further include: performing a second cropping process on the mirrored image data by the data processing module to generate the second cropped image data.
  • the second cropped image data has a specific image shape, and the data processing module generates the first training data based on the second cropped image data.
  • the data processing module performs the second cropping process on the mirrored image data according to the third instruction.
  • the third instruction may be, for example, an image cropping operation performed by the user through a mouse, or may be, for example, a default image cropping instruction, but not limited thereto.
  • the generating the first training data based on the first image data may further include: performing a third cropping process on the second cropped image data by the data processing module to generate third cropped image data.
  • the data processing module generates first training data based on the third cropped image data.
  • the data processing module performs a third cropping process on the second cropped image data according to the fourth instruction.
  • the fourth instruction may be, for example, an image cropping operation performed by the user through a mouse, or may be, for example, a default image cropping instruction, but not limited thereto.
  • the generating the first training data based on the first image data may further include: performing de-reflection processing on the first image data by the data processing module to generate de-reflection image data.
  • the data processing module generates the first training data based on the de-reflection image data.
  • the data processing module performs de-reflection processing on the first image data according to the fifth instruction.
  • the fifth instruction may be, for example, an image de-reflection operation performed by the user through a mouse, or may be, for example, a default image de-reflection instruction, but is not limited thereto.
  • the jaundice analysis method 700 may further include: generating, by the data processing module, second image data based on the first image data; generating, by the data processing module, second training data based on the second image data; and associating, by the data processing module, the second training data with second category data.
  • the second category data is the first category data associated with the first image data.
  • the deep learning module trains the target convolutional neural network module with the first training data associated with the first category of data and the second training data associated with the second category of data, so as to obtain the trained convolutional neural network module.
  • the data processing module performs one of image translation processing, image rotation processing, image flipping processing, and gap-compensation-constant processing on the first image data to generate the second image data.
  • the manner in which the data processing module generates the second image data based on the first image data is not limited to this.

Abstract

A jaundice analysis system (100) includes: a database (110); and a processing device (120) that accesses the database (110). The processing device (120) includes: a data processing module (122) that generates training data based on image data, associates the training data with category data, and stores the training data in the database (110); and a deep learning module (124) that trains a target convolutional neural network module with the training data associated with the category data to obtain a trained convolutional neural network module (128). The image data includes a first white-of-eye image. The trained convolutional neural network module (128) of the processing device (120) obtains judgment data according to input image data, the input image data including a second white-of-eye image of a target object. The judgment data indicates the bilirubin concentration range of the target object.

Description

Jaundice analysis system and method thereof
Technical Field
The present invention relates to a jaundice analysis system and method, and in particular to a jaundice analysis system and method that can determine whether a target object has jaundice symptoms based on a white-of-eye image of the target object.
Background Art
Jaundice is a symptom of many metabolism-related diseases, and its most typical and common signs are yellowing of the skin and the whites of the eyes. However, patients with jaundice may overlook their own disease or organ failure because their skin tone is naturally yellowish, or because they mistakenly attribute the yellowing to tanning or other factors. As a result, by the time jaundice or the underlying organ disease is detected and diagnosed, the condition has already become more severe. Accordingly, there is a need for a jaundice analysis system and method that can determine whether a target object has jaundice symptoms based on a white-of-eye image of the target object.
Summary of the Invention
In order to solve the above problems, the concept of the present invention is to provide a jaundice analysis system and method that can determine whether a target object has jaundice symptoms based on a white-of-eye image of the target object.
Based on the foregoing concept, the present invention provides a jaundice analysis system, comprising: a database; and a processing device that accesses the database, the processing device comprising: a data processing module that generates first training data based on first image data, associates the first training data with first category data, and stores the first training data in the database; and a deep learning module that trains a target convolutional neural network module with the first training data associated with the first category data to obtain a trained convolutional neural network module; wherein the first image data includes a first white-of-eye image; wherein the database is communicatively connected to the data processing module and/or the deep learning module; wherein the trained convolutional neural network module of the processing device obtains judgment data according to input image data, the input image data including a second white-of-eye image of a target object; and wherein the judgment data indicates the bilirubin concentration range of the target object.
In a preferred embodiment of the present invention, the deep learning module obtains the trained convolutional neural network module by means of transfer learning. That is, the target convolutional neural network module is a convolutional neural network module that has already been trained (this prior training is not limited to judging jaundice symptoms or bilirubin concentration).
In a preferred embodiment of the present invention, the data processing module performs a first cropping process on the first image data to generate first cropped image data, wherein the data processing module generates the first training data based on the first cropped image data.
In a preferred embodiment of the present invention, the data processing module performs mirroring processing on the first image data to generate mirrored image data, wherein the data processing module generates the first training data based on the mirrored image data.
In a preferred embodiment of the present invention, the data processing module performs a second cropping process on the mirrored image data to generate second cropped image data, wherein the data processing module generates the first training data based on the second cropped image data; wherein the second cropped image data has a specific image shape.
In a preferred embodiment of the present invention, the data processing module performs a third cropping process on the second cropped image data to generate third cropped image data, wherein the data processing module generates the first training data based on the third cropped image data.
In a preferred embodiment of the present invention, the data processing module performs de-reflection processing on the first image data to generate de-reflection image data, wherein the data processing module generates the first training data based on the de-reflection image data.
In a preferred embodiment of the present invention, the data processing module generates second image data based on the first image data and generates second training data based on the second image data; the data processing module associates the second training data with second category data and stores the second training data in the database; wherein the deep learning module trains the target convolutional neural network module with the first training data associated with the first category data and the second training data associated with the second category data, so as to obtain the trained convolutional neural network module.
In a preferred embodiment of the present invention, the data processing module performs one of image translation processing, image rotation processing, and image flipping processing on the first image data to generate the second image data.
In a preferred embodiment of the present invention, the jaundice analysis system further comprises: a mobile device that stores the input image data; wherein the processing device further comprises: a communication module communicatively connected to the mobile device and to the trained convolutional neural network module of the processing device, the communication module receiving the input image data from the mobile device and transmitting the judgment data to the mobile device.
In accordance with the objective of the present invention, a jaundice analysis method applied to a jaundice analysis system is further provided. The jaundice analysis method comprises: generating, by a data processing module of the jaundice analysis system, first training data based on first image data, the data processing module associating the first training data with first category data; training, by a deep learning module of the jaundice analysis system, a target convolutional neural network module with the first training data associated with the first category data to obtain a trained convolutional neural network module; and obtaining, by the trained convolutional neural network module of the jaundice analysis system, judgment data according to input image data, the input image data including a second white-of-eye image of a target object; wherein the first image data includes a first white-of-eye image; and wherein the judgment data indicates the bilirubin concentration range of the target object.
In a preferred embodiment of the present invention, the deep learning module obtains the trained convolutional neural network module by means of transfer learning. That is, the target convolutional neural network module is a convolutional neural network module that has already been trained (this prior training is not limited to judging jaundice symptoms or bilirubin concentration; it may, for example, be training to judge other things from images, but is not limited thereto).
In a preferred embodiment of the present invention, generating the first training data based on the first image data further comprises: performing, by the data processing module, a first cropping process on the first image data to generate first cropped image data; wherein the data processing module generates the first training data based on the first cropped image data.
In a preferred embodiment of the present invention, generating the first training data based on the first image data further comprises: performing, by the data processing module, mirroring processing on the first image data to generate mirrored image data; wherein the data processing module generates the first training data based on the mirrored image data.
In a preferred embodiment of the present invention, generating the first training data based on the first image data further comprises: performing, by the data processing module, a second cropping process on the mirrored image data to generate second cropped image data; wherein the data processing module generates the first training data based on the second cropped image data; wherein the second cropped image data has a specific image shape.
In a preferred embodiment of the present invention, generating the first training data based on the first image data further comprises: performing, by the data processing module, a third cropping process on the second cropped image data to generate third cropped image data; wherein the data processing module generates the first training data based on the third cropped image data.
In a preferred embodiment of the present invention, generating the first training data based on the first image data further comprises: performing, by the data processing module, de-reflection processing on the first image data to generate de-reflection image data; wherein the data processing module generates the first training data based on the de-reflection image data.
In a preferred embodiment of the present invention, the jaundice analysis method further comprises: generating, by the data processing module, second image data based on the first image data; and generating, by the data processing module, second training data based on the second image data, the data processing module associating the second training data with second category data; wherein the deep learning module trains the target convolutional neural network module with the first training data associated with the first category data and the second training data associated with the second category data, so as to obtain the trained convolutional neural network module.
In a preferred embodiment of the present invention, the data processing module performs one of image translation processing, image rotation processing, and image flipping processing on the first image data to generate the second image data.
In a preferred embodiment of the present invention, the jaundice analysis method further comprises: receiving, by a communication module of the jaundice analysis system, the input image data from a mobile device; and transmitting, by the communication module, the judgment data to the mobile device.
The foregoing and other aspects of the present invention will become clearer from the following detailed description of non-limiting specific embodiments and from the accompanying drawings.
Brief Description of the Drawings
FIG. 1 is a system architecture diagram of a specific embodiment of the jaundice analysis system of the present invention.
FIG. 2A is a schematic diagram of a specific embodiment of the first image data.
FIG. 2B is a schematic diagram of a specific embodiment of the first image data.
FIG. 2C is a schematic diagram of a specific embodiment of the first image data.
FIG. 3 is a schematic diagram of a specific embodiment of mirrored image data.
FIG. 4 is a schematic diagram of a specific embodiment of the second cropped image data.
FIG. 5 is a schematic diagram of a specific embodiment of generating the third cropped image data.
FIG. 6 is a schematic diagram of a specific embodiment of generating de-reflection image data.
FIG. 7 is a flow chart of a specific embodiment of the jaundice analysis method of the present invention.
Detailed Description of the Embodiments
Please refer to FIG. 1, which illustrates a system architecture diagram of a specific embodiment of the jaundice analysis system of the present invention. In the embodiment shown in FIG. 1, the jaundice analysis system 100 includes a database 110 and a processing device 120. The processing device 120 can access the database 110, or, put another way, the processing device 120 is communicatively connected to the database 110. The processing device 120 includes a data processing module 122 and a deep learning module 124. Preferably, the data processing module 122 is communicatively connected to the database 110, and the deep learning module 124 is communicatively connected to the database 110. In the embodiment shown in FIG. 1, the data processing module 122 can generate first training data based on first image data, associate the first training data with first category data, and store the first training data in the database 110. The first image data includes a first white-of-eye image, and the first category data indicates a bilirubin concentration value or a bilirubin concentration range. It should be understood that jaundice arises when there is too much bilirubin in the blood, which makes the skin, the whites of the eyes, and the mucous membranes appear yellowish. Therefore, the bilirubin concentration value or bilirubin concentration range can also be regarded as the degree of jaundice (that is, the first category data can be regarded as indicating the degree of jaundice). In a specific embodiment, the first image data is stored in the database 110.
It should be understood that the data processing module 122 can generate different first training data based on different first image data, and the different first training data can be associated with different first category data. Optionally, different first training data can also be associated with the same first category data. For example, the data processing module 122 can generate a corresponding first set of first training data based on a first set of first image data, and each piece of first training data in the first set is associated with first category data indicating a first bilirubin concentration range. The data processing module 122 can also generate a corresponding second set of first training data based on a second set of first image data, and each piece of first training data in the second set is associated with another first category data indicating a second bilirubin concentration range, where the first bilirubin concentration range is different from the second bilirubin concentration range. In a specific embodiment, when the processing device 120 or the data processing module 122 receives the first image data (for example, when the processing device 120 or the data processing module 122 obtains the first image data from the database 110), the first image data is already associated with first category data, and the data processing module 122 accordingly associates the first training data generated based on that first image data with the same first category data.
In the embodiment shown in FIG. 1, the deep learning module 124 may use the first training data associated with the first category data to train a target convolutional neural network module, so as to obtain (or, one may say, generate) a trained convolutional neural network module 128. Preferably, the deep learning module 124 obtains the trained convolutional neural network module 128 by means of transfer learning; that is, the target convolutional neural network module is a convolutional neural network module that has already been trained. In a specific embodiment, the deep learning module 124 performs transfer learning based on EfficientNetB5. It should be understood that the trained convolutional neural network module 128 produced by the deep learning module 124 is also included in the processing device 120. In this way, the trained convolutional neural network module 128 of the processing device 120 can obtain judgment data according to input image data, where the input image data includes a second white-of-eye image of the target object and the judgment data indicates the bilirubin concentration range of the target object.
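The patent itself contains no training code; the following is a minimal sketch of what transfer learning from EfficientNetB5 to a three-class bilirubin-range classifier could look like in Keras. The class count, input size, layer choices, and hyperparameters are illustrative assumptions, not values disclosed in the document.

```python
# Minimal transfer-learning sketch (assumed hyperparameters, not taken from the patent).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3          # e.g. the 0-1.2, 1.3-3.5, >3.6 mg/dL bilirubin ranges of FIGS. 2A-2C
IMAGE_SIZE = (456, 456)  # EfficientNetB5's default input resolution

def build_model():
    # Load EfficientNetB5 pre-trained on ImageNet, dropping its classification head.
    base = tf.keras.applications.EfficientNetB5(
        include_top=False, weights="imagenet", input_shape=IMAGE_SIZE + (3,))
    base.trainable = False  # freeze the backbone for the first training phase

    inputs = layers.Input(shape=IMAGE_SIZE + (3,))
    x = base(inputs, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_model()
# model.fit(train_dataset, validation_data=val_dataset, epochs=10)
```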
In a specific embodiment, in the process of training the target convolutional neural network module with the first training data associated with the first category data, the deep learning module 124 can generate various filters by itself to extract various feature values. The filters may be, for example, histogram filters, CLAHE (contrast-limited adaptive histogram equalization) filters, Gaussian filters, and the like, but are not limited thereto. In a specific embodiment, the deep learning module 124 may be communicatively connected to the target convolutional neural network module and the trained convolutional neural network module 128. In a specific embodiment, the deep learning module 124 may include the target convolutional neural network module and the trained convolutional neural network module 128. In a specific embodiment, the data processing module 122 may be communicatively connected to the deep learning module 124 and/or the trained convolutional neural network module 128.
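As an illustration of the kinds of filters named above, the sketch below applies histogram equalization, CLAHE, and Gaussian blurring with OpenCV. The parameter values (clip limit, tile size, kernel size) are assumptions, and note that the patent describes such filters as being generated by the deep learning module itself during training rather than applied by hand.

```python
# Illustrative filter bank (OpenCV); all parameters are assumptions, not patent values.
import cv2

def apply_filters(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)

    # Plain histogram equalization of the grayscale image.
    equalized = cv2.equalizeHist(gray)

    # Contrast-limited adaptive histogram equalization (CLAHE).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    clahe_img = clahe.apply(gray)

    # Gaussian smoothing of the original colour image.
    blurred = cv2.GaussianBlur(bgr_image, (5, 5), 0)

    return equalized, clahe_img, blurred
```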
In a specific embodiment, in order to obtain more training data and thereby improve the accuracy with which the trained convolutional neural network module 128 determines the bilirubin concentration range or the degree of jaundice, the data processing module 122 can generate second image data based on the first image data and generate second training data based on the second image data. The data processing module 122 can then associate the second training data with second category data and store the second training data in the database 110. Preferably, the second category data is simply the first category data associated with the first image data. In a specific embodiment, the deep learning module 124 trains the target convolutional neural network module with the first training data associated with the first category data and the second training data associated with the second category data, so as to obtain the trained convolutional neural network module 128.
In different specific embodiments, the data processing module 122 performs one of image translation processing (for example, horizontal translation, vertical translation, or other translation processing of the first image data, but not limited thereto), image rotation processing (for example, rotating the first image data by 0 to 180 degrees, but not limited thereto), image flipping processing (for example, horizontal flipping, vertical flipping, or other flipping of the first image data, but not limited thereto), and gap-compensation-constant processing on the first image data to generate the second image data. It should be understood, however, that the manner in which the data processing module 122 generates the second image data based on the first image data is not limited thereto.
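A minimal sketch of the translation, rotation, and flip augmentations described above is given below using OpenCV; the shift amounts, rotation angle, and the constant used to fill the gaps exposed by the transforms are assumptions chosen only for illustration.

```python
# Illustrative augmentation of a first image into second images (assumed parameters).
import cv2
import numpy as np

def augment(image, shift=(20, 10), angle=30, fill_value=0):
    h, w = image.shape[:2]

    # Translation: shift horizontally/vertically, filling the exposed gap with a constant.
    m_shift = np.float32([[1, 0, shift[0]], [0, 1, shift[1]]])
    translated = cv2.warpAffine(image, m_shift, (w, h),
                                borderMode=cv2.BORDER_CONSTANT, borderValue=fill_value)

    # Rotation about the image centre (the patent mentions 0 to 180 degrees).
    m_rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(image, m_rot, (w, h),
                             borderMode=cv2.BORDER_CONSTANT, borderValue=fill_value)

    # Flips: 1 = horizontal, 0 = vertical.
    flipped_h = cv2.flip(image, 1)
    flipped_v = cv2.flip(image, 0)

    return translated, rotated, flipped_h, flipped_v
```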
In a specific embodiment, the processing device 120 may further include a communication module 126. The data processing module 122 may receive data (for example, input image data) from the device 900 through the communication module 126, or may transmit data (for example, judgment data) to the device 900 through the communication module 126. The communication module 126 may be communicatively connected to the device 900 and to the trained convolutional neural network module 128 of the processing device 120. The device 900 may be, for example, a computer, a mobile device (a computer may also be regarded as a kind of mobile device), a remote server, or the like, but is not limited thereto. In a specific embodiment, the device 900 may also be regarded as part of the jaundice analysis system 100, and the input image data is stored in the device 900. In a specific embodiment, the device 900 includes an image capture device, by means of which the device 900 can capture an image and generate the input image data. Preferably, the input image data includes first input image data and second input image data; the first input image data includes a left-eye white-of-eye image of the target object, and the second input image data includes a right-eye white-of-eye image of the target object. In a specific embodiment, the communication module 126 may be communicatively connected to the data processing module 122 and/or the deep learning module 124.
In a specific embodiment, the jaundice analysis system 100 of the present invention includes one or more processors and implements the database 110 and the processing device 120 through hardware and software operating in cooperation. In a specific embodiment, the processing device 120 includes one or more processors and implements the data processing module 122, the deep learning module 124, the communication module 126, and the trained convolutional neural network module 128 through hardware and software operating in cooperation. In a specific embodiment, the device 900 includes one or more processors and implements the image capture device through hardware and software operating in cooperation.
Please refer to FIGS. 2A to 2C, which illustrate different first image data. The bilirubin concentration range indicated by the first category data associated with the first image data 201 and 202 of FIG. 2A is 0 to 1.2 mg/dL. The bilirubin concentration range indicated by the first category data associated with the first image data 203 and 204 of FIG. 2B is 1.3 to 3.5 mg/dL. The bilirubin concentration range indicated by the first category data associated with the first image data 205 and 206 of FIG. 2C is greater than 3.6 mg/dL. In a specific embodiment, the data processing module of the jaundice analysis system of the present invention can perform a first cropping process on the first image data to generate first cropped image data, whereby each piece of first image data can be cropped or adjusted to the same image size. The data processing module can then generate the first training data based on the first cropped image data.
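The mapping from a measured bilirubin concentration to the category data of FIGS. 2A to 2C, and a first cropping step that brings every image to a common size, might look like the sketch below. The target size and the use of simple resizing are assumptions, since the document only states that the images are cropped or adjusted to the same size.

```python
# Sketch: label assignment from bilirubin concentration and size normalisation (assumed 456x456 target).
import cv2

def bilirubin_to_class(mg_per_dl):
    """Map a bilirubin concentration (mg/dL) to the three ranges of FIGS. 2A-2C."""
    if mg_per_dl <= 1.2:
        return 0  # 0-1.2 mg/dL
    if mg_per_dl <= 3.5:
        return 1  # 1.3-3.5 mg/dL
    return 2      # greater than 3.6 mg/dL

def first_crop(image, size=(456, 456)):
    """Bring every first image to the same size (here simply by resizing)."""
    return cv2.resize(image, size, interpolation=cv2.INTER_AREA)
```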
Please refer to FIG. 3, which illustrates different mirrored image data. The mirrored image data 302 and 304 are mirrored image data generated by the data processing module applying different mirroring processes to the first image data. The short side of the image data 304 is padded by mirroring (that is, mirroring is performed with the short side as the axis of symmetry) so that the length and width of the image data 304 become equal. In a specific embodiment, the first cropped image data can be treated as a kind of first image data; that is, the data processing module performs the mirroring process on first image data that has already undergone the first cropping process. In a specific embodiment, the data processing module generates the first training data based on the mirrored image data.
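One way to implement the mirroring step that pads the short side of a rectangular crop until the image becomes square, as described for image data 304, is OpenCV's reflective border; the sketch below assumes the required padding is no larger than the image itself.

```python
# Sketch: pad the short side by mirroring so the image becomes square.
import cv2

def mirror_to_square(image):
    h, w = image.shape[:2]
    if h == w:
        return image
    if h < w:
        # Height is the short side: reflect rows across the bottom edge until square.
        return cv2.copyMakeBorder(image, 0, w - h, 0, 0, cv2.BORDER_REFLECT)
    # Width is the short side: reflect columns across the right edge until square.
    return cv2.copyMakeBorder(image, 0, 0, 0, h - w, cv2.BORDER_REFLECT)
```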
Please refer to FIG. 4, which illustrates a schematic diagram of a specific embodiment of the second cropped image data. In the embodiment shown in FIG. 4, the data processing module can perform a second cropping process on the mirrored image data 304 of FIG. 3 to generate second cropped image data 402. In a specific embodiment, the second cropping process takes one side of the mirrored image data 304 as a diameter to form an inscribed circle (the inscribed circle of the square), cuts away the image content outside that circle, and keeps only the image content inside the circle. In this way, the data processing module can also crop different mirrored image data to the same image size. It should be understood that FIG. 4 is merely illustrative; when performing the second cropping process, the data processing module is not limited to cropping the image into a circle and may crop the image into a specific image shape as required. In a specific embodiment, the data processing module generates the first training data based on the second cropped image data.
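A possible implementation of the second cropping into the inscribed circle of the square image is to mask out everything outside that circle, as sketched below; filling the outside with black is an assumption, since the document only says the content outside the circle is removed.

```python
# Sketch: keep only the inscribed circle of a square image (outside filled with black).
import cv2
import numpy as np

def inscribed_circle_crop(square_image):
    h, w = square_image.shape[:2]
    assert h == w, "expects the mirrored, square image"

    # Filled circular mask whose diameter equals one side of the square.
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.circle(mask, (w // 2, h // 2), w // 2, 255, thickness=-1)

    # Zero out every pixel outside the inscribed circle.
    return cv2.bitwise_and(square_image, square_image, mask=mask)
```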
Please refer to FIG. 5, which illustrates a schematic diagram of a specific embodiment of generating the third cropped image data. In the embodiment shown in FIG. 5, the data processing module can perform a third cropping process on the second cropped image data 502 to generate third cropped image data 504. In a specific embodiment, the third cropping process takes a square of a specific size centered on the center point of the second cropped image data 502 and keeps only the image content within that square, so as to reduce unnecessary features. In a specific embodiment, the data processing module generates the first training data based on the third cropped image data.
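The third cropping, which keeps only a fixed-size square around the image centre, could be written as below; the crop size is an assumed parameter.

```python
# Sketch: keep a centred square of a given size (the size is an assumption).
def center_square_crop(image, size=224):
    h, w = image.shape[:2]
    half = size // 2
    top = max(h // 2 - half, 0)
    left = max(w // 2 - half, 0)
    return image[top:top + size, left:left + size]
```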
Please refer to FIG. 6, which illustrates a schematic diagram of a specific embodiment of generating de-reflection image data. In the embodiment shown in FIG. 6, the data processing module can perform de-reflection processing on the first image data 602 to generate de-reflection image data 604. In this way, the white specular reflections of the eyeball in the image can be prevented from interfering with the deep learning (that is, with the training of the target convolutional neural network module by the deep learning module). In a specific embodiment, the de-reflection processing is performed by using the clone function of the image-retouching software PhotoImpact to remove the reflective parts of the eye. In a specific embodiment, the data processing module generates the first training data based on the de-reflection image data.
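The document removes the specular reflections manually with PhotoImpact's clone tool. An automated approximation, given here purely as an assumption and not as the patented workflow, is to threshold the brightest pixels and inpaint them:

```python
# Sketch: automatic removal of bright specular reflections by inpainting.
# This only approximates the manual PhotoImpact clone step described in the document.
import cv2

def remove_reflections(bgr_image, brightness_threshold=240):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)

    # Mark near-white pixels (candidate specular highlights) and grow the mask slightly.
    _, mask = cv2.threshold(gray, brightness_threshold, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)

    # Fill the masked regions from their surroundings.
    return cv2.inpaint(bgr_image, mask, 3, cv2.INPAINT_TELEA)
```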
It should be understood that the first cropped image data, the mirrored image data, the second cropped image data, the third cropped image data, and the de-reflection image data can all be regarded as a kind of first image data, and the data processing module can apply the first cropping process and/or the mirroring process and/or the second cropping process and/or the third cropping process and/or the de-reflection process to such image data.
Please refer to FIG. 7, which illustrates a flow chart of a specific embodiment of the jaundice analysis method of the present invention. In the embodiment shown in FIG. 7, the jaundice analysis method 700 is applied to the jaundice analysis system. The jaundice analysis method 700 starts at step 710, in which the data processing module of the jaundice analysis system generates first training data based on first image data and associates the first training data with first category data. The first image data includes a first white-of-eye image. Next, step 720 is performed, in which the deep learning module of the jaundice analysis system trains a target convolutional neural network module with the first training data associated with the first category data to obtain a trained convolutional neural network module. In a specific embodiment, the deep learning module obtains the trained convolutional neural network module by means of transfer learning; that is, the target convolutional neural network module is a convolutional neural network module that has already been trained, and this prior training is not limited to judging jaundice symptoms or bilirubin concentration; it may, for example, be training to judge other things from images, but is not limited thereto.
After step 720 is completed, step 730 is performed, in which the communication module of the jaundice analysis system receives the input image data from a mobile device (for example, a mobile phone or tablet computer, but not limited thereto). The input image data includes a second white-of-eye image of the target object. Preferably, the input image data includes first input image data and second input image data; the first input image data includes a left-eye white-of-eye image of the target object, and the second input image data includes a right-eye white-of-eye image of the target object. It should be understood that step 730 may also be performed before step 710 or 720 as required.
After steps 710 to 730 are completed, step 740 is performed, in which the trained convolutional neural network module of the jaundice analysis system obtains judgment data according to the input image data. The judgment data indicates the bilirubin concentration range of the target object (the bilirubin concentration range can also be regarded as the degree of jaundice). Then, step 750 is performed, in which the communication module of the jaundice analysis system transmits the judgment data to the mobile device.
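Steps 730 to 750 amount to a request/response loop: the mobile device uploads the white-of-eye images, the trained network is run, and the judgment data is sent back. A minimal sketch of such an endpoint is given below using Flask; the route name, field names, model file name, and the averaging of the two eye images are all assumptions made for illustration only.

```python
# Minimal sketch of steps 730-750 as an HTTP endpoint (Flask); names and fields are assumptions.
import io
import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)
model = tf.keras.models.load_model("trained_jaundice_cnn.h5")   # assumed file name
RANGES = ["0-1.2 mg/dL", "1.3-3.5 mg/dL", ">3.6 mg/dL"]

def preprocess(file_storage, size=(456, 456)):
    image = Image.open(io.BytesIO(file_storage.read())).convert("RGB").resize(size)
    return np.asarray(image, dtype=np.float32)

@app.route("/analyze", methods=["POST"])
def analyze():
    # Step 730: receive the left- and right-eye white-of-eye images from the mobile device.
    left = preprocess(request.files["left_eye"])
    right = preprocess(request.files["right_eye"])

    # Step 740: run the trained CNN; here the two eyes are scored separately and averaged.
    probs = model.predict(np.stack([left, right])).mean(axis=0)

    # Step 750: return the judgment data (the bilirubin concentration range) to the mobile device.
    return jsonify({"bilirubin_range": RANGES[int(np.argmax(probs))]})
```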
In a specific embodiment, generating the first training data based on the first image data may further include: performing, by the data processing module, a first cropping process on the first image data to generate first cropped image data, wherein the data processing module generates the first training data based on the first cropped image data. In a specific embodiment, the data processing module performs the first cropping process on the first image data according to a first instruction. The first instruction may be, for example, an image-cropping operation performed by the user with a mouse, or may be, for example, a default image-cropping instruction, but is not limited thereto.
In a specific embodiment, generating the first training data based on the first image data may further include: performing, by the data processing module, mirroring processing on the first image data to generate mirrored image data, wherein the data processing module generates the first training data based on the mirrored image data. In a specific embodiment, the data processing module performs the mirroring processing on the first image data according to a second instruction. The second instruction may be, for example, an image-mirroring operation performed by the user with a mouse, or may be, for example, a default image-mirroring instruction, but is not limited thereto.
In a specific embodiment, generating the first training data based on the first image data may further include: performing, by the data processing module, a second cropping process on the mirrored image data to generate second cropped image data, wherein the second cropped image data has a specific image shape and the data processing module generates the first training data based on the second cropped image data. In a specific embodiment, the data processing module performs the second cropping process on the mirrored image data according to a third instruction. The third instruction may be, for example, an image-cropping operation performed by the user with a mouse, or may be, for example, a default image-cropping instruction, but is not limited thereto.
In a specific embodiment, generating the first training data based on the first image data may further include: performing, by the data processing module, a third cropping process on the second cropped image data to generate third cropped image data, wherein the data processing module generates the first training data based on the third cropped image data. In a specific embodiment, the data processing module performs the third cropping process on the second cropped image data according to a fourth instruction. The fourth instruction may be, for example, an image-cropping operation performed by the user with a mouse, or may be, for example, a default image-cropping instruction, but is not limited thereto.
In a specific embodiment, generating the first training data based on the first image data may further include: performing, by the data processing module, de-reflection processing on the first image data to generate de-reflection image data, wherein the data processing module generates the first training data based on the de-reflection image data. In a specific embodiment, the data processing module performs the de-reflection processing on the first image data according to a fifth instruction. The fifth instruction may be, for example, an image de-reflection operation performed by the user with a mouse, or may be, for example, a default image de-reflection instruction, but is not limited thereto.
In a specific embodiment, in order to obtain more training data and thereby improve the accuracy with which the trained convolutional neural network module determines the bilirubin concentration range or the degree of jaundice, the jaundice analysis method 700 may further include: generating, by the data processing module, second image data based on the first image data; generating, by the data processing module, second training data based on the second image data; and associating, by the data processing module, the second training data with second category data. Preferably, the second category data is simply the first category data associated with the first image data. In a specific embodiment, the deep learning module trains the target convolutional neural network module with the first training data associated with the first category data and the second training data associated with the second category data, so as to obtain the trained convolutional neural network module.
In different specific embodiments, the data processing module performs one of image translation processing, image rotation processing, image flipping processing, and gap-compensation-constant processing on the first image data to generate the second image data. It should be understood, however, that the manner in which the data processing module generates the second image data based on the first image data is not limited thereto.
The jaundice analysis system and method of the present invention have thus been described by the foregoing description and the accompanying drawings. It should be understood, however, that the specific embodiments of the present invention are provided for illustration only; various changes may be made without departing from the scope and spirit claimed for the present invention, and all such changes shall fall within the scope of the present patent. Therefore, the specific embodiments described in this specification are not intended to limit the present invention; the true scope and spirit of the present invention are set forth in the following claims.
Description of Reference Numerals
100          Jaundice analysis system
110          Database
120          Processing device
122          Data processing module
124          Deep learning module
126          Communication module
128          Trained convolutional neural network module
201~206      First image data
302, 304     Mirrored image data
402          Second cropped image data
502          Second cropped image data
504          Third cropped image data
602          First image data
604          De-reflection image data
700          Jaundice analysis method
710~750      Steps
900          Device

Claims (20)

  1. A jaundice analysis system, comprising:
    a database; and
    a processing device that accesses the database, the processing device comprising:
    a data processing module that generates first training data based on first image data, associates the first training data with first category data, and stores the first training data in the database; and
    a deep learning module that trains a target convolutional neural network module with the first training data associated with the first category data to obtain a trained convolutional neural network module;
    wherein the first image data includes a first white-of-eye image;
    wherein the database is communicatively connected to the data processing module and/or the deep learning module;
    wherein the trained convolutional neural network module of the processing device obtains judgment data according to input image data, the input image data including a second white-of-eye image of a target object;
    wherein the judgment data indicates the bilirubin concentration range of the target object.
  2. The jaundice analysis system of claim 1, wherein the deep learning module obtains the trained convolutional neural network module by means of transfer learning.
  3. The jaundice analysis system of claim 1, wherein the data processing module performs a first cropping process on the first image data to generate first cropped image data, and wherein the data processing module generates the first training data based on the first cropped image data.
  4. The jaundice analysis system of claim 1, wherein the data processing module performs mirroring processing on the first image data to generate mirrored image data, and wherein the data processing module generates the first training data based on the mirrored image data.
  5. The jaundice analysis system of claim 4, wherein the data processing module performs a second cropping process on the mirrored image data to generate second cropped image data, wherein the data processing module generates the first training data based on the second cropped image data, and wherein the second cropped image data has a specific image shape.
  6. The jaundice analysis system of claim 5, wherein the data processing module performs a third cropping process on the second cropped image data to generate third cropped image data, and wherein the data processing module generates the first training data based on the third cropped image data.
  7. The jaundice analysis system of claim 1, wherein the data processing module performs de-reflection processing on the first image data to generate de-reflection image data, and wherein the data processing module generates the first training data based on the de-reflection image data.
  8. The jaundice analysis system of claim 1, wherein the data processing module generates second image data based on the first image data and generates second training data based on the second image data, associates the second training data with second category data, and stores the second training data in the database; and wherein the deep learning module trains the target convolutional neural network module with the first training data associated with the first category data and the second training data associated with the second category data to obtain the trained convolutional neural network module.
  9. The jaundice analysis system of claim 8, wherein the data processing module performs one of image translation processing, image rotation processing, and image flipping processing on the first image data to generate the second image data.
  10. The jaundice analysis system of claim 1, further comprising:
    a mobile device that stores the input image data;
    wherein the processing device further comprises:
    a communication module communicatively connected to the mobile device and to the trained convolutional neural network module of the processing device, the communication module receiving the input image data from the mobile device and transmitting the judgment data to the mobile device.
  11. A jaundice analysis method, applied to a jaundice analysis system, the jaundice analysis method comprising:
    generating, by a data processing module of the jaundice analysis system, first training data based on first image data, the data processing module associating the first training data with first category data;
    training, by a deep learning module of the jaundice analysis system, a target convolutional neural network module with the first training data associated with the first category data to obtain a trained convolutional neural network module; and
    obtaining, by the trained convolutional neural network module of the jaundice analysis system, judgment data according to input image data, the input image data including a second white-of-eye image of a target object;
    wherein the first image data includes a first white-of-eye image;
    wherein the judgment data indicates the bilirubin concentration range of the target object.
  12. The jaundice analysis method of claim 11, wherein the deep learning module obtains the trained convolutional neural network module by means of transfer learning.
  13. The jaundice analysis method of claim 11, wherein generating the first training data based on the first image data further comprises:
    performing, by the data processing module, a first cropping process on the first image data to generate first cropped image data;
    wherein the data processing module generates the first training data based on the first cropped image data.
  14. The jaundice analysis method of claim 11, wherein generating the first training data based on the first image data further comprises:
    performing, by the data processing module, mirroring processing on the first image data to generate mirrored image data;
    wherein the data processing module generates the first training data based on the mirrored image data.
  15. The jaundice analysis method of claim 14, wherein generating the first training data based on the first image data further comprises:
    performing, by the data processing module, a second cropping process on the mirrored image data to generate second cropped image data;
    wherein the data processing module generates the first training data based on the second cropped image data;
    wherein the second cropped image data has a specific image shape.
  16. The jaundice analysis method of claim 15, wherein generating the first training data based on the first image data further comprises:
    performing, by the data processing module, a third cropping process on the second cropped image data to generate third cropped image data;
    wherein the data processing module generates the first training data based on the third cropped image data.
  17. The jaundice analysis method of claim 11, wherein generating the first training data based on the first image data further comprises:
    performing, by the data processing module, de-reflection processing on the first image data to generate de-reflection image data;
    wherein the data processing module generates the first training data based on the de-reflection image data.
  18. The jaundice analysis method of claim 11, further comprising:
    generating, by the data processing module, second image data based on the first image data; and
    generating, by the data processing module, second training data based on the second image data, the data processing module associating the second training data with second category data;
    wherein the deep learning module trains the target convolutional neural network module with the first training data associated with the first category data and the second training data associated with the second category data to obtain the trained convolutional neural network module.
  19. The jaundice analysis method of claim 18, wherein the data processing module performs one of image translation processing, image rotation processing, and image flipping processing on the first image data to generate the second image data.
  20. The jaundice analysis method of claim 11, further comprising:
    receiving, by a communication module of the jaundice analysis system, the input image data from a mobile device; and
    transmitting, by the communication module, the judgment data to the mobile device.
PCT/CN2020/137680 2020-12-18 2020-12-18 黄疸分析系统及其方法 WO2022126622A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
JP2023537919A JP2023554145A (ja) 2020-12-18 2020-12-18 黄疸分析システム及びその方法
PCT/CN2020/137680 WO2022126622A1 (zh) 2020-12-18 2020-12-18 黄疸分析系统及其方法
EP20965636.2A EP4265181A1 (en) 2020-12-18 2020-12-18 Jaundice analysis system and method thereof
CN202080107974.3A CN116801798A (zh) 2020-12-18 2020-12-18 黄疸分析系统及其方法
US18/267,909 US20240054641A1 (en) 2020-12-18 2020-12-18 Jaundice analysis system and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/137680 WO2022126622A1 (zh) 2020-12-18 2020-12-18 黄疸分析系统及其方法

Publications (1)

Publication Number Publication Date
WO2022126622A1 true WO2022126622A1 (zh) 2022-06-23

Family

ID=82058873

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/137680 WO2022126622A1 (zh) 2020-12-18 2020-12-18 黄疸分析系统及其方法

Country Status (5)

Country Link
US (1) US20240054641A1 (zh)
EP (1) EP4265181A1 (zh)
JP (1) JP2023554145A (zh)
CN (1) CN116801798A (zh)
WO (1) WO2022126622A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7187790B2 (en) * 2002-12-18 2007-03-06 Ge Medical Systems Global Technology Company, Llc Data processing and feedback method and system
CN104856680A (zh) * 2015-05-11 2015-08-26 深圳贝申医疗技术有限公司 一种新生儿黄疸的自动检测方法及系统
CN105466864A (zh) * 2015-12-30 2016-04-06 中国科学院苏州生物医学工程技术研究所 基于色度分析的巩膜黄染检测仪
CN108430326A (zh) * 2015-12-22 2018-08-21 皮克特鲁斯公司 基于图像确定胆红素
CN109009132A (zh) * 2018-07-09 2018-12-18 京东方科技集团股份有限公司 一种黄疸监测方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7187790B2 (en) * 2002-12-18 2007-03-06 Ge Medical Systems Global Technology Company, Llc Data processing and feedback method and system
CN104856680A (zh) * 2015-05-11 2015-08-26 深圳贝申医疗技术有限公司 一种新生儿黄疸的自动检测方法及系统
CN108430326A (zh) * 2015-12-22 2018-08-21 皮克特鲁斯公司 基于图像确定胆红素
CN105466864A (zh) * 2015-12-30 2016-04-06 中国科学院苏州生物医学工程技术研究所 基于色度分析的巩膜黄染检测仪
CN109009132A (zh) * 2018-07-09 2018-12-18 京东方科技集团股份有限公司 一种黄疸监测方法及装置

Also Published As

Publication number Publication date
CN116801798A (zh) 2023-09-22
JP2023554145A (ja) 2023-12-26
EP4265181A1 (en) 2023-10-25
US20240054641A1 (en) 2024-02-15

Similar Documents

Publication Publication Date Title
US20220076420A1 (en) Retinopathy recognition system
WO2021068523A1 (zh) 眼底图像黄斑中心定位方法、装置、电子设备及存储介质
Keel et al. Development and validation of a deep‐learning algorithm for the detection of neovascular age‐related macular degeneration from colour fundus photographs
US10755410B2 (en) Method and apparatus for acquiring information
CN110428410B (zh) 眼底医学图像处理方法、装置、设备及存储介质
US20200193004A1 (en) Biometric identification systems and associated devices
CN104867148B (zh) 预定种类物体图像的获取方法、装置及口腔远程诊断系统
JP6800351B2 (ja) 電極シートのバリを検出するための方法および装置
US20210343007A1 (en) Quality control method and system for remote fundus screening, and storage device
WO2021190656A1 (zh) 眼底图像黄斑中心定位方法及装置、服务器、存储介质
WO2022126622A1 (zh) 黄疸分析系统及其方法
TWI767456B (zh) 黃疸分析系統及其方法
CN113408593A (zh) 一种基于改进的ResNeSt卷积神经网络模型的糖尿病性视网膜病变图像分类方法
CN116030042B (zh) 一种针对医生目诊的诊断装置、方法、设备及存储介质
US20230165458A1 (en) Cloud Based Corneal Surface Difference Mapping System and Method
CN109003264B (zh) 一种视网膜病变图像类型识别方法、装置和存储介质
CN112806957B (zh) 一种基于深度学习的圆锥角膜和亚临床圆锥角膜检测系统
CN111259774A (zh) 用于获取课件信息的方法、授课设备、备课设备及服务器
WO2023137904A1 (zh) 基于眼底图像的病变检测方法、装置、设备及存储介质
US20200226747A1 (en) Method for obtaining a fluorescent fundus image and a device thereof
Blair et al. Development of LuxIA, a Cloud-Based AI Diabetic Retinopathy Screening Tool Using a Single Color Fundus Image
CN111291825B (zh) 病灶分类模型训练方法、装置、计算机设备和存储介质
CN116363740B (zh) 基于深度学习的眼科疾病类别智能分析方法及装置
Lestari et al. The Validity of Artificial Intelligence (AI) for Diabetic Retinopathy Screening in Asia: A Systematic Review
WO2020214612A1 (en) Device navigation and capture of media data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20965636

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202080107974.3

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 18267909

Country of ref document: US

Ref document number: 2023537919

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020965636

Country of ref document: EP

Effective date: 20230718