CN110428377B - Data expansion method, device, equipment and medium - Google Patents

Data expansion method, device, equipment and medium

Info

Publication number
CN110428377B
CN110428377B
Authority
CN
China
Prior art keywords
chromaticity
target image
source
new
correction coefficient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910683255.5A
Other languages
Chinese (zh)
Other versions
CN110428377A (en)
Inventor
孙旭
杨叶辉
王磊
许言午
黄艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Confucius Health Technology Co ltd
Original Assignee
Beijing Confucius Health Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Confucius Health Technology Co ltd filed Critical Beijing Confucius Health Technology Co ltd
Priority to CN201910683255.5A priority Critical patent/CN110428377B/en
Publication of CN110428377A publication Critical patent/CN110428377A/en
Application granted granted Critical
Publication of CN110428377B publication Critical patent/CN110428377B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the invention disclose a data expansion method, apparatus, device and medium, relating to the field of image processing. The method comprises the following steps: acquiring a source target image from a target machine type, and extracting chromaticity information of the source target image; and adjusting the chromaticity information of the source target image based on a target image chromaticity range to generate a new target image. The embodiments of the invention thereby realize the expansion of target images of one model into target images of other models.

Description

Data expansion method, device, equipment and medium
Technical Field
The embodiment of the invention relates to the field of image processing, in particular to a data expansion method, a device, equipment and a medium.
Background
Retinal fundus images play an important role in fundus disease screening and diagnosis. Obtaining a high-performance fundus image analysis model requires a large amount of high-quality annotated training data. To ensure the generalization and robustness of the model output in an actual application scene, the distribution of the training data must be basically consistent with the data distribution in that scene.
However, many models of fundus camera, the apparatus used to capture retinal fundus images, are currently in use on the market. Fundus cameras of different models adopt different hardware (such as the illumination light source, the photosensitive element, and the like) and software (such as digital image post-processing techniques) configurations, so the retinal fundus images they produce show obvious differences in chromaticity (as shown in Fig. 1). As a result, retinal fundus images photographed by different camera models have inconsistent data distributions.
However, due to equipment cost limitations, the data set adopted in the training phase of a model generally includes retinal fundus images of only one camera model, or of only a small number of models, so the training data set is single. Furthermore, a model trained on such a single data set has low analysis accuracy on retinal fundus images from camera models that did not participate in training, which restricts the practical application of retinal fundus image analysis models.
Disclosure of Invention
The embodiments of the invention provide a data expansion method, apparatus, device and medium that realize the expansion of target images of one model into target images of other models; the expanded target images of the other models can be used for model training to improve the analysis accuracy of the model on target images of different models.
In a first aspect, an embodiment of the present invention provides a data expansion method, where the method includes:
acquiring a source target image from a target machine type, and extracting chromaticity information of the source target image;
and adjusting chromaticity information of the source target image based on the chromaticity range of the target image to generate a new target image.
In a second aspect, an embodiment of the present invention further provides a data expansion apparatus, where the apparatus includes:
a chromaticity information extraction module, configured to acquire a source target image from a target model and extract chromaticity information of the source target image;
and the chromaticity information adjusting module is used for adjusting chromaticity information of the source target image based on the chromaticity range of the target image to generate a new target image.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the data expansion method as described in any of the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, on which a computer program is stored, where the program is executed by a processor to implement a data expansion method according to any one of the embodiments of the present invention.
According to the embodiment of the invention, the chromaticity information of the source target image is adjusted based on the chromaticity range of the target image so as to simulate other target images of other machine types, and further, the initial analysis model is trained by using the simulated other target images of other machine types and the source target image so as to improve the analysis accuracy of the trained target analysis model on the other target images of other machine types.
Drawings
Fig. 1 shows retinal fundus images obtained by photographing the same fundus with fundus cameras of different models produced by different manufacturers in the prior art;
FIG. 2 is a flowchart of a data expansion method according to a first embodiment of the present invention;
FIG. 3 is a flowchart of a data expansion method according to a second embodiment of the present invention;
FIG. 4 is a flowchart of a data expansion method according to a third embodiment of the present invention;
fig. 5 is a schematic view of fundus images obtained by applying gamma correction with different gamma values to the color channels of the same fundus image according to the third embodiment of the present invention;
FIG. 6 is a flowchart of a data expansion method according to a fourth embodiment of the present invention;
fig. 7 is a schematic structural diagram of a data expansion device according to a fifth embodiment of the present invention;
fig. 8 is a schematic structural diagram of an apparatus according to a sixth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 2 is a flowchart of a data expansion method according to an embodiment of the invention. This embodiment is applicable to situations in which source target images from a target model are used to simulate target images of other models, and an analysis model for the target images is trained with both the source target images and the simulated other target images to improve its analysis accuracy on the other target images.
Optionally, the target image may be an image of arbitrary content. Typically, the target image is a retinal fundus image. The method may be performed by a data expansion device, which may be implemented in software and/or hardware. Referring to fig. 2, the data expansion method provided in this embodiment includes:
s110, acquiring a source target image from a target model, and extracting chromaticity information of the source target image.
The target model is a model for acquiring a source target image.
The source target image is an image that includes target content and is used to augment the new target image.
The target content may be any content to be analyzed, such as a diseased organ, an environment to be analyzed, and the like.
Typically, the target content is the fundus retina.
The chrominance information is information describing the chrominance of the image, and specifically may be the numerical values of three color channels of RGB in the image, or may be the U component and the V component of the image.
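As an illustration of these two representations, the following sketch extracts per-channel RGB values and, alternatively, the U and V components of an image. The function names and the BT.601 conversion weights are illustrative assumptions (numpy assumed), not values prescribed by this embodiment.

```python
import numpy as np

def rgb_chroma_info(image_rgb):
    """Chromaticity information as the mean value of each RGB color channel."""
    # image_rgb: h x w x 3 array with values in [0, 255]
    return image_rgb.reshape(-1, 3).mean(axis=0)   # [mean_R, mean_G, mean_B]

def yuv_chroma_info(image_rgb):
    """Chromaticity information as the U and V components (BT.601 weights assumed)."""
    rgb = image_rgb.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luminance (Y)
    u = 0.492 * (b - y)                             # chrominance U
    v = 0.877 * (r - y)                             # chrominance V
    return u, v
```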
S120, adjusting chromaticity information of the source target image based on the chromaticity range of the target image, and generating a new target image.
The target image chromaticity range refers to chromaticity variation ranges of target images in different machine types.
Specifically, the target image chromaticity range can be flexibly selected by those skilled in the art as needed, which is not limited.
Optionally, before adjusting the chromaticity information of the source target image based on the target image chromaticity range, the method further includes:
and determining the chromaticity range of the target image according to other target images acquired from at least one other model.
The other model refers to a model other than the target model.
The other target images are target images acquired by devices of other models.
To improve accuracy in determining a chromaticity range of a target image, the determining the chromaticity range of the target image according to other target images acquired from at least one other model includes:
clustering the other target images based on the chromaticity information;
calculating the chromaticity mean value of other target images in each category;
and determining the chromaticity range of the target image according to the chromaticity mean value of each category.
Specifically, clustering the other target images based on the chromaticity information includes:
carrying out weighted summation on the values of the color channels in other target images, and determining chromaticity values of the other target images;
other target images with the same or similar chromaticity values are gathered into the same category.
Optionally, clustering the other target images based on the chrominance information includes:
comparing the values of the same color channels in different other target images;
and clustering the other target images according to the numerical comparison result of each color channel.
Determining the target image chromaticity range according to the chromaticity mean value of each category comprises the following steps:
and determining the maximum value and the minimum value of the chromaticity mean value of each category, and determining the chromaticity range of the target image according to the determined maximum value and the determined minimum value.
Specifically, determining the maximum value and the minimum value of the chromaticity mean value of each category, and determining the chromaticity range of the target image according to the determined maximum value and the determined minimum value, including:
comparing the chromaticity mean values of the categories, and determining the maximum value and the minimum value in the chromaticity mean values of the categories according to the comparison result;
and taking a chromaticity range formed by the determined maximum value and the determined minimum value as the target image chromaticity range.
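A minimal sketch of this range-determination procedure is given below, assuming that grouping "same or similar" chromaticity values is done by binning the weighted chromaticity values; the channel weights and the bin width are illustrative assumptions rather than values fixed by the embodiment.

```python
import numpy as np

def target_chroma_range(other_images, weights=(0.299, 0.587, 0.114), bin_width=5.0):
    """Determine the target image chromaticity range from images of other models."""
    # One chromaticity value per image: weighted sum over its mean color-channel values.
    chroma = np.array([np.dot(img.reshape(-1, 3).mean(axis=0), weights)
                       for img in other_images])
    # Cluster images whose chromaticity values are the same or similar (same bin).
    labels = np.floor(chroma / bin_width).astype(int)
    # Chromaticity mean of each category; the min and max of those means form the range.
    category_means = np.array([chroma[labels == k].mean() for k in np.unique(labels)])
    return category_means.min(), category_means.max()
```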
Alternatively, other possible solutions will occur to those skilled in the art based on the above disclosure, and the present embodiment is not limited thereto.
According to the technical scheme of this embodiment, the chromaticity information of the source target image is adjusted based on the target image chromaticity range, so that target images of other machine types are simulated. By training an initial analysis model with both the simulated images of other machine types and the source target image, a single model can be adapted to several fundus cameras of different types at the same time, without additionally acquiring and annotating training data for each machine type. This saves cost, improves the robustness and universality of the model, remarkably increases the practical value of related products, and improves the analysis accuracy of the target analysis model on target images of other machine types.
Further, after generating the new target image, the method further comprises:
and training the initial analysis model by utilizing the source target image and the generated new target image to obtain a target analysis model.
The target analysis model is a model for analyzing a target image. The analysis may be any analysis, and the present embodiment is not limited thereto.
By utilizing the technical scheme provided by the embodiment of the invention, the initial analysis model is trained through the expanded data set, so that the robustness and universality of the model are improved, the practical value of related products is remarkably improved, and the analysis accuracy of the target analysis model on other target images of other models is improved.
Example two
Fig. 3 is a flowchart of a data expansion method according to a second embodiment of the present invention. This embodiment further develops the step of "adjusting the chromaticity information of the source target image based on the target image chromaticity range to generate a new target image" in the above embodiment and provides an alternative solution. Referring to fig. 3, the data expansion method provided in this embodiment includes:
s210, acquiring a source target image from a target model, and extracting chromaticity information of the source target image.
S220, generating a chromaticity correction coefficient randomly.
The chromaticity correction coefficient is a parameter for adjusting chromaticity information of the source target image.
Specifically, the chromaticity correction coefficient may be generated by a random generator.
S230, screening the chromaticity correction coefficient according to the chromaticity range of the target image.
Specifically, the screening of the chromaticity correction coefficient according to the target image chromaticity range includes:
determining a minimum correction coefficient for correcting the chromaticity of the source target image to a minimum chromaticity value in the target image chromaticity range and a maximum correction coefficient for correcting the chromaticity of the source target image to a maximum chromaticity value in the target image chromaticity range according to the target image chromaticity range and the chromaticity information of the source target image;
and if the chromaticity correction coefficient is larger than or equal to the minimum correction coefficient and smaller than or equal to the maximum correction coefficient, determining that the chromaticity correction coefficient passes the screening.
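The screening step can be sketched as follows, assuming a gamma-style correction applied to a normalized scalar chromaticity value in (0, 1), so that the coefficient mapping the source chromaticity c to a target value t is log(t)/log(c). The endpoint pairing and the numbers in the usage loop are illustrative assumptions, not taken from the embodiment.

```python
import numpy as np

def passes_screening(gamma_candidate, source_chroma, chroma_min, chroma_max):
    """Accept a random chromaticity correction coefficient only if it keeps the
    corrected chromaticity inside the target image chromaticity range."""
    # Coefficients that map the source chromaticity exactly onto the two range endpoints.
    bounds = sorted(np.log(t) / np.log(source_chroma) for t in (chroma_min, chroma_max))
    gamma_lo, gamma_hi = bounds            # minimum and maximum admissible coefficients
    return gamma_lo <= gamma_candidate <= gamma_hi

# Usage: keep drawing random coefficients until one passes the screening.
rng = np.random.default_rng(0)
while True:
    gamma = rng.uniform(1.0 / 3.0, 3.0)    # random candidate, here with n = 3
    if passes_screening(gamma, source_chroma=0.45, chroma_min=0.35, chroma_max=0.60):
        break
```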
S240, chromaticity adjustment is carried out on the source target image by using the chromaticity correction coefficient through screening, and a new target image is generated.
Specifically, the generating a new target image by performing chromaticity adjustment on the source target image by using the chromaticity correction coefficient through screening includes:
if the source target image is in an RGB format, respectively correcting the three color channel values in the source target image by using the filtered chromaticity correction coefficients based on a gamma correction algorithm;
and combining the corrected color channel values to generate a new target image.
Specifically, if the number of color channels of the source-target image is three, the number of the randomly generated chromaticity correction coefficients may be three or one. If one, the values of the three color channels can be corrected based on the same chromaticity correction coefficient.
Optionally, the generating a new target image by performing chromaticity adjustment on the source target image by using the color correction coefficient through filtering includes:
if the source target image is in YUV format, respectively adjusting the original U component and the original V component in the source target image by using the color correction coefficient through screening to generate a new U component and a new V component;
and combining the original Y component, the new U component and the new V component in the source target image to generate a new target image.
And respectively adjusting the original U component and the original V component in the source target image by using the color correction coefficients through screening to generate a new U component and a new V component, wherein the method comprises the following steps:
calculating the product of the color correction coefficient passing through the screening and the original U component in the source target image, and taking the product as a new U component;
and calculating the product of the color correction coefficient passing through the screening and the original V component in the source target image, and taking the product as a new V component.
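A sketch of this YUV branch is given below. It assumes the image is already in floating-point YUV form with zero-centred U and V planes; the storage convention is an assumption, while the per-component multiplication follows the description above.

```python
import numpy as np

def adjust_yuv(image_yuv, coeff):
    """Generate a new target image by scaling the U and V components of a YUV image
    with a chromaticity correction coefficient that passed the screening."""
    y, u, v = image_yuv[..., 0], image_yuv[..., 1], image_yuv[..., 2]
    new_u = coeff * u                      # new U component = coefficient * original U
    new_v = coeff * v                      # new V component = coefficient * original V
    # Recombine the original Y component with the new U and V components.
    return np.stack([y, new_u, new_v], axis=-1)
```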
According to the technical scheme of this embodiment, a chromaticity correction coefficient is randomly generated; the coefficient is screened according to the target image chromaticity range; and chromaticity adjustment is performed on the source target image with the coefficient that passed the screening, thereby realizing random adjustment of the source target image. Because the adjustment is random, the target images of the other models are simulated with equal probability, which in turn enables balanced training across different models.
Example III
Fig. 4 is a flowchart of a data expansion method according to a third embodiment of the present invention. The present embodiment is an alternative proposal based on the above embodiment taking the target image as the retinal fundus image as an example. Referring to fig. 4, the data expansion method provided in this embodiment includes:
s310, reading a source color retina fundus image.
The source color retinal fundus image is represented in the computer as a matrix of size h×w×c, where h represents the image height, w represents the image width, and c represents the number of image channels. The color retinal fundus image includes three color channels of RGB (R for red, G for green, and B for blue).
S320, generating a three-dimensional numerical vector γ = {γ_R, γ_G, γ_B} as correction coefficients by using a random number generator, where each component takes values in the range [1/n, n].
Wherein n is a manually set positive number greater than 1, and the specific value of n is determined according to the chromaticity range of the retinal fundus image, and can be generally set to 2 or 3.
S330, gamma correction is respectively carried out on the numerical values of the three color channels of the source color retina fundus image.
Wherein the R-channel correction coefficient is set to γ_R, the G-channel correction coefficient to γ_G, and the B-channel correction coefficient to γ_B.
Gamma correction performs a nonlinear operation on the gray values of the input image so that the gray values of the output image have an exponential relationship with those of the input image. Its mathematical expression is
V_o = V_i^γ
where V_o denotes the output gray value, V_i denotes the input gray value, and γ denotes the correction coefficient. When gamma correction is performed on the different color channel values of a color fundus image with different γ values, color fundus images with different chromaticities are obtained. Fig. 5 shows examples of fundus images obtained after gamma correction of the gray values of different color channels of the same fundus image with different γ values.
S340, merging the RGB three color channels after gamma correction again to obtain a new color fundus image.
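Steps S310-S340 can be summarized in the following sketch. The uniform sampling of the correction coefficients from [1/n, n] and the normalization to [0, 1] before applying V_o = V_i^γ are assumptions consistent with the description above, not an exact reproduction of the embodiment's implementation.

```python
import numpy as np

def augment_fundus_image(image_rgb, n=2.0, rng=None):
    """Per-channel gamma correction with randomly generated coefficients (S310-S340)."""
    rng = rng or np.random.default_rng()
    # S320: three-dimensional coefficient vector gamma = {gamma_R, gamma_G, gamma_B}.
    gamma = rng.uniform(1.0 / n, n, size=3)
    # S330: gamma-correct each color channel, V_o = V_i ** gamma, on values in [0, 1].
    channels = image_rgb.astype(np.float64) / 255.0
    corrected = np.power(channels, gamma.reshape(1, 1, 3))
    # S340: merge the corrected channels back into a new color fundus image.
    return (corrected * 255.0).round().astype(np.uint8)
```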
According to the technical scheme provided by the embodiment of the invention, gamma correction with different degrees is applied to the numerical values of different color channels of the retina fundus image, so that the chromaticity information of the input image is changed to simulate the color forming schemes of various different models.
Example IV
Fig. 6 is a flowchart of a data expansion method according to a fourth embodiment of the present invention. This embodiment is an alternative to the above embodiment by taking as an example the training of applying the extended new color fundus image to the retinal fundus image model. Referring to fig. 6, the data expansion method provided in this embodiment includes:
s410, reading a source color retina fundus image.
S420, generating a three-dimensional numerical vector γ = {γ_R, γ_G, γ_B} as correction coefficients by using a random number generator, where each component takes values in the range [1/n, n] (as in the third embodiment).
S430, gamma correction is respectively carried out on the numerical values of the three color channels of the source color retina fundus image.
S440, merging the RGB three color channels after gamma correction again to obtain a new color fundus image.
S450, using the expanded new color fundus image and the read source fundus image for subsequent model training.
Wherein the expanded new color fundus image is a simulated color fundus image of other model.
The source fundus image is a color fundus image of the target model.
Specifically, the extended new color fundus image and the read source fundus image are used for subsequent model training, including:
training the initial analysis model by using the expanded new color fundus image and the read source fundus image to obtain a target analysis model of the retina fundus image. According to the technical scheme, the simulated color fundus images of other models and the simulated color fundus images of the target model are used as sample data for subsequent model training. Therefore, the robustness and universality of the model are improved, the practical value of related products is improved, and the analysis accuracy of the model on other color fundus images of other models is improved.
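As a usage sketch of this pooling step, building on the hypothetical augment_fundus_image helper sketched in the third embodiment above, the expanded and source images can be combined into one training set. The number of augmented copies and the stand-in data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in source data; in practice these are source color fundus images of the target model.
source_images = [rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8) for _ in range(4)]
source_labels = [0, 1, 0, 1]

training_images, training_labels = [], []
for image, label in zip(source_images, source_labels):
    training_images.append(image)                          # keep the source-model image
    training_labels.append(label)
    for _ in range(3):                                     # 3 simulated "other model" copies
        training_images.append(augment_fundus_image(image, n=2.0, rng=rng))
        training_labels.append(label)
# training_images / training_labels now form the expanded data set for model training.
```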
In other words, the embodiment of the invention can realize the following effects:
(1) The embodiment of the invention can assist in realizing that a single model is simultaneously adapted to a plurality of different model fundus cameras, does not need to additionally acquire and label training data aiming at different models, improves the robustness and universality of the model while saving the cost, and obviously improves the practical value of related products.
(2) The embodiment of the invention serves as a universal data expansion method and is suitable for the training process of various retinal fundus image analysis models, including but not limited to: fundus disease grading and classification, such as diabetic retinopathy grading, maculopathy classification, and the like; segmentation and localization of key fundus structures, such as optic disc segmentation, fovea localization, fundus blood vessel segmentation, and the like; detection and segmentation of key fundus lesions, such as microaneurysm detection, drusen segmentation, and the like.
The application of the embodiments of the present invention can be described as:
the product or project of the invention is applied to various fundus image analysis systems including but not limited to fundus disease grading and classifying systems, fundus key structure positioning and dividing systems, fundus focus detection and dividing systems and the like. Taking an AI (Artificial Intelligence ) fundus screening integrated machine as an example, an operator adopts a fundus camera as a person to be screened to shoot a fundus image, a background AI algorithm automatically analyzes the fundus image and outputs risk indexes of fundus diseases such as glaucoma, maculopathy, diabetic retinopathy and the like, if a classification model adopted by the system only uses images of one model as training source data, and only uses a general image expansion method (such as image overturning, image rotation, contrast adjustment, exposure adjustment, overlapping Gaussian noise and the like) to expand a training data set, when the system is applied to fundus cameras of other models, the model output accuracy can be influenced, and the performance is difficult to be ensured. If the method provided by the embodiment of the invention is added in the model training process for data expansion, the model can still obtain reliable output when the method is applied to other model fundus cameras.
It should be noted that, after the technical teaching of the present embodiment, a person skilled in the art is motivated to combine schemes of any implementation manner described in the foregoing embodiment, so as to improve the accuracy of analysis of the model on the target images of different models.
Example five
Fig. 7 is a schematic structural diagram of a data expansion device according to a fifth embodiment of the present invention. Referring to fig. 7, the data expansion apparatus provided in this embodiment includes: a chrominance information extraction module 10 and a chrominance information adjustment module 20.
The chromaticity information extraction module 10 is configured to acquire a source target image from a target model and extract chromaticity information of the source target image;
the chromaticity information adjusting module 20 is configured to adjust chromaticity information of the source target image based on the chromaticity range of the target image, and generate a new target image.
According to the embodiment of the invention, the chromaticity information of the source target image is adjusted based on the chromaticity range of the target image so as to simulate other target images of other machine types, and further, the initial analysis model is trained by using the simulated other target images of other machine types and the source target image so as to improve the analysis accuracy of the trained target analysis model on the other target images of other machine types.
Further, the chroma information adjusting module comprises a correction coefficient generating unit, a coefficient screening unit and a chroma adjusting unit.
Wherein, the correction coefficient generation unit is used for randomly generating a chromaticity correction coefficient;
the coefficient screening unit is used for screening the chromaticity correction coefficient according to the chromaticity range of the target image;
and the chromaticity adjusting unit is used for performing chromaticity adjustment on the source target image by using the chromaticity correction coefficient which passes through the screening to generate a new target image.
Further, the chromaticity adjusting unit is specifically configured to:
if the source target image is in an RGB format, respectively correcting the three color channel values in the source target image by using the filtered chromaticity correction coefficients based on a gamma correction algorithm;
and combining the corrected color channel values to generate a new target image.
Further, the chromaticity adjusting unit is specifically configured to:
if the source target image is in YUV format, respectively adjusting the original U component and the original V component in the source target image by using the color correction coefficient through screening to generate a new U component and a new V component;
and combining the original Y component, the new U component and the new V component in the source target image to generate a new target image.
Further, the apparatus further comprises: and a chromaticity range determining module.
The chromaticity range determining module is configured to determine, before adjusting chromaticity information of the source target image based on a chromaticity range of the target image, the chromaticity range of the target image according to other target images acquired from at least one other model.
Further, the chromaticity range determining module includes: the device comprises a clustering unit, a chromaticity mean value calculating unit and a chromaticity range determining unit.
The clustering unit is used for clustering the other target images based on the chromaticity information;
the chromaticity mean value calculation unit is used for calculating chromaticity mean values of other target images in each category;
and the chromaticity range determining unit is used for determining the chromaticity range of the target image according to the chromaticity mean value of each category.
Further, the apparatus further comprises: and a model training module.
The model training module is used for training an initial analysis model by utilizing the source target image and the generated new target image after generating the new target image, so as to obtain a target analysis model.
The data expansion device provided by the embodiment of the invention can execute the data expansion method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example six
Fig. 8 is a schematic structural diagram of an apparatus according to a sixth embodiment of the present invention. Fig. 8 illustrates a block diagram of an exemplary device 12 suitable for use in implementing embodiments of the present invention. The device 12 shown in fig. 8 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 8, device 12 is in the form of a general purpose computing device. Components of device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. Device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 8, commonly referred to as a "hard disk drive"). Although not shown in fig. 8, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
Device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with device 12, and/or any devices (e.g., network card, modem, etc.) that enable device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Also, device 12 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, via network adapter 20. As shown, network adapter 20 communicates with other modules of device 12 over bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, implementing the data expansion method provided by the embodiment of the present invention.
Example seven
The seventh embodiment of the present invention further provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the data expansion method according to any one of the embodiments of the present invention, the method comprising:
acquiring a source target image from a target machine type, and extracting chromaticity information of the source target image;
and adjusting chromaticity information of the source target image based on the chromaticity range of the target image to generate a new target image.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (10)

1. A data expansion method applied to expanding retina fundus images, comprising:
acquiring a source target image from a target machine type, and extracting chromaticity information of the source target image; wherein the source target image is an image that comprises target content and is used for expanding new target images, and the target content is the fundus retina;
carrying out weighted summation on the values of the color channels in other target images, and determining chromaticity values of the other target images;
collecting other target images with the same or similar chromaticity values into the same category;
calculating the chromaticity mean value of other target images in each category;
comparing the chromaticity mean values of the categories, and determining the maximum value and the minimum value in the chromaticity mean values of the categories according to the comparison result;
taking a chromaticity range formed by the maximum value and the minimum value as a target image chromaticity range;
adjusting chromaticity information of the source target image based on the chromaticity range of the target image to generate a new target image; wherein the target image chromaticity range refers to chromaticity variation ranges of the retinal fundus images in different machine types; the model refers to a fundus camera as a retinal fundus image acquisition device;
wherein the adjusting chromaticity information of the source target image based on the chromaticity range of the target image to generate a new target image includes:
randomly generating a chromaticity correction coefficient; wherein the chromaticity correction coefficient is a parameter for performing chromaticity information adjustment on the source target image, and is generated by a random generator;
determining a minimum correction coefficient for correcting the chromaticity of the source target image to a minimum chromaticity value in the target image chromaticity range and a maximum correction coefficient for correcting the chromaticity of the source target image to a maximum chromaticity value in the target image chromaticity range according to the target image chromaticity range and the chromaticity information of the source target image;
if the chromaticity correction coefficient is larger than or equal to the minimum correction coefficient and smaller than or equal to the maximum correction coefficient, determining that the chromaticity correction coefficient passes screening;
and performing chromaticity adjustment on the source target image by using the chromaticity correction coefficient which passes through the screening to generate a new target image.
2. The method of claim 1, wherein the chromaticity adjusting the source target image using the chromaticity correction coefficients that pass the filtering to generate a new target image comprises:
if the source target image is in an RGB format, respectively correcting the three color channel values in the source target image by using the filtered chromaticity correction coefficients based on a gamma correction algorithm;
and combining the corrected color channel values to generate a new target image.
3. The method of claim 1, wherein the chromaticity adjusting the source target image using the color correction coefficients that pass the filtering to generate a new target image comprises:
if the source target image is in YUV format, respectively adjusting the original U component and the original V component in the source target image by using the color correction coefficient through screening to generate a new U component and a new V component;
and combining the original Y component, the new U component and the new V component in the source target image to generate a new target image.
4. The method of claim 1, wherein the adjusting the chrominance information of the source target image based on the target image chrominance range, after generating a new target image, further comprises:
and training the initial analysis model by utilizing the source target image and the generated new target image to obtain a target analysis model.
5. A data expansion device for expanding a retinal fundus image, comprising:
the chromaticity information extraction module is used for acquiring a source target image from a target model and extracting chromaticity information of the source target image; wherein the source target image is an image that comprises target content and is used for expanding new target images, and the target content is the fundus retina;
the chromaticity information adjusting module is used for adjusting chromaticity information of the source target image based on the chromaticity range of the target image to generate a new target image; the target image chromaticity range refers to chromaticity variation ranges of the target image in different machine types; wherein the model refers to a fundus camera as a retinal fundus image acquisition device;
wherein, the chroma information adjustment module includes:
a correction coefficient generation unit for randomly generating a chromaticity correction coefficient; wherein the chromaticity correction coefficient is a parameter for performing chromaticity information adjustment on the source target image, and is generated by a random generator;
the coefficient screening unit is used for screening the chromaticity correction coefficient according to the chromaticity range of the target image;
a chromaticity adjusting unit, configured to perform chromaticity adjustment on the source target image by using the chromaticity correction coefficient that passes through the screening, and generate a new target image;
the coefficient screening unit is specifically configured to determine, according to the chromaticity range of the target image and chromaticity information of the source target image, a minimum correction coefficient for correcting chromaticity of the source target image to a minimum chromaticity value in the chromaticity range of the target image, and a maximum correction coefficient for correcting chromaticity of the source target image to a maximum chromaticity value in the chromaticity range of the target image; if the chromaticity correction coefficient is larger than or equal to the minimum correction coefficient and smaller than or equal to the maximum correction coefficient, determining that the chromaticity correction coefficient passes screening;
the chromaticity range determining module is used for determining the chromaticity range of the target image according to other target images acquired from at least one other model before the chromaticity information of the source target image is adjusted based on the chromaticity range of the target image;
wherein, the chromaticity range determining module includes:
the clustering unit is used for clustering the other target images based on the chromaticity information;
the chromaticity mean value calculation unit is used for calculating chromaticity mean values of other target images in each category;
a chromaticity range determining unit, configured to determine a chromaticity range of the target image according to chromaticity average values of the respective classes;
the clustering unit is specifically configured to perform weighted summation on values of color channels in other target images, and determine chromaticity values of the other target images; collecting other target images with the same or similar chromaticity values into the same category;
the chromaticity range determining unit is specifically configured to compare chromaticity average values of the categories, and determine a maximum value and a minimum value in the chromaticity average values of the categories according to a comparison result; and taking a chromaticity range formed by the maximum value and the minimum value as a target image chromaticity range.
6. The device according to claim 5, wherein the chromaticity adjusting unit is specifically configured to:
if the source target image is in an RGB format, respectively correcting the three color channel values in the source target image by using the filtered chromaticity correction coefficients based on a gamma correction algorithm;
and combining the corrected color channel values to generate a new target image.
7. The device according to claim 5, wherein the chromaticity adjusting unit is specifically configured to:
if the source target image is in YUV format, respectively adjusting the original U component and the original V component in the source target image by using the color correction coefficient through screening to generate a new U component and a new V component;
and combining the original Y component, the new U component and the new V component in the source target image to generate a new target image.
8. The apparatus of claim 5, wherein the apparatus further comprises:
and the model training module is used for training the initial analysis model by utilizing the source target image and the generated new target image after generating the new target image by adjusting the chromaticity information of the source target image based on the chromaticity range of the target image so as to obtain a target analysis model.
9. An electronic device, the device comprising:
one or more processors;
storage means for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the data expansion method of any of claims 1-4.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the data expansion method according to any of claims 1-4.
CN201910683255.5A 2019-07-26 2019-07-26 Data expansion method, device, equipment and medium Active CN110428377B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910683255.5A CN110428377B (en) 2019-07-26 2019-07-26 Data expansion method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910683255.5A CN110428377B (en) 2019-07-26 2019-07-26 Data expansion method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN110428377A CN110428377A (en) 2019-11-08
CN110428377B true CN110428377B (en) 2023-06-30

Family

ID=68412754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910683255.5A Active CN110428377B (en) 2019-07-26 2019-07-26 Data expansion method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN110428377B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112710996B (en) * 2020-12-08 2022-08-23 中国人民解放军海军航空大学 Radar micro-motion target identification data set expansion method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002027266A (en) * 2000-07-04 2002-01-25 Ricoh Co Ltd Image processing system and image processing method and recording medium
JP2012088551A (en) * 2010-10-20 2012-05-10 Mitsubishi Electric Corp Color correction processing unit, color correction processing method and multi-display device
CN109523485A (en) * 2018-11-19 2019-03-26 Oppo广东移动通信有限公司 Image color correction method, device, storage medium and mobile terminal
CN109671036A (en) * 2018-12-26 2019-04-23 上海联影医疗科技有限公司 A kind of method for correcting image, device, computer equipment and storage medium
CN109903256A (en) * 2019-03-07 2019-06-18 京东方科技集团股份有限公司 Model training method, chromatic aberration calibrating method, device, medium and electronic equipment

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4189328B2 (en) * 2004-01-16 2008-12-03 セイコーエプソン株式会社 Image processing apparatus, image display apparatus, image processing method, and image processing program
US7936919B2 (en) * 2005-01-18 2011-05-03 Fujifilm Corporation Correction of color balance of face images depending upon whether image is color or monochrome
JP2007208413A (en) * 2006-01-31 2007-08-16 Olympus Corp Color correction apparatus, color correction method, and color correction program
JP6282156B2 (en) * 2014-03-31 2018-02-21 キヤノン株式会社 Image device, image processing method, control program, and storage medium
CN104586397B (en) * 2015-01-26 2017-01-18 北京工业大学 Traditional Chinese medicine tongue color classification perception quantification method combined by equal sense distance method and cluster analysis
CN105488509A (en) * 2015-11-19 2016-04-13 Tcl集团股份有限公司 Image clustering method and system based on local chromatic features
CN106920218B (en) * 2015-12-25 2019-09-10 展讯通信(上海)有限公司 A kind of method and device of image procossing
CN108230233A (en) * 2017-05-16 2018-06-29 北京市商汤科技开发有限公司 Data enhancing, treating method and apparatus, electronic equipment and computer storage media
CN107798661B (en) * 2017-10-17 2020-04-28 华南理工大学 Self-adaptive image enhancement method
CN107958470A (en) * 2017-12-18 2018-04-24 维沃移动通信有限公司 A kind of color correcting method, mobile terminal
CN109345469B (en) * 2018-09-07 2021-10-22 苏州大学 Speckle denoising method in OCT imaging based on condition generation countermeasure network
CN109902717A (en) * 2019-01-23 2019-06-18 平安科技(深圳)有限公司 Lesion automatic identifying method, device and computer readable storage medium
CN110009574B (en) * 2019-02-13 2023-01-17 中山大学 Method for reversely generating high dynamic range image from low dynamic range image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002027266A (en) * 2000-07-04 2002-01-25 Ricoh Co Ltd Image processing system and image processing method and recording medium
JP2012088551A (en) * 2010-10-20 2012-05-10 Mitsubishi Electric Corp Color correction processing unit, color correction processing method and multi-display device
CN109523485A (en) * 2018-11-19 2019-03-26 Oppo广东移动通信有限公司 Image color correction method, device, storage medium and mobile terminal
CN109671036A (en) * 2018-12-26 2019-04-23 上海联影医疗科技有限公司 A kind of method for correcting image, device, computer equipment and storage medium
CN109903256A (en) * 2019-03-07 2019-06-18 京东方科技集团股份有限公司 Model training method, chromatic aberration calibrating method, device, medium and electronic equipment

Also Published As

Publication number Publication date
CN110428377A (en) 2019-11-08

Similar Documents

Publication Publication Date Title
CN110428475B (en) Medical image classification method, model training method and server
JP7058373B2 (en) Lesion detection and positioning methods, devices, devices, and storage media for medical images
US20190279402A1 (en) Methods and Systems for Human Imperceptible Computerized Color Transfer
AU2019275232A1 (en) Multi-sample whole slide image processing via multi-resolution registration
EP3755204A1 (en) Eye tracking method and system
Tennakoon et al. Image quality classification for DR screening using convolutional neural networks
CN111753908A (en) Image classification method and device and style migration model training method and device
WO2019015477A1 (en) Image correction method, computer readable storage medium and computer device
WO2021174821A1 (en) Fundus color photo image grading method and apparatus, computer device, and storage medium
CN102138157A (en) Color constancy method and system
CN111985281A (en) Image generation model generation method and device and image generation method and device
WO2019120025A1 (en) Photograph adjustment method and apparatus, storage medium and electronic device
CN110473176B (en) Image processing method and device, fundus image processing method and electronic equipment
WO2023284236A1 (en) Blind image denoising method and apparatus, electronic device, and storage medium
CN110428377B (en) Data expansion method, device, equipment and medium
JP2021528767A (en) Visual search methods, devices, computer equipment and storage media
CN112288697B (en) Method, apparatus, electronic device and readable storage medium for quantifying degree of abnormality
WO2020087434A1 (en) Method and device for evaluating resolution of face image
CN117152507B (en) Tooth health state detection method, device, equipment and storage medium
CN113724282A (en) Image processing method and related product
CN107392870A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN112700396A (en) Illumination evaluation method and device for face picture, computing equipment and storage medium
CN113034449B (en) Target detection model training method and device and communication equipment
CN114972065A (en) Training method and system of color difference correction model, electronic equipment and mobile equipment
Florea et al. Avoiding the deconvolution: Framework oriented color transfer for enhancing low-light images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210419

Address after: 100000 Room 220, 2nd Floor, Building 4, No. 1, Shangdi East Road, Haidian District, Beijing

Applicant after: Beijing Confucius Health Technology Co.,Ltd.

Address before: 100085 Beijing, Haidian District, No. ten on the ground floor, No. 10 Baidu building, layer 2

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant