CN111739008A - Image processing method, device, equipment and readable storage medium - Google Patents


Info

Publication number
CN111739008A
Authority
CN
China
Prior art keywords
channel
blood vessel
vessel region
original image
map
Prior art date
Legal status
Granted
Application number
CN202010579769.9A
Other languages
Chinese (zh)
Other versions
CN111739008B (en)
Inventor
孙旭
杨叶辉
王磊
许言午
黄艳
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010579769.9A
Publication of CN111739008A
Application granted
Publication of CN111739008B
Legal status: Active

Classifications

    • G06 COMPUTING; CALCULATING OR COUNTING (all classifications below fall under G PHYSICS)
    • G06T7/0012 Biomedical image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06N20/00 Machine learning
    • G06T7/11 Region-based segmentation (G06T7/10 Segmentation; Edge detection)
    • G06T7/90 Determination of colour characteristics
    • G06T2207/20081 Training; Learning (G06T2207/20 Special algorithmic details)
    • G06T2207/30041 Eye; Retina; Ophthalmic (G06T2207/30004 Biomedical image processing)
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular (G06T2207/30004 Biomedical image processing)

Abstract

The application discloses an image processing method, an image processing device, image processing equipment and a readable storage medium, and belongs to the technical field of image processing in artificial intelligence. The specific implementation scheme is as follows: the electronic equipment obtains a blood vessel probability map by using the original image, continues to generate a plurality of expansion samples by using the blood vessel probability map and the original image, trains a machine model by using the expansion samples, and further processes the target image by using the machine model. In the process, the electronic equipment generates a plurality of expansion samples based on the blood vessel probability map and the original image, the expansion samples are added into the training set to train the machine model, new samples do not need to be collected and labeled again, and a large amount of labor cost and time cost are saved.

Description

Image processing method, device, equipment and readable storage medium
Technical Field
The embodiment of the application relates to the technical field of image processing in artificial intelligence, in particular to an image processing method, an image processing device, image processing equipment and a readable storage medium.
Background
Fundus diseases are eye diseases caused by damage to the retina, ocular blood vessels or optic nerves, and have become one of the main causes of blindness. Physical and pathological changes in the blood vessels of the eye are associated with a variety of ocular diseases, such as glaucoma and hypertensive retinopathy.
Typically, a machine model is used to analyze the fundus image. In the machine model training process, a large number of fundus images are collected and labeled to obtain training data, and a high-performance machine model is then obtained through methods such as machine learning. For example, a large amount of training data with blood vessel position labels is collected, and a blood vessel segmentation model is trained. A blood vessel region probability map can subsequently be obtained by using the blood vessel segmentation model, so as to assist the extraction of other fundus physiological structures and the screening of various eye diseases. However, the process of collecting and labeling fundus images is time-consuming, and large-scale labeling is difficult in practice, which brings great time and expense cost to the construction of an artificial intelligence system. In addition, due to the influence of factors such as shooting illumination conditions and fundus blood vessel abnormality, the local appearance of the fundus can change obviously, for example through the occurrence of blood vessel white sheaths. If such diversity of local characterization is not considered when constructing the training samples, the robustness of the trained machine model is poor, and it is difficult to apply the model in real scenes. To address this, common practice is to fully consider the diversity of local representations of the fundus image when the training data is labeled manually, covering as much sample data as possible.
However, the difficulty of considering the local characterization of the fundus during the construction of the data set is very large, a large amount of capital investment is needed for the original sample screening, and the time consumption and the cost are high.
Disclosure of Invention
The application provides an image processing method, an image processing device, an image processing apparatus and a readable storage medium, wherein a plurality of expansion samples are obtained based on an original fundus image, the expansion samples are added into a training set to train a machine model, new samples do not need to be collected and labeled again, and a large amount of labor cost and time cost are saved.
In a first aspect, an embodiment of the present application provides an image processing method, including: obtaining a blood vessel region probability map by using an original image, wherein the blood vessel region probability map is used for indicating the probability that each pixel point in the original image is a blood vessel pixel point, generating an extended sample by using the blood vessel region probability map and the original image, and processing a target image by using a machine model trained by the extended sample.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the system comprises an acquisition module, a generation module and a processing module, wherein the acquisition module is used for acquiring a blood vessel region probability map by using an original image, the blood vessel region probability map is used for indicating the probability that each pixel point in the original image is a blood vessel pixel point, the generation module is used for generating an extended sample by using the blood vessel region probability map and the original image, and the processing module is used for processing a target image by using a machine model trained by the extended sample.
In a third aspect, an embodiment of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the first aspect or any possible implementation of the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product containing instructions which, when run on an electronic device, cause the electronic device to perform the method of the first aspect or the various possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium storing computer instructions for causing an electronic device to perform the method of the first aspect or the various possible implementations of the first aspect.
In a sixth aspect, an embodiment of the present application provides a model training method, including: obtaining a blood vessel region probability map by using an original image, wherein the blood vessel region probability map is used for indicating the probability that each pixel point in the original image is a blood vessel pixel point, generating an extended sample by using the blood vessel region probability map and the original image, and training a machine model by using the extended sample.
One embodiment in the above application has the following advantages or benefits: the electronic equipment generates a plurality of expansion samples based on the blood vessel probability graph and the original image, the expansion samples are added into the training set to train the machine model, new samples do not need to be collected and labeled again, and a large amount of labor cost and time cost are saved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic network architecture diagram of an image processing method according to an embodiment of the present application;
fig. 2 is a flowchart of an image processing method provided in an embodiment of the present application;
fig. 3 is an original map and a blood vessel region probability map in the image processing method provided in the embodiment of the present application;
FIG. 4 is a flow chart of another image processing method provided by the embodiment of the application;
FIG. 5A is a schematic diagram of an extended sample in an image processing method according to an embodiment of the present application;
fig. 5B is a schematic diagram of a fundus white sheath image in the image processing method provided by the embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 7 is a block diagram of an electronic device for implementing an image processing method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
At present, with the rapid development of Artificial Intelligence (AI), machine models are widely applied in the technical field of image processing such as fundus image analysis. In order to train a machine model with universality and robustness, a common practice is to collect a large number of fundus images and mark the fundus images so as to obtain training data, and then obtain a machine model with higher performance by a machine learning method and the like. However, marking blood vessels and the like in the fundus image often consumes a large amount of labor cost and time cost, resulting in difficulty in large-scale labeling in practice. In addition, due to the influence of shooting illumination and the like, local external characteristics of blood vessels in the fundus map can be obviously changed, so that the diversity of the local characteristics of the fundus map is required to be fully considered when a machine model is trained, and training data of various local characteristics are collected as far as possible.
However, the difficulty of considering the local characterization of the fundus during the construction of the data set is very large, a large amount of capital investment is needed for the original sample screening, and the time consumption and the cost are high.
In view of this, embodiments of the present application provide an image processing method, an apparatus, a device, and a readable storage medium, which obtain a plurality of extended samples based on an original fundus image, and add the extended samples to a training set to train a machine model, so that new samples do not need to be collected and labeled again, thereby saving a lot of labor cost and time cost.
First, terms related to the embodiments of the present application will be explained.
Vessel segmentation: the blood vessel in the original image is distinguished from the background, for example, each pixel point in the original image is distinguished at a pixel level to determine a blood vessel pixel point.
Blood vessel region probability map: contains probability values in one-to-one correspondence with the pixel points in the original image, each representing the probability that the corresponding pixel point belongs to a blood vessel (namely, is a blood vessel pixel point).
Vessel region attention map: each pixel contains three color channels: a red (R) channel, a green (G) channel and a blue (B) channel, and each color channel corresponds to its own blood vessel region attention map. For a specific color channel, the blood vessel region attention map indicates the variation amplitude of that channel's value at each pixel point of the original image.
Fig. 1 is a schematic network architecture diagram of an image processing method according to an embodiment of the present application. Referring to fig. 1, the network architecture includes a server 1, a terminal device 2 and a camera 3, the server 1 and the terminal device 2 are connected, and the terminal device 2 is located in various institutions with medical attributes, such as hospitals and medical research institutes, and is capable of acquiring a plurality of original images. For example, the image capturing device on the terminal device 2 captures a user or the like to obtain an original image. As another example, the terminal device 2 captures an original image by the camera 3. The camera 3 is, for example, a fundus camera or the like.
When the terminal device 2 executes the image processing method according to the embodiment of the present application, the terminal device 2 obtains the blood vessel probability map based on the original image, then generates a plurality of extended samples based on the blood vessel probability map and the original image, and finally trains a machine model by using the extended samples.
When the server 1 executes the image processing method according to the embodiment of the present application, the terminal device 2 sends the original image to the server 1 and calls the model training function provided by the server 1; the server 1 obtains the blood vessel probability map based on the original image and generates a plurality of extended samples based on the blood vessel probability map and the original image. Finally, the server 1 trains a machine model by using the extended samples and sends the machine model to the terminal device 2. The server 1 is, for example, a cluster server, an independently deployed server, or the like; the embodiment of the present application is not limited in this respect.
Next, the image processing method according to the embodiment of the present application will be described in detail based on the above-mentioned noun explanation and the network architecture shown in fig. 1. For example, see fig. 2.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present application, where an execution subject of the present application is an electronic device, and the electronic device is, for example, the server or the terminal device in fig. 1. The embodiment comprises the following steps:
101. obtaining a blood vessel region probability map by using an original image, wherein the blood vessel region probability map is used for indicating the probability that each pixel point in the original image is a blood vessel pixel point.
Illustratively, the original image is, for example, a fundus image obtained by photographing the fundus of the user with a fundus camera, such as a hypertensive fundus image, a glaucoma fundus image, or the like. The electronic device acquires an original image by an image acquisition device such as a fundus camera. Then, the electronic device performs pixel-based blood vessel segmentation on the original image, so as to obtain a blood vessel region probability map. For example, please refer to fig. 3, fig. 3 is an original graph and a blood vessel region probability graph in the image processing method according to the embodiment of the present application.
Referring to fig. 3, the left side is an original image captured by a fundus camera, and the right side is a blood vessel region probability map obtained by performing blood vessel segmentation on the original image. In practical implementation, the electronic device determines the probability that each pixel point in the original image is a blood vessel pixel point by using an unsupervised machine learning method, such as a morphological transformation method, so as to obtain the blood vessel region probability map. As can be seen from fig. 3, the blood vessel region probability map reflects the probability that each pixel point in the original image is a blood vessel pixel point; the higher the probability, the whiter the pixel point, so the pixel points outside the white regions are not blood vessel pixel points.
102. And generating an expanded sample by using the blood vessel region probability map and the original image.
In the embodiment of the application, probability values corresponding to pixel points in an original image one to one are stored in a blood vessel region probability map, each probability value is between 0 and 1, if the probability of a certain pixel point is 0, the pixel point is not a blood vessel pixel point, and if the probability of a certain pixel point is 1, the pixel point is a blood vessel pixel point. The electronic equipment randomly changes the probability values, and a new image can be generated according to the changed probability values and the original image, wherein the new image is an expansion sample.
For example, an arbitrary pixel point A in the original image is denoted as A(R, G, B), and P denotes the probability that pixel point A is a blood vessel pixel point. The electronic device generates a random coefficient and multiplies the probability P by the random coefficient to obtain the changed probability P'. Then, the gray value of the corresponding pixel point in the original image is corrected by using the probability P' to obtain a new pixel point A', and all the new pixel points A' form an extended sample.
During the correction process, the electronic device corrects all or part of the red (R), green (G) and blue (B) channels of pixel point A. For example, for pixel point A(255, 189, 213), where P is 0.9 and the random coefficient is 0.67, P' is approximately 0.6; if the electronic device corrects only the R channel in the original image, it obtains pixel point A'(153, 189, 213). After the electronic device corrects each pixel point that may belong to a blood vessel region in the original image, the corrected pixel points and the remaining uncorrected pixel points form a new image, and the new image is an extended sample.
In the process of generating the extended samples, the electronic device continuously changes the random coefficients, and different extended samples can be obtained by using different random coefficients. That is, after acquiring the blood vessel region probability map based on an original image, the electronic device can obtain a plurality of extended samples, such as 200 samples, 1000 samples, or more samples, based on the original image and the blood vessel region probability map.
The above correction process does not simply perform image inversion, image rotation, contrast adjustment, exposure adjustment, Gaussian noise superposition, or the like on the entire original image. Instead, only the pixel points whose probability in the blood vessel region probability map is greater than 0 are multiplied by the random coefficient; pixel points with probability 0 are left unchanged. Therefore, the correction process modifies only a partial region of the original image, that is, the region that may be a blood vessel. In this way, the brightness, contrast, color, etc. of the blood vessel region in the extended sample are changed compared with the original image, while the brightness, contrast, color, etc. of the non-blood-vessel region are unchanged.
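For illustration only, the sample-expansion step described above can be sketched in Python with NumPy (the function name, parameters and the exact correction rule are assumptions inferred from the example of pixel point A; the application itself does not prescribe an implementation):

```python
import numpy as np

def expand_sample(original, prob_map, coeff, channel=0):
    """Generate one extended sample (illustrative sketch of step 102).

    original: H x W x 3 uint8 array holding R, G, B channel values
    prob_map: H x W float array in [0, 1], the vessel-region probability map
    coeff:    random coefficient in [0, 1], drawn anew for every sample
    channel:  color channel to correct (0 = R, as in the example above)
    """
    sample = original.astype(np.float64)   # astype also copies the image
    changed = prob_map * coeff             # the changed probability P'
    mask = prob_map > 0                    # non-vessel pixels stay unchanged
    # One reading of the example A(255,189,213), P=0.9, coeff=0.67 -> A'(153,...):
    # the channel value is scaled by P' at vessel-region pixels only.
    sample[..., channel][mask] *= changed[mask]
    return np.clip(np.round(sample), 0, 255).astype(np.uint8)
```

Calling this with a fresh `coeff` per iteration yields an arbitrary number of extended samples from a single labeled original.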
103. And processing a target image by using the machine model trained by the extended sample.
Illustratively, the electronic device treats the extended samples as samples in a sample set; the sample set further comprises the original image and the like. Then, the electronic device trains an initial model according to the samples in the sample set and continuously optimizes the parameters of the initial model to obtain a trained machine model. In the subsequent image processing process, the electronic device processes the target image by using the machine model.
According to the image processing method provided by the embodiment of the application, the electronic equipment obtains the blood vessel probability map by using the original image, continues to generate a plurality of expansion samples by using the blood vessel probability map and the original image, trains a machine model by using the expansion samples, and further processes the target image by using the machine model. In the process, the electronic equipment generates a plurality of expansion samples based on the blood vessel probability map and the original image, the expansion samples are added into the training set to train the machine model, new samples do not need to be collected and labeled again, and a large amount of labor cost and time cost are saved.
The following describes in detail an image processing method according to an embodiment of the present application, taking an original image as an example, specifically, an original fundus oculi image.
Fig. 4 is a flowchart of another image processing method provided in an embodiment of the present application. The embodiment comprises the following steps:
201. the electronic device acquires an original image.
Illustratively, the electronic device acquires, by a fundus camera or the like, an original image represented as an h × w × c gray value matrix, as shown in the left image in fig. 3, wherein h represents the height of the original image, w represents the width of the original image, and c represents the number of image channels, including the three color channels R, G and B. Each element in the gray value matrix is calculated by the electronic device from the R, G and B channel values of the corresponding pixel point of the original image.
202. The electronic equipment acquires a blood vessel region probability map by using the original image.
Illustratively, when the original image is a hypertensive fundus image, the electronic device obtains a blood vessel region probability map M of the original image, which is shown in the right graph in fig. 3, by using an unsupervised machine learning method, such as a morphological transformation method. The blood vessel region probability map is a two-dimensional image with the same height and width as the original image, and is represented as a numerical matrix of h multiplied by w, the value range of each element in the matrix is [0, 1], and the larger the numerical value is, the larger the probability that the corresponding pixel point is the blood vessel pixel point is. In addition, when the original image is a fundus image such as glaucoma, the electronic device can acquire a blood vessel region probability map by using a supervision method.
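As a sketch of one possible "morphological transformation" approach to step 202 (the application names the technique but not a specific operator; the black top-hat on the green channel used here, and all names, are assumptions):

```python
import numpy as np
from scipy import ndimage

def vessel_probability_map(fundus_rgb, size=7):
    """Unsupervised vessel-region probability map via a morphological
    black top-hat (illustrative; the patent does not fix the operator).

    fundus_rgb: H x W x 3 uint8 fundus image
    Returns an H x W float matrix with values in [0, 1].
    """
    green = fundus_rgb[..., 1].astype(np.float64)    # vessels contrast best in G
    closed = ndimage.grey_closing(green, size=size)  # fill in thin dark vessels
    tophat = closed - green                          # bright where vessels were
    if tophat.max() > 0:
        tophat = tophat / tophat.max()               # normalise to [0, 1]
    return tophat
```

The result matches the description above: an h × w matrix in [0, 1], larger where a pixel is more likely to be a blood vessel pixel point.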
203. The electronic equipment determines a blood vessel region attention map corresponding to the color channel by using the blood vessel region probability map.
The color channel comprises at least one of an R channel, a G channel or a B channel, and the blood vessel region attention map is used for indicating the change amplitude of the pixel point channel value of the original image.
Illustratively, the electronic device generates a three-dimensional numerical vector γ = {γ_R, γ_G, γ_B} using a random coefficient generator, where γ_R, γ_G and γ_B respectively represent the random coefficients (also referred to as random attenuation coefficients) of the R channel, the G channel and the B channel. The value of each random coefficient lies in [0, 1]; the larger the random coefficient, the larger the attenuation amplitude.
After determining the three-dimensional numerical vector, the electronic device multiplies the three random coefficients respectively by the blood vessel region probability map M obtained in step 202, thereby obtaining the blood vessel region attention map M_R corresponding to the R channel, the blood vessel region attention map M_G corresponding to the G channel, and the blood vessel region attention map M_B corresponding to the B channel. Here, multiplying a random coefficient by the blood vessel region probability map M means multiplying the coefficient by each element of the h × w numerical matrix corresponding to M.
From the above, it can be seen that the blood vessel region attention map is in fact a numerical matrix of size h × w. Elements of this matrix may be smaller than the corresponding elements of the blood vessel region probability map; the corresponding elements of the two matrices are equal only when the random coefficient is 1.
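Step 203 can be sketched as follows (the function name and the use of NumPy's random generator are illustrative, not part of the application):

```python
import numpy as np

def vessel_attention_maps(prob_map, rng):
    """Per-channel vessel-region attention maps M_i = gamma_i * M.

    prob_map: h x w blood vessel region probability matrix M, values in [0, 1]
    rng:      a NumPy random Generator supplying the random coefficients
    Returns {'R': M_R, 'G': M_G, 'B': M_B}.
    """
    # gamma = {gamma_R, gamma_G, gamma_B}: random attenuation coefficients in [0, 1]
    gamma = {c: rng.uniform(0.0, 1.0) for c in 'RGB'}
    # Each attention map multiplies every element of M by the channel's coefficient.
    return {c: gamma[c] * prob_map for c in 'RGB'}
```

Because each γ_i lies in [0, 1], no element of an attention map can exceed the corresponding element of M, as noted above.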
The electronic device then generates an extended sample by using at least one blood vessel region attention map and the original image. For example, the electronic device changes one or more of the R channel, G channel or B channel values of the pixel points in the original image according to the blood vessel region attention map, and takes the changed pixel points as the pixel points of the extended sample.
It will be appreciated that the brightness, contrast, exposure, etc. of an image are related to the R, G and B channel values of its pixel points. Thus, the electronic device can change the brightness, contrast, etc. of the blood vessel region in the original image by changing one or more of the R, G or B channel values of the pixel points. By adopting this scheme, the electronic device can obtain diversified extended samples that differ from the original image only by local transformations, and these diversified samples can then be used to train a machine model with better robustness.
Optionally, in the above embodiment, when the electronic device generates an extended sample by using the blood vessel region attention map and the original image, the electronic device adjusts the gray level of a pixel point in the original image by using the blood vessel region attention map to obtain the extended sample. For example, see step 204 in FIG. 4, etc.
204. And the electronic equipment transforms the gray value matrix of the original image according to the blood vessel region attention map.
Illustratively, the electronic device transforms values of the R channel, the G channel, and the B channel of a pixel point of the blood vessel region of the original image according to the blood vessel region attention maps corresponding to the R channel, the G channel, and the B channel, respectively, to obtain new values of the R channel, the G channel, and the B channel.
By adopting the scheme, the gray value of the pixel point of the blood vessel region in the original image is adjusted, so that a new expansion sample is generated, and a large amount of labor cost and time cost are saved.
205. And obtaining an extended sample according to the new values of the R channel, the G channel and the B channel.
In the above steps 204 and 205, the electronic device adjusts the corresponding channel values of the pixel points in the original image according to the blood vessel region attention map of each color channel. For example, the electronic device determines a first variation amplitude of the R channel value of a pixel point in the original image according to the R-channel blood vessel region attention map, and adjusts the R channel value of the pixel point according to the first variation amplitude; determines a second variation amplitude of the G channel value of the pixel point according to the G-channel blood vessel region attention map, and adjusts the G channel value according to the second variation amplitude; and determines a third variation amplitude of the B channel value of the pixel point according to the B-channel blood vessel region attention map, and adjusts the B channel value according to the third variation amplitude. Then, the electronic device determines the gray level of the pixel points in the original image according to the adjusted R, G and B channel values, so as to obtain the extended sample.
Illustratively, the electronic device transforms the channel values according to the following formula:

V'_i = V_i × (1 − M_i) + a × M_i

wherein i ∈ {R, G, B} represents the corresponding color channel, a ∈ [0, 255] represents a randomly selected correction gray value, M_i represents the attention value of the pixel point in the blood vessel region attention map corresponding to channel i, V_i represents the channel value before the transformation, and V'_i represents the transformed channel value.
Taking a = 200 and the pixel point A(255, 185, 210) in the original image as an example: if the attention value of pixel point A in the R-channel blood vessel region attention map M_R is 0.6, the transformed R channel value is 255 × (1 − 0.6) + 200 × 0.6 = 222; if the attention value of pixel point A in the G-channel blood vessel region attention map M_G is 0.6, the transformed G channel value is 185 × (1 − 0.6) + 200 × 0.6 = 194; and if the attention value of pixel point A in the B-channel blood vessel region attention map M_B is 0.4, the transformed B channel value is 210 × (1 − 0.4) + 200 × 0.4 = 206.
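The transformation and the worked example above can be sketched in Python as follows; this is a minimal illustration, and the function name and array layout are assumptions rather than part of the disclosure:

```python
import numpy as np

def transform_channel(v, m, a):
    """Blend an original channel value v toward a correction gray value a,
    weighted by the vessel-region attention value m in [0, 1]."""
    return v * (1.0 - m) + a * m

# Worked example from the text: a = 200, pixel point A = (255, 185, 210),
# attention values M_R = 0.6, M_G = 0.6, M_B = 0.4.
a = 200
pixel = np.array([255, 185, 210], dtype=np.float64)
attention = np.array([0.6, 0.6, 0.4])

transformed = transform_channel(pixel, attention, a)
print(transformed)  # approximately [222. 194. 206.]
```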
In the above embodiment, the extended samples can be generated by using different random coefficients for different color channels of an original image, each set of random coefficients corresponds to one extended sample, and a plurality of extended samples can be obtained by using a plurality of sets of random numbers. For example, referring to fig. 5A, fig. 5A is a schematic diagram of an extended sample in an image processing method provided in an embodiment of the present application.
Referring to fig. 5A, assuming that the original image is the left image in fig. 3, 3 extended samples are obtained according to the original image, and only the brightness, contrast, etc. of the blood vessel region in the 3 extended samples are changed. Wherein (a) represents fundus images with whitish blood vessels, similar to the appearance of white sheath of fundus, and (b) and (c) are two other fundus images with different visual effects.
Fig. 5B is a schematic diagram of a fundus white sheath image in the image processing method provided in the embodiment of the present application. Referring to fig. 5B, the image is a real image captured by a fundus camera, in which the blood vessels appear whitish. In the subsequent process of training the model, even if no image such as that shown in fig. 5B is available, the electronic device can generate a similar image from the image on the left side of fig. 3, namely (a) in fig. 5A.
By adopting the scheme, the electronic equipment adjusts the gray scale of the pixel point to automatically generate the expansion sample by adjusting at least one of the R channel value, the G channel value or the B channel value of the pixel point of the blood vessel region in the original image, thereby saving a large amount of labor cost and time cost.
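The per-channel adjustment described in steps 204 and 205 can be repeated with freshly drawn random values to produce several extended samples. The sketch below assumes, for illustration only, that the attention map is the probability map scaled by a per-channel random coefficient and that the correction gray value a is drawn uniformly from [0, 255]:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_extended_samples(image, vessel_prob, n_samples=3):
    """image: H x W x 3 RGB array; vessel_prob: H x W map in [0, 1].
    Returns n_samples images whose vessel-region gray values differ."""
    samples = []
    for _ in range(n_samples):
        out = image.astype(np.float64)
        for c in range(3):  # R, G, B channels
            a = rng.uniform(0, 255)  # random correction gray value
            r = rng.uniform(0, 1)    # per-channel random coefficient
            m = r * vessel_prob      # assumed attention map
            out[..., c] = out[..., c] * (1.0 - m) + a * m
        samples.append(np.clip(out, 0, 255).astype(np.uint8))
    return samples
```

Each call of the inner loop draws new random values, so every returned image is a distinct extended sample; pixels where `vessel_prob` is 0 are left unchanged, which matches the scheme of only altering the blood vessel region.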
206. Training a machine model using the extended samples.
Illustratively, the electronic device uses the new images obtained by the expansion, i.e., the extended samples, for subsequent machine model training, such as training of a retinal fundus map model, to improve the performance and robustness of the machine model.
207. The target image is processed using the machine model.
When the electronic equipment processes a target image by using the machine model trained by the extended sample, the target image is input into the machine model, and the machine model is used for acquiring a blood vessel segmentation map of the target image. For example, if the target image is a hypertensive fundus image, the machine model processes the hypertensive fundus image to output a vessel segmentation map or the like. The electronic equipment can further analyze the blood vessel shape, the artery-vein ratio and the like according to the blood vessel segmentation map.
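Step 207 can be sketched as follows; here `model` stands for any trained segmentation network exposing a `predict` method, and the 0.5 binarization threshold is an assumption rather than something specified in the text:

```python
import numpy as np

def segment_vessels(model, target_image, threshold=0.5):
    """Run the trained machine model on a target image and binarize its
    per-pixel vessel probability output into a segmentation map."""
    prob_map = model.predict(target_image)           # H x W values in [0, 1]
    return (prob_map >= threshold).astype(np.uint8)  # 1 = vessel pixel
```

Downstream analysis such as blood vessel shape or artery-vein ratio can then be computed from the returned binary segmentation map.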
By adopting the scheme, diversified samples are used in the process of training the machine model, so that the trained machine model has universality and robustness, and the image processing quality can be improved.
While the above describes a specific implementation of the image processing method according to the embodiment of the present application, the following is an embodiment of the apparatus according to the present application, and may be used to implement the embodiment of the method according to the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. The apparatus may be integrated in or implemented by a server. As shown in fig. 6, in the present embodiment, the image processing apparatus 100 may include:
an obtaining module 11, configured to obtain a blood vessel region probability map by using an original image, where the blood vessel region probability map is used to indicate a probability that each pixel point in the original image is a blood vessel pixel point.
A generating module 12, configured to generate an extended sample by using the blood vessel region probability map and the original image.
And the processing module 13 is configured to process the target image by using the machine model trained by the extended sample.
In a feasible design, the generating module 12 is specifically configured to determine a blood vessel region attention map corresponding to a color channel by using the blood vessel region probability map, where the color channel includes at least one of a red R channel, a green G channel, or a blue B channel, and the blood vessel region attention map is used to indicate a variation amplitude of a pixel point channel value of the original image; and generating an expanded sample by using the blood vessel region attention image and the original image.
In a possible design, when the generation module 12 generates an extended sample by using the blood vessel region attention map and the original image, the generation module is configured to adjust the gray scale of a pixel point in the original image by using the blood vessel region attention map to obtain the extended sample.
In a feasible design, the blood vessel region attention map includes an R-channel blood vessel region attention map corresponding to the R channel, a G-channel blood vessel region attention map corresponding to the G channel, and a B-channel blood vessel region attention map corresponding to the B channel. The generation module 12 is configured to: determine a first variation amplitude of the R channel value of a pixel point in the original image according to the R-channel blood vessel region attention map, and adjust the R channel value of the pixel point according to the first variation amplitude; determine a second variation amplitude of the G channel value of the pixel point according to the G-channel blood vessel region attention map, and adjust the G channel value of the pixel point according to the second variation amplitude; determine a third variation amplitude of the B channel value of the pixel point according to the B-channel blood vessel region attention map, and adjust the B channel value of the pixel point according to the third variation amplitude; and determine the gray level of the pixel points in the original image according to the adjusted R channel value, G channel value and B channel value, to obtain the extended sample.
In a possible design, the generating module 12 is configured to determine a random coefficient when determining a blood vessel region attention map corresponding to a color channel by using the blood vessel region probability map; and determining a blood vessel region attention map corresponding to the color channel by using the random coefficient and the blood vessel region probability map.
In one possible design, the random coefficients of the R channel, the G channel, or the B channel are different.
In a possible design, the processing module 13 is configured to train a machine model using the extended samples, input the target image into the machine model, and obtain a vessel segmentation map of the target image using the machine model.
The image processing apparatus provided in the embodiment of the present application may be used in the method executed by the electronic device in the above embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 7 is a block diagram of an electronic device for implementing an image processing method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the electronic apparatus includes: one or more processors 21, memory 22, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, as desired, along with multiple memories. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 21 is taken as an example.
Memory 22 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the image processing method provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the image processing method provided by the present application.
The memory 22, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules (for example, the acquisition module 11, the generation module 12, and the processing module 13 shown in fig. 6) corresponding to the image processing method in the embodiment of the present application. The processor 21 executes various functional applications of the server and data processing, i.e., implements the image processing method in the above-described method embodiment, by executing non-transitory software programs, instructions, and modules stored in the memory 22.
The memory 22 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area stores data and the like created by the electronic device in the course of executing the image processing method. Further, the memory 22 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 22 optionally includes memory located remotely from processor 21, which may be connected to image processing electronics via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the image processing method may further include: an input device 23 and an output device 24. The processor 21, the memory 22, the input device 23 and the output device 24 may be connected by a bus or other means, and fig. 7 illustrates the connection by a bus as an example.
The input device 23 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the image processing electronics, such as a touch screen, keypad, mouse, track pad, touch pad, pointer stick, one or more mouse buttons, track ball, joystick, or other input device. The output devices 24 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The embodiment of the present application further provides a model training method, including: obtaining a blood vessel region probability map by using an original image, wherein the blood vessel region probability map is used for indicating the probability that each pixel point in the original image is a blood vessel pixel point, generating an extended sample by using the blood vessel region probability map and the original image, and training a machine model by using the extended sample.
The specific implementation principle of this embodiment can be referred to the description of the above embodiment, and is not described herein again.
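Under the same assumptions as above, the three steps of the model training method (obtaining the probability map, generating extended samples, assembling the training set) can be sketched end-to-end; the vessel-probability predictor and the attention-map derivation are placeholders, not the patented implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image, vessel_prob):
    """One extended sample: per-channel random blend toward a random
    correction gray value, weighted by the vessel probability map."""
    out = image.astype(np.float64)
    for c in range(3):
        a = rng.uniform(0, 255)              # random correction gray value
        m = rng.uniform(0, 1) * vessel_prob  # assumed attention map
        out[..., c] = out[..., c] * (1.0 - m) + a * m
    return np.clip(out, 0, 255).astype(np.uint8)

def build_training_set(images, prob_model, n_per_image=3):
    """images -> vessel probability maps -> extended samples."""
    dataset = []
    for img in images:
        prob = prob_model(img)  # placeholder probability predictor
        dataset.extend(augment(img, prob) for _ in range(n_per_image))
    return dataset
```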
According to the technical scheme of the embodiment of the application, the electronic equipment obtains the blood vessel probability map by using the original image, continues to generate a plurality of expansion samples by using the blood vessel probability map and the original image, trains a machine model by using the expansion samples, and further processes the target image by using the machine model. In the process, the electronic equipment generates a plurality of expansion samples based on the blood vessel probability map and the original image, the expansion samples are added into the training set to train the machine model, new samples do not need to be collected and labeled again, and a large amount of labor cost and time cost are saved.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; this is not limited herein, as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (17)

1. An image processing method comprising:
acquiring a blood vessel region probability map by using an original image, wherein the blood vessel region probability map is used for indicating the probability that each pixel point in the original image is a blood vessel pixel point;
generating an expanded sample by using the blood vessel region probability map and the original image;
and processing a target image by using the machine model trained by the extended sample.
2. The method of claim 1, wherein the generating an augmented sample using the vessel region probability map and the original image comprises:
determining a blood vessel region attention map corresponding to a color channel by using the blood vessel region probability map, wherein the color channel comprises at least one of a red R channel, a green G channel or a blue B channel, and the blood vessel region attention map is used for indicating the change amplitude of pixel point channel values of the original image;
and generating an expanded sample by using the blood vessel region attention image and the original image.
3. The method of claim 2, wherein the generating an augmented sample using the vessel region interest map and the original image comprises:
and adjusting the gray level of pixel points in the original image by using the blood vessel region attention image to obtain the extended sample.
4. The method according to claim 3, wherein the blood vessel region attention map includes an R-channel blood vessel region attention map corresponding to the R-channel, a G-channel blood vessel region attention map corresponding to the G-channel, and a B-channel blood vessel region attention map corresponding to the B-channel, and the adjusting gray levels of pixel points in the original image by using the blood vessel region attention map to obtain the extended sample includes:
determining a first change amplitude of an R channel value of a pixel point in the original image according to the R channel blood vessel region attention image, and adjusting the R channel value of the pixel point according to the first change amplitude;
determining a second change amplitude of the G channel value of the pixel point in the original image according to the G channel vessel region attention image, and adjusting the G channel value of the pixel point according to the second change amplitude;
determining a third change amplitude of a B channel value of a pixel point in the original image according to the B channel blood vessel region attention image, and adjusting the B channel value of the pixel point according to the third change amplitude;
and determining the gray level of the pixel points in the original image according to the adjusted R channel value, the adjusted G channel value and the adjusted B channel value to obtain the extended sample.
5. The method according to any one of claims 2-4, wherein the determining a vessel region attention map corresponding to a color channel by using the vessel region probability map comprises:
determining a random coefficient;
and determining a blood vessel region attention map corresponding to the color channel by using the random coefficient and the blood vessel region probability map.
6. The method of claim 5, wherein the random coefficients for the R channel, the G channel, or the B channel are different.
7. The method of any of claims 1-4, wherein the processing a target image using the machine model trained using the augmented samples comprises:
training out a machine model by using the extended sample;
inputting the target image to the machine model;
and acquiring a vessel segmentation map of the target image by using the machine model.
8. An image processing apparatus, comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a blood vessel region probability map by using an original image, and the blood vessel region probability map is used for indicating the probability that each pixel point in the original image is a blood vessel pixel point;
the generating module is used for generating an expanded sample by utilizing the blood vessel region probability map and the original image;
and the processing module is used for processing the target image by using the machine model trained by the extended sample.
9. The apparatus of claim 8, wherein,
the generation module is specifically configured to determine a blood vessel region attention map corresponding to a color channel by using the blood vessel region probability map, where the color channel includes at least one of a red R channel, a green G channel, or a blue B channel, and the blood vessel region attention map is used to indicate a change amplitude of a pixel point channel value of the original image; and generating an expanded sample by using the blood vessel region attention image and the original image.
10. The apparatus of claim 9, wherein,
when the generation module generates an extended sample by using the blood vessel region attention image and the original image, the generation module is used for adjusting the gray level of a pixel point in the original image by using the blood vessel region attention image to obtain the extended sample.
11. The apparatus according to claim 10, wherein the blood vessel region attention map includes an R-channel blood vessel region attention map corresponding to the R channel, a G-channel blood vessel region attention map corresponding to the G channel, and a B-channel blood vessel region attention map corresponding to the B channel, and the generating module is configured to: determine a first variation amplitude of the R channel value of a pixel point in the original image according to the R-channel blood vessel region attention map, and adjust the R channel value of the pixel point according to the first variation amplitude; determine a second variation amplitude of the G channel value of the pixel point in the original image according to the G-channel blood vessel region attention map, and adjust the G channel value of the pixel point according to the second variation amplitude; determine a third variation amplitude of the B channel value of the pixel point in the original image according to the B-channel blood vessel region attention map, and adjust the B channel value of the pixel point according to the third variation amplitude; and determine the gray level of the pixel points in the original image according to the adjusted R channel value, G channel value and B channel value, to obtain the extended sample.
12. The apparatus of any one of claims 9-11,
the generation module is used for determining a random coefficient when determining a blood vessel region attention map corresponding to a color channel by using the blood vessel region probability map; and determining a blood vessel region attention map corresponding to the color channel by using the random coefficient and the blood vessel region probability map.
13. The apparatus of claim 12, wherein,
the random coefficients corresponding to the R channel, the G channel or the B channel are different.
14. The apparatus of any one of claims 8-11,
the processing module is used for training a machine model by using the extended sample, inputting the target image into the machine model, and acquiring a blood vessel segmentation map of the target image by using the machine model.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
17. A method of model training, comprising:
acquiring a blood vessel region probability map by using an original image, wherein the blood vessel region probability map is used for indicating the probability that each pixel point in the original image is a blood vessel pixel point;
generating an expanded sample by using the blood vessel region probability map and the original image;
and training a machine model by using the extended samples.
CN202010579769.9A 2020-06-23 2020-06-23 Image processing method, device, equipment and readable storage medium Active CN111739008B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010579769.9A CN111739008B (en) 2020-06-23 2020-06-23 Image processing method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111739008A true CN111739008A (en) 2020-10-02
CN111739008B CN111739008B (en) 2024-04-12

Family

ID=72650555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010579769.9A Active CN111739008B (en) 2020-06-23 2020-06-23 Image processing method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111739008B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115170912A (en) * 2022-09-08 2022-10-11 北京鹰瞳科技发展股份有限公司 Method for training image processing model, method for generating image and related product

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109754403A (en) * 2018-11-29 2019-05-14 中国科学院深圳先进技术研究院 Tumour automatic division method and system in a kind of CT image
US20190370965A1 (en) * 2017-02-22 2019-12-05 The United States Of America, As Represented By The Secretary, Department Of Health And Human Servic Detection of prostate cancer in multi-parametric mri using random forest with instance weighting & mr prostate segmentation by deep learning with holistically-nested networks
CN111080592A (en) * 2019-12-06 2020-04-28 广州柏视医疗科技有限公司 Rib extraction method and device based on deep learning


Also Published As

Publication number Publication date
CN111739008B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
CN112541963B (en) Three-dimensional avatar generation method, three-dimensional avatar generation device, electronic equipment and storage medium
US20220261960A1 (en) Super-resolution reconstruction method and related apparatus
KR102410328B1 (en) Method and apparatus for training face fusion model and electronic device
CN108198184B (en) Method and system for vessel segmentation in contrast images
CN111507914B (en) Training method, repairing method, device, equipment and medium for face repairing model
CN111986178A (en) Product defect detection method and device, electronic equipment and storage medium
CN111383232B (en) Matting method, matting device, terminal equipment and computer readable storage medium
EP3872766A2 (en) Method and device for processing image, related electronic device and storage medium
CN111832745A (en) Data augmentation method and device and electronic equipment
CN111539897A (en) Method and apparatus for generating image conversion model
CN109616080B (en) Special-shaped screen contour compensation method and terminal
CN112328345A (en) Method and device for determining theme color, electronic equipment and readable storage medium
CN112184851B (en) Image editing method, network training method, related device and electronic equipment
CN111754431B (en) Image area replacement method, device, equipment and storage medium
CN116208586B (en) Low-delay medical image data transmission method and system
JP2021119535A (en) Image processing method, device, electronic apparatus and storage medium
CN110472600A (en) The identification of eyeground figure and its training method, device, equipment and storage medium
CN111739008B (en) Image processing method, device, equipment and readable storage medium
CN111523467B (en) Face tracking method and device
CN111768005B (en) Training method and device for lightweight detection model, electronic equipment and storage medium
CN111710008B (en) Method and device for generating people stream density, electronic equipment and storage medium
CN111738949B (en) Image brightness adjusting method and device, electronic equipment and storage medium
CN111507944B (en) Determination method and device for skin smoothness and electronic equipment
CN110428377B (en) Data expansion method, device, equipment and medium
CN112200169B (en) Method, apparatus, device and storage medium for training a model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant