CN111626998A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium

Info

Publication number
CN111626998A
CN111626998A
Authority
CN
China
Prior art keywords
lung
image
focus
features
reference value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010440755.9A
Other languages
Chinese (zh)
Inventor
李月
蔡杭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202010440755.9A priority Critical patent/CN111626998A/en
Publication of CN111626998A publication Critical patent/CN111626998A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method, apparatus, device, and storage medium. The method includes: acquiring image features of a focal lung image, the focal lung image being a lung image in which the lung region boundary adheres to the inner wall of the thoracic cavity; correcting the focal lung image based on the image features, and determining the lung region volume of the focal lung image based on the lung region contour obtained from the corrected image; and determining the lung lobe positions of the focal lung image based on the lung region volume and extracting the lung lobes based on those positions. The invention can quickly and accurately segment lung lobe regions, assisting rapid quantitative analysis of clinical diseases.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present invention relates to the field of artificial intelligence technologies, and in particular, to an image processing method, apparatus, device, and storage medium.
Background
The specific distribution of a disease within a given lung lobe can be determined by extracting the lobe, allowing the lung disease to be analyzed quantitatively. At present, lung lobes are usually extracted directly according to lobar fissures or lung boundaries. However, for a patient with pleural adhesion or severe emphysema, an adhesion region forms between the lung region boundary and the inner wall of the chest; if lobe extraction is performed with the above method, the extracted lobes may be incomplete, so the accuracy of lung lobe extraction is low.
Disclosure of Invention
The main purpose of the present invention is to provide an image processing method, apparatus, device, and storage medium that improve the accuracy of lung lobe extraction.
To achieve the above object, in a first aspect, the present invention provides an image processing method, including:
acquiring image characteristics of a focus lung image, wherein the focus lung image is a lung image in which a lung region boundary is adhered to the inner wall of a chest cavity;
correcting the focus lung image based on the image characteristics, and determining the lung region volume of the focus lung image based on the lung region contour obtained from the corrected focus lung image;
determining the position of the lung lobe of the focus lung image based on the lung region volume, and extracting the lung lobe based on the position of the lung lobe.
In a second aspect, the present invention also provides an image processing apparatus, comprising:
the acquisition module is used for acquiring image characteristics of a focus lung image, wherein the focus lung image is a lung image in which a lung region boundary is adhered to the inner wall of a chest cavity;
the correction module is used for correcting the focus lung image based on the image characteristics;
a determining module, configured to determine a lung region volume of the lesion lung image by using a lung region contour obtained based on the corrected lesion lung image;
the determining module is further configured to determine a lung lobe position of the focal lung image based on the lung region volume;
and the extraction module is used for extracting the lung lobes based on the lung lobe positions.
In a third aspect, the present invention also provides an image processing device, comprising: a memory, a processor, and an image processing program stored in the memory and executable on the processor, wherein the image processing program, when executed by the processor, implements the steps of the image processing method described above.
In a fourth aspect, the present invention further provides a storage medium, on which an image processing program is stored, which, when executed by a processor, implements the steps of the image processing method.
The invention provides an image processing method, apparatus, device, and storage medium. First, image features of a focal lung image are acquired, the focal lung image being a lung image in which the lung region boundary adheres to the inner wall of the thoracic cavity. The focal lung image can then be corrected based on these image features, which improves the feature precision of the image, allows the adhesion region between the lung boundary and the chest wall to be corrected and segmented, and ensures a clear and complete lung region contour. The lung region volume can thus be determined from that contour, the lung lobe positions can be determined based on the lung region volume with reliable localization, and the lung lobes can then be extracted based on those positions, improving the accuracy of lung lobe extraction.
Drawings
FIG. 1 shows a flow diagram of an image processing method according to an embodiment of the invention;
FIG. 2 shows a schematic diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic diagram showing a configuration of an image processing apparatus according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that, in the embodiment of the present invention, the image processing device may be a smart phone, a personal computer, a server, and the like, and is not limited specifically herein.
The execution subject of the image processing method provided in the embodiments of the present invention may be any image processing apparatus. For example, the method may be executed by a terminal device or a server, where the terminal device may be User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. The server may be a local server or a cloud server. In some possible implementations, the image processing method may be implemented by a processor calling computer-readable instructions stored in a memory.
Fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the present invention. As shown in fig. 1, an image processing method includes:
s10: and acquiring image characteristics of the focus lung image.
Here, the focal lung image is a lung image in which adhesion occurs between the lung region boundary and the inner wall of the thoracic cavity. In some possible embodiments, the focal lung image may be obtained by CT (Computed Tomography), MRI (Magnetic Resonance Imaging), X-ray imaging, and the like; alternatively, a captured focal lung image may be received from another electronic device or a server. The lung image may comprise multiple layers of tomographic images of the lung, which can be stacked to form the complete focal lung image. In addition, the focal lung image in the embodiments of the invention is a lung image in which the lobar fissures are unclear and the lung region boundary adheres to the inner chest wall; that is, any lung image in which the fissures between lobes are unclear, or the division between the lung region and the thoracic cavity or organs is unclear, can serve as the focal lung image of the embodiments of the invention.
In some possible embodiments, in the case of obtaining the lung image, image features of the lung image may be further extracted, and the lung image is modified through optimization of the image features, so as to obtain a lung region contour, where the lung region contour includes at least one of a left lung region contour and a right lung region contour. The image feature of the lung image of the lesion may be obtained by directly using pixel values corresponding to pixel points in each layer of image of the lung image of the lesion as the image feature, or by performing feature extraction processing on the image. The feature extraction process may be performed, for example, by a feature pyramid network or a residual network.
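As a concrete illustration of the first option above (pixel values used directly as image features), the following sketch normalizes each CT slice's Hounsfield values into a per-layer feature map. The HU window and array shapes are illustrative assumptions, not values from this disclosure:

```python
import numpy as np

def slice_pixel_features(ct_volume, hu_min=-1000.0, hu_max=400.0):
    """Use normalized pixel (HU) values of each CT slice directly as
    image features, one feature map per layer; the HU window is a
    hypothetical choice for illustration."""
    clipped = np.clip(ct_volume.astype(np.float32), hu_min, hu_max)
    return (clipped - hu_min) / (hu_max - hu_min)  # (layers, H, W) in [0, 1]

# hypothetical 3-slice, 4x4 volume of soft tissue around -500 HU
volume = np.full((3, 4, 4), -500.0)
features = slice_pixel_features(volume)
```

The alternative the text mentions, a learned extractor such as a residual or feature pyramid network, would replace this function while keeping the same per-layer output shape.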
S20: and performing correction processing on the focus lung image based on the image characteristics.
In some possible embodiments, the image features of each layer of image may be optimized to obtain optimized features, and the image formed by the optimized features of each layer is determined as the corrected focal lung image. The feature optimization method may include at least one layer of convolution processing; for example, feature optimization of each layer of image may be implemented by a residual network, although the present invention is not limited thereto.
In some possible embodiments, the lesion lung image may be corrected by using interlayer information between image features of each layer of image, and the correction accuracy of the lesion lung image may be improved by using fusion and optimization of multilayer feature information.
S30: and determining the lung region volume of the focus lung image based on the lung region contour obtained from the corrected focus lung image.
In some possible embodiments, a lung region volume may be obtained by using the lung region contour of each layer of the image, the lung region volume including at least one of a left lung volume and a right lung volume; the volume of the lung region formed by the lung region contour may be determined as the lung region volume. The lung area volume can be determined according to the sum of the areas surrounded by the outlines of each layer of lung area. For example, the volume of the left lung region may be obtained according to the region area enclosed by the left lung region outline of each layer of image, and the volume of the right lung region may be obtained according to the region area enclosed by the right lung region of each layer of image.
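The per-slice area summation described above can be sketched as follows, with per-slice binary lung masks standing in for the contours. The pixel spacing and slice thickness are hypothetical values, since the disclosure does not fix them:

```python
import numpy as np

def lung_region_volume(masks, pixel_spacing_mm=(0.7, 0.7), slice_thickness_mm=1.25):
    """Approximate a lung-region volume as the sum over slices of the
    area enclosed by each layer's lung contour (here, the pixel count of
    a binary mask times the pixel area), scaled by slice thickness.
    The spacing values are illustrative assumptions."""
    pixel_area = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    slice_areas = masks.reshape(masks.shape[0], -1).sum(axis=1) * pixel_area
    return float(slice_areas.sum() * slice_thickness_mm)  # mm^3

left_masks = np.zeros((2, 4, 4), dtype=np.uint8)
left_masks[:, 1:3, 1:3] = 1            # 4 lung pixels per slice
vol = lung_region_volume(left_masks)   # 2 slices * 4 px * 0.49 mm^2 * 1.25 mm
```

Running the same computation separately on left-lung and right-lung masks yields the left and right lung volumes the text distinguishes.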
S40: determining the position of the lung lobe of the focus lung image based on the lung region volume, and extracting the lung lobe based on the position of the lung lobe.
In some possible embodiments, in the case of obtaining the lung region volume, the lung lobes of the focal lung image may be divided according to a pre-configured reference parameter, which may be a reference proportion parameter of each lung lobe in the lung image, so that the lung lobe position in the focal lung image may be obtained through the reference parameter, and then the lung lobes may be extracted based on the lung lobe position.
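A minimal sketch of locating lobe boundaries from pre-configured reference proportion parameters might look like this; the fraction values are placeholders, as the disclosure does not specify the reference parameters:

```python
import numpy as np

def lobe_boundary_slices(slice_areas, reference_fractions=(0.35, 0.65)):
    """Locate lobe boundary slices by finding where the cumulative lung
    volume crosses pre-configured reference proportions. The fractions
    here are hypothetical stand-ins for the patent's 'reference
    parameters', which it does not enumerate."""
    cumulative = np.cumsum(slice_areas) / np.sum(slice_areas)
    return [int(np.searchsorted(cumulative, f)) for f in reference_fractions]

areas = np.array([1.0, 1.0, 1.0, 1.0, 1.0])   # toy uniform slice areas
bounds = lobe_boundary_slices(areas)           # slices where 35% / 65% are reached
```

With the boundary slices known, each lobe can then be extracted as the mask voxels between consecutive boundaries.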
Through the configuration of the embodiments of the invention, the focal lung image can be corrected, improving the feature precision of the focal lung image, and the corrected image can be used to extract the lung region contour and thereby obtain the lung region volume and the lung lobe positions. Because this configuration improves the feature precision of the focal lung image, the lung lobe segmentation accuracy improves correspondingly, which helps address the difficulty of quantitatively evaluating focal lung images.
It should be noted that the embodiments of the present invention may be implemented by a neural network, or by an algorithm defined in the present application; any implementation falling within the scope of the technical solutions protected by this application may serve as an embodiment of the present invention.
The following describes embodiments of the present invention in detail with reference to the accompanying drawings. Embodiments of the invention may first obtain image features of each layer of image in the lung image. As described in the embodiments above, the pixel values of the pixel points of each layer of image in the focal lung image may be used as the corresponding image features, or the image features of the focal lung image may be extracted by a feature extraction neural network. The feature extraction neural network may comprise a residual network or a feature pyramid network; the focal lung image is input into the feature extraction neural network to obtain the image features corresponding to the focal lung image, including the image features of each layer of image. Alternatively, the images of each layer may be input into the feature extraction neural network separately, correspondingly obtaining the image features of each layer. The image features may be expressed in vector or matrix form, including feature information of each pixel point of the corresponding image; the form of the image features is not particularly limited in the present invention.
Under the condition of obtaining the image characteristics of each layer of image, the optimization processing of the image characteristics can be executed, the correction of the focus lung area is realized, and the lung area outline is obtained. In some possible embodiments, the convolution processing may be directly performed on the image features of each layer of image, so as to further improve the accuracy of the image features and enrich the detail feature information, and obtain corresponding optimized features. Or feature fusion can be performed by using the image features of the images of the adjacent layers, and the optimization of the image features is realized by using the fusion result.
In an embodiment of the present invention, the modifying the lesion area image based on the image feature and obtaining a lung area contour by using the modified lesion lung image includes: performing feature optimization processing on the image features of each layer of image to obtain optimized features respectively corresponding to each layer of image; performing correction processing on the focus lung image by using the correlation characteristics among the optimization characteristics of the adjacent layer images in the lung image to obtain a corrected focus lung image; and obtaining the lung region contour of the lung image based on the corrected lung image.
In some possible embodiments, the respective optimization of the image features may be implemented by performing convolution processing on the image features of each layer of image; through this optimization, more detailed feature information can be added, improving the richness of the features. The optimization processing is performed on each layer of image to obtain the corresponding optimized features. Alternatively, the image features of adjacent layer images can be connected to obtain connection features, and feature processing is performed on the connection features so that the image features of the adjacent layers fuse with each other and the feature precision improves; the obtained connection features are then convolved through two convolution layers respectively, correspondingly obtaining the optimized features of each layer in the adjacent layer images.
In this embodiment of the present invention, the performing feature optimization processing on the image features of each layer of image to obtain optimized features corresponding to each layer of image may include: performing multi-image feature fusion processing on adjacent layer images in the focus lung image to obtain fusion features corresponding to the adjacent layer images in the adjacent layer images respectively, wherein the adjacent layer images in the focus lung image comprise a first image arranged according to the sequence of increasing the number of layers and at least one second image adjacent to the first image, and the fusion features of the images in the adjacent layer images are fused with feature information of any image in the adjacent layer images; and performing single image feature fusion processing on corresponding image features by using the fusion features of the images of all layers in the lesion lung image to obtain the optimized features of the image.
In the embodiments of the present invention, the adjacent layer images may be defined as including a first image and at least one second image, where a second image is an image adjacent to the first image in the direction of increasing layer number. For example, if the first image is the i-th layer image, the second image may be the (i+1)-th layer image, or the (i+1)-th through (i+n)-th images, where n is an integer greater than 1; in the embodiments of the invention n may be an integer less than 5, although the invention is not limited thereto. According to the embodiments of the invention, each layer of image can be determined in turn as the first image, in order from the first layer image to the N-th layer image of the lung image, and feature fusion and correction of the first image can be performed by combining the features of the second images adjacent to it.
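The adjacent-layer grouping just described (each layer taken in turn as the first image, together with up to n following layers) can be enumerated as follows; this is an illustrative sketch, not the patented implementation:

```python
def adjacent_layer_groups(num_layers, n=2):
    """Enumerate adjacent-layer groups: each layer i in turn is the
    'first image', grouped with up to n following layers as its
    'second images' (the text describes n < 5). Layers with no
    following neighbor yield no group in this sketch."""
    groups = []
    for i in range(num_layers):
        seconds = list(range(i + 1, min(i + 1 + n, num_layers)))
        if seconds:
            groups.append((i, seconds))
    return groups

groups = adjacent_layer_groups(4, n=2)
```

Each tuple then feeds one round of the multi-image fusion described below, with the first element indexing the first image and the list indexing its second images.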
According to the embodiment of the invention, the first fusion feature corresponding to the first image and the second fusion feature corresponding to the second image can be respectively obtained through multi-image feature fusion between the image feature of the first image and the image feature of the second image. The image features of the first image and the second image can be fused with each other through multi-image feature fusion processing, and therefore the obtained first fusion feature and the second fusion feature respectively comprise feature information of the first image and the second image. Here, the first image may be any one of the lung images.
By the above configuration, a fused feature corresponding to each layer of image in the lesion lung image can be obtained, and the fused feature can include feature information of the layer of image and feature information of an image adjacent to the layer of image.
In some possible embodiments, when obtaining the fusion feature of each layer of image, a single image feature fusion process may be performed on the image features of the image by using the fusion feature of the image. For example, when a first fusion feature of a first image and a second fusion feature of a second image are obtained, a single-image feature fusion may be performed on the image feature of the first image by using the first fusion feature, and a single-image feature fusion may be performed on the image feature of the second image by using the second fusion feature, so as to obtain a first optimized feature and a second optimized feature respectively.
The single image feature fusion processing can further enhance the respective image features on the basis of the fusion features respectively corresponding to the layers. For example, the respective image features may be further enhanced on the basis of the first fusion feature of the first image and the second fusion feature of the second image, such that the obtained first optimization feature also simultaneously fuses the feature information of the second image on the basis of the image feature of the first image, and such that the obtained second optimization feature also simultaneously fuses the feature information of the first image on the basis of the image feature of the second image.
In addition, the performing multi-image feature fusion processing on the adjacent layer images in the focus lung image to obtain fusion features corresponding to the images of the adjacent layers in the adjacent layer images respectively includes: connecting the image characteristics of the adjacent layer images to obtain a first connecting characteristic; processing the first connection characteristic by utilizing a first residual error network to obtain a first residual error characteristic; and performing convolution processing on the first residual error features by utilizing at least two convolution layers respectively to correspondingly obtain fusion features of the adjacent layer images respectively.
In the embodiments of the invention, when performing the multi-image feature fusion, the image features of the images within the adjacent layer group can be connected to obtain a first connection feature. For example, the connection operation may be performed through a concatenation function (concat), so that the feature information of the adjacent layer images is preliminarily merged.
In the case where the first connection characteristic is obtained, the first connection characteristic may be further subjected to optimization processing. In the embodiment of the present invention, the feature optimization process may be performed by using a residual error network (first residual error network). The first connection feature may be input to a first residual block (residual block) to perform feature optimization processing, so as to obtain a first residual feature. The processing of the first residual error network can further fuse the feature information in the first connection feature and improve the accuracy of the feature information, that is, the feature information in the first image and the second image is further accurately fused in the first residual error feature. The first residual network may be any residual network structure, which is not specifically limited in the present invention.
In some possible embodiments, in the case of obtaining the first residual feature, convolution processing may be performed on the first residual feature using different convolution layers respectively. For example, when the adjacent layers are a first image and a second image, convolution processing may be performed on the first residual feature using two convolution layers, obtaining the first fusion feature of the first image and the second fusion feature of the second image respectively. The two convolutional layers may use, but are not limited to, 1 × 1 convolution kernels. The first fusion feature includes feature information of the second image, and the second fusion feature likewise includes feature information of the first image; that is, the two fusion features each contain feature information of both images.
By the configuration, the fusion of the feature information of the multiple images of each image in the adjacent layer images can be realized, and the correction precision of each layer image in the lung image can be improved by means of the fusion of the interlayer information.
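A numpy-only sketch of the multi-image fusion pipeline just described: channel concatenation, a first residual block, then two separate 1 × 1 convolutions producing per-image fusion features. The channel sizes and random weights are illustrative assumptions; a real implementation would use trained convolution layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    """1x1 convolution over the channel axis: x is (C_in, H, W), w is (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', w, x)

def residual_block(x, w1, w2):
    """Minimal residual block: two 1x1 convs with ReLU plus a skip connection."""
    h = np.maximum(conv1x1(x, w1), 0.0)
    return x + conv1x1(h, w2)

def multi_image_fusion(feat_a, feat_b, params):
    """Concatenate two layers' features, refine with a (first) residual
    block, then split back out through two separate 1x1 convs into
    per-image fusion features, each carrying both images' information."""
    joint = np.concatenate([feat_a, feat_b], axis=0)        # channel concat
    refined = residual_block(joint, params['res_w1'], params['res_w2'])
    fused_a = conv1x1(refined, params['head_a'])            # fusion feature, image A
    fused_b = conv1x1(refined, params['head_b'])            # fusion feature, image B
    return fused_a, fused_b

C, H, W = 3, 4, 4
params = {
    'res_w1': rng.standard_normal((2 * C, 2 * C)) * 0.1,
    'res_w2': rng.standard_normal((2 * C, 2 * C)) * 0.1,
    'head_a': rng.standard_normal((C, 2 * C)) * 0.1,
    'head_b': rng.standard_normal((C, 2 * C)) * 0.1,
}
fa, fb = multi_image_fusion(rng.standard_normal((C, H, W)),
                            rng.standard_normal((C, H, W)), params)
```

Note the two heads restore each image's original channel count, so the fusion features can later be added back onto the per-image features.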
The method for performing single image feature fusion processing on the corresponding image features by using the fusion features of each layer of image in the lesion lung image to obtain the optimized features of the image comprises the following steps: obtaining the addition characteristic of the image by utilizing the addition processing of the fusion characteristic of the image and the image characteristic; and processing the summation characteristic of the image by using a second residual error network to obtain the optimized characteristic of the image.
In some possible embodiments, in the case of obtaining the fusion features of the images of the respective layers, the fusion features and the corresponding image features perform an optimization process of the image features. The embodiment may first obtain the summation feature through the summation processing of the fusion feature of the image and the image feature. And then, optimizing the sum characteristic by using a residual error network (a second residual error network) to obtain the optimized characteristic of the image. For example, for the first image feature of the first image, the optimization processing may be performed by using a manner of adding the image feature of the first image and the first fusion feature, and the adding may include direct addition of the first fusion feature and the image feature of the first image, and may also include weighted addition of the first fusion feature and the image feature of the first image, that is, the first fusion feature and the image feature of the first image are multiplied by corresponding weighting coefficients respectively and then summed, where the weighting coefficients may be preset values or values learned by a neural network, which is not limited in the present invention.
Similarly, in the case of obtaining the second fusion feature, the single image feature fusion processing of the second image may be performed by using the second fusion feature, and the embodiment of the present invention may perform the fusion processing by using a mode of adding the image feature of the second image and the second fusion feature, where the addition may include direct addition of the second fusion feature and the image feature of the second image, or may also include weighted addition of the second fusion feature and the image feature of the second image, that is, the second fusion feature and the image feature of the second image are respectively multiplied by corresponding weighting coefficients to perform addition operation, where the weighting coefficients may be preset values or values learned by a neural network, and the present invention is not limited thereto.
It should be noted that the embodiment of the present invention does not specifically limit the order in which the image feature of the first image is summed with the first fusion feature and the image feature of the second image is summed with the second fusion feature; the two summations may be performed separately or simultaneously.
Through the above addition processing, the feature information of the original image can be further enhanced on the basis of the fusion feature. The fusion with the single-image features allows the feature information of each single-layer image to be retained at each stage of the network, so that the feature information of a single-layer image can further be optimized according to the optimized feature information shared among the multiple layers. In addition, the embodiment of the present invention may directly use the first summation feature and the second summation feature as the first optimized feature and the second optimized feature, or may perform subsequent optimization processing on them, thereby further improving feature accuracy.
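As a minimal sketch of the direct and weighted addition variants described above (assuming NumPy arrays as features, and hypothetical weighting coefficients standing in for the preset or learned values):

```python
import numpy as np

def optimize_feature(image_feat, fusion_feat, w_img=1.0, w_fuse=1.0):
    """Sum a layer's image feature with its fusion feature.

    w_img / w_fuse are illustrative weighting coefficients; with both
    set to 1.0 this is the direct-addition variant, otherwise the
    weighted-addition variant. In the text these coefficients may be
    preset values or values learned by a neural network.
    """
    return w_img * image_feat + w_fuse * fusion_feat

f_img = np.ones((4, 4))             # image feature of the first image
f_fuse = np.full((4, 4), 2.0)       # first fusion feature
direct = optimize_feature(f_img, f_fuse)              # every element 3.0
weighted = optimize_feature(f_img, f_fuse, 0.7, 0.3)  # every element 1.3
```

The resulting summation feature would then be passed through the second residual network to obtain the optimized feature of the layer.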
With the above configuration, the optimized features of each single-layer image in the lesion lung image can be obtained, and once the optimized features are obtained, feature optimization of the lung image can be performed by using the correlation features between the optimized features.
In the case of obtaining the optimized features of the respective layer images, the focal lung region image may be corrected using the optimized features, wherein the correction operation may be performed using the association between the optimized features of the adjacent layer images. In an embodiment of the present invention, the performing a correction process on the focus lung image by using a correlation feature between optimization features of adjacent layer images in the lung image to obtain a corrected focus lung image includes: acquiring correlation characteristics among the optimization characteristics of adjacent layer images in the focus lung image; performing feature fusion processing on the optimized features respectively corresponding to the adjacent layer images by using the associated features between the optimized features of the adjacent layer images to obtain optimized fusion features; correcting the image characteristics of any image in the adjacent layer images by using the optimized fusion characteristics to obtain the correction characteristics of any image; and obtaining a corrected focus lung image by utilizing the correction characteristics corresponding to each layer of image of the focus lung image.
In the embodiment of the present invention, when the optimized features of each layer of image in the lung image are obtained, the corrected lung image may be obtained by using the correction features respectively corresponding to each layer of image of the lung image.
In the embodiment of the present invention, in the case of obtaining the optimized features of each layer of image in the lung image, the correlation features between the optimized features may be determined by using the optimized features of adjacent layer images. For example, the correlation feature between the first optimized feature corresponding to the first image and the second optimized feature corresponding to the second image may be obtained; this correlation feature represents the degree of association between the feature information at the same position in the first optimized feature and the second optimized feature. The degree of association may reflect the change of the same object between the first image and the second image. The same object may include, for example, a boundary of a lung region. In the embodiment of the present invention, the scales of the respective images in the lung image may be the same, and the scales of the correspondingly obtained optimized features are also the same.
In addition, when the obtained first and second optimized features, or the first and second fusion features, the first and second summation features, and the image features of the first and second images have different scales, the corresponding features may be adjusted to the same scale; the scaling operation may be performed, for example, by pooling processing.
In addition, the embodiment of the invention may obtain the correlation features between the optimized features of the adjacent layer images through a graph convolutional neural network. For example, the first optimized feature of the first image and the second optimized feature of the second image in the adjacent layer images can be input into the graph convolutional neural network, which, after processing, can output the correlation feature between the first optimized feature and the second optimized feature; the elements in the correlation feature represent the association between the feature information at the same position in the first optimized feature and the second optimized feature.
In the embodiment of the present invention, before the correction operation of each layer image is performed, a fusion feature (optimized fusion feature) of the optimized features of the adjacent layer images may also be obtained. For example, a fusion operation of the first optimized feature and the second optimized feature may be performed. The optimized features of the adjacent layer images may first be connected, such as connecting the first optimized feature and the second optimized feature in the channel direction; the embodiment of the invention may execute this connection through a concat function to obtain a second connection feature. Then, activation processing is applied to the correlation feature between the optimized features of the adjacent layer images by using an activation function, which may be a softmax function: each degree of association in the correlation feature is used as an input parameter, and the activation function processes each input parameter and outputs the activated correlation feature. Further, the optimized fusion feature may be obtained as the product of the activated correlation feature and the second connection feature.
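A sketch of this fusion step, under the assumption that the optimized features are (C, H, W) arrays and the correlation feature has the same shape as the channel-wise connected feature:

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along one axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def optimized_fusion(opt1, opt2, corr):
    """Connect adjacent-layer optimized features in the channel direction,
    activate the correlation feature with softmax, and take the
    element-wise product to obtain the optimized fusion feature."""
    connected = np.concatenate([opt1, opt2], axis=0)  # second connection feature
    activated = softmax(corr, axis=0)                 # activation processing
    return activated * connected

opt1 = np.ones((2, 3, 3))          # first optimized feature
opt2 = 2.0 * np.ones((2, 3, 3))    # second optimized feature
corr = np.zeros((4, 3, 3))         # uniform correlation -> weights of 0.25
fused = optimized_fusion(opt1, opt2, corr)
```

With a uniform correlation feature, each of the four connected channels receives an equal softmax weight of 0.25.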
In the case of obtaining the optimized fusion feature of the adjacent layer images, the optimized fusion feature may be used to perform a correction operation on any one of the adjacent layer images. The embodiment of the invention may correct the image features by adding the image features of the original image and the optimized fusion feature to obtain the corrected image features, that is, the corrected image.
For example, the image feature of the first image and the optimized fusion feature may be summed to obtain a corrected image feature, and a corrected image of the lesion lung image may be determined from the corrected image feature. The addition may be direct, or weighted by means of weighting coefficients; the present invention is not limited in this respect. The corrected image features may directly correspond to the pixel values of the pixels of the image, so that the corrected image corresponding to the corrected image features can be used directly. Alternatively, convolution processing may be further performed on the corrected image features to further fuse the feature information and improve feature accuracy, and the corrected lesion lung image is then determined from the features obtained through the convolution processing.
The image correction process of the embodiment of the invention can realize at least one of denoising, super-resolution, and deblurring of each layer of image in the focus lung image, and the corrected focus lung image can improve the image quality to different degrees.
In addition, it should be noted that, before performing the processing on the adjacent layer images, the embodiments of the present invention may group the layers of the lesion lung image, for example, two layers per group, or n layers per group. The grouping may be performed on the layers of the lesion lung image in order from layer 1 to layer N (the total number of layers of the lung image), and the images in the same group are then treated as adjacent layer images.
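The grouping can be sketched as follows (keeping a final partial group as-is is an assumption the text does not address):

```python
def group_adjacent_layers(num_layers, group_size=2):
    """Group layer indices 1..N into adjacent-layer groups of group_size,
    in order from layer 1 to layer N."""
    layers = list(range(1, num_layers + 1))
    return [layers[i:i + group_size] for i in range(0, num_layers, group_size)]

groups = group_adjacent_layers(5, 2)   # [[1, 2], [3, 4], [5]]
```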
In the case of obtaining the corrected lesion lung image, a lung region detection operation, that is, lung region segmentation processing, may be performed on the corrected lesion lung image. For example, the corrected lesion lung image may be input to a convolutional neural network that performs the segmentation operation, and the lung region contour is output by the convolutional neural network. The convolutional neural network may be a U-net, but this is not a specific limitation of the present invention. The obtained lung region contour may comprise at least one of a left lung region contour and a right lung region contour.
In addition, the neural networks involved in the above embodiments of the present invention, such as the feature extraction neural network, the residual network, and the convolutional neural network, are all network structures that can implement the corresponding functions after training and can meet the accuracy requirements; a person skilled in the art can set different accuracy conditions as required, which is not specifically limited by the present invention.
Based on the above configuration, the embodiment of the invention can determine the lung region contour from the feature information of the interlayer images in the lung image; this configuration can improve the accuracy of the lung image feature information and thus the accuracy of the lung region contour.
The corrected lung region contour obtained in the embodiment of the present invention may also be represented in matrix or vector form, in which a first label 1 represents the contour boundary and the image pixel points within it, and a second label 0 represents the other regions. The lung region in the lung image can then be extracted by taking the product of the corrected lung region contour and the image features.
In addition, another method for obtaining a lung region contour is further provided in the embodiments of the present invention, comprising the following steps. Step S201: acquiring a set lung region contour. Step S202: obtaining the lung region contour to be extracted according to the focus lung image. Step S203: correcting the to-be-extracted lung region contour by using the set lung region contour to obtain a corrected lung region contour. Step S204: performing lung region extraction on the focus lung image according to the corrected lung region contour. This solves the problem that complete and accurate extraction cannot be carried out in a region where the lung boundary is adhered to the inner wall of the thoracic cavity. In this method, only the lung region contour needs to be preset; the to-be-extracted lung region contour is then obtained from the focus lung image and corrected with the set lung region contour, and the corrected lung region contour is used to extract the lung region of the focus lung image, without relying on other feature extraction of traditional algorithms.
Step S201: acquiring a set lung region contour.
In an embodiment of the present invention, the method for determining the set lung region contour includes: respectively extracting the lung region contours from the lung images of a plurality of healthy subjects to obtain a plurality of lung region contours; and fitting the plurality of lung region contours to obtain the set lung region contour. The fitting may adopt a least squares method. The lung region contours of the lung images of the healthy subjects may be extracted by an existing region growing method. In the present invention and its embodiments, the size of the lung images of the healthy subjects should be the same as the size of the focus lung image.
In the embodiment of the invention, the number of lung images of healthy subjects is 1003 sets; the lung region contours of the 1003 sets of lung images are respectively extracted by using the region growing method to obtain 1003 sets of lung region contours, and the set lung region contour, that is, the lung region contour model, is then obtained by least squares fitting.
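As a simplified stand-in for the least-squares fitting step (assuming each subject's contour has been resampled to the same number of corresponding points), note that the contour minimizing the summed squared distances to all subject contours at each sample point is the pointwise mean:

```python
import numpy as np

def fit_set_contour(contours):
    """Least-squares fit of several resampled contours.

    contours: (num_subjects, num_points, 2) array of (x, y) samples.
    The pointwise mean minimizes the sum of squared residuals to the
    subject contours, so it serves as the fitted set contour here.
    """
    return np.asarray(contours, dtype=float).mean(axis=0)

c1 = [[0, 0], [2, 0], [2, 2], [0, 2]]
c2 = [[0, 0], [4, 0], [4, 4], [0, 4]]
set_contour = fit_set_contour([c1, c2])   # [[0,0],[3,0],[3,3],[0,3]]
```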
In the embodiment of the invention, in the lung images of the healthy subjects, the lung boundary is not adhered to the inner wall of the thoracic cavity, so the lung region extraction can be performed on them by using a traditional method.
Step S202: obtaining the lung region contour to be extracted according to the focus lung image.
In an embodiment of the present invention, the focus lung image is a lung image from which the lung region contour is to be extracted; similarly, the to-be-extracted lung region contour may be obtained from the focus lung image by using the region growing method. However, the to-be-extracted lung region contour has an erroneously extracted contour at the adhesion, which appears as a concave or convex region of the contour. Therefore, the set lung region contour is used to correct the to-be-extracted lung region contour, and the corrected lung region contour is then used to extract the lung region of the focus lung image.
Step S203: and correcting the to-be-extracted lung region contour by using the set lung region contour to obtain a corrected lung region contour.
In the present invention, the method for correcting the to-be-extracted lung region contour with the set lung region contour to obtain a corrected lung region contour includes: setting a plurality of points on the to-be-extracted lung region contour; and scaling the set lung region contour according to a set step length, wherein when the number of the points contacted by the scaled set lung region contour is greater than or equal to a set number, the scaled set lung region contour is the corrected lung region contour. Scaling the set lung region contour by the set step length only changes the size of the set lung region contour, not its shape.
In an embodiment of the present invention, the set lung region contour may be gradually reduced or gradually enlarged. Before scaling the set lung region contour according to the set step length, a first area enclosed by the set lung region contour and a second area enclosed by the to-be-extracted lung region contour are respectively calculated; when the first area is larger than the second area, the set lung region contour is reduced, and when the first area is smaller than the second area, the set lung region contour is enlarged.
In the embodiment of the present invention, the number of the points is 1000, and the points are uniformly distributed on the to-be-extracted lung region contour. The set step length is 0.1 mm, and the set number may be 1/2 or more of the number of points. The number of points, the set step length, and the set number can all be chosen by a person skilled in the art according to actual needs, and the invention is not limited thereto.
In the present invention, the set lung region contour is scaled according to the set step length; when the number of the points contacted by the scaled set lung region contour is greater than or equal to the set number, the erroneously extracted contour of the to-be-extracted lung region contour is determined, the erroneously extracted contour is replaced with the corresponding contour of the scaled set lung region contour, and the to-be-extracted lung region contour after replacement is the corrected lung region contour. That is, the regions where the to-be-extracted lung region contour is correct are retained, and only the erroneously extracted contour is corrected.
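A sketch of the scaling search described above (the centroid-based scaling, the contact tolerance `tol`, and the iteration cap are assumptions not fixed by the text):

```python
import numpy as np

def scale_to_fit(set_contour, points, step=0.1, tol=0.05,
                 required=None, shrink=True, max_iter=100):
    """Scale the set contour about its centroid in steps of `step` until
    at least `required` of the points on the to-be-extracted contour lie
    within `tol` of a contour vertex; return the scaled contour."""
    set_contour = np.asarray(set_contour, float)
    points = np.asarray(points, float)
    if required is None:
        required = len(points) // 2        # at least half of the points
    centroid = set_contour.mean(axis=0)
    scale = 1.0
    for _ in range(max_iter):
        scaled = centroid + scale * (set_contour - centroid)
        # distance from every point to every contour vertex
        d = np.linalg.norm(points[:, None, :] - scaled[None, :, :], axis=2)
        if int((d.min(axis=1) <= tol).sum()) >= required:
            return scaled
        scale += -step if shrink else step
    return None

# set contour: circle of radius 2; points: on a circle of radius 1
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
circle2 = np.stack([2.0 * np.cos(theta), 2.0 * np.sin(theta)], axis=1)
pts = circle2[::8] / 2.0                   # 8 points on the unit circle
fitted = scale_to_fit(circle2, pts)        # shrinks to roughly radius 1
```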
In the present invention, the method for determining the erroneously extracted contour of the to-be-extracted lung region contour includes: the points contacted by the scaled set lung region contour divide the scaled set lung region contour into a plurality of set lung region contour line segments and divide the to-be-extracted lung region contour into a plurality of to-be-extracted lung region contour line segments; the distance between each set lung region contour line segment and the to-be-extracted lung region contour line segment having the same starting point and end point is calculated; and when the distance is greater than or equal to a set distance, the to-be-extracted lung region contour line segment is the erroneously extracted contour.
In an embodiment of the present invention, the set lung region contour line segment and the to-be-extracted lung region contour line segment having the same starting point and end point are approximated as straight lines, and the average distance between the 2 lines is calculated as the distance between the two segments. Specifically, the 2 lines are non-parallel; the calculation takes a plurality of contour points on the to-be-extracted lung region contour line segment and computes the distance from each contour point to the set lung region contour line segment, and the average of all these distances is the distance between the two segments. When this distance is greater than or equal to the set distance, the to-be-extracted lung region contour line segment is the erroneously extracted contour.
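The averaged point-to-segment distance can be sketched as below (treating the set contour segment as a single straight line segment rather than a polyline is a simplifying assumption):

```python
import numpy as np

def point_to_segment(p, a, b):
    """Distance from point p to the line segment from a to b."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    ab, ap = b - a, p - a
    t = np.clip(ap.dot(ab) / ab.dot(ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def segment_distance(extract_points, seg_start, seg_end):
    """Average distance from sampled points on the to-be-extracted contour
    line segment to the set contour line segment; compared against the
    set distance (e.g. 8-25 mm) to flag an erroneously extracted contour."""
    return float(np.mean([point_to_segment(p, seg_start, seg_end)
                          for p in extract_points]))

d = segment_distance([(0, 2), (5, 4), (10, 2)], (0, 0), (10, 0))
# distances 2, 4, 2 -> average 8/3
```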
In an embodiment of the invention, the set distance may be 8 mm to 25 mm. Likewise, a person skilled in the art can choose the set distance according to actual needs, and the present invention is not limited thereto.
Step S204: performing lung region extraction on the focus lung image according to the corrected lung region contour.
Further, in the present invention, the set lung region contour is a set lung region contour of a multilayer plane, and the focus lung image is a focus lung image of the multilayer plane. Before the set lung region contour is used to correct the to-be-extracted lung region contour, it is judged whether the number of layers of the set lung region contour is the same as that of the focus lung image. If they are the same, lung region extraction is performed on the focus lung image according to the corrected lung region contour; if they are different, interpolation processing is performed on the set lung region contour or on the focus lung image so that the two have the same number of layers, and lung region extraction is then performed on the focus lung image according to the corrected lung region contour obtained after the interpolation processing, or on the interpolated focus lung image according to the corrected lung region contour.
Meanwhile, the correction of the to-be-extracted lung region contour by the set lung region contour is performed layer by layer on the multilayer plane, so the obtained corrected lung region contour is a corrected lung region contour of the multilayer plane, and lung region extraction is performed on each layer of the focus lung image of the multilayer plane according to the corrected lung region contour of the corresponding layer.
Further, in the present invention, the set lung region contour includes a set left lung region contour and a set right lung region contour, and the to-be-extracted lung region contour includes a to-be-extracted left lung region contour and a to-be-extracted right lung region contour. The to-be-extracted left lung region contour is corrected by using the set left lung region contour to obtain a corrected left lung region contour; the to-be-extracted right lung region contour is corrected by using the set right lung region contour to obtain a corrected right lung region contour; and lung region extraction is performed on the focus lung image according to the corrected left lung region contour and the corrected right lung region contour.
In an embodiment of the present invention, the set left lung region contour and the set right lung region contour are extracted by the region growing method. For the methods of deriving the corrected left lung region contour and the corrected right lung region contour, refer to the detailed description above.
In addition, in the case of obtaining the lung region contour, the volume formed by the lung region may be determined using the obtained lung region contour, for example by first obtaining the area of the region surrounded by the lung region contour in each slice image and then deriving the lung region volume.
In some possible embodiments, the lung region volume may be obtained from the area corresponding to the lung region contour in each layer of image. Each layer of image may be divided into a mesh of grids, each grid having a preset size, such as a square with a side length of 1 mm; the size is not specifically limited in the present invention, and in general a smaller size improves the detection accuracy of the area. When the lung region contour passes through a grid, it can be determined whether more than half of the grid lies inside the lung region; if so, the area of the grid is counted as area inside the lung region, and if not, the grid is ignored. The area enclosed by the lung region contour can then be obtained from the areas of the grids it encloses.
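A sketch of the more-than-half rule (assuming the per-grid inside fraction has already been computed from the contour):

```python
import numpy as np

def slice_area(inside_fraction, cell_area=1.0):
    """inside_fraction[i, j] is the fraction of grid (i, j) lying inside
    the lung region contour. Grids more than half inside are counted in
    full; the rest are ignored, per the rule described above."""
    return float((np.asarray(inside_fraction) > 0.5).sum()) * cell_area

area = slice_area([[1.0, 0.6], [0.4, 0.0]])   # 2 grids counted -> 2.0
```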
In addition, the embodiment of the invention may also determine the area enclosed by the lung region contour by integration. Alternatively, in the embodiment of the present invention, each layer of lung region contour may be fitted, for example by curve fitting processing, to a standard shape, where the standard shape may be a circle or a rectangle. The curve fitting method may include the least squares method, but this is not a specific limitation of the present invention.
Further, in the case of obtaining the areas formed by the lung region contour of each layer of image in the lesion lung image, the sum of the areas formed by the lung region contours of the layers may be determined as the volume of the lung region. Likewise, a left lung volume may be obtained when the lung region contour is a left lung region contour, and a right lung volume when it is a right lung region contour.
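The accumulation can be sketched as below; the text sums the per-layer areas directly, and the slice-thickness factor (default 1.0 here) is an assumption added so the result carries volume units when the layer spacing is not unit-valued:

```python
def lung_region_volume(layer_areas, slice_thickness=1.0):
    """Sum the area enclosed by the lung region contour in each layer."""
    return sum(layer_areas) * slice_thickness

v = lung_region_volume([2.0, 3.0, 5.0])   # 10.0
```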
In the embodiment of the invention, the lung lobes of the focus lung image can be extracted and segmented by utilizing the lung region volume. The dividing of the lung lobe regions of the focus lung image based on the lung region volume to obtain the lung lobe positions includes: acquiring a first reference value, a second reference value, and a third reference value; dividing the right lung image of the focus lung image by using the first reference value, the second reference value, and the right lung volume to obtain each lung lobe position in the right lung; and dividing the left lung image of the focus lung image by using the third reference value and the left lung volume to obtain each lung lobe position in the left lung.
In the present invention, the right lung may be divided into 3 regions by using the first reference value and the second reference value, namely a right lung first region, a right lung second region, and a right lung third region, corresponding respectively to the right upper lobe, the right middle lobe, and the right lower lobe. The third reference value divides the left lung into 2 regions, a left lung first region and a left lung second region, corresponding respectively to the left upper lobe and the left lower lobe. Therefore, rapid quantitative analysis of clinical diseases can be realized by this method without fine division of the lung lobes.
In an embodiment of the present invention, the first reference value, the second reference value, and the third reference value are preset threshold values determined from the lung images of a large number of healthy subjects, with lung lobe segmentation performed by the PTK toolkit (Pulmonary Toolkit). In the invention, 1003 lung images of healthy subjects are collected, lung lobe segmentation is automatically carried out through the PTK toolkit to obtain lung lobe segmentation images, and the lung lobe segmentation images are then manually corrected through the PTK toolkit to ensure the accuracy of lung lobe segmentation.
In the embodiment of the present invention, to calculate the volumes of the lung lobes, a hyperpolarized noble gas ventilation fast magnetic resonance image (lung image) of the lungs of a healthy subject can be obtained by performing a lung ventilation fast magnetic resonance imaging scan, and the lung volumes (left lung volume and right lung volume) can be calculated by segmenting the voxels containing noble gas signals in that image. After left and right lung segmentation of the lung image, the left and right lung volumes may be calculated; after lung lobe segmentation images are obtained or corrected with the PTK toolkit, the volume of each lung lobe can be calculated. Alternatively, after the left lung and right lung or the lung lobes are segmented, the left lung volume, the right lung volume, and the volume of each lung lobe can be calculated by the lung volume calculation method disclosed in application No. 201480034832.3 (lung measurement).
In the embodiment of the present invention, from the 1003 lung images of healthy subjects, 1003 sets of lung volume data (left lung volume and right lung volume), 1003 sets of left lung volume data, 1003 sets of right lung volume data, and 1003 sets of volume data of each lung lobe are obtained. That is, once the number of lung images of healthy subjects is determined, the lung volume data, left lung volume data, right lung volume data, and volume data of each lung lobe are acquired on that basis.
In the embodiment of the present invention, the right lung image has 3 lung lobes, so only the first reference value and the second reference value are required to complete the division of the right lung image. For example, the first reference value and the second reference value are the average values, over the 1003 subjects, of the volume data of any two of the right lung lobes. That is, the first reference value and the second reference value are each the average value of the volume data of one of the right lung lobes, based on the number of lung images of healthy subjects.
In the embodiment of the present invention, the left lung image has 2 lung lobes, so only 1 reference value, the third reference value, is required. The third reference value may be the average of the volume data of the 1003 left upper lobes or left lower lobes. That is, the third reference value is the average value of the volume data of one of the left lung lobes, based on the number of lung images of healthy subjects.
In an embodiment of the present invention, the lung image of a healthy subject is a lung image without lung disease and with clear lung fissures, and can be manually segmented or corrected based on the PTK toolkit.
In an embodiment of the present invention, dividing the right lung image of the focus lung image by using the first reference value, the second reference value, and the right lung volume to obtain the position of each lung lobe in the right lung includes: dividing the right lung image by using the first reference value, the second reference value, and the right lung volume of the focus lung image, and extracting the three lung lobe positions of the right lung. For example, the first reference value is the average volume of the 1003 right upper lobes, and the second reference value is the average volume of the 1003 right middle lobes. The accumulation starts from the upper side of the right lung image of the focus lung image; when the accumulated volume reaches the first reference value, that part is regarded as the right upper lobe. The accumulation then continues downward, and when the further accumulated volume reaches the second reference value, that part is regarded as the right middle lobe; the remaining part is the right lower lobe.
In an embodiment of the present invention, dividing the left lung image of the focus lung image by using the third reference value to extract the positions of the lung lobes of the left lung includes: dividing the left lung volume in the left lung image by using the third reference value, and extracting the two lung lobe positions of the left lung. For example, if the third reference value is the average volume of the 1003 left upper lobes, the accumulation starts from the upper side of the left lung image; when the accumulated volume reaches the third reference value, that part is regarded as the left upper lobe, and the remaining part is the left lower lobe.
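A sketch of the top-down accumulation for the right lung (slice-level granularity and per-slice volumes as inputs are assumptions; the text divides purely by accumulated volume):

```python
def divide_right_lung(slice_volumes, first_ref, second_ref):
    """Accumulate slice volumes from the top of the right lung image:
    slices within the first reference value form the right upper lobe,
    the next second_ref worth form the right middle lobe, and the rest
    form the right lower lobe."""
    upper, middle, lower = [], [], []
    acc = 0.0
    for i, v in enumerate(slice_volumes):
        acc += v
        if acc <= first_ref:
            upper.append(i)
        elif acc <= first_ref + second_ref:
            middle.append(i)
        else:
            lower.append(i)
    return upper, middle, lower

# five slices of 1.0 each; first_ref = second_ref = 2.0
u, m, l = divide_right_lung([1.0] * 5, 2.0, 2.0)   # ([0, 1], [2, 3], [4])
```

The left lung is divided analogously with the single third reference value separating the upper and lower lobes.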
In the present invention, the method further comprises, in consideration of the age of the patient and the deterioration of the lung disease, correcting the first reference value, the second reference value, and the third reference value to obtain a first corrected reference value, a second corrected reference value, and a third corrected reference value; dividing a right lung image of the focus lung image by using the first correction reference value and the second correction reference value to obtain the position of each lung lobe of the right lung; and dividing the left lung image of the focus lung image by using the third correction reference value to obtain the position of each lung lobe in the left lung.
In the present invention, correcting the first reference value, the second reference value and the third reference value respectively to obtain the first correction reference value, the second correction reference value and the third correction reference value includes: acquiring a first reference volume and a second reference volume; calculating a first ratio of the right lung volume to the first reference volume, and a second ratio of the left lung volume to the second reference volume;
multiplying the first reference value and the second reference value respectively by the first ratio to obtain the first correction reference value and the second correction reference value, and multiplying the third reference value by the second ratio to obtain the third correction reference value.
In an embodiment of the present invention, in acquiring the first reference volume and the second reference volume, the first reference volume is obtained by averaging 1003 pieces of right lung volume data, and the second reference volume is obtained by averaging 1003 pieces of left lung volume data. That is, the first reference volume is the average of the right lung volume data of a set number of lung images of healthy subjects, and the second reference volume is the average of the left lung volume data of the same set number of lung images of healthy subjects.
In some examples, a first ratio of the right lung volume to the first reference volume may be calculated: dividing the right lung volume by the first reference volume gives a first ratio of 0.9, and multiplying the first ratio 0.9 by the first reference value and the second reference value respectively gives the first correction reference value and the second correction reference value. The right lung image of the focus lung image is then divided by using the method described above to obtain the lung lobes of the right lung.
Additionally, a second ratio of the left lung volume to the second reference volume may also be calculated: dividing the left lung volume by the second reference volume gives a second ratio of 0.97, and multiplying the second ratio 0.97 by the third reference value gives the third correction reference value. The left lung image of the focus lung image is then divided by using the method described above to obtain the lung lobes of the left lung.
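The reference-value correction described above can be sketched as follows. All numeric volumes are illustrative assumptions chosen so that the ratios come out to the 0.9 and 0.97 used in the examples; the function name and parameters are not from the patent text.

```python
def correct_reference_values(right_lung_volume, left_lung_volume,
                             first_ref_volume, second_ref_volume,
                             first_ref, second_ref, third_ref):
    """Scale the lobe reference values by the ratio of the patient's
    lung volume to the corresponding healthy-baseline reference volume."""
    first_ratio = right_lung_volume / first_ref_volume   # e.g. 0.9
    second_ratio = left_lung_volume / second_ref_volume  # e.g. 0.97
    return (first_ref * first_ratio,    # first correction reference value
            second_ref * first_ratio,   # second correction reference value
            third_ref * second_ratio)   # third correction reference value

# Illustrative volumes: 2700/3000 = 0.9 (right), 2425/2500 = 0.97 (left).
corrected = correct_reference_values(
    right_lung_volume=2700.0, left_lung_volume=2425.0,
    first_ref_volume=3000.0, second_ref_volume=2500.0,
    first_ref=1000.0, second_ref=800.0, third_ref=1200.0)
# corrected ≈ (900.0, 720.0, 1164.0)
```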
In summary, in the embodiments of the present invention, feature optimization processing may be performed on the image features of each layer of the lung images, and the focus lung image is corrected accordingly, which improves the accuracy of the image features of each layer. The lung region contour is extracted from the corrected lung image, and the lung region volume is calculated from the extracted contour; the corrected image features improve the detection precision of the lung region contour, and thus of the lung region volume. The lung lobe positions are then determined from the lung region volume, and the lung lobes are accurately extracted based on those positions. In addition, since the above process does not require a large amount of manual labor, detection time is saved. The embodiments of the present invention can also divide the lung region accurately.
Furthermore, an embodiment of the present invention further provides an image processing apparatus, and with reference to fig. 2, the image processing apparatus includes:
the acquisition module 10 is configured to acquire image features of a focus lung image, where the focus lung image is a lung image in which the lung region boundary is adhered to the inner wall of the chest cavity;
a correction module 20, configured to perform correction processing on the lesion lung image based on the image features;
a determining module 30, configured to determine a lung region volume of the lesion lung image by using a lung region contour obtained based on the corrected lesion lung image;
the determining module 30 is further configured to determine a lung lobe position of the focal lung image based on the lung region volume;
and an extracting module 40, configured to extract the lung lobes based on the lung lobe positions.
Further, the lung region volume comprises at least one of a left lung volume and a right lung volume, and the determining module 30 is configured to obtain a first reference value, a second reference value and a third reference value;
dividing a right focus lung image of the focus lung image by using the first reference value, the second reference value and the right lung volume to obtain each lung lobe position in the right lung;
and dividing the left focus lung image of the focus lung image by using the third reference value and the left lung volume to obtain the position of each lung lobe in the left lung.
Further, the apparatus further comprises:
the correction module is used for correcting the first reference value, the second reference value and the third reference value respectively to obtain a first correction reference value, a second correction reference value and a third correction reference value;
the determining module 30 is further configured to divide the right focus lung image of the focus lung image by using the first correction reference value and the second correction reference value, so as to obtain lung lobe positions in the right lung.
The determining module 30 is further configured to divide the left focus lung image of the focus lung image by using the third correction reference value, so as to obtain the positions of lung lobes in the left lung.
Further, the extracting module 40 is further configured to segment the lung region contour from the corrected focus lung image by using a region growing method; or input the corrected focus lung image into a convolutional neural network, and obtain the lung region contour from the output of the convolutional neural network.
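The region-growing alternative mentioned above can be sketched as a seed-based flood fill over a slice. This is a toy illustration under stated assumptions: the function name, the 4-connectivity, the toy intensity grid, and the threshold are all illustrative, and a real implementation would operate on CT slices rather than a small list of lists.

```python
from collections import deque

def region_grow(image, seed, threshold):
    """4-connected region growing: starting from `seed`, collect every
    reachable pixel whose intensity differs from the seed pixel's
    intensity by at most `threshold` (breadth-first traversal)."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    visited = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in visited
                    and abs(image[nr][nc] - seed_val) <= threshold):
                visited.add((nr, nc))
                queue.append((nr, nc))
    return visited

# Toy "slice": low intensities stand in for lung tissue, high for chest wall.
slice_img = [
    [9, 9, 9, 9],
    [9, 1, 2, 9],
    [9, 1, 1, 9],
    [9, 9, 9, 9],
]
region = region_grow(slice_img, seed=(1, 1), threshold=2)
# region covers the four low-intensity pixels: (1,1), (1,2), (2,1), (2,2)
```

The boundary of the grown region then serves as the lung region contour for the volume calculation.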
Further, the modification module 20 includes:
the optimization unit is used for performing feature optimization processing on the image features of each layer of image to obtain optimized features respectively corresponding to each layer of image;
and the correction unit is used for performing correction processing on the focus lung image by using the correlation characteristics between the optimized characteristics of the adjacent layer images in the focus lung image.
The optimization unit is specifically configured to perform multi-image feature fusion processing on adjacent layer images in the focus lung image to obtain a fusion feature corresponding to each image in the adjacent layer images, where the adjacent layer images include a first image, taken in order of increasing layer number, and at least one second image adjacent to the first image, and the fusion feature of each image in the adjacent layer images incorporates feature information from every image in the adjacent layer images;
and perform single-image feature fusion processing on the corresponding image features by using the fusion features of the images of the respective layers in the focus lung image, so as to obtain the optimized features of each image.
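The two-stage feature optimization described above can be sketched as follows. The patent does not specify the fusion operations, so simple averaging stands in for both the multi-image fusion (mixing each slice's features with its adjacent layers') and the single-image fusion (blending the fused features back with the slice's own features); the function name and the flat feature vectors are illustrative assumptions.

```python
def optimize_features(slice_features, window=1):
    """slice_features: list of per-slice feature vectors (lists of floats).
    Stage 1: average each slice's features with its adjacent layers'.
    Stage 2: blend the stage-1 result with the slice's original features."""
    n = len(slice_features)
    fused = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        group = slice_features[lo:hi]  # the slice plus its adjacent layers
        fused.append([sum(vals) / len(group) for vals in zip(*group)])
    # Single-image fusion: combine fused features with the original features.
    return [[(f + o) / 2 for f, o in zip(fv, ov)]
            for fv, ov in zip(fused, slice_features)]

feats = [[0.0, 2.0], [2.0, 4.0], [4.0, 6.0]]  # three layers, two features each
opt = optimize_features(feats)
# opt == [[0.5, 2.5], [2.0, 4.0], [3.5, 5.5]]
```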
The correction unit is specifically configured to acquire the correlation features between the optimized features of adjacent layer images in the focus lung image;
perform feature fusion processing on the optimized features respectively corresponding to the adjacent layer images by using the correlation features, so as to obtain optimized fusion features;
correct the image features of any image in the adjacent layer images by using the optimized fusion features to obtain the correction features of that image;
and obtain the corrected focus lung image by using the correction features corresponding to each layer of the focus lung image.
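The correction step described above can be sketched as follows. The patent does not define the correlation feature, so cosine similarity between adjacent slices' optimized features stands in for it, and simple weighted averaging stands in for the fusion and correction operations; every name here is an illustrative assumption.

```python
import math

def correct_slices(optimized, original):
    """For each slice, weight an adjacent slice's optimized features by the
    cosine similarity between the two slices (the stand-in 'correlation
    feature'), fuse the pair, then average the fused result with the
    slice's original features to obtain its corrected features."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    corrected = []
    n = len(optimized)
    for i in range(n):
        j = i + 1 if i + 1 < n else i - 1        # one adjacent layer
        w = max(0.0, cosine(optimized[i], optimized[j]))  # clamp for stability
        fused = [(x + w * y) / (1 + w)
                 for x, y in zip(optimized[i], optimized[j])]
        corrected.append([(f + o) / 2 for f, o in zip(fused, original[i])])
    return corrected

optimized = [[1.0, 0.0], [1.0, 0.0]]
original = [[1.0, 0.0], [1.0, 0.0]]
corrected = correct_slices(optimized, original)
```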
As shown in fig. 3, an embodiment of the present invention also provides an image processing device, which may include: a processor 1001 (e.g., a CPU), a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002. The communication bus 1002 is used to implement connection communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory), and may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the device configuration shown in fig. 3 does not constitute a limitation of the terminal device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 3, the memory 1005, as a storage medium, may include an operating system, a network communication module, a user interface module and an image processing program. The operating system is a program that manages and controls the hardware and software resources of the device, and supports the operation of the image processing program and other software or programs.
In the apparatus shown in fig. 3, the user interface 1003 is mainly used for data communication with the client; the network interface 1004 is mainly used for establishing communication connection with a server; and the processor 1001 may be adapted to call an image processing program stored in the memory 1005, which when executed by the processor implements the steps of the image processing method.
Furthermore, an embodiment of the present invention further provides a storage medium on which an image processing program is stored, and the image processing program, when executed by a processor, implements the steps of the image processing method.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An image processing method, comprising:
acquiring image characteristics of a focus lung image, wherein the focus lung image is a lung image in which a lung region boundary is adhered to the inner wall of a chest cavity;
correcting the focus lung image based on the image characteristics, and determining the lung region volume of the focus lung image based on the lung region contour obtained from the corrected focus lung image;
determining the position of the lung lobe of the focus lung image based on the lung region volume, and extracting the lung lobe based on the position of the lung lobe.
2. The method of claim 1, wherein the lung region volume comprises at least one of a left lung volume and a right lung volume, and wherein determining lung lobe locations of the focal lung image based on the lung region volume comprises:
acquiring a first reference value, a second reference value and a third reference value;
dividing a right focus lung image of the focus lung image by using the first reference value, the second reference value and the right lung volume to obtain each lung lobe position in the right lung;
and dividing the left focus lung image of the focus lung image by using the third reference value and the left lung volume to obtain the position of each lung lobe in the left lung.
3. The method of claim 2, further comprising:
correcting the first reference value, the second reference value and the third reference value respectively to obtain a first correction reference value, a second correction reference value and a third correction reference value;
dividing a right focus lung image of the focus lung image by using the first correction reference value and the second correction reference value to obtain each lung lobe position in the right lung;
and dividing the left focus lung image of the focus lung image by using the third correction reference value to obtain the position of each lung lobe in the left lung.
4. The method of claim 1, wherein before determining the lung region volume of the lesion lung image based on the lung region contour obtained from the modified lesion lung image, the method further comprises:
segmenting the lung region contour from the corrected focus lung image by using a region growing method; or
inputting the corrected focus lung image into a convolutional neural network, and obtaining the lung region contour from the output of the convolutional neural network.
5. The method of claim 1, wherein the focal lung image comprises a plurality of superimposed images, and wherein performing a correction process on the focal lung image based on the image features comprises:
performing feature optimization processing on the image features of each layer of image to obtain optimized features respectively corresponding to each layer of image;
and performing correction processing on the focus lung image by using the correlation characteristics between the optimized characteristics of the adjacent layer images in the focus lung image.
6. The method according to claim 5, wherein the performing feature optimization processing on the image features of the images of the respective layers to obtain optimized features respectively corresponding to the images of the respective layers comprises:
performing multi-image feature fusion processing on adjacent layer images in the focus lung image to obtain fusion features corresponding to the adjacent layer images in the adjacent layer images respectively, wherein the adjacent layer images in the lung images comprise a first image arranged according to the sequence of increasing the number of layers and at least one second image adjacent to the first image, and the fusion features of the images in the adjacent layer images are fused with feature information of any image in the adjacent layer images;
and performing single image feature fusion processing on corresponding image features by using the fusion features of the images of all layers in the lesion lung image to obtain the optimization features of the image.
7. The method according to claim 5 or 6, wherein the performing a correction process on the lesion lung image by using the correlation features between the optimized features of the adjacent layer images in the lesion lung image comprises:
acquiring correlation characteristics among the optimization characteristics of adjacent layer images in the focus lung image;
performing feature fusion processing on the optimized features respectively corresponding to the adjacent layer images by using the correlation features between the optimized features of the adjacent layer images to obtain optimized fusion features;
correcting the image characteristics of any image in the adjacent layer images by using the optimized fusion characteristics to obtain the correction characteristics of any image;
and obtaining a corrected focus lung image by utilizing the correction characteristics corresponding to each layer of image of the focus lung image.
8. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring image characteristics of a focus lung image, wherein the focus lung image is a lung image in which a lung region boundary is adhered to the inner wall of a chest cavity;
the correction module is used for correcting the focus lung image based on the image characteristics;
a determining module, configured to determine a lung region volume of the lesion lung image by using a lung region contour obtained based on the corrected lesion lung image;
the determining module is further configured to determine a lung lobe position of the focal lung image based on the lung region volume;
and the extraction module is used for extracting the lung lobes based on the lung lobe positions.
9. An image processing apparatus characterized by comprising: memory, a processor and an image processing program stored on the memory and executable on the processor, the image processing program, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 7.
10. A storage medium, characterized in that the storage medium has stored thereon an image processing program which, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 7.
CN202010440755.9A 2020-05-22 2020-05-22 Image processing method, device, equipment and storage medium Pending CN111626998A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010440755.9A CN111626998A (en) 2020-05-22 2020-05-22 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010440755.9A CN111626998A (en) 2020-05-22 2020-05-22 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111626998A 2020-09-04

Family

ID=72272284

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010440755.9A Pending CN111626998A (en) 2020-05-22 2020-05-22 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111626998A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113689355A (en) * 2021-09-10 2021-11-23 数坤(北京)网络科技股份有限公司 Image processing method, image processing device, storage medium and computer equipment

Similar Documents

Publication Publication Date Title
US11170482B2 (en) Image processing method and device
CN111325739B (en) Method and device for detecting lung focus and training method of image detection model
CN111340756B (en) Medical image lesion detection merging method, system, terminal and storage medium
CN109767448B (en) Segmentation model training method and device
CN109903269B (en) Method and computing device for determining abnormal type of spine cross-sectional image
CN111862044A (en) Ultrasonic image processing method and device, computer equipment and storage medium
CN111488872B (en) Image detection method, image detection device, computer equipment and storage medium
CN110992370B (en) Pancreas tissue segmentation method and device and terminal equipment
CN111681205B (en) Image analysis method, computer device, and storage medium
CN113192067B (en) Intelligent prediction method, device, equipment and medium based on image detection
CN111626998A (en) Image processing method, device, equipment and storage medium
CN113160199A (en) Image recognition method and device, computer equipment and storage medium
CN111627026A (en) Image processing method, device, equipment and storage medium
CN109767468B (en) Visceral volume detection method and device
CN111652924A (en) Target volume determination method, device, equipment and storage medium
CN111402191B (en) Target detection method, device, computing equipment and medium
CN112215878A (en) X-ray image registration method based on SURF feature points
CN111275673A (en) Lung lobe extraction method, device and storage medium
CN111627037A (en) Image area extraction method, device, equipment and storage medium
CN115880358A (en) Construction method of positioning model, positioning method of image mark points and electronic equipment
CN111627028A (en) Image area division, device, equipment and storage medium
CN111627036A (en) Image area correction method, device, equipment and storage medium
CN111627027A (en) Image area detection method, device, equipment and storage medium
CN110706222B (en) Method and device for detecting bone region in image
CN116659520B (en) Matching positioning method, device and equipment based on bionic polarization vision enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination