CN111652924A - Target volume determination method, device, equipment and storage medium - Google Patents

Target volume determination method, device, equipment and storage medium

Info

Publication number
CN111652924A
CN111652924A (application CN202010440132.1A)
Authority
CN
China
Prior art keywords
image
lung
features
images
reference value
Prior art date
Legal status
Pending
Application number
CN202010440132.1A
Other languages
Chinese (zh)
Inventor
李月
蔡杭
Current Assignee
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date
Filing date
Publication date
Application filed by WeBank Co Ltd
Priority: CN202010440132.1A
Publication: CN111652924A

Classifications

    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • A61B 6/032: Transmission computed tomography [CT]
    • A61B 6/50: Apparatus or devices for radiation diagnosis specially adapted for specific body parts or clinical applications
    • A61B 6/5217: Processing of medical diagnostic data to extract a diagnostic or physiological parameter
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 7/11: Region-based segmentation
    • G06T 7/13: Edge detection
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20221: Image fusion; Image merging
    • G06T 2207/30061: Lung


Abstract

The invention discloses a target volume determination method, device, equipment and storage medium, wherein the method comprises the following steps: acquiring an image optimization feature for each layer image of a lung image, wherein the lung image comprises multiple superimposed layer images; correcting the lung image using the correlation features between the image optimization features of adjacent layer images to obtain a corrected lung image; obtaining the lung region contour of the lung image based on the corrected lung image; and determining the lung region volume using the lung region contour. The invention can accurately detect the lung region volume and improves the precision of lung region volume detection.

Description

Target volume determination method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a target volume determination method, device, equipment and storage medium.
Background
At present, lung image processing is an important subject in the respiratory field, and lung region analysis has good research value both clinically and scientifically. For example, lung volume parameters have high research value and clinical significance for the diagnosis, analysis and prevention of diseases such as emphysema. Providing a method that can accurately detect the lung region volume and effectively assist the clinical diagnosis and analysis of lung images is therefore an urgent technical problem in the image processing field.
Disclosure of Invention
The present invention provides a target volume determination method, device, equipment and storage medium that can accurately detect the lung region volume and improve the precision of lung region volume detection.
To achieve the above object, in a first aspect, the present invention provides a target volume determining method including the steps of:
acquiring an image optimization feature for each layer image of a lung image, wherein the lung image comprises multiple superimposed layer images;
correcting the lung image using the correlation features between the image optimization features of adjacent layer images to obtain a corrected lung image;
obtaining the lung region contour of the lung image based on the corrected lung image;
determining the lung region volume using the lung region contour.
In a second aspect, the present invention also provides a target volume determination apparatus, the apparatus comprising:
an acquisition module for acquiring an image optimization feature for each layer image of a lung image, wherein the lung image comprises multiple superimposed layer images;
a correction module for correcting the lung image using the correlation features between the image optimization features of adjacent layer images to obtain a corrected lung image;
a determination module for obtaining the lung region contour of the lung image based on the corrected lung image, and determining the lung region volume using the lung region contour.
In a third aspect, the present invention further provides a terminal device, where the terminal device includes: a memory, a processor and a target volume determination program stored on the memory and executable on the processor, the target volume determination program when executed by the processor implementing the steps of the target volume determination method.
In a fourth aspect, the present invention further proposes a storage medium having stored thereon a target volume determination program which, when executed by a processor, implements the steps of the target volume determination method.
The invention provides a target volume determination method, device, equipment and storage medium. The method first obtains an image optimization feature for each layer image of a lung image, then corrects the features of the lung image using the correlation features between the optimization features of adjacent layer images, so that the correction combines feature information from at least two layers of the lung image. The lung region contour is then extracted from the corrected lung image, and the lung region volume is calculated from the areas enclosed by the extracted contours. Fusing multi-layer feature information improves the detection precision of the lung region contour and hence the determination precision of the lung region volume. In addition, since this scheme requires little manual labour, it also saves detection time.
Drawings
FIG. 1 shows a flow chart of a target volume determination method according to an embodiment of the invention;
fig. 2 shows a flow chart of step S10 in a target volume determination method according to an embodiment of the present invention;
fig. 3 shows a flow chart of step S11 in a target volume determination method according to an embodiment of the present invention;
fig. 4 shows a flow chart of step S12 in a target volume determination method according to an embodiment of the present invention;
fig. 5 shows a flow chart of step S20 in a target volume determination method according to an embodiment of the present invention;
FIG. 6 shows a flow diagram of a target area division method according to an embodiment of the invention;
FIG. 7 shows a schematic structural diagram of a target volume determining apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that, in the embodiment of the present invention, the target volume determining device may be a smart phone, a personal computer, a server, and the like, and is not limited herein.
The embodiment of the invention provides a target volume determination method for detecting the volume of the lung region enclosed by a lung region contour in a lung image, where the lung region contour is the boundary contour of the lung region in the lung image. The execution subject of the method may be any image processing apparatus; for example, the method may be performed by a terminal device or a server, where the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. The server may be a local server or a cloud server. In some possible implementations, the target volume determination method may be implemented by a processor invoking computer-readable instructions stored in a memory.
An embodiment of the present invention provides a method for determining a target volume, as shown in fig. 1, including:
s10: and acquiring image optimization characteristics of each layer of image in the lung image.
Here the lung image comprises multiple superimposed layer images, and the image optimization features are obtained by performing feature optimization processing on the image features of each layer image, one optimization feature corresponding to each layer's image features.
in some possible embodiments, the manner of acquiring the lung image may include: the lung image is captured by CT (computed tomography), or the captured lung image may be received from another electronic device or a server. Wherein the lung image may comprise a tomographic (image) of a multi-layered lung, by superposition of which a global lung image may be formed.
In some possible embodiments, in the case of obtaining the lung image, an image feature of the lung image may be further extracted, where a pixel value corresponding to a pixel point in each layer image of the lung image may be directly used as the image feature, or the image feature of the image may also be obtained by performing a feature extraction process on the image.
In some possible embodiments, the optimization may be implemented by applying convolution to the image features of each layer image; the optimization adds finer detail information and so enriches the features. Each layer image is optimized separately to obtain its corresponding optimization feature. Alternatively, the image features of adjacent layer images may be connected to obtain a connection feature, and feature processing applied to that connection feature so that the adjacent layers' image features fuse with each other and the feature precision improves; the resulting fused feature is then convolved by two separate convolution layers to obtain the optimization feature of each layer image in the adjacent pair.
S20: modifying the lung image by using the correlation characteristics between the image optimization characteristics of the adjacent layer images in the lung image to obtain a modified lung image;
In some possible embodiments, once the optimization features of the layer images are obtained, the correlation features between the optimization features of adjacent layer images may be computed; each element of a correlation feature represents the degree of correlation between the feature values at the same position in the two adjacent layers' optimization features. Using these correlation features, each layer image in the lung image can be corrected and the accuracy of the lung image improved. In the embodiments of the invention, the correlation features, image features, optimization features, and the fusion and residual features introduced below may all be represented in vector or matrix form.
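The patent does not specify how the per-position degree of correlation is computed; one plausible sketch uses channel-wise cosine similarity between the optimization features of two adjacent layers (the function name and the similarity choice are assumptions):

```python
import numpy as np

def position_correlation(feat_a, feat_b, eps=1e-8):
    """Per-position correlation between two (C, H, W) optimization features.

    Each output element measures how strongly the feature vectors at the
    same spatial position in the two adjacent layers agree; cosine
    similarity along the channel axis is one plausible choice.
    """
    num = (feat_a * feat_b).sum(axis=0)
    den = np.linalg.norm(feat_a, axis=0) * np.linalg.norm(feat_b, axis=0) + eps
    return num / den  # shape (H, W), values in [-1, 1]
```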
In some possible embodiments, a feature fusion process between the image optimization features of adjacent layer images may be performed using the obtained correlation features, so as to obtain fusion features. Through this fusion, the image features of adjacent layer images are effectively merged and the lung image can be corrected. In this embodiment of the invention, an adjacent layer pair consists of any one layer image of the lung image and the next layer image; in other embodiments it may consist of any one layer image and its next n layer images, where n is an integer greater than or equal to 1 and less than or equal to 3. That is, adjacent layers may be two directly adjacent layer images, or a group of neighbouring layer images; the invention does not specifically limit this.
In some possible embodiments, once a fusion feature is obtained, it may be used to correct any image of the adjacent layer pair. For example, the fusion feature may be added to that image's own feature to obtain a corrected image feature; the image corresponding to the corrected image feature is the corrected image.
S30: obtaining the lung region contour of the lung image based on the corrected lung image.
In the embodiments of the invention, the corrected lung image can be obtained by superimposing the corrected layer images of the lung image. Alternatively, convolution may first be applied to each corrected layer image to further improve the image precision, and the resulting image features are then superimposed to obtain the corrected lung image. The lung image obtained in step S10 may be at least one of a left lung image, a right lung image, or a whole lung image. Correspondingly, when the lung image is a left lung image the obtained contour is the left lung region contour; when it is a right lung image the obtained contour is the right lung region contour; and when it is a whole lung image containing both lungs, the obtained contour may include at least one of the left and right lung region contours.
By performing lung region segmentation on the corrected image, the corresponding lung region contour can be obtained; for example, the lung region may be detected by a convolutional neural network, yielding the lung region contour. Alternatively, the lung region contour may be obtained by processing the corrected lung image with a region-growing method.
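The region-growing alternative can be sketched as follows (a minimal 4-connected flood fill on one corrected slice; the seed point and intensity tolerance are illustrative choices, not taken from the patent):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, absorbing 4-connected neighbours whose
    intensity lies within `tol` of the seed intensity. The boundary of the
    returned mask is the extracted region contour."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    seed_val = image[seed]
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(image[ny, nx] - seed_val) <= tol:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```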
S40: determining a lung region volume using the lung region contour.
In some possible embodiments, the volume of the region enclosed by the lung region contour is taken as the lung region volume. The lung region volume can be determined from the sum of the areas enclosed by the lung region contour in each layer. For example, the left lung region volume may be obtained from the areas enclosed by the left lung region contour in each layer image, and the right lung region volume from the areas enclosed by the right lung region contour in each layer image.
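A sketch of that area summation follows; the millimetre scale factors are the usual way to turn per-layer pixel counts into a physical volume, but they are an assumption here, since the patent only states that the volume follows from the summed per-layer contour areas:

```python
import numpy as np

def lung_region_volume(layer_masks, pixel_area_mm2, layer_spacing_mm):
    """Sum the area enclosed by the lung region contour in each layer
    (here: pixel count inside each boolean mask times the pixel area),
    then scale by the spacing between layers to obtain a volume."""
    areas = np.array([mask.sum() for mask in layer_masks]) * pixel_area_mm2
    return float(areas.sum() * layer_spacing_mm)
```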
It should be noted that the embodiment of the present invention may be implemented by a neural network, or may be implemented by an algorithm defined in the present application, and may be implemented as an embodiment of the present invention as long as the embodiment is included in the scope of the technical solution protected by the present application.
Based on the configuration, the embodiment of the present invention can obtain the correlation features between the optimized features of the image features of the adjacent layer images, and when the optimized feature fusion process is performed through the correlation features, the feature information between the adjacent layer images can be fused according to the correlation of different features at the same position in the correlation features, so as to improve the correction effect of the lung images, and improve the detection accuracy of the lung region contour and the detection accuracy of the lung region volume.
The following describes embodiments of the present invention in detail with reference to the accompanying drawings. The embodiments may first obtain the image feature of each layer image of the lung image. As described above, the pixel values of the pixel points of each layer image may be used directly as the corresponding image features, or the image features may be extracted by a feature extraction neural network. The feature extraction neural network may comprise a residual network or a feature pyramid network; the lung image is input into the network to obtain the image features of the lung image, including those of each layer image. Alternatively, each layer image may be input into the network separately, yielding the image feature of each layer image. The image features may be expressed in vector or matrix form and include feature information for each pixel point of the corresponding image; the invention does not specifically limit their form.
Once the image features of each layer image are obtained, feature optimization processing can be performed on them to obtain the corresponding optimization features. In some possible embodiments, convolution may be applied directly to the image features of each layer image, further improving their precision and enriching detail information to produce the optimization features. Alternatively, the image features of adjacent layer images may be fused, and the fusion result used to optimize the image features.
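As a minimal sketch of the per-layer convolution route (single channel, zero padding; the kernel is illustrative, and in practice the weights would be learned by the network):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv3x3_same(feature, kernel):
    """Plain 3x3 convolution, stride 1, zero-padded so the (H, W) output
    matches the input, as one simple instance of the convolution processing
    used here to enrich a layer's image feature."""
    padded = np.pad(feature, 1)
    windows = sliding_window_view(padded, (3, 3))  # shape (H, W, 3, 3)
    return np.einsum("hwij,ij->hw", windows, kernel)
```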
Fig. 2 shows a flowchart of step S10 in the target volume determination method according to an embodiment of the present invention, where the acquiring image optimization features of each layer image in a lung image includes:
s11: performing multi-image feature fusion processing on adjacent layer images in the lung images to obtain fusion features corresponding to the adjacent layer images in the adjacent layer images respectively, wherein the adjacent layer images in the lung images comprise a first image arranged according to an increasing sequence of the number of layers and at least one second image adjacent to the first image, and the fusion features of the images in the adjacent layer images are fused with feature information of any image in the adjacent layer images;
in this embodiment of the present invention, the adjacent layer images may be defined as including a first image and at least one second image, where the second image is an image adjacent to the first image in a direction in which the number of layers increases, for example, the first image is an ith layer image, and the second image may be an i +1 th layer image, or may also be an i +1 th to i + n th images, where n is an integer greater than 1, and n may be an integer less than 5 in this embodiment of the present invention, but is not limited to this specific embodiment of the present invention. According to the embodiment of the invention, each layer of image can be sequentially determined as the first image according to the sequence of the first layer image to the Nth layer image in the lung image, and the feature fusion and correction of the first image can be realized by combining the features of the second image adjacent to the first image.
According to the embodiment of the invention, the first fusion feature corresponding to the first image and the second fusion feature corresponding to the second image can be respectively obtained through multi-image feature fusion between the image feature of the first image and the image feature of the second image. The image features of the first image and the second image can be fused with each other through multi-image feature fusion processing, and therefore the obtained first fusion feature and the second fusion feature respectively comprise feature information of the first image and the second image. Here, the first image may be any one of the lung images.
By the above configuration, a fusion feature corresponding to each layer of image in the lung image can be obtained, and the fusion feature can include feature information of the layer of image and feature information of an image adjacent to the layer of image.
S12: performing single-image feature fusion processing on the corresponding image features using the fusion feature of each layer image of the lung image, to obtain the optimization feature of the image.
In some possible embodiments, when obtaining the fusion feature of each layer of image, a single image feature fusion process may be performed on the image features of the image by using the fusion feature of the image. For example, when a first fusion feature of a first image and a second fusion feature of a second image are obtained, a single-image feature fusion may be performed on the image feature of the first image by using the first fusion feature, and a single-image feature fusion may be performed on the image feature of the second image by using the second fusion feature, so as to obtain a first optimized feature and a second optimized feature respectively.
The single image feature fusion processing can further enhance the respective image features on the basis of the fusion features respectively corresponding to the layers. For example, the respective image features may be further enhanced on the basis of the first fusion feature of the first image and the second fusion feature of the second image, such that the obtained first optimization feature also simultaneously fuses the feature information of the second image on the basis of the image feature of the first image, and such that the obtained second optimization feature also simultaneously fuses the feature information of the first image on the basis of the image feature of the second image.
The multi-image feature fusion and the single-image feature fusion according to the embodiments of the present invention are described below with reference to the drawings. Fig. 3 shows a flowchart of step S11 in a target volume determination method according to an embodiment of the present invention. The multi-image feature fusion processing performed on adjacent layer images in the lung image to obtain the fusion feature of each image in the adjacent pair comprises:
s111: connecting the image characteristics of the adjacent layer images to obtain a first connecting characteristic;
s112: processing the first connection characteristic by utilizing a first residual error network to obtain a first residual error characteristic;
s113: and performing convolution processing on the first residual error features by utilizing at least two convolution layers respectively to correspondingly obtain fusion features of the adjacent layer images respectively.
In the embodiment of the invention, when performing the multi-image feature fusion, the image features of the images in the adjacent pair may be connected to obtain the first connection feature. For example, the connection may be performed through a concatenation function (concat), so that the feature information of the adjacent layer images is preliminarily merged.
In the case where the first connection characteristic is obtained, the first connection characteristic may be further subjected to optimization processing. In the embodiment of the present invention, the feature optimization process may be performed by using a residual error network (first residual error network). The first connection feature may be input to a first residual block (residual block) to perform feature optimization processing, so as to obtain a first residual feature. The processing of the first residual error network can further fuse the feature information in the first connection feature and improve the accuracy of the feature information, that is, the feature information in the first image and the second image is further accurately fused in the first residual error feature. The first residual network may be any residual network structure, which is not specifically limited in the present invention.
In some possible embodiments, once the first residual feature is obtained, convolution processing may be performed on it with different convolution layers. For example, when the adjacent pair consists of the first image and the second image, two convolution layers may be applied to the first residual feature to obtain the first fusion feature for the first image and the second fusion feature for the second image respectively. The two convolution layers may use, but are not limited to, 1×1 convolution kernels. The first fusion feature contains feature information of the second image, and the second fusion feature contains feature information of the first image; that is, each fusion feature contains feature information of both images.
By the configuration, the fusion of the feature information of the multiple images of each image in the adjacent layer images can be realized, and the correction precision of each layer image in the lung image can be improved by means of the fusion of the interlayer information.
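Steps S111 to S113 can be sketched end-to-end in NumPy (1×1 convolutions stand in for the real convolution layers, and all weight shapes and initialisations below are illustrative assumptions, not taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    """A 1x1 convolution on a (C_in, H, W) map is a per-pixel channel mix."""
    return np.tensordot(w, x, axes=([1], [0]))  # (C_out, H, W)

def residual_block(x, w1, w2):
    """Minimal stand-in for the first residual network: two 1x1 convs,
    a ReLU, and a skip connection."""
    return x + conv1x1(np.maximum(conv1x1(x, w1), 0.0), w2)

def multi_image_fusion(feat_i, feat_j, p):
    joint = np.concatenate([feat_i, feat_j], axis=0)   # S111: connect
    refined = residual_block(joint, p["w1"], p["w2"])  # S112: residual net
    # S113: two separate 1x1 conv heads split the refined feature back
    # into one fusion feature per image of the adjacent pair.
    return conv1x1(refined, p["head_i"]), conv1x1(refined, p["head_j"])

# Illustrative shapes: 4 feature channels per layer, 5x5 spatial grid.
p = {"w1": rng.standard_normal((8, 8)) * 0.1,
     "w2": rng.standard_normal((8, 8)) * 0.1,
     "head_i": rng.standard_normal((4, 8)) * 0.1,
     "head_j": rng.standard_normal((4, 8)) * 0.1}
fused_i, fused_j = multi_image_fusion(rng.standard_normal((4, 5, 5)),
                                      rng.standard_normal((4, 5, 5)), p)
```

Each output again has one feature map per image of the adjacent pair, and each mixes information from both inputs, matching the description above.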
Fig. 4 shows a flowchart of step S12 in a target volume determination method according to an embodiment of the present invention. The obtaining of the optimized features of the image by performing single image feature fusion processing on the corresponding image features by using the fusion features of each layer of images in the lung image comprises:
S121: obtaining a summation feature of the image by summation processing of the fusion feature of the image and the image feature;
S122: processing the summation feature of the image by using a second residual network to obtain the optimized feature of the image.
In some possible embodiments, in the case of obtaining the fusion features of the images of the respective layers, the optimization processing of the image features is performed using the fusion features and the corresponding image features. The embodiment may first obtain the summation feature through summation processing of the fusion feature of the image and the image feature, and then optimize the summation feature by using a residual network (a second residual network) to obtain the optimized feature of the image. For example, for the first image, the optimization processing may be performed by adding the image feature of the first image and the first fusion feature. The addition may be a direct addition of the first fusion feature and the image feature of the first image, or a weighted addition, that is, the first fusion feature and the image feature of the first image are respectively multiplied by corresponding weighting coefficients and then summed, where the weighting coefficients may be preset values or values learned by a neural network, which is not limited in the present invention.
Similarly, in the case of obtaining the second fusion feature, the single image feature fusion processing of the second image may be performed by using the second fusion feature. The embodiment of the present invention may perform the fusion processing by adding the image feature of the second image and the second fusion feature, where the addition may be a direct addition or a weighted addition, that is, the second fusion feature and the image feature of the second image are respectively multiplied by corresponding weighting coefficients and then summed, where the weighting coefficients may be preset values or values learned by a neural network, and the present invention is not limited thereto.
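The direct and weighted addition described above can be sketched as follows (the weighting coefficients here are hypothetical preset values, not learned ones):

```python
import numpy as np

def single_image_fusion(image_feat, fusion_feat, w_img=1.0, w_fus=1.0):
    # Weighted summation of an image feature and its fusion feature.
    # With the default weights this reduces to a direct addition.
    return w_img * image_feat + w_fus * fusion_feat

feat = np.ones((2, 2))
fused = np.full((2, 2), 3.0)
direct = single_image_fusion(feat, fused)              # 1 + 3 = 4 everywhere
weighted = single_image_fusion(feat, fused, 0.5, 0.5)  # 0.5 + 1.5 = 2 everywhere
```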
It should be noted that, in the embodiment of the present invention, the order in which the summation of the image feature of the first image with the first fusion feature and the summation of the image feature of the second image with the second fusion feature are performed is not specifically limited; the two may be performed separately or simultaneously.
By the above summation processing, the feature information of the original image can be further supplemented on the basis of the fusion feature. The fusion of the single image features allows the feature information of the single-layer image to be retained at each stage of the network, and further allows the feature information of the single-layer image to be optimized according to the optimized feature information among the multi-layer images. In addition, the embodiment of the present invention may directly use the first summation feature and the second summation feature as the first optimized feature and the second optimized feature, or may perform subsequent optimization processing to further improve the feature accuracy.
With the above configuration, the optimized features of the single-layer images in the lung image can be obtained, and in the case where the optimized features are obtained, feature optimization of the lung image can be performed using the correlation features between the optimized features.
Fig. 5 shows a flowchart of step S20 in a target volume determination method according to an embodiment of the present invention. Wherein, the modifying the lung image by using the correlation characteristics between the image optimization characteristics of the adjacent layer images in the lung image to obtain a modified lung image comprises:
s21: acquiring correlation characteristics among image optimization characteristics of adjacent layer images in the lung image;
s22: performing feature fusion processing on the optimized features respectively corresponding to the images of the adjacent layers by using the correlation features between the image optimized features of the images of the adjacent layers to obtain optimized fusion features;
s23: correcting the image characteristics of any image in the adjacent layer images by using the optimized fusion characteristics to obtain the correction characteristics of any image;
s24: and obtaining a corrected lung image by using the correction characteristics corresponding to each layer of image of the lung image.
In the embodiment of the present invention, when the optimized features of each layer of image in the lung image are obtained, the corrected lung image may be obtained by using the correction features respectively corresponding to each layer of image of the lung image.
In the embodiment of the present invention, in the case of obtaining the optimized features of each layer of image in the lung image, the correlation features between the optimized features of adjacent layer images may be determined. For example, the correlation feature between the first optimized feature corresponding to the first image and the second optimized feature corresponding to the second image may be obtained; the correlation feature may represent the degree of correlation between the feature information at the same position in the first optimized feature and the second optimized feature. The degree of correlation may reflect the change of the same object between the first image and the second image. The same object may include, for example, a boundary of a lung region. In the embodiment of the present invention, the scale of each image in the lung image may be the same, and the scale of each correspondingly obtained optimized feature is also the same.
In addition, when the obtained first and second optimized features, the first and second fusion features, the first and second summation features, or the image features of the first and second images have different scales, the corresponding features may be adjusted to the same scale, and the scaling operation may be performed by, for example, pooling processing.
In addition, the embodiment of the present invention may obtain the correlation features between the optimized features of the images in the adjacent layer images through a graph convolutional neural network. For example, the first optimized feature of the first image and the second optimized feature of the second image in the adjacent layer images can be input into the graph convolutional neural network, which, after processing, can output the correlation feature between the first optimized feature and the second optimized feature, where each element in the correlation feature represents the correlation between the feature information at the same position in the first optimized feature and the second optimized feature.
In the embodiment of the present invention, before the correction operation of each layer of image is performed, a fusion feature (optimized fusion feature) of the optimized features of the images in the adjacent layer images may also be obtained. For example, a fusion operation of the first optimized feature and the second optimized feature may be performed, wherein the optimized features of the adjacent layer images may be connected, such as connecting the first optimized feature and the second optimized feature in the channel direction. The embodiment of the present invention can execute the connection process through a concat function to obtain a second connection feature. Then, the correlation features between the optimized features of the adjacent layer images are activated by using an activation function, where the activation function may be a softmax function: each degree of correlation in the correlation features is used as an input parameter, the activation function performs processing on each input parameter, and the processed correlation features are output. Further, the optimized fusion feature may be obtained by using the product of the activated correlation feature and the second connection feature.
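The concatenation, softmax activation, and product can be illustrated with a toy numpy sketch; the per-position correlation used here is a simple channel-wise inner product standing in for the output of the graph convolutional neural network, and all feature values are random placeholders:

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
opt1 = rng.standard_normal((4, 3, 3))  # optimized feature of the first image (C, H, W)
opt2 = rng.standard_normal((4, 3, 3))  # optimized feature of the second image

# second connection feature: concatenate along the channel direction
connected = np.concatenate([opt1, opt2], axis=0)       # (8, 3, 3)

# toy correlation of the feature vectors at each position
corr = (opt1 * opt2).sum(axis=0, keepdims=True)        # (1, 3, 3)
corr = np.broadcast_to(corr, connected.shape)

# activate the correlation feature, then weight the connection feature
optimized_fusion = softmax(corr, axis=0) * connected
```

Since the toy correlation is constant across channels at each position, the softmax here assigns uniform weights; a learned correlation feature would reweight the channels non-uniformly.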
In the case of obtaining the optimized fusion feature of the adjacent layer images, the optimized fusion feature may be utilized to perform a correction operation on any one of the adjacent layer images. The embodiment of the present invention can perform the correction of the image features by adding the image features of the original image and the optimized fusion feature to obtain the corrected image features, namely the corrected image.
For example, the image feature of the first image and the optimized fusion feature may be summed to obtain a corrected image feature, and a corrected image of the lung image may be determined according to the corrected image feature. The summation may be a direct addition, or a weighted addition performed by using weighting coefficients, which is not limited in the present invention. The corrected image features can directly correspond to the pixel values of the pixels of the image, so that the corrected image can be obtained directly from the corrected image features. In addition, convolution processing can be further performed on the corrected image features to further fuse the feature information and improve the feature accuracy, and the corrected lung image is then determined according to the features obtained through the convolution processing.
The image correction process of the embodiment of the present invention can realize at least one of denoising, super-resolution, and deblurring of each layer of image in the lung image, and the image quality can be improved to different degrees by the correction.
In addition, it should be noted that, before performing the processing on the adjacent layer images, the embodiment of the present invention may group the images in the lung image, for example, two layers of images as a group, or n layers of images as a group. The grouping may be performed on each layer of image in the lung image in the order from layer 1 to layer N (N being the total number of layers of the lung image), and the images in the same group are then used as the adjacent layer images.
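The grouping of layers can be sketched as:

```python
def group_adjacent(num_slices, group_size=2):
    """Group slice indices 1..num_slices into consecutive groups; slices in
    the same group are treated as adjacent-layer images."""
    layers = list(range(1, num_slices + 1))
    return [layers[i:i + group_size] for i in range(0, num_slices, group_size)]

groups = group_adjacent(6)      # [[1, 2], [3, 4], [5, 6]]
triples = group_adjacent(7, 3)  # [[1, 2, 3], [4, 5, 6], [7]]
```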
In the case of obtaining the corrected lung image, a lung region detection operation, that is, lung region segmentation processing, may be performed on the corrected lung image. The embodiment of the present invention may segment the lung region contour from the corrected lung image by using a region growing method; the region growing method may adopt an existing method, which is not described in detail in the present invention. Alternatively, the lung region contour may be obtained by inputting the corrected lung image into a convolutional neural network and outputting the contour through the convolutional neural network. The convolutional neural network may be a U-net, which is not a specific limitation of the present invention.
In addition, the neural networks involved in the above embodiments of the present invention, such as the feature extraction neural network, the residual network, and the convolutional neural network, are all network structures that can realize the corresponding functions after training and can meet the accuracy requirements. A person skilled in the art can set different accuracy conditions according to the requirements, which is not specifically limited by the present invention.
Based on the above configuration, the embodiment of the present invention can determine the lung region contour through the feature information of the interlayer images in the lung image, and this configuration can improve the accuracy of the lung image feature information and further the accuracy of the lung region contour.
The corrected lung region contour obtained in the embodiment of the present invention may also be represented in a matrix or vector form, where the corrected lung region contour includes a first label 1 and a second label 0; the first label represents the contour boundary and the image pixel points within the contour boundary, and the second label 0 represents the other regions. The lung region in the lung image can be extracted by the product of the corrected lung region contour and the image features.
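The extraction of the lung region by the product of the 0/1 contour matrix and the image can be illustrated as follows (the mask and image values are hypothetical):

```python
import numpy as np

# 0/1 lung-region mask: 1 marks the contour boundary and its interior
mask = np.array([[0, 0, 0],
                 [0, 1, 1],
                 [0, 1, 1]])
image = np.array([[5, 5, 5],
                  [5, 7, 8],
                  [5, 9, 6]])

lung_region = mask * image  # element-wise product keeps only lung pixels
```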
In addition, in the case of obtaining the contour of the lung region, the volume formed by the lung region may be determined using the obtained lung region contour. As described in the above embodiment, the volume of the lung region can be obtained by accumulating the region areas enclosed by the lung region contour of each layer of image.
In some possible embodiments, the lung region volume may be obtained by using the area corresponding to the lung region contour in each layer of image. Each layer of image may be divided into a mesh, each mesh cell having a preset size, such as a square with a side length of 1 mm; the size is not specifically limited in the present invention, and in general a smaller size may be set to improve the detection accuracy of the area. When the lung region contour passes through a certain cell, it can be determined whether the part of the cell inside the lung region exceeds half of the cell; if so, the area of the cell is counted as area inside the lung region, and if not, the cell is ignored. The area enclosed by the lung region contour can then be obtained by summing the areas of the cells inside the lung region contour.
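The grid rule (count a cell only when more than half of it lies inside the contour) can be sketched as follows; `inside` is a hypothetical point-in-region test, and the estimate is checked against a circle whose true area is known:

```python
import numpy as np

def grid_area(inside, x_range, y_range, cell=1.0, sub=5):
    """Estimate the area enclosed by a contour on one slice.

    inside(x, y) returns True for points inside the region. Each cell of side
    `cell` is counted in full when more than half of its sub-samples fall
    inside the region, and ignored otherwise."""
    offs = (np.arange(sub) + 0.5) / sub * cell  # sub-sample offsets within a cell
    area = 0.0
    for x in np.arange(x_range[0], x_range[1], cell):
        for y in np.arange(y_range[0], y_range[1], cell):
            px, py = np.meshgrid(x + offs, y + offs)
            if np.mean(inside(px, py)) > 0.5:   # more than half of the cell inside?
                area += cell * cell
    return area

# sanity check: a circle of radius 10 mm has true area pi * 100 ~ 314.16 mm^2
circle = lambda x, y: x * x + y * y < 100.0
approx = grid_area(circle, (-11.0, 11.0), (-11.0, 11.0))
```

Shrinking `cell` improves the accuracy of the estimate, consistent with the remark above that a smaller cell size improves detection accuracy.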
In addition, the embodiment of the present invention can also determine the area of the lung region contour by integrating over the lung region contour.
Alternatively, in the embodiment of the present invention, each layer of lung region contour may also be fitted, such as by curve fitting processing, to a standard shape, where the standard shape may be a circle or a rectangle. The curve fitting method may include, but is not limited to, a least squares method, which is not a specific limitation of the present invention.
Further, in the case where the areas formed by the lung region contours of each layer of the lung image are obtained, the sum of the areas formed by the lung region contours of each layer may be determined as the volume of the lung region. Likewise, a left lung volume may be obtained in the case where the lung region contour is a left lung region contour, and a right lung volume may be obtained in the case where the lung region contour is a right lung region contour. Further, the embodiment of the present invention may also obtain the volume of each lobe region in the left lung and the right lung, where the obtained lung region contour is the lobe region contour.
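The area-summation rule can be sketched as follows (the optional `slice_thickness` factor is an assumption for non-unit slice spacing, not stated in the text above, which sums areas directly):

```python
def lung_volume(slice_areas, slice_thickness=1.0):
    # Sum the area enclosed by the lung region contour in each layer;
    # with unit slice spacing this reduces to the plain sum of areas.
    return sum(a * slice_thickness for a in slice_areas)

left_volume = lung_volume([100.0, 120.0, 110.0])  # 330.0
```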
Based on the above configuration, determination of the lung region volume can be achieved. The extraction precision of the lung region contour is improved by fusing the feature information in the multi-layer images, and the detection precision of the lung region volume is further improved.
In addition, in the embodiment of the present invention, when the lung region volume is obtained, the lung region may be further divided, such as dividing a plurality of lobes of the left lung from the left lung volume and a plurality of lobes of the right lung from the right lung volume.
Fig. 6 shows a flowchart of a target region dividing method according to an embodiment of the present invention, in particular, the target region dividing method may be a lung region dividing method, wherein the target region dividing method includes:
S100: acquiring a first reference value, a second reference value and a third reference value;
S200: processing the lung image by using the target volume determination method to obtain a right lung volume and a left lung volume;
S300: dividing a right lung image of the lung image by using the first reference value, the second reference value and the right lung volume to obtain a right lung divided image, and/or dividing a left lung image of the lung image by using the third reference value and the left lung volume to obtain a left lung divided image.
In the related art, many scholars have proposed and improved lung lobe segmentation methods, such as the method of the 2019 article Automatic segmentation of pulmonary lobes using adaptive reactive dense network. Lung lobe segmentation aims to determine the specific distribution of a disease within a certain lung lobe so as to perform quantitative analysis of the disease. However, lung lobe segmentation always carries certain errors, and the current segmentation methods take too long to segment lung lobes both accurately and rapidly, so they cannot be used for rapid clinical quantitative analysis of diseases. In particular, for patients with lung fissure adhesion or serious emphysema, lung images cannot be segmented by a lung lobe segmentation method at all. The lung region division method provided by the embodiment of the present invention can meet the requirements of clinical quantitative disease analysis and solve these problems, which is of great significance especially for such patients.
In the embodiment of the present invention, the right lung image may be divided into 3 regions, namely a right lung first region, a right lung second region and a right lung third region, by using the first reference value and the second reference value. The right lung first region corresponds to the right upper lobe, the right lung second region corresponds to the right middle lobe, and the right lung third region corresponds to the right lower lobe. The left lung image may be divided into 2 regions, a left lung first region and a left lung second region, by using the third reference value. The left lung first region corresponds to the left upper lobe, and the left lung second region corresponds to the left lower lobe. Therefore, rapid quantitative analysis of clinical diseases can be realized by the above method without fine segmentation of the lung lobes.
In some possible embodiments, the lung image may be obtained by CT (computed tomography), MRI (magnetic resonance imaging), X-ray imaging, and the like.
In an embodiment of the present invention, the first reference value, the second reference value and the third reference value are preset threshold values. The first, second and third reference values are determined from lung images of a large number of healthy subjects, and the lung lobe segmentation is performed by means of the PTK toolkit (Pulmonary Toolkit), which is downloadable via the website https://githu. In the embodiment of the present invention, 1003 lung images of healthy subjects are collected, lung lobe segmentation images are then automatically obtained by means of the PTK toolkit, and the lung lobe segmentation images are then manually corrected through the PTK toolkit to obtain corrected lung lobe segmentation images, so that the accuracy of the lung lobe segmentation is ensured.
In an embodiment of the present invention, in addition to obtaining the left lung and/or right lung volume or the volume of each lung lobe region by the above method, a hyperpolarized noble gas ventilation fast magnetic resonance image (lung image) of the lungs of a healthy subject may be obtained by performing a lung ventilation fast magnetic resonance imaging scan, and the lung volumes (left lung volume and right lung volume) are calculated by segmenting the voxels containing noble gas signals in that image. Left and right lung segmentation is performed on the lung image, and the left and right lung volumes can then be calculated; after the lung lobe segmentation images are obtained or corrected by the PTK toolkit, the volume of each lung lobe can be calculated. Alternatively, after the left lung and the right lung or the lung lobes are segmented, the left lung volume, the right lung volume and the volume of each lung lobe can be calculated by the lung volume calculation method disclosed in the lung measurement application No. 201480034832.3.
In the embodiment of the present invention, 1003 sets of lung volume data (left lung volume and right lung volume), 1003 sets of left lung volume data, 1003 sets of right lung volume data, and 1003 sets of volume data of each lung lobe are obtained from the 1003 lung images of healthy subjects. That is, the number of lung images of healthy subjects is set, and the lung volume data (left lung volume and right lung volume), the left lung volume data, the right lung volume data and the volume data of each lung lobe are acquired based on that number of lung images.
In the embodiment of the present invention, the right lung has 3 lung lobes, so only the first reference value and the second reference value are required to obtain the right lung divided image. For example, the first reference value and the second reference value are the averages of the volume data of any two of the right lung lobes over the 1003 subjects. That is, the first reference value and the second reference value are each the average of the volume data of one of the right lung lobes over the set number of lung images of healthy subjects.
In the embodiment of the present invention, the left lung has 2 lung lobes, and thus only 1 reference value, the third reference value, is required. The third reference value may be the average of the volume data of the 1003 left upper lobes or of the 1003 left lower lobes. That is, the third reference value is the average of the volume data of one of the left lung lobes over the set number of lung images of healthy subjects.
In an embodiment of the present invention, a lung image of a healthy subject is a lung image without lung disease and with clear lung fissures, and such a lung image can be manually segmented or corrected based on the PTK toolkit.
In an embodiment of the present invention, the method for dividing the right lung image of the lung image by using the first reference value, the second reference value and the right lung volume to obtain the right lung divided image is as follows. For example, the first reference value is the average of the volume data of the 1003 right upper lobes, and the second reference value is the average of the volume data of the 1003 right middle lobes. The calculation starts from the upper side of the right lung image of the lung image; when the accumulated volume of the right lung image reaches the first reference value, the accumulated portion is regarded as the right upper lobe. The calculation then continues downwards, and when the accumulated volume reaches the second reference value, that portion is regarded as the right middle lobe; the remaining part is the right lower lobe.
In an embodiment of the present invention, the method for dividing the left lung image of the lung image by using the third reference value and the left lung volume to obtain the left lung divided image comprises: dividing the left lung volume in the left lung image by using the third reference value. For example, if the third reference value is the average of the volume data of the 1003 left upper lobes, the calculation starts from the upper side of the left lung image of the lung image; when the accumulated volume of the left lung image reaches the third reference value, that portion is regarded as the left upper lobe, and the remaining part is the left lower lobe.
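The top-down accumulation against the reference values can be sketched as follows, assuming per-slice volumes are already known; resetting the running volume after each cut reflects one reading of "continuing to calculate downwards", and the slice volumes and reference values below are hypothetical:

```python
def divide_lung(slice_volumes, reference_values):
    """Scan slices from the top and start a new region each time the running
    volume reaches the next reference value; the remainder is the last lobe.
    Returns one region label (0, 1, ...) per slice."""
    labels, region, running = [], 0, 0.0
    for v in slice_volumes:
        labels.append(region)
        running += v
        if region < len(reference_values) and running >= reference_values[region]:
            region += 1
            running = 0.0
    return labels

# right lung: two reference values split upper / middle / lower lobes
right_labels = divide_lung([10, 10, 10, 10, 10, 10], [20, 20])  # [0, 0, 1, 1, 2, 2]
# left lung: one reference value splits upper / lower lobes
left_labels = divide_lung([10, 10, 10, 10], [20])               # [0, 0, 1, 1]
```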
In an embodiment of the present invention, in consideration of the age of the patient and the progression of lung disease, the method further comprises: correcting the first reference value, the second reference value and the third reference value respectively to obtain a first correction reference value, a second correction reference value and a third correction reference value; dividing the right lung image of the lung image by using the first correction reference value and the second correction reference value to obtain the right lung divided image; and dividing the left lung image of the lung image by using the third correction reference value to obtain the left lung divided image.
In an embodiment of the present invention, the method for respectively correcting the first reference value, the second reference value and the third reference value to obtain the first correction reference value, the second correction reference value and the third correction reference value comprises: obtaining a first reference volume, a second reference volume, and the right lung volume and the left lung volume in the lung image; correcting the first reference value and the second reference value by using the first reference volume and the right lung volume to obtain the first correction reference value and the second correction reference value, respectively; and correcting the third reference value by using the second reference volume and the left lung volume to obtain the third correction reference value.
In an embodiment of the present invention, in the embodiment in which the first reference value, the second reference value and the third reference value are obtained, the 1003 right lung volume data are averaged to obtain the first reference volume, and the 1003 left lung volume data are averaged to obtain the second reference volume. That is, the first reference volume is the average of the right lung volume data over the set number of lung images of healthy subjects, and the second reference volume is the average of the left lung volume data over that number of lung images.
In an embodiment of the present invention, the method for correcting the first reference value and the second reference value by using the first reference volume and the right lung volume to obtain the first correction reference value and the second correction reference value respectively comprises: calculating a first ratio of the right lung volume to the first reference volume; and multiplying the first ratio by the first reference value and the second reference value, respectively, to obtain the first correction reference value and the second correction reference value.
For example, the first ratio is calculated by dividing the right lung volume by the first reference volume, giving a first ratio of 0.9; the first ratio 0.9 is multiplied by the first reference value and the second reference value, respectively, to obtain the first correction reference value and the second correction reference value. The right lung image of the lung image is then divided by the above method to obtain the right lung divided image.
In an embodiment of the present invention, the method for correcting the third reference value by using the second reference volume and the left lung volume to obtain the third correction reference value comprises: calculating a second ratio of the left lung volume to the second reference volume; and multiplying the second ratio by the third reference value to obtain the third correction reference value.
For example, the second ratio is calculated by dividing the left lung volume by the second reference volume, giving a second ratio of 0.97; the second ratio 0.97 is multiplied by the third reference value to obtain the third correction reference value. The left lung image of the lung image is then divided by the above method to obtain the left lung divided image.
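The ratio-based correction of the reference values can be sketched as follows (the volumes and reference values here are hypothetical):

```python
def corrected_references(reference_values, reference_volume, patient_volume):
    # Scale the population reference values by the ratio of the patient's
    # lung volume to the population reference volume.
    ratio = patient_volume / reference_volume
    return [ratio * r for r in reference_values]

# right lung: a ratio of 4500 / 5000 = 0.9 scales both reference values
first_corr, second_corr = corrected_references([1000.0, 400.0], 5000.0, 4500.0)
```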
In summary, in the embodiments of the present invention, feature optimization processing may be performed on the image features of each layer of image in the lung image to obtain the corresponding optimized features, and this process can improve the accuracy of the image features of each layer of image. In the case of obtaining the optimized features, the features of the lung image can be corrected by using the correlation features between the optimized features of adjacent layer images, so that the correction of the lung image can be realized by combining the feature information of at least two layers of images in the lung image. Furthermore, the extraction of the lung region contour can be realized by using the corrected lung image, and the lung region volume is obtained by calculation from the extracted lung region contour; the fusion of multi-layer image feature information can improve the detection precision of the lung region contour, and further the detection precision of the lung region volume. In addition, since the above configuration does not require a large amount of labor cost, the detection time can be saved. The embodiment of the present invention can also perform accurate division of the lung region.
In addition, an embodiment of the present invention further provides a target volume determining apparatus, and referring to fig. 7, the target volume determining apparatus includes:
the acquiring module 10 is configured to acquire image optimization features of images of each layer in a lung image, where the lung image includes multiple layers of superimposed images;
a correction module 20, configured to perform correction processing on the lung image by using a correlation feature between image optimization features of adjacent layer images in the lung image, so as to obtain a corrected lung image;
a determining module 30, configured to obtain a lung region contour of the lung image based on the modified lung image; and determining a lung region volume using the lung region contour.
Further, the determining module 30 is specifically configured to perform curve fitting on each layer of lung region contour to form a standard shape, and obtain the lung region volume by using a sum of areas of the standard shape corresponding to each layer of lung region contour, or obtain the lung region volume by using a sum of areas corresponding to the lung region contour in each layer of image.
The acquisition module 10 includes:
the fusion unit is used for carrying out multi-image feature fusion processing on adjacent layer images in the lung images to obtain fusion features corresponding to the adjacent layer images respectively;
and the determining unit is used for executing single image feature fusion processing on the corresponding image features by using the fusion features of the images of all layers in the lung image to obtain the image optimization features.
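The two fusion stages above can be sketched as follows. This is a minimal interpretation, assuming "multi-image feature fusion" averages each slice's feature map with those of its neighbours and "single image feature fusion" blends the result back into the slice's own features; the patent does not fix the fusion operators, so both choices here are hypothetical:

```python
import numpy as np

def multi_image_fusion(features):
    """For each slice, fuse its feature map with those of adjacent slices
    (here: a simple mean over the slice and its neighbours)."""
    fused, n = [], len(features)
    for i in range(n):
        lo, hi = max(0, i - 1), min(n, i + 2)
        fused.append(np.mean(features[lo:hi], axis=0))
    return fused

def single_image_fusion(features, fused, alpha=0.5):
    """Blend each slice's own features with its multi-slice fusion result
    to obtain the per-slice 'optimized' features."""
    return [alpha * f + (1 - alpha) * g for f, g in zip(features, fused)]
```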
The correction module 20 is specifically configured to obtain correlation features between image optimization features of adjacent layer images in the lung image;
performing feature fusion processing on the optimized features respectively corresponding to the images of the adjacent layers by using the correlation features between the image optimized features of the images of the adjacent layers to obtain optimized fusion features;
correcting the image characteristics of any image in the adjacent layer images by using the optimized fusion characteristics to obtain the correction characteristics of any image;
and obtaining a corrected lung image by using the correction characteristics corresponding to each layer of image of the lung image.
The determining unit is specifically configured to input the image optimization feature of the adjacent layer image to a graph convolution neural network, and obtain the correlation feature through the graph convolution neural network.
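The graph convolution step can be illustrated by treating each slice's pooled feature vector as a node of a chain graph (slice i connected to slices i-1 and i+1) and applying one propagation step of the standard form A_hat X W. The weights here are random and untrained, so this only shows the shape of the computation, not a usable model:

```python
import numpy as np

def adjacency_chain(n):
    """Normalised adjacency of a chain graph with self-loops:
    D^{-1/2} (A + I) D^{-1/2}, slice i linked to i-1 and i+1."""
    a = np.eye(n)
    for i in range(n - 1):
        a[i, i + 1] = a[i + 1, i] = 1.0
    d = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d @ a @ d

def gcn_layer(x, w, a_hat):
    """One graph-convolution step producing correlation-aware features."""
    return np.maximum(a_hat @ x @ w, 0.0)  # ReLU activation
```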
Further, the apparatus further comprises a dividing module.
The acquisition module is used for acquiring a first reference value, a second reference value and a third reference value;
the dividing module is used for dividing a right lung image of the lung image according to the first reference value, the second reference value and the right lung volume to obtain a right lung divided image;
and dividing the left lung image of the lung image by using the third reference value and the left lung volume to obtain a left lung divided image.
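The patent does not spell out what the reference values are. One plausible reading (an assumption for illustration only) is that they are cumulative volume fractions along the slice axis: two cut points split the right lung into three sub-regions and one cut point splits the left lung into two. Under that assumption, the division reduces to finding the slices at which the cumulative area crosses each fraction of the total volume:

```python
def split_slices_by_volume(slice_areas, fractions):
    """Return the slice indices at which the cumulative area (a proxy
    for cumulative volume) crosses each reference fraction of the total."""
    total = sum(slice_areas)
    cuts, acc = [], 0.0
    targets = iter(f * total for f in fractions)
    target = next(targets, None)
    for i, area in enumerate(slice_areas):
        acc += area
        while target is not None and acc >= target:
            cuts.append(i + 1)
            target = next(targets, None)
    return cuts

# Right lung: two reference values -> three sub-regions.
right_cuts = split_slices_by_volume([1, 1, 1, 1, 1, 1], [1 / 3, 2 / 3])
```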
Further, the apparatus further comprises a correction module.
The correction module is configured to correct the first reference value, the second reference value, and the third reference value to obtain a first correction reference value, a second correction reference value, and a third correction reference value;
the dividing module is configured to divide a right lung image of the lung image by using the first correction reference value and the second correction reference value to obtain a right lung divided image, and to divide a left lung image of the lung image by using the third correction reference value to obtain a left lung divided image.
an embodiment of the present invention provides a target area dividing device, where the device includes:
a determining module, configured to determine a right lung volume and a left lung volume of the lung image by using the target volume determining method;
the acquisition module is used for acquiring a lung image, a first reference value, a second reference value and a third reference value;
the dividing module is used for dividing a right lung image of the lung image by using the first reference value, the second reference value and the right lung volume to obtain a right lung divided image; and dividing the left lung image of the lung image by using the third reference value and the left lung volume to obtain a left lung divided image.
As shown in fig. 8, an embodiment of the present invention further provides a terminal device, which may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, where the communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory), and may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the device configuration shown in fig. 8 does not constitute a limitation of the terminal device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 8, the memory 1005, as a storage medium, may include an operating system, a network communication module, a user interface module, and an image area extraction program. The operating system is a program that manages and controls the hardware and software resources of the device and supports the operation of the image area extraction program and other software or programs.
In the device shown in fig. 8, the user interface 1003 is mainly used for data communication with the client; the network interface 1004 is mainly used for establishing a communication connection with a server; and the processor 1001 may be used to invoke a target volume determination program stored in the memory 1005, which, when executed by the processor 1001, implements the steps of the target volume determination method or the steps of the target region division method.
Furthermore, an embodiment of the present invention further provides a storage medium in which a target volume determination program is stored, and the target volume determination program, when executed by a processor, implements the steps of the target volume determination method.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method of target volume determination, comprising:
acquiring image optimization characteristics of each layer of image in a lung image, wherein the lung image comprises a plurality of layers of superposed images;
modifying the lung image by using the correlation characteristics between the image optimization characteristics of the adjacent layer images in the lung image to obtain a modified lung image;
obtaining a lung region contour of the lung image based on the corrected lung image;
determining a lung region volume using the lung region contour.
2. The method of claim 1, wherein the determining a lung region volume using the lung region contour comprises:
performing curve fitting on each layer of lung region contour to form a standard shape, and obtaining the lung region volume by using the sum of the areas of the standard shapes corresponding to each layer of lung region contour; or
obtaining the lung region volume by using the sum of the areas corresponding to the lung region contours in each layer of image.
3. The method of claim 1, wherein obtaining image optimization features for each layer of images in the lung image comprises:
performing multi-image feature fusion processing on adjacent layer images in the lung image to obtain fusion features corresponding to the images in the adjacent layer images respectively;
and performing single image feature fusion processing on corresponding image features by using the fusion features of the images of all layers in the lung image to obtain the image optimization features.
4. The method according to claim 1, wherein the modifying the lung image by using the correlation features between the image optimization features of the images of the adjacent layers in the lung image to obtain a modified lung image comprises:
acquiring correlation characteristics among image optimization characteristics of adjacent layer images in the lung image;
performing feature fusion processing on the optimized features respectively corresponding to the images of the adjacent layers by using the correlation features between the image optimized features of the images of the adjacent layers to obtain optimized fusion features;
correcting the image characteristics of any image in the adjacent layer images by using the optimized fusion characteristics to obtain the correction characteristics of any image;
and obtaining a corrected lung image by using the correction characteristics corresponding to each layer of image of the lung image.
5. The method of claim 4, wherein determining the correlation features between the image optimization features of the adjacent layer images in the lung image comprises:
and inputting the image optimization features of the adjacent layer images into a graph convolution neural network, and obtaining the correlation features through the graph convolution neural network.
6. The method according to any one of claims 1 to 5, wherein after determining a lung region volume using the lung region contour, the method further comprises:
acquiring a lung image, a first reference value, a second reference value and a third reference value;
dividing a right lung image of the lung image by using the first reference value, the second reference value and the right lung volume to obtain a right lung divided image;
and dividing the left lung image of the lung image by using the third reference value and the left lung volume to obtain a left lung divided image.
7. The method of claim 6, further comprising:
correcting the first reference value, the second reference value and the third reference value respectively to obtain a first correction reference value, a second correction reference value and a third correction reference value;
dividing a right lung image of the lung image by using the first correction reference value and the second correction reference value to obtain a right lung divided image;
and dividing the left lung image of the lung image by using the third correction reference value to obtain a left lung divided image.
8. A target volume determination apparatus, the apparatus comprising:
the acquisition module is used for acquiring image optimization characteristics of each layer of image in a lung image, wherein the lung image comprises a plurality of layers of superposed images;
the correction module is used for correcting the lung image by using the correlation characteristics among the image optimization characteristics of the adjacent layer images in the lung image to obtain a corrected lung image;
a determining module, configured to obtain a lung region contour of the lung image based on the corrected lung image; and determining a lung region volume using the lung region contour.
9. A terminal device, characterized in that the terminal device comprises: memory, a processor and a target volume determination program stored on the memory and executable on the processor, the target volume determination program when executed by the processor implementing the steps of the target volume determination method as claimed in any one of claims 1 to 7.
10. A storage medium, characterized in that the storage medium has stored thereon a target volume determination program which, when executed by a processor, implements the steps of the target volume determination method according to any one of claims 1 to 7.
CN202010440132.1A 2020-05-22 2020-05-22 Target volume determination method, device, equipment and storage medium Pending CN111652924A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010440132.1A CN111652924A (en) 2020-05-22 2020-05-22 Target volume determination method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111652924A true CN111652924A (en) 2020-09-11

Family

ID=72348261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010440132.1A Pending CN111652924A (en) 2020-05-22 2020-05-22 Target volume determination method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111652924A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117062569A (en) * 2021-12-20 2023-11-14 皇家飞利浦有限公司 Estimating lung volume from radiographic images

Similar Documents

Publication Publication Date Title
CN111429421B (en) Model generation method, medical image segmentation method, device, equipment and medium
CN109903269B (en) Method and computing device for determining abnormal type of spine cross-sectional image
CN111179247A (en) Three-dimensional target detection method, training method of model thereof, and related device and equipment
CN111462097A (en) Image processing method, device, equipment and storage medium based on federal learning
CN109767448B (en) Segmentation model training method and device
CN112365413A (en) Image processing method, device, equipment, system and computer readable storage medium
CN111652924A (en) Target volume determination method, device, equipment and storage medium
CN108634934B (en) Method and apparatus for processing spinal sagittal image
CN111402191B (en) Target detection method, device, computing equipment and medium
CN111626998A (en) Image processing method, device, equipment and storage medium
CN109767468B (en) Visceral volume detection method and device
CN111627026A (en) Image processing method, device, equipment and storage medium
CN113192031A (en) Blood vessel analysis method, blood vessel analysis device, computer equipment and storage medium
CN111275673A (en) Lung lobe extraction method, device and storage medium
CN114881930B (en) 3D target detection method, device, equipment and storage medium based on dimension reduction positioning
CN111388000A (en) Virtual lung air retention image prediction method and system, storage medium and terminal
CN111627028A (en) Image area division, device, equipment and storage medium
CN111627037A (en) Image area extraction method, device, equipment and storage medium
CN115375787A (en) Artifact correction method, computer device and readable storage medium
CN115311430A (en) Training method and system of human body reconstruction model and computer equipment
CN111714145B (en) Femoral neck fracture detection method and system based on weak supervision segmentation
Xu et al. A reference image database approach for NLM filter-regularized CT reconstruction
CN113780519A (en) Method and device for generating confrontation network training, computer equipment and storage medium
CN110570417B (en) Pulmonary nodule classification device and image processing equipment
CN111627027A (en) Image area detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination