CN116797591A - DR image processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116797591A
CN116797591A
Authority
CN
China
Prior art keywords
image
lung
chest
edge
groups
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310810472.2A
Other languages
Chinese (zh)
Inventor
杨英健
吴天琦
李勇
华贤国
欧阳张磊
郭朋
郑杰
陈晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Lanying Medical Technology Co ltd
Original Assignee
Shenzhen Lanying Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Lanying Medical Technology Co ltd filed Critical Shenzhen Lanying Medical Technology Co ltd
Priority to CN202310810472.2A priority Critical patent/CN116797591A/en
Publication of CN116797591A publication Critical patent/CN116797591A/en
Pending legal-status Critical Current

Classifications

    • G06T7/0012 Biomedical image inspection
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Neural networks; learning methods
    • G06T7/11 Region-based segmentation
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation; edge detection involving thresholding
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/764 Image or video recognition or understanding using classification, e.g. of video objects
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06T2207/10116 X-ray image
    • G06T2207/10124 Digitally reconstructed radiograph [DRR]
    • G06T2207/20081 Training; learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30061 Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Geometry (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The disclosure relates to a DR image processing method and apparatus, an electronic device and a storage medium. The DR image processing method comprises the following steps: acquiring a DR image to be processed, and determining whether the DR image to be processed is a lung image; and if it is a lung image, performing thoracic cavity detection on the DR image to be processed and removing the information outside the thoracic cavity. The embodiments of the disclosure can realize automatic thoracic cavity detection in DR images.

Description

DR image processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of DR image processing, and in particular relates to a DR image processing method and apparatus, an electronic device and a storage medium.
Background
Digital Radiography (DR) can provide high-resolution, real-time X-ray images and has been widely used in skeletal, chest, dental and other examinations, such as fracture diagnosis, lung disease screening and dental imaging.
Accurate detection of the thoracic cavity in a DR lung image is the basis of diagnosis of the lungs and heart. At present, image information outside the thoracic cavity in DR lung images interferes with intelligent computer-aided diagnosis of the lungs and heart, and this problem needs to be solved.
Disclosure of Invention
Based on the above, the disclosure provides a DR image processing method and apparatus, an electronic device, and a storage medium.
According to an aspect of the present disclosure, there is provided a DR image processing method including:
acquiring a DR image to be processed, and determining whether the DR image to be processed is a lung image;
and if the DR image to be processed is a lung image, performing thoracic cavity detection on the DR image to be processed and removing the information outside the thoracic cavity.
Preferably, the method for performing thoracic cavity detection on the DR image to be processed comprises:
respectively calculating a plurality of first gradient magnitudes corresponding to the transverse direction of each pixel point and a plurality of second gradient magnitudes corresponding to the longitudinal direction of each pixel point in the DR image to be processed;
determining a plurality of total gradient magnitudes based on the plurality of first gradient magnitudes and the plurality of second gradient magnitudes;
integrating the plurality of first gradient magnitudes corresponding to the transverse direction and the plurality of second gradient magnitudes corresponding to the longitudinal direction, each along the direction perpendicular to it, to obtain a plurality of first integral values and a plurality of second integral values;
integrating the plurality of total gradient magnitudes respectively in the two directions corresponding to the longitudinal direction and the transverse direction to obtain a plurality of third integral values and a plurality of fourth integral values;
calculating a plurality of first local maxima corresponding to a plurality of first ratios, and calculating a plurality of first local minima and a plurality of second local minima corresponding to the plurality of first ratios and a plurality of second ratios;
determining a chest image corresponding to the DR image to be processed according to the plurality of first local maxima, the plurality of first local minima, the plurality of second local minima and the chest features; the chest features can be configured with a first segmentation position of the neck or shoulder corresponding to the chest and second segmentation positions on both sides of the chest; and/or,
the method for respectively calculating a plurality of first gradient magnitudes corresponding to the transverse direction of each pixel point and a plurality of second gradient magnitudes corresponding to the longitudinal direction of each pixel point in the DR image to be processed comprises the following steps:
acquiring a gradient operator;
respectively calculating, by utilizing the gradient operator, a plurality of first gradient magnitudes corresponding to the transverse direction of each pixel point and a plurality of second gradient magnitudes corresponding to the longitudinal direction of each pixel point in the DR image to be processed; and/or,
the method of determining a plurality of total gradient magnitudes based on the plurality of first gradient magnitudes and the plurality of second gradient magnitudes, comprising: respectively calculating a plurality of first square sums corresponding to the plurality of first gradient magnitudes and a plurality of second square sums corresponding to the plurality of second gradient magnitudes, and determining the plurality of total gradient magnitudes based on the plurality of first square sums and the plurality of second square sums; and/or,
the method of determining the plurality of total gradient magnitudes based on the plurality of first square sums and the plurality of second square sums, comprising: summing the first square sums and the second square sums respectively, and taking the square root of the sums to obtain the plurality of total gradient magnitudes; and/or,
before calculating the plurality of first local maxima corresponding to the plurality of first ratios, and calculating the plurality of first local minima and the plurality of second local minima corresponding to the plurality of first ratios and the plurality of second ratios: determining, based on the plurality of first integral values and the plurality of third integral values, whether each first ratio of the plurality of first integral values to the plurality of third integral values in the transverse direction is retained; and determining, based on the plurality of second integral values and the plurality of fourth integral values, whether each of a plurality of second ratios of the plurality of second integral values to the plurality of fourth integral values in the longitudinal direction is retained; further, calculating the plurality of first local maxima corresponding to the retained first ratios, and calculating the plurality of first local minima and the plurality of second local minima corresponding to the retained first ratios and retained second ratios; and/or,
calculating differential derivatives of a plurality of first curves corresponding to the plurality of first ratios to obtain the plurality of first local maxima and the plurality of first local minima; and calculating differential derivatives of a plurality of second curves corresponding to the plurality of second ratios to obtain the plurality of second local minima.
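As an illustration of the gradient-profile and local-extremum steps described above, here is a minimal NumPy sketch; it is not the patented implementation, and the use of `np.gradient` in place of a specific gradient operator (and the absence of curve smoothing or ratio screening) are simplifying assumptions.

```python
import numpy as np

def thorax_profiles(img):
    """Column- and row-wise gradient ratio profiles of a 2-D image.

    The local extrema of these profiles are candidates for the
    segmentation positions on both sides of the thorax and at the
    neck/shoulder split (illustrative sketch only).
    """
    img = img.astype(float)
    gx = np.abs(np.gradient(img, axis=1))  # first (transverse) gradient magnitudes
    gy = np.abs(np.gradient(img, axis=0))  # second (longitudinal) gradient magnitudes
    total = np.sqrt(gx ** 2 + gy ** 2)     # total gradient magnitude
    eps = 1e-9
    # integrate each map along the perpendicular direction, then take ratios
    col_ratio = gx.sum(axis=0) / (total.sum(axis=0) + eps)  # first/third integrals
    row_ratio = gy.sum(axis=1) / (total.sum(axis=1) + eps)  # second/fourth integrals
    return col_ratio, row_ratio

def local_extrema(curve):
    """Indices of local maxima and minima of a 1-D curve, found from
    sign changes of the discrete first difference ("differential derivative")."""
    d = np.sign(np.diff(np.asarray(curve, float)))
    maxima = np.where((d[:-1] > 0) & (d[1:] < 0))[0] + 1
    minima = np.where((d[:-1] < 0) & (d[1:] > 0))[0] + 1
    return maxima.tolist(), minima.tolist()
```

On a chest radiograph the column ratio tends to peak near the strong vertical edges at the sides of the thorax, which is why its extrema are useful candidates for the lateral split positions.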
Preferably, the method for determining the chest image corresponding to the DR image to be processed according to the plurality of first local maxima, the plurality of first local minima, the plurality of second local minima and the chest features comprises the following steps:
determining a second segmentation location on both sides of the thorax based on the plurality of first local maxima and the thorax feature;
determining a first segmentation location of the neck or shoulder based on the plurality of second local minima and the thoracic feature;
and determining the chest image corresponding to the DR image to be processed according to the first segmentation position and the second segmentation position.
Preferably, the method for determining the second segmentation position on both sides of the thorax based on the plurality of first local maxima and the thorax feature comprises:
determining the maximum value of all the first local maxima on one side of the central line according to the central line of the DR image to be processed, and configuring the position information corresponding to the maximum value as the to-be-determined segmentation position point on one side of the thorax;
determining the minimum value of all the first local minima on the other side of the central line according to the central line of the DR image to be processed, and configuring the position information corresponding to the minimum value as the to-be-determined segmentation position point on the other side of the thorax; and/or,
a method of determining the first segmentation position of the neck or shoulder based on the plurality of second local minima and the thoracic feature, comprising: configuring the position information corresponding to the minimum of the plurality of second local minima as the first segmentation position of the neck or shoulder.
Preferably, a chest image obtained by performing thoracic cavity detection on the DR image to be processed is configured as a lung image to be segmented; segmentation of the left lung and the right lung is performed based on the lung image to be segmented; and/or,
filtering the DR image to be processed before thoracic cavity detection is performed on the DR image to be processed, and downsampling the filtered DR image to be processed to a set size; and/or,
performing image enhancement by logarithmic transformation on the DR image to be processed of the set size to obtain an enhanced DR image to be processed.
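The preprocessing steps above (smoothing, downsampling to a set size, logarithmic enhancement) can be sketched as follows. This is a dependency-free illustration, not the patented pipeline: a box filter stands in for whatever filter is actually used, and nearest-neighbour resampling stands in for proper interpolation.

```python
import numpy as np

def preprocess(img, size=(256, 256)):
    """Smooth, downsample to a set size, and log-enhance a radiograph."""
    img = img.astype(float)
    # 3x3 mean (box) smoothing via edge padding
    p = np.pad(img, 1, mode='edge')
    smooth = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    # nearest-neighbour downsampling to the set size
    rows = np.linspace(0, smooth.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, smooth.shape[1] - 1, size[1]).astype(int)
    small = smooth[np.ix_(rows, cols)]
    # logarithmic transform stretches dark regions, then rescale to [0, 1]
    enh = np.log1p(small - small.min())
    return (enh - enh.min()) / (np.ptp(enh) + 1e-9)
```

The log transform is a natural choice for radiographs because attenuation is multiplicative, so taking logarithms spreads out the dark (dense) regions where diagnostic detail hides.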
Preferably, the lung image to be segmented is segmented into a left chest image and a right chest image;
performing left lung and right lung segmentation based on the left chest image and the right chest image, respectively; and/or,
a method of left and right lung segmentation based on the left and right chest images, respectively, comprising:
respectively detecting the rib edge boundary, the lung apex boundary, the mediastinum edge and the diaphragm edge of the left chest image and the right chest image;
obtaining a left lung segmentation image according to the rib edge boundary, the lung apex boundary, the mediastinum edge and the diaphragm edge corresponding to the left chest image;
obtaining a right lung segmentation image according to the rib edge boundary, the lung apex boundary, the mediastinum edge and the diaphragm edge corresponding to the right chest image; and/or,
the method for detecting the rib edge boundary of the left chest image comprises the following steps:
constructing a directional derivative template of the left chest image by using the directional derivative, and setting a set weighting depth of the directional derivative template;
performing template traversal of the directional derivative on the left chest image by using the directional derivative template corresponding to the set weighting depth, and superimposing the template traversal result onto the left chest image to obtain a left chest superimposed image;
Performing binarization processing on the left chest superimposed image to obtain a left rib edge binary image;
obtaining a left rib edge angle diagram to be screened according to the left rib edge binary diagram and the left chest superimposed image;
obtaining a screened left rib edge angle diagram based on the left rib edge angle diagram to be screened and a first set rib angle;
performing connected-domain selection on the screened left rib edge angle diagram to obtain the rib edge boundary corresponding to the maximum connected domain; and/or,
the method for obtaining the left rib edge angle diagram to be screened according to the left rib edge binary diagram and the left chest superimposed image comprises the following steps:
performing morphological opening and closing operations and thinning processing on the left rib edge binary image to obtain a morphologically processed left rib edge binary image;
performing an AND operation on the morphologically processed left rib edge binary image and the gradient direction angle of each pixel in the left chest superimposed image to obtain the left rib edge angle diagram to be screened; and/or,
before the directional derivative template of the left chest image is constructed by using the directional derivative, performing Gaussian blur of a set scale on the left chest image to obtain a corresponding left chest Gaussian blur image; further, constructing the directional derivative template of the left chest Gaussian blur image by utilizing the directional derivative; in the rib edge boundary detection of the left chest image, performing template traversal of the directional derivative on the left chest Gaussian blur image by using the directional derivative template corresponding to the set weighting depth, and superimposing the template traversal result onto the left chest image to obtain a left chest Gaussian blur superimposed image; performing binarization processing on the left chest Gaussian blur superimposed image to obtain the left rib edge binary image; obtaining the left rib edge angle diagram to be screened according to the left rib edge binary diagram and the left chest Gaussian blur superimposed image; and/or,
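Two ingredients of the rib-edge step above can be sketched compactly: a directional derivative at a set angle with a weighting depth, and keeping only the largest connected component of a binary edge map. The function names, the 4-connectivity, and the use of `np.gradient` are illustrative assumptions, not the patent's exact template construction.

```python
import numpy as np

def directional_derivative(img, angle_deg, depth=1.0):
    """Directional derivative of a 2-D image at the given angle,
    scaled by a weighting depth: dot product of the gradient with
    the unit direction vector."""
    t = np.deg2rad(angle_deg)
    gy, gx = np.gradient(img.astype(float))  # np.gradient returns axis-0 first
    return depth * (gx * np.cos(t) + gy * np.sin(t))

def largest_component(mask):
    """Keep only the largest 4-connected component of a binary mask
    (plain flood fill; real code would use scipy.ndimage.label)."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    best = np.zeros_like(mask)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                comp, stack = [], [(i, j)]
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(comp) > int(best.sum()):
                    cur = np.zeros_like(mask)
                    ys, xs = zip(*comp)
                    cur[list(ys), list(xs)] = True
                    best = cur
    return best
```

Selecting the maximum connected domain is a cheap way to discard small spurious edge responses while retaining the long rib-edge curve.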
The method for rib border of the right chest image comprises the following steps:
constructing a direction derivative template of the right chest image by using the direction derivative, and setting a set weighting depth of the direction derivative template;
performing template traversal of direction derivative on the right chest image by using a direction derivative template corresponding to the set weighted depth, and overlapping the template traversal result into the right chest image to obtain a right chest overlapping image;
performing binarization processing on the right chest superimposed image to obtain a right rib edge binary image;
obtaining a right rib edge angle diagram to be screened according to the right rib edge binary diagram and the right chest superimposed image;
obtaining a screened right rib edge angle diagram based on the right rib edge angle diagram to be screened and a second set rib angle;
performing connected-domain selection on the screened right rib edge angle diagram to obtain the rib edge boundary corresponding to the maximum connected domain; and/or,
the method for obtaining the right rib edge angle diagram to be screened according to the right rib edge binary diagram and the right chest superimposed image comprises the following steps:
performing morphological opening and closing operations and thinning processing on the right rib edge binary image to obtain a morphologically processed right rib edge binary image;
performing an AND operation on the morphologically processed right rib edge binary image and the gradient direction angle of each pixel in the right chest superimposed image to obtain the right rib edge angle diagram to be screened; and/or,
before the directional derivative template of the right chest image is constructed by using the directional derivative, performing Gaussian blur of a set scale on the right chest image to obtain a corresponding right chest Gaussian blur image; further, constructing the directional derivative template of the right chest Gaussian blur image by utilizing the directional derivative; in the rib edge boundary detection of the right chest image, performing template traversal of the directional derivative on the right chest Gaussian blur image by using the directional derivative template corresponding to the set weighting depth, and superimposing the template traversal result onto the right chest image to obtain a right chest Gaussian blur superimposed image; performing binarization processing on the right chest Gaussian blur superimposed image to obtain the right rib edge binary image; obtaining the right rib edge angle diagram to be screened according to the right rib edge binary diagram and the right chest Gaussian blur superimposed image; and/or,
The method for detecting the left lung tip boundary of the left chest image comprises the following steps:
determining a left lung apex detection area according to the left chest image;
determining a left lung apex edge binary image according to the left lung apex detection area;
fitting by adopting a quadratic function according to the left lung apex edge binary image to obtain a fitted left lung apex boundary; and/or the number of the groups of groups,
the method for determining the left lung apex detection area according to the left chest image comprises the following steps:
detecting a first coordinate corresponding to the uppermost coordinate point of the rib edge of the left chest image;
configuring the region bounded by the hypotenuse formed by the first coordinate and the coordinate point of the upper right corner of the left chest image as the left lung apex detection area; and/or,
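The quadratic fit of the apex boundary described above can be illustrated with an ordinary least-squares polynomial fit; the column/row parameterisation and the function name are assumptions for the sketch.

```python
import numpy as np

def fit_apex_boundary(cols, rows):
    """Fit lung-apex edge points with a quadratic curve
    row = a*col^2 + b*col + c and return the coefficients
    plus the fitted rows at the input columns."""
    cols = np.asarray(cols, float)
    rows = np.asarray(rows, float)
    a, b, c = np.polyfit(cols, rows, 2)   # least-squares quadratic fit
    fitted = a * cols ** 2 + b * cols + c
    return (a, b, c), fitted
```

A quadratic is a reasonable low-order model here because the lung apex presents as a smooth, single-bend arc in a frontal radiograph.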
the method for detecting the right lung apex boundary of the right chest image comprises the following steps:
determining a right lung apex detection area according to the right chest image;
determining a right lung apex edge binary image according to the right lung apex detection area;
fitting with a quadratic function according to the right lung apex edge binary image to obtain the fitted right lung apex boundary; and/or,
the method for determining the right lung apex detection area according to the right chest image comprises the following steps:
detecting a second coordinate corresponding to the uppermost coordinate point of the rib edge of the right chest image;
configuring the region bounded by the hypotenuse formed by the second coordinate and the coordinate point of the upper left corner of the right chest image as the right lung apex detection area; and/or,
a method of left lung mediastinum and diaphragmatic edge detection for the left chest image, respectively, comprising:
binarizing the left chest image to obtain a left chest binary image;
performing edge detection on the left chest binary image to obtain a left chest edge binary image;
obtaining a left chest edge angle diagram according to the left chest binary image and the gradient direction angle of each pixel in the left chest edge binary image;
obtaining a selected left diaphragm and mediastinum edge angle diagram according to the obtained left chest edge angle diagram and the set edge angle range of the diaphragm and mediastinum;
performing connected-domain selection processing according to the selected left diaphragm and mediastinum edge angle diagram, and acquiring the left lung mediastinum and diaphragm edges corresponding to the maximum connected domain; and/or,
the method for binarizing the left chest image to obtain the left chest binary image comprises the following steps:
performing contrast enhancement processing and maximum inter-class variance (Otsu) processing on the left chest Gaussian blur image corresponding to the left chest image to obtain the left chest binary image; and/or,
a method of determining a gradient direction angle for each pixel in the left chest edge binary image, comprising:
performing transverse gradient and longitudinal gradient calculation on the left chest Gaussian blur image corresponding to the left chest image to obtain a left chest transverse gradient map and a left chest longitudinal gradient map;
obtaining the gradient direction angle of each pixel in the left chest Gaussian blur image based on the left chest transverse gradient map and the left chest longitudinal gradient map; and/or,
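The maximum inter-class variance (Otsu) binarization and the per-pixel gradient direction angle used in the steps above can be sketched as follows; the histogram bin count and the use of `np.gradient` are illustrative choices.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Threshold maximising the between-class variance (Otsu's method)."""
    hist, edges = np.histogram(np.asarray(img, float).ravel(), bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                 # class-0 probability up to each bin
    mu = np.cumsum(p * centers)       # cumulative mean
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        # between-class variance: (mu_T*w0 - mu)^2 / (w0*(1-w0))
        var = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[int(np.nanargmax(var))]

def gradient_angle(img):
    """Per-pixel gradient direction angle (degrees), combining the
    transverse and longitudinal gradient maps."""
    gy, gx = np.gradient(np.asarray(img, float))
    return np.degrees(np.arctan2(gy, gx))
```

Selecting the diaphragm and mediastinum by an expected angle range works because those edges run roughly horizontally and vertically, so their gradient direction angles cluster in predictable bands.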
a method of right lung mediastinum and diaphragmatic edge detection for the right chest image, respectively, comprising:
binarizing the right chest image to obtain a right chest binary image;
performing edge detection on the right chest binary image to obtain a right chest edge binary image;
obtaining a right chest edge angle diagram according to the right chest binary image and the gradient direction angle of each pixel in the right chest edge binary image;
obtaining a selected right diaphragm and mediastinum edge angle diagram according to the obtained right chest edge angle diagram and the set edge angle range of the diaphragm and mediastinum;
performing connected-domain selection processing according to the selected right diaphragm and mediastinum edge angle diagram to obtain the right lung mediastinum and diaphragm edges corresponding to the maximum connected domain; and/or,
the method for obtaining the right chest binary image by binarizing the right chest image comprises the following steps:
performing contrast enhancement processing and maximum inter-class variance (Otsu) processing on the right chest Gaussian blur image corresponding to the right chest image to obtain the right chest binary image; and/or,
a method of determining a gradient direction angle for each pixel in the right chest edge binary image, comprising:
performing transverse gradient and longitudinal gradient calculation on the right chest Gaussian blur image corresponding to the right chest image to obtain a transverse gradient map and a longitudinal gradient map of the right chest;
obtaining the gradient direction angle of each pixel in the right chest Gaussian blur image based on the right chest transverse gradient map and the right chest longitudinal gradient map; and/or,
the method for obtaining the left lung segmentation image according to the rib edge boundary, the lung apex boundary, the mediastinum edge and the diaphragm edge corresponding to the left chest image comprises the following steps:
calculating a first left lung tip edge point and a left lung mediastinum edge point corresponding to the shortest distance between a lung tip boundary and a mediastinum edge in the left chest image;
Calculating a second left lung tip edge point and a first left lung rib edge point corresponding to the shortest distance between the lung tip boundary and the rib edge in the left chest image;
calculating a second rib edge point and a diaphragm edge point corresponding to the shortest distance between the rib edge and the diaphragm edge in the left chest image;
obtaining the left lung segmentation image based on the first left lung apex edge point, the left lung mediastinum edge point, the second left lung apex edge point, the first left lung rib edge point, the second rib edge point and the diaphragm edge point; and/or,
the method for obtaining the right lung segmentation image according to the rib edge boundary, the lung apex boundary, the mediastinum edge and the diaphragm edge corresponding to the right chest image comprises the following steps:
calculating a first right lung tip edge point and a right lung mediastinum edge point corresponding to the shortest distance between a lung tip boundary and a mediastinum edge in the right chest image;
calculating a second right lung tip edge point and a first right lung rib edge point corresponding to the shortest distance between the lung tip boundary and the rib edge in the right chest image;
calculating a second rib edge point and a diaphragm edge point corresponding to the shortest distance between the rib edge and the diaphragm edge in the right chest image;
and obtaining a right lung segmentation image based on the first right lung apex edge point, the right lung mediastinum edge point, the second right lung apex edge point, the first right lung rib edge point, the second rib edge point and the diaphragmatic edge point.
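The shortest-distance computations above, which find the pair of points used to stitch adjacent boundaries (apex/mediastinum, apex/rib, rib/diaphragm) into one lung contour, can be sketched with a brute-force pairwise distance; the curve representation as (row, col) point lists is an assumption.

```python
import numpy as np

def closest_pair(curve_a, curve_b):
    """Return the point on each boundary achieving the shortest
    Euclidean distance between two boundaries."""
    a = np.asarray(curve_a, float)   # shape (N, 2)
    b = np.asarray(curve_b, float)   # shape (M, 2)
    # pairwise distance matrix via broadcasting, then the argmin pair
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return tuple(a[i]), tuple(b[j])
```

For long boundaries a KD-tree (e.g. `scipy.spatial.cKDTree`) would avoid the O(N·M) distance matrix, but the brute-force form keeps the idea visible.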
Preferably, the method further comprises: respectively carrying out lung function assessment based on a plurality of left lung segmentation images and a plurality of right lung segmentation images which correspond to a plurality of moments in the breathing process or in the breath-hold state;
wherein the pulmonary function assessment comprises: assessment of lung field area during breathing and/or assessment of lung ventilation and/or assessment of pulmonary blood flow in a breath-hold state.
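One simple quantity behind the lung-field-area assessment above is the mask area at each moment of the breathing cycle; its change over time gives a crude ventilation indicator. This sketch assumes binary segmentation masks and a known pixel area, both illustrative.

```python
import numpy as np

def lung_field_areas(masks, pixel_area_mm2=1.0):
    """Lung-field area (mm^2) for a sequence of binary segmentation
    masks taken at several moments of the breathing process."""
    return [float(np.asarray(m).sum()) * pixel_area_mm2 for m in masks]
```

Comparing the area series between the left and right lung segmentations (or against a breath-hold baseline) is one way such per-frame measurements feed a lung function assessment.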
According to an aspect of the present disclosure, there is provided a DR image processing apparatus including:
the acquisition unit is used for acquiring a DR image to be processed and determining whether the DR image to be processed is a lung image or not according to the DR image to be processed;
and a detection unit, used for, if the DR image to be processed is a lung image, performing thoracic cavity detection on the DR image to be processed and removing the information outside the thoracic cavity.
According to an aspect of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: the DR image processing method described above is performed.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above DR image processing method.
In the embodiments of the disclosure, a DR image processing method and apparatus, an electronic device, and a storage medium are provided to solve the problem that image information other than the chest in current DR lung images interferes with intelligent auxiliary diagnosis of the chest, lungs, and heart. The technical scheme provides important visual assistance for case screening by doctors and a processing basis for chest image processing, thereby further improving the level of computer-aided diagnosis based on DR images.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 illustrates a flowchart of a DR image processing method according to an embodiment of the present disclosure;
fig. 2 shows a block diagram of a DR image processing apparatus according to an embodiment of the present disclosure;
FIG. 3 is a block diagram of an electronic device 800, shown in accordance with an exemplary embodiment;
Fig. 4 is a block diagram illustrating an electronic device 1900 according to an example embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principle and logic; details are not repeated in this disclosure due to space limitations.
In addition, the disclosure further provides a DR image processing apparatus, an electronic device, a computer-readable storage medium, and a program, each of which may be used to implement any one of the DR image processing methods provided in the disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions of the method parts, which are not repeated here.
Fig. 1 shows a flowchart of a DR image processing method according to an embodiment of the present disclosure, as shown in fig. 1, including: step S101: acquiring a DR image to be processed, and determining whether the DR image to be processed is a lung image or not according to the DR image to be processed; step S102: and if the DR image is the lung image, detecting the thoracic cavity of the DR image to be processed, and removing the information outside the thoracic cavity in the DR image to be processed.
Step S101: and acquiring a DR image to be processed, and determining whether the DR image to be processed is a lung image or not according to the DR image to be processed.
In embodiments of the present disclosure and other possible embodiments, Digital Radiography (DR) images may provide high-resolution and real-time X-ray images, which have been widely used for examinations of the skeletal system, chest, teeth, and the like, such as fracture diagnosis, lung disease screening, and dental imaging. Therefore, a DR imaging apparatus can be used to image the skeletal system, chest, teeth, and the like.
In an embodiment of the present disclosure and other possible embodiments, the method for determining, according to the DR image to be processed, whether the DR image is a lung image includes: calculating an average gray value corresponding to the DR image to be processed, and determining whether the DR image is a lung image according to the average gray value and a set gray value. Those skilled in the art may configure the set gray value according to actual needs; for example, the set gray value may be configured to any value or range between -1000 HU and 0 HU. Meanwhile, in the embodiments of the present disclosure and other possible embodiments, whether the DR image to be processed is a lung image may also be determined manually. The DR image to be processed is configured as a plurality of two-dimensional DR images.
In embodiments of the present disclosure and other possible embodiments, the method for determining whether a lung image is based on the average gray value and the set gray value includes: if the DR image to be processed is not subjected to the inverse processing, if the average gray value is smaller than or equal to the set gray value, determining that the DR image to be processed is a lung image; if the DR image to be processed is subjected to the inverse processing, if the average gray value is larger than or equal to the set gray value, determining that the DR image to be processed is a lung image. The principle is that if the image is a lung image, the lung in the lung image is generally filled with or contains a certain amount of air, and the air corresponds to a smaller gray value (CT value) and is generally configured to be-1000 HU; the gray value (CT value) of water is generally configured to be 0HU, and the gray value (CT value) of bone is generally configured to be more than 1000HU; therefore, the technical scheme is adopted to determine whether the DR image to be processed is a lung image.
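The decision rule above can be sketched as a mean-intensity check whose comparison direction flips when the image has undergone inverse (negative) processing; the threshold below is illustrative:

```python
# Hedged sketch of the mean-gray-value check: an air-filled lung region pulls
# the average intensity down, so a non-inverted lung image should fall at or
# below the set gray value, and an inverted one at or above it.
def is_lung_image(pixels, threshold, inverted=False):
    mean = sum(pixels) / len(pixels)
    return mean >= threshold if inverted else mean <= threshold

# Air-dominated chest image: low mean on a non-inverted image
assert is_lung_image([-900, -850, -700, 100], threshold=-500) is True
assert is_lung_image([200, 300, 150, 400], threshold=-500) is False
```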
Step S102: and if the DR image is the lung image, detecting the thoracic cavity of the DR image to be processed, and removing the information outside the thoracic cavity in the DR image to be processed.
In embodiments of the present disclosure and other possible embodiments, the extrathoracic information includes unnecessary information such as arms, heads, and blank background. The lung images are configured as a plurality of two-dimensional DR lung images at a plurality of moments in the breathing process, or a plurality of two-dimensional DR lung images at a plurality of moments in the breath-hold state.
In an embodiment of the present disclosure, before chest detection is performed on the DR image to be processed (lung image to be segmented), the DR image to be processed is filtered, and the filtered lung image to be segmented is downsampled to a set size.
In the embodiment of the disclosure, image enhancement is performed on the logarithmic transformation of the DR image to be processed of the set size, so as to obtain an enhanced DR image to be processed.
(1) Image preprocessing of DR image to be processed (lung image to be segmented).
In embodiments of the present disclosure and other possible embodiments: a. a DR image to be processed (lung image to be segmented) is acquired, the DR image to be processed is low-pass filtered, and the filtered lung image is reduced (downsampled) to a set size to increase the processing speed of the image; image enhancement is then performed on the logarithmic transformation of the downsampled lung image to be segmented, so as to obtain an enhanced lung image to be segmented. The low-pass template (Gaussian filtering or mean filtering) of the set size is configured as 3×3 or 5×5, and the reduction (downsampling) factor may be configured in the range of 2-6. Those skilled in the art may configure the set size and/or the reduction (downsampling) factor according to actual needs.
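The preprocessing chain above (low-pass filtering, downsampling to a set size, logarithmic enhancement) can be sketched in pure Python; the border handling, 2x downsampling factor, and scale constant are illustrative choices not fixed by the patent:

```python
# Hedged sketch of the preprocessing pipeline: 3x3 mean filter (one of the
# low-pass templates named above), 2x downsampling, then log enhancement.
import math

def mean_filter3(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)  # average over the neighborhood
    return out

def downsample2(img):
    return [row[::2] for row in img[::2]]      # keep every 2nd row/column

def log_enhance(img, c=1.0):
    return [[c * math.log1p(v) for v in row] for row in img]

img = [[float((x + y) % 7) for x in range(8)] for y in range(8)]
small = log_enhance(downsample2(mean_filter3(img)))
```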
In the embodiments of the present disclosure and other possible embodiments, b. the technical scheme for performing chest detection on the DR image to be processed adaptively determines a chest contour according to characteristics of the lung image to be segmented, and removes unnecessary information such as arms, heads, and blank background. The DR images to be processed (lung images to be segmented) are configured as a plurality of two-dimensional DR lung images at a plurality of moments in the breathing process, or a plurality of two-dimensional DR lung images at a plurality of moments in the breath-hold state.
In an embodiment of the present disclosure, step 1) the method for DR image processing of the DR image to be processed includes: respectively calculating a plurality of first gradient magnitudes corresponding to the transverse direction of each pixel point and a plurality of second gradient magnitudes corresponding to the longitudinal direction of each pixel point in the DR image to be processed; determining a plurality of total gradient magnitudes based on the plurality of first gradient magnitudes and the plurality of second gradient magnitudes; integrating the plurality of first gradient magnitudes corresponding to the transverse direction and the plurality of second gradient magnitudes corresponding to the longitudinal direction, each along the direction perpendicular to it, to obtain a plurality of first integral values and a plurality of second integral values; integrating the total gradient magnitudes in the two directions corresponding to the longitudinal direction and the transverse direction, respectively, to obtain a plurality of third integral values and a plurality of fourth integral values; calculating a plurality of first local maxima corresponding to a plurality of first ratios, and calculating a plurality of first local minima and a plurality of second local minima corresponding to the plurality of first ratios and a plurality of second ratios; and determining a chest image corresponding to the DR image to be processed according to the plurality of first local maxima, the plurality of first local minima, the plurality of second local minima, and the chest features. The chest features may be configured as a first segmentation position of the neck or shoulders corresponding to the chest and second segmentation positions on both sides of the chest.
In an embodiment of the disclosure, the method for respectively calculating a plurality of first gradient magnitudes corresponding to a lateral direction of each pixel point and a plurality of second gradient magnitudes corresponding to a longitudinal direction of each pixel point in the DR image to be processed includes: acquiring a gradient operator; and respectively calculating a plurality of first gradient amplitudes corresponding to the transverse direction of each pixel point and a plurality of second gradient amplitudes corresponding to the longitudinal direction of each pixel point in the DR image to be processed by utilizing the gradient operators.
In an embodiment of the present disclosure, the method of determining a plurality of total gradient magnitudes based on the plurality of first gradient magnitudes and the plurality of second gradient magnitudes includes: respectively calculating a plurality of first square sums corresponding to the plurality of first gradient magnitudes and a plurality of second square sums corresponding to the plurality of second gradient magnitudes, and determining the plurality of total gradient magnitudes based on the plurality of first square sums and the plurality of second square sums; and/or the method of determining the plurality of total gradient magnitudes based on the plurality of first sums of squares and the plurality of second sums of squares, comprising: and summing the first square sums and the second square sums respectively, and performing open square processing on the sums to obtain the total gradient amplitudes.
In the embodiments of the present disclosure and other possible embodiments, a plurality of first gradient magnitudes corresponding to a lateral direction of each pixel point in a lung image to be segmented and a plurality of second gradient magnitudes corresponding to a longitudinal direction of each pixel point are calculated respectively; and determining a plurality of total gradient magnitudes based on the plurality of first gradient magnitudes and the plurality of second gradient magnitudes; wherein the method of determining a plurality of total gradient magnitudes based on the plurality of first gradient magnitudes and the plurality of second gradient magnitudes comprises: and respectively calculating a plurality of first square sums corresponding to the plurality of first gradient amplitudes and a plurality of second square sums corresponding to the plurality of second gradient amplitudes, and determining the plurality of total gradient amplitudes based on the plurality of first square sums and the plurality of second square sums. Wherein the method of determining the plurality of total gradient magnitudes based on the plurality of first sums of squares and the plurality of second sums of squares comprises: and summing the first square sums and the second square sums respectively, and performing open square processing on the sums to obtain the total gradient amplitudes.
For example, for each pixel point e of the lung image to be segmented and its eight-neighborhood matrix [[a, b, c], [d, e, f], [g, h, i]], the Sobel gradient operator [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]] is used to calculate the first gradient magnitude in the transverse direction of each pixel point e, (c + 2f + i) - (a + 2d + g), and the transpose of the Sobel gradient operator, [[-1, -2, -1], [0, 0, 0], [1, 2, 1]], is used to calculate the second gradient magnitude in the longitudinal direction of each pixel point e, (g + 2h + i) - (a + 2b + c). Those skilled in the art may also select other gradient operators according to actual needs, such as the Roberts gradient operator or the Laplacian gradient operator.
As another example, based on the plurality of first gradient magnitudes (h_1, h_2, ..., h_n) and the plurality of second gradient magnitudes (k_1, k_2, ..., k_n), the plurality of total gradient magnitudes are determined as sqrt(h_1^2 + k_1^2), sqrt(h_2^2 + k_2^2), ..., sqrt(h_n^2 + k_n^2).
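The per-pixel computation above can be sketched directly from the 3x3 neighborhood; the step-edge example is illustrative:

```python
# Hedged sketch: Sobel responses G_x and G_y at one pixel from its 3x3
# neighborhood [[a, b, c], [d, e, f], [g, h, i]], combined into the total
# gradient magnitude sqrt(G_x^2 + G_y^2).
import math

def sobel_at(n):
    (a, b, c), (d, e, f), (g, h, i) = n
    gx = (c + 2 * f + i) - (a + 2 * d + g)   # first (transverse) gradient
    gy = (g + 2 * h + i) - (a + 2 * b + c)   # second (longitudinal) gradient
    return gx, gy, math.hypot(gx, gy)        # total gradient magnitude

# A vertical step edge: strong transverse gradient, zero longitudinal gradient
gx, gy, mag = sobel_at([[0, 0, 4], [0, 0, 4], [0, 0, 4]])
```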
Step 2), integrating a plurality of first gradient amplitudes corresponding to the transverse direction and a plurality of second gradient amplitudes corresponding to the longitudinal direction along the direction perpendicular to the first gradient amplitudes to obtain a plurality of first integral values and a plurality of second integral values;
integrating the plurality of first gradient magnitudes corresponding to the transverse direction in the longitudinal direction to obtain a plurality of first integral values; and integrating the plurality of second gradient magnitudes corresponding to the longitudinal direction in the transverse direction to obtain a plurality of second integral values.
And 3) integrating the total gradient amplitude values in two directions corresponding to the longitudinal direction and the transverse direction respectively to obtain a plurality of third integrated values and a plurality of fourth integrated values.
Integrating the total gradient magnitudes in the longitudinal direction obtains a plurality of third integral values; and integrating the total gradient magnitudes in the transverse direction obtains a plurality of fourth integral values.
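For a discrete gradient map, the "integration" in steps 2) and 3) reduces to column sums and row sums; the toy maps below are illustrative, showing only the transverse-direction half (first and third integral values and the first ratios):

```python
# Hedged sketch of steps 2)-3): per-column sums of the transverse-gradient map
# (integration along the longitudinal direction) and of the total-magnitude
# map, plus the first ratios that feed the later extrema analysis.
gx_map  = [[1, 4, 1], [1, 6, 1]]   # toy first (transverse) gradient magnitudes
tot_map = [[2, 5, 2], [2, 8, 2]]   # toy total gradient magnitudes

first_integrals = [sum(col) for col in zip(*gx_map)]    # first integral values
third_integrals = [sum(col) for col in zip(*tot_map)]   # third integral values
first_ratios = [a / b for a, b in zip(first_integrals, third_integrals)]
```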
Step 4), (a) when determining the plurality of first local maxima, a ratio is calculated according to the result of step 2) and the result of step 3); when the ratio is smaller than a preset value, the image information at that position is regarded as noise and is discarded. The noise may be configured as image information corresponding to unnecessary information such as arms, heads, and blank background.
Wherein it is determined whether or not each of a plurality of first ratios of the plurality of first integrated values corresponding to the plurality of third integrated values in the lateral direction remains, based on the plurality of first integrated values and the plurality of third integrated values.
Wherein the method of determining, based on the plurality of first integrated values and the plurality of third integrated values, whether each of the plurality of first ratios of the plurality of first integrated values and the plurality of third integrated values in the transverse direction remains includes: acquiring a first preset value; calculating the plurality of first ratios of the plurality of first integrated values and the corresponding plurality of third integrated values in the transverse direction, respectively; and discarding a certain first ratio among the plurality of first ratios if that first ratio is smaller than the first preset value. In the embodiments of the present disclosure and other possible embodiments, those skilled in the art may also configure the first preset value according to actual needs.
(b) When the plurality of first local minima are determined, a ratio is calculated according to the result of step 2) and the result of step 3); when the ratio is larger than a preset value, the image information at that position is regarded as noise and is discarded. The noise may be configured as image information corresponding to unnecessary information such as arms, heads, and blank background.
Wherein it is determined whether or not each of a plurality of first ratios of the plurality of first integrated values corresponding to the plurality of third integrated values in the lateral direction remains, based on the plurality of first integrated values and the plurality of third integrated values.
Wherein the method of determining whether each of a plurality of first ratios of the plurality of first integrated values and the plurality of third integrated values in the lateral direction remains based on the plurality of first integrated values and the plurality of third integrated values includes: acquiring a first preset value; calculating a plurality of first ratio values of the plurality of first integrated values and the corresponding plurality of third integrated values in the lateral direction, respectively; and discarding a certain first ratio value in the plurality of first ratio values if the certain first ratio value is larger than the first preset value.
Wherein it is determined whether or not each of a plurality of second ratios of the plurality of second integrated values corresponding to the plurality of fourth integrated values in the longitudinal direction remains, based on the plurality of second integrated values and the plurality of fourth integrated values.
Wherein the method of determining, based on the plurality of second integrated values and the plurality of fourth integrated values, whether each of the plurality of second ratios of the plurality of second integrated values and the plurality of fourth integrated values in the longitudinal direction remains includes: acquiring a second preset value; calculating the plurality of second ratios of the plurality of second integrated values and the corresponding plurality of fourth integrated values in the longitudinal direction, respectively; and discarding a certain second ratio among the plurality of second ratios if that second ratio is greater than the second preset value. In the embodiments of the present disclosure and other possible embodiments, those skilled in the art may configure the second preset value according to actual needs.
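The two discarding rules above can be sketched as simple threshold filters; the ratio values and preset thresholds below are illustrative:

```python
# Hedged sketch of step 4): ratios below the preset value (for the maxima
# branch) or above it (for the minima branch) are treated as noise from arms,
# heads, or blank background and are dropped before extrema are computed.
def keep_for_maxima(ratios, preset):
    return [r for r in ratios if r >= preset]   # discard if smaller than preset

def keep_for_minima(ratios, preset):
    return [r for r in ratios if r <= preset]   # discard if larger than preset

ratios = [0.05, 0.4, 0.9, 0.2]
kept_max = keep_for_maxima(ratios, preset=0.1)  # drops the near-zero noise
kept_min = keep_for_minima(ratios, preset=0.8)  # drops the saturated noise
```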
In an embodiment of the present disclosure, before the calculating the plurality of first local maxima corresponding to the plurality of first ratios and the calculating the plurality of first local minima and the plurality of second local minima corresponding to the plurality of first ratios and the plurality of second ratios, determining whether each of the plurality of first ratios corresponding to the plurality of first integral values and the plurality of third integral values in the lateral direction remains based on the plurality of first integral values and the plurality of third integral values; and determining, based on the plurality of second integrated values and the plurality of fourth integrated values, whether each of a plurality of second ratios of the plurality of second integrated values and the plurality of fourth integrated values in the longitudinal direction remains; further, a plurality of first local maxima corresponding to the reserved plurality of first ratios are calculated, and a plurality of first local minima and a plurality of second local minima corresponding to the reserved plurality of first ratios and reserved plurality of second ratios are calculated.
In an embodiment of the disclosure, calculating differential derivatives of the first ratios corresponding to the first curves to obtain the first local maxima and the first local minima; and calculating differential derivatives of the second ratios corresponding to the second curves to obtain the second local minima.
In an embodiment of the present disclosure, the method for determining a chest image corresponding to the lung image to be segmented according to the plurality of first local maxima, the plurality of first local minima, the plurality of second local minima, and the chest features includes: determining the second segmentation positions on both sides of the chest based on the plurality of first local maxima and the chest features; determining the first segmentation position of the neck or shoulders based on the plurality of second local minima and the chest features; and determining the chest image corresponding to the lung image to be segmented according to the first segmentation position and the second segmentation positions.
In embodiments of the present disclosure and other possible embodiments, 5) a plurality of first local maxima corresponding to the plurality of first ratios after the discarding in step 4)(a) are calculated. Similarly, a plurality of first local minima and a plurality of second local minima corresponding to the plurality of first ratios and the plurality of second ratios after the discarding in step 4)(b) are respectively calculated. The plurality of first ratios after discarding are the plurality of first ratios retained after the discarding; similarly, the plurality of second ratios after discarding are the plurality of second ratios retained after the discarding.
And determining a chest image corresponding to the lung image to be segmented according to the first local maxima, the first local minima, the second local minima and the chest characteristics, and removing unnecessary information such as arms, heads, blank backgrounds and the like. The chest features can be configured with a first segmentation position of a neck or a shoulder corresponding to the chest and a second segmentation position on two sides of the chest.
The method for determining the chest image corresponding to the lung image to be segmented according to the plurality of first local maxima, the plurality of first local minima, the plurality of second local minima, and the chest features includes the following steps: determining the second segmentation positions on both sides of the chest based on the plurality of first local maxima and the chest features; determining the first segmentation position of the neck or shoulders based on the plurality of second local minima and the chest features; and determining the chest image corresponding to the lung image to be segmented according to the first segmentation position and the second segmentation positions.
A plurality of first local maxima are obtained by calculating differential derivatives of the retained plurality of first ratios corresponding to the first curve, wherein the differential derivative may be configured as a first-order, second-order, or other higher-order differential derivative. Similarly, a plurality of first local minima are obtained by calculating differential derivatives of the retained plurality of first ratios corresponding to the first curve, and a plurality of second local minima are obtained by calculating differential derivatives of the retained plurality of second ratios corresponding to the second curve, with the differential derivative configured in the same manner.
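With a first-order differential derivative, the extrema search above reduces to sign changes of the discrete difference of the ratio curve; the curve below is illustrative:

```python
# Hedged sketch: the first-order difference of the ratio curve changes sign
# at a local maximum (+ to -) and at a local minimum (- to +).
def local_extrema(curve):
    maxima, minima = [], []
    for i in range(1, len(curve) - 1):
        d_prev = curve[i] - curve[i - 1]     # backward difference
        d_next = curve[i + 1] - curve[i]     # forward difference
        if d_prev > 0 and d_next < 0:
            maxima.append(i)
        elif d_prev < 0 and d_next > 0:
            minima.append(i)
    return maxima, minima

maxima, minima = local_extrema([1, 3, 2, 0, 2, 5, 4])
```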
In an embodiment of the disclosure, the method for determining the second segmentation locations on both sides of the thorax based on the plurality of first local maxima and the thorax feature comprises: determining the maximum value of all the first local maxima at one side of the central line according to the central line of the DR image to be processed, and configuring the position information corresponding to the maximum value as a segmentation position point at one side to be determined corresponding to one side of the thoracic cage; and determining the minimum value of all the plurality of first local minima on the other side of the central line according to the central line of the DR image to be processed, and configuring the position information corresponding to the minimum value as the dividing position point on the other side to be determined corresponding to the other side of the thoracic cage.
In an embodiment of the present disclosure, a method of determining a first segmentation position of a neck or shoulder based on the plurality of second local minima and the thoracic feature, comprises: and configuring position information corresponding to the minimum value corresponding to the plurality of second local minima as a first division position of the neck or shoulder.
In embodiments of the present disclosure and other possible embodiments, the method for determining a second segmentation position on both sides of the thorax based on the plurality of first local maxima and the thorax feature includes: determining the maximum value of all the first local maxima at one side (right side) of the central line according to the central line of the image to be segmented, and configuring the position information corresponding to the maximum value as a segmentation position point at one side to be determined corresponding to one side of the thoracic cage; and determining the minimum value of all the first local minima on the other side (left side) of the central line according to the central line of the image to be segmented, and configuring the position information corresponding to the minimum value as the segmentation position point on the other side to be determined corresponding to the other side of the thoracic cage.
For example, the maximum value a_m of all the plurality of first local maxima (a_1, a_2, ..., a_n) on one side (right side) of the center line is determined, where m <= n; the position information corresponding to the maximum value (for example, a certain abscissa m) is the to-be-determined segmentation position point corresponding to that side of the chest. Similarly, the minimum value b_r of all the plurality of first local minima (b_1, b_2, ..., b_n) on the other side (left side) of the center line is determined, where r <= n; the position information corresponding to the minimum value (for example, a certain abscissa r) is the segmentation position point on the other side.
In embodiments of the present disclosure and other possible embodiments, a method of determining a first segmentation location of a neck or shoulder based on the plurality of second local minima and the thoracic feature, comprising: position information (for example, a certain ordinate) corresponding to a minimum value corresponding to the plurality of second local minima is configured as a first divided position of the neck or shoulder.
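The position selection above can be sketched by splitting the candidate extrema at the image center line and taking the extreme value on each side; the coordinates and center line below are illustrative:

```python
# Hedged sketch: pick the largest first local maximum on one side of the
# center line and the smallest first local minimum on the other side, and use
# their abscissas as the two lateral chest segmentation positions.
def side_positions(maxima, minima, centerline_x):
    # each entry is (x_position, extremum_value)
    right = [p for p in maxima if p[0] > centerline_x]
    left = [p for p in minima if p[0] <= centerline_x]
    right_cut = max(right, key=lambda p: p[1])[0]   # abscissa of the maximum
    left_cut = min(left, key=lambda p: p[1])[0]     # abscissa of the minimum
    return left_cut, right_cut

maxima = [(300, 9.0), (350, 12.0)]   # toy extrema to the right of center
minima = [(60, 0.4), (90, 0.2)]      # toy extrema to the left of center
left_cut, right_cut = side_positions(maxima, minima, centerline_x=256)
```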
(2) Lung field segmentation.
In the embodiment of the disclosure, a thoracic image obtained by performing thoracic detection on the DR image to be processed is configured as a lung image to be segmented; and based on the lung image to be segmented, segmentation of the left lung and the right lung is performed.
In an embodiment of the present disclosure, the lung image to be segmented is segmented into a left side chest image and a right side chest image; performing left lung and right lung segmentation based on the left chest image and the right chest image, respectively; or, obtaining a segmentation model of a preset convolutional neural network, and a plurality of DR lung images (to-be-segmented lung images) to be segmented at multiple moments in the breathing process or in a breath-hold state, wherein the DR lung region label image is used for training the segmentation model; wherein the method for determining the DR lung region label image for training the segmentation model comprises: respectively detecting rib edge boundaries, lung tip boundaries, mediastinum and transverse septum edges of left chest images and right chest images of a plurality of DR lung region images to obtain DR lung region label images corresponding to the plurality of DR lung region images; training the segmentation model by utilizing the DR lung region label image for training the segmentation model; and based on the trained segmentation model, completing left lung and/or right lung segmentation of the DR lung images to be segmented.
In embodiments of the present disclosure and other possible embodiments, the segmentation model of the preset convolutional neural network is configured as a Unet convolutional neural network, an nnUnet convolutional neural network, a convolutional neural network modified based on the Unet convolutional neural network, or a convolutional neural network modified based on the nnUnet convolutional neural network. For example, a convolutional neural network modified based on the Unet convolutional neural network may be configured as a Unet convolutional neural network having a residual structure.
In embodiments of the present disclosure and other possible embodiments, the Unet convolutional neural network, the nnUnet convolutional neural network, the convolutional neural network modified based on the Unet convolutional neural network, or the convolutional neural network modified based on the nnUnet convolutional neural network at least includes: a downsampling contracting path, an upsampling expanding path, and a final classification layer.
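The contracting/expanding structure above can be illustrated by tracing feature-map sizes through a Unet-style encoder-decoder; the input size and depth below are illustrative assumptions, not values from the patent:

```python
# Hedged sketch: feature-map side lengths along a Unet-style contracting path
# (2x pooling per level) and the mirrored upsampling expanding path.
def unet_shapes(size, depth):
    down = [size // (2 ** i) for i in range(depth + 1)]   # contracting path
    up = down[-2::-1]                                     # expanding path back to input size
    return down, up

down, up = unet_shapes(256, depth=4)
# down traces the encoder; up mirrors it back for the final classification layer
```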
In embodiments of the present disclosure and other possible embodiments, the plurality of DR lung region images are configured as a plurality of DR lung region images acquired in a deep inhalation state or in a breath-hold state.
In embodiments of the present disclosure and other possible embodiments, before training the segmentation model by using the DR lung region tag image for training the segmentation model, performing data enhancement on the DR lung region tag image to obtain an enhanced DR lung region tag image; and training the segmentation model by utilizing the enhanced DR lung region label image.
In an embodiment of the present disclosure and other possible embodiments, the method for enhancing the DR lung region tag image to obtain an enhanced DR lung region tag image includes: and performing space geometric transformation and/or overturning and/or rotation and/or clipping and/or scaling and/or image shifting and/or edge filling and/or random erasing and/or random shielding operation on the DR lung region label image to obtain an enhanced DR lung region label image.
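A few of the single-image enhancement operations listed above (flipping and rotation) can be sketched as follows. This is an illustrative NumPy sketch only; the function name and operation keys are assumptions, and cropping, scaling, shifting, edge filling and random erasing are omitted for brevity.

```python
import numpy as np

def augment(label_img, op):
    """Apply one enhancement operation to a DR lung region label image.
    Covers only flips and 90-degree rotation; the disclosure's full set
    of operations is larger."""
    a = np.asarray(label_img)
    if op == "hflip":
        return a[:, ::-1]        # horizontal flip
    if op == "vflip":
        return a[::-1, :]        # vertical flip
    if op == "rot90":
        return np.rot90(a)       # counter-clockwise 90-degree rotation
    raise ValueError("unknown op: " + op)
```

In practice each label image and its paired input image would be transformed identically so the annotation stays aligned.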
In an embodiment of the present disclosure and other possible embodiments, the method for enhancing the DR lung region label image to obtain an enhanced DR lung region label image further includes: randomly extracting any two DR lung region label images from the DR lung region label images; performing a registration operation on the two DR lung region label images to obtain corresponding DR lung region label registration images; and performing a fusion operation on the DR lung region label registration images to obtain an enhanced DR lung region label image.
In an embodiment of the present disclosure and other possible embodiments, the method for performing a fusion operation on the DR lung region label registration image to obtain an enhanced DR lung region label image includes: and respectively carrying out minimum or maximum or average operation on pixel values corresponding to the DR lung region label registration image to obtain an enhanced DR lung region label image.
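The min/max/mean fusion of the registered label images described above might be implemented along these lines; `fuse_label_images` is a hypothetical helper, and its inputs are assumed to be already-registered arrays of identical shape.

```python
import numpy as np

def fuse_label_images(img_a, img_b, mode="mean"):
    """Fuse two registered DR lung region label images pixel-wise.

    mode selects one of the three fusion operations described above:
    "min", "max" or "mean" of the corresponding pixel values.
    """
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    if mode == "min":
        return np.minimum(a, b)
    if mode == "max":
        return np.maximum(a, b)
    if mode == "mean":
        return (a + b) / 2.0
    raise ValueError("mode must be 'min', 'max' or 'mean'")
```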
In embodiments of the present disclosure and other possible embodiments, the registration methods used by the present disclosure may employ existing registration algorithms or models, such as one or more of the SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features) and ORB (Oriented FAST and Rotated BRIEF) registration algorithms or models, or other convolutional neural network-based registration algorithms or models. For example, a convolutional neural network-based registration algorithm or model may be configured as a VGG network-based registration algorithm or model.
c. Dividing the chest image corresponding to the lung image to be segmented into a left chest image and a right chest image, and performing left lung field and right lung field segmentation on the left chest image and the right chest image, respectively.
In an embodiment of the present disclosure, a method of left and right lung segmentation based on the left and right chest images, respectively, includes: respectively detecting a rib edge boundary, a lung tip boundary, a mediastinum and a transverse septum edge of the left chest image and the right chest image; obtaining a left lung segmentation image according to a rib edge boundary, a lung tip boundary, a mediastinum and a transverse septum edge corresponding to the left chest image; and obtaining a right lung segmentation image according to the rib edge boundary, the lung tip boundary, the mediastinum and the transverse septum edge corresponding to the right chest image.
In an embodiment of the disclosure, the method of rib edge boundary detection of the left chest image includes: constructing a directional derivative template of the left chest image by using the directional derivative, and setting a set weighted depth of the directional derivative template; performing template traversal of the directional derivative on the left chest image by using the directional derivative template corresponding to the set weighted depth, and superposing the template traversal result onto the left chest image to obtain a left chest superimposed image; performing binarization processing on the left chest superimposed image to obtain a left rib edge binary image; obtaining a left rib edge angle map to be screened according to the left rib edge binary image and the left chest superimposed image; obtaining a screened left rib edge angle map based on the left rib edge angle map to be screened and the first set rib edge angle; and performing connected domain selection on the screened left rib edge angle map to obtain the rib edge boundary corresponding to the maximum connected domain.
In an embodiment of the disclosure, the method for obtaining the left rib edge angle map to be screened according to the left rib edge binary image and the left chest superimposed image includes: performing morphological opening and closing operations and refinement treatment on the left rib edge binary image to obtain a morphologically processed left rib edge binary image; and performing an AND operation on the morphologically processed left rib edge binary image and the gradient direction angle of each pixel in the left chest superimposed image to obtain the left rib edge angle map to be screened.
In the embodiment of the disclosure, before the directional derivative template of the left chest image is constructed by using the directional derivative, set-scale Gaussian blur is performed on the left chest image to obtain a corresponding left chest Gaussian blur image; further, the directional derivative template of the left chest Gaussian blur image is constructed by using the directional derivative. In the rib edge boundary detection process of the left chest image, template traversal of the directional derivative is performed on the left chest Gaussian blur image by using the directional derivative template corresponding to the set weighted depth, and the template traversal result is superposed onto the left chest image to obtain a left chest Gaussian blur superimposed image; binarization processing is performed on the left chest Gaussian blur superimposed image to obtain a left rib edge binary image; and the left rib edge angle map to be screened is obtained according to the left rib edge binary image and the left chest Gaussian blur superimposed image.
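As an aside on the set-scale Gaussian blur used above: a normalized 2-D Gaussian kernel of a given scale (e.g. 7×7) and mean square deviation σ can be built as in the sketch below. This is a NumPy illustration only; the function name is an assumption, and convolving the image with the kernel is omitted.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=2.0):
    """Normalized 2-D Gaussian kernel of the given set scale
    (size x size) and mean square deviation sigma."""
    ax = np.arange(size) - (size - 1) / 2.0           # symmetric offsets
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()                                 # weights sum to 1
```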
a. In embodiments of the present disclosure and other possible embodiments, the lung field segmentation steps are described below using the left chest image as an example, beginning with rib edge boundary detection of the left chest image.
1) Carrying out set-scale Gaussian blur on the left chest region (image) to reduce its detail information, obtaining a corresponding processed left chest Gaussian blur image. The set scale may be configured as 7×7 or 9×9, and those skilled in the art may configure it according to actual needs. The mean square deviation σ of the Gaussian blur algorithm can be configured as 2, 2.5, 3 or another value, likewise configurable by those skilled in the art according to actual needs.
2) Constructing a directional derivative template of the processed left chest Gaussian blur image f(x₀, y₀), and setting a set weighted depth of the directional derivative template, wherein x₀ and y₀ are respectively the abscissa and the ordinate of a coordinate point of the left chest image f(x₀, y₀).
The calculation formula of the directional derivative is:

∂f/∂l = (∂f/∂x)·cos α + (∂f/∂y)·cos β

where l is the unit vector in the chosen direction, and cos α and cos β are the direction cosines of l. The direction may be configured as the horizontal direction; α is the angle between the l direction and the horizontal direction, and β is the angle between the l direction and the vertical direction.
Wherein constructing the directional derivative template of the left chest Gaussian blur image f(x₀, y₀) comprises: acquiring, for a set template radius r, a first radius r_x₀ in the x₀ direction and a second radius r_y₀ in the y₀ direction, each taking values in the range [−r, r]; and constructing the directional derivative template of the processed left chest Gaussian blur image f(x₀, y₀) based on the first radius r_x₀ and the second radius r_y₀.
for example, the set template radius may be configured to be 1, 2, 3, 4, 5, etc. The radius of the set template can be configured by a person skilled in the art according to actual needs.
Also for example, when the first radius r_x₀ in the x₀ direction is configured as −1, 0 or 1, the second radius r_y₀ in the y₀ direction likewise takes the values −1, 0 and 1.
Wherein those skilled in the art can configure the set weighted depth according to actual needs; for example, the set weighted depth can be configured as 6. In addition, the method of setting the set weighted depth of the directional derivative template includes: obtaining the set weighted depth, and multiplying the directional derivative template by the set weighted depth to obtain the directional derivative template corresponding to the set weighted depth. A direction angle range is reasonably set according to the structural characteristics of the rib edge region and substituted into the directional derivative calculation formula to obtain the template array.
3) Performing template traversal of the directional derivative on the processed left chest Gaussian blur image of step a.1 by using the directional derivative template corresponding to the set weighted depth, and superposing (adding pixel-wise) the template traversal result onto the left chest Gaussian blur image of step a.1 to obtain a left chest Gaussian blur superimposed image.
For example, for each pixel point e of the left chest Gaussian blur image with eight-neighborhood matrix [[a, b, c], [d, e, f], [g, h, i]], using the directional derivative template corresponding to the set weighted depth w, the superimposed value of each pixel point is calculated as e + w·(c + 2f + i − a − 2d − g).
4) Performing maximum inter-class variance (Otsu) binarization on the result of step a.3 (the left chest Gaussian blur superimposed image) to obtain a rib edge binary image.
5) Performing morphological opening and closing operations and refinement treatment on the result of step a.4 (the rib edge binary image) to obtain a morphologically processed rib edge binary image.
6) Carrying out transverse gradient and longitudinal gradient calculation on the result of step a.3 (the left chest Gaussian blur superimposed image) to obtain a transverse gradient map and a longitudinal gradient map, and calculating the gradient direction angle of each pixel in the left chest Gaussian blur superimposed image based on the transverse gradient map and the longitudinal gradient map.
7) Performing an AND operation on the result of step a.5 (the morphologically processed rib edge binary image) and the result of step a.6 (the gradient direction angle of each pixel in the left chest Gaussian blur superimposed image) to obtain a rib edge angle map to be screened.
8) Setting a reasonable angle range (the first set rib edge angle range) according to the characteristics of rib tissue, and performing angle screening on the result of step a.7 (the rib edge angle map to be screened) to obtain a screened rib edge angle map.
9) Performing connected domain selection on the result of step a.8 (the screened rib edge angle map) to obtain the maximum connected domain, which is the rib edge part (rib edge boundary).
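Steps a.2-a.3 above (weighted directional derivative template traversal followed by pixel-wise superposition) might look like the sketch below, assuming the horizontal-direction 3×3 template implied by the e + w·(c + 2f + i − a − 2d − g) example; the function name and the choice to leave border pixels unchanged are illustrative.

```python
import numpy as np

def superimpose_directional_derivative(img, w=6):
    """Traverse img with the weighted 3x3 directional derivative
    template and add the response back onto the image (the
    superposition step). w is the set weighted depth."""
    img = np.asarray(img, dtype=np.float64)
    # Template reproducing c + 2f + i - a - 2d - g, scaled by w.
    template = w * np.array([[-1.0, 0.0, 1.0],
                             [-2.0, 0.0, 2.0],
                             [-1.0, 0.0, 1.0]])
    out = img.copy()
    h, wd = img.shape
    for y in range(1, h - 1):
        for x in range(1, wd - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = img[y, x] + np.sum(patch * template)
    return out
```

A production version would use a vectorized correlation instead of the explicit double loop.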
Similarly, in an embodiment of the disclosure, the method of rib edge boundary detection of the right chest image includes: constructing a directional derivative template of the right chest image by using the directional derivative, and setting a set weighted depth of the directional derivative template; performing template traversal of the directional derivative on the right chest image by using the directional derivative template corresponding to the set weighted depth, and superposing the template traversal result onto the right chest image to obtain a right chest superimposed image; performing binarization processing on the right chest superimposed image to obtain a right rib edge binary image; obtaining a right rib edge angle map to be screened according to the right rib edge binary image and the right chest superimposed image; obtaining a screened right rib edge angle map based on the right rib edge angle map to be screened and the second set rib edge angle; and performing connected domain selection on the screened right rib edge angle map to obtain the rib edge boundary corresponding to the maximum connected domain.
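The connected domain selection that closes both the left-side and right-side rib edge detection above can be sketched as a plain breadth-first labeling that keeps only the largest 4-connected foreground component. This is an illustrative stand-in; practical implementations typically call a library labeling routine instead.

```python
import numpy as np
from collections import deque

def largest_connected_component(binary):
    """Return a mask containing only the largest 4-connected
    foreground component of a binary image."""
    binary = np.asarray(binary, dtype=bool)
    labels = np.zeros(binary.shape, dtype=int)
    best_label, best_size, next_label = 0, 0, 1
    h, w = binary.shape
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                # Flood-fill a new component and measure its size.
                size, q = 0, deque([(sy, sx)])
                labels[sy, sx] = next_label
                while q:
                    y, x = q.popleft()
                    size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                if size > best_size:
                    best_label, best_size = next_label, size
                next_label += 1
    return labels == best_label
```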
Similarly, in an embodiment of the disclosure, the method for obtaining a rib angle map on the right side to be screened according to the right rib edge binary map and the right chest superimposed image includes: performing morphological opening and closing operation and refinement treatment on the right rib edge binary image to obtain a right rib edge binary image after morphological treatment; performing AND operation on the morphologically processed right rib edge binary image and the gradient direction angle of each pixel in the right chest superposition image to obtain a rib edge angle image on the right side to be screened.
Similarly, in an embodiment of the disclosure, before the directional derivative template of the right chest image is constructed by using the directional derivative, set-scale Gaussian blur is performed on the right chest image to obtain a corresponding right chest Gaussian blur image; further, the directional derivative template of the right chest Gaussian blur image is constructed by using the directional derivative. In the rib edge boundary detection process of the right chest image, template traversal of the directional derivative is performed on the right chest Gaussian blur image by using the directional derivative template corresponding to the set weighted depth, and the template traversal result is superposed onto the right chest image to obtain a right chest Gaussian blur superimposed image; binarization processing is performed on the right chest Gaussian blur superimposed image to obtain a right rib edge binary image; and the right rib edge angle map to be screened is obtained according to the right rib edge binary image and the right chest Gaussian blur superimposed image.
b. Lung apex boundary detection. In an embodiment of the disclosure, the method for detecting the left lung apex boundary of the left chest image includes: determining a left lung apex detection area according to the left chest image; determining a left lung apex edge binary image according to the left lung apex detection area; and fitting with a quadratic function according to the left lung apex edge binary image to obtain the fitted left lung apex boundary.
In an embodiment of the present disclosure, the method of determining a left lung apex detection area from the left chest image comprises: detecting a first coordinate corresponding to the uppermost coordinate point of the rib edge of the left chest image; the region formed by the hypotenuse formed by the first coordinate and the coordinate point of the rightmost upper corner of the left chest image is configured as a left lung apex detection region.
In embodiments of the present disclosure and other possible embodiments: 1) Detecting the coordinates corresponding to the uppermost coordinate point of the rib edge part of the left chest image. 2) The rectangular area formed by the hypotenuse between this coordinate point and the rightmost upper corner coordinate point of the left chest image is the lung apex detection area. For the right lung, the rectangular area formed by the hypotenuse between the coordinates of the uppermost coordinate point of the rib edge part and the leftmost upper corner coordinate point of the right chest image is the right lung apex detection area. 3) Carrying out set-scale Gaussian filtering and contrast enhancement on the lung apex region obtained in step b.2 to obtain a filter-enhanced lung apex region image. 4) Determining a left lung apex edge binary image from the filter-enhanced lung apex region image according to the angle characteristics of the lung apex edge, using the same method as steps a.2 to a.5; the angle characteristic of the lung apex edge is configured as a set angle range. 5) Fitting with Hough-space parameters of a quadratic function according to the characteristics of the lung apex edge and the left lung apex edge binary image to obtain a fitted lung apex edge (line); the lung apex edge is characterized by a quadratic function opening downward.
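As an illustration of step b.5, a quadratic boundary can be fitted through the foreground pixels of the apex edge binary image. Note this sketch uses a plain least-squares fit rather than the Hough-space parameter fit named above, and the function name is an assumption.

```python
import numpy as np

def fit_apex_boundary(edge_binary):
    """Least-squares quadratic fit y = a*x^2 + b*x + c through the
    foreground pixels of a lung apex edge binary image (rows are y,
    columns are x). A simple stand-in for the Hough-space fit."""
    ys, xs = np.nonzero(np.asarray(edge_binary))
    a, b, c = np.polyfit(xs, ys, deg=2)
    return a, b, c
```

In image coordinates (y increasing downward), the "opening downward" constraint of the apex edge would be checked on the sign of the leading coefficient.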
Similarly, in an embodiment of the disclosure, the method for right lung tip boundary detection for the right chest image includes: determining a right lung apex detection area according to the right chest image; determining a right lung apex edge binary image according to the right lung apex detection area; and fitting by adopting a quadratic function according to the right lung apex edge binary image to obtain a fitted right lung apex boundary.
Similarly, in an embodiment of the disclosure, the method of determining a right lung apex detection region from the right chest image includes: detecting a second coordinate corresponding to the uppermost coordinate point of the rib edge of the right chest image; and the region formed by the hypotenuse formed by the second coordinate and the coordinate point of the leftmost upper corner of the right chest image is configured as the right lung apex detection region.
c. Mediastinum and transverse septum edge detection. In an embodiment of the present disclosure, a method for left lung mediastinum and lateral septum edge detection, respectively, of the left chest image includes: binarizing the left chest image to obtain a left chest binary image; performing edge detection on the left chest binary image to obtain a left chest edge binary image; obtaining a left chest edge angle diagram according to the left chest binary image and the gradient direction angle of each pixel in the left chest edge binary image; obtaining a left side diaphragm and mediastinum edge angle diagram after selection according to the obtained left side chest edge angle diagram and the edge angle range of the set diaphragm and mediastinum; and carrying out connected domain selection processing according to the selected left transverse diaphragm and mediastinum edge angle diagram to obtain the left lung mediastinum and transverse diaphragm edge corresponding to the maximum connected domain.
In an embodiment of the present disclosure, the method for binarizing the left chest image to obtain a left chest binary image includes: and performing contrast enhancement processing and maximum inter-class variance processing on the left chest Gaussian blur image corresponding to the left chest image to obtain a left chest binary image.
In an embodiment of the present disclosure, a method of determining a gradient direction angle for each pixel in the left chest edge binary image comprises: performing transverse gradient and longitudinal gradient calculation on the left chest Gaussian blur image corresponding to the left chest image to obtain a left chest transverse gradient map and a left chest longitudinal gradient map; and obtaining the gradient direction angle of each pixel in the Gaussian blur image of the left chest based on the transverse gradient image and the longitudinal gradient image of the left chest.
In embodiments of the present disclosure and other possible embodiments, a left lung mediastinum and diaphragm edge detection method for the left chest image includes: 1) performing contrast enhancement processing and maximum inter-class variance processing on the left chest Gaussian blur image obtained in step a.1 to obtain a left chest binary image; 2) sequentially performing morphological opening and closing operations and Canny edge detection on the result of step c.1 (the left chest binary image) to obtain a left chest edge binary image; 3) performing transverse gradient and longitudinal gradient calculation on the processed left chest Gaussian blur image obtained in step a.1 to obtain a transverse gradient map and a longitudinal gradient map, and calculating the gradient direction angle of each pixel in the left chest Gaussian blur image based on the transverse gradient map and the longitudinal gradient map; 4) performing an AND operation on the left chest binary image obtained in step c.1 and the left chest edge binary image obtained in step c.2, retaining the angles at edge pixel points, to obtain a left chest edge angle map; 5) selecting an appropriate angle range (the set diaphragm and mediastinum edge angle range) according to the edge characteristics of the diaphragm and mediastinum, and removing stray tissue edge information from the left chest edge angle map to obtain a selected left diaphragm and mediastinum edge angle map; 6) performing connected domain selection processing on the result of step c.5 (the selected left diaphragm and mediastinum edge angle map) to obtain the maximum connected domain, which is the diaphragm and mediastinum edge area.
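The transverse/longitudinal gradient and per-pixel gradient direction angle computation used in step c.3 (and in step a.6) might be sketched as follows, using simple central differences as an assumed gradient operator; the specific operator is not fixed by the text.

```python
import numpy as np

def gradient_direction_angles(img):
    """Per-pixel gradient direction angle in degrees, from transverse
    (horizontal) and longitudinal (vertical) central differences."""
    img = np.asarray(img, dtype=np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # transverse gradient
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0   # longitudinal gradient
    return np.degrees(np.arctan2(gy, gx))
```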
Similarly, in an embodiment of the present disclosure, a method of right lung mediastinum and lateral septum edge detection, respectively, of the right chest image includes: binarizing the right chest image to obtain a right chest binary image; performing edge detection on the right chest binary image to obtain a right chest edge binary image; obtaining a right chest edge angle diagram according to the right chest binary image and the gradient direction angle of each pixel in the right chest edge binary image; obtaining a right side diaphragm and mediastinum edge angle diagram after selection according to the obtained right side chest edge angle diagram and the edge angle range of the set diaphragm and mediastinum; and carrying out connected domain selection processing according to the selected right transverse diaphragm and mediastinum edge angle diagram to obtain right lung mediastinum and transverse diaphragm edges corresponding to the maximum connected domain.
Similarly, in an embodiment of the disclosure, the method for binarizing the right chest image to obtain a right chest binary image includes: and performing contrast enhancement processing and maximum inter-class variance processing on the right chest Gaussian blur image corresponding to the right chest image to obtain a right chest binary image.
Similarly, in an embodiment of the present disclosure, a method of determining a gradient direction angle for each pixel in the right chest edge binary image includes: performing transverse gradient and longitudinal gradient calculation on the right chest Gaussian blur image corresponding to the right chest image to obtain a transverse gradient map and a longitudinal gradient map of the right chest; and obtaining the gradient direction angle of each pixel in the Gaussian blurred image of the right chest based on the transverse gradient image and the longitudinal gradient image of the right chest.
(3) The connection of the diaphragmatic edge, the lung apex and the rib edge. In an embodiment of the disclosure, the method for obtaining a left lung segmented image according to a rib border, a lung apex border, a mediastinum and a transverse septum border corresponding to the left chest image includes: calculating a first left lung tip edge point and a left lung mediastinum edge point corresponding to the shortest distance between a lung tip boundary and a mediastinum edge in the left chest image; calculating a second left lung tip edge point and a first left lung rib edge point corresponding to the shortest distance between the lung tip boundary and the rib edge in the left chest image; calculating a second rib edge point and a diaphragm edge point corresponding to the shortest distance between the rib edge and the diaphragm edge in the left chest image; and obtaining a left lung segmentation image based on the first left lung apex edge point, the left lung mediastinum edge point, the second left lung apex edge point, the first left lung rib edge point, the second rib edge point and the diaphragmatic edge point.
Similarly, in an embodiment of the disclosure, the method for obtaining a right lung segmented image according to a rib edge boundary, a lung tip boundary, a mediastinum and a transverse septum edge corresponding to the right chest image includes: calculating a first right lung tip edge point and a right lung mediastinum edge point corresponding to the shortest distance between a lung tip boundary and a mediastinum edge in the right chest image; calculating a second right lung tip edge point and a first right lung rib edge point corresponding to the shortest distance between the lung tip boundary and the rib edge in the right chest image; calculating a second rib edge point and a diaphragm edge point corresponding to the shortest distance between the rib edge and the diaphragm edge in the right chest image; and obtaining a right lung segmentation image based on the first right lung apex edge point, the right lung mediastinum edge point, the second right lung apex edge point, the first right lung rib edge point, the second rib edge point and the diaphragmatic edge point.
In embodiments of the present disclosure and other possible embodiments, a method of connecting the diaphragm edge, lung apex and rib edge comprises: a. calculating the two points (the first lung apex edge point and the mediastinum edge point) with the shortest Euclidean distance between the lung apex edge (line) and the mediastinum edge (line), the two points each being one of the endpoints of the lung apex and mediastinum boundaries; b. calculating the two points (the second lung apex edge point and the first rib edge point) with the shortest Euclidean distance between the lung apex edge (line) and the rib edge (line), the two points each being one of the endpoints of the lung apex and rib edges, the lung apex region boundary being obtained together with the other lung apex endpoint obtained in step a; c. calculating the two points (the second rib edge point and the diaphragm edge point) with the shortest Euclidean distance between the rib edge (line) and the diaphragm edge (line), the two points each being one of the rib edge and diaphragm endpoints; the diaphragm boundary region can be obtained from the endpoint of the other diaphragm boundary obtained in step a, and the rib edge boundary region from the other rib edge endpoint obtained in step b; d. connecting the edge areas of the three parts obtained in steps a, b and c to obtain a closed lung field outline, and segmenting the lung field area according to the tissues inside and outside the outline border; e. processing the right lung field region in the same way, and finally mapping the lung field regions back into the original image to obtain the lung field segmentation of the original image.
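The shortest-Euclidean-distance pairing in steps a-c above can be sketched as a brute-force search over the two point sets, with curves given as arrays of (y, x) points; the function name is illustrative.

```python
import numpy as np

def closest_endpoints(curve_a, curve_b):
    """Return the pair of points (one per curve) with the shortest
    Euclidean distance, used to join apex, rib and diaphragm edges
    into a closed lung field contour."""
    a = np.asarray(curve_a, dtype=np.float64)
    b = np.asarray(curve_b, dtype=np.float64)
    # Pairwise distance matrix: d[i, j] = |a_i - b_j|
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    ia, ib = np.unravel_index(np.argmin(d), d.shape)
    return tuple(a[ia]), tuple(b[ib])
```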
In addition, the processing method provided by the embodiment of the disclosure further includes: respectively carrying out lung function assessment based on a plurality of left lung segmentation images and a plurality of right lung segmentation images which correspond to each other at a plurality of moments in the breathing process or in the breath-hold state; wherein the pulmonary function assessment comprises: assessment of lung field area during breathing and/or assessment of lung ventilation and/or assessment of lung blood flow in a breath-hold state.
In embodiments of the present disclosure and other possible embodiments, the method of lung field area assessment during breathing includes: acquiring two-dimensional DR left lung images and/or two-dimensional DR right lung images at multiple moments during breathing; determining the left lung area and/or the right lung area during breathing based on the two-dimensional DR left lung images and/or the two-dimensional DR right lung images, respectively; acquiring the age, height and sex information of the patient corresponding to the left lung area and/or right lung area during breathing, and determining a set left lung field area and a set right lung field area during breathing according to the age, height and sex information; and performing lung development assessment of the patient according to the patient's left lung field area, right lung field area, set left lung field area and set right lung field area during breathing. This improves the intelligent assessment level of DR lung images, addressing the problem that DR lung images are currently not widely applied to lung development assessment.
In an embodiment of the disclosure, the method of determining the left lung area and/or the right lung area during the breathing process based on the two-dimensional DR left lung image and/or the two-dimensional DR right lung image, respectively, at multiple moments during the breathing process comprises: acquiring a set area corresponding to a single pixel; respectively counting the number of left lung pixels in a two-dimensional DR left lung image and/or the number of right lung pixels in a two-dimensional DR right lung image at multiple moments in the respiratory process; and respectively calculating the left lung area and/or the right lung area corresponding to the left lung pixel number in the two-dimensional DR left lung image and/or the right lung pixel number in the right lung image in the two-dimensional DR right lung image at multiple moments in the respiratory process based on the set area.
In the embodiments of the present disclosure and other possible embodiments, those skilled in the art may configure the setting area corresponding to the single pixel according to actual needs.
In an embodiment of the disclosure, the method for calculating the number of left lung pixels in the two-dimensional DR left lung image and/or the left lung area and/or the right lung area corresponding to the number of right lung pixels in the right lung image in the two-dimensional DR right lung image at multiple times during the respiration process based on the set area includes: multiplying the set area by the number of left lung pixels in the two-dimensional DR left lung image at multiple moments in the respiratory process to obtain a corresponding left lung area in the respiratory process; and/or multiplying the set area by the number of right lung pixels in the two-dimensional DR right lung image at multiple moments in the breathing process to obtain the corresponding right lung area in the breathing process.
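The pixel-counting area computation described above reduces to multiplying the foreground pixel count by the set area of a single pixel. In the sketch below, the default pixel area is an illustrative assumption (a 0.14 mm detector pitch), not a value from the disclosure.

```python
def lung_area_from_mask(mask, pixel_area_mm2=0.0196):
    """Lung field area = number of foreground pixels x set area of a
    single pixel (here in mm^2). `mask` is any 2-D iterable of
    truthy/falsy pixel values."""
    count = sum(1 for row in mask for v in row if v)
    return count * pixel_area_mm2
```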
In embodiments of the present disclosure and other possible embodiments, the person skilled in the art may configure the set left lung field area and the set right lung field area according to actual needs. For example, the age information, height information, and sex information corresponding to the patient for the lung development assessment should be identical or substantially identical to the age information, height information, and sex information corresponding to the set left lung field area and the set right lung field area.
For example, in embodiments of the present disclosure and other possible embodiments, the method for determining the set left lung field area and/or the set right lung field area includes: acquiring the left lung field areas and right lung field areas of a plurality of samples with normal lung development, together with their age information, height information, and sex information; computing the average left lung field area and/or average right lung field area over the samples of the same sex whose heights fall within a first set deviation range and whose ages fall within a second set deviation range; and configuring the average left lung field area and/or the average right lung field area as the set left lung field area and/or the set right lung field area, respectively.
In the embodiments of the present disclosure and other possible embodiments, those skilled in the art may configure the first set deviation range and the second set deviation range according to actual needs. For example, the first set deviation range is configured to be 1-2 cm, and the second set deviation range is configured to be 1-3 years. That is, when computing the average left lung field area and/or the average right lung field area over samples of the same sex, the ages of the samples with normal lung development that contribute to the average differ by no more than 1-3 years, and their heights differ by no more than 1-2 cm.
In an embodiment of the disclosure, the method for performing the lung development assessment on the patient according to the left lung field area, the right lung field area, the set left lung field area, and the set right lung field area during the breathing process of the patient comprises: if the left lung field area during the breathing process is smaller than the corresponding set left lung field area, the patient's left lung is dysplastic; otherwise, the patient's left lung develops normally. Alternatively, the average left lung area corresponding to the left lung field areas during the breathing process and the set average left lung area corresponding to the set left lung field areas during the breathing process are calculated respectively; if the average left lung area is smaller than the set average left lung area, the patient's left lung is dysplastic; otherwise, the patient's left lung develops normally.
In an embodiment of the present disclosure, if the right lung field area during the breathing process is smaller than the corresponding set right lung field area, the patient's right lung is dysplastic; otherwise, the patient's right lung develops normally. Alternatively, the average right lung area corresponding to the right lung field areas during the breathing process and the set average right lung area corresponding to the set right lung field areas during the breathing process are calculated respectively; if the average right lung area is smaller than the set average right lung area, the patient's right lung is dysplastic; otherwise, the patient's right lung develops normally.
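The two comparison rules above (per-frame comparison, or comparison of averages) can be sketched for one lung as follows; the function names and the "dysplasia"/"normal" labels are illustrative, not terminology fixed by the disclosure:

```python
def assess_lung_development(areas, set_areas):
    """Per-frame rule: if any measured lung field area over the breath
    falls below its matched set (reference) area, report dysplasia."""
    dysplastic = any(a < s for a, s in zip(areas, set_areas))
    return "dysplasia" if dysplastic else "normal"

def assess_by_mean(areas, set_areas):
    """Alternative rule: compare the mean measured area against the
    mean of the set areas over the breathing process."""
    mean_a = sum(areas) / len(areas)
    mean_s = sum(set_areas) / len(set_areas)
    return "dysplasia" if mean_a < mean_s else "normal"
```

The same functions apply to the left or the right lung, each paired with its own set areas.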
In an embodiment of the present disclosure, the method further comprises: when the left lung development and/or the right lung development is found to be normal according to the left lung field area, the right lung field area, the set left lung field area, and the set right lung field area during the breathing process of the patient, acquiring the left lung area and/or the right lung area of the patient under deep inhalation and deep exhalation during the breathing process, and determining the corresponding left lung set area set value and/or right lung set area set value according to the age information, height information, and sex information; and performing the lung development assessment on the patient based on the left lung areas under deep inhalation and deep exhalation and the left lung set area set value, and/or based on the right lung areas and the right lung set area set value.
In an embodiment of the present disclosure and other possible embodiments, the method for determining the set left lung field area and/or the set right lung field area corresponding to deep inhalation and deep exhalation includes: acquiring, for a plurality of samples with normal lung development, the left lung field areas and/or right lung field areas under deep inhalation and deep exhalation together with their age information, height information, and sex information; computing the average left lung field area and/or average right lung field area over the samples of the same sex whose heights fall within the first set deviation range and whose ages fall within the second set deviation range; and configuring the average left lung field area and/or the average right lung field area as the set left lung field area and/or the set right lung field area corresponding to deep inhalation and deep exhalation, respectively.
Likewise, in the embodiments of the present disclosure and other possible embodiments, the first set deviation range and the second set deviation range may be configured by those skilled in the art according to actual needs. For example, the first set deviation range is configured to be 1-2 cm, and the second set deviation range is configured to be 1-3 years. That is, when computing the average left lung field area and/or the average right lung field area under deep inhalation and deep exhalation over samples of the same sex, the ages of the samples with normal lung development that contribute to the average differ by no more than 1-3 years, and their heights differ by no more than 1-2 cm.
In an embodiment of the disclosure, the method based on the left lung areas and the left lung set area set value includes: calculating a first difference between the left lung area under deep inhalation and the left lung area under deep exhalation; and performing the lung development assessment on the patient using the first difference and the left lung set area set value. The method for performing the lung development assessment on the patient using the first difference and the left lung set area set value comprises: if the first difference is greater than or equal to the left lung set area set value, the patient's left lung development is abnormal; otherwise, the patient's left lung develops normally.
In an embodiment of the disclosure, the method based on the right lung areas and the right lung set area set value includes: calculating a second difference between the right lung area under deep inhalation and the right lung area under deep exhalation; and performing the lung development assessment on the patient using the second difference and the right lung set area set value. The method for performing the lung development assessment on the patient using the second difference and the right lung set area set value comprises: if the second difference is greater than or equal to the right lung set area set value, the patient's right lung development is abnormal; otherwise, the patient's right lung develops normally.
In an embodiment of the present disclosure and other possible embodiments, the method for determining the left lung set area set value includes: calculating the difference between the average left lung field areas (the set left lung field areas) under deep inhalation and deep exhalation to obtain the left lung set area set value. Similarly, the method for determining the right lung set area set value comprises: calculating the difference between the average right lung field areas (the set right lung field areas) under deep inhalation and deep exhalation to obtain the right lung set area set value.
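The deep-breath comparison can be sketched as follows (function names are hypothetical; the comparison direction follows the rule exactly as stated in the disclosure):

```python
def lung_set_area_value(set_area_inhale, set_area_exhale):
    """Set area set value: difference between the set (cohort-average)
    lung field areas under deep inhalation and deep exhalation."""
    return set_area_inhale - set_area_exhale

def assess_deep_breath(area_inhale, area_exhale, set_area_value):
    """Rule as stated in the disclosure: a measured inhale-exhale area
    difference greater than or equal to the set value is reported as
    abnormal development; otherwise development is reported normal."""
    diff = area_inhale - area_exhale
    return "abnormal" if diff >= set_area_value else "normal"
```

The same pair of functions is applied once with left-lung values and once with right-lung values.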
In an embodiment of the disclosure, the method for assessing pulmonary blood flow in the breath-hold state includes: acquiring the left lung images and right lung images corresponding to a plurality of first DR lung images at multiple moments in the breath-hold state; determining, respectively, a plurality of pulmonary blood vessel region images corresponding to the left lung images and the right lung images; and performing silhouette (subtraction) processing on the pulmonary blood vessel region images corresponding to the left lung images and the right lung images to obtain a pulmonary blood flow image corresponding to the heartbeat.
In the embodiments of the present disclosure and other possible embodiments, before the two-dimensional DR left lung image and/or the two-dimensional DR right lung image at multiple moments during the breathing process or in the breath-hold state is acquired, the two-dimensional DR lung images at multiple moments during the breathing process or in the breath-hold state are acquired and segmented into left lung and right lung, so as to obtain the two-dimensional DR left lung images and two-dimensional DR right lung images at multiple moments.
In an embodiment of the present disclosure, a plurality of lung vessel region images corresponding to the left lung image and the right lung image are determined, respectively.
In an embodiment of the disclosure, the method for determining the pulmonary blood vessel region images corresponding to the left lung images and the right lung images respectively includes: obtaining a maximum intensity projection (MIP) algorithm or a maximum intensity projection model; and determining, respectively, a plurality of pulmonary blood vessel region images corresponding to the left lung images and the right lung images by using the maximum intensity projection algorithm or the maximum intensity projection model.
In an embodiment of the disclosure, before the plurality of pulmonary blood vessel region images corresponding to the left lung images and the right lung images are determined by using the maximum intensity projection algorithm or model, a Gaussian blur algorithm or Gaussian blur model is obtained; Gaussian blur processing is performed on the left lung images and right lung images corresponding to the plurality of first DR lung images by using the Gaussian blur algorithm or model, to obtain a plurality of corresponding Gaussian-blurred left lung images and Gaussian-blurred right lung images; and the pulmonary blood vessel region images corresponding to the plurality of Gaussian-blurred left lung images and the plurality of Gaussian-blurred right lung images are then determined by using the maximum intensity projection algorithm or model.
In the embodiments of the present disclosure and other possible embodiments, the maximum intensity projection (Maximum Intensity Projection, MIP) takes, for each pixel position, the maximum value of that pixel across the plurality of first DR lung images, i.e., across the frames at the multiple moments in the breath-hold state; the set of maximum values over all pixel positions determines the pulmonary blood vessel region images corresponding to the left lung images and the right lung images. Once the pulmonary blood vessel region images corresponding to the left lung images and the right lung images are determined, the pulmonary blood flow analysis can be further performed based on the blood vessels in those images.
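The per-pixel temporal maximum described above reduces to one array operation; this is a minimal sketch (the frame stack is hypothetical, and in practice each frame would first be Gaussian-blurred as described):

```python
import numpy as np

def vessel_mip(frames: np.ndarray) -> np.ndarray:
    """Maximum intensity projection over breath-hold frames:
    for each (row, col) position, take the maximum pixel value
    across the time axis. frames has shape (T, H, W)."""
    return frames.max(axis=0)

# Two toy 2x2 frames; the MIP keeps the per-pixel maximum.
frames = np.stack([
    np.array([[1, 2], [3, 4]]),
    np.array([[2, 1], [5, 0]]),
])
mip = vessel_mip(frames)  # [[2, 2], [5, 4]]
```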
In the embodiments of the present disclosure and other possible embodiments, silhouette processing is performed on the pulmonary blood vessel region images corresponding to the left lung images and the right lung images to obtain a pulmonary blood flow image corresponding to the heartbeat. The pulmonary blood flow image corresponding to the heartbeat comprises one or more of a distribution image and a flow velocity image of the pulmonary blood flow.
In the embodiments of the present disclosure and other possible embodiments, because the plurality of first DR lung images are acquired at multiple moments in the breath-hold state, the left lung area and the right lung area corresponding to those images can be considered unchanged across the moments, and likewise the air content in the left lung and the right lung does not change; during the breath-hold, however, the heart continues to beat, so a distribution image and/or a flow velocity image of the pulmonary blood flow corresponding to the heartbeat can be obtained.
In the embodiments of the present disclosure and other possible embodiments, image subtraction is a method of identifying small variations in pixel values that are difficult to evaluate by observing the images directly; by subtraction between adjacent frames and subtraction against specific frames among the plurality of first DR lung images at multiple moments, a distribution image and/or a flow velocity image of the pulmonary blood flow corresponding to the heartbeat can be obtained.
In an embodiment of the disclosure, the method for performing silhouette processing on the pulmonary blood vessel region images corresponding to the left lung images and the right lung images to obtain a pulmonary blood flow image corresponding to the heartbeat includes: configuring the pulmonary blood vessel region image corresponding to the diastole starting moment following a systole as a first basic image, and configuring the pulmonary blood vessel region images corresponding to the other moments as a plurality of first images to be processed; and subtracting the first basic image from each of the plurality of first images to be processed to obtain the pulmonary blood flow distribution images corresponding to the heartbeat.
In an embodiment of the disclosure, the method for performing silhouette processing on the pulmonary blood vessel region images corresponding to the left lung images and the right lung images to obtain a pulmonary blood flow image corresponding to the heartbeat further includes: performing silhouette processing on the pulmonary blood vessel region images at adjacent moments in the left lung images and the right lung images corresponding to the plurality of first DR lung images, that is, subtracting the pulmonary blood vessel region images at adjacent moments in the left lung images and the right lung images, respectively, to obtain a pulmonary blood flow velocity map corresponding to the heartbeat.
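Both subtraction schemes above (base-frame subtraction for the distribution image, adjacent-frame subtraction for the velocity map) can be sketched as follows; the function names are hypothetical, and frames are cast to a signed type so that negative differences are preserved:

```python
import numpy as np

def flow_distribution_images(vessel_frames, base_index):
    """Subtract one base frame (e.g. the diastole starting moment)
    from every other vessel-region frame."""
    base = vessel_frames[base_index].astype(np.int32)
    return [f.astype(np.int32) - base
            for i, f in enumerate(vessel_frames) if i != base_index]

def flow_velocity_images(vessel_frames):
    """Subtract each frame from the next (adjacent moments); the
    frame-to-frame change approximates a flow velocity map."""
    return [vessel_frames[i + 1].astype(np.int32)
            - vessel_frames[i].astype(np.int32)
            for i in range(len(vessel_frames) - 1)]

frames = [np.array([[1, 1]]), np.array([[3, 2]]), np.array([[6, 4]])]
dist = flow_distribution_images(frames, 0)  # [[2, 1]] and [[5, 3]]
vel = flow_velocity_images(frames)          # [[2, 1]] and [[3, 2]]
```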
In an embodiment of the disclosure, the method for performing silhouette processing on the pulmonary blood vessel region images corresponding to the left lung images and the right lung images to obtain a pulmonary blood flow image corresponding to the heartbeat further includes: configuring the pulmonary blood vessel region image corresponding to any moment within the systole or the diastole as a second basic image, and configuring the pulmonary blood vessel region images corresponding to the other moments within that systole or diastole as a plurality of second images to be processed; and subtracting the second basic image from each of the plurality of second images to be processed to obtain a systolic or diastolic pulmonary blood flow distribution image corresponding to the heartbeat. The multiple moments are configured to span a complete cardiac cycle formed by systole and diastole.
In an embodiment of the present disclosure, the method further comprises: before the left lung images and the right lung images corresponding to the plurality of first DR lung images at multiple moments in the breath-hold state are acquired, performing rib suppression or rib subtraction on the left lung images and the right lung images corresponding to the plurality of first DR lung images in the breath-hold state, respectively.
In an embodiment of the present disclosure, the method further comprises: displaying the blood flow in the pulmonary blood flow image corresponding to the heartbeat in a set color. The method for displaying the blood flow in the pulmonary blood flow image corresponding to the heartbeat in the set color comprises: acquiring the configuration corresponding to the set color; and displaying the blood flow in the pulmonary blood flow image corresponding to the heartbeat in the set color based on that configuration.
In an embodiment of the present disclosure, the configuration corresponding to the set color includes one or more of hue, saturation, and brightness; and/or the set color or the hue is configured as a red color system.
In the embodiments of the present disclosure and other possible embodiments, hue refers to the appearance of a color, that is, the commonly named colors such as red, orange, yellow, green, cyan, blue, and violet; hue is the primary basis for distinguishing colors and is independent of a color's intensity and brightness, purely expressing differences in color appearance. Saturation refers to the vividness of a color and is one of the important attributes affecting its final effect; saturation is also called the purity of a color, i.e., the ratio of the chromatic component to the achromatic component (gray) contained in the color, which determines its saturation and vividness. Brightness (lightness) refers to how light or dark a color is, and depends not only on the intensity of the light source but also on the reflectance of the object's surface.
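As one hypothetical way to realize the red-system display described above (the disclosure does not fix an implementation; function name, blending scheme, and the 50% transparency are assumptions for illustration), the normalized flow magnitude can be blended into the red channel of an RGB rendering of the grayscale frame:

```python
import numpy as np

def red_overlay(gray: np.ndarray, flow: np.ndarray, alpha: float = 0.5):
    """Blend normalized blood-flow magnitude into the red channel of a
    grayscale DR frame; alpha plays the role of the transparency."""
    g = gray.astype(np.float64) / 255.0
    rgb = np.stack([g, g, g], axis=-1)          # grayscale -> RGB
    mag = np.abs(flow).astype(np.float64)
    if mag.max() > 0:
        mag /= mag.max()                        # normalize to [0, 1]
    rgb[..., 0] = (1 - alpha) * rgb[..., 0] + alpha * mag  # red channel
    return np.clip(rgb, 0.0, 1.0)

gray = np.full((2, 2), 128, dtype=np.uint8)
flow = np.array([[0, 0], [0, 10]])
rgb = red_overlay(gray, flow, alpha=0.5)
```

Pixels with flow appear redder; pixels without flow keep their gray value.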
In an embodiment of the present disclosure, the method for assessing pulmonary ventilation during the breathing process includes: acquiring the left lung images and right lung images corresponding to a plurality of DR lung images during the breathing process, a plurality of registration transformation matrices for the breathing process, and a preset air threshold interval; performing registration operations on the left lung images and right lung images corresponding to the plurality of DR lung images by using the plurality of registration transformation matrices, to obtain the left lung registration images and right lung registration images corresponding to the plurality of DR lung images; and determining the lung ventilation areas corresponding to the plurality of DR lung images at multiple moments during the breathing process based on the preset air threshold interval and the left lung registration images and right lung registration images.
In an embodiment of the disclosure, a left lung image and a right lung image corresponding to a plurality of DR lung images in a respiratory process, a plurality of registration transformation matrices in the respiratory process, and a preset air threshold interval are acquired. Wherein the plurality of DR lung images are a plurality of DR lung images in the respiratory process; the left lung image and the right lung image corresponding to the DR lung images in the respiratory process are a two-dimensional DR left lung image and a two-dimensional DR right lung image.
In the embodiments of the present disclosure and other possible embodiments, before the two-dimensional DR left lung image and/or the two-dimensional DR right lung image at multiple moments during the breathing process or in the breath-hold state is acquired, the two-dimensional DR lung images at multiple moments during the breathing process or in the breath-hold state are acquired and segmented into left lung and right lung, to obtain the two-dimensional DR left lung images and two-dimensional DR right lung images at multiple moments. For example, the two-dimensional DR lung images at multiple moments in the breath-hold state may be configured as the plurality of first DR lung images, and the two-dimensional DR lung images at multiple moments during the breathing process may be configured as a plurality of second DR lung images.
In an embodiment of the disclosure, before acquiring a left lung image and a right lung image corresponding to a plurality of DR lung images during respiration, rib suppression or rib subtraction is performed on the left lung image and the right lung image corresponding to the plurality of DR lung images during respiration, respectively.
In the embodiments of the present disclosure and other possible embodiments, those skilled in the art may configure the preset air threshold interval according to actual needs. The present disclosure also proposes a method for determining the preset air threshold interval before it is obtained, including: determining, respectively, the minimum preset air threshold corresponding to the left lung images and right lung images corresponding to the plurality of DR lung images during the breathing process; and determining the maximum preset air threshold corresponding to those left lung images and right lung images based on the minimum preset air threshold and a set threshold step.
In embodiments of the present disclosure and other possible embodiments, the set threshold step may be configured by those skilled in the art according to actual needs, e.g., 2, 3, 5, or 10. Meanwhile, the minimum pixel threshold corresponding to the left lung images and right lung images corresponding to the plurality of DR lung images during the breathing process is configured as the minimum preset air threshold.
In an embodiment of the disclosure, the method for determining the maximum preset air threshold corresponding to the left lung images and right lung images corresponding to the plurality of DR lung images during the breathing process, based on the minimum preset air threshold and the set threshold step, includes: displaying, respectively, the left lung images and right lung images corresponding to the plurality of DR lung images during the breathing process; performing air identification on the displayed left lung images and right lung images based on the minimum preset air threshold and the set threshold step; and determining the maximum preset air threshold corresponding to the left lung images and right lung images according to the air identifications.
In an embodiment of the present disclosure and other possible embodiments, the method for performing air identification on the displayed left lung images and right lung images based on the minimum preset air threshold and the set threshold step includes: acquiring an accumulation count K, where K is greater than or equal to 1; adding K times the set threshold step to the minimum preset air threshold, so as to obtain the left lung images and right lung images corresponding to different accumulation counts K; and performing air identification on the displayed left lung images and right lung images.
In an embodiment of the disclosure, the method for performing air identification on the displayed left lung images and right lung images based on the minimum preset air threshold and the set threshold step includes: determining an air threshold interval to be displayed based on the minimum preset air threshold and the set threshold step; and performing air identification on the displayed left lung images and right lung images based on the air threshold interval to be displayed.
In an embodiment of the present disclosure and other possible embodiments, the method for determining the air threshold interval to be displayed based on the minimum preset air threshold and the set threshold step includes: acquiring an accumulation count K, where K is greater than or equal to 1; and adding K times the set threshold step to the minimum preset air threshold, so as to obtain the air threshold intervals to be displayed corresponding to different accumulation counts K. Air identification is then performed on the displayed left lung images and right lung images based on the air threshold interval to be displayed.
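The interval construction described above (upper bound = minimum preset air threshold + K times the set step) can be sketched as a small helper; the function name and the example values are illustrative only:

```python
def air_threshold_interval(min_air_threshold: int, step: int, k: int):
    """Air threshold interval to display after K accumulations:
    [min_air_threshold, min_air_threshold + K * step], with K >= 1."""
    if k < 1:
        raise ValueError("accumulation count K must be >= 1")
    return (min_air_threshold, min_air_threshold + k * step)

# Hypothetical minimum preset air threshold 40 and step 5:
interval = air_threshold_interval(40, 5, 3)  # (40, 55)
```

Increasing K widens the displayed interval in fixed steps, which is how the operator converges on the maximum preset air threshold.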
In an embodiment of the disclosure, the method for determining the air threshold interval to be displayed based on the minimum preset air threshold and the set threshold step includes: determining the maximum pixel threshold corresponding to the left lung images and right lung images corresponding to the plurality of DR lung images during the breathing process; displaying a threshold slider spanning the minimum preset air threshold and the maximum pixel threshold; and adjusting the threshold on the slider in increments of the set threshold step to determine the air threshold interval to be displayed. The air threshold interval to be displayed is configured as the interval between the minimum preset air threshold and the air threshold on the slider after adjustment.
In an embodiment of the present disclosure, a method for performing air identification on a left lung image and a right lung image corresponding to the displayed multiple DR lung images includes: acquiring a first configuration color and/or a first configuration transparency corresponding to the air identifier; and carrying out air identification on the left lung image and the right lung image corresponding to the displayed DR lung images based on the first configuration color and/or the first configuration transparency.
A registration operation is then performed on the left lung images and right lung images corresponding to the plurality of DR lung images by using the plurality of registration transformation matrices for the breathing process, to obtain the left lung registration images and right lung registration images corresponding to the plurality of DR lung images.
In an embodiment of the present disclosure, the method for performing the registration operation on the left lung images and right lung images corresponding to the plurality of DR lung images by using the plurality of registration transformation matrices for the breathing process, to obtain the left lung registration images and right lung registration images, includes: configuring the DR lung image at a first moment during the breathing process as a fixed image, and configuring the DR lung image at a second moment, the moment immediately following the first moment, as a floating image; and performing the registration operation, using the plurality of registration transformation matrices, on the fixed image and the floating image, or on the left lung images and right lung images corresponding to the fixed image and to the floating image, to obtain the left lung registration images and right lung registration images corresponding to the plurality of DR lung images.
In an embodiment of the present disclosure, before the plurality of registration transformation matrices for the breathing process are acquired, a method of determining them includes: registering adjacent DR lung images among the plurality of DR lung images during the breathing process, to obtain the corresponding plurality of registration transformation matrices. Alternatively, the adjacent left lung images and the adjacent right lung images corresponding to the plurality of DR lung images during the breathing process are registered respectively, to obtain a plurality of registration transformation matrices for the left lung images and a plurality of registration transformation matrices for the right lung images. The registration operation is then performed on the left lung images and right lung images corresponding to the plurality of DR lung images by using the registration transformation matrices for the left lung images and those for the right lung images, to obtain the left lung registration images and right lung registration images corresponding to the plurality of DR lung images.
In embodiments of the present disclosure and other possible embodiments, the registration methods used by the present disclosure may employ existing registration algorithms or models, such as one or more of the SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), and ORB (Oriented FAST and Rotated BRIEF) registration algorithms or models, or other registration algorithms or models based on convolutional neural networks. For example, a convolutional-neural-network-based registration algorithm or model may be configured as a VGG-network-based registration algorithm or model.
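As a minimal numpy-only stand-in for the feature-based (SIFT/SURF/ORB) or CNN registration named above, translation-only registration can be done by phase correlation; this sketch only recovers a circular integer shift and is purely illustrative, not the disclosure's method:

```python
import numpy as np

def estimate_shift(fixed: np.ndarray, floating: np.ndarray):
    """Phase correlation: returns the (row, col) shift to apply
    (via np.roll) to `floating` so that it aligns with `fixed`."""
    F = np.fft.fft2(fixed)
    G = np.fft.fft2(floating)
    cross = F * np.conj(G)
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real         # delta peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = np.array(peak, dtype=float)
    for i, n in enumerate(fixed.shape):     # wrap large shifts negative
        if shifts[i] > n // 2:
            shifts[i] -= n
    return tuple(shifts)

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
floating = np.roll(fixed, (3, -4), axis=(0, 1))
shift = estimate_shift(fixed, floating)     # (-3.0, 4.0)
```

Feature-based methods additionally recover rotation and scale via matched keypoints and a fitted homography, which this translation-only sketch does not attempt.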
In an embodiment of the disclosure, a lung ventilation area corresponding to a plurality of DR lung images at a plurality of moments in the respiratory process is determined based on the preset air threshold interval and a left lung registration image and a right lung registration image corresponding to the plurality of DR lung images, respectively.
In an embodiment of the disclosure, the method for determining a lung ventilation area corresponding to a plurality of DR lung images at multiple times in the respiratory process based on the preset air threshold interval and the left lung registration image and the right lung registration image corresponding to the plurality of DR lung images, respectively, includes: based on the preset air threshold interval, respectively determining air identification areas corresponding to the DR lung images; and determining lung ventilation areas corresponding to the DR lung images at multiple moments in the breathing process based on the air identification areas corresponding to the DR lung images respectively.
In an embodiment of the present disclosure and other possible embodiments, the method for determining the air identification areas corresponding to the DR lung images based on the preset air threshold interval includes: for the pixels of the DR lung images whose values fall within the preset air threshold interval, determining the region occupied by those pixels as the air identification area corresponding to the DR lung images.
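The in-interval test above, and the conversion of the resulting air identification region into a ventilation area, can be sketched as follows (function names and the example threshold values are illustrative assumptions):

```python
import numpy as np

def air_mask(image: np.ndarray, thr_min: int, thr_max: int) -> np.ndarray:
    """Air identification region: pixels whose values fall inside the
    preset air threshold interval [thr_min, thr_max]."""
    return (image >= thr_min) & (image <= thr_max)

def ventilation_area_mm2(image, thr_min, thr_max, pixel_area_mm2):
    """Lung ventilation area for one registered frame: size of the air
    identification region times the per-pixel area."""
    n = np.count_nonzero(air_mask(image, thr_min, thr_max))
    return float(n) * pixel_area_mm2

img = np.array([[10, 45, 50],
                [60, 42, 90]])
mask = air_mask(img, 40, 50)                 # 3 pixels in [40, 50]
area = ventilation_area_mm2(img, 40, 50, 0.0196)
```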
In an embodiment of the present disclosure, the method for displaying a lung ventilation area corresponding to a plurality of DR lung images at a plurality of moments in time during the respiration includes: acquiring a first configuration color and/or a first configuration transparency corresponding to the air identification area; and displaying the air identification areas in the left lung image and the right lung image corresponding to the displayed DR lung images based on the first configuration color and/or the first configuration transparency respectively.
In embodiments of the present disclosure and other possible embodiments, one skilled in the art may configure the first configuration color and/or the first configuration transparency according to actual needs. For example, the first configuration color may be configured as a blue series; meanwhile, the first configuration transparency is configured to be 50%.
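A minimal alpha-blending sketch of such an overlay; the function name is illustrative, while the blue tint `(0, 0, 255)` and the 50% transparency mirror the example configuration above:

```python
import numpy as np

def overlay_region(gray, mask, color=(0, 0, 255), alpha=0.5):
    """Blend a configured color, at a configured transparency, over the
    `mask` pixels of an 8-bit grayscale image; returns an RGB image."""
    rgb = np.stack([gray] * 3, axis=-1).astype(np.float32)
    tint = np.array(color, dtype=np.float32)
    # alpha-blend only the masked (e.g. air identification) pixels
    rgb[mask] = (1 - alpha) * rgb[mask] + alpha * tint
    return rgb.astype(np.uint8)
```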
In addition, an embodiment of the disclosure further includes or applies the lung ventilation areas corresponding to the plurality of DR lung images at multiple moments in the respiratory process obtained by the above lung ventilation determining method; and determines air retention areas of the left lung images and the right lung images corresponding to the plurality of DR lung images in the respiratory process based on the left lung images and the right lung images corresponding to the plurality of DR lung images in the respiratory process and the lung ventilation areas corresponding to the plurality of DR lung images at multiple moments in the respiratory process.
In an embodiment of the disclosure, the method for determining an air retention area of a left lung image and a right lung image corresponding to a plurality of DR lung images in the respiratory process based on the left lung image and the right lung image corresponding to the plurality of DR lung images in the respiratory process and a lung ventilation area corresponding to the plurality of DR lung images at a plurality of moments in the respiratory process includes: determining lung non-ventilation areas corresponding to the DR lung images at multiple times in the breathing process based on the left lung image and the right lung image corresponding to the DR lung images at multiple times in the breathing process and the lung ventilation areas corresponding to the DR lung images at multiple times in the breathing process; and determining air retention areas of left lung images and right lung images corresponding to the DR lung images in the breathing process based on the preset air threshold interval and the lung non-ventilation area.
In an embodiment of the disclosure, the method for determining the lung non-ventilation areas corresponding to the plurality of DR lung images in the respiratory process based on the left lung image and the right lung image corresponding to the plurality of DR lung images in the respiratory process and the lung ventilation areas corresponding to the plurality of DR lung images at multiple times in the respiratory process includes: subtracting the lung ventilation areas corresponding to the plurality of DR lung images at multiple times in the respiratory process from the left lung image and the right lung image corresponding to the plurality of DR lung images in the respiratory process, so as to obtain the lung non-ventilation areas corresponding to the plurality of DR lung images at multiple times in the respiratory process; and configuring the lung non-ventilation areas corresponding to the plurality of DR lung images at multiple times in the respiratory process as the air retention areas of the left lung images and the right lung images corresponding to the plurality of DR lung images in the respiratory process.
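Reading this subtraction as a set difference of binary masks (the mask representation is an assumption; the disclosure does not fix a data format), the step can be sketched as:

```python
import numpy as np

def air_retention_mask(lung_mask, ventilation_mask):
    """Lung non-ventilation (air retention) region: lung pixels that are
    not marked as ventilated."""
    return lung_mask & ~ventilation_mask
```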
In an embodiment of the present disclosure, further comprising: displaying the air retention areas of the left lung image and the right lung image corresponding to the DR lung images in the breathing process, including: acquiring a second configuration color and/or a second configuration transparency corresponding to the air retention area; and displaying the air retention areas of the left lung image and the right lung image corresponding to the displayed DR lung images based on the second configuration color and/or the second configuration transparency.
In embodiments of the present disclosure and other possible embodiments, one skilled in the art may configure the second configuration color and/or the second configuration transparency according to actual needs. For example, the second configuration color may be configured as a yellow color family; meanwhile, the second configuration transparency is configured to be 50%.
In an embodiment of the present disclosure, further comprising: including or applying the pulmonary blood flow image corresponding to the heartbeat obtained by the above detection method; acquiring the left lung image and the right lung image corresponding to a plurality of second DR lung images at multiple moments in the breathing process of the patient, a plurality of registration transformation matrices in the breathing process, and a preset air threshold interval; performing a registration operation on the left lung image and the right lung image corresponding to the plurality of second DR lung images, respectively, by using the plurality of registration transformation matrices in the breathing process, to obtain a left lung registration image and a right lung registration image corresponding to the plurality of second DR lung images; determining lung ventilation images corresponding to the plurality of second DR lung images at multiple moments in the breathing process based on the preset air threshold interval and the left lung registration image and the right lung registration image corresponding to the plurality of second DR lung images, respectively; and determining the ventilation-perfusion ratio of the patient at multiple moments based on the lung blood flow image and the lung ventilation images at the multiple moments, respectively.
In the embodiments of the present disclosure and other possible embodiments, for the segmentation of the left lung and the right lung of the plurality of second DR lung images at multiple times during the respiration process, reference may be made to the detailed description of the left lung and right lung segmentation method above.
In an embodiment of the present disclosure, the method of determining a ventilation-perfusion ratio of the patient at multiple moments based on the lung blood flow image and the lung ventilation image at multiple moments, respectively, comprises: respectively determining a plurality of lung blood flow areas corresponding to the lung blood flow images at multiple times and a plurality of lung ventilation areas corresponding to the lung ventilation images at multiple times; and respectively calculating the ratio of the lung blood flow areas to the corresponding lung ventilation areas to obtain the ventilation and perfusion ratio of the patient at multiple moments.
In an embodiment of the present disclosure, the method for calculating the ratios of the lung blood flow areas to the corresponding lung ventilation areas to obtain the ventilation-perfusion ratio of the patient at multiple moments includes: determining, among the multiple moments, the moments corresponding to systole and diastole, respectively; and calculating the ratio of the lung blood flow area to the lung ventilation area at the same systolic or diastolic moment, respectively, to obtain the ventilation-perfusion ratio of the patient at the multiple moments.
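A sketch of this phase-matched ratio computation, assuming the areas have already been measured per moment; the tuple layout and field names are illustrative, not from the disclosure:

```python
def ventilation_perfusion_ratios(samples):
    """samples: list of (time, phase, blood_flow_area, ventilation_area)
    tuples, with phase being 'systole' or 'diastole'. Returns the per-moment
    ratio of lung blood flow area to lung ventilation area, keyed by
    (time, phase), pairing areas taken at the same cardiac phase."""
    ratios = {}
    for time, phase, q_area, v_area in samples:
        if v_area > 0:  # skip moments with an empty ventilation region
            ratios[(time, phase)] = q_area / v_area
    return ratios
```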
In an embodiment of the present disclosure, before the acquiring a plurality of registration transformation matrices in the respiratory process, a method of determining the plurality of registration transformation matrices includes: registering adjacent DR lung images of the second DR lung images in the breathing process respectively to obtain a plurality of registration transformation matrixes in the corresponding breathing process; or, registering the adjacent left lung image and right lung image corresponding to the second DR lung images in the breathing process respectively to obtain a plurality of registration transformation matrixes corresponding to the left lung images in the breathing process and a plurality of registration transformation matrixes corresponding to the right lung images in the breathing process; and further, performing registration operation on the left lung image and the right lung image corresponding to the plurality of second DR lung images by using a plurality of registration transformation matrixes corresponding to the left lung image in the breathing process and a plurality of registration transformation matrixes corresponding to the right lung image in the breathing process, so as to obtain a left lung registration image and a right lung registration image corresponding to the plurality of second DR lung images.
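Assuming the pairwise transforms are expressed as 3x3 homogeneous matrices (the disclosure does not fix a representation), chaining the adjacent-frame registrations described above into frame-to-reference transforms can be sketched as:

```python
import numpy as np

def compose_to_reference(pairwise):
    """Chain adjacent-frame transforms into frame->reference transforms.
    pairwise[i] is a 3x3 homogeneous matrix mapping frame i+1 onto frame i;
    the returned list maps each frame onto frame 0 (the reference)."""
    transforms = [np.eye(3)]
    for t in pairwise:
        # frame k -> frame 0 is the product of all transforms up to frame k
        transforms.append(transforms[-1] @ t)
    return transforms
```

The same chaining could be applied separately to the left-lung and right-lung matrix sequences mentioned above.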
In embodiments of the present disclosure, as well as other possible embodiments, the registration method used in the present disclosure may employ existing registration algorithms or models, such as one or more of the SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), or ORB (Oriented FAST and Rotated BRIEF) registration algorithms or models, or other convolutional neural network-based registration algorithms or models. For example, a convolutional neural network-based registration algorithm or model may be configured as a VGG network-based registration algorithm or model.
In an embodiment of the disclosure, before acquiring the left lung image and the right lung image corresponding to the plurality of second DR lung images during respiration, rib suppression or rib subtraction is performed on the left lung image and the right lung image corresponding to the plurality of second DR lung images during respiration, respectively.
The main execution subject of the DR image processing method may be a DR image processing apparatus, for example, the DR image processing method may be executed by a terminal device or a server or other processing device, wherein the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like. In some possible implementations, the DR image processing method may be implemented by a processor invoking computer readable instructions stored in a memory.
It will be appreciated by those skilled in the art that, in the above methods of the specific embodiments, the written order of the steps does not imply a strict execution order; the specific execution order of the steps should be determined by their functions and possible internal logic.
Fig. 2 shows a block diagram of a DR image processing apparatus according to an embodiment of the present disclosure. As shown in Fig. 2, the apparatus includes: an acquiring unit 101, configured to acquire a DR image to be processed and determine, according to the DR image to be processed, whether the DR image to be processed is a lung image; and a detection unit 102, configured to, if the image is a lung image, detect the chest cavity of the DR image to be processed and remove information outside the chest cavity in the DR image to be processed.
In some embodiments, the functions or modules included in the apparatus provided by the embodiments of the present disclosure may be used to perform the method described in the DR image processing method embodiment, and the specific implementation of the method may refer to the description of the DR image processing method embodiment, which is not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described DR image processing method. The computer readable storage medium may be a non-volatile computer readable storage medium.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above DR image processing method. The electronic device may be provided as a terminal, a server, or a device of another form.
Fig. 3 is a block diagram of an electronic device 800, according to an example embodiment. For example, electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 3, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen between the electronic device 800 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the electronic device 800 and a relative positioning of components, such as a display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of a user's contact with the electronic device 800, an orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the electronic device 800 and other devices, either wired or wireless. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi,2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including computer program instructions executable by processor 820 of electronic device 800 to perform the above-described methods.
Fig. 4 is a block diagram illustrating an electronic device 1900 according to an example embodiment. For example, electronic device 1900 may be provided as a server. Referring to FIG. 4, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: portable computer disks, hard disks, Random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), Static Random Access Memory (SRAM), portable compact disk read-only memory (CD-ROM), Digital Versatile Disks (DVD), memory sticks, floppy disks, and mechanical encoding devices such as punch cards or raised structures in grooves having instructions stored thereon, as well as any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure can be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object oriented programming languages such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of the computer readable program instructions, the electronic circuitry being able to execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvement of the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A DR image processing method, comprising:
acquiring a DR image to be processed, and determining whether the DR image to be processed is a lung image or not according to the DR image to be processed;
and if the DR image is the lung image, detecting the thoracic cavity of the DR image to be processed, and removing the information outside the thoracic cavity in the DR image to be processed.
2. The processing method according to claim 1, characterized in that the method of performing DR image processing on the DR image to be processed comprises:
respectively calculating a plurality of first gradient magnitudes corresponding to the transverse direction of each pixel point and a plurality of second gradient magnitudes corresponding to the longitudinal direction of each pixel point in the DR image to be processed;
determining a plurality of total gradient magnitudes based on the plurality of first gradient magnitudes and the plurality of second gradient magnitudes;
integrating the plurality of first gradient magnitudes corresponding to the transverse direction and the plurality of second gradient magnitudes corresponding to the longitudinal direction, each along the direction perpendicular to its gradient direction, to obtain a plurality of first integral values and a plurality of second integral values;
integrating the plurality of total gradient magnitudes along the longitudinal direction and the transverse direction, respectively, to obtain a plurality of third integral values and a plurality of fourth integral values;
calculating a plurality of first local maxima corresponding to the plurality of first ratios, and calculating a plurality of first local minima and a plurality of second local minima corresponding to the plurality of first ratios and the plurality of second ratios;
determining a chest image corresponding to the DR image to be processed according to the plurality of first local maxima, the plurality of first local minima, the plurality of second local minima and the chest features; wherein the chest features can be configured as a first segmentation position of a neck or a shoulder corresponding to the chest and second segmentation positions on both sides of the chest; and/or,
the method for respectively calculating a plurality of first gradient magnitudes corresponding to the transverse direction of each pixel point and a plurality of second gradient magnitudes corresponding to the longitudinal direction of each pixel point in the DR image to be processed comprises the following steps:
acquiring a gradient operator;
respectively calculating, by using the gradient operator, a plurality of first gradient magnitudes corresponding to the transverse direction of each pixel point and a plurality of second gradient magnitudes corresponding to the longitudinal direction of each pixel point in the DR image to be processed; and/or,
the method of determining a plurality of total gradient magnitudes based on the plurality of first gradient magnitudes and the plurality of second gradient magnitudes, comprising: respectively calculating a plurality of first square sums corresponding to the plurality of first gradient magnitudes and a plurality of second square sums corresponding to the plurality of second gradient magnitudes, and determining the plurality of total gradient magnitudes based on the plurality of first square sums and the plurality of second square sums; and/or,
the method of determining the plurality of total gradient magnitudes based on the plurality of first square sums and the plurality of second square sums, comprising: summing the plurality of first square sums and the plurality of second square sums respectively, and taking the square root of the sums to obtain the plurality of total gradient magnitudes; and/or,
before the calculating of the plurality of first local maxima corresponding to the plurality of first ratios, and of the plurality of first local minima and the plurality of second local minima corresponding to the plurality of first ratios and the plurality of second ratios: determining, based on the plurality of first integral values and the plurality of third integral values, whether each of a plurality of first ratios of the plurality of first integral values to the plurality of third integral values in the transverse direction is retained; and determining, based on the plurality of second integral values and the plurality of fourth integral values, whether each of a plurality of second ratios of the plurality of second integral values to the plurality of fourth integral values in the longitudinal direction is retained; and further calculating the plurality of first local maxima corresponding to the retained first ratios, and calculating the plurality of first local minima and the plurality of second local minima corresponding to the retained first ratios and the retained second ratios; and/or,
calculating differential derivatives of the plurality of first curves corresponding to the plurality of first ratios to obtain the plurality of first local maxima and the plurality of first local minima; and calculating differential derivatives of the plurality of second curves corresponding to the plurality of second ratios to obtain the plurality of second local minima.
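The differential-derivative extremum search on a ratio curve can be sketched minimally: a sign change of the first difference marks a local maximum or minimum (plateau handling is omitted for brevity and would need to be defined in a real implementation).

```python
def local_extrema(curve):
    """Local maxima and minima of a 1-D curve (e.g. the ratio curves
    in the claims) found via sign changes of the first difference."""
    maxima, minima = [], []
    for i in range(1, len(curve) - 1):
        d_prev = curve[i] - curve[i - 1]   # backward difference
        d_next = curve[i + 1] - curve[i]   # forward difference
        if d_prev > 0 and d_next < 0:
            maxima.append(i)               # rising then falling: maximum
        elif d_prev < 0 and d_next > 0:
            minima.append(i)               # falling then rising: minimum
    return maxima, minima
```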
3. The processing method according to claim 2, wherein the determining a chest image corresponding to the DR image to be processed according to the plurality of first local maxima, the plurality of first local minima, the plurality of second local minima and the chest features comprises:
determining second segmentation positions on both sides of the thorax based on the plurality of first local maxima and the chest features;
determining a first segmentation position of the neck or shoulder based on the plurality of second local minima and the chest features;
and determining the chest image corresponding to the DR image to be processed according to the first segmentation position and the second segmentation positions.
4. The processing method according to claim 3, wherein the method of determining the second segmentation positions on both sides of the thorax based on the plurality of first local maxima and the chest features comprises:
determining, according to the center line of the DR image to be processed, the maximum value of all the first local maxima on one side of the center line, and configuring the position information corresponding to the maximum value as the to-be-determined segmentation position point corresponding to one side of the thorax;
determining, according to the center line of the DR image to be processed, the minimum value of all the first local minima on the other side of the center line, and configuring the position information corresponding to the minimum value as the to-be-determined segmentation position point corresponding to the other side of the thorax; and/or,
the method of determining the first segmentation position of the neck or shoulder based on the plurality of second local minima and the chest features comprises: configuring the position information corresponding to the minimum value among the plurality of second local minima as the first segmentation position of the neck or shoulder.
5. The processing method according to any one of claims 1 to 4, wherein the chest image obtained by performing thoracic cavity detection on the DR image to be processed is configured as a lung image to be segmented; and segmentation of the left lung and the right lung is performed based on the lung image to be segmented; and/or,
the DR image to be processed is filtered before thoracic cavity detection is performed on it, and the filtered image is downsampled to a set size; and/or,
image enhancement by logarithmic transformation is performed on the DR image to be processed of the set size to obtain an enhanced DR image to be processed.
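The filtering, downsampling-to-set-size and logarithmic-enhancement steps above might look as follows. The 3x3 mean filter, nearest-neighbour resampling and 256x256 target size are assumed stand-ins; the claims leave the filter, resampler and size unspecified.

```python
import numpy as np

def preprocess_dr(img, target=(256, 256)):
    """Sketch of the claimed preprocessing: smoothing filter,
    downsampling to a set size, and logarithmic-transform enhancement.
    Filter type, resampler and target size are assumptions."""
    img = img.astype(float)
    h, w = img.shape
    # simple 3x3 mean filter as the smoothing step
    pad = np.pad(img, 1, mode="edge")
    sm = sum(pad[di:di + h, dj:dj + w]
             for di in range(3) for dj in range(3)) / 9.0
    # nearest-neighbour downsampling to the set size
    ri = np.arange(target[0]) * h // target[0]
    rj = np.arange(target[1]) * w // target[1]
    small = sm[np.ix_(ri, rj)]
    # logarithmic transform, rescaled to [0, 1]
    log_img = np.log1p(small)
    return log_img / log_img.max() if log_img.max() > 0 else log_img
```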
6. The processing method according to claim 5, wherein the lung image to be segmented is divided into a left chest image and a right chest image;
and left lung segmentation and right lung segmentation are performed based on the left chest image and the right chest image, respectively; and/or,
the method of performing left lung and right lung segmentation based on the left chest image and the right chest image, respectively, comprises:
respectively detecting the rib edge boundary, the lung apex boundary, the mediastinum edge and the diaphragm edge of the left chest image and the right chest image;
obtaining a left lung segmentation image according to the rib edge boundary, the lung apex boundary, the mediastinum edge and the diaphragm edge corresponding to the left chest image;
obtaining a right lung segmentation image according to the rib edge boundary, the lung apex boundary, the mediastinum edge and the diaphragm edge corresponding to the right chest image; and/or,
the method for detecting the rib edge boundary of the left chest image comprises:
constructing a directional derivative template of the left chest image by using the directional derivative, and configuring a set weighting depth for the directional derivative template;
performing a directional derivative template traversal of the left chest image by using the directional derivative template with the set weighting depth, and superimposing the traversal result onto the left chest image to obtain a left chest superimposed image;
performing binarization processing on the left chest superimposed image to obtain a left rib edge binary image;
obtaining a left rib edge angle map to be screened according to the left rib edge binary image and the left chest superimposed image;
obtaining a screened left rib edge angle map based on the left rib edge angle map to be screened and a first set rib edge angle;
performing connected domain selection on the screened left rib edge angle map to obtain the rib edge boundary corresponding to the maximum connected domain; and/or,
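The directional derivative template traversal and superposition could be sketched like this. Composing the template as cos(θ)·Sobel_x + sin(θ)·Sobel_y, the θ = 30° direction, the weighting depth 0.5, and the mean-value binarization threshold are all illustrative assumptions; the claims fix none of them.

```python
import numpy as np

def directional_enhance(img, theta_deg=30.0, depth=0.5):
    """Slide a directional-derivative template over the image,
    superimpose the response with a set weighting depth, then binarize.
    Template = cos(t)*Sobel_x + sin(t)*Sobel_y is an assumption."""
    t = np.deg2rad(theta_deg)
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    template = np.cos(t) * sobel_x + np.sin(t) * sobel_x.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    resp = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            resp[i, j] = (pad[i:i + 3, j:j + 3] * template).sum()
    superimposed = img + depth * resp                       # weighted superposition
    binary = (superimposed > superimposed.mean()).astype(np.uint8)
    return superimposed, binary
```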
the method for obtaining the left rib edge angle map to be screened according to the left rib edge binary image and the left chest superimposed image comprises:
performing morphological opening and closing operations and thinning processing on the left rib edge binary image to obtain a morphologically processed left rib edge binary image;
performing an AND operation between the morphologically processed left rib edge binary image and the gradient direction angle of each pixel in the left chest superimposed image to obtain the left rib edge angle map to be screened; and/or,
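The morphological opening/closing and the subsequent AND with the gradient direction angles can be sketched with plain 3x3 min/max filters; the thinning step is omitted here, and the 3x3 square structuring element is an assumption.

```python
import numpy as np

def _dilate(b):
    p = np.pad(b, 1)
    return np.max([p[i:i + b.shape[0], j:j + b.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def _erode(b):
    p = np.pad(b, 1)
    return np.min([p[i:i + b.shape[0], j:j + b.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def open_close(binary):
    """Morphological opening (erode, dilate) then closing (dilate,
    erode) with a 3x3 structuring element; thinning is omitted."""
    opened = _dilate(_erode(binary))
    return _erode(_dilate(opened))

def and_with_angles(mask, angle_map):
    """'AND' of the cleaned binary image with the per-pixel gradient
    direction angle: keep the angle only where the mask is set."""
    return np.where(mask > 0, angle_map, 0.0)
```

Opening removes isolated noise pixels; closing fills small gaps in the rib edge before the angle map is screened.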
before the directional derivative template of the left chest image is constructed by using the directional derivative, performing Gaussian blur of a set scale on the left chest image to obtain a corresponding left chest Gaussian-blurred image; further, constructing the directional derivative template of the left chest Gaussian-blurred image by using the directional derivative; in the rib edge boundary detection of the left chest image, performing the directional derivative template traversal of the left chest Gaussian-blurred image by using the directional derivative template with the set weighting depth, and superimposing the traversal result onto the left chest image to obtain a left chest Gaussian-blurred superimposed image; performing binarization processing on the left chest Gaussian-blurred superimposed image to obtain the left rib edge binary image; and obtaining the left rib edge angle map to be screened according to the left rib edge binary image and the left chest Gaussian-blurred superimposed image; and/or,
the method for detecting the rib edge boundary of the right chest image comprises:
constructing a directional derivative template of the right chest image by using the directional derivative, and configuring a set weighting depth for the directional derivative template;
performing a directional derivative template traversal of the right chest image by using the directional derivative template with the set weighting depth, and superimposing the traversal result onto the right chest image to obtain a right chest superimposed image;
performing binarization processing on the right chest superimposed image to obtain a right rib edge binary image;
obtaining a right rib edge angle map to be screened according to the right rib edge binary image and the right chest superimposed image;
obtaining a screened right rib edge angle map based on the right rib edge angle map to be screened and a second set rib edge angle;
performing connected domain selection on the screened right rib edge angle map to obtain the rib edge boundary corresponding to the maximum connected domain; and/or,
the method for obtaining the right rib edge angle map to be screened according to the right rib edge binary image and the right chest superimposed image comprises:
performing morphological opening and closing operations and thinning processing on the right rib edge binary image to obtain a morphologically processed right rib edge binary image;
performing an AND operation between the morphologically processed right rib edge binary image and the gradient direction angle of each pixel in the right chest superimposed image to obtain the right rib edge angle map to be screened; and/or,
before the directional derivative template of the right chest image is constructed by using the directional derivative, performing Gaussian blur of a set scale on the right chest image to obtain a corresponding right chest Gaussian-blurred image; further, constructing the directional derivative template of the right chest Gaussian-blurred image by using the directional derivative; in the rib edge boundary detection of the right chest image, performing the directional derivative template traversal of the right chest Gaussian-blurred image by using the directional derivative template with the set weighting depth, and superimposing the traversal result onto the right chest image to obtain a right chest Gaussian-blurred superimposed image; performing binarization processing on the right chest Gaussian-blurred superimposed image to obtain the right rib edge binary image; and obtaining the right rib edge angle map to be screened according to the right rib edge binary image and the right chest Gaussian-blurred superimposed image; and/or,
the method for detecting the left lung apex boundary of the left chest image comprises:
determining a left lung apex detection region according to the left chest image;
determining a left lung apex edge binary image according to the left lung apex detection region;
fitting with a quadratic function according to the left lung apex edge binary image to obtain a fitted left lung apex boundary; and/or,
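The quadratic fit of the apex edge pixels can be sketched as a least-squares polynomial fit; treating the column index as the independent variable is an assumption about the coordinate convention.

```python
import numpy as np

def fit_apex_boundary(edge_binary):
    """Least-squares quadratic fit y = a*x^2 + b*x + c through the
    nonzero pixels of a lung-apex edge binary image (x = column,
    y = row; this coordinate convention is an assumption)."""
    ys, xs = np.nonzero(edge_binary)
    coeffs = np.polyfit(xs.astype(float), ys.astype(float), 2)
    fitted_y = np.polyval(coeffs, xs.astype(float))
    return coeffs, fitted_y
```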
the method for determining the left lung apex detection region according to the left chest image comprises:
detecting a first coordinate corresponding to the uppermost coordinate point of the rib edge of the left chest image;
configuring the region bounded by the hypotenuse formed between the first coordinate and the upper-right corner coordinate point of the left chest image as the left lung apex detection region; and/or,
the method for detecting the right lung apex boundary of the right chest image comprises:
determining a right lung apex detection region according to the right chest image;
determining a right lung apex edge binary image according to the right lung apex detection region;
fitting with a quadratic function according to the right lung apex edge binary image to obtain a fitted right lung apex boundary; and/or,
the method for determining the right lung apex detection region according to the right chest image comprises:
detecting a second coordinate corresponding to the uppermost coordinate point of the rib edge of the right chest image;
configuring the region bounded by the hypotenuse formed between the second coordinate and the upper-left corner coordinate point of the right chest image as the right lung apex detection region; and/or,
the method of performing left lung mediastinum and diaphragm edge detection on the left chest image comprises:
binarizing the left chest image to obtain a left chest binary image;
performing edge detection on the left chest binary image to obtain a left chest edge binary image;
obtaining a left chest edge angle map according to the left chest binary image and the gradient direction angle of each pixel in the left chest edge binary image;
obtaining a selected left diaphragm and mediastinum edge angle map according to the obtained left chest edge angle map and the set edge angle ranges of the diaphragm and the mediastinum;
performing connected domain selection processing according to the selected left diaphragm and mediastinum edge angle map, and acquiring the left lung mediastinum and diaphragm edges corresponding to the maximum connected domain; and/or,
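The connected-domain selection that keeps the maximum connected domain might be implemented as a breadth-first flood fill; 4-connectivity is an assumed choice, since the claims do not specify the connectivity.

```python
from collections import deque
import numpy as np

def largest_component(mask):
    """Keep only the largest 4-connected component of a binary mask,
    a sketch of the claimed connected-domain selection step."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    best_label, best_size, cur = 0, 0, 0
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and not labels[si, sj]:
                cur += 1
                size = 0
                q = deque([(si, sj)])
                labels[si, sj] = cur
                while q:                       # BFS over one component
                    i, j = q.popleft()
                    size += 1
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not labels[ni, nj]:
                            labels[ni, nj] = cur
                            q.append((ni, nj))
                if size > best_size:
                    best_size, best_label = size, cur
    return (labels == best_label).astype(np.uint8)
```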
the method of binarizing the left chest image to obtain the left chest binary image comprises:
performing contrast enhancement processing and maximum inter-class variance processing on the left chest Gaussian-blurred image corresponding to the left chest image to obtain the left chest binary image; and/or,
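The "maximum inter-class variance processing" named in the claims is Otsu's thresholding method. A compact sketch over a 256-bin histogram follows; 8-bit input is an assumption.

```python
import numpy as np

def otsu_threshold(img):
    """Maximum inter-class variance (Otsu) thresholding: pick the
    threshold maximizing w0*w1*(mu0 - mu1)^2 over a 256-bin histogram."""
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = prob[:t].sum()          # class weight below threshold
        w1 = 1.0 - w0                # class weight at/above threshold
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # inter-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t, (img >= best_t).astype(np.uint8)
```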
the method of determining the gradient direction angle of each pixel in the left chest edge binary image comprises:
performing transverse and longitudinal gradient calculations on the left chest Gaussian-blurred image corresponding to the left chest image to obtain a left chest transverse gradient map and a left chest longitudinal gradient map;
obtaining the gradient direction angle of each pixel in the left chest Gaussian-blurred image based on the left chest transverse gradient map and longitudinal gradient map; and/or,
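Combining the transverse and longitudinal gradient maps into a per-pixel direction angle is a single `arctan2`; using central differences (`np.gradient`) is an assumed choice of gradient.

```python
import numpy as np

def gradient_direction(img):
    """Per-pixel gradient direction angle, in degrees, from the
    longitudinal (row) and transverse (column) gradients."""
    f = img.astype(float)
    gy, gx = np.gradient(f)                 # longitudinal, then transverse
    return np.degrees(np.arctan2(gy, gx))   # angle in (-180, 180]
```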
the method of performing right lung mediastinum and diaphragm edge detection on the right chest image comprises:
binarizing the right chest image to obtain a right chest binary image;
performing edge detection on the right chest binary image to obtain a right chest edge binary image;
obtaining a right chest edge angle map according to the right chest binary image and the gradient direction angle of each pixel in the right chest edge binary image;
obtaining a selected right diaphragm and mediastinum edge angle map according to the obtained right chest edge angle map and the set edge angle ranges of the diaphragm and the mediastinum;
performing connected domain selection processing according to the selected right diaphragm and mediastinum edge angle map to obtain the right lung mediastinum and diaphragm edges corresponding to the maximum connected domain; and/or,
the method of binarizing the right chest image to obtain the right chest binary image comprises:
performing contrast enhancement processing and maximum inter-class variance processing on the right chest Gaussian-blurred image corresponding to the right chest image to obtain the right chest binary image; and/or,
the method of determining the gradient direction angle of each pixel in the right chest edge binary image comprises:
performing transverse and longitudinal gradient calculations on the right chest Gaussian-blurred image corresponding to the right chest image to obtain a right chest transverse gradient map and a right chest longitudinal gradient map;
obtaining the gradient direction angle of each pixel in the right chest Gaussian-blurred image based on the right chest transverse gradient map and longitudinal gradient map; and/or,
the method for obtaining the left lung segmentation image according to the rib edge boundary, the lung apex boundary, the mediastinum edge and the diaphragm edge corresponding to the left chest image comprises:
calculating the first left lung apex edge point and the left lung mediastinum edge point corresponding to the shortest distance between the lung apex boundary and the mediastinum edge in the left chest image;
calculating the second left lung apex edge point and the first left lung rib edge point corresponding to the shortest distance between the lung apex boundary and the rib edge in the left chest image;
calculating the second rib edge point and the diaphragm edge point corresponding to the shortest distance between the rib edge and the diaphragm edge in the left chest image;
obtaining the left lung segmentation image based on the first left lung apex edge point, the left lung mediastinum edge point, the second left lung apex edge point, the first left lung rib edge point, the second rib edge point and the diaphragm edge point; and/or,
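Each "shortest distance" step above pairs one point from each boundary. A brute-force sketch, adequate for the few hundred points of a downsampled contour:

```python
import numpy as np

def closest_pair(points_a, points_b):
    """Point pair (one per boundary) at shortest Euclidean distance,
    as used to join apex, mediastinum, rib and diaphragm edges into
    one closed lung contour. Brute force over all pairs."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    ia, ib = np.unravel_index(np.argmin(d), d.shape)
    return tuple(points_a[ia]), tuple(points_b[ib]), d[ia, ib]
```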
the method for obtaining the right lung segmentation image according to the rib edge boundary, the lung apex boundary, the mediastinum edge and the diaphragm edge corresponding to the right chest image comprises:
calculating the first right lung apex edge point and the right lung mediastinum edge point corresponding to the shortest distance between the lung apex boundary and the mediastinum edge in the right chest image;
calculating the second right lung apex edge point and the first right lung rib edge point corresponding to the shortest distance between the lung apex boundary and the rib edge in the right chest image;
calculating the second rib edge point and the diaphragm edge point corresponding to the shortest distance between the rib edge and the diaphragm edge in the right chest image;
and obtaining the right lung segmentation image based on the first right lung apex edge point, the right lung mediastinum edge point, the second right lung apex edge point, the first right lung rib edge point, the second rib edge point and the diaphragm edge point.
7. The processing method according to claim 5 or 6, further comprising: performing lung function assessment based on a plurality of corresponding left lung segmentation images and right lung segmentation images at a plurality of moments during breathing or in a breath-hold state;
wherein the lung function assessment comprises: assessment of the lung field area during breathing, and/or assessment of lung ventilation, and/or assessment of pulmonary blood flow in the breath-hold state.
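One simple quantity behind the breathing-phase assessment is the lung-field area of each segmentation mask over time. The pixel spacing below is a hypothetical detector value, not taken from the document.

```python
import numpy as np

def lung_field_area_mm2(mask, pixel_spacing_mm=(0.14, 0.14)):
    """Physical lung-field area of a binary segmentation mask; the
    default 0.14 mm spacing is a hypothetical flat-panel value."""
    return float(mask.sum()) * pixel_spacing_mm[0] * pixel_spacing_mm[1]

def area_curve(masks, pixel_spacing_mm=(0.14, 0.14)):
    """Area at each moment of the breathing sequence, e.g. to compare
    inspiration vs. expiration lung fields."""
    return [lung_field_area_mm2(m, pixel_spacing_mm) for m in masks]
```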
8. A DR image processing apparatus, comprising:
an acquisition unit, configured to acquire a DR image to be processed and to determine, according to the DR image to be processed, whether the DR image to be processed is a lung image;
and a detection unit, configured to, if the DR image to be processed is a lung image, perform thoracic cavity detection on the DR image to be processed and remove the information outside the thoracic cavity in the DR image to be processed.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the DR image processing method of any of claims 1 to 7.
10. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the DR image processing method of any one of claims 1 to 7.
CN202310810472.2A 2023-07-03 2023-07-03 DR image processing method and device, electronic equipment and storage medium Pending CN116797591A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310810472.2A CN116797591A (en) 2023-07-03 2023-07-03 DR image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310810472.2A CN116797591A (en) 2023-07-03 2023-07-03 DR image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116797591A (en) 2023-09-22

Family

ID=88045964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310810472.2A Pending CN116797591A (en) 2023-07-03 2023-07-03 DR image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116797591A (en)

Similar Documents

Publication Publication Date Title
US11100683B2 (en) Image color adjustment method and system
KR102304674B1 (en) Facial expression synthesis method and apparatus, electronic device, and storage medium
WO2019228473A1 (en) Method and apparatus for beautifying face image
CN105103189B (en) The image denoising of near-infrared guiding
WO2022036972A1 (en) Image segmentation method and apparatus, and electronic device and storage medium
KR20210107667A (en) Image segmentation method and apparatus, electronic device and storage medium
CN110832498B (en) Blurring facial features of objects in an image
CN107832677A (en) Face identification method and system based on In vivo detection
CN111814520A (en) Skin type detection method, skin type grade classification method, and skin type detection device
TW202014984A (en) Image processing method, electronic device, and storage medium
CN109064390A (en) A kind of image processing method, image processing apparatus and mobile terminal
WO2018082388A1 (en) Skin color detection method and device, and terminal
CN113222038B (en) Breast lesion classification and positioning method and device based on nuclear magnetic image
CN112135041B (en) Method and device for processing special effect of human face and storage medium
CN107622480A (en) A kind of Kinect depth image Enhancement Method
CN116883426A (en) Lung region segmentation method, lung disease assessment method, lung region segmentation device, lung disease assessment device, electronic equipment and storage medium
CN109859217A (en) The dividing method in pore region and calculating equipment in facial image
CN111724364B (en) Method and device based on lung lobes and trachea trees, electronic equipment and storage medium
JP7076168B1 (en) How to enhance the object contour of an image in real-time video
KR20220034844A (en) Image processing method and apparatus, electronic device, storage medium and program product
CN116843647A (en) Method and device for determining lung field area and evaluating lung development, electronic equipment and medium
JP2022548453A (en) Image segmentation method and apparatus, electronic device and storage medium
CN116797591A (en) DR image processing method and device, electronic equipment and storage medium
CN111743524A (en) Information processing method, terminal and computer readable storage medium
CN110766631A (en) Face image modification method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination