CN110956636A - Image processing method and device - Google Patents

Info

Publication number
CN110956636A
CN110956636A (Application CN201911189278.7A)
Authority
CN
China
Prior art keywords
brain image
detected
brain
plane
image
Prior art date
Legal status
Pending
Application number
CN201911189278.7A
Other languages
Chinese (zh)
Inventor
万兰若
龚强
陈伟导
陈宽
王少康
Current Assignee
Beijing Infervision Technology Co Ltd
Infervision Co Ltd
Original Assignee
Infervision Co Ltd
Priority date
Filing date
Publication date
Application filed by Infervision Co Ltd filed Critical Infervision Co Ltd
Priority to CN201911189278.7A priority Critical patent/CN110956636A/en
Publication of CN110956636A publication Critical patent/CN110956636A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method and device. A deep convolutional neural network is trained on a first sample set of brain images annotated with ventricle segmentation results, and the trained network is then used to segment a brain image, which yields a segmentation that is fast, accurate and robust. The position of the actual midline plane of the brain image is determined from this segmentation result, which makes that position more accurate. Finally, the position of the actual midline plane is compared with the position of the theoretical midline plane to determine the image processing result of the brain image to be detected. The method thereby judges brain structure displacement automatically, greatly reduces the influence of whether the brain image was captured in a correct posture on the recognition result, can handle a wider range of brain-image postures, and, by using a deep convolutional neural network, produces a more accurate image processing result.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method and apparatus.
Background
Brain midline shift is an important feature of brain images: it effectively reflects whether the brain is subject to a compressive space-occupying effect, and judging whether the midline has shifted helps diagnose conditions such as brain injury, stroke, brain tumor and abscess. In other words, judging brain midline shift helps the doctor determine whether the brain is damaged.
Currently, the judgment of brain midline shift generally relies on a doctor manually measuring the midline offset distance. However, because the patient's posture may not be standard when the brain image is captured, the resulting head image is often not in a standard position. In that case it is difficult for the doctor to measure the offset distance manually, the measurement is prone to error, and manual measurement is inefficient, which makes the judgment of brain midline shift difficult. The current approach to judging brain midline shift is therefore both inaccurate and inefficient.
Disclosure of Invention
In view of this, the embodiments of the present invention disclose an image processing method and an image processing apparatus that automatically determine brain structure displacement, are largely unaffected by the posture of the brain image, and can be applied to brain images captured in any posture.
The embodiment of the invention discloses an image processing method, which comprises the following steps:
acquiring a brain image to be detected;
processing a brain image to be detected based on a pre-trained deep convolutional neural network model to obtain a ventricle segmentation result of the brain image to be detected; the deep convolutional neural network model is obtained by training a first sample set containing a brain image marked with a ventricle segmentation result;
determining the position of a theoretical centerline plane in a brain image to be detected;
determining the position of an actual midline plane based on the ventricular segmentation result of the brain image to be detected;
and determining an image processing result of the brain image to be detected according to the position of the actual central line plane and the position of the theoretical central line plane of the brain image to be detected.
Optionally, the determining the position of the theoretical centerline plane in the brain image to be detected includes:
registering the brain image to be detected with a standard brain image marked with the position of the midline plane;
and determining the position of the theoretical midline plane in the brain image to be detected according to the position of the midline plane in the standard brain image.
Optionally, the determining the position of the actual midline plane based on the result of the ventricular segmentation of the brain image to be detected includes:
selecting the position of a reference plane in the brain image to be detected;
translating the position of the reference plane along the coronal axis of the brain image to be detected;
when the reference plane moves to a target position, acquiring a division result of the reference plane on a ventricle of a brain image to be detected at the target position;
and determining the target position as the actual plane position of the midline according to the segmentation result of the target position on the ventricles of the brain image to be detected and the segmentation result of the brain image to be detected by the pre-trained deep convolutional neural network.
Optionally, the position of the reference plane selected from the brain image to be detected is the position of the theoretical centerline plane in the brain image to be detected.
Optionally, the determining, according to the position of the actual centerline plane of the brain image to be detected and the position of the theoretical centerline plane, an image processing result of the brain image to be detected includes:
calculating a first offset distance between the actual central line plane position and the theoretical central line plane position of the brain image to be detected;
judging whether the first offset distance is larger than a preset distance threshold value or not;
and if the first offset distance is greater than a preset distance threshold value, determining the degree of brain structure displacement in the brain image to be detected.
Optionally, the method further includes:
acquiring a second sample set containing brain images;
determining the position of a theoretical centerline plane for each brain image in the second sample set;
processing the brain image in the second sample set according to the trained deep convolutional neural network to obtain a ventricle segmentation result of the brain image, and determining the position of an actual centerline plane of the brain image based on the ventricle segmentation result of the brain image;
calculating a second offset distance from the position of the actual centerline plane to the position of the theoretical centerline plane;
obtaining a result of whether each brain image in the second sample set has brain structure displacement;
and analyzing the second offset distance corresponding to each brain image in the second sample set and the result of whether the brain structure is displaced, and determining a distance threshold value for judging whether the brain structure is displaced.
Optionally, the training process of the deep convolutional neural network model includes:
acquiring a first sample set; the first sample set comprises a brain image marked with a ventricular segmentation result;
constructing a deep convolutional neural network model;
and training the constructed deep convolutional neural network model through the brain image marked with the ventricle segmentation result in the sample set.
The embodiment of the invention also discloses an image processing device, which comprises:
the acquisition unit is used for acquiring a brain image to be detected;
the model processing unit is used for processing the brain image to be detected based on a pre-trained deep convolutional neural network model to obtain a ventricular segmentation result of the brain image to be detected; the deep convolutional neural network model is obtained by training a first sample set containing a brain image marked with a ventricle segmentation result;
the position determining unit of the theoretical midline plane is used for determining the position of the theoretical midline plane in the brain image to be detected;
the position determining unit of the actual midline plane is used for determining the position of the actual midline plane based on the ventricular segmentation result of the brain image to be detected;
and the processing result determining unit is used for determining the image processing result of the brain image to be detected according to the position of the actual central line plane and the position of the theoretical central line plane of the brain image to be detected.
Optionally, the unit for determining the position of the theoretical centerline plane includes:
the registration subunit is used for registering the brain image to be detected with the standard brain image marked with the midline plane position;
and the theoretical midline determining subunit is used for determining the position of the theoretical midline plane in the brain image to be detected according to the position of the midline plane in the standard brain image.
Optionally, the unit for determining the position of the actual midline plane includes:
the selecting subunit is used for selecting the position of a reference plane in the brain image to be detected;
a translation subunit, configured to translate the position of the reference plane along a coronal axis of the brain image to be detected;
the acquisition subunit is used for acquiring the division result of the reference plane on the ventricles of the brain image to be detected at the target position when the reference plane moves to the target position;
and the plane position determining subunit is used for determining the target position as the actual plane position of the central line according to the result of segmenting the ventricle of the brain image to be detected by the target position and the result of segmenting the brain image to be detected by the pre-trained deep convolutional neural network.
Optionally, the position of the reference plane selected from the brain image to be detected is the position of the theoretical centerline plane in the brain image to be detected.
Optionally, the processing result determining unit includes:
the calculating subunit is used for calculating a first offset distance between the actual central line plane position of the brain image to be detected and the theoretical central line plane position;
the judging subunit is used for judging whether the first offset distance is greater than a preset distance threshold value;
and the processing result determining subunit is used for determining the degree of the brain structure displacement in the brain image to be detected if the distance is greater than a preset distance threshold.
Optionally, the method further includes:
a first training unit to:
acquiring a second sample set containing brain images;
determining the position of a theoretical centerline plane for each brain image in the second sample set;
processing the brain image in the second sample set according to the trained deep convolutional neural network to obtain a ventricle segmentation result of the brain image, and determining the position of an actual centerline plane of the brain image based on the ventricle segmentation result of the brain image;
calculating a second offset distance from the position of the actual centerline plane to the position of the theoretical centerline plane;
obtaining a result of whether each brain image in the second sample set has brain structure displacement;
and analyzing the second offset distance corresponding to each brain image in the second sample set and the result of whether the brain structure is displaced, and determining a distance threshold value for judging whether the brain structure is displaced.
Optionally, the method further includes:
a deep convolutional neural network training unit to:
acquiring a first sample set; the first sample set comprises a brain image marked with a ventricular segmentation result;
constructing a deep convolutional neural network model;
and training the constructed deep convolutional neural network model through the brain image marked with the ventricle segmentation result in the sample set.
The embodiment of the invention also discloses a storage medium on which a computer program is stored; when the computer program is executed by a processor, the image processing method described above is implemented.
The embodiment of the invention also discloses an electronic device, which comprises:
a processor and a memory;
wherein the memory has stored thereon computer readable instructions which, when executed by the processor, implement the image processing method according to the above.
The embodiments of the invention disclose an image processing method and device. A deep convolutional neural network is trained on a first sample set of brain images annotated with ventricle segmentation results, and the brain image is then segmented by the trained network, so the segmentation result is obtained quickly and accurately. The position of the actual midline plane of the brain image is then determined from the segmentation result, making that position more accurate. The position of the actual midline plane is compared with the position of the theoretical midline plane to determine the image processing result of the brain image to be detected, that is, to determine the degree of brain structure displacement in the brain image to be detected. The doctor can then analyze the patient's condition based on the degree of brain displacement.
According to this embodiment, automatic judgment of brain structure displacement is achieved, the influence of whether the brain image posture is correct on the recognition result is greatly reduced, a wider range of brain-image postures can be handled, and the use of a deep convolutional neural network makes the processing result more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for identifying brain midline structure shift according to an embodiment of the present invention;
FIGS. 2-3 are schematic diagrams of the midline plane of the brain;
FIG. 4 is a schematic view of an undamaged (standard) brain image;
FIG. 5 is a flow chart of a training method of a deep convolutional neural network according to an embodiment of the present invention;
FIG. 6 shows a schematic diagram of a deep convolutional neural network model;
fig. 7 is a flowchart illustrating a method for training a threshold for discriminating whether a brain structure is displaced according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an apparatus for identifying structural displacement of a line in a head according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart of an image processing method provided in an embodiment of the present invention is shown, where the method includes:
S101: acquiring a brain image to be detected;
In this embodiment, the acquired brain image to be detected is a medical image.
In order to eliminate the influence of image noise and the like on the recognition result, the brain image to be detected needs to be preprocessed in advance; the acquired brain image to be detected may therefore already be a preprocessed image.
The preprocessing may involve many operations, which this embodiment does not limit; one common possibility is sketched below.
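The disclosure does not specify the preprocessing operations; purely as an illustration, a common choice for CT brain volumes is to clip intensities to a brain window in Hounsfield units and rescale them to [0, 1]. The window bounds and the use of NumPy below are assumptions, not part of the disclosure.

```python
import numpy as np

def preprocess_ct(volume_hu: np.ndarray, lo: float = 0.0, hi: float = 80.0) -> np.ndarray:
    """Clip a CT volume (in Hounsfield units) to a brain window and normalize to [0, 1]."""
    windowed = np.clip(volume_hu.astype(np.float32), lo, hi)
    return (windowed - lo) / (hi - lo)

# Example on a synthetic volume:
volume = np.random.randint(-1000, 1000, size=(4, 64, 64)).astype(np.float32)
print(preprocess_ct(volume).min(), preprocess_ct(volume).max())  # 0.0 1.0
```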
S102: processing the brain image to be detected based on a pre-trained deep convolutional neural network model to obtain a ventricle segmentation result of the brain image to be detected;
The deep convolutional neural network model is obtained by training on a first sample set of brain images annotated with ventricle segmentation results.
In this embodiment, the deep convolutional neural network may be any deep convolutional neural network model; this embodiment does not limit the choice, and in practice a skilled person may select a network according to the actual situation, for example a U-Net.
The training process of the deep convolutional neural network is described in detail in a later embodiment and is not repeated here. A minimal inference sketch follows.
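The disclosure does not give code for this step; the following is only a sketch of how a pre-trained 2D segmentation network might be applied slice by slice to produce a ventricle label map. The checkpoint path, the 3-class output convention (0 = background, 1 = left ventricle, 2 = right ventricle) and the use of PyTorch are assumptions.

```python
import numpy as np
import torch

def segment_ventricles(volume: np.ndarray, model: torch.nn.Module) -> np.ndarray:
    """volume: (D, H, W) preprocessed CT; returns a (D, H, W) integer label map."""
    model.eval()
    labels = np.zeros(volume.shape, dtype=np.uint8)
    with torch.no_grad():
        for z in range(volume.shape[0]):
            x = torch.from_numpy(volume[z]).float()[None, None]  # shape (1, 1, H, W)
            logits = model(x)                                    # (1, 3, H, W) assumed
            labels[z] = logits.argmax(dim=1)[0].numpy().astype(np.uint8)
    return labels

# model = torch.load("ventricle_unet.pt")                 # hypothetical checkpoint path
# ventricle_mask = segment_ventricles(preprocessed_volume, model)
```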
S103: determining the position of a theoretical midline plane in the brain image to be detected;
In this embodiment, the position of the theoretical midline plane can be understood as the position the midline plane of the brain would occupy if the asymmetry of midline structures caused by brain injury and other diseases were disregarded.
As shown in fig. 2 and fig. 3, the dotted line marks the position of the theoretical midline. This is how it appears in a two-dimensional image; in the three-dimensional image the position extends into a plane, namely the position of the theoretical midline plane.
Specifically, the position of the theoretical midline plane of the brain image to be detected can be determined as follows: registering the brain image to be detected with a standard brain image in which the position of the midline plane is marked;
and determining the position of the theoretical midline plane in the brain image to be detected according to the position of the midline plane in the standard brain image.
In this embodiment, the standard brain image can be understood as an undamaged brain image, as shown in fig. 4.
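The registration algorithm itself is not specified in the disclosure. As one hedged possibility, a rigid registration of the standard brain image onto the image to be detected could be performed with SimpleITK, after which the annotated midline-plane mask is resampled into the subject's space. The file names and metric/optimizer settings below are assumptions.

```python
import SimpleITK as sitk

# subject = sitk.ReadImage("subject_ct.nii.gz")             # hypothetical paths
# atlas = sitk.ReadImage("standard_brain.nii.gz")
# atlas_midline = sitk.ReadImage("standard_midline_mask.nii.gz")

def theoretical_midline_mask(subject, atlas, atlas_midline):
    """Rigidly register the standard brain to the subject, then carry the
    annotated midline-plane mask into the subject's image space."""
    fixed = sitk.Cast(subject, sitk.sitkFloat32)
    moving = sitk.Cast(atlas, sitk.sitkFloat32)
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                 minStep=1e-4,
                                                 numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(initial, inPlace=False)
    transform = reg.Execute(fixed, moving)
    # Nearest-neighbour resampling keeps the midline mask binary.
    return sitk.Resample(atlas_midline, subject, transform,
                         sitk.sitkNearestNeighbor, 0)
```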
S104: determining the position of the actual midline plane based on the ventricle segmentation result of the brain image to be detected;
In this embodiment, in the case of brain injury or certain other diseases, the midline structures of the brain may be displaced, for example by the compression of a blood clot, so that the position of the midline plane shifts. As shown in fig. 2 and fig. 3, the solid line marks the position of the actual midline plane: fig. 2 shows the midline deviated to the left, and fig. 3 shows it deviated to the right.
It should be noted that, because the captured brain image is viewed from the patient's feet toward the head, left and right in the image are reversed with respect to the patient's actual left and right.
In this embodiment, the ventricle segmentation result of the brain image to be detected may be a segmentation of the left ventricle and the right ventricle; in practice, parts other than the ventricles may also be segmented, or a lesioned region within the ventricles may be segmented.
It should be noted that the midline plane is the plane that separates the left ventricle from the right ventricle; after the deep convolutional neural network model has segmented the left and right ventricles, the position of the actual midline plane can be determined from that segmentation result.
Specifically, S104 may include:
selecting the position of a reference plane in the brain image to be detected;
translating the position of the reference plane along the coronal axis of the brain image to be detected;
when the reference plane moves to a target position, acquiring the result of dividing the ventricles of the brain image to be detected by the reference plane at the target position;
and determining the target position as the position of the actual midline plane according to the result of dividing the ventricles of the brain image to be detected at the target position and the ventricle segmentation result of the brain image to be detected produced by the pre-trained deep convolutional neural network.
The position of the reference plane may be any position in the brain image to be detected, or a relatively central position in the brain image to be detected; this embodiment does not limit it. The position of the reference plane may be preset, or set by a technician according to the actual situation.
In this embodiment, the position of the reference plane may preferably be the position of the theoretical midline plane in the brain image to be detected, as determined in S103.
In this embodiment, as the reference plane is translated, the ventricles are divided into two parts at each target position of the reference plane, and the position of the actual midline plane is determined according to how well each target position divides the ventricles.
The target position whose division result meets a preset condition may be selected as the position of the actual midline plane.
In this embodiment, the preferred preset condition may be that the target position divides the ventricles most effectively, for example the target position with the fewest left- and right-ventricle pixels assigned to the wrong side, or the target position for which the difference between its distance to the left ventricle and its distance to the right ventricle is smallest.
In this embodiment, the division of the ventricles at each target position may be evaluated against the left- and right-ventricle segmentation result produced for the brain image to be detected by the pre-trained deep convolutional neural network, so as to judge how good the division at that target position is; a search of this kind is sketched below.
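As a hedged sketch of such a search, candidate plane positions can be scanned along the left-right axis of the label map produced by the network, scoring each by the number of left/right-ventricle pixels that fall on the wrong side; the axis convention, the search range and which label lies on which side are illustrative assumptions.

```python
from typing import Optional
import numpy as np

def find_actual_midline(labels: np.ndarray, start: Optional[int] = None,
                        search_range: int = 40) -> int:
    """labels: (D, H, W) label map with 1 = left ventricle, 2 = right ventricle,
    0 elsewhere; the last axis is assumed to run left-right.
    Returns the index of the candidate plane that best separates the ventricles."""
    width = labels.shape[-1]
    x0 = width // 2 if start is None else start  # e.g. the theoretical midline position
    best_x, best_err = x0, np.inf
    for x in range(max(1, x0 - search_range), min(width - 1, x0 + search_range)):
        # Ventricle pixels lying on the "wrong" side of candidate plane x.
        err = (np.count_nonzero(labels[..., :x] == 2) +
               np.count_nonzero(labels[..., x:] == 1))
        if err < best_err:
            best_x, best_err = x, err
    return best_x
```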
S105: determining an image processing result of the brain image to be detected according to the position of the actual midline plane and the position of the theoretical midline plane of the brain image to be detected.
In this embodiment, the image processing result can be understood as the determined degree of brain structure displacement in the brain image, where the degree of brain midline displacement is determined by comparing the position of the actual midline plane of the brain image to be detected with the position of the theoretical midline plane.
Preferably, whether there is a midline shift can be determined by analyzing the distance between the position of the actual midline plane of the brain image and the position of the theoretical midline plane. Specifically, S105 includes:
calculating a first offset distance between the position of the actual midline plane and the position of the theoretical midline plane of the brain image to be detected;
judging whether the first offset distance is larger than a preset distance threshold;
and if the first offset distance is larger than the preset distance threshold, determining the degree of brain structure displacement in the brain image to be detected.
The preset distance threshold may be obtained in advance through a large number of experiments or statistics, through expert experience, or through a combination of experiments and expert experience.
A later embodiment describes in detail how the threshold is obtained through experiments combined with expert experience, but this embodiment is not limited thereto.
In this embodiment, the preset distance threshold may comprise two thresholds: one for judging whether the midline is deviated to the left, and one for judging whether it is deviated to the right.
The degree of brain structure displacement in the brain image to be detected may accordingly include a degree of leftward deviation and a degree of rightward deviation.
The distance between the position of the actual midline plane and the position of the theoretical midline plane of the brain image to be detected, i.e. the first offset distance, may be calculated in various ways that this embodiment does not limit; for example, it may be calculated as a Euclidean distance. A minimal example of the comparison follows.
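The sketch below assumes the two plane positions have been reduced to signed coordinates in millimetres along the left-right axis and that separate left/right thresholds are used as described above; the sign convention and the threshold values are assumptions, not values from the disclosure.

```python
def classify_midline_shift(actual_x_mm: float, theoretical_x_mm: float,
                           left_threshold_mm: float, right_threshold_mm: float):
    """Return (shift_mm, verdict). A positive shift is taken here to mean a shift
    toward the image's right side; only the idea of comparing an offset distance
    against direction-specific thresholds comes from the description."""
    shift = actual_x_mm - theoretical_x_mm
    distance = abs(shift)                      # Euclidean distance in one dimension
    if shift > 0 and distance > right_threshold_mm:
        return shift, "midline shifted right"
    if shift < 0 and distance > left_threshold_mm:
        return shift, "midline shifted left"
    return shift, "no significant shift"

print(classify_midline_shift(3.2, 0.0, left_threshold_mm=2.0, right_threshold_mm=2.0))
```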
In this embodiment, the brain image to be detected is processed based on a pre-trained deep convolutional neural network model to obtain a ventricle segmentation result of the brain image to be detected; the deep convolutional neural network model is obtained by training on a first sample set of brain images annotated with ventricle segmentation results; the position of the theoretical midline plane in the brain image to be detected is determined; the position of the actual midline plane is determined based on the ventricle segmentation result of the brain image to be detected; and whether the brain structure in the brain image to be detected is displaced is judged according to the position of the actual midline plane and the position of the theoretical midline plane of the brain image to be detected.
Therefore, in this embodiment, segmenting the brain image with the deep convolutional neural network is fast and yields an accurate segmentation result; the position of the actual midline plane of the brain image is then determined from that segmentation result, so a more accurate position of the actual midline plane is obtained; finally, the position of the actual midline plane is compared with the position of the theoretical midline plane to determine whether the brain structure is displaced. In summary, the method of this embodiment achieves automatic judgment of brain structure displacement, greatly reduces the influence of whether the brain image posture is correct on the recognition result, can handle a wider range of brain-image postures, and makes the judgment more accurate by using a deep convolutional neural network. This in turn helps the doctor analyze the patient's condition based on the degree to which the brain midline structure has shifted.
Referring to fig. 5, a flowchart of a training method for a deep convolutional neural network according to an embodiment of the present invention is shown, where the method includes:
S501: acquiring a first sample set; the first sample set comprises brain images annotated with ventricle segmentation results;
In this embodiment, the ventricle segmentation result may be annotated in any of several ways, which this embodiment does not limit.
For example, the ventricle segmentation result in a brain image can be labeled as follows: the left ventricle is labeled 1, the right ventricle is labeled 2, and the region outside the ventricles may be labeled 0 (the region outside the ventricles can be understood as background). In addition, when the ventricle segmentation result is annotated, an injured region of the brain may also be labeled.
Which annotation scheme is chosen for the brain images is not limited in this embodiment.
In this embodiment, the first sample set may be further divided into a training set and a test set.
S502: constructing a deep convolutional neural network model;
the deep convolutional neural network model may be any deep convolutional neural network algorithm, or may be a model constructed according to actual conditions.
For example, as shown in fig. 6, the structure of a deep convolutional neural network model is constructed:
the DenseBlock is a dense connecting block, the TransitionBlock is a transition dense connecting block, DenseBlock-n is a dense connecting block composed of n basic convolution blocks, upsampling-2 is an upsampling module with an output characteristic diagram size twice as large as that of an input characteristic diagram, and conv-uint-n is a convolution module with an output characteristic diagram channel n.
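The exact architecture of fig. 6 is not reproduced here; the PyTorch sketch below only illustrates the dense connection pattern named above (DenseBlock-n built from n basic convolution blocks, and an upsampling-2 module that doubles the feature-map size). The growth rate, the BN-ReLU-conv ordering and the upsampling mode are assumptions.

```python
import torch
import torch.nn as nn

class BasicConvBlock(nn.Module):
    """BN -> ReLU -> 3x3 convolution, a typical basic unit inside a dense block."""
    def __init__(self, in_channels: int, growth: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        return self.block(x)

class DenseBlock(nn.Module):
    """DenseBlock-n: n basic conv blocks, each fed the concatenation of all
    previous feature maps (dense connectivity)."""
    def __init__(self, in_channels: int, n: int, growth: int = 16):
        super().__init__()
        self.blocks = nn.ModuleList(
            BasicConvBlock(in_channels + i * growth, growth) for i in range(n))

    def forward(self, x):
        features = [x]
        for block in self.blocks:
            features.append(block(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

# upsampling-2 could be realised, for example, as:
upsample_2 = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
```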
S503: and training the constructed deep convolutional neural network model through the brain image marked with the ventricle segmentation result in the sample set.
In this embodiment, in order to eliminate the influence of noise and the like in the brain images, the brain images in the sample set may be preprocessed before the constructed deep convolutional neural network model is trained on the brain images annotated with ventricle segmentation results.
In this embodiment, the trained deep convolutional neural network can segment the ventricles of brain images quickly and accurately, is largely unaffected by the posture of the brain images, can handle a wider range of actual brain-image postures, and makes the determination result more accurate. A simple training-loop sketch follows.
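A hedged sketch of one possible training loop, using the 0/1/2 labelling convention from the example above; the optimizer, loss function, batch size and the dataset object are assumptions that the disclosure does not fix.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_segmentation(model: nn.Module, train_set, epochs: int = 20,
                       lr: float = 1e-3, device: str = "cpu") -> nn.Module:
    """Train on (image, label_map) pairs where labels use 0 = background,
    1 = left ventricle, 2 = right ventricle."""
    loader = DataLoader(train_set, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.to(device).train()
    for epoch in range(epochs):
        total = 0.0
        for images, labels in loader:        # images: (B, 1, H, W), labels: (B, H, W)
            images, labels = images.to(device), labels.long().to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            total += loss.item()
        print(f"epoch {epoch + 1}: mean loss {total / max(1, len(loader)):.4f}")
    return model
```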
Referring to fig. 7, a flowchart illustrating a method for training a threshold for discriminating whether a brain structure is displaced according to an embodiment of the present invention is shown, in which the method includes:
S701: acquiring a second sample set containing brain images;
in this embodiment, the data in the first sample set and the second sample set may be the same or different.
S702: determining the position of the theoretical midline plane for each brain image in the second sample set;
in this embodiment, the step of S702 is the same as the step of S103, and is not described again in this embodiment.
S703: processing the brain image in the second sample set according to the trained deep convolutional neural network to obtain a ventricle segmentation result of the brain image, and determining the position of an actual centerline plane of the brain image based on the ventricle segmentation result of the brain image;
in this embodiment, specifically, S703 includes:
selecting a position of a reference plane in the brain image;
translating the location of the reference plane along a coronal axis of the brain image;
when the reference plane moves to a target position, acquiring the result of dividing the ventricles of the brain image by the reference plane at the target position;
and determining the target position as the position of the actual midline plane according to the result of dividing the ventricles of the brain image at the target position and the ventricle segmentation result of the brain image produced by the pre-trained deep convolutional neural network.
S704: calculating a second offset distance from the position of the actual centerline plane to the position of the theoretical centerline plane;
the distance between the position of the actual centerline plane and the position of the theoretical centerline plane may be calculated in various ways, and the distance is expressed as the second offset distance, which is not limited in this embodiment and may be calculated by the euclidean distance method, for example.
S705: obtaining a result of whether each brain image in the second sample set has brain structure displacement;
In this embodiment, the result of whether each brain image in the second sample set exhibits brain structure displacement may be obtained from expert experience, or may be a doctor's diagnostic conclusion.
S706: and analyzing the second offset distance corresponding to each brain image in the second sample set and the result of whether the brain structure is displaced, and determining a distance threshold value for judging whether the brain structure is displaced.
In this embodiment, each second offset distance in the second sample set is matched against the result of whether the corresponding brain structure is displaced; the offset distances associated with displacement are extracted and analyzed, so as to obtain a reasonable distance threshold that can be used to judge whether the brain structure is displaced.
The thresholds obtained by training may comprise two: one distance threshold for judging deviation to the right, and another for judging deviation to the left.
In this embodiment, a distance threshold trained on a large sample set makes it possible to determine accurately whether the brain structure is displaced. One way such a threshold could be fitted is sketched below.
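A hedged sketch of fitting such a threshold from the second offset distances and the expert results, using a simple accuracy-maximizing sweep; the disclosure does not prescribe the analysis method, so the sweep itself is an assumption.

```python
import numpy as np

def pick_threshold(offsets_mm: np.ndarray, shifted: np.ndarray) -> float:
    """offsets_mm: second offset distances, one per brain image (same direction);
    shifted: expert/diagnostic labels, True where a structural shift was reported.
    Returns the candidate threshold that maximises agreement with the labels."""
    best_t, best_acc = 0.0, -1.0
    for t in np.unique(offsets_mm):
        acc = np.mean((offsets_mm > t) == shifted)
        if acc > best_acc:
            best_t, best_acc = float(t), acc
    return best_t

# One threshold would be fitted per direction, e.g. on the leftward-shift cases and
# on the rightward-shift cases separately, matching the two thresholds described above.
```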
Referring to fig. 8, a schematic structural diagram of an apparatus for identifying brain midline structure shift according to an embodiment of the present invention is shown. In this embodiment, the apparatus includes:
an acquiring unit 801, configured to acquire a brain image to be detected;
the model processing unit 802 is configured to process the brain image to be detected based on a pre-trained deep convolutional neural network model to obtain a ventricular segmentation result of the brain image to be detected; the deep convolutional neural network model is obtained by training a first sample set containing a brain image marked with a ventricle segmentation result;
a theoretical centerline plane position determining unit 803, configured to determine a position of a theoretical centerline plane in the brain image to be detected;
a position determining unit 804 of the actual midline plane, configured to determine a position of the actual midline plane based on a ventricular segmentation result of the brain image to be detected;
a processing result determining unit 805, configured to determine whether there is a brain structure shift in the brain image to be detected according to the position of the actual centerline plane and the position of the theoretical centerline plane of the brain image to be detected.
Optionally, the position determining unit of the theoretical centerline plane is further configured to:
registering the brain image to be detected with a standard brain image marked with the position of the midline plane;
and determining the position of the theoretical midline plane in the brain image to be detected according to the position of the midline plane in the standard brain image.
Optionally, the unit for determining the position of the theoretical centerline plane includes:
the registration subunit is used for registering the brain image to be detected with the standard brain image marked with the midline plane position;
and the theoretical midline determining subunit is used for determining the position of the theoretical midline plane in the brain image to be detected according to the position of the midline plane in the standard brain image.
Optionally, the unit for determining the position of the actual midline plane includes:
the selecting subunit is used for selecting the position of a reference plane in the brain image to be detected;
a translation subunit, configured to translate the position of the reference plane along a coronal axis of the brain image to be detected;
the acquisition subunit is used for acquiring the division result of the reference plane on the ventricles of the brain image to be detected at the target position when the reference plane moves to the target position;
and the plane position determining subunit is used for determining the target position as the actual plane position of the central line according to the result of segmenting the ventricle of the brain image to be detected by the target position and the result of segmenting the brain image to be detected by the pre-trained deep convolutional neural network.
Optionally, the position of the reference plane selected from the brain image to be detected is the position of the theoretical centerline plane in the brain image to be detected.
Optionally, the processing result determining unit includes:
the calculating subunit is used for calculating a first offset distance between the actual central line plane position of the brain image to be detected and the theoretical central line plane position;
the judging subunit is used for judging whether the first offset distance is greater than a preset distance threshold value;
and the processing result determining subunit is used for determining the degree of the brain structure displacement in the brain image to be detected if the distance is greater than a preset distance threshold.
Optionally, the method further includes:
a first training unit to:
acquiring a second sample set containing brain images;
determining the position of a theoretical centerline plane for each brain image in the second sample set;
processing the brain image in the second sample set according to the trained deep convolutional neural network to obtain a ventricle segmentation result of the brain image, and determining the position of an actual centerline plane of the brain image based on the ventricle segmentation result of the brain image;
calculating a second offset distance from the position of the actual centerline plane to the position of the theoretical centerline plane;
obtaining a result of whether each brain image in the second sample set has brain structure displacement;
and analyzing the second offset distance corresponding to each brain image in the second sample set and the result of whether the brain structure is displaced, and determining a distance threshold value for judging whether the brain structure is displaced.
Optionally, the method further includes:
a deep convolutional neural network training unit to:
acquiring a first sample set; the first sample set comprises a brain image marked with a ventricular segmentation result;
constructing a deep convolutional neural network model;
and training the constructed deep convolutional neural network model through the brain image marked with the ventricle segmentation result in the sample set.
With the apparatus of this embodiment, automatic judgment of brain structure displacement is achieved; moreover, the influence of whether the brain image posture is correct on the recognition result is greatly reduced, a wider range of brain-image postures can be handled, and the deep convolutional neural network makes the judgment result more accurate.
Referring to fig. 9, a schematic structural diagram of an electronic device according to an embodiment of the present invention is shown, and in this embodiment, the electronic device includes:
a processor 901 and a memory 902;
wherein the memory has stored thereon computer readable instructions which, when executed by the processor, implement an image processing method according to:
acquiring a brain image to be detected;
processing a brain image to be detected based on a pre-trained deep convolutional neural network model to obtain a ventricle segmentation result of the brain image to be detected; the deep convolutional neural network model is obtained by training a first sample set containing a brain image marked with a ventricle segmentation result;
determining the position of a theoretical centerline plane in a brain image to be detected;
determining the position of an actual midline plane based on the ventricular segmentation result of the brain image to be detected;
and determining an image processing result of the brain image to be detected according to the position of the actual central line plane and the position of the theoretical central line plane of the brain image to be detected.
Optionally, the determining the position of the theoretical centerline plane in the brain image to be detected includes:
registering the brain image to be detected with a standard brain image marked with the position of the midline plane;
and determining the position of the theoretical midline plane in the brain image to be detected according to the position of the midline plane in the standard brain image.
Optionally, the determining the position of the actual midline plane based on the result of the ventricular segmentation of the brain image to be detected includes:
selecting the position of a reference plane in the brain image to be detected;
translating the position of the reference plane along the coronal axis of the brain image to be detected;
when the reference plane moves to a target position, acquiring a division result of the reference plane on a ventricle of a brain image to be detected at the target position;
and determining the target position as the actual plane position of the midline according to the segmentation result of the target position on the ventricles of the brain image to be detected and the segmentation result of the brain image to be detected by the pre-trained deep convolutional neural network.
Optionally, the position of the reference plane selected from the brain image to be detected is the position of the theoretical centerline plane in the brain image to be detected.
Optionally, the determining whether there is a displacement of the brain structure in the brain image to be detected according to the position of the actual centerline plane and the position of the theoretical centerline plane of the brain image to be detected includes:
calculating a first offset distance between the actual central line plane position and the theoretical central line plane position of the brain image to be detected;
judging whether the first offset distance is larger than a preset distance threshold value or not;
and if the distance is larger than the preset distance threshold, judging the degree of brain structure displacement in the brain image to be detected.
Optionally, the method further includes:
acquiring a second sample set containing brain images;
determining the position of a theoretical centerline plane for each brain image in the second sample set;
processing the brain image in the second sample set according to the trained deep convolutional neural network to obtain a ventricle segmentation result of the brain image, and determining the position of an actual centerline plane of the brain image based on the ventricle segmentation result of the brain image;
calculating a second offset distance from the position of the actual centerline plane to the position of the theoretical centerline plane;
obtaining a result of whether each brain image in the second sample set has brain structure displacement;
and analyzing the second offset distance corresponding to each brain image in the second sample set and the result of whether the brain structure is displaced, and determining a distance threshold value for judging whether the brain structure is displaced.
Optionally, the training process of the deep convolutional neural network model includes:
acquiring a first sample set; the first sample set comprises a brain image marked with a ventricular segmentation result;
constructing a deep convolutional neural network model;
and training the constructed deep convolutional neural network model through the brain image marked with the ventricle segmentation result in the sample set.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An image processing method, comprising:
acquiring a brain image to be detected;
processing a brain image to be detected based on a pre-trained deep convolutional neural network model to obtain a ventricle segmentation result of the brain image to be detected; the deep convolutional neural network model is obtained by training a first sample set containing a brain image marked with a ventricle segmentation result;
determining the position of a theoretical centerline plane in a brain image to be detected;
determining the position of an actual midline plane based on the ventricular segmentation result of the brain image to be detected;
and determining an image processing result of the brain image to be detected according to the position of the actual central line plane and the position of the theoretical central line plane of the brain image to be detected.
2. The method according to claim 1, wherein determining the position of the theoretical centerline plane in the brain image to be detected comprises:
registering the brain image to be detected with a standard brain image marked with the position of the midline plane;
and determining the position of the theoretical midline plane in the brain image to be detected according to the position of the midline plane in the standard brain image.
3. The method according to claim 1, wherein determining the position of the actual midline plane based on the results of the ventricular segmentation of the brain images to be detected comprises:
selecting the position of a reference plane in the brain image to be detected;
translating the position of the reference plane along the coronal axis of the brain image to be detected;
when the reference plane moves to a target position, acquiring a division result of the reference plane on a ventricle of a brain image to be detected at the target position;
and determining the target position as the actual plane position of the midline according to the segmentation result of the target position on the ventricles of the brain image to be detected and the segmentation result of the brain image to be detected by the pre-trained deep convolutional neural network.
4. The method according to claim 3, wherein the position of the selected reference plane in the brain image to be detected is the position of the theoretical midline plane in the brain image to be detected.
5. The method according to claim 1, wherein determining the image processing result of the brain image to be detected according to the position of the actual centerline plane and the position of the theoretical centerline plane of the brain image to be detected comprises:
calculating a first offset distance between the actual central line plane position and the theoretical central line plane position of the brain image to be detected;
judging whether the first offset distance is larger than a preset distance threshold value or not;
and if the distance is larger than a preset distance threshold value, determining the degree of the brain structure displacement in the brain image to be detected.
6. The method of claim 1 or 5, further comprising:
acquiring a second sample set containing brain images;
determining the position of a theoretical centerline plane for each brain image in the second sample set;
processing the brain image in the second sample set according to the trained deep convolutional neural network to obtain a ventricle segmentation result of the brain image, and determining the position of an actual centerline plane of the brain image based on the ventricle segmentation result of the brain image;
calculating a second offset distance from the position of the actual centerline plane to the position of the theoretical centerline plane;
obtaining a result of whether each brain image in the second sample set has brain structure displacement;
and analyzing the second offset distance corresponding to each brain image in the second sample set and the result of whether the brain structure is displaced, and determining a distance threshold value for judging whether the brain structure is displaced.
7. The method of claim 1, wherein the training process of the deep convolutional neural network model comprises:
acquiring a first sample set; the first sample set comprises a brain image marked with a ventricular segmentation result;
constructing a deep convolutional neural network model;
and training the constructed deep convolutional neural network model through the brain image marked with the ventricle segmentation result in the sample set.
8. An image processing apparatus characterized by comprising:
the acquisition unit is used for acquiring a brain image to be detected;
the model processing unit is used for processing the brain image to be detected based on a pre-trained deep convolutional neural network model to obtain a ventricular segmentation result of the brain image to be detected; the deep convolutional neural network model is obtained by training a first sample set containing a brain image marked with a ventricle segmentation result;
the position determining unit of the theoretical midline plane is used for determining the position of the theoretical midline plane in the brain image to be detected;
the position determining unit of the actual midline plane is used for determining the position of the actual midline plane based on the ventricular segmentation result of the brain image to be detected;
and the processing result determining unit is used for determining the image processing result of the brain image to be detected according to the position of the actual central line plane and the position of the theoretical central line plane of the brain image to be detected.
9. A storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method according to any one of claims 1 to 7.
10. An electronic device, characterized in that the electronic device comprises:
a processor and a memory;
wherein the memory has stored thereon computer readable instructions which, when executed by the processor, implement the image processing method according to any one of claims 1 to 7.
CN201911189278.7A, filed 2019-11-28, priority 2019-11-28: Image processing method and device (CN110956636A, pending)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911189278.7A CN110956636A (en) 2019-11-28 2019-11-28 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911189278.7A CN110956636A (en) 2019-11-28 2019-11-28 Image processing method and device

Publications (1)

Publication Number Publication Date
CN110956636A true CN110956636A (en) 2020-04-03

Family

ID=69978784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911189278.7A Pending CN110956636A (en) 2019-11-28 2019-11-28 Image processing method and device

Country Status (1)

Country Link
CN (1) CN110956636A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583212A (en) * 2020-04-29 2020-08-25 上海杏脉信息科技有限公司 Method and device for determining brain midline shift
CN111862014A (en) * 2020-07-08 2020-10-30 深圳市第二人民医院(深圳市转化医学研究院) ALVI automatic measurement method and device based on left and right ventricle segmentation
CN113256705A (en) * 2021-03-23 2021-08-13 杭州依图医疗技术有限公司 Processing method, display method and processing device of craniocerebral image
WO2021189959A1 (en) * 2020-10-22 2021-09-30 平安科技(深圳)有限公司 Brain midline recognition method and apparatus, and computer device and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100322496A1 (en) * 2008-02-29 2010-12-23 Agency For Science, Technology And Research Method and system for anatomy structure segmentation and modeling in an image
US20140270434A1 (en) * 2013-03-18 2014-09-18 Samsung Electronics Co., Ltd. System and method for automatic planning of views in 3d images of brain
CN105426808A (en) * 2014-09-23 2016-03-23 深圳先进技术研究院 Intra-brain sagittal line measurement method and system
CN108369642A (en) * 2015-12-18 2018-08-03 加利福尼亚大学董事会 Acute disease feature is explained and quantified according to head computer tomography
CN108765483A (en) * 2018-06-04 2018-11-06 东北大学 The method and system of sagittal plane in being determined in a kind of CT images from brain
CN108962380A (en) * 2017-05-27 2018-12-07 周仁海 The device and method of interpretation brain phantom and the device of offer brain status information
CN109983474A (en) * 2016-11-22 2019-07-05 海珀菲纳研究股份有限公司 For the system and method detected automatically in magnetic resonance image
CN110415219A (en) * 2019-07-04 2019-11-05 杭州深睿博联科技有限公司 Medical image processing method and device, equipment, storage medium based on depth segmentation network

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100322496A1 (en) * 2008-02-29 2010-12-23 Agency For Science, Technology And Research Method and system for anatomy structure segmentation and modeling in an image
US20140270434A1 (en) * 2013-03-18 2014-09-18 Samsung Electronics Co., Ltd. System and method for automatic planning of views in 3d images of brain
CN105426808A (en) * 2014-09-23 2016-03-23 深圳先进技术研究院 Intra-brain sagittal line measurement method and system
CN108369642A (en) * 2015-12-18 2018-08-03 加利福尼亚大学董事会 Acute disease feature is explained and quantified according to head computer tomography
CN109983474A (en) * 2016-11-22 2019-07-05 海珀菲纳研究股份有限公司 For the system and method detected automatically in magnetic resonance image
CN108962380A (en) * 2017-05-27 2018-12-07 周仁海 The device and method of interpretation brain phantom and the device of offer brain status information
CN108765483A (en) * 2018-06-04 2018-11-06 东北大学 The method and system of sagittal plane in being determined in a kind of CT images from brain
CN110415219A (en) * 2019-07-04 2019-11-05 杭州深睿博联科技有限公司 Medical image processing method and device, equipment, storage medium based on depth segmentation network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王玉梁 (Wang Yuliang): "Research on Computer-Aided Diagnosis of CT Images of Hypertensive Intracerebral Hemorrhage" (高血压脑出血CT影像的计算机辅助诊断研究), China Master's Theses Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583212A (en) * 2020-04-29 2020-08-25 上海杏脉信息科技有限公司 Method and device for determining brain midline shift
CN111583212B (en) * 2020-04-29 2021-11-30 上海杏脉信息科技有限公司 Method and device for determining brain midline shift
CN111862014A (en) * 2020-07-08 2020-10-30 深圳市第二人民医院(深圳市转化医学研究院) ALVI automatic measurement method and device based on left and right ventricle segmentation
WO2021189959A1 (en) * 2020-10-22 2021-09-30 平安科技(深圳)有限公司 Brain midline recognition method and apparatus, and computer device and storage medium
CN113256705A (en) * 2021-03-23 2021-08-13 杭州依图医疗技术有限公司 Processing method, display method and processing device of craniocerebral image

Similar Documents

Publication Publication Date Title
CN110956636A (en) Image processing method and device
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
CN110796613B (en) Automatic identification method and device for image artifacts
EP2888718B1 (en) Methods and systems for automatic location of optic structures in an image of an eye, and for automatic retina cup-to-disc ratio computation
CN103249358B (en) Medical image-processing apparatus
CN111539944A (en) Lung focus statistical attribute acquisition method and device, electronic equipment and storage medium
CN111553892B (en) Lung nodule segmentation calculation method, device and system based on deep learning
KR102338018B1 (en) Ultrasound diagnosis apparatus for liver steatosis using the key points of ultrasound image and remote medical-diagnosis method using the same
CN109614869B (en) Pathological image classification method based on multi-scale compression reward and punishment network
CN111861989B (en) Method, system, terminal and storage medium for detecting brain midline
CN109801276B (en) Method and device for calculating heart-chest ratio
CN111374712B (en) Ultrasonic imaging method and ultrasonic imaging equipment
CN111738992B (en) Method, device, electronic equipment and storage medium for extracting lung focus area
CN101667297A (en) Method for extracting breast region in breast molybdenum target X-ray image
CN111401102B (en) Deep learning model training method and device, electronic equipment and storage medium
CN113421272B (en) Tumor infiltration depth monitoring method, device, equipment and storage medium
JP3668629B2 (en) Image diagnostic apparatus and image processing method
US20230115927A1 (en) Systems and methods for plaque identification, plaque composition analysis, and plaque stability detection
KR101154355B1 (en) Automatic left ventricle segmentation method
CN115249289A (en) Automatic frame selection for 3D model construction
CN113256625A (en) Electronic equipment and recognition device
CN111062943A (en) Plaque stability determination method and device and medical equipment
CN112085711A (en) Method for automatically tracking muscle pinnate angle by combining convolutional neural network and Kalman filtering
US11744542B2 (en) Method for evaluating movement state of heart
CN117523207B (en) Method, device, electronic equipment and storage medium for lung lobe segmentation correction processing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200403