CN112435227A - Medical image processing method and device, terminal equipment and medium - Google Patents

Medical image processing method and device, terminal equipment and medium

Info

Publication number
CN112435227A
CN112435227A (application CN202011300705.7A)
Authority
CN
China
Prior art keywords
brain
image
dimensional
images
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011300705.7A
Other languages
Chinese (zh)
Inventor
罗怡珊
林陈冉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Brainnow Medical Technology Co ltd
Original Assignee
Shenzhen Brainnow Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Brainnow Medical Technology Co ltd filed Critical Shenzhen Brainnow Medical Technology Co ltd
Priority to CN202011300705.7A priority Critical patent/CN112435227A/en
Publication of CN112435227A publication Critical patent/CN112435227A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain

Abstract

The application is applicable to the technical field of image processing, and provides a method, a device, terminal equipment and a medium for processing a medical image, wherein the method comprises the following steps: shooting a medical image to be processed to obtain a digital image; identifying a plurality of brain images contained in the digital image; extracting a sequence of consecutive two-dimensional images from the plurality of brain images; superposing a plurality of brain images in the continuous two-dimensional image sequence to obtain a three-dimensional brain image; and quantifying the brain atrophy degree according to the three-dimensional brain image. By the method, the medical image can be automatically processed, and the brain atrophy degree can be automatically quantified.

Description

Medical image processing method and device, terminal equipment and medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a method, an apparatus, a terminal device, and a medium for processing a medical image.
Background
Brain atrophy refers to a phenomenon in which brain tissue develops organic lesions from various causes and shrinks as a result. Generally, the degree of brain atrophy can be assessed by medical imaging.
Nowadays, medical imaging technology is developing rapidly and medical imaging examinations are widely available, so patients can obtain physical films, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) films, from hospitals. However, physical films are inconvenient to store, and patients without medical training find it difficult to view them and to accurately understand the information they contain.
Disclosure of Invention
The embodiment of the application provides a medical image processing method, a medical image processing device, terminal equipment and a medium, which can automatically process a medical image and help a patient to understand information in the medical image.
In a first aspect, an embodiment of the present application provides a method for processing a medical image, including:
shooting a brain medical image to be processed to obtain a digital image;
identifying a plurality of brain images contained in the digital image;
extracting a sequence of consecutive two-dimensional images from the plurality of brain images;
superposing a plurality of brain images in the continuous two-dimensional image sequence to obtain a three-dimensional brain image;
and quantifying the brain atrophy degree according to the three-dimensional brain image.
In a second aspect, an embodiment of the present application provides a processing apparatus for medical image, including:
the digital image acquisition module is used for shooting a brain medical image to be processed to obtain a digital image;
a brain image identification module for identifying a plurality of brain images contained in the digital image;
a two-dimensional image sequence extraction module for extracting a continuous two-dimensional image sequence from the plurality of brain images;
the three-dimensional brain image acquisition module is used for superposing a plurality of brain images in the continuous two-dimensional image sequence to obtain a three-dimensional brain image;
and the brain atrophy degree quantifying module is used for quantifying the brain atrophy degree according to the three-dimensional brain image.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the method according to the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method according to the first aspect.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the method of the first aspect.
Compared with the prior art, the embodiments of the present application have the following advantages. In this embodiment, a medical image to be processed is photographed to obtain a digital image; the digital image is then processed to extract a plurality of brain images; a preset two-dimensional image sequence is extracted from the plurality of brain images; the two-dimensional images in the sequence are superposed to form a three-dimensional brain image, from which the degree of brain atrophy is generally judged; after the three-dimensional brain image is obtained, the degree of brain atrophy is quantified. In this embodiment, only a photograph of the medical image is needed: the medical image is processed automatically, and the image information it contains, such as the degree of brain atrophy, is output. Based on the method of this application, the medical image can be browsed across hospitals and used for a second consultation.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a medical image processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a digital image obtained by capturing a medical image according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a medical image processing apparatus according to a second embodiment of the present application;
fig. 4 is a schematic structural diagram of a terminal device according to a third embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Fig. 1 is a schematic flowchart of a processing method of a medical image map according to an embodiment of the present application, as shown in fig. 1, the method includes:
s101, shooting a brain medical image to be processed to obtain a digital image.
The method of this embodiment is executed by a terminal device, which may include a photographing device for photographing a medical image. The medical image may be an image obtained by technical means such as X-ray, CT, magnetic resonance imaging, nuclear medicine examination, or ultrasound examination, and may be, for example, a magnetic resonance image film or a CT film. Fig. 2 is a schematic diagram of a digital image obtained by photographing a medical image according to an embodiment of the present application; as shown in fig. 2, the photographed medical image may be a film of a magnetic resonance scan of the brain.
In addition, the film may also be photographed in advance with a digital device and the resulting digital image stored in the terminal device; the terminal device then directly retrieves the stored digital image and performs the subsequent processing.
S102, identifying a plurality of brain images contained in the digital image.
Specifically, a plurality of brain images may be extracted from the digital image. As shown in fig. 2, the digital image contains a plurality of brain images but also some other, irrelevant parts. To judge the condition of the brain, the brain images in the digital image need to be extracted individually.
Specifically, if the digital image is a color image, it may first be converted into a grayscale image. A preset image erosion template is then applied to the grayscale image to remove adhesion parts, where an adhesion part may be a grid line in the digital image to be processed. The grayscale image is then binarized, and each connected region in the binarized grayscale image is determined, where each connected region can be regarded as one brain image. Finally, a plurality of brain images are determined from the digital image using each connected region as a mask.
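As a minimal, hypothetical sketch of step S102, binarization and connected-region masking might look like the following. The function names, the threshold value, and the toy image are illustrative assumptions, not part of the patent, and the erosion step is omitted for brevity:

```python
import numpy as np
from collections import deque

def binarize(gray, threshold=128):
    """Threshold a grayscale image into a 0/1 foreground mask."""
    return (gray >= threshold).astype(np.uint8)

def connected_regions(mask):
    """Label 4-connected foreground regions via BFS flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for r in range(mask.shape[0]):
        for c in range(mask.shape[1]):
            if mask[r, c] and labels[r, c] == 0:
                current += 1
                labels[r, c] = current
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current

# Two separate bright blobs on a dark background stand in for brain slices.
gray = np.zeros((6, 8), dtype=np.uint8)
gray[1:3, 1:3] = 200   # first "brain image"
gray[3:5, 5:7] = 220   # second "brain image"
mask = binarize(gray)
labels, n = connected_regions(mask)
# Each label can now serve as a mask to crop one brain image from the digital image.
```

Each labeled region would be used as a mask against the original digital image to cut out one brain image.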
S103, extracting a continuous two-dimensional image sequence from the plurality of brain images.
Specifically, different MRI image sequences may exist in the film, and when the medical image is processed, the required sequence needs to be extracted so that the degree of brain atrophy can be quantified.
Specifically, a large number of different sequences, such as T1W MRI, T2-FLAIR MRI, and T2W MRI two-dimensional image sequences, may be acquired in advance, and a preset machine learning algorithm may then be trained on them until it can distinguish the different two-dimensional image sequences.
The brain images appear in a certain order in the digital image, so the digital image can be scanned in that order and a sequence number assigned to each brain image. A plurality of two-dimensional image sequences can then be determined from the sequence numbers; each two-dimensional image sequence comprises a plurality of brain images of different sections, and the brain images in each sequence are consecutive. The two-dimensional image sequences are input into the machine learning algorithm, which identifies the different sequences so that the required one can be selected. For example, since the degree of brain atrophy can be determined from a T1W MRI two-dimensional image sequence, the T1W MRI sequence among the plurality of brain images can be selected based on the recognition result of the machine learning algorithm. Referring to fig. 2, the framed portion 20 of fig. 2 is a T1W MRI two-dimensional image sequence.
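The sequence-extraction step S103 can be sketched as follows. The dictionary layout, the stubbed-out classifier, and the `T1W` label are illustrative assumptions, since the patent does not specify the machine learning algorithm; a real system would replace the stub with a trained model:

```python
# Hypothetical sketch of step S103: number the images, group/scan them in
# order, and keep only the sequence the classifier recognizes as wanted.
def classify_sequence(image):
    # Stub: assume each image record carries a precomputed label.
    # A trained classifier (T1W / T2W / T2-FLAIR / ...) would go here.
    return image["label"]

def extract_sequence(images, wanted="T1W"):
    """Scan images in their in-film order and keep the wanted sequence."""
    ordered = sorted(images, key=lambda im: im["index"])  # sequential numbering
    return [im for im in ordered if classify_sequence(im) == wanted]

images = [
    {"index": 2, "label": "T1W"},
    {"index": 0, "label": "T2W"},
    {"index": 1, "label": "T1W"},
]
t1w = extract_sequence(images)
# t1w holds the T1W slices in scan order (indices 1, then 2).
```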
And S104, overlapping the plurality of brain images in the continuous two-dimensional image sequence to obtain a three-dimensional brain image.
Specifically, the two-dimensional image sequence comprises a plurality of brain images, and the plurality of brain images are sequentially connected, so that the brain images of a plurality of different cross sections in the two-dimensional image sequence can be sequentially superposed to form a three-dimensional brain image, thereby facilitating the judgment of the brain atrophy condition. For example, a T1W MRI two-dimensional image sequence may be superimposed into a three-dimensional T1W MRI image, and the degree of brain atrophy quantified from the three-dimensional image.
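Step S104 amounts to stacking the ordered two-dimensional slices along a new axis to form a volume. A minimal NumPy sketch, with tiny placeholder arrays standing in for real T1W slices:

```python
import numpy as np

# Three toy 4x4 "slices" stand in for consecutive T1W brain sections.
slices = [np.full((4, 4), i, dtype=np.uint8) for i in range(3)]

# Superpose the consecutive 2-D slices into a single 3-D brain volume.
volume = np.stack(slices, axis=0)  # shape: (num_slices, height, width)
```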
And S105, quantifying the brain atrophy degree according to the three-dimensional brain image.
Quantification means expressing the degree of brain atrophy with a quantitative index; the result of quantifying the degree of brain atrophy may be a numerical value that reflects it. In this embodiment, the ratio between the volume of cerebrospinal fluid and the sum of the volumes of white matter and gray matter can be used to quantify the degree of brain atrophy.
Specifically, the three-dimensional brain image may be divided into a plurality of brain partitions by using a preset brain template, and the degree of brain atrophy in each brain partition is then quantified. The brain partitions may include a left frontal lobe partition, a right frontal lobe partition, a left occipital lobe partition, a right occipital lobe partition, a left parietal lobe partition, a right parietal lobe partition, a left temporal lobe partition, a right temporal lobe partition, a left cingulate gyrus partition, a right cingulate gyrus partition, a left insular lobe partition, and a right insular lobe partition.
Specifically, a brain tissue segmentation method based on a Bayesian network can be used to perform brain tissue segmentation on each brain partition, obtaining a segmentation map of white matter, gray matter, and cerebrospinal fluid. According to the segmentation map, the volumes of white matter, gray matter, and cerebrospinal fluid in each brain partition are calculated respectively; the sum of the white matter and gray matter volumes in each brain partition is then calculated, together with the ratio between the cerebrospinal fluid volume and that sum. This ratio can be used as a quantitative index of the degree of brain atrophy in the brain partition.
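For one brain partition, the quantification in S105 reduces to a voxel-count ratio. A minimal sketch follows; the integer tissue labels and the toy segmentation map are assumptions for illustration (the patent does not fix a label convention), and voxel counts stand in for volumes under a uniform-voxel-size assumption:

```python
import numpy as np

# Assumed labels: 0 = background, 1 = white matter, 2 = gray matter,
# 3 = cerebrospinal fluid. A tiny 2-D map stands in for one 3-D partition.
segmentation = np.array([
    [1, 1, 2, 2],
    [1, 2, 2, 3],
    [3, 3, 0, 0],
])

wm = np.count_nonzero(segmentation == 1)   # white matter voxels
gm = np.count_nonzero(segmentation == 2)   # gray matter voxels
csf = np.count_nonzero(segmentation == 3)  # cerebrospinal fluid voxels

# Ratio of CSF volume to the combined white + gray matter volume:
atrophy_index = csf / (wm + gm)
```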
In this embodiment, the terminal device may automatically process the medical image map, extract the brain image therein, extract the required two-dimensional image sequence from the brain image, superimpose the brain image in the two-dimensional image sequence into a three-dimensional brain image, and quantify the brain atrophy degree based on the three-dimensional brain image. The terminal equipment can shoot and store the film, can automatically process the medical image map and is convenient for a patient to view and understand the medical image map. In addition, the medical image map is stored, so that cross-hospital browsing and secondary consultation are facilitated.
Fig. 3 is a schematic diagram of a medical image processing apparatus according to a second embodiment of the present application, and as shown in fig. 3, the apparatus includes:
the digital image acquisition module 31 is used for shooting a brain medical image to be processed to obtain a digital image;
a brain image recognition module 32, configured to recognize a plurality of brain images included in the digital image;
a two-dimensional image sequence extraction module 33, configured to extract a continuous two-dimensional image sequence from the plurality of brain images;
a three-dimensional brain image obtaining module 34, configured to superimpose a plurality of brain images in the continuous two-dimensional image sequence to obtain a three-dimensional brain image;
and a brain atrophy degree quantifying module 35, configured to quantify the brain atrophy degree according to the three-dimensional brain image.
The brain image recognition module 32 includes:
the grayscale processing submodule, which is used for converting the digital image into a grayscale image;
the binarization processing submodule, which is used for binarizing the grayscale image;
the connected region determining submodule, which is used for determining each connected region in the binarized grayscale image;
and the extraction submodule, which is used for determining a plurality of brain images from the binarized grayscale image by using each connected region as a mask.
The brain image recognition module 32 further includes:
and the erosion processing submodule, which is used for eroding the grayscale image with a preset image erosion template to remove an adhesion part in the grayscale image, where the adhesion part is a grid line in the medical image to be processed.
The two-dimensional image sequence extraction module 33 includes:
the scan order acquisition submodule, which is used for respectively determining the scan order of each brain image obtained by scanning;
the image sequence acquisition submodule, which is used for combining the plurality of brain images in the digital image into a plurality of image sequences according to the scan order;
and the preset two-dimensional image sequence extraction submodule, which is used for extracting, with a preset image recognition algorithm, the two-dimensional image sequence that meets a preset sequence condition from the plurality of image sequences.
The brain atrophy degree quantifying module 35 includes:
the brain partition submodule, which is used for dividing the three-dimensional brain image into a plurality of brain partitions by using a preset brain template;
and the quantification submodule, which is used for quantifying the degree of brain atrophy of each brain partition.
The quantification sub-module comprises:
the segmentation unit, which is used for performing brain tissue segmentation on each brain partition based on a Bayesian network brain tissue segmentation method to obtain a segmentation map of white matter, gray matter, and cerebrospinal fluid;
the volume calculation unit, which is used for respectively calculating the volumes of white matter, gray matter, and cerebrospinal fluid in each brain partition according to the segmentation map;
the ratio calculation unit, which is used for calculating the sum of the white matter and gray matter volumes in each brain partition, and for calculating the ratio between the cerebrospinal fluid volume and that sum;
and the quantification unit, which is used for taking the ratio as the quantitative index of the degree of brain atrophy of the brain partition.
Fig. 4 is a schematic structural diagram of a terminal device according to a third embodiment of the present application. As shown in fig. 4, the terminal device 4 of this embodiment includes: at least one processor 40 (only one shown in fig. 4), a memory 41, and a computer program 42 stored in the memory 41 and executable on the at least one processor 40, the processor 40 implementing the steps in any of the various method embodiments described above when executing the computer program 42.
The terminal device 4 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will appreciate that fig. 4 is merely an example of the terminal device 4 and does not constitute a limitation of it; the terminal device may include more or fewer components than those shown, combine certain components, or use different components, such as input/output devices, network access devices, and the like.
The processor 40 may be a Central Processing Unit (CPU), or may be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may in some embodiments be an internal storage unit of the terminal device 4, such as a hard disk or a memory of the terminal device 4. In other embodiments, the memory 41 may also be an external storage device of the terminal device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the terminal device 4. Further, the memory 41 may include both an internal storage unit and an external storage device of the terminal device 4. The memory 41 is used for storing an operating system, application programs, a boot loader, data, and other programs, such as the program code of the computer program. The memory 41 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on a terminal device, enables the terminal device to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, or a magnetic or optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for processing a medical image, comprising:
photographing a brain medical image to be processed to obtain a digital image;
identifying a plurality of brain images contained in the digital image;
extracting a sequence of consecutive two-dimensional images from the plurality of brain images;
superimposing the plurality of brain images in the sequence of consecutive two-dimensional images to obtain a three-dimensional brain image;
and quantifying the degree of brain atrophy according to the three-dimensional brain image.
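The superimposing step of claim 1 amounts to stacking ordered 2-D slices into a 3-D volume. A minimal illustrative sketch (not part of the patent; it assumes the slices are equally sized NumPy arrays already ordered by scan sequence) could look like this:

```python
import numpy as np

def stack_slices(slices):
    """Stack an ordered sequence of 2-D brain slices into a 3-D volume.

    Assumes `slices` is a non-empty list of equally sized 2-D arrays,
    ordered according to the scan sequence (an assumption for this sketch).
    """
    if not slices:
        raise ValueError("empty slice sequence")
    shape = slices[0].shape
    if any(s.shape != shape for s in slices):
        raise ValueError("all slices must share the same in-plane shape")
    # axis 0 becomes the slice (through-plane) axis
    return np.stack(slices, axis=0)

# Example: eight 256x256 slices become one 8x256x256 volume.
volume = stack_slices([np.zeros((256, 256)) for _ in range(8)])
print(volume.shape)  # (8, 256, 256)
```

In practice the through-plane voxel spacing (slice thickness) would also have to be recorded so that volumes computed from the stack are in physical units.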
2. The method of claim 1, wherein the identifying the plurality of brain images contained in the digital image comprises:
converting the digital image into a grayscale image;
performing binarization processing on the grayscale image;
determining each connected region in the binarized grayscale image;
and determining the plurality of brain images from the binarized grayscale image by using each connected region as a mask.
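The binarization and connected-region steps of claim 2 can be sketched with SciPy's connected-component labelling. This is an illustrative sketch only; the threshold and minimum-area values are assumptions for the example, not values from the patent:

```python
import numpy as np
from scipy import ndimage

def find_brain_regions(gray, threshold=128, min_area=100):
    """Locate candidate brain images on a digitized film.

    Binarizes the grayscale image, labels its connected regions, and
    returns one boolean mask per sufficiently large region. `threshold`
    and `min_area` are illustrative parameters, not from the patent.
    """
    binary = gray > threshold              # binarization
    labels, n = ndimage.label(binary)      # connected-region analysis
    masks = []
    for i in range(1, n + 1):
        mask = labels == i
        if mask.sum() >= min_area:         # discard small noise blobs
            masks.append(mask)
    return masks
```

Each returned mask can then be used to cut one brain image out of the digitized film, e.g. `np.where(masks[i], gray, 0)`.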
3. The method of claim 2, further comprising, before the binarization processing on the grayscale image:
eroding the grayscale image with a preset image erosion template to remove an adhesion portion in the grayscale image, wherein the adhesion portion is a grid line in the medical image to be processed.
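The erosion step of claim 3 is standard grayscale morphological erosion: a thin grid line joining two brain images is removed because every pixel on it has background within the erosion template's neighborhood, while the much larger brain regions survive. A sketch, assuming a 3x3 template (the patent does not specify the template size):

```python
import numpy as np
from scipy import ndimage

def remove_grid_adhesion(gray, template_size=3):
    """Erode the grayscale image with a small structuring element so that
    thin grid lines connecting adjacent brain images are broken.

    `template_size=3` is an illustrative choice, not from the patent.
    """
    footprint = np.ones((template_size, template_size), dtype=bool)
    # Grayscale erosion replaces each pixel with the minimum over the footprint.
    return ndimage.grey_erosion(gray, footprint=footprint)
```

A one-pixel-wide line vanishes entirely under a 3x3 erosion, whereas a solid block only loses a one-pixel rim; this is exactly the asymmetry the adhesion-removal step relies on.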
4. The method of claim 1, wherein the extracting the sequence of consecutive two-dimensional images from the plurality of brain images comprises:
respectively determining the scanning order of each brain image obtained by scanning;
combining the plurality of brain images in the digital image into a plurality of image sequences according to the scanning order;
and extracting, from the plurality of image sequences by using a preset image recognition algorithm, a two-dimensional image sequence that meets a preset sequence condition.
5. The method of claim 1, wherein the quantifying the degree of brain atrophy according to the three-dimensional brain image comprises:
dividing the three-dimensional brain image into a plurality of brain regions by using a preset brain template;
and quantifying the degree of brain atrophy in each of the brain regions.
6. The method of claim 5, wherein the plurality of brain regions comprises a left frontal lobe, a right frontal lobe, a left occipital lobe, a right occipital lobe, a left parietal lobe, a right parietal lobe, a left temporal lobe, a right temporal lobe, a left cingulate gyrus, a right cingulate gyrus, a left insula, and a right insula.
7. The method of claim 5, wherein the quantifying the degree of brain atrophy in each of the brain regions comprises:
performing brain tissue segmentation on each brain region based on a Bayesian-network brain tissue segmentation method to obtain segmentation maps of white matter, grey matter, and cerebrospinal fluid;
calculating the volumes of the white matter, the grey matter, and the cerebrospinal fluid in each brain region according to the segmentation maps;
calculating the sum of the volumes of the white matter and the grey matter in each brain region, and calculating the ratio between the volume of the cerebrospinal fluid and that sum;
and taking the ratio as a quantitative index of the degree of brain atrophy of the brain region.
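The quantitative index of claim 7 is a single ratio per region: CSF volume divided by the combined white-matter and grey-matter volume (a larger value indicates more atrophy, since tissue loss is accompanied by CSF filling the vacated space). A minimal sketch of that final arithmetic step:

```python
def atrophy_index(csf_volume, white_volume, gray_volume):
    """Quantify brain atrophy for one brain region as the ratio between the
    cerebrospinal-fluid volume and the combined white-matter plus
    grey-matter volume, per claim 7. Volumes are assumed to be in the
    same physical unit (e.g. ml)."""
    tissue = white_volume + gray_volume
    if tissue <= 0:
        raise ValueError("white + grey matter volume must be positive")
    return csf_volume / tissue

# e.g. 30 ml of CSF against 120 ml of brain tissue gives an index of 0.25
print(atrophy_index(30.0, 70.0, 50.0))  # 0.25
```

Because the index is a ratio rather than an absolute volume, it is insensitive to overall head size, which is presumably why the patent uses it instead of raw CSF volume.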
8. A device for processing a medical image, comprising:
a digital image acquisition module, configured to photograph a brain medical image to be processed to obtain a digital image;
a brain image identification module, configured to identify a plurality of brain images contained in the digital image;
a two-dimensional image sequence extraction module, configured to extract a sequence of consecutive two-dimensional images from the plurality of brain images;
a three-dimensional brain image acquisition module, configured to superimpose the plurality of brain images in the sequence of consecutive two-dimensional images to obtain a three-dimensional brain image;
and a brain atrophy quantification module, configured to quantify the degree of brain atrophy according to the three-dimensional brain image.
9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202011300705.7A 2020-11-19 2020-11-19 Medical image processing method and device, terminal equipment and medium Pending CN112435227A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011300705.7A CN112435227A (en) 2020-11-19 2020-11-19 Medical image processing method and device, terminal equipment and medium

Publications (1)

Publication Number Publication Date
CN112435227A true CN112435227A (en) 2021-03-02

Family

ID=74694342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011300705.7A Pending CN112435227A (en) 2020-11-19 2020-11-19 Medical image processing method and device, terminal equipment and medium

Country Status (1)

Country Link
CN (1) CN112435227A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010000002A * 1999-06-26 2001-01-05 박종원 Method for separating white matter and grey matter in a brain image and calculating their volumes
CN101604458A * 2008-06-11 2009-12-16 美国西门子医疗解决公司 Method for displaying pre-rendered computer-aided diagnosis results
CN107103612A * 2017-03-28 2017-08-29 深圳博脑医疗科技有限公司 Automated quantitative calculation method for subregional brain atrophy
CN109472263A * 2018-10-12 2019-03-15 东南大学 Brain magnetic resonance image segmentation method combining global and local information
CN109602434A * 2018-03-09 2019-04-12 上海慈卫信息技术有限公司 Method for detecting fetal cranial images in utero
CN111145147A * 2019-12-14 2020-05-12 中国科学院深圳先进技术研究院 Segmentation method for multi-modal medical images and terminal device


Similar Documents

Publication Publication Date Title
CN110533609B (en) Image enhancement method, device and storage medium suitable for endoscope
Beare et al. Image segmentation, registration and characterization in R with SimpleITK
US8811708B2 (en) Quantification of medical image data
CN108876794B (en) Isolation of aneurysm from parent vessel in volumetric image data
CN109716445B (en) Similar case image search program, similar case image search device, and similar case image search method
US8031917B2 (en) System and method for smart display of CAD markers
CN110910441A (en) Method and device for extracting center line
CN108172275B (en) Medical image processing method and device
CN108294728A (en) wound state analysis method and system
CN110782446A (en) Method and device for determining volume of lung nodule
CN111080583A (en) Medical image detection method, computer device and readable storage medium
CN113888566A (en) Target contour curve determining method and device, electronic equipment and storage medium
WO2019109410A1 (en) Fully convolutional network model training method for splitting abnormal signal region in mri image
CN109767468B (en) Visceral volume detection method and device
CN115294426B (en) Method, device and equipment for tracking interventional medical equipment and storage medium
Liu et al. Automatic segmentation and measurement of pressure injuries using deep learning models and a LiDAR camera
CN112435227A (en) Medical image processing method and device, terminal equipment and medium
CN114299046A (en) Medical image registration method, device, equipment and storage medium
CN113850794A (en) Image processing method and device
CN111739004A (en) Image processing method, apparatus and storage medium
CN109493396B (en) CT image display method, device, equipment and medium
CN113177938A (en) Method and device for segmenting brain glioma based on circular convolution kernel and related components
CN112102295A (en) DR image registration method, device, terminal and computer-readable storage medium
CN110517239A Medical image detection method and device
CN112651924B (en) Data generation device, method, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination