CN116681717B - CT image segmentation processing method and device - Google Patents

CT image segmentation processing method and device

Info

Publication number
CN116681717B
CN116681717B (application CN202310973299.8A)
Authority
CN
China
Prior art keywords
image
organ tissue
organ
standard
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310973299.8A
Other languages
Chinese (zh)
Other versions
CN116681717A (en)
Inventor
马骁
谷文成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingzhi Information Technology Shandong Co ltd
Original Assignee
Jingzhi Information Technology Shandong Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingzhi Information Technology Shandong Co ltd
Priority to CN202310973299.8A
Publication of CN116681717A
Application granted
Publication of CN116681717B
Active legal status
Anticipated expiration legal status


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker

Abstract

The application discloses a CT image segmentation processing method and device, and relates to the technical field of image processing. The method comprises the following steps: acquiring a CT image to be detected; identifying each organ tissue in the CT image; obtaining the edges of the organ tissues through an edge detection algorithm, and frame-selecting the organ tissues along those edges; and segmenting the frame-selected organ tissues. When a segmented organ tissue is selected, it is enlarged on its own while the unselected part of the CT image is hidden, where the unselected part includes both the unselected organ tissues and the useless information outside the segmentation-line frames, such as the background. The enlarged selected organ tissue and the hidden remainder of the CT image make the image easier for a doctor to examine, reducing interference from other organ tissues so that the doctor can concentrate on carefully examining a particular organ tissue.

Description

CT image segmentation processing method and device
Technical Field
The application relates to the technical field of image processing, in particular to a segmentation processing method and device of a CT image.
Background
CT images are among the most common medical images. CT (Computed Tomography) images are obtained by scanning a part of the human body section by section with a finely collimated X-ray beam, gamma rays, ultrasonic waves, or the like, together with a detector of extremely high sensitivity rotating around that part, yielding a multi-layer CT image with a number of different sections. With the continuous progress of detectors and imaging equipment, the CT images obtained have become clearer and more accurate, so that doctors can make more accurate diagnoses from CT images of the different sections.
In the prior art, to avoid errors in CT image information caused by excessive processing, CT images generally pursue a faithful restoration of the actual condition of the organ tissues in the human body without modifying the image content. However, useful and useless information in a CT image are heavily mixed, making the many organ tissues difficult to distinguish, so a doctor is usually required to identify them through subjective experience. Relying on subjective experience easily leads to risks such as missed or erroneous judgments, which delay the treatment of a patient, may even cause medical accidents, and threaten the patient's life and health. A CT image segmentation processing method is therefore needed that, without modifying the content of the CT image, presents each organ tissue in the CT image individually, effectively shields useless information, spares the doctor the effort of identifying organ tissues, and lets the doctor concentrate on judging the illness from the CT image.
Disclosure of Invention
The application aims, in view of the defects of the prior art, to provide a CT image segmentation processing method and device that, without modifying the content of the CT image, segment each organ tissue in the CT image so that it can be presented individually, effectively shield useless information, spare the doctor the effort of identifying organ tissues, and let the doctor concentrate on studying and judging the illness from the CT image.
In order to achieve the above object, the present application provides the following technical solutions:
in a first aspect of the present application, there is provided a segmentation processing method for a CT image, including:
acquiring a CT image to be detected;
identifying each organ tissue in the CT image;
obtaining the edge of the organ tissue through an edge detection algorithm, and carrying out frame selection on the edge of the organ tissue;
and segmenting the frame-selected organ tissue, and when a segmented organ tissue is selected, enlarging it individually and hiding the unselected part of the CT image.
In an embodiment of the present application, before the step of identifying each organ tissue in the CT image, the method further includes:
the method comprises the steps of acquiring the existing CT image, wherein the acquisition path comprises a hospital database, the Internet, medical teaching materials and medical documents;
screening the existing CT images to obtain CT images of organ tissues in a healthy state;
based on the CT images of organ tissues in a healthy state, obtaining a standard CT image set for each organ tissue in a healthy state;
and comparing the CT image to be detected with the standard CT image set.
In an embodiment of the present application, before the step of comparing the CT image to be detected with the standard CT image set, the method further includes:
labeling each organ tissue based on the standard CT image set, wherein each label is the name of an organ tissue;
determining a standard CT image training set and a verification set of each organ tissue based on the labels and the standard CT image sets;
performing iterative training on a CT detection model according to the standard CT image training set of each organ tissue, evaluating it with the verification set, stopping training when the neural network model reaches the preset number of training iterations, and exporting the CT detection model;
the CT detection model takes the CT image to be detected as input and outputs the name of each organ tissue in the CT image to be detected.
In an embodiment of the present application, after the step of identifying each organ tissue in the CT image, the method further includes:
the CT detection model outputs the matching degree between each organ tissue in the CT image to be detected and the standard CT image; the higher the matching degree, the more similar they are;
the matching degree is divided into a plurality of ranges, specifically a first matching range, a second matching range and a third matching range, wherein the lower limit of the first matching range is greater than or equal to a first matching threshold; the upper limit of the second matching range is smaller than the first matching threshold and its lower limit is greater than or equal to a second matching threshold; and the upper limit of the third matching range is smaller than the second matching threshold;
each organ tissue is segmented with a different type of segmentation line according to its matching range, where the segmentation-line types differ in line thickness, color and transparency.
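As a concrete illustration of the three matching ranges, the small sketch below classifies a matching degree and looks up a segmentation-line style. The threshold values (0.9 and 0.7) and the style table are assumptions for illustration only; the patent specifies neither.

```python
# Hypothetical sketch of the three matching ranges described above.
# The thresholds 0.9 / 0.7 and the style table are assumed values,
# since the patent does not give concrete numbers.

def match_range(score: float,
                first_threshold: float = 0.9,
                second_threshold: float = 0.7) -> str:
    """Classify a matching degree into one of the three ranges."""
    if score >= first_threshold:     # lower limit of the first range
        return "first"
    if score >= second_threshold:    # lower limit of the second range
        return "second"
    return "third"                   # everything below the second threshold

# Each range then selects a segmentation-line style (illustrative values):
LINE_STYLES = {
    "first":  {"thickness": 1, "color": "green",  "alpha": 0.4},
    "second": {"thickness": 2, "color": "yellow", "alpha": 0.7},
    "third":  {"thickness": 3, "color": "red",    "alpha": 1.0},
}
```

A tissue with a low matching degree would thus be framed with the thickest, most opaque line, drawing the doctor's eye to it first.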
In an embodiment of the present application, in addition to segmenting organ tissues in different matching ranges with different types of segmentation lines, adjacent organ tissues are also segmented by different types of segmentation lines.
In an embodiment of the present application, after the step of identifying each organ tissue in the CT image, the method further includes:
dividing each organ tissue into regions;
respectively identifying all areas of each organ tissue;
and for each organ tissue, outputting and marking the region with the largest difference from the standard CT image set.
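A minimal sketch of this region step, assuming a 2x2 grid and per-region mean absolute difference as the comparison measure (the patent leaves both the grid granularity and the difference metric open):

```python
import numpy as np

# Illustrative sketch: divide an organ-tissue image into a grid of
# regions, score each region against the standard image, and return
# the grid cell that differs most. Grid size and metric are assumptions.

def max_difference_region(tissue: np.ndarray, standard: np.ndarray,
                          grid: int = 2) -> tuple[int, int]:
    """Grid cell (row, col) where the tissue differs most from the standard."""
    h, w = tissue.shape
    rh, cw = h // grid, w // grid
    diffs = np.empty((grid, grid))
    for r in range(grid):
        for c in range(grid):
            sl = (slice(r * rh, (r + 1) * rh), slice(c * cw, (c + 1) * cw))
            diffs[r, c] = np.abs(tissue[sl] - standard[sl]).mean()
    return tuple(np.unravel_index(np.argmax(diffs), diffs.shape))

std = np.zeros((4, 4))
img = np.zeros((4, 4))
img[2:, 2:] = 1.0                  # difference confined to the bottom-right
region = max_difference_region(img, std)
```

The returned cell is the region that would be marked for the doctor's attention.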
In an embodiment of the present application, after the step of outputting the region having the greatest difference from the standard CT image set, the method further includes:
s601: collecting the region with the largest difference and the peripheral region of the region with the largest difference;
s602: dividing the region with the largest difference and the peripheral region of the region with the largest difference again;
s603: outputting the region with the largest difference between the current organ tissue and the standard CT image set;
s604: and repeating the steps S601-S603, and correcting the region with the largest difference to obtain the region with the largest difference of each organ tissue in the CT image to be detected relative to the standard CT image set.
In a second aspect of the present application, there is provided a CT image segmentation processing apparatus, the apparatus comprising:
an image acquisition module: acquiring a CT image to be detected;
an image recognition module: identifying each organ tissue in the CT image;
and (3) an edge frame selection module: obtaining the edge of the organ tissue through an edge detection algorithm, and carrying out frame selection on the edge of the organ tissue;
image segmentation module: segmenting the frame-selected organ tissue, and when a segmented organ tissue is selected, enlarging it individually and hiding the unselected part of the CT image.
In an embodiment of the application, the apparatus further comprises:
existing image collection module: acquiring existing CT images, wherein the acquisition paths include hospital databases, the Internet, medical textbooks and medical literature;
image screening module: screening the existing CT images to obtain CT images of organ tissues in a healthy state;
standard image module: based on the CT images of organ tissues in a healthy state, obtaining a standard CT image set for each organ tissue in a healthy state;
image comparison module: comparing the CT image to be detected with the standard CT image set.
In an embodiment of the application, the apparatus further comprises:
image marking module: labeling each organ tissue based on the standard CT image set, wherein each label is the name of an organ tissue;
training set determination module: determining a standard CT image training set and a verification set of each organ tissue based on the labels and the standard CT image sets;
model training module: performing iterative training on a CT detection model according to the standard CT image training set of each organ tissue, evaluating it with the verification set, stopping training when the neural network model reaches the preset number of training iterations, and exporting the CT detection model;
model output module: the CT detection model takes the CT image to be detected as input and outputs the name of each organ tissue in the CT image to be detected.
In an embodiment of the application, the apparatus further comprises:
matching degree output module: the CT detection model outputs the matching degree between each organ tissue in the CT image to be detected and the standard CT image; the higher the matching degree, the more similar they are;
matching degree sorting module: the matching degree is divided into a plurality of ranges, specifically a first matching range, a second matching range and a third matching range, wherein the lower limit of the first matching range is greater than or equal to a first matching threshold; the upper limit of the second matching range is smaller than the first matching threshold and its lower limit is greater than or equal to a second matching threshold; and the upper limit of the third matching range is smaller than the second matching threshold;
image segmentation sub-module: each organ tissue is segmented with a different type of segmentation line according to its matching range, where the segmentation-line types differ in line thickness, color and transparency.
In an embodiment of the application, the apparatus further comprises:
adjacent image segmentation module: adjacent organ tissues are segmented by different types of segmentation lines.
In an embodiment of the application, the apparatus further comprises:
region dividing module: dividing each organ tissue into regions;
region identification module: respectively identifying all areas of each organ tissue;
difference recognition module: for each organ tissue, outputting and marking the region with the largest difference relative to the standard CT image set.
In an embodiment of the application, the apparatus further comprises:
region collection module: collecting the region with the largest difference and the peripheral region of the region with the largest difference;
region dividing sub-module: dividing the region with the largest difference and the peripheral region of the region with the largest difference again;
the difference recognition sub-module: outputting the region with the largest difference between the current organ tissue and the standard CT image set;
and a region correction module: and correcting the region with the largest difference to obtain a region with the largest difference of each organ tissue in the CT image to be detected relative to the standard CT image set.
The application has the following beneficial effects:
in the embodiment of the application, after the edge detection algorithm obtains the edge of the organ tissue, each organ tissue is subjected to frame selection through a dividing line, after frame selection, the organ tissues are subjected to segmentation treatment, when one organ is selected, the selected organ tissue is singly amplified, and other unselected parts are hidden, wherein the unselected parts comprise unselected organ tissues and useless information which is not selected by the dividing line frame, and comprise contents such as background, the amplified selected organ tissue and the hidden CT image are more convenient for a doctor to check, so that the interference of other organ tissues is reduced, and the doctor can concentrate on carefully checking a certain organ tissue; furthermore, because the images presented by the organ tissues in the diseased state are quite different and have the conditions of hyperplasia, deficiency, pathological changes and the like, the extent of the increase, deficiency and pathological changes can influence the presentation forms of CT images, so that the difficulty in collecting and classifying CT images in all diseased states of the organ tissues is quite high, the expression forms of CT images in healthy states of the organ tissues are quite uniform, a standard CT image set in the healthy state of each organ tissue can be obtained by screening CT images in healthy states of the organ tissues, and then the standard CT image set is compared with the CT images to be detected, and further the identification of each organ tissue is carried out on the CT images to be detected based on the standard CT image set; further, the organ tissues are further divided, the organ tissues and the organ tissues corresponding to the standard CT images are correspondingly identified, the area with the largest difference between the output of each organ tissue and the standard CT image set is obtained, and the area with the largest 
difference can be carefully checked during checking, so that the area with larger difference from the standard CT image can be checked with concentrated effort, and the diseased area can be checked more easily; further, the area with the largest difference and the area surrounding the area with the largest difference are collected, that is, the area where the disease is possibly present and the area with the largest difference are divided again, the current division only includes the area with the largest difference and the area surrounding the area with the largest difference, so that the area needing to be divided is reduced, the rest area can be divided more finely, each area is identified again through step S803 to obtain the area with the largest difference at present, and the one-time division is further changed into multiple divisions through repeated superposition operation of step S804, so that the area with the largest difference obtained is corrected and accurate.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram of an electronic device in a hardware operating environment according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a system architecture according to an embodiment of the application.
Fig. 3 is a flowchart of steps of a method for segmenting a CT image according to an embodiment of the present application.
Fig. 4 is a functional block diagram of a CT image segmentation apparatus according to an embodiment of the present application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will become more readily apparent, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. It will be apparent that the described embodiments are some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The scheme of the application is further described below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device in a hardware running environment according to an embodiment of the present application.
As shown in fig. 1, the electronic device may include: a processor 1001, such as a central processing unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable communication between these components. The user interface 1003 may include a display and an input unit such as a keyboard, and may optionally further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed random access memory (RAM) or a stable non-volatile memory (NVM), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the structure shown in fig. 1 is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or may be arranged in different components.
As shown in fig. 1, the memory 1005, as one type of storage medium, may include an operating system, a data storage module, a network communication module, a user interface module, and an electronic program.
In the electronic device shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with a user. The processor 1001 and the memory 1005 may be disposed in the electronic device, which invokes the CT image segmentation processing apparatus stored in the memory 1005 through the processor 1001 and executes the CT image segmentation processing method provided by the embodiment of the present application.
Referring to fig. 2, a system architecture diagram of an embodiment of the present application is shown. As shown in fig. 2, the system architecture may include a first device 201, a second device 202, a third device 203, a fourth device 204, and a network 205. Wherein the network 205 is used as a medium to provide communication links between the first device 201, the second device 202, the third device 203, and the fourth device 204. The network 205 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
In this embodiment, the first device 201, the second device 202, the third device 203, and the fourth device 204 may be hardware devices or software that support network connection to provide various network services. When the device is hardware, it may be a variety of electronic devices including, but not limited to, smartphones, tablets, laptop portable computers, desktop computers, servers, and the like. In this case, the hardware device may be realized as a distributed device group composed of a plurality of devices, or may be realized as a single device. When the device is software, it can be installed in the above-listed devices. In this case, as software, it may be implemented as a plurality of software or software modules for providing distributed services, for example, or as a single software or software module. The present application is not particularly limited herein.
In a specific implementation, the device may provide the corresponding network service by installing a corresponding client application or server application. After the device has installed the client application, it may be embodied as a client in network communication. Accordingly, after the server application is installed, it may be embodied as a server in network communications.
As an example, in fig. 2, the first device 201 is embodied as a server, and the second device 202, the third device 203, and the fourth device 204 are embodied as clients. Specifically, the second device 202, the third device 203, and the fourth device 204 may be clients installed with an information-browsing application, and the first device 201 may be a background server of the information-browsing application. It should be noted that the CT image segmentation processing method according to the embodiment of the present application may be performed by the first device 201.
It should be understood that the number of networks and devices in fig. 2 is merely illustrative. There may be any number of networks and devices as desired for an implementation.
Referring to fig. 3, based on the foregoing hardware running environment and system architecture, an embodiment of the present application provides a method for segmentation processing of a CT image, including:
s301: acquiring a CT image to be detected;
it should be noted that CT, short for computed tomography, is a disease detection technique based on computer-processed X-ray tomography. A CT examination measures the human body with an instrument of extremely high sensitivity, exploiting the differences in X-ray absorption and transmittance among the different tissues of the human body; the measured data are input into an electronic computer, which processes them to produce a sectional or three-dimensional image of the examined part of the human body, i.e. a CT image. A CT image is a layer image, commonly a cross section;
in the present embodiment, CT images to be detected are acquired for the next step of identifying each organ tissue from the CT images;
s302: identifying each organ tissue in the CT image;
in the present embodiment, each organ tissue in the CT image is identified, and an approximate region range of the organ tissue is obtained;
s303: obtaining the edge of the organ tissue through an edge detection algorithm, and carrying out frame selection on the edge of the organ tissue;
it should be noted that many edge detection algorithms exist, including the Canny edge detection algorithm, a multi-step algorithm for detecting the edges of any input image;
in this embodiment, edge detection is performed on the identified organ tissues, and the edges of the organ tissues are frame-selected with segmentation lines, so that different organ tissues are separated from one another and the mutual interference caused by overlapping, crossing and tight connection among different organ tissues is reduced during examination;
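A minimal, self-contained sketch of edge detection plus frame selection on a synthetic image. A real implementation would more likely call a library routine such as OpenCV's Canny detector; the plain gradient-magnitude threshold below (with an arbitrary threshold of 0.4) is an assumption made to keep the example dependency-free:

```python
import numpy as np

def edge_mask(image: np.ndarray, threshold: float = 0.4) -> np.ndarray:
    """Binary edge mask from the gradient magnitude of a 2-D image."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy) > threshold

def frame_select(mask: np.ndarray) -> tuple[int, int, int, int]:
    """Bounding box (top, bottom, left, right) enclosing all edge pixels."""
    rows, cols = np.nonzero(mask)
    return rows.min(), rows.max(), cols.min(), cols.max()

# Synthetic "organ tissue": a bright rectangle on a dark background.
img = np.zeros((16, 16))
img[4:10, 5:12] = 1.0
top, bottom, left, right = frame_select(edge_mask(img))
```

The bounding box plays the role of the segmentation-line frame drawn around each detected organ tissue.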
s304: and dividing the organ tissue selected by the frame, and singly amplifying the organ tissue after selecting the division, and hiding the part of the CT image which is not selected.
In this embodiment, after the edge detection algorithm obtains the edges of the organ tissues, each organ tissue is frame-selected with a segmentation line and then segmented. When one organ is selected, the selected organ tissue is enlarged on its own and the remaining unselected part is hidden, where the unselected part includes the unselected organ tissues and the useless information outside the segmentation-line frames, such as the background; both the enlarged selected organ tissue and the hidden remainder of the CT image make examination more convenient for the doctor, reducing interference from other organ tissues so that the doctor can concentrate on carefully examining a particular organ tissue.
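The select-enlarge-hide behaviour of S304 can be sketched as follows; the 2x nearest-neighbour zoom and the zeroed-out background are illustrative choices, not the patent's prescribed implementation:

```python
import numpy as np

# Sketch of S304: when one framed organ tissue is selected, hide the
# unselected content, crop to the selection, and show it enlarged.

def show_selected(image: np.ndarray, mask: np.ndarray,
                  zoom: int = 2) -> np.ndarray:
    """Hide unselected pixels, crop to the selection, and enlarge it."""
    hidden = np.where(mask, image, 0)            # hide unselected content
    rows, cols = np.nonzero(mask)
    crop = hidden[rows.min():rows.max() + 1,     # crop to the framed region
                  cols.min():cols.max() + 1]
    # nearest-neighbour enlargement along both axes
    return np.repeat(np.repeat(crop, zoom, axis=0), zoom, axis=1)

img = np.arange(16.0).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                            # the "selected organ tissue"
out = show_selected(img, mask)
```

Everything outside the mask is zeroed before cropping, so only the selected tissue survives into the enlarged view the doctor sees.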
In a possible embodiment, before the step of identifying each organ tissue in the CT image, the method further comprises:
s401: the method comprises the steps of acquiring the existing CT image, wherein the acquisition path comprises a hospital database, the Internet, medical teaching materials and medical documents;
it should be noted that hospitals currently store a large amount of CT image data accumulated over many years, and CT images of various organ tissues, including images of different layers with different layer thicknesses, can be downloaded from the Internet, although the organ tissues in them are often not identified by name; most CT images in medical textbooks and medical literature, by contrast, carry detailed annotations, so the names of the organ tissues in those CT images can be obtained more clearly;
in this embodiment, existing CT images are acquired through multiple channels and used for comparative recognition of the CT image to be detected, so as to obtain the region range, name, and so on of each organ tissue in the CT image to be detected;
s402: screening the existing CT images to obtain CT images of healthy organs and tissues;
in this embodiment, the existing CT images are screened and CT images of diseased organ tissues are discarded. Because many different conditions can cause organ tissue disease, the corresponding CT images take too many forms to collect and classify easily, and an organ tissue in the CT image to be detected would be difficult to match against them because of the large differences. It therefore suffices to collect CT images of organ tissues in a healthy state and compare the CT image to be detected against them, while appropriately lowering the threshold for a successful comparison, so that even if a diseased organ tissue is present in the CT image to be detected, it is still successfully identified because the rest of that organ tissue matches the healthy-state CT images;
s403: based on the CT images of the healthy state of the organ and the tissue, obtaining CT images of the healthy state of the organ and the tissue, and further obtaining a standard CT image set of the healthy state of the organ and the tissue;
s404: and comparing the CT image to be detected with the standard CT image set.
In this embodiment, because the images presented by organ tissues in a diseased state vary widely, with conditions such as hyperplasia, defect and pathological change whose extent also affects how the CT image presents, collecting and classifying CT images of all diseased states of the organ tissues is very difficult, whereas CT images of organ tissues in a healthy state present quite uniformly. A standard CT image set for each organ tissue in a healthy state can therefore be obtained by screening for CT images of healthy organ tissues; the standard CT image set is then compared with the CT image to be detected, and each organ tissue in the CT image to be detected is identified on the basis of the standard CT image set.
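One plausible form of the comparison in S404, assuming mean absolute difference as the matching measure (the patent leaves the exact metric open) and a tiny synthetic standard set with hypothetical tissue names:

```python
import numpy as np

# Sketch of S404: score a tissue patch from the CT image to be detected
# against each image in the standard set and keep the best match.
# Mean absolute difference is an assumed similarity measure.

def best_match(patch: np.ndarray,
               standard_set: dict[str, np.ndarray]) -> tuple[str, float]:
    """Return (tissue name, matching degree in [0, 1]) of the best match."""
    scores = {name: 1.0 - np.abs(patch - ref).mean()
              for name, ref in standard_set.items()}
    name = max(scores, key=scores.get)
    return name, scores[name]

standard = {"liver":  np.full((4, 4), 0.8),    # hypothetical standard images
            "kidney": np.full((4, 4), 0.3)}
name, degree = best_match(np.full((4, 4), 0.75), standard)
```

The matching degree returned here is what the later embodiment sorts into the first, second, and third matching ranges.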
In a possible embodiment, before the step of comparing the CT image to be detected with the standard CT image set, the method further comprises:
s501: labeling each organ tissue based on the standard CT image set, where the label is the name of the organ tissue;
In this embodiment, the standard CT image set is preprocessed: an individual image of each organ tissue is cropped out and paired with the corresponding name label, for use in training the CT detection model in the subsequent steps;
s502: determining a standard CT image training set and a verification set of each organ tissue based on the labels and the standard CT image sets;
s503: performing iterative training on a CT detection model according to the standard CT image training set of each organ tissue, evaluating with the verification set, stopping training when the neural network model reaches the preset number of training iterations, and exporting the CT detection model;
It should be noted that the training set and the verification set both serve the learning process of the neural network model: the training set is used to train the parameters of the model, and the verification set is used to verify the generalization performance of the trained final model;
In this embodiment, the training set is used for iterative training of the CT detection model and the verification set is used to evaluate it. Training stops when the neural network model reaches the preset number of training iterations and/or the loss value of the target samples converges below a preset loss threshold; the trained CT detection model is then exported and used to identify each organ tissue in the CT image to be detected;
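A minimal sketch of the two stopping rules described here (preset iteration count and/or loss convergence below a preset threshold); `step_fn` and `validate_fn` are placeholders for the real neural-network training and validation routines, which the text does not specify:

```python
def train_ct_detection_model(step_fn, validate_fn, max_iters=100, loss_threshold=1e-3):
    """Iterative training with the stopping rules of S503 / this embodiment:
    stop at the preset iteration count, or earlier once the loss converges
    to the preset threshold. Returns the loss history and a validation score."""
    history = []
    for it in range(1, max_iters + 1):
        loss = step_fn(it)          # one training iteration on the training set
        history.append(loss)
        if loss <= loss_threshold:  # loss convergence: early stop
            break
    val_score = validate_fn()       # evaluate generalization on the verification set
    return history, val_score
```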
s504: the CT image to be detected is input into the CT detection model, and the CT detection model outputs the name of each organ tissue in the CT image to be detected.
In this embodiment, the CT image to be detected is identified by the trained CT detection model. Because the model is trained on the training set of cropped organ tissue images with their corresponding name labels, it can identify the organ tissue in the CT image to be detected and, according to the identification result, divide the image into regions and label each organ tissue by name. When an organ tissue is selected in the CT detection model, the selected organ tissue region is enlarged relative to the unselected regions, so that the region to be examined carefully is highlighted while organ tissue regions irrelevant to the diagnosis are hidden or de-emphasized, effectively helping the doctor examine the possibly diseased region more easily and accurately.
In a possible embodiment, after the step of identifying each organ tissue in the CT image, the method further comprises:
s601: the CT detection model outputs the matching degree between each organ tissue in the CT image to be detected and the standard CT image; the higher the matching degree, the more similar the two are;
In this embodiment, the CT detection model outputs the matching degree between each organ tissue in the CT image to be detected and the standard CT image. The matching degree is measured by how many pixels of an organ tissue in the CT image to be detected are identical to the corresponding pixels of the standard CT image: the more identical pixels, the more similar the organ tissue is to the standard CT image, i.e. the higher the matching degree;
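The pixel-based matching degree described here can be written directly; equal image size and prior alignment of the organ region with the standard image are assumptions:

```python
import numpy as np

def matching_degree(organ_img, standard_img):
    """Matching degree as defined in this embodiment: the fraction of
    pixels that are identical between the organ tissue in the image to
    be detected and the standard CT image (assumed aligned, same shape)."""
    assert organ_img.shape == standard_img.shape
    return float(np.mean(organ_img == standard_img))
```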
s602: the matching degree comprises a plurality of ranges, specifically a first matching range, a second matching range and a third matching range, wherein the lower limit of the first matching range is greater than or equal to a first matching threshold; the upper limit of the second matching range is smaller than the first matching threshold and its lower limit is greater than or equal to a second matching threshold; and the upper limit of the third matching range is smaller than the second matching threshold;
In the present embodiment, the matching degree is divided into a plurality of ranges. By way of example, the first matching range may be set to [98%, 100%], the first matching threshold being 98%; the second matching range may be set to [90%, 98%), the second matching threshold being 90%; and the third matching range may be set to [0%, 90%).
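Using the example thresholds above (98% and 90%), mapping a matching degree to its range is a simple comparison chain:

```python
def match_range(degree, first_threshold=0.98, second_threshold=0.90):
    """Classify a matching degree into the three ranges of s602,
    using the example thresholds of this embodiment."""
    if degree >= first_threshold:
        return "first"    # [98%, 100%]
    if degree >= second_threshold:
        return "second"   # [90%, 98%)
    return "third"        # [0%, 90%)
```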
S603: each organ tissue in a different matching degree range is segmented by a different type of segmentation line, where the type of a segmentation line comprises its thickness, color and transparency.
In this embodiment, different matching ranges correspond to different types of segmentation lines for dividing the organ tissue. When the matching degree output by the CT detection model for an organ tissue region falls in the first matching range, the model has identified the organ tissue in the CT image to be identified as the same as in the standard CT image set, and that part can be viewed briefly during examination. When the output matching degree falls in the second or third matching range, the model has determined that the organ tissue in the CT image to be identified differs in part from the standard CT image set, and that organ tissue needs careful examination;
In this embodiment, the segmentation lines are used to divide different organ tissues, so different types of organ tissue need to be distinguished by segmentation lines of different types, i.e. of different thickness, color and transparency.
In a possible embodiment, segmenting each organ tissue in a different matching range by a different type of segmentation line further comprises segmenting adjacent organ tissues by different types of segmentation lines.
In this embodiment, it is further considered that if adjacent organ tissues fall in the same matching range, dividing them with the same segmentation line makes them hard to tell apart. Therefore, in addition to distinguishing different types of organ tissue by different types of segmentation lines, adjacent organ tissues are also distinguished by different line types. By way of example, if the segmentation line of the first matching range is red and that of the second matching range is green, adjacent organs within the same matching range are further distinguished by segmentation lines of different thicknesses.
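A sketch of the style-assignment rule of this embodiment: color follows the matching range (the red/green example above, with an assumed color for the third range), while thickness is varied among adjacent organs of the same range so that no two neighbours share an identical line style. The adjacency representation is an assumption:

```python
# Hypothetical color mapping; only red (first) and green (second) are given
# in the text, blue for the third range is an assumption.
RANGE_COLOR = {"first": "red", "second": "green", "third": "blue"}

def assign_line_styles(organs, adjacency):
    """organs: {organ name -> match range}; adjacency: {organ name -> neighbours}.
    Returns {organ name -> (color, thickness)} with color fixed by the range
    and thickness incremented until it differs from all already-assigned
    same-range neighbours."""
    styles = {}
    for name, rng in organs.items():
        used = {styles[n][1] for n in adjacency.get(name, [])
                if n in styles and organs[n] == rng}
        thickness = 1
        while thickness in used:
            thickness += 1
        styles[name] = (RANGE_COLOR[rng], thickness)
    return styles
```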
In a possible embodiment, after the step of identifying each organ tissue in the CT image, the method further comprises:
s701: dividing each organ tissue into regions;
s702: respectively identifying all areas of each organ tissue;
s703: and outputting the region of each organ tissue with the largest difference from the standard CT image set, and marking it.
It should be noted that the CT detection model outputs the matching degree of each organ tissue in the CT image to be detected relative to the standard CT image, and when examining an organ tissue with a low matching degree, the whole organ tissue would otherwise need to be checked;
In this embodiment, the organ tissue is further divided into regions, each region is identified against the corresponding region of the organ tissue in the standard CT image, and the region of each organ tissue with the largest difference from the standard CT image set is output. During examination this region can be checked with particular care, so effort is concentrated on the regions that diverge most from the standard CT image and the affected region is found more easily.
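The region division and difference scoring of s701–s703 can be sketched as follows; the regular grid split and the pixel-disagreement score are illustrative choices not fixed by the text:

```python
import numpy as np

def max_difference_region(organ_img, standard_img, grid=(2, 2)):
    """Split the organ image into a grid of regions (s701), score each
    region by pixel disagreement with the standard image (s702), and
    return the index and score of the most different region (s703)."""
    rows = np.array_split(np.arange(organ_img.shape[0]), grid[0])
    cols = np.array_split(np.arange(organ_img.shape[1]), grid[1])
    worst, worst_diff = None, -1.0
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            diff = float(np.mean(organ_img[np.ix_(r, c)] != standard_img[np.ix_(r, c)]))
            if diff > worst_diff:
                worst, worst_diff = (i, j), diff
    return worst, worst_diff
```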
In a possible embodiment, after the step of outputting the region of greatest difference from the standard CT image set, the method further comprises:
s801: collecting the region with the largest difference and the peripheral region of the region with the largest difference;
s802: dividing the region with the largest difference and the peripheral region of the region with the largest difference again;
s803: outputting the region with the largest difference between the current organ tissue and the standard CT image set;
s804: and repeating the steps S801-S803, and correcting the region with the largest difference to obtain the region with the largest difference of each organ tissue in the CT image to be detected relative to the standard CT image set.
It should be noted that in step S701 a first division is applied to the organ tissue, and each divided region is then identified and detected. When dividing, however, a diseased region may be split across two or more regions: although the diseased region differs greatly from the standard CT image, only the region containing the larger part of the diseased region is ultimately marked, while the part that falls into a neighbouring region goes unmarked;
In this embodiment, the region with the largest difference and its peripheral regions, i.e. the area where a diseased region may exist, are collected and divided again. Because the current division covers only the region with the largest difference and its surroundings, the area to be divided shrinks and can be divided more finely. Each region is identified again through step S803 to obtain the currently most different region, and the repetition of step S804 turns the single division into multiple divisions that correct and refine the region with the largest difference, so that the region finally obtained is more accurate.
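The repeated re-division of S801–S804 can be sketched as an iterative window refinement: at each step the current worst window plus a surrounding margin is kept and re-divided. The quad-split of the window and the margin parameter are illustrative assumptions standing in for the re-division described in the text:

```python
import numpy as np

def refine_max_difference(organ_img, standard_img, steps=3, pad=1):
    """Iteratively narrow down the most different area (S801-S804).
    Returns (row_start, row_end, col_start, col_end) of the final window."""
    diff = (organ_img != standard_img).astype(float)
    r0, r1, c0, c1 = 0, diff.shape[0], 0, diff.shape[1]
    for _ in range(steps):
        sub = diff[r0:r1, c0:c1]
        if sub.shape[0] <= 1 or sub.shape[1] <= 1:
            break
        # split the current window into 2x2 sub-regions, keep the worst one
        mr, mc = sub.shape[0] // 2, sub.shape[1] // 2
        quads = {
            (r0, r0 + mr, c0, c0 + mc): sub[:mr, :mc],
            (r0, r0 + mr, c0 + mc, c1): sub[:mr, mc:],
            (r0 + mr, r1, c0, c0 + mc): sub[mr:, :mc],
            (r0 + mr, r1, c0 + mc, c1): sub[mr:, mc:],
        }
        nr0, nr1, nc0, nc1 = max(quads, key=lambda k: float(np.mean(quads[k])))
        # expand by the peripheral margin before the next division (S801)
        r0, r1 = max(nr0 - pad, 0), min(nr1 + pad, diff.shape[0])
        c0, c1 = max(nc0 - pad, 0), min(nc1 + pad, diff.shape[1])
    return r0, r1, c0, c1
```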
In a second aspect of the present application, referring to fig. 4, there is provided a segmentation processing apparatus 900 for CT images, the apparatus comprising:
an image acquisition module 901: acquiring a CT image to be detected;
image recognition module 902: identifying each organ tissue in the CT image;
edge framing module 903: obtaining the edge of the organ tissue through an edge detection algorithm, and frame-selecting the edge of the organ tissue;
the image segmentation module 904: segmenting the frame-selected organ tissue, individually enlarging the segmented organ tissue once selected, and hiding the unselected part of the CT image.
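A minimal stand-in for the edge framing module 903, assuming the organ has already been segmented into a binary mask: a simple gradient test replaces a real edge detector such as Canny, and the frame is the bounding box of the edge pixels:

```python
import numpy as np

def edge_bounding_box(mask):
    """Find edge pixels of a binary organ mask with a gradient test and
    return the frame (row_min, row_max, col_min, col_max) around them.
    Returns None if the mask has no edges."""
    gy, gx = np.gradient(mask.astype(float))  # gradients along rows, cols
    edges = (np.abs(gx) + np.abs(gy)) > 0
    ys, xs = np.nonzero(edges)
    if ys.size == 0:
        return None
    return int(ys.min()), int(ys.max()), int(xs.min()), int(xs.max())
```

Note that central differences mark edge pixels one step outside the object boundary as well, so the frame slightly pads the organ, which suits frame selection.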
In a possible embodiment, the apparatus further comprises:
the existing image collection module: acquiring existing CT images, where the acquisition paths include a hospital database, the Internet, medical textbooks and medical literature;
and an image screening module: screening the existing CT images to obtain CT images of healthy organs and tissues;
standard image module: based on the healthy-state CT images, obtaining healthy-state CT images of each organ tissue, and further obtaining a standard CT image set for each organ tissue in the healthy state;
and an image comparison module: and comparing the CT image to be detected with the standard CT image set.
In a possible embodiment, the apparatus further comprises:
and the image marking module is used for: labeling each organ tissue based on the standard CT image set, wherein the labeling is the name of each organ tissue;
training set determination module: determining a standard CT image training set and a verification set of each organ tissue based on the labels and the standard CT image sets;
model training module: performing iterative training on a CT detection model according to the standard CT image training set of each organ tissue, evaluating by using the verification set, stopping training when the neural network model reaches the preset iterative training times, and deriving the CT detection model;
model output module: the CT detection model inputs CT images to be detected, and the CT detection model outputs names of organ tissues in the CT images to be detected.
In a possible embodiment, the apparatus further comprises:
and the matching degree output module is used for: the CT detection model outputs the matching degree of each organ tissue in the CT image to be detected and the standard CT image, and the higher the matching degree is, the more similar the CT image to be detected and the standard CT image are;
and the matching degree sorting module is used for: the matching degree comprises a plurality of ranges, and particularly comprises a first matching range, a second matching range and a third matching range, wherein the lower limit of the first matching range is larger than or equal to a first matching threshold value; the upper limit of the second matching range is smaller than the first matching threshold, and the lower limit of the second matching range is larger than or equal to the second matching threshold; the upper limit of the third matching range is smaller than a second matching threshold value;
an image segmentation sub-module: each organ tissue in different matching degree ranges is segmented through different types of segmentation lines, wherein the different types of segmentation lines comprise thickness, color and transparency of segmentation line lines.
In a possible embodiment, the apparatus further comprises:
adjacent image segmentation module: adjacent organ tissues are segmented by different types of segmentation lines.
In a possible embodiment, the apparatus further comprises:
region dividing module: dividing each organ tissue into regions;
region identification module: respectively identifying all areas of each organ tissue;
the difference recognition module: outputting the region of each organ tissue with the largest difference relative to the standard CT image set, and marking it.
In a possible embodiment, the apparatus further comprises:
region collection module: collecting the region with the largest difference and the peripheral region of the region with the largest difference;
region dividing sub-module: dividing the region with the largest difference and the peripheral region of the region with the largest difference again;
the difference recognition sub-module: outputting the region with the largest difference between the current organ tissue and the standard CT image set;
and a region correction module: and correcting the region with the largest difference to obtain a region with the largest difference of each organ tissue in the CT image to be detected relative to the standard CT image set.
It should be noted that, for the specific implementation of the CT image segmentation processing apparatus 900 in the embodiment of the present application, reference may be made to the specific implementation of the CT image segmentation processing method set forth in the first aspect of the embodiment of the present application, which is not repeated here.
In some embodiments, the computer readable storage medium may be an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM, or any of various devices including one of, or any combination of, the above memories. The computer may be any of various computing devices including smart terminals and servers.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a hypertext markup language (HTML, hyper Text Markup Language) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or, alternatively, distributed across multiple sites and interconnected by a communication network.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element introduced by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the article or apparatus that comprises the element.
The method and apparatus for segmenting CT images have been described in detail above, and specific examples have been used to illustrate the principles and embodiments of the present application; the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, as those skilled in the art may vary the specific embodiments and application scope according to the idea of the present application, the content of this specification should not be construed as limiting the present application.

Claims (4)

1. A segmentation processing method for a CT image, comprising:
acquiring a CT image to be detected;
acquiring existing CT images, where the acquisition paths include a hospital database, the Internet, medical textbooks and medical literature;
screening the existing CT images to obtain CT images of organ tissue in a healthy state;
based on the CT images of organ tissue in a healthy state, obtaining healthy-state CT images of each organ tissue, and further obtaining a standard CT image set for each organ tissue in the healthy state;
labeling each organ tissue based on the standard CT image set, wherein the labeling is the name of each organ tissue;
determining a standard CT image training set and a verification set of each organ tissue based on the labels and the standard CT image sets;
performing iterative training on a CT detection model according to the standard CT image training set of each organ tissue, evaluating with the verification set, stopping training when the neural network model reaches the preset number of training iterations, and exporting the CT detection model;
inputting the CT image to be detected into the CT detection model, the CT detection model outputting the name of each organ tissue in the CT image to be detected;
comparing the CT image to be detected with the standard CT image set;
identifying each organ tissue in the CT image;
the CT detection model outputs the matching degree between each organ tissue in the CT image to be detected and the standard CT image; the higher the matching degree, the more similar the two are;
the matching degree comprises a plurality of ranges, and particularly comprises a first matching range, a second matching range and a third matching range, wherein the lower limit of the first matching range is larger than or equal to a first matching threshold value; the upper limit of the second matching range is smaller than the first matching threshold, and the lower limit of the second matching range is larger than or equal to the second matching threshold; the upper limit of the third matching range is smaller than a second matching threshold value;
segmenting each organ tissue in a different matching degree range by a different type of segmentation line, wherein the type of a segmentation line comprises its thickness, color and transparency;
obtaining the edge of the organ tissue through an edge detection algorithm, and frame-selecting the edge of the organ tissue;
and segmenting the frame-selected organ tissue, individually enlarging the segmented organ tissue once selected, and hiding the unselected part of the CT image.
2. The segmentation processing method for a CT image according to claim 1, wherein segmenting each organ tissue in a different matching degree range by a different type of segmentation line further comprises segmenting adjacent organ tissues by different types of segmentation lines.
3. The segmentation processing method for a CT image according to claim 2, wherein after the step of identifying each organ tissue in the CT image, the method further comprises:
dividing each organ tissue into regions;
respectively identifying all areas of each organ tissue;
and outputting the region with the largest difference from the standard CT image set by each organ tissue, and marking.
4. The segmentation processing method according to claim 3, wherein after the step of outputting the region with the largest difference from the standard CT image set, the method further comprises:
s601: collecting the region with the largest difference and the peripheral region of the region with the largest difference;
s602: dividing the region with the largest difference and the peripheral region of the region with the largest difference again;
s603: outputting the region with the largest difference between the current organ tissue and the standard CT image set;
s604: and repeating the steps S601-S603, and correcting the region with the largest difference to obtain the region with the largest difference of each organ tissue in the CT image to be detected relative to the standard CT image set.
CN202310973299.8A 2023-08-04 2023-08-04 CT image segmentation processing method and device Active CN116681717B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310973299.8A CN116681717B (en) 2023-08-04 2023-08-04 CT image segmentation processing method and device


Publications (2)

Publication Number Publication Date
CN116681717A CN116681717A (en) 2023-09-01
CN116681717B true CN116681717B (en) 2023-11-28

Family

ID=87782318

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310973299.8A Active CN116681717B (en) 2023-08-04 2023-08-04 CT image segmentation processing method and device

Country Status (1)

Country Link
CN (1) CN116681717B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146899A (en) * 2018-08-28 2019-01-04 众安信息技术服务有限公司 CT image jeopardizes organ segmentation method and device
CN110097557A (en) * 2019-01-31 2019-08-06 卫宁健康科技集团股份有限公司 Automatic medical image segmentation method and system based on 3D-UNet
CN110223303A (en) * 2019-05-13 2019-09-10 清华大学 HE dyes organ pathological image dividing method, device
CN110232383A (en) * 2019-06-18 2019-09-13 湖南省华芯医疗器械有限公司 A kind of lesion image recognition methods and lesion image identifying system based on deep learning model
EP3644275A1 (en) * 2018-10-22 2020-04-29 Koninklijke Philips N.V. Predicting correctness of algorithmic segmentation
CN111738989A (en) * 2020-06-02 2020-10-02 北京全域医疗技术集团有限公司 Organ delineation method and device
CN112331311A (en) * 2020-11-06 2021-02-05 青岛海信医疗设备股份有限公司 Method and device for fusion display of video and preoperative model in laparoscopic surgery
KR102237198B1 (en) * 2020-06-05 2021-04-08 주식회사 딥노이드 Ai-based interpretation service system of medical image
CN113034522A (en) * 2021-04-01 2021-06-25 上海市第一人民医院 CT image segmentation method based on artificial neural network
CN113139948A (en) * 2021-04-28 2021-07-20 福建自贸试验区厦门片区Manteia数据科技有限公司 Organ contour line quality evaluation method, device and system
WO2022089221A1 (en) * 2020-10-30 2022-05-05 苏州瑞派宁科技有限公司 Medical image segmentation method and apparatus, and device, system and computer storage medium
CN114595972A (en) * 2022-03-09 2022-06-07 经智信息科技(山东)有限公司 Smart city management method applying virtual digital people
CN115393376A (en) * 2022-08-26 2022-11-25 北京联影智能影像技术研究院 Medical image processing method, medical image processing device, computer equipment and storage medium
CN115439486A (en) * 2022-05-27 2022-12-06 陕西科技大学 Semi-supervised organ tissue image segmentation method and system based on dual-countermeasure network
CN115775219A (en) * 2021-09-08 2023-03-10 上海微创卜算子医疗科技有限公司 Medical image segmentation method, system, electronic device, and medium
CN116206160A (en) * 2023-04-04 2023-06-02 江苏理工学院 Automatic identification network model and automatic sketching network model construction method for nasopharyngeal carcinoma lesion tissues based on convolutional neural network model
WO2023104464A1 (en) * 2021-12-08 2023-06-15 Koninklijke Philips N.V. Selecting training data for annotation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application of deep convolutional neural networks in image segmentation for radiotherapy planning; Deng Jincheng; Peng Yinglin; Liu Changchun; Chen Zijie; Lei Guosheng; Wu Jianghua; Zhang Guangshun; Deng Xiaowu; Chinese Journal of Medical Physics (06); full text *

Also Published As

Publication number Publication date
CN116681717A (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN113052795B (en) X-ray chest radiography image quality determination method and device
US8311296B2 (en) Voting in mammography processing
Sedghi Gamechi et al. Automated 3D segmentation and diameter measurement of the thoracic aorta on non-contrast enhanced CT
Schreuder et al. Artificial intelligence for detection and characterization of pulmonary nodules in lung cancer CT screening: ready for practice?
Alberdi et al. Effects of incorrect computer-aided detection (CAD) output on human decision-making in mammography
EA015959B1 (en) Method for brightness level calculation in the area of interest of the digital x-ray image for medical applications
KR20220155828A (en) Medical image analysis apparatus and method, medical image visualization apparatus and method
Roux et al. Fully automated opportunistic screening of vertebral fractures and osteoporosis on more than 150 000 routine computed tomography scans
Lakhani et al. Endotracheal tube position assessment on chest radiographs using deep learning
US20240112329A1 (en) Distinguishing a Disease State from a Non-Disease State in an Image
Samei et al. Automated characterization of perceptual quality of clinical chest radiographs: validation and calibration to observer preference
Oh et al. Reliable quality assurance of X-ray mammography scanner by evaluation the standard mammography phantom image using an interpretable deep learning model
CN116681717B (en) CT image segmentation processing method and device
JP2004213643A (en) Computer aided reconciliation method
Iqbal et al. AD-CAM: Enhancing Interpretability of Convolutional Neural Networks with a Lightweight Framework-From Black Box to Glass Box
CN111080625B (en) Training method and training device for lung image strip and rope detection model
CN116664580B (en) Multi-image hierarchical joint imaging method and device for CT images
CN117015799A (en) Detecting anomalies in x-ray images
CN116721045B (en) Method and device for fusing multiple CT images
Acri et al. A novel phantom and a dedicated developed software for image quality controls in x-ray intraoral devices
CN114742836B (en) Medical image processing method and device and computer equipment
Dahal et al. Virtual versus reality: external validation of COVID-19 classifiers using XCAT phantoms for chest radiography
Park et al. Devising a deep neural network based mammography phantom image filtering algorithm using images obtained under mAs and kVp control
Akogo A Standardized Radiograph-Agnostic Framework and Platform For Evaluating AI Radiological Systems
Mansourvar et al. Automatic method for bone age assessment based on combined method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant