CN116664580A - Multi-image hierarchical joint imaging method and device for CT images - Google Patents


Info

Publication number
CN116664580A
Authority
CN
China
Prior art keywords
image
organ tissue
identification
organ
cnn model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310960134.7A
Other languages
Chinese (zh)
Other versions
CN116664580B (en)
Inventor
王利辉
初蕊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingzhi Information Technology Shandong Co ltd
Original Assignee
Jingzhi Information Technology Shandong Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingzhi Information Technology Shandong Co ltd filed Critical Jingzhi Information Technology Shandong Co ltd
Priority to CN202310960134.7A priority Critical patent/CN116664580B/en
Publication of CN116664580A publication Critical patent/CN116664580A/en
Application granted granted Critical
Publication of CN116664580B publication Critical patent/CN116664580B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/776Validation; Performance evaluation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • G06V10/945User interactive design; Environments; Toolboxes
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Abstract

The application discloses a multi-image hierarchical joint imaging method and device for CT images, relating to the technical field of image processing. The method comprises the following steps: acquiring human body CT images, including a human body CT image bearing organ and tissue name labels and a CT image of a human body to be identified, where each CT image comprises tomographic images of a plurality of layers and the number of layers is determined from the layer thickness and the layer spacing; preprocessing the plurality of image layers of the CT image; forming a data set comprising the cropped organ tissue images and their corresponding name labels, dividing the data set into a training set and a validation set used to train a classification CNN model, and identifying each organ tissue in the CT image of the human body to be identified with the trained classification CNN model; and obtaining the identification result for each organ tissue in the CT image, partitioning the region of each organ tissue according to the identification result, and, when an organ tissue is selected in the identification result, highlighting the selected organ tissue region relative to the unselected organ tissue regions.

Description

Multi-image hierarchical joint imaging method and device for CT images
Technical Field
The application relates to the technical field of image processing, and in particular to a multi-image hierarchical joint imaging method and device for CT images.
Background
CT images are among the most common medical images. CT (Computed Tomography) uses finely collimated X-ray beams, gamma rays, ultrasonic waves, and the like, together with detectors of extremely high sensitivity, to scan cross-sections of a part of the human body one by one, producing a multi-layer CT image composed of a plurality of different sections. As detectors and imaging equipment continue to improve, the resulting CT images become clearer and more accurate, allowing doctors to make more precise disease diagnoses from the different sectional images.
At present, although CT image quality keeps improving, a large-area scan of a body part is usually performed in order to learn as much as possible about the patient's condition, which yields images of a large number of organs and tissues. Most of these can be judged normal after a simple check, but their presence makes it harder for doctors to examine the organs and tissues that may actually be diseased: doctors must rely on experience to delineate organ and tissue boundaries by hand in order to exclude irrelevant information and concentrate on the potentially diseased part, and some boundaries are difficult to recognize, demanding considerable effort and still risking recognition errors. A multi-image hierarchical joint imaging method and device for CT images is therefore needed that recognizes the organs and tissues in a plurality of layers, automatically delineates the boundary of each organ and tissue, highlights the organs and tissues that require careful examination, and hides or blurs those irrelevant to diagnosis, thereby effectively helping doctors examine the potentially diseased region more accurately.
Disclosure of Invention
The application aims, in view of the defects of the prior art, to provide a multi-image hierarchical joint imaging method and device for CT images that recognizes the organs and tissues in CT images and automatically delineates their boundaries, so that the organ and tissue regions requiring careful examination are highlighted while regions irrelevant to diagnosis are hidden or de-emphasized, effectively helping doctors examine potentially diseased regions more easily and accurately.
In order to achieve the above object, the present application provides the following technical solutions:
in a first aspect of the present application, there is provided a multi-image hierarchical joint imaging method for CT images, comprising:
acquiring human body CT images, wherein the CT images include a human body CT image bearing organ and tissue name labels and a CT image of a human body to be identified, each CT image comprising tomographic images of a plurality of layers, the number of layers being determined from the layer thickness and the layer spacing;

preprocessing the plurality of image layers of the human body CT image bearing the organ and tissue name labels;

forming a data set comprising the cropped organ tissue images and their corresponding name labels, dividing the data set into a training set and a validation set used to train a classification CNN model, and identifying each organ tissue in the CT image of the human body to be identified with the trained classification CNN model;

and obtaining the identification result for each organ tissue in the CT image, partitioning the region of each organ tissue according to the identification result, and, upon selection of an organ tissue in the identification result, highlighting the selected organ tissue region relative to the unselected organ tissue regions.
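The acquisition step above says the number of layers is determined from the layer thickness and the layer spacing but gives no formula. A minimal sketch, assuming a simple model in which consecutive slice positions are separated by thickness plus spacing (the function name and this pitch model are illustrative assumptions, not the patent's method):

```python
import math

def layer_count(scan_length_mm: float, layer_thickness_mm: float,
                layer_spacing_mm: float) -> int:
    """Number of tomographic layers covering the scan range.

    Assumes consecutive slices are separated by thickness + spacing
    (an illustrative model only; real scanner conventions vary).
    """
    pitch = layer_thickness_mm + layer_spacing_mm
    return math.ceil(scan_length_mm / pitch)

# A 300 mm scan range with 5 mm slices and 1 mm spacing gives 50 layers.
print(layer_count(300.0, 5.0, 1.0))  # 50
```

In practice these two quantities are carried in the scan metadata (in DICOM, the Slice Thickness and Spacing Between Slices attributes), from which the same count can be recovered.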
In some optional embodiments, the classification CNN model is used to identify each organ tissue in the CT image of each layer, and after identification is complete, cascade processing is performed on each identical organ tissue across all layers, so that when a doctor selects an organ tissue in one layer, the same organ tissue in the remaining layers is highlighted synchronously.
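The cascade processing described above amounts to grouping the per-layer recognition results by organ name so that a single selection fans out to every layer. A minimal sketch (the data shapes and function names are assumptions, not the patent's implementation):

```python
def build_cascade(recognitions):
    """Group per-layer recognition results by organ name.

    recognitions: list of (layer_index, organ_name) pairs, a hypothetical
    stand-in for the per-layer CNN output.  Returns organ -> layer list.
    """
    cascade = {}
    for layer, organ in recognitions:
        cascade.setdefault(organ, []).append(layer)
    return cascade

def select_organ(cascade, organ):
    """Selecting an organ in one layer highlights it in every layer."""
    return sorted(cascade.get(organ, []))

recs = [(0, "liver"), (0, "kidney"), (1, "liver"), (2, "liver")]
c = build_cascade(recs)
print(select_organ(c, "liver"))  # [0, 1, 2]
```

One click on "liver" in layer 0 thus yields the full list of layers whose liver regions should be highlighted together.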
In some optional embodiments, the preprocessing of the CT image includes, but is not limited to, cropping to a uniform size, noise reduction, resampling, and contrast enhancement.
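Two of the listed preprocessing steps, cropping to a uniform size and contrast enhancement, can be sketched on a plain 2-D grey-level image. These are pure-Python stand-ins for what an image library would normally do; the function names and parameters are illustrative:

```python
def contrast_stretch(img, out_max=255):
    """Linear min-max contrast enhancement on a 2-D grey-level image."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    if hi == lo:
        return [[0 for _ in row] for row in img]
    scale = out_max / (hi - lo)
    return [[round((p - lo) * scale) for p in row] for row in img]

def center_crop(img, size):
    """Crop a layer to a size x size window so all layers match."""
    h, w = len(img), len(img[0])
    top, left = (h - size) // 2, (w - size) // 2
    return [row[left:left + size] for row in img[top:top + size]]

print(contrast_stretch([[10, 20], [30, 40]]))  # [[0, 85], [170, 255]]
```

Applying both functions to every layer yields same-sized, contrast-normalized inputs for the classification CNN.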
In some optional embodiments, the classification CNN model is based on the ResNet50 network structure and specifically includes an independent convolution layer a, residual blocks, and a fully connected layer, wherein:

there are 4 residual blocks, each formed by stacking a number of identical basic residual blocks; the 4 residual blocks contain 3, 4, 6, and 4 basic residual blocks in sequence; each basic residual block comprises 3 convolution layers, specifically 2 convolution layers b and 1 convolution layer c arranged in the order convolution layer b, convolution layer c, convolution layer b, with a batch normalization layer and a ReLU activation layer following each convolution layer;

the independent convolution layer a is connected to the 4 residual blocks through a max pooling layer, adjacent residual blocks are joined by residual connections, and the last residual block is connected to the fully connected layer through an average pooling layer;

the output of the fully connected layer is a 2*1 vector representing the image recognition and classification result, which is finally converted by a Softmax layer into a prediction probability, a fraction between 0 and 1.
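The final Softmax step named above maps the 2*1 fully connected output to prediction probabilities between 0 and 1. A self-contained sketch of exactly that conversion:

```python
import math

def softmax(logits):
    """Convert the 2x1 fully connected output into prediction probabilities."""
    m = max(logits)                       # subtract max for numeric stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax([2.0, 0.0])
print(round(probs[0], 3))  # 0.881
```

The two components always sum to 1, so the first component alone can serve as the model's recognition probability for the positive class.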
In some optional embodiments, the CNN model identification result is passed through a Canny edge detection algorithm to extract the edge information of the organ tissue; the edge of each identified organ tissue is framed according to this edge information, and the identified organ tissue is given a name label.
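The patent specifies the Canny algorithm for edge extraction. A full Canny implementation (Gaussian smoothing, non-maximum suppression, hysteresis thresholding) is beyond a short sketch, but the core idea, thresholding the local intensity gradient, can be illustrated as follows (a deliberate simplification standing in for Canny, not Canny itself):

```python
def gradient_edges(img, thresh):
    """Mark pixels whose forward-difference gradient magnitude
    (|dx| + |dy|) meets the threshold.  Simplified stand-in for Canny."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            g = abs(img[y][x + 1] - img[y][x]) + abs(img[y + 1][x] - img[y][x])
            edges[y][x] = 1 if g >= thresh else 0
    return edges

img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
edges = gradient_edges(img, 5)
print(edges)
```

The resulting edge map traces the boundary of the bright square while leaving its interior and the background unmarked, which is the information used to frame the organ tissue.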
In some optional embodiments, the name label is located at the edge of the CT image and points to the corresponding organ tissue with an arrow; the doctor selects a region by clicking on the organ tissue region or its name label, and deselects it by clicking again.
In some optional embodiments, when the CNN model fails to identify a possibly diseased organ tissue region, the CNN model first applies a blank label to the unidentified region; a doctor fills in the blank label, reviews and corrects the organ tissue edges, and finally the supplemented and corrected CT image is returned to the CNN training and validation sets so that the CNN model itself is supplemented and corrected.
In some optional embodiments, highlighting an organ tissue includes increasing the brightness of the selected organ tissue relative to the unselected organ tissue, and may further include reducing the brightness of, blurring, or hiding the unselected organ tissue relative to the selected organ tissue.
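The brightness-based highlighting described above can be sketched as a per-pixel gain applied inside and outside the selected organ mask (the gain values, clipping ceiling, and names are illustrative assumptions):

```python
def highlight(img, mask, selected_gain=1.3, unselected_gain=0.5, max_val=255):
    """Brighten pixels inside the selected organ mask, dim the rest,
    clipping to the display range."""
    out = []
    for row_img, row_mask in zip(img, mask):
        out.append([
            min(max_val, round(p * (selected_gain if m else unselected_gain)))
            for p, m in zip(row_img, row_mask)
        ])
    return out

img  = [[100, 200], [100, 200]]
mask = [[1, 0], [0, 1]]
print(highlight(img, mask))  # [[130, 100], [50, 255]]
```

Setting `unselected_gain` to 0 would hide the unselected tissue entirely, matching the hiding variant mentioned in the claim.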
In some optional embodiments, the CNN model outputs an identification probability for each organ tissue region, and the range of identification probabilities is divided into a first identification range, a second identification range, and a third identification range: the lower limit of the first identification range is greater than or equal to a first identification threshold; the upper limit of the second identification range is smaller than the first identification threshold and its lower limit is greater than or equal to a second identification threshold; and the upper limit of the third identification range is smaller than the second identification threshold;

the organ tissue region identification result is pushed according to a push strategy based on which identification range the probability output by the CNN model falls into, comprising the following steps:
when the identification probability output by the CNN model for the organ tissue region falls in the first identification range, applying a name label to the organ tissue region and outputting a confirmed identification result for the organ tissue;

when the identification probability output by the CNN model for the organ tissue region falls in the second identification range, applying a name label to the organ tissue region and outputting a to-be-confirmed identification result for the organ tissue;

and when the identification probability output by the CNN model for the organ tissue region falls in the third identification range, applying a blank name label to the organ tissue region and outputting a notification requesting that the blank label be supplemented.
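The three-range push strategy above reduces to a pair of threshold comparisons. A sketch, with concrete threshold values chosen purely for illustration since the patent leaves the first and second identification thresholds unspecified:

```python
def push_result(prob, first_threshold=0.9, second_threshold=0.6):
    """Map an identification probability to a push action.

    Thresholds are illustrative assumptions; the patent only fixes the
    ordering of the ranges, not the numeric values.
    """
    if prob >= first_threshold:
        return "confirmed"        # first range: label and confirm
    if prob >= second_threshold:
        return "to_be_confirmed"  # second range: label, doctor must confirm
    return "blank"                # third range: blank label, request supplement

print([push_result(p) for p in (0.95, 0.7, 0.3)])
# ['confirmed', 'to_be_confirmed', 'blank']
```

Because the ranges are half-open at the thresholds, a probability exactly equal to a threshold falls into the higher range, matching the "greater than or equal to" wording of the claim.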
In a second aspect of the present application, there is provided a multi-image hierarchical joint imaging apparatus for CT images, the apparatus comprising:
and an image acquisition module: acquiring a human body CT image, wherein the CT image comprises a human body CT image with an organ and tissue name mark and a CT image of a human body to be identified;
an image processing module: preprocessing a plurality of image layers of the CT image of the human body with the organ tissue name marks to form a data set, wherein the data set comprises the intercepted organ tissue images and the corresponding name marks;
model training module: the data set is divided into a training set and a validation set, which are used to train the classification CNN model,
an image recognition module: identifying each organ tissue by utilizing the trained classification CNN model to identify the CT image of the human body;
And (3) identifying and displaying the module: and obtaining the identification result of each organ tissue in the CT image, dividing the region of each organ tissue according to the identification result, selecting the organ tissue in the identification result, and highlighting the selected organ tissue region relative to the unselected organ tissue region.
In some optional embodiments, after the image recognition module identifies each organ tissue in the CT image of the human body to be identified, the apparatus further includes:

a layer recognition module: identifying each organ tissue in the CT image of each layer using the classification CNN model;

a cascade processing module: performing, after identification is complete, cascade processing on the same organ tissues across all layers;

a cascade display module: the cascade processing enables the same organ tissue in the remaining layers to be highlighted synchronously when a doctor selects the organ tissue in one of the layers.
In some optional embodiments, the apparatus further comprises:

an edge framing module: passing the CNN model identification result through a Canny edge detection algorithm to extract the outer edge information of the organ tissues, framing the edges of the identified organ tissues according to the edge information, and applying name labels to the identified organ tissues.
In some optional embodiments, after the edge framing module applies name labels to the identified organ tissues, the apparatus further includes:

an edge labeling module: the name label is located at the edge of the CT image and points to the corresponding organ tissue with an arrow; a region is selected by clicking on the organ tissue region or the name label, and deselected by clicking again.
In some optional embodiments, the classification CNN model is based on the ResNet50 network structure and specifically includes an independent convolution layer a, residual blocks, and a fully connected layer, wherein:

there are 4 residual blocks, each formed by stacking a number of identical basic residual blocks; the 4 residual blocks contain 3, 4, 6, and 4 basic residual blocks in sequence; each basic residual block comprises 3 convolution layers, specifically 2 convolution layers b and 1 convolution layer c arranged in the order convolution layer b, convolution layer c, convolution layer b, with a batch normalization layer and a ReLU activation layer following each convolution layer;

the independent convolution layer a is connected to the 4 residual blocks through a max pooling layer, adjacent residual blocks are joined by residual connections, and the last residual block is connected to the fully connected layer through an average pooling layer;

the output of the fully connected layer is a 2*1 vector representing the image recognition and classification result, which is finally converted by a Softmax layer into a prediction probability, a fraction between 0 and 1.
In some optional embodiments, the apparatus further comprises:

a supplement and correction module: when the CNN model fails to identify a possibly diseased organ tissue region, the CNN model applies a blank label to the unidentified region; the blank label is filled in, the organ tissue edges are reviewed and corrected, and the supplemented and corrected CT image is returned to the CNN training and validation sets so that the CNN model itself is supplemented and corrected.
In some optional embodiments, highlighting an organ tissue includes increasing the brightness of the selected organ tissue relative to the unselected organ tissue, and may further include reducing the brightness of, blurring, or hiding the unselected organ tissue relative to the selected organ tissue.
In some optional embodiments, the apparatus further comprises an identification probability push module. The CNN model outputs an identification probability for each organ tissue region, and the range of identification probabilities includes a first identification range, a second identification range, and a third identification range: the lower limit of the first identification range is greater than or equal to a first identification threshold; the upper limit of the second identification range is smaller than the first identification threshold and its lower limit is greater than or equal to a second identification threshold; and the upper limit of the third identification range is smaller than the second identification threshold.

The identification probability push module pushes the organ tissue region identification result according to a push strategy based on which identification range the probability output by the CNN model falls into, and comprises:

a first information push unit: when the identification probability output by the CNN model for the organ tissue region falls in the first identification range, applying a name label to the organ tissue region and outputting a confirmed identification result for the organ tissue;

a second information push unit: when the identification probability output by the CNN model for the organ tissue region falls in the second identification range, applying a name label to the organ tissue region and outputting a to-be-confirmed identification result for the organ tissue;

a third information push unit: and when the identification probability output by the CNN model for the organ tissue region falls in the third identification range, applying a blank name label to the organ tissue region and outputting a notification requesting that the blank label be supplemented.
The application has the following beneficial effects:
In the embodiments of the application, an existing CT image with name labels is acquired and converted into a data set, a CT image of a human body to be identified is acquired, and the CT image to be identified is matched against the data set by a CNN model. After identification, the organ tissues in the CT image to be identified are partitioned into regions; a doctor selects an organ tissue in the CT image so that the selected organ tissue region is highlighted relative to the unselected regions, which effectively helps doctors examine potentially diseased regions more easily and accurately while avoiding the subjective discrimination errors that arise when doctors identify organ tissues by eye. At the same time, the same organ tissue in all layers is cascade-processed, so that when a doctor selects an organ tissue in one layer the same organ tissue in the other layers is highlighted synchronously; when the doctor switches between layers to examine the same organ tissue region, this saves the time of re-selecting the highlight, and when all layers are shown in the same display region the selected region is highlighted synchronously in all of them, which is more convenient for the doctor. Finally, labeling of the identified CT image is completed: the doctor supplements the blank labels of regions the CNN model could not identify, reviews and corrects the organ tissue edges, and the supplemented and corrected CT image is returned to the CNN training and validation sets so that the CNN model is supplemented and corrected. This makes CT image identification more accurate, and suspected diseases possibly present in a CT image to be identified may even be flagged directly according to the doctor's name labels and disease annotations.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram of an electronic device in a hardware operating environment according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a system architecture according to an embodiment of the application.
Fig. 3 is a flowchart illustrating a multi-image hierarchical imaging method of CT images according to an embodiment of the present application.
Fig. 4 is a schematic functional block diagram of a multi-image hierarchical imaging device for CT images according to an embodiment of the present application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will become more readily apparent, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. It will be apparent that the described embodiments are some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The scheme of the application is further described below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device in a hardware running environment according to an embodiment of the present application.
As shown in fig. 1, the electronic device may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable communication between these components. The user interface 1003 may include a display and an input unit such as a keyboard, and may optionally further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed random access memory (Random Access Memory, RAM) or a stable nonvolatile memory (Non-Volatile Memory, NVM), such as a disk memory. The memory 1005 may optionally also be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the structure shown in fig. 1 does not limit the electronic device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in fig. 1, the memory 1005, as one type of storage medium, may include an operating system, a data storage module, a network communication module, a user interface module, and an electronic program.
In the electronic device shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with a user. The electronic device invokes, through the processor 1001, the multi-image hierarchical joint imaging apparatus for CT images stored in the memory 1005 and executes the multi-image hierarchical joint imaging method for CT images provided by the embodiments of the present application.
Referring to fig. 2, a system architecture diagram of an embodiment of the present application is shown. As shown in fig. 2, the system architecture may include a first device 201, a second device 202, a third device 203, a fourth device 204, and a network 205, where the network 205 serves as a medium providing communication links between the first device 201, the second device 202, the third device 203, and the fourth device 204. The network 205 may include various connection types, such as wired links, wireless communication links, or fiber optic cables.
In this embodiment, the first device 201, the second device 202, the third device 203, and the fourth device 204 may be hardware devices or software that support network connection to provide various network services. When a device is hardware, it may be any of a variety of electronic devices, including but not limited to smartphones, tablets, laptop computers, desktop computers, and servers; such a hardware device may be realized either as a distributed device group composed of multiple devices or as a single device. When a device is software, it can be installed in any of the devices listed above and may be implemented as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module. The present invention is not particularly limited herein.
In a specific implementation, the device may provide the corresponding network service by installing a corresponding client application or server application. After the device has installed the client application, it may be embodied as a client in network communication. Accordingly, after the server application is installed, it may be embodied as a server in network communications.
As an example, in fig. 2 the first device 201 is embodied as a server, and the second device 202, the third device 203, and the fourth device 204 are embodied as clients. Specifically, the second device 202, the third device 203, and the fourth device 204 may be clients installed with an information-browsing application, and the first device 201 may be the background server of that application. It should be noted that the multi-image hierarchical imaging method for CT images provided by the embodiments of the present application may be performed by the first device 201.
It should be understood that the number of networks and devices in fig. 2 is merely illustrative. There may be any number of networks and devices as desired for an implementation.
Referring to fig. 3, based on the foregoing hardware running environment and system architecture, an embodiment of the present application provides a multi-image hierarchical imaging method of CT images, which specifically may include the following steps:
S301: acquiring human body CT images, where the CT images include human body CT images with organ tissue name identifiers and a CT image of a human body to be identified;
It should be noted that CT is short for computed tomography (electronic computer X-ray tomography), a disease-detection technique. A CT examination measures the human body with a highly sensitive instrument, exploiting the differing X-ray absorption and transmittance of different human tissues; the measured data are then input into a computer, which processes them to produce cross-sectional or three-dimensional images of the examined body part, i.e., CT images. A CT image is a tomographic image, commonly a cross section. To display a whole organ, multiple consecutive tomographic layers, i.e., multiple CT images, are needed; a reconstruction technique then yields a three-dimensional CT image that can reveal tiny lesions anywhere in the human body;

It should be noted that, thanks to many years of information storage, every large hospital now holds a large amount of CT image data, and CT images of each organ tissue can also be downloaded from the Internet. Organ tissue imaged at different layer thicknesses presents different appearances, so CT images of different layers of each organ tissue are also included; these images do not yet carry organ tissue name identifiers;

In this embodiment, human body CT images with organ tissue name identifiers and a CT image of a human body to be identified are obtained. Both comprise a plurality of image layers, each CT image containing the tomographic scan area images of those layers, and the number of image layers of a CT image is determined by the layer thickness and the layer spacing;
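As an illustration of how the number of image layers follows from the scan range and the layer spacing, the following minimal sketch (a hypothetical helper, not part of the disclosed method) computes the slice count:

```python
import math

def slice_count(scan_length_mm, layer_spacing_mm):
    """Number of tomographic layers needed to cover a scan range,
    given the spacing between adjacent slice centres."""
    if layer_spacing_mm <= 0:
        raise ValueError("layer spacing must be positive")
    # One slice at the start of the range, plus one per spacing step.
    return math.floor(scan_length_mm / layer_spacing_mm) + 1

# A 300 mm scan range with 5 mm spacing yields 61 layers.
print(slice_count(300, 5))
```

A thinner layer spacing increases the layer count and hence the number of CT images available per organ tissue.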
S302: preprocessing the plurality of image layers of the human body CT images with organ tissue name identifiers to form a dataset, where the dataset comprises the cropped organ tissue images and their corresponding name identifiers;

In this embodiment, the plurality of image layers of the existing human body CT images with organ tissue name identifiers are preprocessed by cropping out each individual organ tissue image together with the name identifier corresponding to it, forming a dataset for training the CNN model in the subsequent steps;
S303: dividing the data set into a training set and a verification set, wherein the training set and the verification set are used for training a classification CNN model, and identifying each organ tissue by utilizing the trained classification CNN model;
It should be noted that both the training set and the validation set serve the learning process of a neural network model: the training set is used to train the model parameters, while the validation set is used to check the generalization performance of the trained final model. The classification CNN model is a convolutional neural network (Convolutional Neural Networks, CNN), a feedforward neural network (Feedforward Neural Networks) with a deep structure that performs convolutional computation, and is one of the representative algorithms of deep learning. A classification CNN model generally includes an input layer, convolutional layers, fully connected layers, and an output layer;

In this embodiment, the training set is used for iterative training of the classification CNN model and the validation set for its evaluation. Training stops when the model reaches a preset number of training iterations and/or the loss value of the target samples converges below a preset loss threshold; the trained classification CNN model is then exported and used to identify each organ tissue in the CT image of the human body to be identified;
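The stopping criterion described above (a preset number of iterations and/or loss convergence below a threshold) can be sketched as follows; the function and the simulated loss values are illustrative assumptions, not the actual training code:

```python
def train_with_early_stopping(loss_per_epoch, max_epochs, loss_threshold):
    """Run at most max_epochs training iterations, stopping early once
    the validation loss converges below loss_threshold (as in step S303)."""
    history = []
    for epoch, loss in enumerate(loss_per_epoch, start=1):
        history.append(loss)
        # Stop on either criterion: iteration budget reached, or loss converged.
        if epoch >= max_epochs or loss <= loss_threshold:
            break
    return history

# Simulated validation losses; training halts at the first loss <= 0.05.
losses = [0.9, 0.4, 0.2, 0.08, 0.04, 0.03]
print(train_with_early_stopping(losses, max_epochs=100, loss_threshold=0.05))
```

In a real pipeline the per-epoch loss would come from evaluating the classification CNN model on the validation set rather than from a precomputed list.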
S304: obtaining the identification result of each organ tissue in the CT image, dividing the CT image into organ tissue regions according to the identification result, and, upon selection of an organ tissue in the identification result, highlighting the selected organ tissue region relative to the unselected organ tissue regions.

In this embodiment, the classification CNN model is trained on the training set, which is derived from the dataset of cropped organ tissue images and their corresponding name identifiers, so the model can identify the organ tissues in the CT image under examination and divide it into organ tissue regions according to the identification result. When an organ tissue in the result is selected, the selected organ tissue region is highlighted relative to the unselected regions: the organ tissue region that needs careful inspection is emphasized while regions irrelevant to the diagnosis are hidden or blurred, effectively helping doctors check a possibly diseased region more easily and accurately.
In a possible embodiment, the step S303 performs identification of each organ tissue on the CT image of the human body to be identified, and the method further includes:
S401: identifying each organ tissue by utilizing the classification CNN model to the CT image of each layer;
in this embodiment, as in step S303, the identification of each organ tissue is performed on the CT image of each layer by the classification CNN model;
S402: after identification, performing cascade processing on the same organ tissue across all the layers;
It should be noted that cascade processing is used to model one-to-many relationships. For example, table A (name, gender, age) stores teacher information with name as the primary key, and table B (name, class) stores the classes each teacher teaches; the two tables are linked by name. Cascade operations include cascade update and cascade delete. With cascade update enabled, changing a primary key value automatically updates all matching foreign key values: if the record named Zhang San is renamed Li Si in table A, every record named Zhang San in table B becomes Li Si. Cascade delete behaves analogously: deleting the record named Zhang San from table A deletes all records named Zhang San from table B. Cascade processing can also display associated data, such as related database records, in a hierarchical view (as in the database examples of an instruction manual), where the user expands or collapses levels of the hierarchy by clicking the expand and collapse icons (plus and minus signs). Taking the form software Spread as an example of a form cascade: the data are displayed hierarchically by first creating a dataset to store the related data, then defining the interrelationships between the data, and finally configuring a Spread control to display the data in the desired manner, after which the cell types, colors, titles, and other appearance settings of the sub-form are customized.
In this embodiment, after identification is completed, cascade processing is performed on the same organ tissue across all the layers, so that the same organ tissue in every layer of the CT image to be identified is correlated;

S403: the cascade processing enables the same organ tissue in the remaining layers to be synchronously highlighted when that organ tissue is selected in any one layer.

In this embodiment, the cascade processing correlates the same organ tissue across all layers of the CT image to be identified: when the organ tissue is selected in one layer, the same organ tissue in the remaining layers is synchronously highlighted. Thus when the doctor selects an organ tissue in one layer and then switches to other layers to view the same organ tissue region, no time is wasted re-selecting it for highlighting; and when all layers are shown in the same display region, the selected region is highlighted synchronously in every layer, making the doctor's review more convenient.
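A minimal sketch of the cascade bookkeeping described in steps S402 and S403; the class name, method names, and bounding-box format are illustrative assumptions:

```python
class OrganCascade:
    """Links regions of the same organ across CT layers so that selecting
    the organ in any one layer highlights it in all layers (step S403)."""

    def __init__(self):
        self._regions = {}  # organ name -> {layer index: region bounding box}

    def register(self, organ, layer, bbox):
        """Record that `organ` occupies `bbox` in slice `layer` (step S402)."""
        self._regions.setdefault(organ, {})[layer] = bbox

    def select(self, organ):
        """Return every (layer, region) pair to highlight simultaneously."""
        return sorted(self._regions.get(organ, {}).items())

cascade = OrganCascade()
cascade.register("liver", 3, (40, 60, 120, 150))
cascade.register("liver", 4, (42, 58, 122, 148))
cascade.register("kidney", 4, (10, 20, 40, 55))

# Clicking the liver in layer 3 highlights it in layers 3 and 4.
print(cascade.select("liver"))
```

The display layer would consume the returned pairs to brighten the selected regions and dim, blur, or hide the rest, as described for step S304.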
In a possible implementation, the CNN model recognition result is passed through a Canny edge detection algorithm to extract the outer edge information of the organ tissue; the edge of the recognized organ tissue is then framed according to this edge information, and the recognized organ tissue is given a name identifier.
It should be noted that the Canny edge detection algorithm is a multi-step algorithm for detecting the edges of any input image.
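For illustration only, the sketch below implements just the gradient-magnitude-and-threshold step that Canny builds on; a full Canny detector additionally performs Gaussian smoothing, non-maximum suppression, and hysteresis thresholding:

```python
def gradient_edges(img, threshold):
    """Simplified sketch of Canny's gradient step: Sobel gradient
    magnitude followed by a single threshold."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Sobel responses in the horizontal and vertical directions.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                edges[y][x] = 1
    return edges

# Vertical step edge: dark left half, bright right half.
img = [[0, 0, 0, 255, 255, 255] for _ in range(6)]
edges = gradient_edges(img, threshold=255)
print(sum(sum(row) for row in edges))  # edge pixels flank the step
```

In practice an optimized library implementation would be used; the point here is only that thresholded gradient magnitude localizes the organ tissue boundary used for framing.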
In a possible embodiment, the framing the identified edge of the organ tissue according to the edge information further includes:
S501: checking the edges framed for the organ tissue;

S502: according to the check result, if the organ tissue edges are correct, choosing not to modify them; if the edges are incorrect, choosing to modify them;

S503: returning the CT images whose edge information has been corrected to the CNN training set and validation set, so that the CNN model's recognition of organ tissue edges is corrected.

In this embodiment, a check of the framed organ tissue edges is added, making the edge recognition of organ tissue in CT images more accurate.
In a possible embodiment, the name identifier is located at the edge of the CT image and points to the corresponding organ tissue with an arrow; the doctor selects a region by clicking the organ tissue region or its name identifier, and cancels the selection by clicking again.
In one possible embodiment, when the CNN model fails to identify the organ tissue region,
S601: the CNN model places a blank identifier on each region it fails to identify;

It should be noted that because organ tissues suffer different disorders, their appearance in CT images can vary greatly at different stages of development, so the CNN model may fail to identify a diseased organ tissue. Moreover, the organ tissue of a small number of people, or of injured patients, may differ from that of a normal person, e.g., malformed or partially missing organ tissue, which the CNN model may likewise fail to identify;

In this embodiment, the CNN model places a blank identifier on each region that is not identified;

S602: supplementing the blank identifiers with identifier content;

S603: returning the CT images whose identifiers have been supplemented to the training set and validation set, so that the CNN model obtains identifier supplementation for these organ tissues.

In this embodiment, organ tissues the CNN model cannot identify are marked with blank identifiers, the blank identifiers are supplemented, and the supplemented CT images are finally returned to the training set and validation set to augment the CNN model. Organ tissues with different appearances can thereby be added to the CNN model, making organ tissue recognition in CT images more accurate; a suspected disorder possibly present in the CT image of the human body to be identified can even be recognized directly from the doctor's name identifiers and disorder-identifier content.
In a possible embodiment, the CNN model outputs an identification probability of the organ tissue region, the range of identification probabilities including a first identification range, a second identification range, and a third identification range; the lower limit of the first identification range is larger than or equal to a first identification threshold value; the upper limit of the second identification range is smaller than the first identification threshold, and the lower limit of the second identification range is larger than or equal to the second identification threshold; the upper limit of the third recognition range is smaller than a second recognition threshold;
Based on the pushing strategy corresponding to the identification probability range that the CNN model outputs for the organ tissue region, the identification result of the organ tissue region is pushed as follows:
When the identification probability output by the CNN model for the organ tissue region is in the first identification range, name identification is performed on the organ tissue region and a confirmed identification result for the organ tissue is output;

In this embodiment, different identification probability ranges correspond to different pushing strategies. When the identification probability output by the CNN model for the organ tissue region falls in the first identification range, the classification CNN model can confirm that the organ tissue in the CT image to be identified is the same as an organ tissue in the human body CT images with organ tissue name identifiers. As an example, the first identification range may be set to [95%, 100%], with a first identification threshold of 95%;
When the identification probability output by the CNN model for the organ tissue region is in the second identification range, name identification is performed on the organ tissue region and a to-be-confirmed identification result for the organ tissue is output;

In this embodiment, when the identification probability output by the CNN model for the organ tissue region falls in the second identification range, the classification CNN model judges that the organ tissue in the CT image to be identified is substantially the same as an organ tissue in the human body CT images with organ tissue name identifiers, but further confirmation is required. As an example, the second identification range may be set to [80%, 95%), with a second identification threshold of 80%;
When the identification probability output by the CNN model for the organ tissue region is in the third identification range, blank name identification is performed on the organ tissue region and a notification to supplement the blank identifier is output.

In this embodiment, when the identification probability output by the CNN model for the organ tissue region falls in the third identification range, the classification CNN model cannot determine that the organ tissue in the CT image to be identified matches any organ tissue in the human body CT images with organ tissue name identifiers; the organ tissue region is therefore given a blank name identifier, and a notification to supplement the blank identifier is output. As an example, the third identification range may be set to [0%, 80%).
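The three-range pushing strategy above can be condensed into a single dispatch function; the threshold values follow the exemplary ranges in the text, and the returned strings are illustrative placeholders:

```python
def push_strategy(probability, first_threshold=0.95, second_threshold=0.80):
    """Map the CNN's identification probability for an organ tissue
    region to one of the three push strategies. Defaults reflect the
    exemplary ranges [95%,100%], [80%,95%), and [0%,80%)."""
    if probability >= first_threshold:
        return "name identifier + confirmed result"
    if probability >= second_threshold:
        return "name identifier + result to be confirmed"
    return "blank identifier + supplement notification"

print(push_strategy(0.97))  # first identification range
print(push_strategy(0.85))  # second identification range
print(push_strategy(0.40))  # third identification range
```

Because the ranges are half-open at their lower thresholds, a probability exactly equal to 95% or 80% falls into the higher range, matching the "greater than or equal to" wording of the threshold definitions.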
In one possible embodiment, the preprocessing of the CT image includes, but is not limited to, cropping to the same size, noise reduction, resampling, and contrast enhancement.
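As a toy illustration of two of the listed preprocessing operations (cropping to a common size and contrast enhancement), assuming a slice represented as a list of pixel rows:

```python
def preprocess(slice_px, out_h, out_w):
    """Toy preprocessing pass: centre-crop a slice to a common size,
    then stretch its contrast to the full 0-255 range. A real pipeline
    would also apply noise reduction and resampling."""
    h, w = len(slice_px), len(slice_px[0])
    top, left = (h - out_h) // 2, (w - out_w) // 2
    crop = [row[left:left + out_w] for row in slice_px[top:top + out_h]]
    lo = min(min(r) for r in crop)
    hi = max(max(r) for r in crop)
    scale = 255 / (hi - lo) if hi > lo else 0  # guard against flat crops
    return [[round((v - lo) * scale) for v in r] for r in crop]

raw = [[10, 20, 30, 40],
       [20, 30, 40, 50],
       [30, 40, 50, 60],
       [40, 50, 60, 70]]
print(preprocess(raw, 2, 2))
```

The min-max stretch is the simplest form of contrast enhancement; CT workflows more commonly window on Hounsfield units, which this sketch does not model.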
In a possible implementation, the classification CNN model is built on the network structure of ResNet50 and specifically includes: an independent convolutional layer a, residual blocks, and a fully connected layer, wherein:

there are 4 residual blocks, each formed by stacking several identical basic residual blocks; the 4 residual blocks contain 3, 4, 6, and 4 basic residual blocks in turn. Each basic residual block comprises 3 convolutional layers, specifically 2 convolutional layers b and 1 convolutional layer c arranged in the order b, c, b, and each convolutional layer is followed by a batch normalization layer and a ReLU activation layer;

the independent convolutional layer a is connected to the 4 residual blocks through a max pooling layer; adjacent residual blocks adopt residual connections, and the last residual block is connected to the fully connected layer through an average pooling layer;

the output of the fully connected layer is a 2×1 vector representing the image recognition and classification result, and this vector is finally converted into a prediction probability through a Softmax layer; the prediction probability is a fraction between 0 and 1.
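The Softmax conversion of the 2×1 fully connected output into prediction probabilities can be written as:

```python
import math

def softmax(logits):
    """Convert the fully connected layer's 2x1 output vector into class
    probabilities that sum to 1, each a fraction between 0 and 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 0.5])
print(probs)
print(sum(probs))  # sums to 1 up to floating-point error
```

The larger logit always yields the larger probability, so the class decision is unchanged by the transformation; Softmax only rescales the scores into the [0, 1] range used by the identification probability thresholds.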
In a second aspect of the present application, referring to fig. 4, there is provided a multi-image hierarchical imaging apparatus 400 for CT images, the apparatus 400 including:
image acquisition module 401: acquiring a human body CT image, wherein the CT image comprises a human body CT image with an organ and tissue name mark and a CT image of a human body to be identified;
image processing module 402: preprocessing a plurality of image layers of the CT image of the human body with the organ tissue name marks to form a data set, wherein the data set comprises the intercepted organ tissue images and the corresponding name marks;
model training module 403: dividing the data set into a training set and a verification set, wherein the training set and the verification set are used for training a classification CNN model, and identifying each organ tissue by utilizing the trained classification CNN model;
the identification display module 404: and obtaining the identification result of each organ tissue in the CT image, dividing the region of each organ tissue according to the identification result, selecting the organ tissue in the identification result, and highlighting the selected organ tissue region relative to the unselected organ tissue region.
In some optional embodiments, after the image recognition module recognizes each organ tissue of the CT image of the human body to be recognized, the apparatus further includes:
Layer identification module: identifying each organ tissue by utilizing the classification CNN model to the CT image of each layer;
and the cascade processing module is used for: after identification, carrying out cascade treatment on the same organ tissues in all the layers;
and a cascade display module: the cascading process enables the same organ tissue in the remaining layers to be synchronously highlighted when a physician selects the organ tissue in one of the layers.
In some alternative embodiments, further comprising:
and (3) an edge frame selection module: and extracting the outer edge information of the organ tissues by a Canny edge detection algorithm through which the CNN model identification result passes, carrying out frame selection on the edges of the identified organ tissues according to the edge information, and carrying out name identification on the identified organ tissues.
In some optional embodiments, after the edge selection module performs name identification on the identified organ tissue, the method further includes:
edge identification module: the name mark is positioned at the edge of the CT image, points to the corresponding organ tissue through an arrow, performs region selection by clicking the organ tissue region or the name mark, and cancels the selection by clicking again.
In some alternative embodiments, the classification CNN model is built on the network structure of ResNet50 and specifically includes: an independent convolutional layer a, residual blocks, and a fully connected layer, wherein:

there are 4 residual blocks, each formed by stacking several identical basic residual blocks; the 4 residual blocks contain 3, 4, 6, and 4 basic residual blocks in turn. Each basic residual block comprises 3 convolutional layers, specifically 2 convolutional layers b and 1 convolutional layer c arranged in the order b, c, b, and each convolutional layer is followed by a batch normalization layer and a ReLU activation layer;

the independent convolutional layer a is connected to the 4 residual blocks through a max pooling layer; adjacent residual blocks adopt residual connections, and the last residual block is connected to the fully connected layer through an average pooling layer;

the output of the fully connected layer is a 2×1 vector representing the image recognition and classification result, and this vector is finally converted into a prediction probability through a Softmax layer; the prediction probability is a fraction between 0 and 1.
In some alternative embodiments, further comprising:
and a supplementary correction module: when the CNN model fails to identify a diseased organ tissue region, the CNN model performs blank identification on the region which is not identified, supplements the blank identification, rechecks and modifies the edge of the organ tissue, and returns the supplemented and modified CT image to the CNN training set and the verification set to obtain CNN model supplementation and modification.
In some alternative embodiments, the organ tissue highlighting includes a brightness enhancement treatment of the selected organ tissue relative to the unselected organ tissue, and the organ tissue highlighting further includes a brightness reduction treatment or a blurring treatment or a hiding treatment of the unselected organ tissue relative to the selected organ tissue.
In some optional embodiments, the method further comprises an identification probability pushing module, wherein the identification probability pushing module outputs identification probabilities of organ tissue areas on the CNN model, and the range of the identification probabilities comprises a first identification range, a second identification range and a third identification range; the lower limit of the first identification range is larger than or equal to a first identification threshold value; the upper limit of the second identification range is smaller than the first identification threshold, and the lower limit of the second identification range is larger than or equal to the second identification threshold; the upper limit of the third recognition range is smaller than a second recognition threshold value:
the identification probability pushing module outputs a pushing strategy for the identification result of the organ tissue region based on the identification probability range corresponding to the CNN model, and pushes the identification result of the organ tissue region, and the identification probability pushing module comprises:
A first information pushing unit: when the identification probability of the CNN model output to the organ tissue region is in the first identification range, carrying out name identification on the organ tissue region, and outputting a confirmation identification result of the organ tissue;
the second information pushing unit: when the identification probability of the CNN model output to the organ tissue region is in the second identification range, carrying out name identification on the organ tissue region, and outputting a recognition result to be confirmed of the organ tissue;
a third information pushing unit: and when the identification probability of the CNN model output to the organ tissue region is in the third identification range, performing blank name identification on the organ tissue region, and outputting a notification for performing identification supplement on the blank identification.
It should be noted that, for the specific implementation of the multi-image hierarchical imaging apparatus 400 for CT images, reference may be made to the embodiments of the multi-image hierarchical imaging method for CT images in the first aspect of the embodiments of the present application; details are not repeated here.
In some embodiments, the computer-readable storage medium may be an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM, or any device including one of or any combination of the above memories. The computer may be any of a variety of computing devices, including smart terminals and servers.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a hypertext markup language (HTML, hyper Text Markup Language) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or, alternatively, distributed across multiple sites and interconnected by a communication network.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that an article or apparatus comprising a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the article or apparatus comprising that element.
The above provides a detailed description of a multi-image hierarchical imaging method and apparatus for CT images, applying specific examples to illustrate the principles and embodiments of the present application; the description of the embodiments above is only meant to help understand the method of the present application and its core idea. Meanwhile, since those skilled in the art will vary the specific embodiments and application scope according to the idea of the present application, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A multi-map hierarchical imaging method for CT images, comprising:
Acquiring a human body CT image, wherein the CT image comprises a human body CT image with an organ and tissue name mark and a CT image of a human body to be identified;
preprocessing a plurality of image layers of the CT image of the human body with the organ tissue name marks to form a data set, wherein the data set comprises the intercepted organ tissue images and the corresponding name marks;
dividing the data set into a training set and a verification set, wherein the training set and the verification set are used for training a classification CNN model, and identifying each organ tissue by utilizing the trained classification CNN model;
and obtaining the identification result of each organ tissue in the CT image, dividing the region of each organ tissue according to the identification result, selecting the organ tissue in the identification result, and highlighting the selected organ tissue region relative to the unselected organ tissue region.
2. A multi-image hierarchical imaging method of CT images according to claim 1, wherein said identifying of each organ tissue is performed on CT images of a human to be identified, said method comprising:
identifying each organ tissue by utilizing the classification CNN model to the CT image of each layer;
after identification, carrying out cascade treatment on the same organ tissues in all the layers;
The cascading process enables the same organ tissue in the remaining layers to be synchronously highlighted when a physician selects the organ tissue in one of the layers.
3. The multi-image hierarchical imaging method of a CT image according to claim 2, wherein the CNN model recognition result is passed through a Canny edge detection algorithm to extract the outer edge information of the organ tissue, the edge of the recognized organ tissue is framed according to the edge information, and name identification is performed on the recognized organ tissue.
4. The multi-image hierarchical joint imaging method for CT images according to claim 3, wherein framing the edge of the recognized organ tissue according to the edge information further comprises:
reviewing the framed organ tissue edges;
according to the review result, leaving the organ tissue edges unmodified if they are correct, and modifying them if they are incorrect;
and returning the CT images with corrected edge information to the CNN training and verification sets, so that the CNN model learns the corrected organ tissue edges.
5. The multi-image hierarchical joint imaging method for CT images according to claim 4, wherein the name label is located at the edge of the CT image and points to the corresponding organ tissue with an arrow; the physician selects a region by clicking the organ tissue region or its name label, and deselects it by clicking again.
6. The multi-image hierarchical joint imaging method for CT images according to any one of claims 1-5, wherein, when the CNN model fails to identify an organ tissue region,
the CNN model applies a blank label to the unidentified region;
the blank label is supplemented with an identification;
and the CT images with supplemented labels are returned to the CNN training and verification sets, so that the CNN model learns the supplemented organ tissue identifications.
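The blank-label-and-supplement loop of claim 6 can be sketched as bookkeeping over recognition results. The region identifiers, the `BLANK` sentinel, and the dict-based stand-in for the CNN output are all assumptions for the sketch:

```python
BLANK = "__blank__"

def label_regions(regions, recognizer):
    """Apply the recognizer; regions it cannot identify receive a blank label."""
    return {rid: recognizer.get(rid, BLANK) for rid in regions}

def supplement_blanks(labels, corrections, training_set):
    """Fill blank labels from manual corrections and return them to the training set."""
    for rid, name in labels.items():
        if name == BLANK and rid in corrections:
            labels[rid] = corrections[rid]
            training_set.append((rid, corrections[rid]))  # the model re-learns from it
    return labels

recognizer = {"r1": "liver", "r2": "kidney"}          # simulated CNN output
labels = label_regions(["r1", "r2", "r3"], recognizer)  # r3 gets a blank label
train = []
labels = supplement_blanks(labels, {"r3": "spleen"}, train)
```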
7. The multi-image hierarchical joint imaging method for CT images according to claim 6, wherein the CNN model outputs an identification probability for each organ tissue region, and the range of the identification probability comprises a first identification range, a second identification range and a third identification range; the lower limit of the first identification range is greater than or equal to a first identification threshold; the upper limit of the second identification range is smaller than the first identification threshold, and its lower limit is greater than or equal to a second identification threshold; the upper limit of the third identification range is smaller than the second identification threshold;
pushing the identification result of an organ tissue region according to the push strategy corresponding to the identification probability range output by the CNN model comprises the following steps:
when the identification probability output by the CNN model for the organ tissue region falls in the first identification range, labeling the organ tissue region with its name and outputting a confirmed identification result for the organ tissue;
when the identification probability output by the CNN model for the organ tissue region falls in the second identification range, labeling the organ tissue region with its name and outputting an identification result of the organ tissue to be confirmed;
and when the identification probability output by the CNN model for the organ tissue region falls in the third identification range, applying a blank name label to the organ tissue region and outputting a notification requesting that the blank label be supplemented.
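The three-range push strategy of claim 7 reduces to two thresholds. The sketch below uses illustrative values 0.9 and 0.5 for the first and second identification thresholds; the claim itself does not fix concrete numbers:

```python
def push_result(prob, name, t1=0.9, t2=0.5):
    """Map a CNN identification probability onto the three-range push strategy."""
    if prob >= t1:           # first identification range: confirmed result
        return ("confirmed", name)
    if prob >= t2:           # second identification range: labeled, confirmation needed
        return ("to_confirm", name)
    return ("blank", None)   # third identification range: blank label, supplement requested

result_hi = push_result(0.95, "liver")    # confirmed identification
result_mid = push_result(0.7, "kidney")   # identification to be confirmed
result_lo = push_result(0.2, "spleen")    # blank label, supplement notification
```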
8. A multi-image hierarchical joint imaging apparatus for CT images, comprising:
an image acquisition module: acquiring human body CT images, wherein the CT images comprise human body CT images bearing organ tissue name labels and CT images of a human body to be identified;
an image processing module: preprocessing a plurality of image layers of the human body CT images bearing organ tissue name labels to form a data set, wherein the data set comprises the cropped organ tissue images and their corresponding name labels;
a model training module: dividing the data set into a training set and a verification set, which are used to train a classification CNN model, and identifying each organ tissue using the trained classification CNN model;
and an identification and display module: obtaining the identification result of each organ tissue in the CT image, dividing the CT image into organ tissue regions according to the identification result, selecting an organ tissue from the identification result, and highlighting the selected organ tissue region relative to the unselected organ tissue regions.
9. A multi-image hierarchical joint imaging electronic device for CT images, comprising a processor and a memory, the memory being configured to store a computer program which, when executed by the processor, causes the electronic device to perform the multi-image hierarchical joint imaging method for CT images according to any one of claims 1-7.
10. A readable storage medium for multi-image hierarchical joint imaging of CT images, having stored thereon a program or instructions which, when executed by a processor, implement the steps of the multi-image hierarchical joint imaging method for CT images according to any one of claims 1-7.
CN202310960134.7A 2023-08-02 2023-08-02 Multi-image hierarchical joint imaging method and device for CT images Active CN116664580B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310960134.7A CN116664580B (en) 2023-08-02 2023-08-02 Multi-image hierarchical joint imaging method and device for CT images


Publications (2)

Publication Number Publication Date
CN116664580A true CN116664580A (en) 2023-08-29
CN116664580B CN116664580B (en) 2023-11-28

Family

ID=87724659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310960134.7A Active CN116664580B (en) 2023-08-02 2023-08-02 Multi-image hierarchical joint imaging method and device for CT images

Country Status (1)

Country Link
CN (1) CN116664580B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097944A (en) * 2019-04-11 2019-08-06 山东大学齐鲁医院 A kind of display regulation method and system of human organ model
CN110223300A (en) * 2019-06-13 2019-09-10 北京理工大学 CT image abdominal multivisceral organ dividing method and device
CN110232383A (en) * 2019-06-18 2019-09-13 湖南省华芯医疗器械有限公司 A kind of lesion image recognition methods and lesion image identifying system based on deep learning model
CN110522516A (en) * 2019-09-23 2019-12-03 杭州师范大学 A kind of multi-level interactive visual method for surgical navigational
CN112634246A (en) * 2020-12-28 2021-04-09 深圳市人工智能与机器人研究院 Oral cavity image identification method and related equipment
KR20210073622A (en) * 2019-12-09 2021-06-21 시너지에이아이 주식회사 Method and apparatus for measuring volume of organ using artificial neural network
CN115439486A (en) * 2022-05-27 2022-12-06 陕西科技大学 Semi-supervised organ tissue image segmentation method and system based on dual-countermeasure network
CN115861150A (en) * 2021-09-23 2023-03-28 上海微创卜算子医疗科技有限公司 Segmentation model training method, medical image segmentation method, electronic device, and medium
CN116128887A (en) * 2021-11-08 2023-05-16 上海微创卜算子医疗科技有限公司 Target organ tissue region of interest positioning method, electronic device and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHANG LIU et al.: "Reliable Automatic Organ Segmentation from CT Images using Deep CNN", IEEE, pages 368-374 *
ZHOU Yuncheng et al.: "Tomato key organ recognition method based on dual-convolution-chain Fast R-CNN", Journal of Shenyang Agricultural University (沈阳农业大学学报), no. 01 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant