CN115063397A - Computer-aided image analysis method, computer device and storage medium - Google Patents

Computer-aided image analysis method, computer device and storage medium Download PDF

Info

Publication number
CN115063397A
Authority
CN
China
Prior art keywords
image
medical image
morphological structure
target
target medical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210823460.9A
Other languages
Chinese (zh)
Inventor
曹晓欢
薛忠
高菲菲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd filed Critical Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202210823460.9A priority Critical patent/CN115063397A/en
Publication of CN115063397A publication Critical patent/CN115063397A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Abstract

The present application relates to a computer-aided image analysis method, a computer device, and a storage medium. The method comprises the following steps: performing image recognition on a target medical image, and determining a part or a morphological structure included in the target medical image; matching and calling an analysis algorithm corresponding to the part or the morphological structure based on the part or the morphological structure; and analyzing the target medical image according to the analysis algorithm to obtain an analysis result. The method enables fully automated and intelligent computer-aided image analysis.

Description

Computer-aided image analysis method, computer device and storage medium
This application is a divisional application of Chinese patent application No. 201910650614.7, filed on July 18, 2019 and entitled "Computer-aided image analysis method, computer device and storage medium".
Technical Field
The present application relates to the field of computer-aided system technologies, and in particular, to a computer-aided image analysis method, a computer device, and a storage medium.
Background
Computer-aided image analysis refers to a technology that assists in finding lesions by combining computer analysis and calculation with imaging, medical image processing techniques, and other possible physiological and biochemical means. Because the human body can be roughly divided into three major regions, namely the head and neck, the chest and lungs, and the abdominal and pelvic cavities, existing computer-aided image analysis systems are usually developed and designed for specific parts; if the body parts covered by an image cannot be confirmed, the related analysis algorithms cannot be called. For example, in lung nodule screening software, it is necessary to determine whether an image is a chest/lung image before calling the lung nodule detection algorithm; if the algorithm is called blindly without this determination, problems such as invalid detection results, wasted computation time, and misleading clinical diagnosis and treatment may arise.
In the existing computer-aided image analysis workflow, the body part covered by an image is usually identified in one of two ways. The first is manual judgment by the user: the user's judgment is relatively accurate, and the user can then apply different auxiliary analysis algorithms to different examined parts. However, even though many image analysis algorithms have been developed toward intelligence and automation, manual work cannot be avoided in this case, so complete automation and intelligence cannot be achieved, and when a large amount of image data needs to be processed, efficiency drops sharply. The second is automatic reading of the DICOM (Digital Imaging and Communications in Medicine) header file. The DICOM header contains body-part information, so in principle it can be read and parsed to identify the body part automatically. However, due to differences in culture and language, the information in DICOM headers does not follow a unified standard, which makes accurate identification of the header information difficult; relying directly on DICOM information therefore causes interference in the actual workflow.
Disclosure of Invention
In view of the above, it is necessary to provide a computer-aided image analysis method, a computer device and a storage medium that are free of the above interference and enable automated analysis.
A method of computer-aided image analysis, the method comprising:
performing image recognition on a target medical image, and determining a part or a morphological structure included in the target medical image;
matching and calling an analysis algorithm corresponding to the part or the morphological structure based on the part or the morphological structure;
and analyzing the target medical image according to the analysis algorithm to obtain an analysis result.
In one embodiment, the step of performing image recognition on the target medical image and determining a part or a morphological structure included in the target medical image includes:
performing image recognition on the target medical image by using a neural network, and determining a part or a morphological structure included in the target medical image; and the neural network is obtained by training according to the labeled medical image.
In one embodiment, the labeled medical image is obtained according to a predetermined standard image.
In one embodiment, the labeled medical image is obtained according to a preset standard image, and the method includes:
acquiring a preset standard image, and determining the number of a divided part on the standard image;
determining the number of cross-sectional images included in a training sample;
calculating based on the numbers of the divided parts and the number of the transverse sectional images to respectively obtain the numbers of the transverse sectional images;
and labeling each transverse sectional image in the training sample based on the serial number of each transverse sectional image to obtain a labeled medical image.
In one embodiment, the step of matching and calling an analysis algorithm corresponding to the part or morphological structure based on the part or morphological structure includes:
determining whether the site or morphological structure comprises a target site or a target morphological structure;
when it is determined that the portion or morphological structure comprises a target portion or target morphological structure, invoking an analysis algorithm corresponding to the target portion or target morphological structure.
In one embodiment, analyzing the target medical image according to the analysis algorithm to obtain an analysis result includes:
determining a part or a morphological structure included in each transverse sectional image in the target medical image;
acquiring an analysis algorithm corresponding to a part or a morphological structure included in each transverse sectional image;
and respectively inputting each transverse sectional image into a corresponding analysis algorithm so as to analyze the transverse sectional images by utilizing the corresponding analysis algorithm to obtain an analysis result.
In one embodiment, the target medical image includes at least two layers of cross-sectional images including an identification layer and an adjacent layer corresponding to the identification layer.
In one embodiment, the at least two layers of transverse slice images are continuous three layers of transverse slice images, including an identification layer and an upper adjacent layer and a lower adjacent layer corresponding to the identification layer.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the computer-aided image analysis method of any one of the above when executing the computer program.
A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements a computer-aided image analysis method as recited in any one of the above.
According to the computer-aided image analysis method, the computer device and the storage medium, image recognition is performed on the target medical image to determine the part or morphological structure it includes, so the medical image does not need to be judged manually, and the part or morphological structure can be obtained without reading the header file of the medical image. An analysis algorithm corresponding to the part or morphological structure is matched and called on the basis of that part or morphological structure, and the target medical image is analyzed according to the analysis algorithm, so the analysis algorithm is invoked automatically according to the recognized part or morphological structure to complete the auxiliary analysis, realizing fully automated and intelligent computer-aided image analysis.
Drawings
FIG. 1 is a diagram of an exemplary computer-aided image analysis system;
FIG. 2 is a flowchart illustrating a method for computer-aided image analysis according to an embodiment;
FIG. 3 is a flow diagram illustrating the processing of image recognition in one embodiment;
FIG. 4 is a diagram illustrating the numbering of the divided portions in one embodiment;
FIG. 5 is a flow diagram illustrating the steps of a method for matching and invoking an analysis algorithm corresponding to a site or morphological structure based on the site or morphological structure in one embodiment;
FIG. 6 is a schematic flow chart illustrating the steps of a method for matching and invoking an analysis algorithm corresponding to a site or morphological structure based on the site or morphological structure in another embodiment;
FIG. 7 is a block diagram of an exemplary computer-aided image analysis apparatus;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The computer-aided image analysis method provided by the present application can be applied to the application environment shown in fig. 1, in which the medical scanning device 102 communicates with the server 104 over a network. The medical scanning device 102 acquires a raw medical image, and the server 104 extracts a target medical image from it. The server 104 performs image recognition on the target medical image and determines the part or morphological structure included in it, matches and invokes the analysis algorithm corresponding to that part or morphological structure, and analyzes the target medical image according to the analysis algorithm to obtain an analysis result. The medical scanning device 102 may be, but is not limited to, a single-modality CT (Computed Tomography), PET (Positron Emission Tomography) or MRI (Magnetic Resonance Imaging) device, or a multi-modality PET/CT or PET/MR device. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a computer-aided image analysis method is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
step S202, image recognition is carried out on the target medical image, and a part or a morphological structure included in the target medical image is determined.
The target medical image refers to the medical image on which image recognition is performed, including but not limited to a CT image or an MRI image, and is obtained from an original image. A part can be understood as a human body part, such as a foot or a hand; a morphological structure can be understood as a human organ structure, such as a lung structure. In this embodiment, the target medical image includes at least two layers of medical images, namely an identification layer and an adjacent layer corresponding to the identification layer. The adjacent layers are the upper and lower layers adjacent to the identification layer, i.e., the target medical image can be understood as comprising three layers in total.
Specifically, the medical scanning device scans a target scanning object to obtain an original medical image corresponding to the target scanning object, where the original medical image is 3D volume data. The medical scanning equipment sends the original medical image to the server. When the server receives the original medical image, at least two layers of medical images are extracted from the original medical image to be used as a target medical image. That is, a layer to be identified is extracted from an original medical image of 3D volume data as an identification layer, a corresponding adjacent layer of the identification layer is extracted, and the identification layer and the adjacent layer are used as a target medical image. And performing image recognition on the target medical image to obtain a part or a morphological structure included in the target medical image.
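As an illustrative sketch only (the axis convention, function name, and boundary handling below are assumptions made for the example, not features defined in this application), extracting the identification layer together with its upper and lower adjacent layers from the 3D volume data might look like this in Python:

```python
import numpy as np

def extract_target_image(volume: np.ndarray, layer_index: int) -> np.ndarray:
    """Stack the identification layer and its upper/lower adjacent layers
    from a 3D volume whose cross-sectional layers lie along axis 0."""
    # Clamp at the volume boundaries so the first and last layers still yield three slices.
    upper = max(layer_index - 1, 0)
    lower = min(layer_index + 1, volume.shape[0] - 1)
    return np.stack([volume[upper], volume[layer_index], volume[lower]], axis=0)

# Example: use layer 40 of a 64-layer scan as the identification layer.
volume = np.random.rand(64, 512, 512).astype(np.float32)  # stand-in for real scan data
target_image = extract_target_image(volume, layer_index=40)
print(target_image.shape)  # (3, 512, 512)
```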
And step S204, matching and calling an analysis algorithm corresponding to the part or the morphological structure based on the part or the morphological structure.
The analysis algorithm is a medical image analysis algorithm, and refers to an algorithm for analyzing a part or a morphological structure included in a medical image. It is understood that different regions or morphological structures have different analysis algorithms.
Specifically, after the server determines a part or a morphological structure included in the target medical image by performing image recognition on the target medical image, an analysis algorithm corresponding to the part or the morphological structure is matched from the medical image analysis algorithm, and the matched analysis algorithm is called at the same time.
And S206, analyzing the target medical image according to an analysis algorithm to obtain an analysis result.
Specifically, after an analysis algorithm corresponding to the part or the morphological structure is called, the analysis algorithm is used to analyze the target medical image corresponding to the part or the morphological structure, and an analysis result is obtained.
In one embodiment, analyzing the target medical image according to an analysis algorithm to obtain an analysis result specifically includes: determining the part or morphological structure included by each transverse sectional image in the target medical image; acquiring an analysis algorithm corresponding to the part or the morphological structure included in each transverse sectional image; and inputting each transverse-layer image into a corresponding analysis algorithm respectively, and analyzing each transverse-layer image by using the corresponding analysis algorithm to obtain an analysis result.
Specifically, the target medical image includes a plurality of cross-sectional images, and different cross-sectional images may be input into different analysis algorithms. First, the part or morphological structure included in each cross-sectional image of the target medical image is determined, and the analysis algorithm corresponding to that part or morphological structure is obtained; in other words, the analysis algorithm for each cross-sectional image is determined through the part or morphological structure it contains. Once the corresponding analysis algorithm is determined, the cross-sectional image is input into it for analysis.
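One possible way to organize this per-slice matching and calling is a registry that maps each recognized part to its analysis algorithm, as in the sketch below; the part names and analysis functions are placeholders, not algorithms defined in this application:

```python
# Placeholder analysis functions; real algorithms (e.g. lung nodule detection)
# would be registered here.
def detect_lung_nodules(slice_image):
    return {"finding": "lung analysis result"}

def analyze_abdomen(slice_image):
    return {"finding": "abdominal analysis result"}

# Registry mapping a recognized part or morphological structure to its analysis algorithm.
ANALYSIS_ALGORITHMS = {
    "chest_lung": detect_lung_nodules,
    "abdominopelvic": analyze_abdomen,
}

def analyze_slices(slices, recognized_parts):
    """Route every cross-sectional image to the algorithm matching its part."""
    results = []
    for slice_image, part in zip(slices, recognized_parts):
        algorithm = ANALYSIS_ALGORITHMS.get(part)
        if algorithm is not None:  # skip parts with no registered algorithm
            results.append(algorithm(slice_image))
    return results
```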
According to the computer-aided image analysis method, image recognition is performed on the target medical image to determine the part or morphological structure it includes, so the medical image does not need to be judged manually, and the part or morphological structure can be obtained without reading the header file of the medical image. An analysis algorithm corresponding to the part or morphological structure is matched and called on the basis of that part or morphological structure, and the target medical image is analyzed according to the analysis algorithm, so the analysis algorithm is invoked automatically according to the recognized part or morphological structure to complete the auxiliary analysis, realizing fully automated and intelligent computer-aided image analysis.
In one embodiment, the image recognition of the target medical image and the determination of the portion or the morphological structure included in the target medical image specifically include: and carrying out image recognition on the target medical image by utilizing the neural network, and determining a part or a morphological structure included in the target medical image. Wherein, the neural network is obtained by training according to the labeled medical image.
The labeled medical image is the medical image labeled with a label, and the label can be understood as any one or more of a number, a name and a character.
Specifically, taking numbers as an example, the neural network maps the part or morphological structure in the target medical image to a number, and the number corresponding to that part or morphological structure is thus obtained; the part or morphological structure included in the target medical image can then be determined from the number. For example, if the number a was used to represent the lung structure when training the neural network, then when the neural network maps the part or morphological structure in the target medical image and obtains the number a, it is determined that the target medical image includes the lung structure. If the neural network is denoted by B, the input target medical image is denoted by (V_{i-1}, V_i, V_{i+1}), where (V_{i-1}, V_i, V_{i+1}) represents three consecutive cross-sectional images, V_i is the identification layer, and V_{i-1} and V_{i+1} are the two adjacent layers, and the label is denoted by N, the number mapping relationship can be expressed as:
N = B(V_{i-1}, V_i, V_{i+1})
when the target medical image comprises a plurality of transverse sectional images, the neural network is used for carrying out image recognition on each transverse sectional image in the target medical image, the number corresponding to each transverse sectional image is obtained respectively, and the part or the morphological structure included in each transverse sectional image can be determined according to the number.
Furthermore, before the target medical image is input into the neural network, it is preprocessed in order to ensure the quality of the input image: irrelevant information is removed, useful real information is recovered, the detectability of relevant information is enhanced, and the data are simplified as much as possible. Preprocessing includes, but is not limited to, smoothing, median filtering, enhancement, denoising, and normalization of the image. In addition, when a neural network is used for image recognition, the network must be trained in advance on labeled medical images so that it can recognize images. It can be understood that the neural network used for image recognition of the target medical image is a network that has been trained in advance and has the function of recognizing the target medical image. If numbers are used as labels, medical images labeled with numbers in advance are used for training. The structure of the neural network includes, but is not limited to, a CNN (Convolutional Neural Network) structure, an FCN (Fully Convolutional Network) structure, a VGG (Visual Geometry Group Network) structure, and the like.
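A minimal preprocessing sketch is shown below; the specific filters, kernel sizes, and min-max normalization are assumptions for illustration, since the embodiment only lists smoothing, median filtering, enhancement, denoising, and normalization as examples:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def preprocess_slice(slice_image: np.ndarray) -> np.ndarray:
    """Illustrative preprocessing before feeding a slice to the network."""
    smoothed = gaussian_filter(slice_image.astype(np.float32), sigma=1.0)  # smoothing
    denoised = median_filter(smoothed, size=3)                             # median filtering / denoising
    lo, hi = denoised.min(), denoised.max()
    return (denoised - lo) / (hi - lo + 1e-8)                              # normalization to [0, 1]
```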
In one embodiment, referring to fig. 3, a flow chart of the image recognition process is provided. In this embodiment, the neural network is preferably a CNN (Convolutional Neural Network), and the labels are preferably numbers. Specifically, the target medical image consisting of three consecutive cross-sectional images is preprocessed, and the preprocessed image is input into the convolutional neural network. The convolutional neural network performs convolution, pooling, and other operations on the preprocessed target medical image, maps it to the number corresponding to the part or morphological structure it contains, and outputs that number. Post-processing and judgment are then carried out according to the number to obtain the final result, i.e., the part or morphological structure included in the preprocessed target medical image.
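The application does not fix a particular architecture beyond naming CNN, FCN, and VGG as options; the PyTorch sketch below only illustrates the idea of mapping three stacked adjacent slices to a single regressed number, and all layer sizes are arbitrary assumptions:

```python
import torch
import torch.nn as nn

class PartNumberCNN(nn.Module):
    """Toy CNN mapping three stacked adjacent slices to one number N."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # regresses the part/structure number

    def forward(self, x):               # x: (batch, 3, H, W)
        f = self.features(x).flatten(1)
        return self.head(f).squeeze(1)  # one predicted number per sample

# Inference on one preprocessed three-slice input (V_{i-1}, V_i, V_{i+1}).
model = PartNumberCNN()
x = torch.rand(1, 3, 128, 128)
predicted_number = model(x)
```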
In one embodiment, the neural network training process includes: first, a training sample is obtained, where the training sample is a set of medical images used for training, and labeled medical images are obtained by labeling the training sample. The neural network is then trained iteratively on the labeled medical images with a preset loss function until the model is stable, i.e., until the loss function converges, at which point training of the neural network is complete. The medical images used for training in the training sample contain different numbers of cross-sectional images. During training, the labeled medical image input to the neural network also comprises an identification layer and the adjacent layers corresponding to the identification layer. It can be understood that the labeled medical images contain various numbers of cross-sectional images; during training, any cross-sectional image and the cross-sectional images adjacent to it are input into the neural network together, i.e., three cross-sectional layers with that layer as the middle layer are used. Consecutive images preserve the continuity of human anatomical information, so the obtained results are smoother and more accurate. Compared with the traditional use of a single image, three consecutive layers add three-dimensional semantic information, improve the discrimination of similar human structures, and achieve a more accurate and robust recognition effect through semantic perception. In this embodiment, the mean square error (MSE) is preferably used as the training loss function.
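A training-loop sketch using the MSE loss preferred in this embodiment is given below; it assumes the PartNumberCNN sketch above and a data loader yielding (three-slice tensor, number label) pairs, both of which are illustrative stand-ins rather than components specified by the application:

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=10, lr=1e-3):
    """Iterate until the loss stabilizes; a fixed epoch count stands in for convergence."""
    criterion = nn.MSELoss()  # mean square error, as preferred in this embodiment
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for slices, numbers in loader:  # slices: (B, 3, H, W), numbers: (B,)
            optimizer.zero_grad()
            loss = criterion(model(slices), numbers.float())
            loss.backward()
            optimizer.step()
    return model
```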
In one embodiment, obtaining the labeled medical image according to the preset standard image specifically includes: acquiring a preset standard image, and determining the number of each divided part on the standard image; determining the number of cross-sectional images included in a training sample; calculating based on the numbers of the divided parts and the number of the cross-sectional images to respectively obtain the number of each cross-sectional image; and labeling each cross-sectional image in the training sample based on its number to obtain a labeled medical image.
The standard image is a medical image labeled with numbers, and the labeled parts or morphological structures in the standard image are in an average state. Suppose the entire human body structure is to be recognized; then the labeled medical images used for training need to include the whole body structure, and correspondingly a standard image covering the whole body structure is needed. The human body structure in the standard image should be typical, i.e., the parts or morphological structures in the standard image should be in an average state. After the human body structure in the standard image has been labeled, the other training medical images are labeled according to the labeled standard image. That is, the training samples are labeled according to the labeled standard image; if the training samples include multiple training medical images, all of them are labeled according to the labeled standard image.
Specifically, referring to fig. 4, the labeling of a standard image covering the whole body structure is described, again taking numbers as the labels. According to the degree of fineness actually required, the human body structure can be divided into N parts, such as head, neck, chest, abdomen and the like, i.e., the human body structure is marked with numbers 0 to N. For example, the numbers of the human body structure run from 0 to 4 when the body is divided into 4 parts such as the head and neck, the chest and lungs, the abdominopelvic cavity, and other parts, and from 0 to 7 when it is divided into 7 parts such as the head, neck, chest, lungs, abdomen, pelvic cavity, and other parts. It follows that when N is large the division of the human body structure is finer, and when N is small it is coarser. The value of N can be set according to the actual situation, for example N = 100 or N = 200. After the human body structure in the standard image has been numbered according to the chosen N, the medical images used for training are numbered according to the numbers marked in the standard image, and the labeled images obtained in this way can be input to the neural network for training.
For example, again taking numbers as the labels, assume the standard image divides the human body structure into three regions: the head and neck, the chest and lungs, and the abdominopelvic cavity. That is, the human body structure comprises the head and neck R_1, the chest and lungs R_2, the abdominopelvic cavity R_3, and other unmentioned parts R_4; the parts within each region are then subdivided and numbered. The head and neck R_1 is divided into N_1 parts numbered 0 to N_1; the chest and lungs R_2 is divided into N_2 parts numbered N_1 to N_1 + N_2; the abdominopelvic cavity R_3 is divided into N_3 parts numbered N_1 + N_2 to N_1 + N_2 + N_3. With N_1 = 30, N_2 = 30, and N_3 = 40, the total number N is 100, i.e., N_1 + N_2 + N_3 = N. Thus the numbers assigned to the head and neck R_1 range from 0 to 30, and any number recognized by the neural network within 0 to 30 indicates the head and neck. Similarly, the numbers assigned to the chest and lungs R_2 range from 30 to 60, and any recognized number within 30 to 60 indicates the chest and lungs; the numbers assigned to the abdominopelvic cavity R_3 range from 60 to 100, and any recognized number within 60 to 100 indicates the abdominopelvic cavity. After the human body structure is divided into regions in this way, each region is finely subdivided because the length proportions of the human body parts are not completely consistent among different individuals; adopting this piecewise linear method therefore increases the generalization of the algorithm.
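Under the example ranges above (head and neck 0-30, chest and lungs 30-60, abdominopelvic cavity 60-100), mapping a recognized number back to its region can be sketched as follows; the range boundaries and region names are taken from the example, not fixed by the application:

```python
REGION_RANGES = [
    (0.0, 30.0, "head_neck"),
    (30.0, 60.0, "chest_lung"),
    (60.0, 100.0, "abdominopelvic"),
]

def number_to_region(n: float) -> str:
    """Map a number predicted by the network to the body region it falls in."""
    for low, high, region in REGION_RANGES:
        if low <= n < high:
            return region
    return "other"

print(number_to_region(42.7))  # -> "chest_lung"
```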
When the training sample is labeled according to the standard image, the number of cross-sectional images included in the training sample is first determined; depending on the scanning detector, a scan can generally be divided into 32, 64, 128 layers, and so on. Then, the number of each cross-sectional image in the training sample is calculated from the numbers of the divided parts in the preset standard and the number of cross-sectional images included in the medical image. Assuming the body structure covers M layers of cross-sectional images, the number n of any layer m (0 < m ≤ M) can be calculated by the following formula:
n = m × N / M
Similarly, taking the head and neck R_1 alone as an example, i.e., labeling only the head and neck R_1: assume the head and neck R_1 also comprises 64 layers of cross-sectional images and, based on the standard image, has been assigned 30 numbers, i.e., N_1 = 30, so N in the above formula is N_1. The formula then gives the numbers of the 64 cross-sectional images of the head and neck R_1 as n = 0.46875, 0.9375, 1.40625, and so on for all 64 values, which are not listed here. As the formula shows, the resulting number n is generally not an integer unless the number of cross-sectional images equals the total number N of assigned numbers. Besides not being integers, numbers that are close to each other may indicate the same part or structure. Therefore the 64 numbers are merged according to a preset rule, such as rounding, so that similar numbers represent the same part or morphological structure; this design simplifies the labeling process and allows diversified data to be used for training the neural network at the same time. For instance, the numbers 0.9375 and 1.40625 indicate the same part or morphological structure under the set rule, i.e., 1.40625 is equivalent to 0.9375, and both are equivalent to the number 1; in other words, the cross-sectional image numbered 1.40625 corresponds to the same part or morphological structure as the one numbered 0.9375, e.g., 1.40625 denotes the nose in the head and neck R_1 and 0.9375 also denotes the nose. When the cross-sectional image numbered 0.9375 needs to be recognized, i.e., that layer is the identification layer, the adjacent layers numbered 0.46875 and 1.40625 are also acquired. When these three cross-sectional images are input into the neural network for recognition, the network outputs the numbers 0.46875, 0.9375 and 1.40625; since 0.9375 and 1.40625 indicate the same part or morphological structure, only two parts or morphological structures of the head and neck region are obtained from the numbers 0.46875, 0.9375 and 1.40625.
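The labeling of one region's slices can be sketched as below, combining the formula n = m × N / M with rounding as the merging rule; the rounding rule and the function name are assumptions for illustration:

```python
def label_slices(num_slices: int, total_numbers: int) -> list[int]:
    """Number the M cross-sectional slices of a region and merge similar numbers by rounding."""
    labels = []
    for m in range(1, num_slices + 1):
        n = m * total_numbers / num_slices  # piecewise linear number n = m * N / M
        labels.append(round(n))             # nearby numbers collapse to the same label
    return labels

# Head-and-neck example from the text: 64 slices, 30 numbers.
print(label_slices(64, 30)[:4])  # raw values 0.46875, 0.9375, 1.40625, 1.875 -> [0, 1, 1, 2]
```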
In one embodiment, as shown in fig. 5, matching and invoking an analysis algorithm corresponding to a part or morphological structure based on the part or morphological structure comprises the following steps:
step S502, determine whether the part or morphological structure includes a target part or a target morphological structure.
In step S504, when it is determined that the part or the morphological structure includes the target part or the target morphological structure, an analysis algorithm corresponding to the target part or the target morphological structure is called.
The target part or target morphological structure refers to a specific part or structure; in other words, the auxiliary analysis of the image in this case needs to determine whether a specific part or morphological structure, i.e., the target part or target morphological structure, is included in the target medical image.
Specifically, when the part or morphological structure is obtained, the predetermined target part or target morphological structure is acquired, and the obtained part or morphological structure is matched against it. If a target part or target morphological structure corresponding to the obtained part or morphological structure exists, it is determined that the part or morphological structure includes the target part or target morphological structure; the analysis algorithm corresponding to that target part or target morphological structure is then called and used as the analysis algorithm for the obtained part or morphological structure. For example, if the target parts include part 1 and part 2 and the part obtained this time is part 2, then when the target parts are matched with the obtained part, part 2 among the target parts matches the obtained part 2, so the analysis algorithm corresponding to part 2 is called. If the target parts include part 1 and part 2 while the parts obtained this time are part 1, part 2 and part 3, only part 1 and part 2 can be matched among the target parts, and the analysis algorithms corresponding to part 1 and part 2 are called. In this embodiment, the analysis algorithm of a specific part is called according to this judgment, and the image is analyzed in a targeted manner, which saves resources.
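A minimal sketch of this target matching is given below; the part names and the algorithm registry are hypothetical placeholders used only to mirror the example:

```python
def match_target_algorithms(recognized_parts, target_parts, algorithms):
    """Call algorithms only for recognized parts that are also target parts."""
    matched = set(recognized_parts) & set(target_parts)
    return {part: algorithms[part] for part in matched if part in algorithms}

# Targets are part 1 and part 2; the image yields parts 1, 2 and 3,
# so only the algorithms for parts 1 and 2 are called.
algorithms = {"part1": lambda img: "result 1", "part2": lambda img: "result 2"}
to_run = match_target_algorithms({"part1", "part2", "part3"}, {"part1", "part2"}, algorithms)
print(sorted(to_run))  # ['part1', 'part2']
```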
In another embodiment, as shown in fig. 6, matching and invoking an analysis algorithm corresponding to a site or morphological structure based on the site or morphological structure comprises the following steps:
step S602, determining an analysis algorithm which can be called according to the part or the morphological structure.
In step S604, the analysis algorithm that can be called is used as the analysis algorithm corresponding to the part or the morphological structure.
Specifically, in this embodiment there is no need to judge whether the parts or morphological structures include a specific part; the analysis algorithms that can be called are determined directly from the acquired parts and morphological structures. In other words, for every part or morphological structure acquired, the corresponding analysis algorithm is called to analyze the target medical image. For example, if the parts include part 1 and part 2, the analysis algorithms corresponding to part 1 and part 2 are called directly, thereby speeding up the computer-aided analysis.
It should be understood that, although the steps in the flowcharts of fig. 2 and fig. 5-6 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2 and fig. 5-6 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and the order of performing these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided a computer-aided image analysis apparatus, including: an image recognition module 702, an algorithm matching module 704, and an image analysis module 706, wherein:
the image recognition module 702 is configured to perform image recognition on the target medical image, and determine a part or a morphological structure included in the target medical image.
And an algorithm matching module 704, configured to match and invoke an analysis algorithm corresponding to the part or the morphological structure based on the part or the morphological structure.
And the image analysis module 706 is configured to analyze the target medical image according to an analysis algorithm to obtain an analysis result.
In one embodiment, the image recognition module 702 is further configured to perform image recognition on the target medical image by using a neural network, and determine a part or a morphological structure included in the target medical image; and the neural network is obtained by training according to the labeled medical image.
In one embodiment, the image recognition module 702 is further configured to perform image recognition on each cross-sectional image in the target medical image by using a neural network, and obtain a number corresponding to each cross-sectional image; and determining the part or the morphological structure included in the transverse sectional image according to the number.
In one embodiment, the image analysis module 706 is further configured to determine a portion or morphological structure included in each cross-sectional image of the target medical image; acquiring an analysis algorithm corresponding to the part or the morphological structure included in each transverse sectional image; and respectively inputting each transverse sectional image into a corresponding analysis algorithm, and analyzing the transverse sectional images by using the corresponding analysis algorithm to obtain an analysis result.
In one embodiment, the computer-aided image analysis apparatus further includes a labeling module, configured to obtain a preset standard image and determine the number of the divided portion on the standard image; determining the number of cross-sectional images included in the training sample; calculating based on the numbers of the divided parts and the number of the transverse sectional images to respectively obtain the numbers of the transverse sectional images; and labeling each transverse sectional image of the training sample based on the number of each transverse sectional image to obtain a labeled medical image.
In one embodiment, the algorithm matching module 704 is further configured to determine a callable analysis algorithm according to the part or morphological structure, and to use the callable analysis algorithm as the analysis algorithm corresponding to the part or morphological structure.
For the specific limitations of the computer-aided image analysis apparatus, reference may be made to the above limitations of the computer-aided image analysis method, which are not described herein again. All or part of the modules in the computer-aided image analysis device can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a computer-aided image analysis method.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of part of the structure associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory having a computer program stored therein and a processor that when executing the computer program performs the steps of:
performing image recognition on the target medical image, and determining a part or a morphological structure included in the target medical image;
matching and calling an analysis algorithm corresponding to the part or the morphological structure based on the part or the morphological structure;
and analyzing the target medical image according to an analysis algorithm to obtain an analysis result.
In one embodiment, the processor, when executing the computer program, further performs the steps of: performing image recognition on the target medical image by using a neural network, and determining a part or a morphological structure included in the target medical image; and the neural network is obtained by training according to the labeled medical image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
utilizing a neural network to carry out image identification on each transverse sectional image in the target medical image, and respectively obtaining the number corresponding to each transverse sectional image; and determining the part or the morphological structure included in the transverse sectional image according to the number.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a preset standard image, and determining the divided numbers on the standard image; determining the number of cross-sectional images included in the training sample; calculating based on the numbers of the divided parts and the number of the transverse sectional images to respectively obtain the numbers of the transverse sectional images; and labeling each transverse sectional image of the medical image based on the number of each transverse sectional image to obtain a labeled medical image.
In one embodiment, the processor, when executing the computer program, further performs the steps of: judging whether the part or the morphological structure comprises a target part or a target morphological structure; when it is determined that the portion or morphological structure includes a target portion or target morphological structure, an analysis algorithm corresponding to the target portion or target morphological structure is invoked.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
performing image recognition on the target medical image, and determining a part or a morphological structure included in the target medical image;
matching and calling an analysis algorithm corresponding to the part or the morphological structure based on the part or the morphological structure;
and analyzing the target medical image according to an analysis algorithm to obtain an analysis result.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing image recognition on the target medical image by using a neural network, and determining a part or a morphological structure included in the target medical image; and the neural network is obtained by training according to the labeled medical image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
utilizing a neural network to carry out image identification on each transverse sectional image in the target medical image, and respectively obtaining the number corresponding to each transverse sectional image; and determining the part or the morphological structure included in the transverse sectional image according to the number.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the part or morphological structure included by each transverse sectional image in the target medical image; acquiring an analysis algorithm corresponding to the part or the morphological structure included in each transverse sectional image; and respectively inputting each transverse sectional image into a corresponding analysis algorithm, and analyzing the transverse sectional images by using the corresponding analysis algorithm to obtain an analysis result.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a preset standard image, and determining the number of each divided part on the standard image; determining the number of cross-sectional images included in the training sample; calculating based on the numbers of the divided parts and the number of the cross-sectional images to respectively obtain the number of each cross-sectional image; and labeling each cross-sectional image of the training sample based on its number to obtain a labeled medical image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
judging whether the part or the morphological structure comprises a target part or a target morphological structure; when it is determined that the part or morphological structure includes a target part or target morphological structure, invoking an analysis algorithm corresponding to the target part or target morphological structure.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining an analysis algorithm corresponding to a part or a morphological structure included in the target medical image; and inputting the target medical image into a corresponding analysis algorithm so as to analyze the target medical image by using the corresponding analysis algorithm to obtain an analysis result.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of computer-aided image analysis, the method comprising:
performing image recognition on a target medical image, and determining a part or a morphological structure included in each cross-sectional image in the target medical image; the parts or morphological structures comprise non-uniform parts or morphological structures;
and calling an analysis algorithm corresponding to the part or the morphological structure included by each transverse-sectional image, and analyzing each transverse-sectional image in the target medical image to obtain an analysis result.
2. The method according to claim 1, wherein the image recognition of the target medical image and the determination of the portion or morphological structure included in each cross-sectional image of the target medical image comprises:
and carrying out image recognition on the target medical image by utilizing a neural network, and determining the part or morphological structure included by each transverse sectional image in the target medical image.
3. The method according to claim 2, wherein the image recognition of the target medical image by using the neural network to determine the portion or morphological structure included in each cross-sectional image in the target medical image comprises:
utilizing a neural network to perform image recognition on a transverse sectional image in the target medical image to obtain a number corresponding to the transverse sectional image;
and determining the part or the morphological structure included in the transverse tomography image according to the number.
4. The method according to claim 3, wherein the neural network is trained from a labeled medical image obtained from a predetermined standard image, and the step of obtaining the labeled medical image from the predetermined standard image comprises:
acquiring a preset standard image, and determining the number of a divided part on the standard image;
determining the number of cross-sectional images included in a training sample;
calculating based on the numbers of the divided parts and the number of the transverse sectional images to respectively obtain the numbers of the transverse sectional images;
and labeling each transverse sectional image in the training sample based on the serial number of each transverse sectional image to obtain a labeled medical image.
5. The method according to claim 1, wherein the invoking an analysis algorithm corresponding to a portion or a morphological structure included in each of the cross-sectional images to analyze each of the cross-sectional images in the target medical image to obtain an analysis result comprises:
matching and calling an analysis algorithm corresponding to the part or the morphological structure based on the part or the morphological structure;
and analyzing the target medical image according to the analysis algorithm to obtain the analysis result.
6. The method according to claim 5, wherein matching and invoking an analysis algorithm corresponding to the part or morphological structure based on the part or morphological structure comprises:
determining whether the part or morphological structure comprises a target part or a target morphological structure;
when it is determined that the part or morphological structure comprises a target part or target morphological structure, invoking an analysis algorithm corresponding to the target part or target morphological structure.
7. The method of any one of claims 1-6, wherein the target medical image comprises at least two layers of cross-sectional images, including an identification layer and an adjacent layer corresponding to the identification layer.
8. The method of claim 7, wherein the at least two layers of cross-sectional images are three continuous layers of cross-sectional images, comprising an identification layer and an upper adjacent layer and a lower adjacent layer corresponding to the identification layer.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any of claims 1-8 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202210823460.9A 2019-07-18 2019-07-18 Computer-aided image analysis method, computer device and storage medium Pending CN115063397A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210823460.9A CN115063397A (en) 2019-07-18 2019-07-18 Computer-aided image analysis method, computer device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210823460.9A CN115063397A (en) 2019-07-18 2019-07-18 Computer-aided image analysis method, computer device and storage medium
CN201910650614.7A CN110490841B (en) 2019-07-18 2019-07-18 Computer-aided image analysis method, computer device and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910650614.7A Division CN110490841B (en) 2019-07-18 2019-07-18 Computer-aided image analysis method, computer device and storage medium

Publications (1)

Publication Number Publication Date
CN115063397A true CN115063397A (en) 2022-09-16

Family

ID=68547363

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910650614.7A Active CN110490841B (en) 2019-07-18 2019-07-18 Computer-aided image analysis method, computer device and storage medium
CN202210823460.9A Pending CN115063397A (en) 2019-07-18 2019-07-18 Computer-aided image analysis method, computer device and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201910650614.7A Active CN110490841B (en) 2019-07-18 2019-07-18 Computer-aided image analysis method, computer device and storage medium

Country Status (1)

Country Link
CN (2) CN110490841B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111027469B (en) * 2019-12-09 2024-03-01 上海联影智能医疗科技有限公司 Human body part recognition method, computer device, and readable storage medium
CN111161239B (en) * 2019-12-27 2024-02-27 上海联影智能医疗科技有限公司 Medical image analysis method, device, storage medium and computer equipment
CN112766314A (en) * 2020-12-31 2021-05-07 上海联影智能医疗科技有限公司 Anatomical structure recognition method, electronic device, and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10169647B2 (en) * 2016-07-27 2019-01-01 International Business Machines Corporation Inferring body position in a scan
CN108172275B (en) * 2016-12-05 2022-02-11 北京东软医疗设备有限公司 Medical image processing method and device
CN107767935A (en) * 2017-09-15 2018-03-06 深圳市前海安测信息技术有限公司 Medical image specification processing system and method based on artificial intelligence
CN107767928A (en) * 2017-09-15 2018-03-06 深圳市前海安测信息技术有限公司 Medical image report preparing system and method based on artificial intelligence
CN108288499B (en) * 2018-01-22 2021-08-06 东软医疗系统股份有限公司 Automatic triage method and device
CN108665963A (en) * 2018-05-15 2018-10-16 上海商汤智能科技有限公司 A kind of image data analysis method and relevant device
CN109658377B (en) * 2018-10-31 2023-10-10 泰格麦迪(北京)医疗科技有限公司 Breast MRI lesion area detection method based on multidimensional information fusion

Also Published As

Publication number Publication date
CN110490841A (en) 2019-11-22
CN110490841B (en) 2022-07-05

Similar Documents

Publication Publication Date Title
KR101857624B1 (en) Medical diagnosis method applied clinical information and apparatus using the same
Zhang et al. Detecting anatomical landmarks from limited medical imaging data using two-stage task-oriented deep neural networks
AU2017292642B2 (en) System and method for automatic detection, localization, and semantic segmentation of anatomical objects
JP6567179B2 (en) Pseudo CT generation from MR data using feature regression model
CN110310256B (en) Coronary stenosis detection method, coronary stenosis detection device, computer equipment and storage medium
CN110766730B (en) Image registration and follow-up evaluation method, storage medium and computer equipment
CN110415792B (en) Image detection method, image detection device, computer equipment and storage medium
JP7221421B2 (en) Vertebral localization method, device, device and medium for CT images
CN110490841B (en) Computer-aided image analysis method, computer device and storage medium
CN109712163B (en) Coronary artery extraction method, device, image processing workstation and readable storage medium
CN110717905B (en) Brain image detection method, computer device, and storage medium
CN111179231A (en) Image processing method, device, equipment and storage medium
CN111383259A (en) Image analysis method, computer device, and storage medium
CN111539956A (en) Cerebral hemorrhage automatic detection method based on brain auxiliary image and electronic medium
WO2023088275A1 (en) Automatic roi positioning method and apparatus, surgical robot system, device and medium
CN112580404A (en) Ultrasonic parameter intelligent control method, storage medium and ultrasonic diagnostic equipment
CN113706514B (en) Focus positioning method, device, equipment and storage medium based on template image
CN110992439A (en) Fiber bundle tracking method, computer device and storage medium
CN112200780B (en) Bone tissue positioning method, device, computer equipment and storage medium
Lu et al. Landmark localization for cephalometric analysis using multiscale image patch-based graph convolutional networks
CN113822323A (en) Brain scanning image identification processing method, device, equipment and storage medium
CN111160441B (en) Classification method, computer device, and storage medium
CN110310314B (en) Image registration method and device, computer equipment and storage medium
CN109712186B (en) Method, computer device and storage medium for delineating a region of interest in an image
Delmoral et al. Segmentation of pathological liver tissue with dilated fully convolutional networks: A preliminary study

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination