CN110895812A - CT image detection method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN110895812A
Authority
CN
China
Prior art keywords
layer
detected
image
dimensional image
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911212094.8A
Other languages
Chinese (zh)
Inventor
李玉才 (Li Yucai)
王少康 (Wang Shaokang)
陈宽 (Chen Kuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Infervision Technology Co Ltd
Infervision Co Ltd
Original Assignee
Infervision Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Infervision Co Ltd
Priority to CN201911212094.8A
Publication of CN110895812A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone

Abstract

The invention discloses a CT image detection method and apparatus, a storage medium, and electronic equipment. A single-layer two-dimensional image to be detected is taken from a multi-layer two-dimensional image, together with N consecutive two-dimensional slices on one side of it and/or N consecutive slices on the other side, as the current image to be detected, where N is an integer greater than or equal to 1. A neural network model is then used to generate the region of interest of the CT image to be detected, replacing manual inspection of the region of interest in the CT image, which greatly reduces the workload of medical staff and improves detection efficiency. In addition, because lesions such as fractures are usually reflected across several consecutive slices, the detection result is obtained by examining multiple consecutive slices and combining the layers above and below, which further improves detection accuracy.

Description

CT image detection method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of image processing, and in particular, to a method and an apparatus for detecting a CT image, a computer-readable storage medium, and an electronic device.
Background
Computed Tomography (CT) reconstructs a three-dimensional radiographic medical image using digital geometry processing. An X-ray source rotates around a single axial plane of the human body; because different tissues absorb (or refract) X-rays to different degrees, a cross-sectional image can be reconstructed by computer. Tomographic images of the corresponding tissues are obtained through window-width and window-level processing, and these tomographic images are stacked layer by layer to form a three-dimensional image.
CT scanning of the human body is an important means for diagnosing fractures in hospitals. When the body is subjected to external force and a bone breaks, severe fractures are easy to find, but slight fractures are hard to identify directly from appearance and must be located with auxiliary means such as X-ray or CT images. However, CT scans for bone fractures typically use thin slices, 0.5-2.0 mm thick; a thoraco-abdominal scan, for example, contains at least 300-500 slices. At present, fracture positions are determined mainly by radiologists manually reading the CT tomographic images, which imposes a huge workload and, under high-intensity work, easily leads to missed detections, especially of fine fracture positions.
Disclosure of Invention
To solve this technical problem, the present application provides a CT image detection method. A single-layer two-dimensional image to be detected in a multi-layer two-dimensional image, together with N consecutive two-dimensional slices on one side of it and/or N consecutive slices on the other side, is acquired as the current image to be detected, where N is an integer greater than or equal to 1. The current image to be detected is then input into a neural network model to generate a region of interest of the CT image to be detected. The neural network model replaces manual detection of the region of interest in the CT image, which greatly reduces the workload of radiology staff, improves detection efficiency, and reduces missed detections. In addition, because fracture positions and the like are usually reflected across several consecutive slices, detecting a single two-dimensional CT slice together with the consecutive slices above and below it allows a detection result to be obtained comprehensively by combining the adjacent layers, further improving detection accuracy.
According to one aspect of the present application, a method for detecting a CT image is provided, wherein the CT image to be detected includes a multi-layer two-dimensional image. The method includes: acquiring a single-layer two-dimensional image to be detected from the multi-layer two-dimensional image, together with N consecutive two-dimensional slices on one side of it and/or N consecutive slices on the other side, as the current image to be detected, where N is an integer greater than or equal to 1; and inputting the current image to be detected into a neural network model to generate the region of interest of the CT image to be detected.
In an embodiment, acquiring the single-layer two-dimensional image to be detected together with the N consecutive two-dimensional slices on one side of it and/or the N consecutive slices on the other side as the current image to be detected includes: sequentially taking each layer of the multi-layer two-dimensional image as the single-layer two-dimensional image to be detected; and acquiring that single-layer image together with the N consecutive slices on one side of it and/or the N consecutive slices on the other side as the current image to be detected.
In an embodiment, acquiring the single-layer two-dimensional image to be detected together with the N consecutive two-dimensional slices on one side of it and/or the N consecutive slices on the other side as the current image to be detected includes: when the number of two-dimensional slices on one side of the single-layer image to be detected is less than N, abandoning acquisition of that single-layer image.
In an embodiment, acquiring the single-layer two-dimensional image to be detected together with the N consecutive two-dimensional slices on one side of it and/or the N consecutive slices on the other side as the current image to be detected includes: when the number of two-dimensional slices on one side of the single-layer image to be detected is less than N but greater than zero, repeatedly selecting the slices on that side so that the number of slices on that side reaches N.
In an embodiment, after generating the region of interest of the CT image to be detected, the method further includes: classifying the region of interest and calculating the confidence of the classification result.
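The classification-plus-confidence step can be sketched as a softmax over per-class scores for each region of interest. This is an illustrative sketch only; the patent does not specify how the confidence is computed, and the function name and use of logits are assumptions.

```python
import numpy as np

def classify_roi(logits):
    """Classify one region of interest from its per-class scores (logits)
    and report the softmax probability of the winning class as the
    confidence of the classification result. Illustrative sketch; the
    patent does not specify this computation."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()                      # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()      # softmax probabilities
    k = int(p.argmax())                  # predicted class index
    return k, float(p[k])                # class and its confidence
```

For example, `classify_roi([2.0, 0.5, 0.1])` returns class 0 with a confidence of roughly 0.73.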
In one embodiment, the training method of the neural network model includes: inputting a second sample input and a corresponding second sample output into the neural network model for training, wherein the second sample input comprises a single-layer training two-dimensional image and N continuous two-dimensional images on one side of the single-layer training two-dimensional image and/or N continuous two-dimensional images on the other side of the single-layer training two-dimensional image, and the second sample output comprises a classification result of an interested area corresponding to the second sample input.
In an embodiment, the training method further comprises: inputting a second verification sample input into the neural network model to obtain a second verification sample output, wherein the second verification sample input comprises a single-layer verification two-dimensional image and a continuous N-layer two-dimensional image on one side of the single-layer verification two-dimensional image and/or a continuous N-layer two-dimensional image on the other side of the single-layer verification two-dimensional image; calculating a second error between the second validation sample output and a second standard result; and stopping training when the second error is less than a second preset error value.
In an embodiment, after calculating the second error between the second validation sample output and the second standard result, the method further includes: when the second error is greater than or equal to the second preset error value, performing derivative (gradient) optimization on the neural network model.
In an embodiment, after the generating the region of interest of the CT image to be detected, the method further includes: and accurately positioning the region of interest to obtain an abnormal region.
In one embodiment, the training method of the neural network model includes: inputting a first sample input and a corresponding first sample output into the neural network model for training, wherein the first sample input comprises a single-layer training two-dimensional image and N continuous two-dimensional images on one side of the single-layer training two-dimensional image and/or N continuous two-dimensional images on the other side of the single-layer training two-dimensional image, and the first sample output comprises the position of an interested area corresponding to the first sample input.
In an embodiment, the training method further comprises: performing any one or a combination of the following operations on the first sample input: adjusting the window width and window level of the first sample input, randomly turning the first sample input, randomly adjusting the brightness of the first sample input, and randomly cutting the first sample input.
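The listed augmentation operations can be sketched as follows in numpy. The specific window values, flip probability, brightness range, and crop fraction below are illustrative choices, not values from the patent.

```python
import numpy as np

def apply_window(img_hu, center, width):
    """Window-level / window-width transform: map HU values in
    [center - width/2, center + width/2] linearly to [0, 1],
    clipping values outside the window."""
    lo = center - width / 2.0
    return np.clip((img_hu - lo) / float(width), 0.0, 1.0)

def augment(img, rng):
    """Randomly flip, adjust brightness, and crop one 2-D slice.
    All ranges are illustrative."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                    # random horizontal flip
    img = img * rng.uniform(0.9, 1.1)         # random brightness scaling
    h, w = img.shape
    top = int(rng.integers(0, h // 10 + 1))   # crop up to 10% from the top
    left = int(rng.integers(0, w // 10 + 1))  # and from the left
    return img[top:, left:]
```

A typical bone window might use a center of 500 HU and a width of 400 HU, so `apply_window(slice_hu, 500.0, 400.0)` maps 300 HU to 0 and 700 HU to 1 (these window values are illustrative).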
In an embodiment, the training method further comprises: inputting a first verification sample input into the neural network model to obtain a first verification sample output, wherein the first verification sample input comprises a single-layer verification two-dimensional image and a continuous N-layer two-dimensional image on one side of the single-layer verification two-dimensional image and/or a continuous N-layer two-dimensional image on the other side of the single-layer verification two-dimensional image; calculating a first error between the first validation sample output and a first standard result; and stopping training when the first error is less than a first preset error value.
In an embodiment, after calculating the first error between the first validation sample output and the first standard result, the method further includes: when the first error is greater than or equal to the first preset error value, performing derivative (gradient) optimization on the neural network model.
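Taken together, the validate-compare-optimize cycle described in the preceding paragraphs amounts to an early-stopping training loop. A minimal sketch follows, in which `optimize_step` and `validation_error` are hypothetical stand-ins for the patent's unspecified training internals:

```python
def train_until_converged(optimize_step, validation_error,
                          preset_error=0.01, max_rounds=1000):
    """Run validation after each round; stop once the validation error
    drops below the preset error value, otherwise keep applying
    derivative (gradient) optimization. Both callables are hypothetical
    stand-ins for the model's actual training internals."""
    for rounds in range(max_rounds):
        err = validation_error()
        if err < preset_error:   # error below the preset value: stop training
            return rounds, err
        optimize_step()          # error too large: optimize and re-validate
    return max_rounds, validation_error()
```

The same loop serves both the "first" (localization) and "second" (classification) training procedures; only the error measure and preset threshold differ.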
According to another aspect of the present application, an apparatus for detecting a CT image is provided, wherein the CT image to be detected includes a multi-layer two-dimensional image. The apparatus includes: an image acquisition module configured to acquire a single-layer two-dimensional image to be detected from the multi-layer two-dimensional image, together with N consecutive two-dimensional slices on one side of it and/or N consecutive slices on the other side, as the current image to be detected, where N is an integer greater than or equal to 1; and a region generation module configured to input the current image to be detected into a neural network model to generate the region of interest of the CT image to be detected.
According to another aspect of the present application, a computer-readable storage medium is provided, storing a computer program for executing any of the CT image detection methods described above.
According to another aspect of the present application, an electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the CT image detection method described above.
With the CT image detection method above, a single-layer two-dimensional image to be detected in a multi-layer two-dimensional image, together with N consecutive two-dimensional slices on one side of it and/or N consecutive slices on the other side, is acquired as the current image to be detected, where N is an integer greater than or equal to 1; the current image to be detected is then input into a neural network model to generate the region of interest of the CT image to be detected. The neural network model replaces manual detection of the region of interest in the CT image, which greatly reduces the workload of radiology staff, improves detection efficiency, and reduces missed detections. In addition, because fracture positions and the like are usually reflected across several consecutive slices, detecting a single two-dimensional CT slice together with the consecutive slices above and below it allows the detection result to be obtained comprehensively by combining the adjacent layers, further improving detection accuracy.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a flowchart illustrating a method for detecting a CT image according to an exemplary embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating a method for detecting a CT image according to another exemplary embodiment of the present application.
Fig. 3 is a flowchart illustrating a method for detecting a CT image according to another exemplary embodiment of the present application.
Fig. 4 is a flowchart illustrating a neural network model training method applied to CT image detection according to an exemplary embodiment of the present application.
Fig. 5 is a flowchart illustrating a neural network model training method applied to CT image detection according to another exemplary embodiment of the present application.
Fig. 6 is a flowchart illustrating a neural network model training method applied to CT image detection according to another exemplary embodiment of the present application.
Fig. 7 is a flowchart illustrating a neural network model training method applied to CT image detection according to another exemplary embodiment of the present application.
Fig. 8 is a flowchart illustrating a neural network model training method applied to CT image detection according to another exemplary embodiment of the present application.
Fig. 9 is a flowchart illustrating a neural network model training method applied to CT image detection according to another exemplary embodiment of the present application.
Fig. 10 is a flowchart illustrating a neural network model training method applied to CT image detection according to another exemplary embodiment of the present application.
Fig. 11 is a schematic structural diagram of a detection apparatus for CT images according to an exemplary embodiment of the present application.
Fig. 12 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application.
Fig. 13 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application.
Fig. 14 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application.
Fig. 15 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application.
Fig. 16 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application.
Fig. 17 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application.
Fig. 18 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application.
Fig. 19 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application.
Fig. 20 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application.
Fig. 21 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application.
Fig. 22 is a block diagram of an electronic device provided in an exemplary embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Summary of the application
A CT tomographic image is formed by stacking many two-dimensional slices and therefore has three-dimensional characteristics; it is an important means and basis for determining whether a fracture or other traumatic lesion has occurred. Most existing practice still relies on professional medical staff manually inspecting the many two-dimensional slices to locate lesions such as fractures (the regions of interest). This is plainly inefficient; because professional medical staff are in serious shortage, the task load and working pressure are heavy, and under high-intensity work manual detection may miss lesions, especially fine fracture positions.
With the rapid development of artificial intelligence, AI has begun to be applied across industries, including the medical field. Using AI in place of manual labor for large volumes of highly repetitive work can greatly reduce the workload of medical staff, letting them concentrate on work that is more specialized or must be handled manually (for example, diagnosis and treatment of diseases). Detecting lesion positions in CT images is exactly such high-volume, highly repetitive work, and can be carried out by AI image processing. However, existing approaches that apply AI to CT images either classify and judge 3D voxels of the human body, which is complicated and cannot produce a detection-frame position (marking the lesion region) that medical staff can read, hindering further diagnosis; or, limited by their image processing, can only detect fractures with obvious features, so they apply only to fracture lesions at a few positions, their detection precision is low, and they may miss slight fractures entirely.
To address the current lack of a method that can locate fracture lesions at any part of the human body by examining a CT image, the CT image detection method of the present application acquires a single-layer two-dimensional image to be detected in a multi-layer two-dimensional image, together with N consecutive two-dimensional slices on one side of it and/or N consecutive slices on the other side, as the current image to be detected, where N is an integer greater than or equal to 1, and inputs the current image to be detected into a neural network model to generate the region of interest of the CT image to be detected. Using the neural network model instead of manual detection of the region of interest greatly reduces the workload of radiology staff and improves detection efficiency. In addition, because fracture positions and the like are usually reflected across several consecutive slices, detecting a single two-dimensional CT slice together with the consecutive slices above and below it allows the detection result to be obtained comprehensively by combining the adjacent layers, further improving detection accuracy.
Exemplary method
Fig. 1 is a flowchart illustrating a method for detecting a CT image according to an exemplary embodiment of the present disclosure. The CT image to be detected includes a multi-layer two-dimensional image, and this embodiment can be applied to an electronic device. As shown in Fig. 1, the method includes the following steps:
step 110: and acquiring a single-layer two-dimensional image to be detected in the multilayer two-dimensional images, and N continuous two-dimensional images on one side of the single-layer two-dimensional image to be detected and/or N continuous two-dimensional images on the other side of the single-layer two-dimensional image to be detected as the current image to be detected, wherein N is an integer greater than or equal to 1.
Because a CT image is composed of many thin slices, and a lesion is usually a three-dimensional structure that appears in several two-dimensional slices (a fracture, for example, usually has corresponding fracture regions in several consecutive two-dimensional CT slices), adjacent or nearby slices are correlated and assist in detecting the lesion region in the CT image to be detected, improving detection accuracy. When one two-dimensional slice of the CT image is detected, that slice and the N slices on each side of it (above and below) are detected together; the correlation between consecutive slices helps rule out interference from isolated artifacts in the single slice, improving detection accuracy. In an embodiment, when the number of slices on one side of the single-layer image to be detected is less than N, acquisition of that single-layer image may simply be abandoned: because every slice of the CT image is eventually selected for detection (both as the single-layer image to be detected and as a neighboring slice of other single-layer images), a slice with fewer than N neighbors on one side can be skipped directly without losing coverage.
In an embodiment, when the number of slices on one side of the single-layer image to be detected is less than N but greater than zero, the available slices on that side (one or more) may be selected repeatedly until that side contains N slices; the single-layer image to be detected, the N slices on that side, and/or the N consecutive slices on the other side are then acquired as the current image to be detected. It should be understood that, depending on the requirements of the application scene, an embodiment may select the single-layer image together with N consecutive slices on one side only (N+1 slices in total), or, to further improve detection accuracy, the single-layer image together with N consecutive slices on both sides (2N+1 slices in total). Any number of detection layers that ensures detection accuracy is acceptable, and the embodiments of the present application do not limit the specific choice of the number of detection layers.
In an embodiment, step 110 may be implemented as follows: take each slice of the multi-layer two-dimensional image in turn as the single-layer two-dimensional image to be detected, and acquire that slice together with the N consecutive slices on one side of it and/or the N consecutive slices on the other side as the current image to be detected. Detecting every slice of the CT image layer by layer, from top to bottom or bottom to top, and detecting the N slices on one or both sides along with each slice, improves the detection accuracy of each slice and hence of the whole CT image. It should be understood that the step between two consecutively detected single-layer images may also be chosen according to the requirements of the application scene; it may be 1 (layer-by-layer detection) or another value smaller than N, as long as the chosen step still allows each slice to be detected accurately. The embodiments of the present application do not limit the specific step.
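The slice-selection logic above, including both edge strategies (skipping a slice whose side has fewer than N neighbors, or padding by repeating boundary slices), can be sketched as follows. The function name and signature are illustrative, not from the patent.

```python
import numpy as np

def select_window(volume, i, n, pad_edges=False):
    """Return the (2n+1)-slice stack centred on slice i of a CT volume
    of shape (num_slices, H, W), i.e. the current image to be detected.

    If one side of slice i has fewer than n slices, either return None
    (abandon the slice) or, with pad_edges=True, repeat the boundary
    slice until each side holds n layers, matching the two edge
    strategies described above."""
    num = volume.shape[0]
    lo, hi = i - n, i + n
    if lo < 0 or hi > num - 1:
        if not pad_edges:
            return None                       # strategy 1: abandon this slice
        # strategy 2: clamp out-of-range indices to the nearest real slice
        idx = [min(max(j, 0), num - 1) for j in range(lo, hi + 1)]
        return volume[idx]
    return volume[lo:hi + 1]
```

With a 5-slice volume and n=2, for example, `select_window(v, 0, 2, pad_edges=True)` stacks slices [0, 0, 0, 1, 2], repeating the top slice twice.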
Step 120: and inputting the current image to be detected into the neural network model to generate the region of interest of the CT image to be detected.
The 2N+1 slices containing the currently detected single-layer image (for convenience, 2N+1 is used as the example below, though the embodiments of the present application are not limited to it) are taken as a whole, the current image to be detected, and input into the trained neural network model. The model extracts feature information from the current image and generates the region of interest of the CT image to be detected from that feature information. The region of interest contains all possible lesion areas; that is, it is the result of a preliminary detection of the CT image. Naturally, when the detection accuracy of the neural network model is high enough, the generated region of interest is the lesion area itself. For CT images of different body parts, such as chest, abdomen, or leg CT images, the neural network model automatically extracts different feature information as the basis for detection. In one embodiment, the neural network model may be a deep-learning neural network model, and may further include several composite model layers, each composed of a convolutional layer, a pooling layer, and an activation layer connected in series. Through deep learning, the model transforms features layer by layer, mapping the sample's representation in the original space into a new feature space and learning the internal regularities and representation levels of the sample data, which makes classification or prediction easier. In an embodiment, after extracting the feature information of the current image to be detected, the method may further include: upsampling the feature information.
During feature extraction, the feature maps of the current image to be detected become progressively smaller, which is unfavorable for detection precision. The feature information is therefore up-sampled (i.e., enlarged) to restore its size, further improving detection precision.
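As a minimal sketch of the up-sampling (enlargement) step described above, the following Python snippet enlarges a shrunken feature map by nearest-neighbour repetition. The function name and scale factor are illustrative assumptions, not details taken from this application:

```python
import numpy as np

def upsample_nearest(feat, factor=2):
    # Enlarge an (H, W) feature map by an integer factor via
    # nearest-neighbour repetition -- the "amplification processing"
    # described above. `factor` is a hypothetical choice.
    return np.repeat(np.repeat(feat, factor, axis=0), factor, axis=1)

feat = np.array([[1.0, 2.0],
                 [3.0, 4.0]])    # a tiny 2x2 "feature map"
up = upsample_nearest(feat, 2)
print(up.shape)                  # (4, 4)
```

In practice a detection network would more likely use bilinear interpolation or a learned (transposed-convolution) up-sampling layer; nearest-neighbour repetition is only the simplest stand-in for the idea.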
According to the CT image detection method, a single-layer two-dimensional image to be detected in a multi-layer two-dimensional image, a continuous N-layer two-dimensional image on one side of the single-layer two-dimensional image to be detected and/or a continuous N-layer two-dimensional image on the other side of the single-layer two-dimensional image to be detected are/is obtained as a current image to be detected, wherein N is an integer greater than or equal to 1, then the current image to be detected is input into a neural network model to generate an interested region of the CT image to be detected, the neural network model can be used for replacing the interested region in the CT image to be detected manually, so that the workload of medical personnel in the radiology department is greatly reduced, the detection efficiency is improved, meanwhile, the condition; in addition, considering that the fracture position and the like are usually reflected on continuous multilayer images, by detecting a single-layer two-dimensional CT image and upper and lower multilayer continuous images thereof, a detection result can be comprehensively obtained by combining the upper and lower layer images, and the detection accuracy is further improved.
In an embodiment, after step 120, the multi-layer two-dimensional image may be subjected to segmentation processing, and the region of interest may then be optimized according to the segmented multi-layer two-dimensional image. The multi-layer two-dimensional image in the CT image to be detected is segmented: parts irrelevant to the detection result, such as internal organs and clothing, are deleted, while muscle, bone tissue, and other parts affecting the detection result are segmented out and retained. Regions of interest that fall within interference regions such as internal organs or clothing are then removed by comparison with the segmented image, further optimizing the generated result. A specific segmentation method may set a threshold, such as a luminance threshold, a grayscale threshold, or a density threshold, and segment out the partial image required for detection by applying that threshold, reducing the influence of other parts on the detection and thereby further improving detection accuracy.
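A density-threshold segmentation of the kind just described can be sketched as follows. The Hounsfield-unit threshold and helper names are illustrative assumptions, not values from this application:

```python
import numpy as np

def segment_by_threshold(slice_hu, lower=150.0):
    # Keep only voxels at or above the (hypothetical) density threshold,
    # e.g. bone; soft tissue, organs and clothing fall below it.
    return slice_hu >= lower

def keep_roi(mask, roi_center):
    # Discard a region of interest whose centre lies in a masked-out
    # (interference) area.
    r, c = roi_center
    return bool(mask[r, c])

slice_hu = np.array([[  30.0, 400.0],
                     [-100.0, 900.0]])   # toy 2x2 slice in Hounsfield units
mask = segment_by_threshold(slice_hu)
print(keep_roi(mask, (0, 1)))   # True  -> ROI kept (dense tissue)
print(keep_roi(mask, (1, 0)))   # False -> ROI removed (interference region)
```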
Fig. 2 is a flowchart illustrating a method for detecting a CT image according to another exemplary embodiment of the present application. As shown in fig. 2, after step 120, the method may further include:

Step 130: classify the region of interest.

The feature information of the region of interest is compared with the feature information of each category of lesion, and the region of interest is assigned to a lesion category (when the region of interest is not a lesion region, it is deleted). This step may also be performed by the neural network model; that is, one of the outputs of the neural network model is the category of the region of interest (including whether a fracture is present, the degree of fracture, and so on). In an embodiment, a confidence level of the classification result of the region of interest may also be calculated. The confidence that a region of interest belongs to a certain lesion category can be obtained from the comparison between its feature information and the feature information of the different lesion categories. Setting confidence thresholds provides medical staff with a quantifiable reference: when the confidence is greater than a first confidence threshold, the classification result is considered credible and medical staff may adopt it with confidence; when the confidence is less than a second confidence threshold, the classification result is considered unreliable and further examination is needed, where the first confidence threshold is greater than or equal to the second confidence threshold.
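The two-threshold confidence rule above can be written as a small decision helper. The threshold values and the advisory labels are illustrative assumptions, not values specified in this application:

```python
def triage(confidence, t_high=0.9, t_low=0.5):
    # First confidence threshold t_high >= second threshold t_low.
    if confidence > t_high:
        return "credible"      # medical staff may adopt the result
    if confidence < t_low:
        return "unreliable"    # needs further manual examination
    return "borderline"        # between the two thresholds

print(triage(0.95))  # credible
print(triage(0.30))  # unreliable
print(triage(0.70))  # borderline
```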
Fig. 3 is a flowchart illustrating a method for detecting a CT image according to another exemplary embodiment of the present application. As shown in fig. 3, after step 120, the method may further include:

Step 140: precisely locate the region of interest to obtain an abnormal region.

The region of interest is obtained by preliminary detection; it may then be located more precisely to obtain an accurate abnormal region (i.e., the lesion region). This further precise locating effectively narrows the range of the region of interest and provides medical staff with more targeted, accurate, and reliable detection results.
Fig. 4 is a flowchart illustrating a neural network model training method applied to CT image detection according to an exemplary embodiment of the present application. As shown in fig. 4, the training method of the neural network model may include:

Step 510: input a first sample input and the corresponding first sample output into the neural network model for training, where the first sample input includes a single-layer training two-dimensional image together with N consecutive two-dimensional images on one side of the single-layer training two-dimensional image and/or N consecutive two-dimensional images on the other side of it, and the first sample output includes the position of the region of interest corresponding to the first sample input.

Before detection, the neural network model needs to be trained to ensure that it meets the required detection accuracy. The training process may input the first sample input and the corresponding first sample output into the neural network model, where the first sample input is a single-layer two-dimensional image consistent with the image to be detected during detection, plus N consecutive layers of two-dimensional images on one side of that single-layer image and/or N consecutive layers on the other side, and the first sample output is the position of the region of interest of the first sample. Training on many such first samples improves the detection accuracy of the neural network model.
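The shape of this train-on-many-sample-pairs procedure can be illustrated with a deliberately tiny stand-in: a logistic-regression "model" fitted by gradient descent on synthetic (input, label) pairs. This is only a sketch of the training loop, with made-up data; the real model is the convolutional network described above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for (first sample input, first sample output) pairs:
# each x is a flattened image, y a 0/1 "ROI present" label.
X = rng.normal(size=(100, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)

w = np.zeros(8)
lr = 0.5
for _ in range(200):                        # repeated training on sample pairs
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # model prediction (sigmoid)
    w -= lr * X.T @ (p - y) / len(y)        # gradient step on logistic loss

pred = (1.0 / (1.0 + np.exp(-(X @ w))) > 0.5).astype(float)
acc = float(np.mean(pred == y))
print(f"training accuracy: {acc:.2f}")      # rises toward 1.0 as training proceeds
```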
Fig. 5 is a flowchart illustrating a neural network model training method applied to CT image detection according to another exemplary embodiment of the present application. As shown in fig. 5, prior to step 510, the method may further include:

Step 520: pre-process the first sample input.

In an embodiment, the pre-processing may include any one or a combination of the following operations: adjusting the window width and window level of the first sample input, randomly flipping the first sample input, randomly adjusting the brightness of the first sample input, and randomly cropping the first sample input. Through such pre-processing (or enhancement), more training samples and different presentations of the same training sample (such as CT images of the same person at different angles) are obtained, which increases the training intensity and further improves the detection accuracy of the neural network model.
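The listed pre-processing operations might be sketched as follows. The window centre/width, brightness jitter range, and crop fraction are illustrative values, not ones specified in this application:

```python
import numpy as np

rng = np.random.default_rng(0)

def window(img, center=40.0, width=400.0):
    # Window-level / window-width adjustment: clip HU values into the
    # window [center - width/2, center + width/2] and rescale to [0, 1].
    lo = center - width / 2.0
    return np.clip((img - lo) / width, 0.0, 1.0)

def augment(img):
    # One possible combination of the operations listed above:
    # random flip, random brightness, random crop.
    if rng.random() < 0.5:
        img = img[:, ::-1]                          # random horizontal flip
    img = img * rng.uniform(0.9, 1.1)               # random brightness
    h, w = img.shape
    r = rng.integers(0, h // 4 + 1)
    c = rng.integers(0, w // 4 + 1)
    return img[r:r + 3 * h // 4, c:c + 3 * w // 4]  # random crop to 3/4 size

img = rng.normal(40.0, 200.0, size=(64, 64))        # toy CT slice in HU
out = augment(window(img))
print(out.shape)                                    # (48, 48)
```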
Fig. 6 is a flowchart illustrating a neural network model training method applied to CT image detection according to another exemplary embodiment of the present application. As shown in fig. 6, after step 510, the training method may further include:

Step 530: input a first verification sample into the neural network model to obtain a first verification sample output, where the first verification sample input includes a single-layer verification two-dimensional image together with N consecutive layers of two-dimensional images on one side of the single-layer verification two-dimensional image and/or N consecutive layers of two-dimensional images on the other side of it.

After repeated training on first samples, the trained neural network model produces a first verification sample output from the first verification sample input, where the first verification sample input is a single-layer two-dimensional image consistent with the image to be detected during detection, plus N consecutive layers of two-dimensional images on one side of that single-layer image and/or N consecutive layers on the other side.

Step 540: calculate a first error between the first verification sample output and a first standard result.

The first error between the first verification sample output and the first standard result is calculated to measure the detection accuracy of the neural network model, where the first standard result is the region of interest or the abnormal region corresponding to the first verification sample. One specific way of calculating the first error is to compute the distance between the first verification sample output and the first standard result. Since both are labeled regions, the distance may be obtained by computing the Manhattan distance between corresponding points of the two regions. Specifically, the Manhattan distance between two points p and q is:

d(p, q) = |x_p - x_q| + |y_p - y_q|

and the first error is the mean Manhattan distance over the n corresponding point pairs:

L1 = (1/n) * Σ_{k=1}^{n} d(p_k, q_k)

Step 550: stop training when the first error is smaller than a first preset error value.

A first preset error value is set in advance; when the first error is smaller than this value, the detection accuracy of the trained neural network model has reached the preset standard, and training can be stopped.
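The point-wise Manhattan-distance error described above can be computed as below. Pairing corresponding points and averaging the per-point distances is an assumption about how the distances are combined:

```python
def manhattan(p, q):
    # Manhattan (L1) distance between two 2-D points (x, y).
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def first_error(pred_pts, std_pts):
    # Mean Manhattan distance over corresponding points of the predicted
    # region and the standard (labelled) region.
    assert len(pred_pts) == len(std_pts) and pred_pts
    return sum(manhattan(p, q) for p, q in zip(pred_pts, std_pts)) / len(pred_pts)

pred = [(0, 0), (2, 3)]
std  = [(1, 1), (2, 5)]
print(first_error(pred, std))   # (2 + 2) / 2 = 2.0
```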
Fig. 7 is a flowchart illustrating a neural network model training method applied to CT image detection according to another exemplary embodiment of the present application. As shown in fig. 7, after step 540, the training method may further include:

Step 560: when the first error is greater than or equal to the first preset error value, perform derivation optimization (i.e., gradient-based optimization) on the neural network model.

When the first error is greater than or equal to the first preset error value, the accuracy of the trained neural network model has not yet reached the preset standard; the neural network model therefore needs further training, and training stops only once the first error falls below the first preset error value.
Fig. 8 is a flowchart illustrating a neural network model training method applied to CT image detection according to another exemplary embodiment of the present application. As shown in fig. 8, the training method of the neural network model may include:

Step 570: input a second sample input and the corresponding second sample output into the neural network model for training, where the second sample input includes a single-layer training two-dimensional image together with N consecutive two-dimensional images on one side of the single-layer training two-dimensional image and/or N consecutive two-dimensional images on the other side of it, and the second sample output includes the classification result of the region of interest corresponding to the second sample input.

The training process may input the second sample input and the corresponding second sample output into the neural network model, where the second sample input is a single-layer two-dimensional image consistent with the image to be detected during detection, plus N consecutive two-dimensional images on one side of that single-layer image and/or N on the other side, and the second sample output is the classification result of the region of interest of the second sample. It should be understood that step 570 and step 510 may be performed simultaneously: the first sample and the second sample may be the same sample, whose output then includes both the corresponding region of interest and the classification result of that region of interest. Of course, step 570 may also be performed separately.
Fig. 9 is a flowchart illustrating a neural network model training method applied to CT image detection according to another exemplary embodiment of the present application. As shown in fig. 9, after step 570, the method may further include:

Step 580: input a second verification sample into the neural network model to obtain a second verification sample output, where the second verification sample input includes a single-layer verification two-dimensional image together with N consecutive two-dimensional images on one side of the single-layer verification two-dimensional image and/or N consecutive two-dimensional images on the other side of it.

After repeated training on second samples, the trained neural network model produces a second verification sample output from the second verification sample input, where the second verification sample input is a single-layer two-dimensional image consistent with the image to be detected during detection, plus N consecutive layers of two-dimensional images on one side of that single-layer image and/or N consecutive layers on the other side.

Step 590: calculate a second error between the second verification sample output and a second standard result.

The second error between the second verification sample output and the second standard result is calculated to measure the detection accuracy of the neural network model, where the second standard result is the classification result of the region of interest corresponding to the second verification sample. One specific way of calculating the second error may include computing an intersection ratio between the second verification sample output and the second standard result. Since the second verification sample output includes a confidence p of the classification result, and a higher confidence implies a smaller error, the second error L2 relates to the confidence p as:

L2 = -((1 - p)^2 + log10(p)), 0 < p ≤ 1

(log10(p) is undefined at p = 0, so the confidence must be strictly positive).

Step 5100: stop training when the second error is smaller than a second preset error value.

A second preset error value is set in advance; when the second error is smaller than this value, the detection accuracy of the trained neural network model has reached the preset standard, and training can be stopped. It should be understood that steps 580, 590, and 5100 may be performed simultaneously with steps 530, 540, and 550 (as shown in fig. 9), and the first and second verification samples may be the same verification sample, whose output includes both the corresponding region of interest and its classification result. Of course, steps 580, 590, and 5100 may also be performed separately.
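The confidence-to-error relationship L2 = -((1 - p)^2 + log10(p)) stated above can be checked numerically; note that log10(p) requires p > 0:

```python
import math

def second_error(p):
    # L2 = -((1 - p)^2 + log10(p)); a higher confidence p gives a
    # smaller error. log10 is undefined at p = 0, so require 0 < p <= 1.
    assert 0.0 < p <= 1.0
    return -((1.0 - p) ** 2 + math.log10(p))

print(round(second_error(0.5), 4))   # 0.051
print(round(second_error(1.0), 4))   # perfect confidence -> zero error
```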
Fig. 10 is a flowchart illustrating a neural network model training method applied to CT image detection according to another exemplary embodiment of the present application. As shown in fig. 10, after step 590, the training method may further include:

Step 5110: when the second error is greater than or equal to the second preset error value, perform derivation optimization (i.e., gradient-based optimization) on the neural network model.

When the second error is greater than or equal to the second preset error value, the accuracy of the trained neural network model has not yet reached the preset standard; the neural network model therefore needs further training, and training stops only once the second error falls below the second preset error value.
Exemplary devices
Fig. 11 is a schematic structural diagram of a CT image detection apparatus according to an exemplary embodiment of the present application. As shown in fig. 11, the CT image detection apparatus 100 includes the following modules: an image acquisition module 1 configured to acquire a single-layer two-dimensional image to be detected among the multi-layer two-dimensional images, together with N consecutive two-dimensional images on one side of the single-layer two-dimensional image to be detected and/or N consecutive two-dimensional images on the other side of it, as the current image to be detected, where N is an integer greater than or equal to 1; and a region generation module 2 configured to input the current image to be detected into a neural network model to generate the region of interest of the CT image to be detected.
According to the CT image detection apparatus, the image acquisition module 1 acquires a single-layer two-dimensional image to be detected among the multi-layer two-dimensional images, together with N consecutive two-dimensional images on one side of it and/or N consecutive two-dimensional images on the other side of it, as the current image to be detected, where N is an integer greater than or equal to 1; the region generation module 2 then inputs the current image to be detected into a neural network model to generate the region of interest of the CT image to be detected. The neural network model thus replaces manual search for regions of interest in the CT image, greatly reducing the workload of radiology staff and improving detection efficiency. In addition, since fracture sites and the like are usually reflected across several consecutive slices, detecting a single-layer two-dimensional CT image together with the consecutive slices above and below it allows the detection result to be obtained by combining the adjacent layers, further improving detection accuracy.
Fig. 12 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application. As shown in fig. 12, the apparatus may further include a feature extraction module 3 configured to extract feature information of the current image to be detected; the region generation module 2 is then configured to generate the region of interest of the CT image to be detected according to the feature information.
Fig. 13 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application. As shown in fig. 13, the apparatus may further include an upsampling module 4 for performing an up-sampling operation on the feature information.
In an embodiment, the image acquisition module 1 is further configured to: when the number of two-dimensional images on one side of the single-layer two-dimensional image to be detected is less than N, abandon the acquisition of that single-layer two-dimensional image to be detected.
In an embodiment, the image acquisition module 1 is further configured to: when the number of two-dimensional images on one side of the single-layer two-dimensional image to be detected is less than N but greater than zero, repeatedly select two-dimensional images on that side so that the number of two-dimensional images on that side reaches N.
In an embodiment, the image acquisition module 1 is further configured to: sequentially take each layer of the multi-layer two-dimensional images as the single-layer two-dimensional image to be detected, and acquire the single-layer two-dimensional image to be detected together with N consecutive layers of two-dimensional images on one side of it and/or N consecutive layers of two-dimensional images on the other side of it as the current image to be detected.
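The boundary behaviour implemented by the image acquisition module — repeat-padding a side that has fewer than N neighbouring layers, or abandoning the layer entirely — can be sketched as follows. The function name and the `pad` parameter are illustrative:

```python
def slice_window(num_layers, i, n, pad="repeat"):
    # Return the indices of layer i plus n layers on each side.
    # pad="repeat" duplicates the boundary layer when fewer than n
    # neighbours exist; pad="drop" abandons the layer (returns None).
    idx = list(range(i - n, i + n + 1))
    if pad == "drop" and (idx[0] < 0 or idx[-1] >= num_layers):
        return None
    return [min(max(j, 0), num_layers - 1) for j in idx]

print(slice_window(10, 0, 2))              # [0, 0, 0, 1, 2] -- repeat padding
print(slice_window(10, 5, 2))              # [3, 4, 5, 6, 7]
print(slice_window(10, 0, 2, pad="drop"))  # None -- layer abandoned
```

Iterating `slice_window` over every layer index reproduces the "each layer in turn becomes the single-layer image to be detected" behaviour described above.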
Fig. 14 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application. As shown in fig. 14, the apparatus may further include a segmentation module 5 for performing segmentation processing on the multi-layer two-dimensional image. By segmenting the multi-layer two-dimensional image in the CT image to be detected, parts irrelevant to the detection result, such as internal organs and clothing, are deleted, while muscle and bone tissue affecting the detection result are segmented out and retained.
Fig. 15 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application. As shown in fig. 15, the apparatus may further include a classification module 6 for classifying the region of interest. In an embodiment, the classification module 6 may be configured to calculate the confidence of the classification result of the region of interest.
Fig. 16 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application. As shown in fig. 16, the apparatus may further include a positioning module 7 for precisely locating the region of interest to obtain the abnormal region.
Fig. 17 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application. As shown in fig. 17, the apparatus may further include a first training module 8 for inputting a first sample input and the corresponding first sample output into the neural network model for training, where the first sample input includes a single-layer training two-dimensional image together with N consecutive two-dimensional images on one side of it and/or N consecutive two-dimensional images on the other side of it, and the first sample output includes the position of the region of interest corresponding to the first sample input.
Fig. 18 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application. As shown in fig. 18, the apparatus may further include a pre-processing module 9 for pre-processing the first sample input. In an embodiment, the pre-processing may include any one or a combination of the following operations: adjusting the window width and window level of the first sample input, randomly flipping the first sample input, randomly adjusting the brightness of the first sample input, and randomly cropping the first sample input.
Fig. 19 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application. As shown in fig. 19, the apparatus may further include a first verification module 10 for verifying the training effect of the neural network model. The first verification module 10 includes the following sub-modules:
the first input submodule 101 is configured to input a first verification sample into the neural network model to obtain a first verification sample output, where the first verification sample input includes a single-layer verification two-dimensional image and N consecutive layers of two-dimensional images on one side of the single-layer verification two-dimensional image and/or N consecutive layers of two-dimensional images on the other side of the single-layer verification two-dimensional image; a first calculation submodule 102 for calculating a first error between the first validation sample output and the first standard result; and the first judgment submodule 103 is used for stopping training when the first error is smaller than a first preset error value, and otherwise, performing derivation optimization on the neural network model.
Fig. 20 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application. As shown in fig. 20, the apparatus may further include a second training module 11 for inputting a second sample input and the corresponding second sample output into the neural network model for training, where the second sample input includes a single-layer training two-dimensional image together with N consecutive two-dimensional images on one side of it and/or N consecutive two-dimensional images on the other side of it, and the second sample output includes the classification result of the region of interest corresponding to the second sample input.
Fig. 21 is a schematic structural diagram of a CT image detection apparatus according to another exemplary embodiment of the present application. As shown in fig. 21, the apparatus may further include a second verification module 12 for verifying the training effect of the neural network model. The second verification module 12 includes the following sub-modules:
the second input submodule 121 is configured to input a second verification sample into the neural network model to obtain a second verification sample output, where the second verification sample input includes a single-layer verification two-dimensional image and N consecutive layers of two-dimensional images on one side of the single-layer verification two-dimensional image and/or N consecutive layers of two-dimensional images on the other side of the single-layer verification two-dimensional image; a second calculation submodule 122 for calculating a second error between the second validation sample output and the second standard result; and the second judgment submodule 123 is configured to stop training when the second error is smaller than a second preset error value, and otherwise perform derivation optimization on the neural network model.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 22. The electronic device may be either or both of the first device and the second device, or a stand-alone device separate from them, which stand-alone device may communicate with the first device and the second device to receive the acquired input signals therefrom.
FIG. 22 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 22, the electronic device 20 includes one or more processors 21 and a memory 22.
The processor 21 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 20 to perform desired functions.
Memory 22 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer readable storage medium and executed by the processor 21 to implement the above-described CT image detection method of various embodiments of the present application and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 20 may further include: an input device 23 and an output device 24, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is a first device or a second device, the input device 23 may be a camera for capturing an input signal of an image. When the electronic device is a stand-alone device, the input means 23 may be a communication network connector for receiving the acquired input signals from the first device and the second device.
The input device 23 may also include, for example, a keyboard, a mouse, and the like.
The output device 24 may output various information including the determined distance information, direction information, and the like to the outside. The output devices 24 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for the sake of simplicity, only some of the components related to the present application in the electronic device 20 are shown in fig. 22, and components such as a bus, an input/output interface, and the like are omitted. In addition, the electronic device 20 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the method of detection of CT images according to various embodiments of the present application described in the "exemplary methods" section of this specification, supra.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, cause the processor to perform the steps in the method for detecting CT images according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments. It should be noted, however, that the advantages, effects, and the like mentioned in the present application are merely examples, not limitations, and should not be considered essential to the various embodiments of the present application. Furthermore, the specific details disclosed above are provided for the purposes of illustration and description only, and are not intended to be exhaustive or to limit the application to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As those skilled in the art will appreciate, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as, but not limited to."
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (16)

1. A method for detecting a CT image, wherein the CT image to be detected comprises a multi-layer two-dimensional image, the method comprising:
acquiring a single-layer two-dimensional image to be detected from the multi-layer two-dimensional image, together with N consecutive layers of two-dimensional images on one side of the single-layer two-dimensional image to be detected and/or N consecutive layers of two-dimensional images on the other side thereof, as a current image to be detected, wherein N is an integer greater than or equal to 1; and
inputting the current image to be detected into a neural network model to generate a region of interest of the CT image to be detected.
2. The detection method according to claim 1, wherein acquiring the single-layer two-dimensional image to be detected and the N consecutive layers of two-dimensional images on one side of the single-layer two-dimensional image to be detected and/or the N consecutive layers of two-dimensional images on the other side thereof as the current image to be detected comprises:
sequentially taking each layer of the multi-layer two-dimensional image as the single-layer two-dimensional image to be detected; and
acquiring the single-layer two-dimensional image to be detected and the N consecutive layers of two-dimensional images on one side of the single-layer two-dimensional image to be detected and/or the N consecutive layers of two-dimensional images on the other side thereof as the current image to be detected.
3. The detection method according to claim 1, wherein acquiring the single-layer two-dimensional image to be detected and the N consecutive layers of two-dimensional images on one side of the single-layer two-dimensional image to be detected and/or the N consecutive layers of two-dimensional images on the other side thereof as the current image to be detected comprises:
when the number of two-dimensional images on one side of the single-layer two-dimensional image to be detected is less than N, abandoning acquisition of the single-layer two-dimensional image to be detected.
4. The detection method according to claim 1, wherein acquiring the single-layer two-dimensional image to be detected and the N consecutive layers of two-dimensional images on one side of the single-layer two-dimensional image to be detected and/or the N consecutive layers of two-dimensional images on the other side thereof as the current image to be detected comprises:
when the number of two-dimensional images on one side of the single-layer two-dimensional image to be detected is less than N and greater than zero, repeatedly selecting a two-dimensional image on that side of the single-layer two-dimensional image to be detected so that the number of two-dimensional images on that side is N.
5. The detection method according to claim 1, further comprising, after generating the region of interest of the CT image to be detected:
classifying the region of interest, and calculating a confidence of the classification result of the region of interest.
6. The detection method according to claim 5, wherein the training method of the neural network model comprises:
inputting a second sample input and a corresponding second sample output into the neural network model for training, wherein the second sample input comprises a single-layer training two-dimensional image and N consecutive layers of two-dimensional images on one side of the single-layer training two-dimensional image and/or N consecutive layers of two-dimensional images on the other side thereof, and the second sample output comprises a classification result of a region of interest corresponding to the second sample input.
7. The detection method according to claim 6, wherein the training method further comprises:
inputting a second verification sample input into the neural network model to obtain a second verification sample output, wherein the second verification sample input comprises a single-layer verification two-dimensional image and N consecutive layers of two-dimensional images on one side of the single-layer verification two-dimensional image and/or N consecutive layers of two-dimensional images on the other side thereof;
calculating a second error between the second verification sample output and a second standard result; and
stopping training when the second error is less than a second preset error value.
8. The detection method according to claim 7, further comprising, after calculating the second error between the second verification sample output and the second standard result:
performing derivative optimization on the neural network model when the second error is greater than or equal to the second preset error value.
9. The detection method according to claim 1, further comprising, after generating the region of interest of the CT image to be detected:
precisely locating the region of interest to obtain an abnormal region.
10. The detection method according to any one of claims 1 to 9, wherein the training method of the neural network model comprises:
inputting a first sample input and a corresponding first sample output into the neural network model for training, wherein the first sample input comprises a single-layer training two-dimensional image and N consecutive layers of two-dimensional images on one side of the single-layer training two-dimensional image and/or N consecutive layers of two-dimensional images on the other side thereof, and the first sample output comprises a position of a region of interest corresponding to the first sample input.
11. The detection method according to any one of claims 1 to 9, wherein the training method further comprises performing any one or a combination of two or more of the following operations on the first sample input:
adjusting the window width and window level of the first sample input, randomly flipping the first sample input, randomly adjusting the brightness of the first sample input, and randomly cropping the first sample input.
12. The detection method according to claim 10, wherein the training method further comprises:
inputting a first verification sample input into the neural network model to obtain a first verification sample output, wherein the first verification sample input comprises a single-layer verification two-dimensional image and N consecutive layers of two-dimensional images on one side of the single-layer verification two-dimensional image and/or N consecutive layers of two-dimensional images on the other side thereof;
calculating a first error between the first verification sample output and a first standard result; and
stopping training when the first error is less than a first preset error value.
13. The detection method according to claim 12, further comprising, after calculating the first error between the first verification sample output and the first standard result:
performing derivative optimization on the neural network model when the first error is greater than or equal to the first preset error value.
14. An apparatus for detecting a CT image, wherein the CT image to be detected comprises a multi-layer two-dimensional image, the apparatus comprising:
an image acquisition module configured to acquire a single-layer two-dimensional image to be detected from the multi-layer two-dimensional image, together with N consecutive layers of two-dimensional images on one side of the single-layer two-dimensional image to be detected and/or N consecutive layers of two-dimensional images on the other side thereof, as a current image to be detected, wherein N is an integer greater than or equal to 1; and
a region generation module configured to input the current image to be detected into a neural network model to generate a region of interest of the CT image to be detected.
15. A computer-readable storage medium storing a computer program for executing the method for detecting a CT image according to any one of claims 1 to 13.
16. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the method for detecting a CT image according to any one of claims 1 to 13.
CN201911212094.8A 2019-11-28 2019-11-28 CT image detection method and device, storage medium and electronic equipment Pending CN110895812A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911212094.8A CN110895812A (en) 2019-11-28 2019-11-28 CT image detection method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN110895812A true CN110895812A (en) 2020-03-20

Family

ID=69788286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911212094.8A Pending CN110895812A (en) 2019-11-28 2019-11-28 CT image detection method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110895812A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105160361A (en) * 2015-09-30 2015-12-16 东软集团股份有限公司 Image identification method and apparatus
US9846938B2 (en) * 2015-06-01 2017-12-19 Virtual Radiologic Corporation Medical evaluation machine learning workflows and processes
CN107945168A (en) * 2017-11-30 2018-04-20 上海联影医疗科技有限公司 The processing method and magic magiscan of a kind of medical image
CN108520519A (en) * 2018-04-11 2018-09-11 上海联影医疗科技有限公司 A kind of image processing method, device and computer readable storage medium
US20180268526A1 (en) * 2017-02-22 2018-09-20 Siemens Healthcare Gmbh Denoising medical images by learning sparse image representations with a deep unfolding approach using scan specific metadata
CN110047075A (en) * 2019-03-15 2019-07-23 天津大学 A kind of CT image partition method based on confrontation network
CN110111313A (en) * 2019-04-22 2019-08-09 腾讯科技(深圳)有限公司 Medical image detection method and relevant device based on deep learning
CN110210519A (en) * 2019-05-10 2019-09-06 上海联影智能医疗科技有限公司 Classification method, computer equipment and storage medium
US10417788B2 (en) * 2016-09-21 2019-09-17 Realize, Inc. Anomaly detection in volumetric medical images using sequential convolutional and recurrent neural networks
CN110310723A (en) * 2018-03-20 2019-10-08 青岛海信医疗设备股份有限公司 Bone image processing method, electronic equipment and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914841A (en) * 2020-08-07 2020-11-10 温州医科大学 CT image processing method and device
CN111914841B (en) * 2020-08-07 2023-10-13 温州医科大学 CT image processing method and device
CN111899850A (en) * 2020-08-12 2020-11-06 上海依智医疗技术有限公司 Medical image information processing method, display method and readable storage medium
CN111967539A (en) * 2020-09-29 2020-11-20 北京大学口腔医学院 Recognition method and device for maxillofacial fracture based on CBCT database and terminal equipment
CN111967539B (en) * 2020-09-29 2021-08-31 北京大学口腔医学院 Recognition method and device for maxillofacial fracture based on CBCT database and terminal equipment

Similar Documents

Publication Publication Date Title
CN110992376A (en) CT image-based rib segmentation method, device, medium and electronic equipment
CN110969245B (en) Target detection model training method and device for medical image
US11227418B2 (en) Systems and methods for deep learning-based image reconstruction
CN110895812A (en) CT image detection method and device, storage medium and electronic equipment
CN113744183B (en) Pulmonary nodule detection method and system
US9275478B2 (en) Method, control system, and computer program for compression of digital breast tomosynthesis data
US9361711B2 (en) Lesion-type specific reconstruction and display of digital breast tomosynthesis volumes
Xu et al. Automated cavity detection of infectious pulmonary tuberculosis in chest radiographs
Garlapati et al. Detection of COVID-19 using X-ray image classification
US20220284578A1 (en) Image processing for stroke characterization
US20220285011A1 (en) Document creation support apparatus, document creation support method, and program
US11837346B2 (en) Document creation support apparatus, method, and program
Nazia Fathima et al. Diagnosis of Osteoporosis using modified U-net architecture with attention unit in DEXA and X-ray images
Xiao et al. A cascade and heterogeneous neural network for CT pulmonary nodule detection and its evaluation on both phantom and patient data
WO2020172558A1 (en) System and method for automatic detection of vertebral fractures on imaging scans using deep networks
Abed Lung Cancer Detection from X-ray images by combined Backpropagation Neural Network and PCA
WO2022056297A1 (en) Method and apparatus for analyzing medical image data in a latent space representation
CN111724356B (en) Image processing method and system for CT image pneumonia recognition
CN111080625B (en) Training method and training device for lung image strip and rope detection model
CN112288708B (en) Method, device, medium, and electronic device for detecting lymph node in CT image
KR102332472B1 (en) Tumor automatic segmentation using deep learning based on dual window setting in a medical image
US20170079604A1 (en) System and method for digital breast tomosynthesis
JP7240845B2 (en) Image processing program, image processing apparatus, and image processing method
CN113469942A (en) CT image lesion detection method
CN111353975A (en) Network model training method and device and focus positioning method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200320