WO2020114158A1 - Lesion detection method, apparatus, device and storage medium

Lesion detection method, apparatus, device and storage medium

Info

Publication number
WO2020114158A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature map
lesion
generate
neural network
preset
Application number
PCT/CN2019/114452
Other languages
English (en)
French (fr)
Inventor
黄锐
高云河
Original Assignee
北京市商汤科技开发有限公司 (Beijing SenseTime Technology Development Co., Ltd.)
Application filed by 北京市商汤科技开发有限公司 (Beijing SenseTime Technology Development Co., Ltd.)
Priority to JP2021500548A (JP7061225B2)
Priority to KR1020207038088A (KR20210015972A)
Priority to SG11202013074SA
Publication of WO2020114158A1
Priority to US17/134,771 (US20210113172A1)

Classifications

    • A61B 6/032: Transmission computed tomography [CT]
    • A61B 6/463: Displaying means characterised by displaying multiple images or images and diagnostic data on one display
    • A61B 6/5205: Devices using data or image processing for radiation diagnosis involving processing of raw data to produce diagnostic data
    • A61B 6/5217: Devices using data or image processing involving extracting a diagnostic or physiological parameter from medical diagnostic data
    • A61B 6/5223: Devices using data or image processing involving generating planar views from image data, e.g. extracting a coronal view from a 3D image
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 1/20: Processor architectures; processor configuration, e.g. pipelining
    • G06T 3/4046: Scaling of whole images or parts thereof using neural networks
    • G06T 7/0012: Biomedical image inspection
    • G06T 7/11: Region-based segmentation
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G16H 15/00: ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G16H 30/20: ICT for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 30/40: ICT for processing medical images, e.g. editing
    • G16H 50/20: ICT for computer-aided diagnosis, e.g. based on medical expert systems
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30061: Lung
    • G06T 2207/30096: Tumor; Lesion

Definitions

  • the present disclosure relates to the field of computer technology, and in particular, to a method, device, equipment, and storage medium for detecting lesions.
  • Computer-aided diagnosis (CAD) uses computer analysis of medical images to assist diagnosis.
  • A lesion is the part of a tissue or organ that has been damaged by a pathogenic factor; it is the site on the body where disease occurs. For example, if part of a human lung is destroyed by tuberculosis bacteria, that part is a tuberculosis lesion.
  • Against this background, CT image-based lesion detection methods have received increasing attention.
  • the present disclosure provides a lesion detection method, device, equipment, and storage medium that accurately detect lesions in multiple parts of a patient's body, supporting a preliminary whole-body cancer assessment of the patient.
  • In a first aspect, the present disclosure provides a lesion detection method. The method includes: acquiring a first image comprising multiple sampling slices, the first image being a three-dimensional image with an X-axis dimension, a Y-axis dimension, and a Z-axis dimension; performing feature extraction on the first image to generate a first feature map containing the features and positions of the lesions, the first feature map containing three-dimensional features in the X-axis, Y-axis, and Z-axis dimensions; performing dimensionality reduction on the features of the first feature map to generate a second feature map, the second feature map being two-dimensional with only the X-axis and Y-axis dimensions; and detecting the second feature map to obtain the position of each lesion in the second feature map and the confidence corresponding to that position.
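  • As an illustrative sketch only, the four claimed steps can be read as the following pipeline (a minimal sketch assuming PyTorch-style tensors; `backbone`, `detect_box`, and `detect_conf` are hypothetical stand-ins, not names from the patent):

```python
import torch

def detect_lesions(first_image, backbone, detect_box, detect_conf):
    """Sketch of the claimed pipeline; shapes and module interfaces are assumptions."""
    # first_image: (N, C, Z, X, Y) three-dimensional volume of sampling slices
    feat3d = backbone(first_image)             # first feature map: 3D features with lesion positions
    n, c, z, x, y = feat3d.shape
    feat2d = feat3d.reshape(n, c * z, x, y)    # dimensionality reduction: merge channel and Z axes
    boxes = detect_box(feat2d)                 # position of each lesion in the second feature map
    scores = detect_conf(feat2d)               # confidence corresponding to each position
    return boxes, scores
```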
  • In one implementation, acquiring the first image comprising multiple sampling slices includes: resampling an acquired CT image of the patient at a first sampling interval to generate the first image comprising multiple sampling slices.
  • In one implementation, performing feature extraction on the first image to generate the first feature map containing the features and positions of the lesions includes: downsampling the first image through the first neural network to generate a third feature map; downsampling the third feature map through the residual module of the second neural network to generate a fourth feature map; extracting features of lesions at different scales in the fourth feature map through the DenseASPP module of the second neural network; after the DenseASPP processing, generating a fourth preset feature map with the same resolution as the fourth feature map, and upsampling the DenseASPP-processed feature map through the deconvolution layer and residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map; and fusing the third feature map with the third preset feature map to generate a first feature map with the same resolution as the third preset feature map, and fusing the fourth feature map with the fourth preset feature map to generate a first feature map with the same resolution as the fourth preset feature map. The third and fourth preset feature maps each include the position of the lesion, which is used to generate the position of the lesion in the first feature map.
  • In one implementation, performing feature extraction on the first image to generate the first feature map containing the features and positions of the lesions includes: downsampling the first image through the residual module of the second neural network to generate a fourth feature map with a lower resolution than the first image; extracting features of lesions at different scales in the fourth feature map through the DenseASPP module of the second neural network; and, after the DenseASPP processing, upsampling the DenseASPP-processed feature map through the deconvolution layer and residual module of the second neural network to generate a first preset feature map with the same resolution as the first image. The first preset feature map includes the position of the lesion, which is used to generate the position of the lesion in the first feature map.
  • In one implementation, performing feature extraction on the first image to generate the first feature map containing the features and positions of the lesions includes: downsampling the first image through the first neural network to generate a third feature map with a lower resolution than the first image; downsampling the third feature map through the residual module of the second neural network to generate a fourth feature map with a lower resolution than the third feature map; downsampling the fourth feature map through the residual module of the second neural network to generate a fifth feature map with a lower resolution than the fourth feature map; extracting features of lesions at different scales in the fifth feature map through the DenseASPP module of the second neural network; after the DenseASPP processing, generating a fifth preset feature map with the same resolution as the fifth feature map; upsampling the DenseASPP-processed feature map through the deconvolution layer and residual module of the second neural network to generate a fourth preset feature map with the same resolution as the fourth feature map, or a third preset feature map with the same resolution as the third feature map; and fusing the third, fourth, and fifth feature maps with the third, fourth, and fifth preset feature maps respectively to generate first feature maps with the corresponding resolutions. The third, fourth, and fifth preset feature maps each include the position of the lesion, which is used to generate the position of the lesion in the first feature map.
  • the first neural network includes: a convolutional layer and a residual module cascaded with the convolutional layer; and the second neural network includes: 3D U-Net network, the 3D U-Net network includes: a convolution layer, a deconvolution layer, a residual module, and the DenseASPP module.
  • the second neural network is a stack of multiple 3D U-Net networks.
  • the residual module includes: a convolutional layer, a batch normalization layer, a ReLU activation function, and a maximum pooling layer.
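  • A minimal sketch of such a residual module in PyTorch (kernel sizes, channel counts, and the X/Y-only pooling are assumptions, not the patent's specification):

```python
import torch
import torch.nn as nn

class ResidualModule(nn.Module):
    """Convolution + batch normalization + ReLU with a skip connection,
    followed by max pooling, as listed above; the details are assumed."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
        )
        self.skip = nn.Conv3d(in_ch, out_ch, kernel_size=1)  # match channels for the sum
        self.pool = nn.MaxPool3d(kernel_size=(1, 2, 2))      # pool in X/Y only, keep Z slices

    def forward(self, x):
        return self.pool(torch.relu(self.body(x) + self.skip(x)))
```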
  • In one implementation, performing dimensionality reduction on the features of the first feature map to generate the second feature map includes: merging the channel dimension and the Z-axis dimension of each feature of the first feature map, so that the dimensions of every feature of the first feature map consist of the X-axis dimension and the Y-axis dimension; the first feature map whose features consist of only the X-axis and Y-axis dimensions is the second feature map.
  • In one implementation, detecting the second feature map includes: detecting the second feature map through a first detection sub-network to obtain the coordinates of the position of each lesion in the second feature map; and detecting the second feature map through a second detection sub-network to obtain the confidence corresponding to each lesion in the second feature map.
  • the first detection sub-network includes multiple convolutional layers, each followed by a ReLU activation function; the second detection sub-network likewise includes multiple convolutional layers, each followed by a ReLU activation function.
  • In one implementation, before performing feature extraction on the first image to generate the first feature map containing the features and positions of the lesions, the method further includes: inputting a pre-stored three-dimensional image containing multiple lesion annotations to the first neural network, the lesion annotations being used to mark the lesions; and training the parameters of the first neural network, the second neural network, the DenseASPP module, the first detection sub-network, and the second detection sub-network, where the position of each of the multiple lesions is output by the first detection sub-network.
  • In another implementation, before performing feature extraction on the first image to generate the first feature map containing the features and positions of the lesions, the method further includes: inputting a three-dimensional image containing multiple lesion annotations to the second neural network, the lesion annotations being used to mark the lesions; and training the parameters of the second neural network, the DenseASPP module, the first detection sub-network, and the second detection sub-network by gradient descent, where the position of each of the multiple lesions is output by the first detection sub-network.
  • In a second aspect, the present disclosure provides a lesion detection device. The device includes: an acquisition unit configured to acquire a first image comprising multiple sampling slices, the first image being a three-dimensional image with X-axis, Y-axis, and Z-axis dimensions; a first generating unit configured to perform feature extraction on the first image to generate a first feature map containing the features and positions of the lesions, the first feature map containing three-dimensional features in the X-axis, Y-axis, and Z-axis dimensions; a second generating unit configured to perform dimensionality reduction on the features of the first feature map to generate a second feature map containing two-dimensional features in the X-axis and Y-axis dimensions; and a detection unit configured to detect the second feature map to obtain the position of each lesion in the second feature map and the confidence corresponding to that position.
  • the acquisition unit is specifically configured to resample the acquired CT image of the patient at a first sampling interval to generate a first image including multiple sampling slices.
  • In one implementation, the first generating unit is specifically configured to: downsample the first image through the first neural network to generate a third feature map with a lower resolution than the first image; downsample the third feature map through the residual module of the second neural network to generate a fourth feature map with a lower resolution than the third feature map; extract features of lesions at different scales in the fourth feature map through the DenseASPP module of the second neural network; after the DenseASPP processing, generate a fourth preset feature map with the same resolution as the fourth feature map, and upsample the DenseASPP-processed feature map through the deconvolution layer and residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map; and fuse the third feature map with the third preset feature map to generate a first feature map with the same resolution as the third preset feature map, and fuse the fourth feature map with the fourth preset feature map to generate a first feature map with the same resolution as the fourth preset feature map.
  • In one implementation, the first generating unit is specifically configured to: downsample the first image through the residual module of the second neural network to generate a fourth feature map with a lower resolution than the first image; extract features of lesions at different scales in the fourth feature map through the DenseASPP module of the second neural network; after the DenseASPP processing, upsample the DenseASPP-processed feature map through the deconvolution layer and residual module of the second neural network to generate a first preset feature map with the same resolution as the first image; and fuse the first image with the first preset feature map to generate a first feature map with the same resolution as the first preset feature map. The first preset feature map includes the position of the lesion, which is used to generate the position of the lesion in the first feature map.
  • In one implementation, the first generating unit is specifically configured to: downsample the first image through the first neural network to generate a third feature map with a lower resolution than the first image; downsample the third feature map through the residual module of the second neural network to generate a fourth feature map with a lower resolution than the third feature map; downsample the fourth feature map through the residual module of the second neural network to generate a fifth feature map with a lower resolution than the fourth feature map; extract features of lesions at different scales in the fifth feature map through the DenseASPP module of the second neural network; after the DenseASPP processing, generate a fifth preset feature map with the same resolution as the fifth feature map; upsample the DenseASPP-processed feature map through the deconvolution layer and residual module of the second neural network to generate a fourth preset feature map with the same resolution as the fourth feature map, or a third preset feature map with the same resolution as the third feature map; and fuse the third, fourth, and fifth feature maps with the third, fourth, and fifth preset feature maps respectively to generate first feature maps with the corresponding resolutions. The third, fourth, and fifth preset feature maps each include the position of the lesion, which is used to generate the position of the lesion in the first feature map.
  • the first neural network includes: a convolutional layer and a residual module cascaded with the convolutional layer; and the second neural network includes: 3D U-Net network, the 3D U-Net network includes: a convolution layer, a deconvolution layer, a residual module, and the DenseASPP module.
  • the second neural network is a stack of multiple 3D U-Net networks.
  • the residual module includes: a convolutional layer, a batch normalization layer, a ReLU activation function, and a maximum pooling layer.
  • In one implementation, the second generating unit is specifically configured to merge the channel dimension and the Z-axis dimension of each feature of the first feature map, so that the dimensions of each feature of the first feature map consist of the X-axis dimension and the Y-axis dimension; the first feature map whose features consist of only the X-axis and Y-axis dimensions is the second feature map.
  • the detection unit is specifically configured to: detect the second feature map through the first detection sub-network to obtain the coordinates of the position of each lesion in the second feature map; and detect the second feature map through the second detection sub-network to obtain the confidence corresponding to each lesion in the second feature map.
  • the first detection sub-network includes: a plurality of convolutional layers, and each of the plurality of convolutional layers is connected to a ReLU activation function;
  • the second detection sub-network includes: a plurality of convolutional layers, and each of the plurality of convolutional layers is connected to a ReLU activation function.
  • In one implementation, the device further includes a training unit configured to: before the first generating unit performs feature extraction on the first image to generate the first feature map containing the features of the lesions, input a pre-stored three-dimensional image containing multiple lesion annotations to the first neural network, the lesion annotations being used to mark the lesions; and train the parameters of the first neural network, the second neural network, the first detection sub-network, and the second detection sub-network, where the position of each of the multiple lesions is output by the first detection sub-network.
  • In another implementation, the device further includes a training unit configured to: before the first generating unit performs feature extraction on the first image to generate the first feature map containing the features and positions of the lesions, input a three-dimensional image containing multiple lesion annotations to the second neural network, the lesion annotations being used to mark the lesions; and train the parameters of the second neural network, the first detection sub-network, and the second detection sub-network, where the position of each of the multiple lesions is output by the first detection sub-network.
  • In a third aspect, the present disclosure provides a lesion detection device including a processor, a display, and a memory that are connected to each other. The display is used to display the position of each lesion and the confidence corresponding to that position; the memory is used to store application program code; and the processor is configured to call the program code to perform the lesion detection method of the first aspect.
  • In a fourth aspect, the present disclosure provides a computer-readable storage medium for storing one or more computer programs; the one or more computer programs include instructions for performing the lesion detection method of the first aspect.
  • In a fifth aspect, the present disclosure provides a computer program including lesion detection instructions; when the computer program is executed on a computer, the lesion detection instructions are used to perform the lesion detection method provided in the first aspect.
  • the present disclosure provides a method, device, equipment and storage medium for detecting lesions.
  • a first image including multiple sampling slices is obtained, and the first image is a three-dimensional image including an X-axis dimension, a Y-axis dimension, and a Z-axis dimension.
  • feature extraction is performed on the first image to generate a first feature map containing the features and positions of the lesions; the first feature map contains three-dimensional features in the X-axis, Y-axis, and Z-axis dimensions.
  • the features included in the first feature map are subjected to dimensionality reduction processing to generate a second feature map; the second feature map includes two-dimensional features in the X-axis dimension and the Y-axis dimension.
  • the second feature map is detected to obtain the position of each lesion in the second feature map and the confidence corresponding to that position.
  • FIG. 1 is a schematic diagram of a network architecture of a lesion detection system provided by the present disclosure.
  • FIG. 2 is a schematic flowchart of a method for detecting a lesion provided by the present disclosure.
  • FIG. 3 is a schematic block diagram of a lesion detection device provided by the present disclosure.
  • FIG. 4 is a schematic structural diagram of a lesion detection device provided by the present disclosure.
  • the term “if” may be interpreted as “when” or “once” or “in response to determination” or “in response to detection” depending on the context .
  • the phrase “if determined” or “if [described condition or event] is detected” may be interpreted in context to mean “once determined”, “in response to determination”, “once [described condition or event] is detected”, or “in response to detection of [described condition or event]”.
  • the devices described in this disclosure include, but are not limited to, other portable devices such as laptop computers or tablet computers with touch-sensitive surfaces (eg, touch screen displays and/or touch pads). It should also be understood that, in some embodiments, the device is not a portable communication device, but a desktop computer with a touch-sensitive surface (eg, touch screen display and/or touch pad).
  • the device may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
  • the device supports various applications, such as one or more of the following: drawing applications, presentation applications, word processing applications, website creation applications, disk burning applications, spreadsheet applications, game applications, phone applications, video conferencing applications, email applications, instant messaging applications, exercise support applications, photo management applications, digital camera applications, digital video camera applications, web browsing applications, digital music player applications, and/or digital video player applications.
  • Various applications that can be executed on the device can use at least one common physical user interface device such as a touch-sensitive surface.
  • One or more functions of the touch-sensitive surface and corresponding information displayed on the device can be adjusted and/or changed between applications and/or within the corresponding applications.
  • in this way, the common physical architecture of the device (e.g., the touch-sensitive surface) can support various applications with user interfaces that are intuitive and transparent to the user.
  • FIG. 1 is a schematic diagram of a lesion detection system provided by the present disclosure.
  • the system 10 may include a first neural network 101, a second neural network 102, and a detection subnet (Detection Subnet) 103.
  • a lesion is the part of a tissue or organ that has been damaged by a pathogenic factor; it is the site on the body where disease occurs. For example, if a part of the human lung is destroyed by tuberculosis bacteria, that part is a tuberculosis lesion.
  • the first neural network 101 includes a convolutional layer (Conv1) and a residual block (SEResBlock) cascaded with the convolutional layer.
  • the residual module may include: a batch normalization (BN) layer, a rectified linear unit (ReLU) activation function, and a maximum pooling layer (Max-pooling).
  • the first neural network 101 may be used to downsample the first image input to the first neural network 101 in the X-axis dimension and the Y-axis dimension to generate a third feature map.
  • the first image is a three-dimensional image with an X-axis dimension, a Y-axis dimension, and a Z-axis dimension (that is, the first image is formed by stacking, along the Z axis, multiple two-dimensional images that each have an X-axis dimension and a Y-axis dimension); for example, the first image may be a 512*512*9 three-dimensional image.
  • the first neural network 101 convolves the first image with the convolution kernels of its convolutional layer to generate a feature map, and then pools that feature map through the residual module to generate the third feature map, whose resolution is lower than that of the first image. For example, the first neural network 101 can reduce a 512*512*9 three-dimensional image to a 256*256*9 three-dimensional image, or to a 128*128*9 three-dimensional image. This downsampling extracts the lesion features contained in the input first image and removes some unnecessary regions of the first image.
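  • A toy shape check of this X/Y-only downsampling (the stride and channel count are assumptions):

```python
import torch
import torch.nn as nn

# Strided convolution that halves X and Y while leaving the Z dimension (9 slices) intact.
conv1 = nn.Conv3d(1, 16, kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1))
x = torch.randn(1, 1, 9, 512, 512)   # (N, C, Z, X, Y): a 512*512*9 volume
print(conv1(x).shape)                # torch.Size([1, 16, 9, 256, 256])
```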
  • the purpose of downsampling in the embodiments of the present disclosure is to generate a thumbnail of the first image so that the first image conforms to the size of the display area.
  • the purpose of the upsampling in the embodiments of the present disclosure is to enlarge the original image by interpolating between its pixels and inserting new pixels, which is conducive to the detection of small lesions.
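  • For illustration, interpolation-based upsampling can be sketched as follows (the patent's networks use deconvolution layers; `F.interpolate` is shown only to illustrate inserting new pixels between existing ones):

```python
import torch
import torch.nn.functional as F

feat = torch.randn(1, 16, 9, 128, 128)             # (N, C, Z, X, Y)
up = F.interpolate(feat, scale_factor=(1, 2, 2),   # enlarge X and Y only
                   mode='trilinear', align_corners=False)
print(up.shape)                                    # torch.Size([1, 16, 9, 256, 256])
```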
  • the second neural network 102 may include four stacked 3D U-net networks.
  • the expanded view of the 3D U-net network is shown as 104 in Figure 1.
  • stacking multiple 3D U-net networks can improve detection accuracy.
  • the embodiment of the present disclosure only exemplifies the number of 3D U-net networks and does not limit them.
  • the 3D U-Net network includes: convolution layer (conv), deconvolution layer (deconv), residual module and DenseASPP module.
  • the residual module of the second neural network 102 may be used to downsample the third feature map output by the first neural network 101 in the X-axis dimension and the Y-axis dimension to generate a fourth feature map.
  • the residual module of the second neural network 102 can also be used to downsample the fourth feature map in the X-axis dimension and the Y-axis dimension to generate a fifth feature map.
  • the features of the lesions at different scales in the fifth feature map are extracted through the DenseASPP module of the second neural network 102.
  • After the DenseASPP processing, a fifth preset feature map with the same resolution as the fifth feature map is generated; the DenseASPP-processed feature map is upsampled through the deconvolution layer and residual module of the second neural network 102 to generate a fourth preset feature map with the same resolution as the fourth feature map; or it is upsampled through the deconvolution layer and residual module of the second neural network 102 to generate a third preset feature map with the same resolution as the third feature map.
  • The third feature map and the third preset feature map are fused to generate a first feature map with the same resolution as the third preset feature map; the fourth feature map and the fourth preset feature map are fused to generate a first feature map with the same resolution as the fourth preset feature map; and the fifth feature map and the fifth preset feature map are fused to generate a first feature map with the same resolution as the fifth preset feature map.
  • the third preset feature map, the fourth preset feature map, and the fifth preset feature map respectively include the position of the lesion; the position of the lesion is used to generate the position of the lesion in the first feature map.
  • the DenseASPP module cascades five dilated-convolution combinations with different dilation rates, which enables it to extract features of lesions at different scales.
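  • A minimal DenseASPP sketch (the dilation rates (3, 6, 12, 18, 24) follow the original DenseASPP paper and, like the channel counts, are assumptions here):

```python
import torch
import torch.nn as nn

class DenseASPP(nn.Module):
    """Five dilated convolutions, densely connected: each branch sees the
    concatenation of the block input and all earlier branch outputs."""
    def __init__(self, in_ch, branch_ch=32, rates=(3, 6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList()
        ch = in_ch
        for r in rates:
            self.branches.append(nn.Sequential(
                nn.Conv3d(ch, branch_ch, kernel_size=3,
                          padding=(1, r, r), dilation=(1, r, r)),  # dilate in X/Y only
                nn.ReLU(inplace=True),
            ))
            ch += branch_ch
        self.project = nn.Conv3d(ch, in_ch, kernel_size=1)  # keep resolution and channels

    def forward(self, x):
        feats = [x]
        for branch in self.branches:
            feats.append(branch(torch.cat(feats, dim=1)))
        return self.project(torch.cat(feats, dim=1))
```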
  • the detection sub-network 103 may include: a first detection sub-network and a second detection sub-network.
  • the first detection sub-network includes: multiple convolutional layers, and each of the multiple convolutional layers is connected to a ReLU activation function.
  • the second detection sub-network includes: multiple convolutional layers, and each of the multiple convolutional layers is connected to a ReLU activation function.
  • the first detection sub-network is used to detect the second feature map after dimensionality reduction by the first feature map, and detect the coordinates of the position of each lesion in the second feature map.
  • The input second feature map is processed through four cascaded convolutional layers in the first detection sub-network, where each convolutional layer includes a Y*Y convolution kernel; the coordinates of the upper-left corner (x1, y1) and the lower-right corner (x2, y2) of each lesion are obtained in turn to determine the position of each lesion in the second feature map.
  • the second feature map is detected through the second detection sub-network, and the confidence corresponding to each lesion in the second feature map is detected.
  • The input second feature map is processed through four cascaded convolutional layers in the second detection sub-network, where each convolutional layer includes a Y*Y convolution kernel; the coordinates of the upper-left corner (x1, y1) and the lower-right corner (x2, y2) of each lesion determine the position of each lesion in the second feature map, and the confidence corresponding to that position is then output.
  • the confidence corresponding to a position in the embodiments of the present disclosure indicates how strongly the detected position can be trusted to be a true lesion.
  • the confidence of the location of a certain lesion may be 90%.
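  • A sketch of the two detection sub-networks as described (the layer count, widths, and per-position box/confidence encoding are assumptions):

```python
import torch
import torch.nn as nn

def make_head(in_ch, out_ch, n_layers=4, mid_ch=128):
    """Cascaded 2D convolutions, each followed by a ReLU, as described above."""
    layers, ch = [], in_ch
    for _ in range(n_layers - 1):
        layers += [nn.Conv2d(ch, mid_ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
        ch = mid_ch
    layers.append(nn.Conv2d(ch, out_ch, kernel_size=3, padding=1))
    return nn.Sequential(*layers)

feat2d = torch.randn(1, 64, 128, 128)   # second feature map: (N, C, X, Y)
box_head = make_head(64, 4)             # first sub-network: (x1, y1, x2, y2) per position
conf_head = make_head(64, 1)            # second sub-network: confidence per position
boxes = box_head(feat2d)
scores = torch.sigmoid(conf_head(feat2d))
```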
  • In this way, lesions in multiple parts of the patient's body can be detected accurately, enabling a preliminary whole-body cancer assessment of the patient.
  • The lesion annotations are used to mark the lesions (for example, each lesion is marked with a bounding box, and the coordinates of its position are also marked); the gradient descent method is used to train the parameters of the first neural network, the second neural network, the first detection sub-network, and the second detection sub-network, where the position of each of the multiple lesions is output by the first detection sub-network.
  • the gradient of the gradient descent method can be calculated by the back propagation algorithm.
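  • A generic gradient-descent training loop with back-propagation might look as follows (a sketch only: `model`, `loader`, `box_loss`, and `conf_loss` are hypothetical stand-ins, and the optimizer settings are assumptions):

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)  # settings assumed
for volume, target_boxes, target_labels in loader:   # annotated 3D images with lesion boxes
    boxes, scores = model(volume)                    # forward pass through the networks
    loss = box_loss(boxes, target_boxes) + conf_loss(scores, target_labels)
    optimizer.zero_grad()
    loss.backward()      # gradients computed by back-propagation
    optimizer.step()     # gradient-descent parameter update
```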
  • The lesion annotations are used to mark the lesions; the parameters of the second neural network, the first detection sub-network, and the second detection sub-network are trained by gradient descent, where the position of each of the multiple lesions is output by the first detection sub-network.
  • the lesion detection method may be performed by an electronic device such as a terminal device or a server; the terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, etc.
  • the method can be implemented by the processor calling computer-readable instructions stored in the memory.
  • the method can be performed by a server.
  • the method may include at least the following steps:
  • the first image is a three-dimensional image including an X-axis dimension, a Y-axis dimension, and a Z-axis dimension.
  • the acquired CT image of the patient is resampled at a first sampling interval to generate a first image including multiple sampling slices.
  • the CT image of the patient may include 130 slices, the thickness of each slice is 2.0 mm, and the first sampling interval in the X-axis dimension and Y-axis dimension may be 2.0 mm.
  • the CT image of the patient is a scan sequence of multiple tomographic slices of the patient's tissue or organ; the number of slices may be, for example, 130.
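  • Resampling a CT volume to a fixed sampling interval can be sketched as follows (the `scipy.ndimage.zoom` call and linear interpolation order are assumptions; the 2.0 mm target and 130 slices follow the example above, while the in-plane spacing is hypothetical):

```python
import numpy as np
from scipy.ndimage import zoom

def resample_ct(volume, spacing_mm, new_spacing_mm=(2.0, 2.0, 2.0)):
    """Resample a (Z, X, Y) CT volume from its native voxel spacing (mm per
    axis) to a uniform sampling interval; interpolation order is assumed."""
    factors = np.asarray(spacing_mm, float) / np.asarray(new_spacing_mm, float)
    return zoom(volume, factors, order=1)

# e.g. 130 slices of 2.0 mm thickness with hypothetical 0.8 mm in-plane pixels:
vol = np.zeros((130, 512, 512), dtype=np.float32)
print(resample_ct(vol, spacing_mm=(2.0, 0.8, 0.8)).shape)  # about (130, 205, 205)
```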
  • A lesion is the part of the patient's tissue or organ that has been damaged by a pathogenic factor; it is the site on the body where disease occurs. For example, if a part of the human lung is destroyed by tuberculosis bacteria, that part is a tuberculosis lesion.
  • the first image is a three-dimensional image with X-axis, Y-axis, and Z-axis dimensions (that is, the first image is formed by stacking N two-dimensional images, each with an X-axis dimension and a Y-axis dimension, where N is greater than or equal to 2 and each two-dimensional image is a cross-sectional image at a different position of the tissue under examination); for example, the first image may be a 512*512*9 three-dimensional image.
  • S202 Perform feature extraction on the first image to generate a first feature map containing features of the lesion; the first feature map includes the three-dimensional features of the X-axis dimension, the Y-axis dimension, and the Z-axis dimension.
  • feature extraction is performed on the first image to generate a first feature map containing features and positions of the lesion, which may include, but is not limited to, the following situations.
  • Case 1 Down-sampling the first image through the first neural network to generate a third feature map.
  • the third feature map is down-sampled by the residual module of the second neural network to generate a fourth feature map.
  • the features of the lesions of different scales in the fourth feature map are extracted through the DenseASPP module of the second neural network.
  • After the DenseASPP processing, a fourth preset feature map with the same resolution as the fourth feature map is generated, and the DenseASPP-processed feature map is upsampled through the deconvolution layer and residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map.
  • The third feature map and the third preset feature map are fused to generate a first feature map with the same resolution as the third preset feature map, and the fourth feature map and the fourth preset feature map are fused to generate a first feature map with the same resolution as the fourth preset feature map; the third and fourth preset feature maps each include the position of the lesion, which is used to generate the position of the lesion in the first feature map.
  • Case 2 The first image is down-sampled by the residual module of the second neural network to generate a fourth feature map.
  • the features of the lesions of different scales in the fourth feature map are extracted through the DenseASPP module of the second neural network.
  • the feature map processed by the DenseASPP module is up-sampled by the deconvolution layer and the residual module of the second neural network to generate a first preset feature map with the same resolution size as the first image.
  • The first preset feature map includes the position of the lesion; this position is used to generate the position of the lesion in the first feature map.
  • Case 3 The first image is down-sampled by the first neural network to generate a third feature map.
  • the third feature map is down-sampled by the residual module of the second neural network to generate a fourth feature map.
  • the fourth feature map is down-sampled by the residual module of the second neural network to generate a fifth feature map.
  • the features of the lesions at different scales in the fifth feature map are extracted through the DenseASPP module of the second neural network.
  • After the DenseASPP processing, a fifth preset feature map with the same resolution as the fifth feature map is generated; the DenseASPP-processed feature map is upsampled through the deconvolution layer and residual module of the second neural network to generate a fourth preset feature map with the same resolution as the fourth feature map; or it is upsampled through the deconvolution layer and residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map.
  • The third feature map and the third preset feature map are fused to generate a first feature map with the same resolution as the third preset feature map; the fourth feature map and the fourth preset feature map are fused to generate a first feature map with the same resolution as the fourth preset feature map; and the fifth feature map and the fifth preset feature map are fused to generate a first feature map with the same resolution as the fifth preset feature map.
  • The third, fourth, and fifth preset feature maps each include the position of the lesion, which is used to generate the position of the lesion in the first feature map.
  • the first neural network includes: a convolutional layer and a residual module cascaded with the convolutional layer;
  • the second neural network includes: 3D U-Net network; wherein, the 3D U-Net network includes: convolution layer, deconvolution layer, residual module and DenseASPP module.
  • the residual module may include: a convolutional layer, a batch normalization layer (BN layer), a ReLU activation function, and a maximum pooling layer.
  • the second neural network is a stack of multiple 3D U-Net networks. If the second neural network is a stack of multiple 3D U-Net networks, the stability of the lesion detection system and the accuracy of the detection can be improved.
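  • Stacking U-Nets whose output shape matches their input composes directly; a sketch (the builder function `make_unet` is hypothetical, and the count of four follows the example above):

```python
import torch.nn as nn

def stacked_unets(make_unet, n=4):
    """Compose n 3D U-Nets end to end; each U-Net must preserve its
    input shape so the stack is well-formed."""
    return nn.Sequential(*[make_unet() for _ in range(n)])
```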
  • the embodiments of the present disclosure do not limit the number of 3D U-net networks.
  • S203 Perform dimensionality reduction on the features included in the first feature map to generate a second feature map; the second feature map includes two-dimensional features in the X-axis dimension and the Y-axis dimension.
  • The channel dimension and the Z-axis dimension of each feature of the first feature map are merged, so that the dimensions of each feature of the first feature map consist of the X-axis dimension and the Y-axis dimension.
  • The first feature map whose features consist of only the X-axis and Y-axis dimensions is the second feature map.
  • the first feature map is a three-dimensional feature map, and it must be converted to two dimensions before being output to the detection sub-network 103 for detection; this is why the first feature map needs dimensionality reduction.
  • the channel of a feature, as used above, represents the distribution data of that feature.
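  • Concretely, the channel/Z merge is a single reshape (the shapes here are illustrative assumptions):

```python
import torch

feat3d = torch.randn(1, 32, 9, 128, 128)   # first feature map: (N, C, Z, X, Y)
n, c, z, x, y = feat3d.shape
feat2d = feat3d.reshape(n, c * z, x, y)    # merge channel and Z-axis dimensions
print(feat2d.shape)                        # torch.Size([1, 288, 128, 128])
```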
  • the second feature map is detected through the first detection sub-network, and the coordinates of the position of each lesion in the second feature map are detected.
  • The input second feature map is processed through multiple cascaded convolutional layers in the first detection sub-network, where each convolutional layer includes a Y*Y convolution kernel; the coordinates of the upper-left corner (x1, y1) and the lower-right corner (x2, y2) of each lesion are obtained in turn to determine the position of each lesion in the second feature map.
  • the second feature map is detected through a second detection sub-network, and the confidence corresponding to each lesion in the second feature map is detected.
  • The input second feature map is processed through multiple cascaded convolutional layers in the second detection sub-network, where each convolutional layer includes a Y*Y convolution kernel; the coordinates of the upper-left corner (x1, y1) and the lower-right corner (x2, y2) of each lesion determine the position of each lesion in the second feature map, and the confidence corresponding to that position is then output.
  • the embodiments of the present disclosure can accurately detect lesions in multiple parts of the patient's body and realize a preliminary whole-body cancer assessment of the patient.
  • The lesion annotations are used to mark the lesions; the gradient descent method is used to train the parameters of the first neural network, the second neural network, the first detection sub-network, and the second detection sub-network, where the position of each of the multiple lesions is output by the first detection sub-network.
  • The lesion annotations are used to mark the lesions; the parameters of the second neural network, the first detection sub-network, and the second detection sub-network are trained by gradient descent, where the position of each of the multiple lesions is output by the first detection sub-network.
  • a first image including multiple sampling slices is acquired, and the first image is a three-dimensional image including an X-axis dimension, a Y-axis dimension, and a Z-axis dimension.
  • feature extraction is performed on the first image to generate a first feature map containing features of the lesion; the first feature map includes three-dimensional features of X-axis dimension, Y-axis dimension, and Z-axis dimension.
  • the features included in the first feature map are subjected to dimensionality reduction processing to generate a second feature map; the second feature map includes two-dimensional features in the X-axis dimension and the Y-axis dimension.
  • the features of the second feature map are detected to obtain the location of each lesion in the second feature map and the confidence corresponding to the location.
  • the lesion detection device 30 includes an acquisition unit 301, a first generation unit 302, a second generation unit 303, and a detection unit 304, wherein:
  • the obtaining unit 301 is configured to obtain a first image including a plurality of sampling slices.
  • the first image is a three-dimensional image including an X-axis dimension, a Y-axis dimension, and a Z-axis dimension.
  • the first generating unit 302 is configured to perform feature extraction on the first image and generate a first feature map including the features and positions of the lesion; the first feature map includes three-dimensional features of X-axis dimension, Y-axis dimension, and Z-axis dimension.
  • the second generating unit 303 is configured to perform dimension reduction processing on the features included in the first feature map to generate a second feature map; the second feature map includes two-dimensional features in the X-axis dimension and the Y-axis dimension.
  • the detecting unit 304 is configured to detect the second feature map to obtain the position of each lesion in the second feature map and the confidence corresponding to the position.
  • The obtaining unit 301 is specifically configured to resample the acquired CT image of the patient at the first sampling interval to generate a first image including multiple sampling slices.
  • The first generating unit 302 can be specifically used in the following three situations:
  • Case 1 Down-sampling the first image through the first neural network to generate a third feature map.
  • the third feature map is down-sampled by the residual module of the second neural network to generate a fourth feature map.
  • the features of the lesions of different scales in the fourth feature map are extracted through the DenseASPP module of the second neural network.
  • After the DenseASPP processing, a fourth preset feature map with the same resolution as the fourth feature map is generated, and the DenseASPP-processed feature map is upsampled through the deconvolution layer and residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map.
  • The third feature map and the third preset feature map are fused to generate a first feature map with the same resolution as the third preset feature map, and the fourth feature map and the fourth preset feature map are fused to generate a first feature map with the same resolution as the fourth preset feature map; the third and fourth preset feature maps each include the position of the lesion, which is used to generate the position of the lesion in the first feature map.
  • Case 2 The first image is down-sampled by the residual module of the second neural network to generate a fourth feature map.
  • the features of the lesions of different scales in the fourth feature map are extracted through the DenseASPP module of the second neural network.
  • the feature map processed by the DenseASPP module is up-sampled by the deconvolution layer and the residual module of the second neural network to generate a first preset feature map with the same resolution size as the first image.
  • The first preset feature map includes the position of the lesion; this position is used to generate the position of the lesion in the first feature map.
  • Case 3 Down-sampling the first image through the first neural network to generate a third feature map.
  • the third feature map is down-sampled by the residual module of the second neural network to generate a fourth feature map.
  • the fourth feature map is down-sampled by the residual module of the second neural network to generate a fifth feature map.
  • the features of the lesions at different scales in the fifth feature map are extracted through the DenseASPP module of the second neural network.
  • After the DenseASPP processing, a fifth preset feature map with the same resolution as the fifth feature map is generated; the DenseASPP-processed feature map is upsampled through the deconvolution layer and residual module of the second neural network to generate a fourth preset feature map with the same resolution as the fourth feature map; or it is upsampled through the deconvolution layer and residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map.
  • The third feature map and the third preset feature map are fused to generate a first feature map with the same resolution as the third preset feature map; the fourth feature map and the fourth preset feature map are fused to generate a first feature map with the same resolution as the fourth preset feature map; and the fifth feature map and the fifth preset feature map are fused to generate a first feature map with the same resolution as the fifth preset feature map. The third, fourth, and fifth preset feature maps each include the position of the lesion, which is used to generate the position of the lesion in the first feature map.
  • The first neural network includes a convolutional layer and a residual module cascaded with the convolutional layer.
  • The second neural network includes a 3D U-Net network, which may include convolutional layers, deconvolution layers, residual modules and a DenseASPP module.
  • The second neural network may include multiple stacked 3D U-Net networks.
  • Detection with multiple 3D U-Net networks can improve detection accuracy.
  • The number of 3D U-Net networks given in the embodiments of the present disclosure is only an example.
  • the residual module may include: a convolutional layer, a batch normalization layer (BN layer), a ReLU activation function, and a maximum pooling layer.
  • The second generation unit 303 is specifically configured to merge the channel dimension and the Z-axis dimension of each feature in the first feature map, so that the dimension of each feature consists of the X-axis dimension and the Y-axis dimension.
  • The first feature map in which every feature consists of the X-axis and Y-axis dimensions is the second feature map.
  • The detection unit 304 is specifically configured to:
  • the second feature map is detected through the first detection sub-network, and the coordinates of the position of each lesion in the second feature map are detected.
  • the second feature map is detected through the second detection sub-network, and the confidence corresponding to each lesion in the second feature map is detected.
  • the first detection sub-network includes: multiple convolutional layers, and each of the multiple convolutional layers is connected to a ReLU activation function.
  • the second detection sub-network includes: multiple convolutional layers, and each of the multiple convolutional layers is connected to a ReLU activation function.
  • In addition to the acquisition unit 301, the first generation unit 302, the second generation unit 303 and the detection unit 304, the lesion detection apparatus 30 further includes a display unit.
  • The display unit is specifically configured to display the position of each lesion detected by the detection unit 304 and the confidence of the position.
  • In addition to the acquisition unit 301, the first generation unit 302, the second generation unit 303 and the detection unit 304, the lesion detection apparatus 30 further includes a training unit.
  • The training unit is specifically configured to:
  • before the first generation unit performs feature extraction on the first image to generate the first feature map containing the features and positions of lesions, input a pre-stored three-dimensional image containing multiple lesion annotations (used to mark the lesions) into the first neural network, and train the parameters of the first neural network, the second neural network, the first detection sub-network and the second detection sub-network by gradient descent; the position of each of the multiple lesions is output by the first detection sub-network.
  • Alternatively, before the first generation unit performs feature extraction on the first image to generate the first feature map containing the features and positions of lesions, input a three-dimensional image containing multiple lesion annotations (used to mark the lesions) into the second neural network, and train the parameters of the second neural network, the first detection sub-network and the second detection sub-network by gradient descent.
  • The lesion detection apparatus 30 is only an example provided by the embodiments of the present disclosure; the lesion detection apparatus 30 may have more or fewer components than shown, may combine two or more components, or may be implemented with a different configuration of components.
  • The lesion detection device may include a mobile phone, a tablet computer, a personal digital assistant (PDA), a mobile Internet device (MID), a smart wearable device (such as a smart watch or smart bracelet) and other devices; the embodiments of the present disclosure are not limited in this respect.
  • the lesion detection device 40 may include: a baseband chip 401, a memory 402 (one or more computer-readable storage media), and a peripheral system 403. These components can communicate on one or more communication buses 404.
  • the baseband chip 401 includes one or more processors (CPU) 405 and one or more graphics processors (GPU) 406.
  • the graphics processor 406 can be used to process the input normal map.
  • the memory 402 is coupled to the processor 405 and can be used to store various software programs and/or multiple sets of instructions.
  • the memory 402 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the memory 402 may store an operating system (hereinafter referred to as a system), such as an embedded operating system such as ANDROID, IOS, WINDOWS, or LINUX.
  • the memory 402 may also store a network communication program, which may be used to communicate with one or more additional devices, one or more devices, or one or more network devices.
  • the memory 402 can also store a user interface program, which can vividly display the content of the application program through a graphical operation interface, and receive user control operations on the application program through input controls such as menus, dialog boxes and keys.
  • the memory 402 may be used to store program code for implementing a method for detecting a lesion.
  • the processor 405 may be used to call the program code stored in the memory 402 to execute the lesion detection method.
  • The memory 402 may also store one or more application programs. As shown in FIG. 4, these applications may include: social applications (such as Facebook), image management applications (such as albums), map applications (such as Google Maps), browsers (such as Safari or Google Chrome), and the like.
  • the peripheral system 403 is mainly used to realize the interactive function between the lesion detection device 40 and the user/external environment, mainly including the input and output devices of the lesion detection device 40.
  • the peripheral system 403 may include: a display screen controller 407, a camera controller 408, a mouse-keyboard controller 409, and an audio controller 410. Wherein, each controller may be coupled with their corresponding peripheral devices (such as display screen 411, camera 412, mouse-keyboard 413, and audio circuit 414).
  • the display screen may be a display screen configured with a self-capacitive floating touch panel, or may be a display screen configured with an infrared floating touch panel.
  • the camera 412 may be a 3D camera. It should be noted that the peripheral system 403 may also include other I/O peripherals.
  • the display screen 411 may be used to display the position and confidence of the detected lesion.
  • The lesion detection device 40 is only an example provided by the embodiments of the present disclosure; the lesion detection device 40 may have more or fewer components than shown, may combine two or more components, or may be implemented with a different configuration of components.
  • The present disclosure provides a computer-readable storage medium that stores a computer program which, when executed by a processor, implements the lesion detection method described above.
  • the computer-readable storage medium may be an internal storage unit of the device described in any of the foregoing embodiments, such as a hard disk or a memory of the device.
  • The computer-readable storage medium may also be an external storage device of the device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the device.
  • the computer-readable storage medium may also include both an internal storage unit of the device and an external storage device.
  • the computer-readable storage medium is used to store computer programs and other programs and data required by the device.
  • the computer-readable storage medium can also be used to temporarily store data that has been or will be output.
  • The present disclosure also provides a computer program product including a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps of any method described in the above method embodiments.
  • The computer program product may be a software installation package, and the computer includes an electronic apparatus.
  • The device embodiments described above are merely illustrative.
  • The division into units is only a division by logical function; in actual implementation there may be other divisions: for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be indirect couplings or communication connections through some interfaces, devices, or units, and may also be electrical, mechanical, or other forms of connection.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present disclosure.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product.
  • The computer software product is stored in a storage medium.
  • It includes several instructions for enabling a computer device (which may be a personal computer, a target blockchain node device, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.


Abstract

The present disclosure discloses a lesion detection method, apparatus, device and storage medium. The method includes: acquiring a first image comprising multiple sampling slices, the first image being a three-dimensional image with an X-axis dimension, a Y-axis dimension and a Z-axis dimension; performing feature extraction on the first image to generate a first feature map containing the features and positions of lesions, the first feature map comprising three-dimensional features in the X-axis, Y-axis and Z-axis dimensions; performing dimension reduction on the features contained in the first feature map to generate a second feature map comprising two-dimensional features in the X-axis and Y-axis dimensions; and detecting the features of the second feature map to obtain the position of each lesion in the second feature map and the confidence corresponding to the position. With the present disclosure, lesions at multiple sites in a patient's body can be detected accurately, enabling a preliminary whole-body cancer assessment of the patient.

Description

Lesion detection method, apparatus, device and storage medium. Technical Field
The present disclosure relates to the field of computer technology, and in particular to a lesion detection method, apparatus, device and storage medium.
Background
Computer-aided diagnosis (CAD) refers to automatically finding lesions in images by combining imaging and medical image analysis techniques, as well as other possible physiological and biochemical means, with computer-based analysis and computation. Practice has shown that computer-aided diagnosis plays a strongly positive role in improving diagnostic accuracy, reducing missed diagnoses and improving physicians' working efficiency. A lesion is the site of a tissue or organ where pathological change has been caused by a pathogenic factor, that is, the diseased part of the body. For example, if a certain part of a human lung is destroyed by tubercle bacilli, that part is a pulmonary tuberculosis lesion.
In recent years, with the rapid development of computer vision and deep learning techniques, lesion detection methods based on CT images have attracted increasing attention.
Summary
The present disclosure provides a lesion detection method, apparatus, device and storage medium that accurately detect lesions at multiple sites in a patient's body, enabling a preliminary whole-body cancer assessment of the patient.
In a first aspect, the present disclosure provides a lesion detection method, including: acquiring a first image comprising multiple sampling slices, the first image being a three-dimensional image with an X-axis dimension, a Y-axis dimension and a Z-axis dimension; performing feature extraction on the first image to generate a first feature map containing the features and positions of lesions, the first feature map comprising three-dimensional features in the X-axis, Y-axis and Z-axis dimensions; performing dimension reduction on the features contained in the first feature map to generate a second feature map comprising two-dimensional features in the X-axis and Y-axis dimensions; and detecting the second feature map to obtain the position of each lesion in the second feature map and the confidence corresponding to the position.
With reference to the first aspect, in some possible embodiments, acquiring the first image comprising multiple sampling slices includes: resampling an acquired CT image of a patient at a first sampling interval to generate the first image comprising multiple sampling slices.
With reference to the first aspect, in some possible embodiments, performing feature extraction on the first image to generate the first feature map containing the features and positions of lesions includes: down-sampling the first image through a first neural network to generate a third feature map; down-sampling the third feature map through a residual module of a second neural network to generate a fourth feature map; extracting the features of lesions at different scales in the fourth feature map through a DenseASPP module of the second neural network; after the DenseASPP processing, generating a fourth preset feature map with the same resolution as the fourth feature map, and up-sampling the DenseASPP-processed feature map through a deconvolution layer and the residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map; and fusing the third feature map with the third preset feature map to generate a first feature map with the same resolution as the third preset feature map, and fusing the fourth feature map with the fourth preset feature map to generate a first feature map with the same resolution as the fourth preset feature map. The third preset feature map and the fourth preset feature map each include the positions of lesions, which are used to generate the positions of lesions in the first feature map.
With reference to the first aspect, in some possible embodiments, performing feature extraction on the first image to generate the first feature map containing the features and positions of lesions includes: down-sampling the first image through the residual module of the second neural network to generate a fourth feature map with a resolution smaller than that of the first image; extracting the features of lesions at different scales in the fourth feature map through the DenseASPP module of the second neural network; after the DenseASPP processing, up-sampling the DenseASPP-processed feature map through the deconvolution layer and the residual module of the second neural network to generate a first preset feature map with the same resolution as the first image;
and fusing the first image with the first preset feature map to generate a first feature map with the same resolution as the first preset feature map. The first preset feature map includes the positions of lesions, which are used to generate the positions of lesions in the first feature map.
With reference to the first aspect, in some possible embodiments, performing feature extraction on the first image to generate the first feature map containing the features and positions of lesions includes: down-sampling the first image through the first neural network to generate a third feature map with a resolution smaller than that of the first image; down-sampling the third feature map through the residual module of the second neural network to generate a fourth feature map with a resolution smaller than that of the third feature map; down-sampling the fourth feature map through the residual module of the second neural network to generate a fifth feature map with a resolution smaller than that of the fourth feature map; extracting the features of lesions at different scales in the fifth feature map through the DenseASPP module of the second neural network; after the DenseASPP processing, generating a fifth preset feature map with the same resolution as the fifth feature map; up-sampling the DenseASPP-processed feature map through the deconvolution layer and the residual module of the second neural network to generate a fourth preset feature map with the same resolution as the fourth feature map; or up-sampling the DenseASPP-processed feature map through the deconvolution layer and residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map; fusing the third feature map with the third preset feature map to generate a first feature map with the same resolution as the third preset feature map; fusing the fourth feature map with the fourth preset feature map to generate a first feature map with the same resolution as the fourth preset feature map; and fusing the fifth feature map with the fifth preset feature map to generate a first feature map with the same resolution as the fifth preset feature map. The third, fourth and fifth preset feature maps each include the positions of lesions, which are used to generate the positions of lesions in the first feature map.
With reference to the first aspect, in some possible embodiments, the first neural network includes a convolutional layer and a residual module cascaded with the convolutional layer; the second neural network includes a 3D U-Net network comprising convolutional layers, deconvolution layers, residual modules and the DenseASPP module.
With reference to the first aspect, in some possible embodiments, the second neural network is a stack of multiple 3D U-Net networks.
With reference to the first aspect, in some possible embodiments, the residual module includes a convolutional layer, a batch normalization layer, a ReLU activation function and a max-pooling layer.
With reference to the first aspect, in some possible embodiments, performing dimension reduction on the features contained in the first feature map to generate the second feature map includes: merging the channel dimension and the Z-axis dimension of each feature of the first feature map, so that the dimension of each feature consists of the X-axis dimension and the Y-axis dimension; the first feature map in which each feature consists of the X-axis and Y-axis dimensions is the second feature map.
With reference to the first aspect, in some possible embodiments, detecting the second feature map includes: detecting the second feature map through a first detection sub-network to detect the coordinates of the position of each lesion in the second feature map; and detecting the second feature map through a second detection sub-network to detect the confidence corresponding to each lesion in the second feature map.
With reference to the first aspect, in some possible embodiments, the first detection sub-network includes multiple convolutional layers, each of which is connected to a ReLU activation function; the second detection sub-network includes multiple convolutional layers, each of which is connected to a ReLU activation function.
With reference to the first aspect, in some possible embodiments, before performing feature extraction on the first image to generate the first feature map containing the features and positions of lesions, the method further includes: inputting a pre-stored three-dimensional image containing multiple lesion annotations into the first neural network, the lesion annotations being used to mark the lesions; and training the parameters of the first neural network, the second neural network, the DenseASPP module, the first detection sub-network and the second detection sub-network by gradient descent; the position of each of the multiple lesions is output by the first detection sub-network.
With reference to the first aspect, in some possible embodiments, before performing feature extraction on the first image to generate the first feature map containing the features and positions of lesions, the method further includes: inputting a pre-stored three-dimensional image containing multiple lesion annotations into the first neural network, the lesion annotations being used to mark the lesions; and training the parameters of the second neural network, the DenseASPP module, the first detection sub-network and the second detection sub-network by gradient descent; the position of each of the multiple lesions is output by the first detection sub-network.
In a second aspect, the present disclosure provides a lesion detection apparatus, including: an acquisition unit configured to acquire a first image comprising multiple sampling slices, the first image being a three-dimensional image with an X-axis dimension, a Y-axis dimension and a Z-axis dimension; a first generation unit configured to perform feature extraction on the first image to generate a first feature map containing the features and positions of lesions, the first feature map comprising three-dimensional features in the X-axis, Y-axis and Z-axis dimensions; a second generation unit configured to perform dimension reduction on the features contained in the first feature map to generate a second feature map comprising two-dimensional features in the X-axis and Y-axis dimensions; and a detection unit configured to detect the second feature map to obtain the position of each lesion in the second feature map and the confidence corresponding to the position.
With reference to the second aspect, in some possible embodiments, the acquisition unit is specifically configured to resample an acquired CT image of a patient at a first sampling interval to generate the first image comprising multiple sampling slices.
With reference to the second aspect, in some possible embodiments, the first generation unit is specifically configured to: down-sample the first image through a first neural network to generate a third feature map with a resolution smaller than that of the first image; down-sample the third feature map through a residual module of a second neural network to generate a fourth feature map with a resolution smaller than that of the third feature map; extract the features of lesions at different scales in the fourth feature map through a DenseASPP module of the second neural network; after the DenseASPP processing, generate a fourth preset feature map with the same resolution as the fourth feature map, and up-sample the DenseASPP-processed feature map through a deconvolution layer and the residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map; and fuse the third feature map with the third preset feature map to generate a first feature map with the same resolution as the third preset feature map, and fuse the fourth feature map with the fourth preset feature map to generate a first feature map with the same resolution as the fourth preset feature map. The third and fourth preset feature maps each include the positions of lesions, which are used to generate the positions of lesions in the first feature map.
With reference to the second aspect, in some possible embodiments, the first generation unit is specifically configured to: down-sample the first image through the residual module of the second neural network to generate a fourth feature map; extract the features of lesions at different scales in the fourth feature map through the DenseASPP module of the second neural network; after the DenseASPP processing, up-sample the DenseASPP-processed feature map through the deconvolution layer and the residual module of the second neural network to generate a first preset feature map with the same resolution as the first image; and fuse the first image with the first preset feature map to generate a first feature map with the same resolution as the first preset feature map. The first preset feature map includes the positions of lesions, which are used to generate the positions of lesions in the first feature map.
With reference to the second aspect, in some possible embodiments, the first generation unit is specifically configured to: down-sample the first image through the first neural network to generate a third feature map with a resolution smaller than that of the first image; down-sample the third feature map through the residual module of the second neural network to generate a fourth feature map with a resolution smaller than that of the third feature map; down-sample the fourth feature map through the residual module of the second neural network to generate a fifth feature map with a resolution smaller than that of the fourth feature map; extract the features of lesions at different scales in the fifth feature map through the DenseASPP module of the second neural network; after the DenseASPP processing, generate a fifth preset feature map with the same resolution as the fifth feature map; up-sample the DenseASPP-processed feature map through the deconvolution layer and the residual module of the second neural network to generate a fourth preset feature map with the same resolution as the fourth feature map; or up-sample the DenseASPP-processed feature map through the deconvolution layer and residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map; fuse the third feature map with the third preset feature map to generate a first feature map with the same resolution as the third preset feature map; fuse the fourth feature map with the fourth preset feature map to generate a first feature map with the same resolution as the fourth preset feature map; and fuse the fifth feature map with the fifth preset feature map to generate a first feature map with the same resolution as the fifth preset feature map. The third, fourth and fifth preset feature maps each include the positions of lesions, which are used to generate the positions of lesions in the first feature map.
With reference to the second aspect, in some possible embodiments, the first neural network includes a convolutional layer and a residual module cascaded with the convolutional layer; the second neural network includes a 3D U-Net network comprising convolutional layers, deconvolution layers, residual modules and the DenseASPP module.
With reference to the second aspect, in some possible embodiments, the second neural network is a stack of multiple 3D U-Net networks.
With reference to the second aspect, in some possible embodiments, the residual module includes a convolutional layer, a batch normalization layer, a ReLU activation function and a max-pooling layer.
With reference to the second aspect, in some possible embodiments, the second generation unit is specifically configured to merge the channel dimension and the Z-axis dimension of each feature of the first feature map, so that the dimension of each feature consists of the X-axis dimension and the Y-axis dimension; the first feature map in which each feature consists of the X-axis and Y-axis dimensions is the second feature map.
With reference to the second aspect, in some possible embodiments, the detection unit is specifically configured to: detect the second feature map through a first detection sub-network to detect the coordinates of the position of each lesion in the second feature map; and detect the second feature map through a second detection sub-network to detect the confidence corresponding to each lesion in the second feature map.
With reference to the second aspect, in some possible embodiments, the first detection sub-network includes multiple convolutional layers, each of which is connected to a ReLU activation function; the second detection sub-network includes multiple convolutional layers, each of which is connected to a ReLU activation function.
With reference to the second aspect, in some possible embodiments, the apparatus further includes a training unit specifically configured to: before the first generation unit performs feature extraction on the first image to generate the first feature map containing the features of lesions, input a pre-stored three-dimensional image containing multiple lesion annotations into the first neural network, the lesion annotations being used to mark the lesions; and train the parameters of the first neural network, the second neural network, the first detection sub-network and the second detection sub-network by gradient descent; the position of each of the multiple lesions is output by the first detection sub-network.
With reference to the second aspect, in some possible embodiments, the apparatus further includes a training unit specifically configured to: before the first generation unit performs feature extraction on the first image to generate the first feature map containing the features and positions of lesions, input a three-dimensional image containing multiple lesion annotations into the second neural network, the lesion annotations being used to mark the lesions; and train the parameters of the second neural network, the first detection sub-network and the second detection sub-network by gradient descent; the position of each of the multiple lesions is output by the first detection sub-network.
In a third aspect, the present disclosure provides a lesion detection device, including a processor, a display and a memory that are connected to each other, wherein the display is configured to display the position of a lesion and the confidence corresponding to the position, the memory is configured to store application program code, and the processor is configured to invoke the program code to execute the lesion detection method of the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium for storing one or more computer programs comprising instructions which, when the computer programs run on a computer, are used to execute the lesion detection method of the first aspect.
In a fifth aspect, the present disclosure provides a computer program comprising lesion detection instructions which, when the computer program is executed on a computer, are used to execute the lesion detection method provided by the first aspect.
The present disclosure provides a lesion detection method, apparatus, device and storage medium. First, a first image comprising multiple sampling slices is acquired, the first image being a three-dimensional image with X-axis, Y-axis and Z-axis dimensions. Feature extraction is then performed on the first image to generate a first feature map containing the features and positions of lesions, the first feature map comprising three-dimensional features in the X-axis, Y-axis and Z-axis dimensions. The features contained in the first feature map are then dimension-reduced to generate a second feature map comprising two-dimensional features in the X-axis and Y-axis dimensions. Finally, the features of the second feature map are detected to obtain the position of each lesion in the second feature map and the confidence corresponding to the position. With the present disclosure, lesions at multiple sites in a patient's body can be detected accurately, enabling a preliminary whole-body cancer assessment of the patient.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of the network architecture of a lesion detection system provided by the present disclosure;
FIG. 2 is a schematic flowchart of a lesion detection method provided by the present disclosure;
FIG. 3 is a schematic block diagram of a lesion detection apparatus provided by the present disclosure;
FIG. 4 is a schematic structural diagram of a lesion detection device provided by the present disclosure.
Detailed Description
The technical solutions in the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
It should be understood that, when used in this specification and the appended claims, the terms "comprise" and "include" indicate the presence of the described features, wholes, steps, operations, elements and/or components, but do not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or combinations thereof.
It should also be understood that the terms used in the specification of the present disclosure are for the purpose of describing particular embodiments only and are not intended to limit the present disclosure. As used in the specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In specific implementations, the devices described in the present disclosure include, but are not limited to, other portable devices such as laptop computers or tablet computers having touch-sensitive surfaces (e.g., touch-screen displays and/or touch pads). It should also be understood that, in some embodiments, the device is not a portable communication device but a desktop computer having a touch-sensitive surface (e.g., a touch-screen display and/or a touch pad).
In the following discussion, a device including a display and a touch-sensitive surface is described. However, it should be understood that the device may include one or more other physical user interface devices such as a physical keyboard, a mouse and/or a joystick.
The device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website creation application, a disc burning application, a spreadsheet application, a game application, a telephone application, a video-conferencing application, an e-mail application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application and/or a digital video player application.
The various applications that can be executed on the device may use at least one common physical user interface device such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the device may be adjusted and/or changed between applications and/or within the corresponding application. In this way, the common physical architecture of the device (e.g., the touch-sensitive surface) can support various applications with user interfaces that are intuitive and transparent to the user.
For a better understanding of the present disclosure, the network architecture to which the present disclosure applies is described below. Referring to FIG. 1, FIG. 1 is a schematic diagram of a lesion detection system provided by the present disclosure. As shown in FIG. 1, the system 10 may include a first neural network 101, a second neural network 102 and a detection subnet (Detection Subnet) 103.
In the embodiments of the present disclosure, a lesion is the site of a tissue or organ where pathological change has been caused by a pathogenic factor, that is, the diseased part of the body. For example, if a certain part of a human lung is destroyed by tubercle bacilli, that part is a pulmonary tuberculosis lesion.
It should be noted that the first neural network 101 includes a convolutional layer (Conv1) and a residual module (SEResBlock) cascaded with the convolutional layer. The residual module may include a batch normalization (BN) layer, a rectified linear unit (ReLU) activation function and a max-pooling layer.
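For illustration only, since the disclosure contains no source code, a minimal PyTorch-style sketch of such a residual module is given below. The layer ordering, the channel counts, and the omission of the squeeze-and-excitation branch suggested by the name SEResBlock are assumptions of this sketch rather than details taken from the text:

```python
# Hypothetical sketch of the residual module (convolution + BN + ReLU +
# max-pooling); pooling acts only on the X/Y axes, matching the text's
# statement that down-sampling is applied in the X and Y dimensions.
import torch
import torch.nn as nn

class ResidualModule(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
        )
        self.skip = nn.Conv3d(in_ch, out_ch, kernel_size=1)  # match channels
        self.act = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.act(self.body(x) + self.skip(x))  # residual connection
        return self.pool(y)                        # halve H and W, keep Z
```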
The first neural network 101 may be used to down-sample, in the X-axis and Y-axis dimensions, the first image input to it, generating a third feature map. It should be noted that the first image is a three-dimensional image with X-axis, Y-axis and Z-axis dimensions (that is, the first image consists of multiple two-dimensional images, each with X-axis and Y-axis dimensions, stacked into a three-dimensional image with X-axis, Y-axis and Z-axis dimensions); for example, the first image may be a 512*512*9 three-dimensional image.
Specifically, the first neural network 101 processes the first image with the convolution kernels of its convolutional layer to generate a feature map, and then pools that feature map with its residual module, generating a third feature map with a resolution smaller than that of the first image. For example, the first neural network 101 may process a 512*512*9 three-dimensional image into a 256*256*9 three-dimensional image, or into a 128*128*9 three-dimensional image. The down-sampling process extracts the lesion features contained in the input first image and removes unnecessary regions of the first image.
It should be noted that the purpose of down-sampling in the embodiments of the present disclosure is to generate a thumbnail of the first image so that the first image fits the size of the display region. The purpose of up-sampling in the embodiments of the present disclosure is to enlarge the original image by interpolating new pixels between the original pixels, which benefits the detection of small lesions.
A simple example of the down-sampling in the embodiments of the present disclosure: for an image I of size M*N, down-sampling by a factor of S yields an image of resolution (M/S)*(N/S). That is, each S*S window of the original image I becomes one pixel, whose value is the maximum of all pixels in that S*S window. The sliding stride in the horizontal or vertical direction may be 2.
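As a worked example of this S-fold max-downsampling, the sketch below applies an S=2, stride-2 max-pool to a toy 4*4 image; the values are chosen only so that the window maxima are easy to verify:

```python
# Each 2x2 window collapses to one pixel holding the window maximum.
import torch
import torch.nn.functional as F

image = torch.arange(16.).reshape(1, 1, 4, 4)        # a toy 4x4 "image I"
down = F.max_pool2d(image, kernel_size=2, stride=2)  # S = 2, stride 2
print(down.squeeze())
# tensor([[ 5.,  7.],
#         [13., 15.]])
```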
The second neural network 102 may include four stacked 3D U-Net networks; an unrolled view of the 3D U-Net is shown as 104 in FIG. 1. Detection with multiple 3D U-Net networks can improve detection accuracy; the number of 3D U-Net networks in the embodiments of the present disclosure is only an example and is not limited. The 3D U-Net network includes convolutional (conv) layers, deconvolution (deconv) layers, residual modules and a DenseASPP module.
The residual module of the second neural network 102 may be used to down-sample, in the X-axis and Y-axis dimensions, the third feature map output by the first neural network 101, generating a fourth feature map.
In addition, the residual module of the second neural network 102 may also be used to down-sample the fourth feature map in the X-axis and Y-axis dimensions, generating a fifth feature map.
Next, the DenseASPP module of the second neural network 102 extracts the features of lesions at different scales in the fifth feature map.
After the DenseASPP processing, a fifth preset feature map with the same resolution as the fifth feature map is generated; the DenseASPP-processed feature map is up-sampled through the deconvolution layer and the residual module of the second neural network 102 to generate a fourth preset feature map with the same resolution as the fourth feature map; or it is up-sampled through the deconvolution layer and residual module of the second neural network 102 to generate a third preset feature map with the same resolution as the third feature map.
The third feature map is fused with the third preset feature map to generate a first feature map with the same resolution as the third preset feature map; the fourth feature map is fused with the fourth preset feature map to generate a first feature map with the same resolution as the fourth preset feature map; and the fifth feature map is fused with the fifth preset feature map to generate a first feature map with the same resolution as the fifth preset feature map. The third, fourth and fifth preset feature maps each include the positions of lesions, which are used to generate the positions of lesions in the first feature map.
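A minimal sketch of one such up-sample-and-fuse step follows; the shapes are illustrative, and element-wise addition is used for the fusion only because the text says the maps are "fused" without fixing the operation:

```python
# Up-sample a DenseASPP-processed map with a deconvolution, then fuse it
# with the encoder feature map of matching resolution (assumed shapes).
import torch
import torch.nn as nn

deconv = nn.ConvTranspose3d(64, 32, kernel_size=(1, 2, 2), stride=(1, 2, 2))
decoded = deconv(torch.randn(1, 64, 9, 64, 64))  # -> (1, 32, 9, 128, 128)
encoder_feat = torch.randn(1, 32, 9, 128, 128)   # e.g. the third feature map
fused = decoded + encoder_feat                   # a "first feature map"
```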
It should be noted that the DenseASPP module is a combined cascade of five dilated convolutions with different dilation rates, which can extract the features of lesions at different scales. The five dilation rates are: d=3, d=6, d=12, d=18 and d=24.
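The sketch below shows one way such a densely connected cascade of dilated convolutions could be assembled; the channel sizes, the ReLU after each branch and the restriction of dilation to the X/Y axes are assumptions:

```python
# DenseASPP-style cascade: dilation rates 3, 6, 12, 18, 24, each branch
# consuming the concatenation of the input and all previous outputs.
import torch
import torch.nn as nn

class DenseASPP(nn.Module):
    def __init__(self, in_ch=64, branch_ch=32, rates=(3, 6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList()
        ch = in_ch
        for r in rates:
            self.branches.append(nn.Conv3d(ch, branch_ch, kernel_size=3,
                                           padding=(1, r, r),
                                           dilation=(1, r, r)))
            ch += branch_ch  # dense connectivity grows the branch input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for conv in self.branches:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)  # multi-scale lesion features
```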
The detection subnet 103 may include a first detection sub-network and a second detection sub-network. The first detection sub-network includes multiple convolutional layers, each of which is connected to a ReLU activation function; likewise, the second detection sub-network includes multiple convolutional layers, each of which is connected to a ReLU activation function.
The first detection sub-network is used to detect the second feature map obtained by reducing the dimensions of the first feature map, and to detect the coordinates of the position of each lesion in the second feature map.
Specifically, the input second feature map is processed by four cascaded convolutional layers of the first detection sub-network, each containing a Y*Y convolution kernel; the position of each lesion in the second feature map is determined by successively obtaining the coordinates (x1, y1) of its upper-left corner and the coordinates (x2, y2) of its lower-right corner.
The second detection sub-network detects the second feature map and detects the confidence corresponding to each lesion in the second feature map.
Specifically, the input second feature map is processed by four cascaded convolutional layers of the second detection sub-network, each containing a Y*Y convolution kernel; after the position of each lesion in the second feature map is determined from the corner coordinates (x1, y1) and (x2, y2), the confidence corresponding to that position is output.
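For illustration, here is a sketch of two such heads, each a stack of four cascaded convolutions with a ReLU after every convolution, one regressing the corner coordinates (x1, y1, x2, y2) and one scoring confidence; the input channel count and the number of anchors per position are assumptions:

```python
# Hypothetical detection sub-networks: a box-coordinate head and a
# confidence head, each four Conv2d+ReLU stages deep.
import torch.nn as nn

def conv_relu_stack(in_ch, mid_ch, depth=4):
    layers = []
    for i in range(depth):
        layers += [nn.Conv2d(in_ch if i == 0 else mid_ch, mid_ch,
                             kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

num_anchors = 3  # assumption
box_head = nn.Sequential(conv_relu_stack(288, 256),
                         nn.Conv2d(256, num_anchors * 4, 3, padding=1))
conf_head = nn.Sequential(conv_relu_stack(288, 256),
                          nn.Conv2d(256, num_anchors, 3, padding=1))
```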
It should be noted that, in the embodiments of the present disclosure, the confidence corresponding to a position is the degree to which the user can trust that the position truly is a lesion; for example, the confidence of a lesion's position may be 90%.
In summary, this makes it possible to accurately detect lesions at multiple sites in a patient's body and to achieve a preliminary whole-body cancer assessment of the patient.
It should be noted that, before feature extraction is performed on the first image to generate the first feature map containing the features and positions of lesions, the following steps are also included:
A pre-stored three-dimensional image containing multiple lesion annotations is input to the first neural network; the lesion annotations are used to mark the lesions (for example, a lesion is marked with a bounding box on the one hand, and the coordinates of its position are annotated on the other). The parameters of the first neural network, the second neural network, the first detection sub-network and the second detection sub-network are then trained by gradient descent; the position of each of the multiple lesions is output by the first detection sub-network.
It should be noted that, when the parameters are trained by gradient descent, the gradients of the gradient-descent method may be computed by the back-propagation algorithm.
Alternatively,
a pre-stored three-dimensional image containing multiple lesion annotations is input to the second neural network, the lesion annotations being used to mark the lesions; and the parameters of the second neural network, the first detection sub-network and the second detection sub-network are trained by gradient descent; the position of each of the multiple lesions is output by the first detection sub-network.
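A minimal, self-contained sketch of one gradient-descent training step is given below. The disclosure specifies only gradient descent with back-propagated gradients, so the toy stand-in model, the smooth-L1 box loss and the cross-entropy confidence loss are all assumptions of this sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for the cascaded networks plus the two detection heads:
# 9 input slices -> 4 box-coordinate channels + 1 confidence channel.
model = nn.Sequential(nn.Conv2d(9, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 5, 3, padding=1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

volume = torch.randn(2, 9, 64, 64)  # toy annotated inputs
target = torch.randn(2, 5, 64, 64)  # toy lesion-annotation maps

pred = model(volume)
box_loss = F.smooth_l1_loss(pred[:, :4], target[:, :4])
conf_loss = F.binary_cross_entropy_with_logits(pred[:, 4:],
                                               target[:, 4:].sigmoid())
loss = box_loss + conf_loss
optimizer.zero_grad()
loss.backward()    # gradients via back-propagation
optimizer.step()   # one gradient-descent update
```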
Referring to FIG. 2, which is a schematic flowchart of a lesion detection method provided by the present disclosure. In one possible implementation, the lesion detection method may be executed by an electronic device such as a terminal device or a server. The terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cordless telephone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device or the like. The method may be implemented by a processor invoking computer-readable instructions stored in a memory, or the method may be executed by a server.
As shown in FIG. 2, the method may include at least the following steps:
S201: Acquire a first image comprising multiple sampling slices, the first image being a three-dimensional image with X-axis, Y-axis and Z-axis dimensions.
Specifically, in an optional implementation, an acquired CT image of a patient is resampled at a first sampling interval to generate the first image comprising multiple sampling slices. The patient's CT image may include 130 tomographic layers, each layer being 2.0 mm thick, and the first sampling interval in the X-axis and Y-axis dimensions may be 2.0 mm.
In the embodiments of the present disclosure, the patient's CT image is a scan sequence of the patient's tissues or organs comprising multiple tomographic layers; the number of layers may be 130.
A lesion is the site of a patient's tissue or organ where pathological change has been caused by a pathogenic factor, that is, the diseased part of the body. For example, if a certain part of a human lung is destroyed by tubercle bacilli, that part is a pulmonary tuberculosis lesion.
It should be noted that the first image is a three-dimensional image with X-axis, Y-axis and Z-axis dimensions (that is, the first image consists of N two-dimensional images, each with X-axis and Y-axis dimensions, stacked into a three-dimensional image with X-axis, Y-axis and Z-axis dimensions, where N is greater than or equal to 2 and each two-dimensional image is a cross-sectional image at a different position of the tissue to be examined); for example, the first image may be a 512*512*9 three-dimensional image.
It should be noted that, before the CT image is resampled, the following step is also included:
removing redundant background from the CT image based on a thresholding method.
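By way of illustration, the sketch below combines the two pre-processing steps, threshold-based background removal and resampling at a 2.0 mm interval, using NumPy and SciPy. The -600 HU threshold and the assumed original voxel spacing are illustrative values, not values given by the disclosure:

```python
import numpy as np
from scipy import ndimage

ct = np.random.randint(-1024, 400, size=(130, 512, 512)).astype(np.float32)
spacing = np.array([2.0, 0.8, 0.8])  # assumed (z, y, x) spacing in mm

mask = ct > -600                     # threshold-based foreground mask
ct = np.where(mask, ct, ct.min())    # suppress redundant background

target = np.array([2.0, 2.0, 2.0])   # first sampling interval (mm)
first_image = ndimage.zoom(ct, spacing / target, order=1)  # sampling slices
```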
S202: Perform feature extraction on the first image to generate a first feature map containing the features of lesions; the first feature map includes three-dimensional features in the X-axis, Y-axis and Z-axis dimensions.
Specifically, performing feature extraction on the first image to generate a first feature map containing the features and positions of lesions may include, but is not limited to, the following cases.
Case 1: The first image is down-sampled through the first neural network to generate a third feature map.
The third feature map is down-sampled by the residual module of the second neural network to generate a fourth feature map.
The features of lesions at different scales in the fourth feature map are extracted through the DenseASPP module of the second neural network.
After the DenseASPP processing, a fourth preset feature map with the same resolution as the fourth feature map is generated, and the DenseASPP-processed feature map is up-sampled through the deconvolution layer and the residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map.
The third feature map is fused with the third preset feature map to generate a first feature map with the same resolution as the third preset feature map, and the fourth feature map is fused with the fourth preset feature map to generate a first feature map with the same resolution as the fourth preset feature map. The third and fourth preset feature maps each include the positions of lesions, which are used to generate the positions of lesions in the first feature map.
Case 2: The first image is down-sampled by the residual module of the second neural network to generate a fourth feature map.
The features of lesions at different scales in the fourth feature map are extracted through the DenseASPP module of the second neural network.
After the DenseASPP processing, the DenseASPP-processed feature map is up-sampled through the deconvolution layer and the residual module of the second neural network to generate a first preset feature map with the same resolution as the first image.
The first image is fused with the first preset feature map to generate a first feature map with the same resolution as the first preset feature map; the first preset feature map includes the positions of lesions, which are used to generate the positions of lesions in the first feature map.
Case 3: The first image is down-sampled through the first neural network to generate a third feature map.
The third feature map is down-sampled by the residual module of the second neural network to generate a fourth feature map.
The fourth feature map is down-sampled by the residual module of the second neural network to generate a fifth feature map.
The features of lesions at different scales in the fifth feature map are extracted through the DenseASPP module of the second neural network.
After the DenseASPP processing, a fifth preset feature map with the same resolution as the fifth feature map is generated; the DenseASPP-processed feature map is up-sampled through the deconvolution layer and the residual module of the second neural network to generate a fourth preset feature map with the same resolution as the fourth feature map; or it is up-sampled through the deconvolution layer and residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map.
The third feature map is fused with the third preset feature map to generate a first feature map with the same resolution as the third preset feature map; the fourth feature map is fused with the fourth preset feature map to generate a first feature map with the same resolution as the fourth preset feature map; and the fifth feature map is fused with the fifth preset feature map to generate a first feature map with the same resolution as the fifth preset feature map. The third, fourth and fifth preset feature maps each include the positions of lesions, which are used to generate the positions of lesions in the first feature map.
It should be noted that the first neural network includes a convolutional layer and a residual module cascaded with the convolutional layer.
The second neural network includes a 3D U-Net network, which includes convolutional layers, deconvolution layers, residual modules and the DenseASPP module.
The residual module may include a convolutional layer, a batch normalization (BN) layer, a ReLU activation function and a max-pooling layer.
Optionally, the second neural network is a stack of multiple 3D U-Net networks. A stack of multiple 3D U-Net networks can improve the stability of the lesion detection system and the accuracy of detection; the embodiments of the present disclosure do not limit the number of 3D U-Net networks.
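A sketch of such stacking is shown below; UNet3D is a trivial placeholder standing in for a full 3D U-Net, and four networks are used only because FIG. 1 depicts four:

```python
import torch.nn as nn

class UNet3D(nn.Module):  # placeholder, not a real U-Net
    def __init__(self, ch: int):
        super().__init__()
        self.refine = nn.Conv3d(ch, ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.refine(x)  # each stage refines its predecessor's output

stacked_second_network = nn.Sequential(*[UNet3D(32) for _ in range(4)])
```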
S203: Perform dimension reduction on the features contained in the first feature map to generate a second feature map; the second feature map includes two-dimensional features in the X-axis and Y-axis dimensions.
Specifically, the channel dimension and the Z-axis dimension of each feature of the first feature map are merged, so that the dimension of each feature of the first feature map consists of the X-axis dimension and the Y-axis dimension; the first feature map in which each feature consists of the X-axis and Y-axis dimensions is the second feature map. The first feature map is a three-dimensional feature map, but it must be two-dimensional when output to the detection subnet 103 for detection, so the first feature map needs to be dimension-reduced.
It should be noted that the channel of a feature represents the distribution data of that feature.
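As an illustrative sketch of this merge of the channel and Z-axis dimensions, with the tensor layout assumed to be (N, C, D, H, W):

```python
import torch

first_feature_map = torch.randn(1, 32, 9, 128, 128)  # (N, C, D, H, W)
n, c, d, h, w = first_feature_map.shape
second_feature_map = first_feature_map.reshape(n, c * d, h, w)
print(second_feature_map.shape)  # torch.Size([1, 288, 128, 128])
```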
S204: Detect the features of the second feature map, and display the position of each lesion detected in the second feature map and the confidence corresponding to each position.
Specifically, the second feature map is detected through the first detection sub-network, which detects the coordinates of the position of each lesion in the second feature map.
More specifically, the input second feature map is processed by multiple cascaded convolutional layers of the first detection sub-network, each containing a Y*Y convolution kernel; the position of each lesion in the second feature map is determined by successively obtaining the coordinates (x1, y1) of its upper-left corner and the coordinates (x2, y2) of its lower-right corner.
The second feature map is detected through the second detection sub-network, which detects the confidence corresponding to each lesion in the second feature map.
More specifically, the input second feature map is processed by multiple cascaded convolutional layers of the second detection sub-network, each containing a Y*Y convolution kernel; after the position of each lesion in the second feature map is determined from the corner coordinates (x1, y1) and (x2, y2), the confidence corresponding to that position is output.
In summary, the embodiments of the present disclosure can accurately detect lesions at multiple sites in a patient's body and achieve a preliminary whole-body cancer assessment of the patient.
It should be noted that, before feature extraction is performed on the first image to generate the first feature map containing the features of lesions, the following steps are also included:
A pre-stored three-dimensional image containing multiple lesion annotations is input to the first neural network, the lesion annotations being used to mark the lesions; and the parameters of the first neural network, the second neural network, the first detection sub-network and the second detection sub-network are trained by gradient descent; the position of each of the multiple lesions is output by the first detection sub-network.
Alternatively,
a three-dimensional image containing multiple lesion annotations is input to the second neural network, the lesion annotations being used to mark the lesions; and the parameters of the second neural network, the first detection sub-network and the second detection sub-network are trained by gradient descent; the position of each of the multiple lesions is output by the first detection sub-network.
To summarize the present disclosure: first, a first image comprising multiple sampling slices is acquired, the first image being a three-dimensional image with X-axis, Y-axis and Z-axis dimensions. Feature extraction is then performed on the first image to generate a first feature map containing the features of lesions, comprising three-dimensional features in the X-axis, Y-axis and Z-axis dimensions. The features contained in the first feature map are then dimension-reduced to generate a second feature map comprising two-dimensional features in the X-axis and Y-axis dimensions. Finally, the features of the second feature map are detected to obtain the position of each lesion in the second feature map and the confidence corresponding to the position. With the embodiments of the present disclosure, lesions at multiple sites in a patient's body can be detected accurately, enabling a preliminary whole-body cancer assessment of the patient.
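Tying steps S201 to S204 together, the following self-contained toy sketch traces the data flow end to end; every module is a trivial stand-in with assumed shapes, not the disclosure's networks:

```python
import torch
import torch.nn as nn

backbone = nn.Conv3d(1, 8, 3, padding=1)       # stands in for networks 101+102
box_head = nn.Conv2d(8 * 9, 4, 3, padding=1)   # stands in for detection subnet 1
conf_head = nn.Conv2d(8 * 9, 1, 3, padding=1)  # stands in for detection subnet 2

volume = torch.randn(1, 1, 9, 128, 128)        # S201: first image
feat3d = backbone(volume)                      # S202: first feature map (3D)
n, c, d, h, w = feat3d.shape
feat2d = feat3d.reshape(n, c * d, h, w)        # S203: second feature map (2D)
boxes = box_head(feat2d)                       # S204: lesion positions
conf = conf_head(feat2d).sigmoid()             # S204: confidences in [0, 1]
```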
It can be understood that, for related definitions and explanations not provided in the method embodiment of FIG. 2, reference may be made to the embodiment of FIG. 1, which will not be repeated here.
Referring to FIG. 3, which shows a lesion detection apparatus provided by the present disclosure. As shown in FIG. 3, the lesion detection apparatus 30 includes an acquisition unit 301, a first generation unit 302, a second generation unit 303 and a detection unit 304, where:
the acquisition unit 301 is configured to acquire a first image comprising multiple sampling slices, the first image being a three-dimensional image with X-axis, Y-axis and Z-axis dimensions;
the first generation unit 302 is configured to perform feature extraction on the first image to generate a first feature map containing the features and positions of lesions, the first feature map comprising three-dimensional features in the X-axis, Y-axis and Z-axis dimensions;
the second generation unit 303 is configured to perform dimension reduction on the features contained in the first feature map to generate a second feature map comprising two-dimensional features in the X-axis and Y-axis dimensions;
the detection unit 304 is configured to detect the second feature map to obtain the position of each lesion in the second feature map and the confidence corresponding to the position.
The acquisition unit 301 is specifically configured to:
resample an acquired CT image of a patient at a first sampling interval to generate the first image comprising multiple sampling slices.
The first generation unit 302 is specifically applicable to the following three cases:
Case 1: The first image is down-sampled through the first neural network to generate a third feature map.
The third feature map is down-sampled by the residual module of the second neural network to generate a fourth feature map.
The features of lesions at different scales in the fourth feature map are extracted through the DenseASPP module of the second neural network.
After the DenseASPP processing, a fourth preset feature map with the same resolution as the fourth feature map is generated, and the DenseASPP-processed feature map is up-sampled through the deconvolution layer and the residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map.
The third feature map is fused with the third preset feature map to generate a first feature map with the same resolution as the third preset feature map, and the fourth feature map is fused with the fourth preset feature map to generate a first feature map with the same resolution as the fourth preset feature map. The third and fourth preset feature maps each include the positions of lesions, which are used to generate the positions of lesions in the first feature map.
Case 2: The first image is down-sampled by the residual module of the second neural network to generate a fourth feature map.
The features of lesions at different scales in the fourth feature map are extracted through the DenseASPP module of the second neural network.
After the DenseASPP processing, the DenseASPP-processed feature map is up-sampled through the deconvolution layer and the residual module of the second neural network to generate a first preset feature map with the same resolution as the first image.
The first image is fused with the first preset feature map to generate a first feature map with the same resolution as the first preset feature map; the first preset feature map includes the positions of lesions, which are used to generate the positions of lesions in the first feature map.
Case 3: The first image is down-sampled through the first neural network to generate a third feature map.
The third feature map is down-sampled by the residual module of the second neural network to generate a fourth feature map.
The fourth feature map is down-sampled by the residual module of the second neural network to generate a fifth feature map.
The features of lesions at different scales in the fifth feature map are extracted through the DenseASPP module of the second neural network.
After the DenseASPP processing, a fifth preset feature map with the same resolution as the fifth feature map is generated; the DenseASPP-processed feature map is up-sampled through the deconvolution layer and the residual module of the second neural network to generate a fourth preset feature map with the same resolution as the fourth feature map; or it is up-sampled through the deconvolution layer and residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map.
The third feature map is fused with the third preset feature map to generate a first feature map with the same resolution as the third preset feature map; the fourth feature map is fused with the fourth preset feature map to generate a first feature map with the same resolution as the fourth preset feature map; and the fifth feature map is fused with the fifth preset feature map to generate a first feature map with the same resolution as the fifth preset feature map. The third, fourth and fifth preset feature maps each include the positions of lesions, which are used to generate the positions of lesions in the first feature map.
It should be noted that the first neural network includes a convolutional layer and a residual module cascaded with the convolutional layer.
The second neural network includes a 3D U-Net network, which may include convolutional layers, deconvolution layers, residual modules and the DenseASPP module.
Optionally, the second neural network may include multiple stacked 3D U-Net networks. Detection with multiple 3D U-Net networks can improve detection accuracy; the number of 3D U-Net networks in the embodiments of the present disclosure is only an example.
It should be noted that the residual module may include a convolutional layer, a batch normalization (BN) layer, a ReLU activation function and a max-pooling layer.
The second generation unit 303 is specifically configured to: merge the channel dimension and the Z-axis dimension of each feature of the first feature map, so that the dimension of each feature of the first feature map consists of the X-axis dimension and the Y-axis dimension; the first feature map in which each feature consists of the X-axis and Y-axis dimensions is the second feature map.
The detection unit 304 is specifically configured to:
detect the second feature map through the first detection sub-network to detect the coordinates of the position of each lesion in the second feature map;
detect the second feature map through the second detection sub-network to detect the confidence corresponding to each lesion in the second feature map.
It should be noted that the first detection sub-network includes multiple convolutional layers, each of which is connected to a ReLU activation function.
The second detection sub-network includes multiple convolutional layers, each of which is connected to a ReLU activation function.
In addition to the acquisition unit 301, the first generation unit 302, the second generation unit 303 and the detection unit 304, the lesion detection apparatus 30 further includes a display unit.
The display unit is specifically configured to display the position of each lesion detected by the detection unit 304 and the confidence of the position.
In addition to the acquisition unit 301, the first generation unit 302, the second generation unit 303 and the detection unit 304, the lesion detection apparatus 30 further includes a training unit.
The training unit is specifically configured to:
before the first generation unit performs feature extraction on the first image to generate the first feature map containing the features and positions of lesions, input a pre-stored three-dimensional image containing multiple lesion annotations into the first neural network, the lesion annotations being used to mark the lesions; and train the parameters of the first neural network, the second neural network, the first detection sub-network and the second detection sub-network by gradient descent; the position of each of the multiple lesions is output by the first detection sub-network.
Alternatively,
before the first generation unit performs feature extraction on the first image to generate the first feature map containing the features and positions of lesions, input a three-dimensional image containing multiple lesion annotations into the second neural network, the lesion annotations being used to mark the lesions; and train the parameters of the second neural network, the first detection sub-network and the second detection sub-network by gradient descent.
It should be understood that the lesion detection apparatus 30 is only an example provided by the embodiments of the present disclosure; the lesion detection apparatus 30 may have more or fewer components than shown, may combine two or more components, or may be implemented with a different configuration of components.
It can be understood that, for the specific implementation of the functional blocks included in the lesion detection apparatus 30 of FIG. 3, reference may be made to the method embodiment described above in FIG. 2, which will not be repeated here.
FIG. 4 is a schematic structural diagram of a lesion detection device provided by the present disclosure. In the embodiments of the present disclosure, the lesion detection device may include various devices such as a mobile phone, a tablet computer, a personal digital assistant (PDA), a mobile Internet device (MID) and a smart wearable device (such as a smart watch or a smart bracelet), which the embodiments of the present disclosure do not limit. As shown in FIG. 4, the lesion detection device 40 may include a baseband chip 401, a memory 402 (one or more computer-readable storage media) and a peripheral system 403. These components may communicate over one or more communication buses 404.
The baseband chip 401 includes one or more processors (CPU) 405 and one or more graphics processors (GPU) 406. The graphics processor 406 may be used to process an input normal map.
The memory 402 is coupled to the processor 405 and may be used to store various software programs and/or multiple sets of instructions. In specific implementations, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices or other non-volatile solid-state storage devices. The memory 402 may store an operating system (hereinafter referred to as the system), such as an embedded operating system like ANDROID, IOS, WINDOWS or LINUX. The memory 402 may also store a network communication program, which may be used to communicate with one or more additional devices, one or more devices or one or more network devices. The memory 402 may also store a user interface program, which can vividly display the content of an application program through a graphical operation interface, and receive user control operations on the application program through input controls such as menus, dialog boxes and keys.
It can be understood that the memory 402 may be used to store the program code implementing the lesion detection method.
It can be understood that the processor 405 may be used to invoke the program code, stored in the memory 402, for executing the lesion detection method.
The memory 402 may also store one or more application programs. As shown in FIG. 4, these applications may include: social applications (such as Facebook), image management applications (such as albums), map applications (such as Google Maps), browsers (such as Safari or Google Chrome), and the like.
The peripheral system 403 is mainly used to implement the interactive function between the lesion detection device 40 and the user/external environment, and mainly includes the input and output devices of the lesion detection device 40. In specific implementations, the peripheral system 403 may include a display screen controller 407, a camera controller 408, a mouse-keyboard controller 409 and an audio controller 410, each of which may be coupled to its corresponding peripheral device (the display screen 411, the camera 412, the mouse-keyboard 413 and the audio circuit 414, respectively). In some embodiments, the display screen may be a display screen configured with a self-capacitive floating touch panel, or a display screen configured with an infrared floating touch panel. In some embodiments, the camera 412 may be a 3D camera. It should be noted that the peripheral system 403 may also include other I/O peripherals.
It can be understood that the display screen 411 may be used to display the position of each detected lesion and the confidence of the position.
It should be understood that the lesion detection device 40 is only an example provided by the embodiments of the present disclosure; the lesion detection device 40 may have more or fewer components than shown, may combine two or more components, or may be implemented with a different configuration of components.
It can be understood that, for the specific implementation of the functional modules included in the lesion detection device 40 of FIG. 4, reference may be made to the method embodiment of FIG. 2, which will not be repeated here.
The present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method described above.
The computer-readable storage medium may be an internal storage unit of the device described in any of the foregoing embodiments, such as the hard disk or memory of the device. The computer-readable storage medium may also be an external storage device of the device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the device. Further, the computer-readable storage medium may include both an internal storage unit of the device and an external storage device. The computer-readable storage medium is used to store the computer program and other programs and data required by the device, and may also be used to temporarily store data that has been or will be output.
The present disclosure also provides a computer program product including a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps of any method described in the above method embodiments. The computer program product may be a software installation package, and the computer includes an electronic apparatus.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware, computer software or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present disclosure.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the devices and units described above may refer to the corresponding processes in the foregoing method embodiments and will not be repeated here.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed devices and methods may be implemented in other ways.
The device embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may also be electrical, mechanical or other forms of connection.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present disclosure.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a target blockchain node device, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above are only specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present disclosure, and these modifications or replacements shall all fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (29)

  1. A lesion detection method, comprising:
    acquiring a first image comprising multiple sampling slices, the first image being a three-dimensional image with an X-axis dimension, a Y-axis dimension and a Z-axis dimension;
    performing feature extraction on the first image to generate a first feature map containing features and positions of lesions, the first feature map comprising three-dimensional features in the X-axis, Y-axis and Z-axis dimensions;
    performing dimension reduction on the features contained in the first feature map to generate a second feature map comprising two-dimensional features in the X-axis and Y-axis dimensions;
    detecting the second feature map to obtain the position of each lesion in the second feature map and the confidence corresponding to the position.
  2. The method according to claim 1, wherein acquiring the first image comprising multiple sampling slices comprises:
    resampling an acquired CT image of a patient at a first sampling interval to generate the first image comprising multiple sampling slices.
  3. The method according to claim 1, wherein performing feature extraction on the first image to generate the first feature map containing features and positions of lesions comprises:
    down-sampling the first image through a first neural network to generate a third feature map;
    down-sampling the third feature map through a residual module of a second neural network to generate a fourth feature map;
    extracting features of lesions at different scales in the fourth feature map through a DenseASPP module of the second neural network;
    after the DenseASPP processing, generating a fourth preset feature map with the same resolution as the fourth feature map, and up-sampling the DenseASPP-processed feature map through a deconvolution layer and the residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map;
    fusing the third feature map with the third preset feature map to generate a first feature map with the same resolution as the third preset feature map, and fusing the fourth feature map with the fourth preset feature map to generate a first feature map with the same resolution as the fourth preset feature map; the third preset feature map and the fourth preset feature map each comprise positions of lesions, which are used to generate the positions of lesions in the first feature map.
  4. The method according to claim 1, wherein performing feature extraction on the first image to generate the first feature map containing features and positions of lesions comprises:
    down-sampling the first image through a residual module of a second neural network to generate a fourth feature map;
    extracting features of lesions at different scales in the fourth feature map through a DenseASPP module of the second neural network;
    after the DenseASPP processing, up-sampling the DenseASPP-processed feature map through a deconvolution layer and the residual module of the second neural network to generate a first preset feature map with the same resolution as the first image;
    fusing the first image with the first preset feature map to generate a first feature map with the same resolution as the first preset feature map; the first preset feature map comprises positions of lesions, which are used to generate the positions of lesions in the first feature map.
  5. The method according to claim 1, wherein performing feature extraction on the first image to generate the first feature map containing features and positions of lesions comprises:
    down-sampling the first image through a first neural network to generate a third feature map;
    down-sampling the third feature map through a residual module of a second neural network to generate a fourth feature map;
    down-sampling the fourth feature map through the residual module of the second neural network to generate a fifth feature map with a resolution smaller than that of the fourth feature map;
    extracting features of lesions at different scales in the fifth feature map through a DenseASPP module of the second neural network;
    after the DenseASPP processing, generating a fifth preset feature map with the same resolution as the fifth feature map; up-sampling the DenseASPP-processed feature map through a deconvolution layer and the residual module of the second neural network to generate a fourth preset feature map with the same resolution as the fourth feature map; or up-sampling the DenseASPP-processed feature map through the deconvolution layer and residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map;
    fusing the third feature map with the third preset feature map to generate a first feature map with the same resolution as the third preset feature map; fusing the fourth feature map with the fourth preset feature map to generate a first feature map with the same resolution as the fourth preset feature map; and fusing the fifth feature map with the fifth preset feature map to generate a first feature map with the same resolution as the fifth preset feature map; the third, fourth and fifth preset feature maps each comprise positions of lesions, which are used to generate the positions of lesions in the first feature map.
  6. The method according to claim 3 or 5, wherein
    the first neural network comprises a convolutional layer and a residual module cascaded with the convolutional layer; and
    the second neural network comprises a 3D U-Net network, the 3D U-Net network comprising convolutional layers, deconvolution layers, residual modules and the DenseASPP module.
  7. The method according to claim 5 or 6, wherein:
    the second neural network is a stack of multiple 3D U-Net networks.
  8. The method according to claim 5 or 6, wherein:
    the residual module comprises a convolutional layer, a batch normalization layer, a ReLU activation function and a max-pooling layer.
  9. The method according to claim 1, wherein performing dimension reduction on the features contained in the first feature map to generate the second feature map comprises:
    merging the channel dimension and the Z-axis dimension of each feature of the first feature map, so that the dimension of each feature consists of the X-axis dimension and the Y-axis dimension; the first feature map in which each feature consists of the X-axis and Y-axis dimensions is the second feature map.
  10. The method according to claim 1, wherein detecting the second feature map comprises:
    detecting the second feature map through a first detection sub-network to detect the coordinates of the position of each lesion in the second feature map;
    detecting the second feature map through a second detection sub-network to detect the confidence corresponding to each lesion in the second feature map.
  11. The method according to claim 10, wherein
    the first detection sub-network comprises multiple convolutional layers, each of which is connected to a ReLU activation function; and
    the second detection sub-network comprises multiple convolutional layers, each of which is connected to a ReLU activation function.
  12. The method according to any one of claims 1, 2, 3, 5, 6, 7, 8, 9, 10 and 11, further comprising, before performing feature extraction on the first image to generate the first feature map containing features and positions of lesions:
    inputting a pre-stored three-dimensional image containing multiple lesion annotations into the first neural network, the lesion annotations being used to mark the lesions; and training the parameters of the first neural network, the second neural network, the first detection sub-network and the second detection sub-network by gradient descent; the position of each of the multiple lesions is output by the first detection sub-network.
  13. The method according to any one of claims 1, 2, 4, 7, 9, 10 and 11, further comprising, before performing feature extraction on the first image to generate the first feature map containing features and positions of lesions:
    inputting a three-dimensional image containing multiple lesion annotations into the second neural network, the lesion annotations being used to mark the lesions; and training the parameters of the second neural network, the first detection sub-network and the second detection sub-network by gradient descent; the position of each of the multiple lesions is output by the first detection sub-network.
  14. A lesion detection apparatus, comprising:
    an acquisition unit configured to acquire a first image comprising multiple sampling slices, the first image being a three-dimensional image with an X-axis dimension, a Y-axis dimension and a Z-axis dimension;
    a first generation unit configured to perform feature extraction on the first image to generate a first feature map containing features and positions of lesions, the first feature map comprising three-dimensional features in the X-axis, Y-axis and Z-axis dimensions;
    a second generation unit configured to perform dimension reduction on the features contained in the first feature map to generate a second feature map comprising two-dimensional features in the X-axis and Y-axis dimensions;
    a detection unit configured to detect the second feature map to obtain the position of each lesion in the second feature map and the confidence corresponding to the position.
  15. The apparatus according to claim 14, wherein the acquisition unit is specifically configured to:
    resample an acquired CT image of a patient at a first sampling interval to generate the first image comprising multiple sampling slices.
  16. The apparatus according to claim 14, wherein the first generation unit is specifically configured to:
    down-sample the first image through a first neural network to generate a third feature map;
    down-sample the third feature map through a residual module of a second neural network to generate a fourth feature map;
    extract features of lesions at different scales in the fourth feature map through a DenseASPP module of the second neural network;
    after the DenseASPP processing, generate a fourth preset feature map with the same resolution as the fourth feature map, and up-sample the DenseASPP-processed feature map through a deconvolution layer and the residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map;
    fuse the third feature map with the third preset feature map to generate a first feature map with the same resolution as the third preset feature map, and fuse the fourth feature map with the fourth preset feature map to generate a first feature map with the same resolution as the fourth preset feature map; the third preset feature map and the fourth preset feature map each comprise positions of lesions, which are used to generate the positions of lesions in the first feature map.
  17. The apparatus according to claim 14, wherein the first generation unit is specifically configured to:
    down-sample the first image through a residual module of a second neural network to generate a fourth feature map;
    extract features of lesions at different scales in the fourth feature map through a DenseASPP module of the second neural network;
    after the DenseASPP processing, up-sample the DenseASPP-processed feature map through a deconvolution layer and the residual module of the second neural network to generate a first preset feature map with the same resolution as the first image;
    fuse the first image with the first preset feature map to generate a first feature map with the same resolution as the first preset feature map; the first preset feature map comprises positions of lesions, which are used to generate the positions of lesions in the first feature map.
  18. The apparatus according to claim 14, wherein the first generation unit is specifically configured to:
    down-sample the first image through a first neural network to generate a third feature map with a resolution smaller than that of the first image;
    down-sample the third feature map through a residual module of a second neural network to generate a fourth feature map;
    down-sample the fourth feature map through the residual module of the second neural network to generate a fifth feature map;
    extract features of lesions at different scales in the fifth feature map through a DenseASPP module of the second neural network;
    after the DenseASPP processing, generate a fifth preset feature map with the same resolution as the fifth feature map; up-sample the DenseASPP-processed feature map through a deconvolution layer and the residual module of the second neural network to generate a fourth preset feature map with the same resolution as the fourth feature map; or up-sample the DenseASPP-processed feature map through the deconvolution layer and residual module of the second neural network to generate a third preset feature map with the same resolution as the third feature map;
    fuse the third feature map with the third preset feature map to generate a first feature map with the same resolution as the third preset feature map; fuse the fourth feature map with the fourth preset feature map to generate a first feature map with the same resolution as the fourth preset feature map; and fuse the fifth feature map with the fifth preset feature map to generate a first feature map with the same resolution as the fifth preset feature map; the third, fourth and fifth preset feature maps each comprise positions of lesions, which are used to generate the positions of lesions in the first feature map.
  19. The apparatus according to claim 16 or 18, wherein
    the first neural network comprises a convolutional layer and a residual module cascaded with the convolutional layer; and
    the second neural network comprises a 3D U-Net network, the 3D U-Net network comprising convolutional layers, deconvolution layers, residual modules and the DenseASPP module.
  20. The apparatus according to claim 18 or 19, wherein
    the second neural network is a stack of multiple 3D U-Net networks.
  21. The apparatus according to claim 18 or 19, wherein
    the residual module comprises a convolutional layer, a batch normalization layer, a ReLU activation function and a max-pooling layer.
  22. The apparatus according to claim 14, wherein
    the second generation unit is specifically configured to: merge the channel dimension and the Z-axis dimension of each feature of the first feature map, so that the dimension of each feature consists of the X-axis dimension and the Y-axis dimension; the first feature map in which each feature consists of the X-axis and Y-axis dimensions is the second feature map.
  23. The apparatus according to claim 14, wherein
    the detection unit is specifically configured to:
    detect the second feature map through a first detection sub-network to detect the coordinates of the position of each lesion in the second feature map;
    detect the second feature map through a second detection sub-network to detect the confidence corresponding to each lesion in the second feature map.
  24. The apparatus according to claim 23, wherein
    the first detection sub-network comprises multiple convolutional layers, each of which is connected to a ReLU activation function; and
    the second detection sub-network comprises multiple convolutional layers, each of which is connected to a ReLU activation function.
  25. The apparatus according to any one of claims 14 to 24, further comprising:
    a training unit specifically configured to:
    before the first generation unit performs feature extraction on the first image to generate the first feature map containing features and positions of lesions, input a pre-stored three-dimensional image containing multiple lesion annotations into the first neural network, the lesion annotations being used to mark the lesions; and train the parameters of the first neural network, the second neural network, the first detection sub-network and the second detection sub-network by gradient descent; the position of each of the multiple lesions is output by the first detection sub-network.
  26. The apparatus according to any one of claims 14 to 24, further comprising:
    a training unit specifically configured to:
    before the first generation unit performs feature extraction on the first image to generate the first feature map containing features and positions of lesions, input a three-dimensional image containing multiple lesion annotations into the second neural network, the lesion annotations being used to mark the lesions; and train the parameters of the second neural network, the first detection sub-network and the second detection sub-network by gradient descent; the position of each of the multiple lesions is output by the first detection sub-network.
  27. A lesion detection device, comprising a display, a memory and a processor coupled to the memory, wherein the display is configured to display the position of a lesion and the confidence corresponding to the position, the memory is configured to store application program code, and the processor is configured to invoke the program code to execute the lesion detection method according to any one of claims 1 to 13.
  28. A computer-readable storage medium, wherein the computer storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to execute the lesion detection method according to any one of claims 1 to 13.
  29. A computer program comprising computer-readable code which, when run in an electronic device, causes a processor in the electronic device to execute the lesion detection method according to any one of claims 1 to 13.
PCT/CN2019/114452 2018-12-07 2019-10-30 Lesion detection method, apparatus, device and storage medium WO2020114158A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2021500548A JP7061225B2 (ja) 2018-12-07 2019-10-30 Lesion detection method, apparatus, device and storage medium
KR1020207038088A KR20210015972A (ko) 2018-12-07 2019-10-30 Lesion detection method, apparatus, device and storage medium
SG11202013074SA SG11202013074SA (en) 2018-12-07 2019-10-30 Method, apparatus and device for detecting lesion, and storage medium
US17/134,771 US20210113172A1 (en) 2018-12-07 2020-12-28 Lesion Detection Method, Apparatus and Device, and Storage Medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811500631.4A CN109754389B (zh) 2018-12-07 2018-12-07 Image processing method, apparatus and device
CN201811500631.4 2018-12-07

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/134,771 Continuation US20210113172A1 (en) 2018-12-07 2020-12-28 Lesion Detection Method, Apparatus and Device, and Storage Medium

Publications (1)

Publication Number Publication Date
WO2020114158A1 true WO2020114158A1 (zh) 2020-06-11

Family

ID=66402643

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/114452 WO2020114158A1 (zh) 2018-12-07 2019-10-30 Lesion detection method, apparatus, device and storage medium

Country Status (7)

Country Link
US (1) US20210113172A1 (zh)
JP (1) JP7061225B2 (zh)
KR (1) KR20210015972A (zh)
CN (2) CN109754389B (zh)
SG (1) SG11202013074SA (zh)
TW (1) TWI724669B (zh)
WO (1) WO2020114158A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109754389B (zh) 2018-12-07 2021-08-24 北京市商汤科技开发有限公司 Image processing method, apparatus and device
CN110175993A (zh) * 2019-05-27 2019-08-27 西安交通大学医学院第一附属医院 FPN-based Faster R-CNN pulmonary tuberculosis sign detection system and method
CN110533637B (zh) * 2019-08-02 2022-02-11 杭州依图医疗技术有限公司 Object detection method and apparatus
CN110580948A (zh) * 2019-09-12 2019-12-17 杭州依图医疗技术有限公司 Medical image display method and display device
CN111402252B (zh) * 2020-04-02 2021-01-15 和宇健康科技股份有限公司 Precision medical image analysis method and robotic surgery system
CN111816281B (zh) * 2020-06-23 2024-05-14 无锡祥生医疗科技股份有限公司 Ultrasound image query apparatus
CN112116562A (zh) * 2020-08-26 2020-12-22 重庆市中迪医疗信息科技股份有限公司 Method, apparatus, device and medium for detecting lesions based on lung image data
CN112258564B (zh) * 2020-10-20 2022-02-08 推想医疗科技股份有限公司 Method and apparatus for generating a fused feature set
CN112017185B (zh) * 2020-10-30 2021-02-05 平安科技(深圳)有限公司 Lesion segmentation method, apparatus and storage medium
US11830622B2 (en) * 2021-06-11 2023-11-28 International Business Machines Corporation Processing multimodal images of tissue for medical evaluation
CN114943717B (zh) * 2022-05-31 2023-04-07 北京医准智能科技有限公司 Breast lesion detection method and apparatus, electronic device and readable storage medium
CN115170510B (zh) * 2022-07-04 2023-04-07 北京医准智能科技有限公司 Lesion detection method and apparatus, electronic device and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150087982A1 (en) * 2013-09-21 2015-03-26 General Electric Company Method and system for lesion detection in ultrasound images
CN106780460A (zh) * 2016-12-13 2017-05-31 杭州健培科技有限公司 Automatic pulmonary nodule detection system for chest CT images
CN108171709A (zh) * 2018-01-30 2018-06-15 北京青燕祥云科技有限公司 Detection method, apparatus and implementation apparatus for liver space-occupying lesion regions
CN108257674A (zh) * 2018-01-24 2018-07-06 龙马智芯(珠海横琴)科技有限公司 Disease prediction method and apparatus, device, computer-readable storage medium
CN109754389A (zh) * 2018-12-07 2019-05-14 北京市商汤科技开发有限公司 Lesion detection method, apparatus and device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5974108A (en) * 1995-12-25 1999-10-26 Kabushiki Kaisha Toshiba X-ray CT scanning apparatus
US7747057B2 (en) * 2006-05-26 2010-06-29 General Electric Company Methods and apparatus for BIS correction
US9208556B2 (en) * 2010-11-26 2015-12-08 Quantitative Insights, Inc. Method, system, software and medium for advanced intelligent image analysis and display of medical images and information
WO2016054779A1 (en) * 2014-10-09 2016-04-14 Microsoft Technology Licensing, Llc Spatial pyramid pooling networks for image processing
CA2994713C (en) 2015-08-15 2019-02-12 Salesforce.Com, Inc. Three-dimensional (3d) convolution with 3d batch normalization
JP6849966B2 (ja) * 2016-11-21 2021-03-31 東芝エネルギーシステムズ株式会社 Medical image processing apparatus, medical image processing method, medical image processing program, moving object tracking apparatus, and radiotherapy system
KR101879207B1 (ko) 2016-11-22 2018-07-17 주식회사 루닛 Method and apparatus for object recognition based on weakly supervised learning
JP7054787B2 (ja) * 2016-12-22 2022-04-15 パナソニックIpマネジメント株式会社 Control method, information terminal, and program
CN108022238B (zh) 2017-08-09 2020-07-03 深圳科亚医疗科技有限公司 Method, computer storage medium and system for detecting objects in 3D images
CN108447046B (zh) * 2018-02-05 2019-07-26 龙马智芯(珠海横琴)科技有限公司 Lesion detection method and apparatus, computer-readable storage medium
CN108764241A (zh) * 2018-04-20 2018-11-06 平安科技(深圳)有限公司 Method, apparatus, computer device and storage medium for segmenting the proximal femur
CN108852268A (zh) * 2018-04-23 2018-11-23 浙江大学 Real-time marking system and method for abnormal features in digestive endoscopy images
CN108717569B (zh) * 2018-05-16 2022-03-22 中国人民解放军陆军工程大学 Dilated fully convolutional neural network apparatus and construction method thereof


Also Published As

Publication number Publication date
CN109754389A (zh) 2019-05-14
CN109754389B (zh) 2021-08-24
TW202032579A (zh) 2020-09-01
TWI724669B (zh) 2021-04-11
SG11202013074SA (en) 2021-01-28
US20210113172A1 (en) 2021-04-22
KR20210015972A (ko) 2021-02-10
CN111292301A (zh) 2020-06-16
JP2021531565A (ja) 2021-11-18
JP7061225B2 (ja) 2022-04-27

Similar Documents

Publication Publication Date Title
WO2020114158A1 (zh) Lesion detection method, apparatus, device and storage medium
CN111815755B (zh) Method, apparatus and terminal device for determining the occluded region of a virtual object
Andriole et al. Optimizing analysis, visualization, and navigation of large image data sets: one 5000-section CT scan can ruin your whole day
JP7337104B2 (ja) Augmented-reality model animation multi-plane interaction method, apparatus, device and storage medium
US8836703B2 (en) Systems and methods for accurate measurement with a mobile device
CN114779934A (zh) Interaction with virtual objects based on determined constraints
US11734899B2 (en) Headset-based interface and menu system
CN110276408B (zh) 3D image classification method, apparatus, device and storage medium
CN105096353B (zh) Image processing method and apparatus
EP4170673A1 (en) Auto-focus tool for multimodality image review
WO2020223940A1 (zh) Pose prediction method, computer device and storage medium
JP2019536505A (ja) Context-dependent magnifying glass
CN107480673B (zh) Method and apparatus for determining a region of interest in a medical image, and image editing system
CN115515487A (zh) Vision-based rehabilitation training system based on 3D human pose estimation using multi-view images
WO2021238151A1 (zh) Image annotation method and apparatus, electronic device, storage medium and computer program
Borgbjerg Web‐based imaging viewer for real‐color volumetric reconstruction of human visible project and DICOM datasets
CN113129362A (zh) Method and apparatus for acquiring three-dimensional coordinate data
TW202125406A (zh) Image processing method, system and non-transitory computer-readable storage medium
WO2023109086A1 (zh) Character recognition method, apparatus, device and storage medium
WO2018209515A1 (zh) Display system and method
Tang et al. The implementation of an AR (augmented reality) approach to support mammographic interpretation training: an initial feasibility study
CN112488909A (zh) Multi-face image processing method, apparatus, device and storage medium
US20240046555A1 (en) Arcuate Imaging for Altered Reality Visualization
CN113420721B (zh) Method and apparatus for annotating image keypoints
US20220406017A1 (en) Health management system, and human body information display method and human body model generation method applied to same

Legal Events

Code Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 19892654; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 20207038088; Country of ref document: KR; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2021500548; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
32PN EP: public notification in the EP bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28.09.2021))
122 EP: PCT application non-entry in European phase (Ref document number: 19892654; Country of ref document: EP; Kind code of ref document: A1)