WO2021057148A1 - Brain tissue layering method and device based on neural network, and computer device - Google Patents

Brain tissue layering method and device based on neural network, and computer device

Info

Publication number
WO2021057148A1
WO2021057148A1 (PCT/CN2020/098936)
Authority
WO
WIPO (PCT)
Prior art keywords
brain
image
instance
neural network
information
Prior art date
Application number
PCT/CN2020/098936
Other languages
French (fr)
Chinese (zh)
Inventor
卓柏全
周鑫
吕传峰
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021057148A1 publication Critical patent/WO2021057148A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30016 Brain

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to a brain tissue layering method, device, computer equipment, and storage medium based on neural networks.
  • an embodiment of the present application provides a brain tissue layering method based on a neural network, which includes the following steps:
  • brain CT image information is acquired;
  • features of the brain CT image information are extracted through a pre-trained brain cutting convolutional neural network to obtain a feature map of the brain CT image;
  • after a candidate frame alignment operation is performed on the feature map of the brain CT image, the semantic information of the instance category and the pixel-level position information of the instance are obtained;
  • the semantic information of the instance category and the pixel-level position information of the instance are input to the pre-trained layered neural network, and the layered result of the brain CT image is output.
  • an embodiment of the present application also provides a brain tissue layering device based on a neural network, which adopts the following technical solutions:
  • The neural network-based brain tissue layering device includes:
  • a first acquisition module, used to acquire brain CT image information;
  • an extraction module, used to extract the features of the brain CT image information through a pre-trained brain cutting convolutional neural network to obtain a feature map of the brain CT image;
  • the second acquiring module is used to acquire the semantic information of the instance category and the pixel-level position information of the instance after performing the candidate frame alignment operation on the feature map of the brain CT image;
  • the output module is used to input the semantic information of the instance category and the pixel-level position information of the instance into the pre-trained layered neural network, and output the layered result of the brain CT image.
  • the embodiments of the present application also provide a computer device, which adopts the following technical solutions:
  • the computer device includes a memory and a processor, where a computer program is stored in the memory;
  • when the processor executes the computer program, the steps of the neural network-based brain tissue layering method are implemented:
  • brain CT image information is acquired;
  • features of the brain CT image information are extracted through the pre-trained brain cutting convolutional neural network to obtain the feature map of the brain CT image;
  • after the candidate frame alignment operation is performed on the feature map, the semantic information of the instance category and the pixel-level position information of the instance are obtained;
  • the semantic information of the instance category and the pixel-level position information of the instance are input to the pre-trained layered neural network, and the layered result of the brain CT image is output.
  • the embodiments of the present application also provide a computer-readable storage medium, which adopts the following technical solution:
  • the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the neural network-based brain tissue layering method are realized:
  • brain CT image information is acquired;
  • features of the brain CT image information are extracted through the pre-trained brain cutting convolutional neural network to obtain the feature map of the brain CT image;
  • after the candidate frame alignment operation is performed on the feature map, the semantic information of the instance category and the pixel-level position information of the instance are obtained;
  • the semantic information of the instance category and the pixel-level position information of the instance are input to the pre-trained layered neural network, and the layered result of the brain CT image is output.
  • In this embodiment, brain CT image information is acquired; features of the brain CT image information are extracted through the pre-trained brain cutting convolutional neural network to obtain the feature map of the brain CT image; after the candidate frame alignment operation is performed on the feature map, the semantic information of the instance category and the pixel-level position information of the instance are obtained; the semantic information and position information are input to the pre-trained layered neural network, and the layering result of the brain CT image is output.
  • The pre-trained brain cutting convolutional neural network extracts features from the brain CT image information to obtain the feature map.
  • After the candidate frame alignment operation, the semantic information of the instance category and the pixel-level position information of the instance are obtained as the brain segmentation result.
  • The semantic information and pixel-level position information of the instances are then passed through the pre-trained layered neural network to obtain the brain layering result of the brain CT image.
  • Figure 1 is an exemplary system architecture diagram to which the present application can be applied;
  • Figure 2 is a flowchart of an embodiment of a neural network-based brain tissue layering method according to the present application;
  • Figure 3 is a flowchart of a specific implementation of step 202 in Figure 2;
  • Figure 4 is a flowchart of a specific implementation of step 203 in Figure 2;
  • Figure 5 is a flowchart of a specific implementation of step 204 in Figure 2;
  • Figure 6 is a schematic structural diagram of an embodiment of a neural network-based brain tissue layering apparatus according to the present application;
  • Figure 7 is a schematic structural diagram of an embodiment of a computer device according to the present application.
  • FIG. 1 is a system architecture diagram that may be used in this application.
  • the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105.
  • the network 104 is used to provide a medium for communication links between the terminal devices 101, 102, 103 and the server 105.
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, and so on.
  • the user can use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages, data, and so on.
  • the terminal devices 101, 102, 103 may be installed with various communication client applications, such as web browser applications, shopping applications, search applications, instant messaging tools, email clients, social platform software, etc.
  • the terminal devices 101, 102, 103 can be various electronic devices that have a display screen and support web browsing, including but not limited to smart phones, tablets, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptops, desktop computers, etc.
  • the server 105 may be a server that provides various services, for example, a background server that provides support for pages displayed on the terminal devices 101, 102, and 103.
  • the neural network-based brain tissue layering method provided in the embodiments of the present application is generally executed by a server/terminal device.
  • the neural network-based brain tissue layering device is generally disposed in the server/terminal device.
  • The numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative; there can be any number of terminal devices, networks, and servers according to implementation needs.
  • the brain tissue layering method based on neural network includes the following steps:
  • Step 201: Obtain brain CT image information.
  • the electronic device (such as the server/terminal device shown in FIG. 1) on which the neural network-based brain tissue layering method runs can obtain brain CT image information through a wired connection or a wireless connection.
  • the above-mentioned wireless connection methods can include but are not limited to 3G/4G connection, WiFi connection, Bluetooth connection, WiMAX connection, Zigbee connection, UWB (ultra wideband) connection, as well as other wireless connection methods that are currently known or developed in the future.
  • the above-mentioned brain CT image information can be obtained by scanning the target object with a CT machine and then acquired from the CT machine through the above wired or wireless connection; it can also be obtained from a data package exported by the CT machine, or by extracting brain CT image data stored in a database in DICOM format.
  • Through a wired or wireless network connection, the brain CT image information of multiple CT machines can be acquired at the same time, improving data transmission capacity.
  • Step 202: Extract the features of the brain CT image information through the pre-trained brain cutting convolutional neural network to obtain the feature map of the brain CT image.
  • A convolutional neural network (CNN) is a class of feedforward neural networks that involve convolution computations and have a deep structure; it is one of the representative algorithms of deep learning.
  • A convolutional neural network includes a data input layer, convolutional layers, ReLU activation layers, pooling layers, and fully connected layers; its purpose is to extract features from the input with a given model and then classify, identify, predict, or make decisions about the input based on those features.
  • The above-mentioned brain cutting convolutional neural network performs convolution computations on the input brain CT image data to obtain the corresponding cutting feature map. It should be noted that this feature map is intermediate image data within the brain cutting convolutional neural network, not the final brain cutting image data.
  • Step 203: After performing a candidate frame alignment operation on the feature map of the brain CT image, obtain the semantic information of the instance category and the pixel-level position information of the instance.
  • In this embodiment, after the feature map of the brain CT image is obtained, a candidate frame is generated for each pixel and a candidate frame alignment operation is performed.
  • The candidate frame is the position of a pixel on the feature map and can be generated directly by a Region Proposal Network (RPN). A fully convolutional network (FCN) then generates an instance mask for each aligned candidate frame, segmenting the instances. Finally, the mask-segmented instances are input to the fully connected layer network to obtain the semantic information of the instance categories and the pixel-level position information of the instances as the brain cutting result; the semantic information includes the category label of each instance, and the position information includes the coordinates of the instance on the feature map.
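  • To make the data flow of this segmentation stage concrete, the Python sketch below strings the stages together in the order just described. It is an illustration only, not part of the patent text: the rpn, align, mask_branch, and box_head components are hypothetical placeholders (Mask R-CNN-style), not implementations given by the patent.

```python
def segment_brain_ct(feature_map, rpn, align, mask_branch, box_head):
    """Sketch of the segmentation stage: candidate frames from an RPN,
    candidate-frame alignment, FCN instance masks, then a fully connected
    head that yields semantic and pixel-level position information."""
    proposals = rpn(feature_map)              # one candidate frame per pixel
    rois = align(feature_map, proposals)      # candidate-frame alignment (RoIAlign-style)
    masks = mask_branch(rois)                 # instance masks from the FCN branch
    categories, positions = box_head(rois)    # category labels + coordinates
    return categories, positions, masks
```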
  • Step 204: Input the semantic information of the instance category and the pixel-level position information of the instance into the pre-trained layered neural network, and output the layering result of the brain CT image.
  • In this embodiment, the above layered neural network includes convolutional layers and fully connected layers.
  • The convolutional layers perform secondary feature extraction on the segmented instances to obtain highly separable local features of each instance; these local features are then input to the fully connected layers for feature combination and classified by that layer's logistic regression, and the resulting category label is output as the layering result of the above brain CT image. Sharing the instance feature information extracted by the brain cutting convolutional neural network improves the accuracy of brain layering.
  • It should be noted that the above brain cutting convolutional neural network and layered neural network need to be pre-trained; that is, after each neural network model is constructed, a training data set is fed to the model until its output meets expectations or its error is as small as possible.
  • The acquired sample data set needs to be preprocessed, for example by positive/negative example sampling and filtering.
  • In this embodiment, cross-validation is applied after the brain CT image sample data is obtained: the sample data is divided into four parts, three of which are used for training and one for testing, so that samples with obvious discrepancies in the test set can be filtered out; the filtered sample data serve as the real training and validation samples.
  • In this embodiment, brain CT image information is acquired; features of the brain CT image information are extracted through the pre-trained brain cutting convolutional neural network to obtain the feature map of the brain CT image; after the candidate frame alignment operation is performed on the feature map, the semantic information of the instance category and the pixel-level position information of the instance are obtained; the semantic information and position information are input to the pre-trained layered neural network, and the layering result of the brain CT image is output.
  • The pre-trained brain cutting convolutional neural network extracts features from the brain CT image information to obtain the feature map.
  • After the candidate frame alignment operation, the semantic information of the instance category and the pixel-level position information of the instance are obtained as the brain segmentation result.
  • The semantic information and pixel-level position information of the instances are then passed through the pre-trained layered neural network to obtain the brain layering result of the brain CT image.
  • the method further includes:
  • Step 200: Perform channel preprocessing on the brain CT image to obtain the brain CT image information.
  • CT (Computed Tomography) imaging is a scanning method that uses computer technology to reconstruct tomographic images of the measured object and obtain a three-dimensional tomographic image. The scan penetrates the object under test with rays along a single axis; based on the different absorption and transmittance of each part of the object, the transmitted rays are collected by a computer and imaged through three-dimensional reconstruction.
  • Brain CT images include information on the structure and shape of brain tissue, which can be used to clearly show the number, location, size, contour, density, intratumoral hemorrhage, calcification, and degree of spread of intracranial tumors.
  • The above-mentioned brain CT images can be obtained by scanning the target object with a CT machine and are mostly stored in the DICOM data format; to obtain the brain CT image data, the DICOM file must be parsed.
  • The structured information of DICOM is divided into four levels: Patient, Study, Series, and Image. Each information entity (IE) is stored as a "Tag, Value" key-value pair; by parsing the Tag locations, the brain information in a DICOM file can be converted into an HU (Hounsfield unit) value image, and the above-mentioned brain CT image information is then obtained through channel preprocessing.
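  • As a minimal sketch of this parsing step, the snippet below reads one DICOM slice and converts its raw pixel values to HU using the standard RescaleSlope/RescaleIntercept tags. The patent does not name a parsing library; pydicom is used here only as one common choice.

```python
import numpy as np
import pydicom  # one common DICOM parser; not specified by the patent

def dicom_to_hu(path: str) -> np.ndarray:
    """Read a DICOM slice and convert its pixel values to Hounsfield units
    via the RescaleSlope and RescaleIntercept tags."""
    ds = pydicom.dcmread(path)
    pixels = ds.pixel_array.astype(np.float32)
    slope = float(getattr(ds, "RescaleSlope", 1.0))
    intercept = float(getattr(ds, "RescaleIntercept", 0.0))
    return pixels * slope + intercept
```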
  • Specifically, the HU-value CT image is converted into a three-channel gray image according to three window width/window level settings: the full window (range 0-4096), the brain window (40, 120), and the bone window (450, 2000).
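  • A minimal sketch of this channel preprocessing follows, assuming each pair above is a (window level, window width) setting and that each windowed channel is rescaled to 8-bit gray; both are assumptions, since the patent does not fix these details.

```python
import numpy as np

def apply_window(hu: np.ndarray, level: float, width: float) -> np.ndarray:
    """Clip HU values to [level - width/2, level + width/2] and rescale to 0-255."""
    lo, hi = level - width / 2.0, level + width / 2.0
    out = np.clip(hu, lo, hi)
    return ((out - lo) / (hi - lo) * 255.0).astype(np.uint8)

def hu_to_three_channels(hu: np.ndarray) -> np.ndarray:
    """Stack full-window, brain-window, and bone-window views into an HxWx3 image."""
    full = apply_window(hu, level=2048, width=4096)   # "full window" over the 0-4096 range
    brain = apply_window(hu, level=40, width=120)     # brain window (40, 120)
    bone = apply_window(hu, level=450, width=2000)    # bone window (450, 2000)
    return np.stack([full, brain, bone], axis=-1)
```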
  • step 202 specifically includes:
  • Step 2021: Input the acquired brain CT image information into the trained ResNet convolutional neural network, and extract the feature map of the brain CT image.
  • In this embodiment, a standard convolutional neural network (typically ResNet50 or ResNet101) is trained as the feature extractor.
  • The bottom layers of the convolutional neural network detect low-level features (edges, corners, etc.), while the higher layers detect higher-level features (cars, people, sky, etc.).
  • The brain CT image data preprocessed through the above channels is fed to the input layer of the trained standard convolutional neural network, and the feature map of the image is obtained after convolution computation, pooling dimensionality reduction, and fully connected classification.
  • the feature map will be used as the input data for the next step.
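  • A sketch of this extraction step follows, assuming a torchvision ResNet-50 whose classification head is dropped so the trunk acts purely as a feature extractor; the patent names ResNet50/ResNet101 but no framework, so the torchvision usage is an assumption.

```python
import torch
import torchvision

# Assumed setup: a torchvision ResNet-50 with the average-pool and
# fully connected layers removed, leaving only the convolutional trunk.
backbone = torchvision.models.resnet50(weights=None)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])

x = torch.randn(1, 3, 512, 512)      # one three-channel windowed CT slice
feature_map = feature_extractor(x)   # shape (1, 2048, 16, 16): the feature map
```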
  • step 203 specifically includes:
  • Step 2031: Obtain a candidate frame for each pixel on the feature map of the brain CT image and perform a candidate frame alignment operation on the obtained candidate frames.
  • In this embodiment, nine anchors are generated for each pixel on the above feature map through the Region Proposal Network (RPN).
  • These nine initial anchors cover three areas (128×128, 256×256, 512×512), and each area covers three aspect ratios (1:1, 1:2, 2:1).
  • The RPN first judges whether each anchor belongs to the foreground or the background, that is, whether the anchor covers a target; it then performs a first coordinate correction on the anchors belonging to the foreground, thereby obtaining the candidate frame for each pixel. The method then returns to the feature map to perform feature selection based on the obtained candidate frames, and the instances are marked by the candidate frames.
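  • The nine anchor shapes per pixel follow directly from the three areas and three aspect ratios above; a small illustrative sketch of their generation is given below (the width/height parameterization is an assumption, not patent text).

```python
import numpy as np

def base_anchors(scales=(128, 256, 512), ratios=(1.0, 0.5, 2.0)) -> np.ndarray:
    """Return the 9 anchor (width, height) pairs per feature-map pixel:
    three areas (128^2, 256^2, 512^2) times three aspect ratios (1:1, 1:2, 2:1)."""
    anchors = []
    for s in scales:
        area = float(s * s)
        for r in ratios:              # r = height / width
            w = np.sqrt(area / r)
            anchors.append((w, w * r))
    return np.array(anchors)          # shape (9, 2)
```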
  • Step 2032: Input the feature map after the candidate frame alignment operation into the fully connected layer network to obtain the semantic information of the instance categories and the pixel-level position information of the instances.
  • In the neural network, the fully connected (FC) layer plays the role of a "classifier" and can integrate the category-discriminative local information from the convolutional or pooling layers.
  • In practice, the fully connected layers usually appear in the last few layers of the network.
  • Operations such as convolutional layers, pooling layers, and activation function layers map the original data into a hidden feature space, while the fully connected layer maps the learned "distributed feature representation" into the sample label space.
  • A fully connected layer can also be realized by a convolution operation: a fully connected layer whose previous layer is fully connected can be converted into a convolution with a 1×1 kernel, while a fully connected layer whose previous layer is a convolutional layer can be converted into a global convolution with an h×w kernel, where h and w are the height and width of the previous layer's convolution output.
  • The above instances labeled with candidate boxes, obtained after candidate box alignment, are input into the fully connected layer network; candidate box classification yields the semantic information of each instance (i.e., the category label of the instance), and candidate box regression (which further fine-tunes the position and size of the candidate box) yields the pixel-level position information of the instance (including the instance's pixels and their coordinates on the feature map).
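  • A sketch of such a fully connected head with the two branches named above, classification and box regression, might look as follows; the RoI feature size, hidden width, and class count are illustrative assumptions.

```python
import torch.nn as nn

class BoxHead(nn.Module):
    """Fully connected head: one branch outputs instance category scores
    (semantic information), the other regresses candidate-box refinements
    (pixel-level position information)."""
    def __init__(self, in_features: int = 256 * 7 * 7, num_classes: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_features, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
        )
        self.cls_score = nn.Linear(1024, num_classes)      # candidate-box classification
        self.bbox_pred = nn.Linear(1024, num_classes * 4)  # candidate-box regression

    def forward(self, roi_features):
        x = self.fc(roi_features)
        return self.cls_score(x), self.bbox_pred(x)
```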
  • After step 2031 and before step 2032, the method may further include:
  • Step 20311: Generate a mask for each pixel after the candidate frame alignment operation through the fully convolutional network, and segment the instances.
  • In this embodiment, a mask branch containing a fully convolutional network (FCN) is used to generate an instance mask for each pixel after the candidate frame alignment operation, so that the features can be segmented.
  • An FCN can accept input images of any size and uses deconvolution layers to upsample the feature map of the last convolutional layer back to the size of the input image, so that a prediction can be generated for each pixel while the spatial information of the original input image is retained; finally, pixel-by-pixel classification is performed on the upsampled feature map.
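  • A sketch of such an FCN mask branch follows: a few convolutions on each aligned region, a deconvolution (transposed convolution) for upsampling, then per-pixel class scores. Channel counts and the class count are assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class MaskBranch(nn.Module):
    """Fully convolutional mask branch: convolutions, deconvolution upsampling,
    then a 1x1 convolution producing pixel-wise class logits."""
    def __init__(self, in_channels: int = 256, num_classes: int = 4):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),
        )
        self.upsample = nn.ConvTranspose2d(256, 256, 2, stride=2)  # deconvolution layer
        self.mask_logits = nn.Conv2d(256, num_classes, 1)

    def forward(self, roi_features):
        x = self.convs(roi_features)
        x = F.relu(self.upsample(x))
        return self.mask_logits(x)   # one prediction per pixel
```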
  • step 204 specifically includes:
  • Step 2041: Jointly input the semantic information of the instance category and the pixel-level position information of the instance into the convolutional layers of the layered neural network to extract features again.
  • In this embodiment, the convolutional layers of the layered network perform convolution and pooling operations on the joint instance information input to the network (that is, the semantic information of the instance category and the pixel-level position information of the instance) for secondary feature extraction, obtaining highly separable local features of the instances, which facilitates accurate layering of the brain CT image.
  • Step 2042: Input the features extracted by the convolutional layers of the layered neural network into the fully connected layers of the layered neural network for classification, and obtain and output the layering result of the brain CT image.
  • In this embodiment, the local features obtained by the above secondary feature extraction are input to the fully connected layers for feature combination and then classified by that layer's softmax logistic regression, which predicts several categories with corresponding probabilities; the category with the highest probability is taken as the brain CT image layering result and output.
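  • A sketch of such a layered network head follows, assuming the joint instance information arrives as a feature tensor; the layer sizes and the number of brain layers are illustrative assumptions.

```python
import torch.nn as nn

class LayeringHead(nn.Module):
    """Convolutional stage for secondary feature extraction, followed by
    fully connected layers and softmax logistic regression over layer labels."""
    def __init__(self, in_channels: int = 256, num_layers: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 128, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_layers),
        )

    def forward(self, x):
        logits = self.classifier(self.features(x))
        return logits.softmax(dim=-1)  # probabilities; the argmax is the layering result
```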
  • It should be noted that the above brain cutting convolutional neural network and layered neural network need to be pre-trained; that is, after each neural network model is constructed, a training data set is fed to the model until its output meets expectations or its error is as small as possible.
  • The acquired brain CT image sample data set needs to be preprocessed, for example by positive/negative example sampling and filtering.
  • In this embodiment, cross-validation is applied: the sample data is divided into four parts, three of which are used for training and one for testing, so that samples with obvious discrepancies in the test set can be filtered out; the filtered sample data set is the real training data, which is then input to the above brain cutting convolutional neural network and layered neural network for pre-training and verification.
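  • A small sketch of this four-part split is given below; the shuffling and the rotation of the held-out quarter are assumptions about details the text leaves open.

```python
import random

def four_fold_split(samples: list, seed: int = 0):
    """Divide the sample list into four parts and yield (train, test) pairs:
    three parts train, one part tests, rotating the held-out part."""
    data = samples[:]
    random.Random(seed).shuffle(data)
    folds = [data[i::4] for i in range(4)]
    for i in range(4):
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train, folds[i]
```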
  • The computer program can be stored in a computer-readable storage medium, and when executed, it may include the procedures of the above method embodiments.
  • The aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (ROM), or a random access memory (RAM), etc.
  • This application provides an embodiment of a neural network-based brain tissue layering device.
  • The device embodiment corresponds to the method embodiment shown in FIG. 2, and the device can be applied to various electronic devices.
  • The neural network-based brain tissue layering apparatus 300 in this embodiment includes a first acquisition module 301, an extraction module 302, a second acquisition module 303, and an output module 304, wherein:
  • the first acquisition module 301 is used to obtain brain CT image information;
  • the extraction module 302 is configured to extract the features of the brain CT image information through a pre-trained brain cutting convolutional neural network to obtain a feature map of the brain CT image;
  • the second obtaining module 303 is configured to obtain the semantic information of the instance category and the pixel-level position information of the instance after performing the candidate frame alignment operation on the feature map of the brain CT image;
  • the output module 304 is configured to input the semantic information of the instance category and the pixel-level position information of the instance into the pre-trained layered neural network, and output the layered result of the brain CT image.
  • the foregoing apparatus 300 further includes:
  • the preprocessing module 305 is configured to perform channel preprocessing on the brain CT image to obtain brain CT image information.
  • the neural network-based brain tissue layering device provided in the embodiments of the present application can realize the various implementation modes in the method embodiments in FIGS. 2 to 5 and the corresponding beneficial effects. To avoid repetition, details are not described herein again.
  • FIG. 7 is a block diagram of the basic structure of the computer device in this embodiment.
  • The computer device 7 includes a memory 71, a processor 72, and a network interface 73 that communicate with each other via a system bus. It should be noted that the figure only shows the computer device 7 with components 71-73; it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead. Those skilled in the art will understand that the computer device here is a device that can automatically perform numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), embedded devices, etc.
  • the computer device may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the computer device can interact with the user through a keyboard, a mouse, a remote control, a touch panel, or a voice control device.
  • The memory 71 includes at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical discs, etc.; the computer-readable storage medium may be non-volatile or volatile.
  • In some embodiments, the memory 71 may be an internal storage unit of the computer device 7, such as a hard disk or memory of the computer device 7.
  • In other embodiments, the memory 71 may be an external storage device of the computer device 7, such as a plug-in hard disk, smart media card (SMC), Secure Digital (SD) card, or flash card equipped on the computer device 7.
  • the memory 71 may also include both the internal storage unit of the computer device 7 and the external storage device thereof.
  • The memory 71 is generally used to store the operating system and various application software installed in the computer device 7, such as the program code of a neural network-based brain tissue layering method.
  • the memory 71 can also be used to temporarily store various types of data that have been output or will be output.
  • In some embodiments, the processor 72 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip.
  • the processor 72 is generally used to control the overall operation of the computer device 7.
  • The processor 72 is configured to run the program code or process the data stored in the memory 71, for example, to run the program code of the neural network-based brain tissue layering method.
  • the network interface 73 may include a wireless network interface or a wired network interface, and the network interface 73 is generally used to establish a communication connection between the computer device 7 and other electronic devices.
  • This application also provides another implementation, namely a computer-readable storage medium storing a neural network-based brain tissue layering program, where the program may be executed by at least one processor, so that the at least one processor executes the steps of the above neural network-based brain tissue layering method.
  • The technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and includes several instructions that enable a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the method described in each embodiment of the present application.

Abstract

A brain tissue layering method and device based on a neural network, a computer device, and a storage medium, relating to the field of artificial intelligence. The method comprises: obtaining brain CT image information (201); extracting features of the brain CT image information by means of a pre-trained brain cutting convolutional neural network to obtain a feature map of the brain CT image (202); performing a candidate box alignment operation on the feature map of the brain CT image to obtain semantic information of an instance category and pixel-level location information of an instance (203); and inputting the semantic information of the instance category and the pixel-level location information of the instance into a pre-trained layering neural network, and outputting a layering result of the brain CT image (204). By fusing the brain cutting convolutional neural network and the brain layering neural network, the results of brain cutting and brain layering are obtained simultaneously using one model, so that computing time and the consumption of computing resources are reduced, and the brain cutting and brain layering tasks can share feature information, thereby improving the accuracy of brain layering.

Description

Brain tissue layering method, device and computer equipment based on neural network
This application is based on, and claims priority to, the Chinese invention patent application No. 2019109090928, filed on September 25, 2019 and titled "Neural network-based brain tissue layering method, device, and computer equipment".
Technical Field
This application relates to the field of artificial intelligence technology, and in particular to a neural network-based brain tissue layering method, device, computer equipment, and storage medium.
Background
In recent years, deep learning technology has been widely used in many fields, especially computer vision, where it is used to realize face recognition, target detection, image segmentation, and so on. In the medical field, brain CT images usually need to be analyzed for many kinds of information; besides cerebral hemorrhage detection and brain tissue segmentation, brain layering is also a very important piece of information to provide. At present, neural networks from deep learning are commonly used to construct models for tasks such as detection, classification, and prediction. In the process of implementing this application, however, the inventors realized that multiple models are usually used to handle multiple tasks respectively: for example, one detection model for cerebral hemorrhage, one segmentation model for brain tissue, and another classification model for brain layering. Such a strategy of solving multiple tasks with multiple models doubles the computing time and the consumption of computing resources, and because the information of a single classification model is not comprehensive, the model misclassifies unseen brain CT images, which reduces the accuracy of brain layering.
Technical Problem
Using multiple models to separately handle multiple tasks doubles the computing time and the consumption of computing resources, and reduces the accuracy of brain layering.
Technical Solutions
In order to solve the above technical problem, an embodiment of the present application provides a neural network-based brain tissue layering method, which includes the following steps:
acquiring brain CT image information;
extracting features of the brain CT image information through a pre-trained brain cutting convolutional neural network to obtain a feature map of the brain CT image;
after performing a candidate frame alignment operation on the feature map of the brain CT image, obtaining the semantic information of the instance category and the pixel-level position information of the instance;
inputting the semantic information of the instance category and the pixel-level position information of the instance into a pre-trained layered neural network, and outputting the layering result of the brain CT image.
In order to solve the above technical problem, an embodiment of the present application also provides a neural network-based brain tissue layering device, which adopts the following technical solution:
The neural network-based brain tissue layering device includes:
a first acquisition module, used to acquire brain CT image information;
an extraction module, used to extract features of the brain CT image information through a pre-trained brain cutting convolutional neural network to obtain a feature map of the brain CT image;
a second acquisition module, used to obtain the semantic information of the instance category and the pixel-level position information of the instance after performing a candidate frame alignment operation on the feature map of the brain CT image;
an output module, used to input the semantic information of the instance category and the pixel-level position information of the instance into the pre-trained layered neural network and output the layering result of the brain CT image.
In order to solve the above technical problem, an embodiment of the present application also provides a computer device, which adopts the following technical solution:
The computer device includes a memory and a processor, where a computer program is stored in the memory; when the processor executes the computer program, the steps of the neural network-based brain tissue layering method are implemented:
acquiring brain CT image information;
extracting features of the brain CT image information through a pre-trained brain cutting convolutional neural network to obtain a feature map of the brain CT image;
after performing a candidate frame alignment operation on the feature map of the brain CT image, obtaining the semantic information of the instance category and the pixel-level position information of the instance;
inputting the semantic information of the instance category and the pixel-level position information of the instance into the pre-trained layered neural network, and outputting the layering result of the brain CT image.
In order to solve the above technical problem, an embodiment of the present application also provides a computer-readable storage medium, which adopts the following technical solution:
The computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the neural network-based brain tissue layering method are realized:
acquiring brain CT image information;
extracting features of the brain CT image information through a pre-trained brain cutting convolutional neural network to obtain a feature map of the brain CT image;
after performing a candidate frame alignment operation on the feature map of the brain CT image, obtaining the semantic information of the instance category and the pixel-level position information of the instance;
inputting the semantic information of the instance category and the pixel-level position information of the instance into the pre-trained layered neural network, and outputting the layering result of the brain CT image.
Beneficial Effects
In this embodiment, brain CT image information is acquired; features of the brain CT image information are extracted through the pre-trained brain cutting convolutional neural network to obtain the feature map of the brain CT image; after a candidate frame alignment operation is performed on the feature map, the semantic information of the instance category and the pixel-level position information of the instance are obtained; the semantic information and position information are input to the pre-trained layered neural network, and the layering result of the brain CT image is output. The pre-trained brain cutting convolutional neural network extracts features from the brain CT image information to obtain the feature map; after the candidate frame alignment operation, the semantic information of the instance category and the pixel-level position information of the instance are obtained as the brain segmentation result; the semantic information and pixel-level position information of the instances are then passed through the pre-trained layered neural network to obtain the brain layering result of the brain CT image. By fusing the brain cutting convolutional neural network and the brain layering neural network, the results of brain segmentation and brain layering can be obtained simultaneously with one model, which reduces computing time and the consumption of computing resources and lets the brain segmentation and brain layering tasks share feature information, thereby improving the accuracy of brain layering.
Description of the Drawings
In order to explain the solution in this application more clearly, the following briefly introduces the drawings used in the description of the embodiments. Obviously, the drawings described below illustrate only some embodiments of the application; those of ordinary skill in the art can obtain other drawings based on these drawings without creative work.
Figure 1 is an exemplary system architecture diagram to which the present application can be applied;
Figure 2 is a flowchart of an embodiment of a neural network-based brain tissue layering method according to the present application;
Figure 3 is a flowchart of a specific implementation of step 202 in Figure 2;
Figure 4 is a flowchart of a specific implementation of step 203 in Figure 2;
Figure 5 is a flowchart of a specific implementation of step 204 in Figure 2;
Figure 6 is a schematic structural diagram of an embodiment of a neural network-based brain tissue layering apparatus according to the present application;
Figure 7 is a schematic structural diagram of an embodiment of a computer device according to the present application.
Best Mode of the Present Invention
Referring to Figure 2, a flowchart of an embodiment of a neural network-based brain tissue layering method according to the present application is shown. The neural network-based brain tissue layering method includes the following steps:
Step 201: Obtain brain CT image information.
In this embodiment, the electronic device (for example, the server/terminal device shown in Figure 1) on which the neural network-based brain tissue layering method runs can obtain brain CT image information through a wired or wireless connection. It should be pointed out that the above wireless connection methods may include, but are not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee, UWB (ultra wideband), and other wireless connection methods now known or developed in the future.
The above brain CT image information can be obtained by scanning the target object with a CT machine and then acquired from the CT machine through the above wired or wireless connection; it can also be obtained from a data package exported by the CT machine, or by extracting brain CT image data stored in a database in DICOM format. Through a wired or wireless network connection, the brain CT image information of multiple CT machines can be acquired at the same time, improving data transmission capacity.
Step 202: Extract the features of the brain CT image information through the pre-trained brain cutting convolutional neural network to obtain the feature map of the brain CT image.
In this embodiment, a convolutional neural network (CNN) is a class of feedforward neural networks that involve convolution computations and have a deep structure; it is one of the representative algorithms of deep learning. A convolutional neural network includes a data input layer, convolutional layers, ReLU activation layers, pooling layers, and fully connected layers; its purpose is to extract features from the input with a given model and then classify, identify, predict, or make decisions about the input based on those features.
The above brain cutting convolutional neural network performs convolution computations on the input brain CT image data to obtain the corresponding cutting feature map. It should be noted that this feature map is intermediate image data within the brain cutting convolutional neural network, not the final brain cutting image data.
Step 203: After performing a candidate frame alignment operation on the feature map of the brain CT image, obtain the semantic information of the instance category and the pixel-level position information of the instance.
In this embodiment, after the feature map of the brain CT image is obtained, a candidate frame is generated for each pixel and a candidate frame alignment operation is performed. The candidate frame is the position of a pixel on the feature map and can be generated directly by a Region Proposal Network (RPN). A fully convolutional network (FCN) then generates an instance mask for each aligned candidate frame, segmenting the instances. Finally, the mask-segmented instances are input to the fully connected layer network to obtain the semantic information of the instance categories and the pixel-level position information of the instances as the brain cutting result; the semantic information includes the category label of each instance, and the position information includes the coordinates of the instance on the feature map.
Step 204: Input the semantic information of the instance category and the pixel-level position information of the instance into the pre-trained layered neural network, and output the layering result of the brain CT image.
In this embodiment, the above layered neural network includes convolutional layers and fully connected layers. The convolutional layers perform secondary feature extraction on the segmented instances to obtain highly separable local features of each instance; these local features are then input to the fully connected layers for feature combination and classified by that layer's logistic regression, and the resulting category label is output as the layering result of the above brain CT image. Sharing the instance feature information extracted by the brain cutting convolutional neural network improves the accuracy of brain layering.
It should be noted that the above brain cutting convolutional neural network and layered neural network need to be pre-trained; that is, after each neural network model is constructed, a training data set is fed to the model until its output meets expectations or its error is as small as possible. The acquired sample data set needs to be preprocessed, for example by positive/negative example sampling and filtering. In this embodiment, cross-validation is applied after the brain CT image sample data is obtained: the sample data is divided into four parts, three of which are used for training and one for testing, so that samples with obvious discrepancies in the test set can be filtered out; the filtered sample data serve as the real training and validation samples.
In this embodiment, brain CT image information is acquired; features of the brain CT image information are extracted through the pre-trained brain cutting convolutional neural network to obtain the feature map of the brain CT image; after a candidate frame alignment operation is performed on the feature map, the semantic information of the instance category and the pixel-level position information of the instance are obtained; the semantic information and position information are input to the pre-trained layered neural network, and the layering result of the brain CT image is output. By fusing the brain cutting convolutional neural network and the brain layering neural network, the results of brain segmentation and brain layering can be obtained simultaneously with one model, which reduces computing time and the consumption of computing resources and lets the two tasks share feature information, thereby improving the accuracy of brain layering.
Embodiments of the present invention
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of this application; the terms used in the specification of the application are only for the purpose of describing specific embodiments and are not intended to limit the application; the terms "including" and "having" in the specification and claims of the application and in the above description of the drawings, as well as any variations thereof, are intended to cover non-exclusive inclusion. The terms "first", "second", and the like in the specification and claims of the application or in the above drawings are used to distinguish different objects rather than to describe a specific order.
Reference herein to an "embodiment" means that a specific feature, structure, or characteristic described in conjunction with the embodiment may be included in at least one embodiment of the present application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment that is mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below in conjunction with the accompanying drawings.
As shown in FIG. 1, FIG. 1 is a diagram of a system architecture that may be used in this application. The system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired links, wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages, data, and so on. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as web browser applications, shopping applications, search applications, instant messaging tools, email clients, and social platform software.
The terminal devices 101, 102, 103 may be various electronic devices that have a display screen and support web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
The server 105 may be a server that provides various services, for example a background server that provides support for the pages displayed on the terminal devices 101, 102, and 103.
It should be noted that the neural network-based brain tissue layering method provided in the embodiments of the present application is generally executed by the server/terminal device; correspondingly, the neural network-based brain tissue layering apparatus is generally disposed in the server/terminal device.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.
Continuing to refer to FIG. 2, a flowchart of one embodiment of the neural network-based brain tissue layering method according to the present application is shown. The neural network-based brain tissue layering method includes the following steps:
Step 201: obtain brain CT image information.
In this embodiment, the electronic device on which the neural network-based brain tissue layering method runs (for example, the server/terminal device shown in FIG. 1) may obtain brain CT image information through a wired connection or a wireless connection. It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (ultra-wideband) connection, and other wireless connection methods that are currently known or developed in the future.
The brain CT image information may be obtained by scanning a target object with a CT machine and then retrieving the data from the CT machine through the above wired or wireless connection; it may also be obtained from a data package exported by the CT machine, or by extracting brain CT image data stored in a database in the DICOM data format. Through a wired or wireless network connection, brain CT image information from multiple CT machines can be obtained at the same time, improving data transmission capacity.
Step 202: extract the features of the brain CT image information through a pre-trained brain segmentation convolutional neural network to obtain a feature map of the brain CT image.
In this embodiment, a convolutional neural network (CNN) is a class of feedforward neural networks that involve convolution computations and have a deep structure, and it is one of the representative algorithms of deep learning. A convolutional neural network includes a data input layer, convolution layers, ReLU activation layers, pooling layers, and fully connected layers; its purpose is to extract features from objects with a certain model, and then to classify, recognize, predict, or make decisions about those objects based on the features.
The above brain segmentation convolutional neural network performs convolution computations on the input brain CT image data to obtain the corresponding segmentation feature map. It should be noted that this feature map is intermediate image data within the brain segmentation convolutional neural network, not the final brain segmentation image data.
Step 203: after performing a candidate-box alignment operation on the feature map of the brain CT image, obtain the semantic information of instance categories and the pixel-level position information of instances.
In this embodiment, after the feature map of the brain CT image is obtained, a candidate box is generated for each pixel and a candidate-box alignment operation is performed. A candidate box corresponds to the position of a pixel on the feature map and can be generated directly by a Region Proposal Network (RPN). A fully convolutional network (FCN) then generates an instance mask for each aligned candidate box, so that the instances are segmented out. Finally, the instances segmented out by the masks are input into a fully connected layer network to obtain the semantic information of the instance categories of the feature map and the pixel-level position information of the instances, which serve as the brain segmentation result. The semantic information of an instance category includes the category label of the instance, and the position information includes the coordinates of the instance on the feature map.
Step 204: input the semantic information of the instance categories and the pixel-level position information of the instances into a pre-trained layering neural network, and output the layering result of the brain CT image.
In this embodiment, the layering neural network includes a convolutional layer and a fully connected layer. The convolutional layer performs secondary feature extraction on the segmented instances to obtain highly separable local features of the instances; the local features are then input into the fully connected layer for feature combination and classified by the logistic regression of that layer, yielding a category label that is output as the layering result of the brain CT image. By sharing the instance feature information extracted by the brain segmentation convolutional neural network, the accuracy of brain layering can be improved.
It should be noted that the brain segmentation convolutional neural network and the layering neural network need to be pre-trained; that is, after the neural network models are constructed, a training data set is input into the models so that the model output meets expectations or the error is as small as possible. The acquired sample data set needs to be preprocessed, for example by positive/negative example sampling and filtering. In this embodiment, after the brain CT image sample data are obtained, cross-validation is applied: the sample data are divided into four folds, three folds are used for training and one for testing, and samples in the test fold that differ markedly can be removed; the filtered sample data serve as the actual training and validation samples.
In this embodiment, brain CT image information is obtained; the features of the brain CT image information are extracted through a pre-trained brain segmentation convolutional neural network to obtain a feature map of the brain CT image; after a candidate-box alignment operation is performed on the feature map, the semantic information of instance categories and the pixel-level position information of instances are obtained; and the semantic information of the instance categories and the pixel-level position information of the instances are input into a pre-trained layering neural network, which outputs the layering result of the brain CT image. The pre-trained brain segmentation convolutional neural network extracts the feature map from the brain CT image information; the candidate-box alignment operation on the feature map yields the semantic information of the instance categories and the pixel-level position information of the instances as the brain segmentation result; the semantic and pixel-level position information of the instances is then passed through the pre-trained layering neural network to obtain the brain layering result of the brain CT image. By fusing the brain segmentation convolutional neural network and the brain layering neural network, a single model can produce both the brain segmentation and the brain layering results at the same time, which reduces computing time and the consumption of computing resources, and allows the two tasks to share feature information, thereby improving the accuracy of brain layering.
Further, before step 201 of obtaining the brain CT image information, the method further includes:
Step 200: perform channel preprocessing on the brain CT scan to obtain the brain CT image information.
CT (Computed Tomography) imaging is a scanning method that uses computer technology to reconstruct tomographic scans of a measured object into a three-dimensional tomographic image. In this scanning method, rays in a single axial plane penetrate the measured object; because the different parts of the object absorb and transmit the rays differently, a computer collects the transmitted rays and forms an image through three-dimensional reconstruction. Brain CT images contain structural and shape information of brain tissue, which can clearly show the number, location, size, contour, and density of intracranial tumors, as well as intratumoral hemorrhage, calcification, and the degree of spread.
The above brain CT scans can be obtained by scanning a target object with a CT machine and are mostly stored in the DICOM data format, so obtaining brain CT image data requires parsing the DICOM file. The structured information of DICOM is mainly divided into four levels: Patient, Study, Series, and Image. Each information entity (IE) is stored as a "key-value" (Tag, Value) pair; only after the brain information is located by parsing the Tags can the DICOM file be converted into an HU-value image, which is then channel-preprocessed to obtain the above brain CT image information. Specifically, the HU-value image of the brain CT is converted into three-channel grayscale image values according to three window width/window level settings: the full window (value range 0-4096), the brain window (40, 120), and the bone window (450, 2000).
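As an illustrative sketch only (not part of the claimed subject matter), the channel preprocessing described above might look as follows in Python; it assumes the pydicom library is available, that the window pairs denote (window level, window width), and the helper names are hypothetical:

```python
import numpy as np
import pydicom  # assumption: DICOM files are read with pydicom

def window_to_uint8(hu, level, width):
    """Clip an HU image to [level - width/2, level + width/2] and rescale to 0-255."""
    lo, hi = level - width / 2.0, level + width / 2.0
    out = np.clip(hu, lo, hi)
    return ((out - lo) / (hi - lo) * 255.0).astype(np.uint8)

def preprocess_brain_ct(dicom_path):  # hypothetical helper name
    ds = pydicom.dcmread(dicom_path)
    # Convert stored pixel values to HU using the standard rescale tags.
    hu = ds.pixel_array.astype(np.float32) * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
    full = np.clip(hu, 0, 4096)                        # full window: value range 0-4096
    full = (full / 4096.0 * 255.0).astype(np.uint8)
    brain = window_to_uint8(hu, level=40, width=120)   # brain window (40, 120)
    bone = window_to_uint8(hu, level=450, width=2000)  # bone window (450, 2000)
    return np.stack([full, brain, bone], axis=-1)      # three-channel grayscale image
```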
Further, as shown in FIG. 3, the above step 202 specifically includes:
Step 2021: input the obtained brain CT image information into a trained ResNet convolutional neural network, and extract the feature map of the brain CT image.
In this embodiment, a standard convolutional neural network (typically ResNet50 or ResNet101) can be selected and trained as the feature extractor. The lower layers of this convolutional neural network detect low-level features (edges, corners, and the like), while the higher layers detect more advanced features (cars, people, sky, and the like). The channel-preprocessed brain CT image data are input into the input layer of the standard trained convolutional neural network, and the feature map of the image is obtained after convolution computation, pooling for dimensionality reduction, and fully connected classification; this feature map serves as the input data for the next step.
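A minimal sketch of using a ResNet-50 backbone as such a feature extractor is shown below, assuming the torchvision library is available; the stage at which the network is cut and the input size are illustrative assumptions, not values specified by the application:

```python
import torch
import torchvision

# Minimal sketch: ResNet-50 as a feature extractor. Dropping the last two
# children removes the average-pooling and classification layers, leaving
# the convolutional backbone that produces the feature map.
resnet = torchvision.models.resnet50(weights="IMAGENET1K_V1")
backbone = torch.nn.Sequential(*list(resnet.children())[:-2])

x = torch.randn(1, 3, 512, 512)   # one three-channel preprocessed CT slice (assumed size)
with torch.no_grad():
    feature_map = backbone(x)     # shape: (1, 2048, 16, 16) for a 512x512 input
print(feature_map.shape)
```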
Further, as shown in FIG. 4, the above step 203 specifically includes:
Step 2031: obtain a candidate box for each pixel on the feature map of the brain CT image and perform a candidate-box alignment operation on the obtained candidate boxes.
In this embodiment, a region proposal network (RPN) first generates nine kinds of anchors for each pixel on the above feature map. These nine initial anchors may cover three areas (128×128, 256×256, 512×512), and each area may cover three aspect ratios (1:1, 1:2, 2:1). For each generated anchor, the RPN first judges whether the anchor is foreground or background, that is, whether the anchor actually covers a target; second, it performs a first coordinate correction for the anchors belonging to the foreground, thereby obtaining the candidate box for each pixel. The obtained candidate boxes are then mapped back onto the feature map for feature selection, and the instances are marked out by the candidate boxes.
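The anchor layout described above (three areas by three aspect ratios per feature-map position) can be sketched as follows; the feature stride is an assumed value and the helper names are hypothetical:

```python
import numpy as np

def make_anchors(areas=(128**2, 256**2, 512**2), ratios=(1.0, 0.5, 2.0)):
    """Nine base anchors per location: three areas x three width:height
    ratios (1:1, 1:2, 2:1), centered at the origin."""
    anchors = []
    for area in areas:
        for r in ratios:
            w = np.sqrt(area * r)  # width:height ratio is r, area is preserved
            h = w / r
            anchors.append([-w / 2, -h / 2, w / 2, h / 2])
    return np.array(anchors)

def shift_anchors(base, feat_h, feat_w, stride=16):  # stride is an assumed value
    """Tile the nine base anchors over every feature-map position."""
    xs = (np.arange(feat_w) + 0.5) * stride
    ys = (np.arange(feat_h) + 0.5) * stride
    cx, cy = np.meshgrid(xs, ys)
    shifts = np.stack([cx.ravel(), cy.ravel(), cx.ravel(), cy.ravel()], axis=1)
    return (shifts[:, None, :] + base[None, :, :]).reshape(-1, 4)

base = make_anchors()
all_anchors = shift_anchors(base, feat_h=16, feat_w=16)
print(base.shape, all_anchors.shape)  # (9, 4) (2304, 4)
```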
Step 2032: input the feature map after the candidate-box alignment operation into a fully connected layer network to obtain the semantic information of the instance categories of the feature map and the pixel-level position information of the instances.
A fully connected layer (FC) plays the role of a "classifier" in a neural network and can integrate the category-discriminative local information in the convolution layers or pooling layers. In a convolutional neural network such as a CNN, the fully connected layers usually appear in the last few layers: operations such as convolution layers, pooling layers, and activation function layers map the raw data into a hidden-layer feature space, while the fully connected layer maps the learned "distributed feature representation" into the sample label space. In practice, a fully connected layer can be implemented by a convolution operation: a fully connected layer whose preceding layer is also fully connected can be converted into a convolution with a 1×1 kernel, while a fully connected layer whose preceding layer is a convolution layer can be converted into a global convolution with an h×w kernel, where h and w are the height and width of the preceding convolution output.
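The equivalence between a fully connected layer and a global convolution with an h×w kernel can be checked numerically; the sizes in the following sketch are arbitrary illustrative choices:

```python
import torch
import torch.nn as nn

# Check the FC-as-convolution equivalence on a conv output of height h=4, width w=4.
c, h, w, num_out = 8, 4, 4, 10
fc = nn.Linear(c * h * w, num_out)

# Equivalent global convolution with an h x w kernel sharing the FC weights.
conv = nn.Conv2d(c, num_out, kernel_size=(h, w))
with torch.no_grad():
    conv.weight.copy_(fc.weight.view(num_out, c, h, w))
    conv.bias.copy_(fc.bias)

x = torch.randn(2, c, h, w)
out_fc = fc(x.flatten(1))
out_conv = conv(x).flatten(1)
print(torch.allclose(out_fc, out_conv, atol=1e-6))  # True
```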
In this embodiment, the instances marked out by the candidate boxes obtained after the candidate-box alignment are input into the fully connected layer network. Candidate-box classification yields the semantic information of the instances (that is, the category labels of the instances), and candidate-box regression (further fine-tuning the positions and sizes of the candidate boxes) yields the pixel-level position information of the instances (including the instance pixels and the coordinates of those pixels on the feature map).
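A hedged sketch of such a fully connected head performing candidate-box classification and regression follows; the RoI feature size, hidden width, and class count are assumptions rather than values stated in the application:

```python
import torch
import torch.nn as nn

class BoxHead(nn.Module):
    """Illustrative classification + box-regression head over aligned RoI
    features. Feature size, hidden width, and class count are assumed values."""
    def __init__(self, in_channels=256, roi_size=7, num_classes=5, hidden=1024):
        super().__init__()
        self.fc1 = nn.Linear(in_channels * roi_size * roi_size, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.cls_score = nn.Linear(hidden, num_classes)      # semantic category logits
        self.bbox_pred = nn.Linear(hidden, num_classes * 4)  # per-class box refinement

    def forward(self, roi_feats):            # (num_rois, C, S, S)
        x = roi_feats.flatten(1)
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.cls_score(x), self.bbox_pred(x)

head = BoxHead()
logits, deltas = head(torch.randn(32, 256, 7, 7))
print(logits.shape, deltas.shape)  # torch.Size([32, 5]) torch.Size([32, 20])
```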
Further, after step 2031 and before step 2032, the method may further include:
Step 20311: generate a mask for the pixels after each candidate-box alignment operation through a fully convolutional network, and segment the instances out.
In this embodiment, a mask branch containing a fully convolutional network (FCN) generates an instance mask pixel by pixel for each aligned candidate box, so that the different instances on the feature map can be segmented out. The FCN can accept input images of arbitrary size and uses a deconvolution layer to upsample the feature map of the last convolution layer, restoring it to the same size as the input image; in this way a prediction is produced for every pixel while the spatial information of the original input image is preserved, and pixel-by-pixel classification is finally performed on the upsampled feature map.
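An illustrative sketch of such an FCN mask branch follows; the channel widths and the 14×14 to 28×28 upsampling are assumed values in the spirit of common instance-segmentation heads, not figures taken from the application:

```python
import torch
import torch.nn as nn

class MaskHead(nn.Module):
    """Illustrative FCN mask branch: a small conv stack, then a transposed
    convolution (deconvolution) that upsamples before per-pixel, per-class
    mask prediction. Channel widths and sizes are assumed values."""
    def __init__(self, in_channels=256, num_classes=5):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),
        )
        self.upsample = nn.ConvTranspose2d(256, 256, 2, stride=2)  # deconvolution layer
        self.predict = nn.Conv2d(256, num_classes, 1)              # per-pixel logits

    def forward(self, roi_feats):             # (num_rois, C, 14, 14)
        x = self.convs(roi_feats)
        x = torch.relu(self.upsample(x))      # (num_rois, 256, 28, 28)
        return self.predict(x)                # (num_rois, num_classes, 28, 28)

masks = MaskHead()(torch.randn(8, 256, 14, 14))
print(masks.shape)  # torch.Size([8, 5, 28, 28])
```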
Further, as shown in FIG. 5, the above step 204 specifically includes:
Step 2041: jointly input the semantic information of the instance categories and the pixel-level position information of the instances into the convolutional layer of the layering neural network to extract features again.
In this embodiment, the convolutional layer of the layering network can be used to perform convolution and pooling operations on the joint information of the instances input into the network (that is, the semantic information of the instance categories and the pixel-level position information of the instances), carrying out secondary feature extraction to obtain highly separable local features of the instances, which facilitates accurate layering of the brain CT image.
Step 2042: input the features extracted by the convolutional layer of the layering neural network into the fully connected layer of the layering neural network for classification, and obtain and output the layering result of the brain CT image.
In this embodiment of the present application, the local features obtained by the secondary feature extraction are input into the fully connected layer for feature combination and then classified and predicted by the softmax logistic regression of that layer, producing several categories and their corresponding probabilities; the category with the highest probability is the layering result of the brain CT image, and that result is then output. Specifically, for a brain CT image, it can be determined which layer of the brain the image shows, such as the sellar layer at the skull base, the suprasellar cistern layer, or the layer at the top of the lateral ventricles.
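A minimal sketch of such a layering classifier (a convolutional stage for secondary feature extraction followed by fully connected layers with softmax) is given below; the input size and the number of anatomical layers are assumptions:

```python
import torch
import torch.nn as nn

class LayeringHead(nn.Module):
    """Illustrative layering classifier: a small convolutional stage for
    secondary feature extraction, then fully connected layers with softmax.
    Input size and the number of anatomical layers are assumed values."""
    def __init__(self, in_channels=256, num_layers=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 128, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_layers),
        )

    def forward(self, x):                  # joint instance feature tensor
        z = self.features(x).flatten(1)
        probs = torch.softmax(self.classifier(z), dim=1)
        return probs.argmax(dim=1), probs  # predicted layer and class probabilities

layer_idx, probs = LayeringHead()(torch.randn(1, 256, 14, 14))
print(layer_idx.item(), probs.shape)
```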
It should be noted that the above brain segmentation convolutional neural network and layering neural network need to be pre-trained; that is, after the neural network models are constructed, a training data set is input into the models so that the model output meets expectations or the error is as small as possible. The acquired brain CT image sample data set needs to be preprocessed, for example by positive/negative example sampling and filtering. In this embodiment, after the brain CT image data are obtained, cross-validation is applied: the sample data are divided into four folds, three folds are used for training and one for testing, and samples in the test fold that differ markedly can be removed. The filtered sample data set constitutes the actual training data, which is then input into the brain segmentation convolutional neural network and the layering neural network for pre-training and validation.
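The four-fold split described above can be sketched with scikit-learn, under the assumption that the library is available; the outlier-removal criterion is not specified in the application, so the filter below is a placeholder:

```python
import numpy as np
from sklearn.model_selection import KFold  # assumption: scikit-learn is available

# Minimal sketch of the four-fold split described above: three folds train,
# one fold tests; the outlier-filtering rule shown here is a placeholder.
samples = np.arange(100)                   # stand-in for CT image sample ids
kf = KFold(n_splits=4, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(kf.split(samples)):
    train, test = samples[train_idx], samples[test_idx]
    # Placeholder filter: drop test samples flagged as clearly divergent.
    keep = np.ones(len(test), dtype=bool)  # the real criterion is not specified
    filtered_test = test[keep]
    print(f"fold {fold}: {len(train)} train, {len(filtered_test)} test")
```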
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, and when executed, it may include the processes of the embodiments of the above methods. The aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (ROM), or a random access memory (RAM), among others.
It should be understood that although the steps in the flowcharts of the accompanying drawings are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
With further reference to FIG. 6, as an implementation of the neural network-based brain tissue layering method shown in FIG. 2 above, the present application provides an embodiment of a neural network-based brain tissue layering apparatus. This apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus can be applied to various electronic devices.
As shown in FIG. 6, the neural network-based brain tissue layering apparatus 300 of this embodiment includes: a first acquisition module 301, an extraction module 302, a second acquisition module 303, and an output module 304, wherein:
the first acquisition module 301 is configured to obtain brain CT image information;
the extraction module 302 is configured to extract the features of the brain CT image information through a pre-trained brain segmentation convolutional neural network to obtain a feature map of the brain CT image;
the second acquisition module 303 is configured to obtain the semantic information of instance categories and the pixel-level position information of instances after performing a candidate-box alignment operation on the feature map of the brain CT image; and
the output module 304 is configured to input the semantic information of the instance categories and the pixel-level position information of the instances into a pre-trained layering neural network and output the layering result of the brain CT image.
In some optional implementations of this embodiment, the apparatus 300 further includes:
a preprocessing module 305, configured to perform channel preprocessing on the brain CT scan to obtain the brain CT image information.
The neural network-based brain tissue layering apparatus provided in the embodiments of the present application can implement the various implementations in the method embodiments of FIG. 2 to FIG. 5 and achieve the corresponding beneficial effects; to avoid repetition, details are not described here again.
To solve the above technical problems, an embodiment of the present application further provides a computer device. Refer to FIG. 7 for details; FIG. 7 is a block diagram of the basic structure of the computer device of this embodiment.
The computer device 7 includes a memory 71, a processor 72, and a network interface 73 that are communicatively connected to each other via a system bus. It should be noted that the figure only shows the computer device 7 with components 71-73, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. Those skilled in the art will understand that the computer device here is a device capable of automatically performing numerical computation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The computer device may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. The computer device can interact with the user through a keyboard, a mouse, a remote control, a touch panel, or a voice-control device.
The memory 71 includes at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, and optical discs; the computer-readable storage medium may be non-volatile or volatile. In some embodiments, the memory 71 may be an internal storage unit of the computer device 7, such as a hard disk or internal memory of the computer device 7. In other embodiments, the memory 71 may also be an external storage device of the computer device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the computer device 7. Of course, the memory 71 may also include both the internal storage unit of the computer device 7 and its external storage device. In this embodiment, the memory 71 is generally used to store the operating system and the various application software installed on the computer device 7, for example the program code of the neural network-based brain tissue layering method. In addition, the memory 71 can also be used to temporarily store various types of data that have been output or will be output.
In some embodiments, the processor 72 may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 72 is generally used to control the overall operation of the computer device 7. In this embodiment, the processor 72 is configured to run the program code stored in the memory 71 or to process data, for example to run the program code of the neural network-based brain tissue layering method.
The network interface 73 may include a wireless network interface or a wired network interface, and is generally used to establish a communication connection between the computer device 7 and other electronic devices.
The present application further provides another implementation, namely a computer-readable storage medium storing a neural network-based brain tissue layering program, where the program can be executed by at least one processor so that the at least one processor performs the steps of the neural network-based brain tissue layering method described above.
Through the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to enable a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, among others) to execute the methods described in the embodiments of the present application.
Obviously, the embodiments described above are only some of the embodiments of the present application, not all of them. The drawings show preferred embodiments of the present application, but they do not limit its patent scope. This application can be implemented in many different forms; on the contrary, these embodiments are provided so that the understanding of the disclosure of this application will be thorough and comprehensive. Although this application has been described in detail with reference to the foregoing embodiments, those skilled in the art can still modify the technical solutions described in the foregoing specific implementations or replace some of the technical features with equivalents. Any equivalent structure made using the contents of the specification and drawings of this application, whether used directly or indirectly in other related technical fields, falls equally within the scope of patent protection of this application.

Claims (20)

  1. A neural network-based brain tissue layering method, comprising:
    obtaining brain CT image information;
    extracting features of the brain CT image information through a pre-trained brain segmentation convolutional neural network to obtain a feature map of the brain CT image;
    obtaining semantic information of instance categories and pixel-level position information of instances after performing a candidate-box alignment operation on the feature map of the brain CT image; and
    inputting the semantic information of the instance categories and the pixel-level position information of the instances into a pre-trained layering neural network, and outputting a layering result of the brain CT image.
  2. The method according to claim 1, wherein before the step of obtaining the brain CT image information, the method further comprises:
    performing channel preprocessing on a brain CT scan to obtain the brain CT image information.
  3. The method according to claim 1, wherein the step of extracting the features of the brain CT image information through the pre-trained brain segmentation convolutional neural network to obtain the feature map of the brain CT image specifically comprises:
    inputting the obtained brain CT image information into a trained ResNet convolutional neural network, and extracting the feature map of the brain CT image.
  4. The method according to claim 3, wherein the step of obtaining the semantic information of the instance categories and the pixel-level position information of the instances after performing the candidate-box alignment operation on the feature map of the brain CT image specifically comprises:
    obtaining a candidate box for each pixel on the feature map of the brain CT image and performing a candidate-box alignment operation on the obtained candidate boxes; and
    inputting the feature map after the candidate-box alignment operation into a fully connected layer network to obtain the semantic information of the instance categories of the feature map and the pixel-level position information of the instances.
  5. The method according to claim 4, wherein after the step of obtaining the candidate box for each pixel on the feature map of the brain CT image and performing the candidate-box alignment operation on the obtained candidate boxes, the method further comprises:
    generating a mask for the pixels after each candidate-box alignment operation through a fully convolutional network, and segmenting the instances out.
  6. The method according to claim 4, wherein the step of inputting the semantic information of the instance categories and the pixel-level position information of the instances into the pre-trained layering neural network and outputting the layering result of the brain CT image specifically comprises:
    jointly inputting the semantic information of the instance categories and the pixel-level position information of the instances into a convolutional layer of the layering neural network to extract features again; and
    inputting the features extracted by the convolutional layer of the layering neural network into a fully connected layer of the layering neural network for classification, and obtaining and outputting the layering result of the brain CT image.
  7. A neural network-based brain tissue layering apparatus, comprising:
    a first acquisition module, configured to obtain brain CT image information;
    an extraction module, configured to extract features of the brain CT image information through a pre-trained brain segmentation convolutional neural network to obtain a feature map of the brain CT image;
    a second acquisition module, configured to obtain semantic information of instance categories and pixel-level position information of instances after performing a candidate-box alignment operation on the feature map of the brain CT image; and
    an output module, configured to input the semantic information of the instance categories and the pixel-level position information of the instances into a pre-trained layering neural network and output a layering result of the brain CT image.
  8. The apparatus according to claim 7, further comprising, before the first acquisition module:
    a preprocessing module, configured to perform channel preprocessing on a brain CT scan to obtain the brain CT image information.
  9. The neural network-based brain tissue layering apparatus according to claim 7, wherein the extraction module comprises:
    an extraction sub-module, configured to input the obtained brain CT image information into a trained ResNet convolutional neural network and extract the feature map of the brain CT image.
  10. The neural network-based brain tissue layering apparatus according to claim 9, wherein the second acquisition module comprises:
    a box-alignment sub-module, configured to obtain a candidate box for each pixel on the feature map of the brain CT image and perform a candidate-box alignment operation on the obtained candidate boxes; and
    a second acquisition sub-module, configured to input the feature map after the candidate-box alignment operation into a fully connected layer network to obtain the semantic information of the instance categories of the feature map and the pixel-level position information of the instances.
  11. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein when the processor executes the computer-readable instructions, the following steps of the neural network-based brain tissue layering method are implemented:
    obtaining brain CT image information;
    extracting features of the brain CT image information through a pre-trained brain segmentation convolutional neural network to obtain a feature map of the brain CT image;
    obtaining semantic information of instance categories and pixel-level position information of instances after performing a candidate-box alignment operation on the feature map of the brain CT image; and
    inputting the semantic information of the instance categories and the pixel-level position information of the instances into a pre-trained layering neural network, and outputting a layering result of the brain CT image.
  12. The computer device according to claim 11, wherein before the step of obtaining the brain CT image information, the steps further comprise:
    performing channel preprocessing on a brain CT scan to obtain the brain CT image information.
  13. The computer device according to claim 11, wherein the step of extracting the features of the brain CT image information through the pre-trained brain segmentation convolutional neural network to obtain the feature map of the brain CT image specifically comprises:
    inputting the obtained brain CT image information into a trained ResNet convolutional neural network, and extracting the feature map of the brain CT image.
  14. The computer device according to claim 13, wherein the step of obtaining the semantic information of the instance categories and the pixel-level position information of the instances after performing the candidate-box alignment operation on the feature map of the brain CT image specifically comprises:
    obtaining a candidate box for each pixel on the feature map of the brain CT image and performing a candidate-box alignment operation on the obtained candidate boxes; and
    inputting the feature map after the candidate-box alignment operation into a fully connected layer network to obtain the semantic information of the instance categories of the feature map and the pixel-level position information of the instances.
  15. The computer device according to claim 14, wherein after the step of obtaining the candidate box for each pixel on the feature map of the brain CT image and performing the candidate-box alignment operation on the obtained candidate boxes, the steps further comprise:
    generating a mask for the pixels after each candidate-box alignment operation through a fully convolutional network, and segmenting the instances out.
  16. A computer-readable storage medium storing computer-readable instructions, wherein when the computer-readable instructions are executed by a processor, the processor is caused to perform the following steps:
    obtaining brain CT image information;
    extracting features of the brain CT image information through a pre-trained brain segmentation convolutional neural network to obtain a feature map of the brain CT image;
    obtaining semantic information of instance categories and pixel-level position information of instances after performing a candidate-box alignment operation on the feature map of the brain CT image; and
    inputting the semantic information of the instance categories and the pixel-level position information of the instances into a pre-trained layering neural network, and outputting a layering result of the brain CT image.
  17. The computer-readable storage medium according to claim 16, wherein before the step of obtaining the brain CT image information, the steps further comprise:
    performing channel preprocessing on a brain CT scan to obtain the brain CT image information.
  18. The computer-readable storage medium according to claim 16, wherein the step of extracting the features of the brain CT image information through the pre-trained brain segmentation convolutional neural network to obtain the feature map of the brain CT image specifically comprises:
    inputting the obtained brain CT image information into a trained ResNet convolutional neural network, and extracting the feature map of the brain CT image.
  19. The computer-readable storage medium according to claim 18, wherein the step of obtaining the semantic information of the instance categories and the pixel-level position information of the instances after performing the candidate-box alignment operation on the feature map of the brain CT image specifically comprises:
    obtaining a candidate box for each pixel on the feature map of the brain CT image and performing a candidate-box alignment operation on the obtained candidate boxes; and
    inputting the feature map after the candidate-box alignment operation into a fully connected layer network to obtain the semantic information of the instance categories of the feature map and the pixel-level position information of the instances.
  20. The computer-readable storage medium according to claim 19, wherein after the step of obtaining the candidate box for each pixel on the feature map of the brain CT image and performing the candidate-box alignment operation on the obtained candidate boxes, the steps further comprise:
    generating a mask for the pixels after each candidate-box alignment operation through a fully convolutional network, and segmenting the instances out.
PCT/CN2020/098936 2019-09-25 2020-06-29 Brain tissue layering method and device based on neural network, and computer device WO2021057148A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910909092.8 2019-09-25
CN201910909092.8A CN110827236B (en) 2019-09-25 2019-09-25 Brain tissue layering method, device and computer equipment based on neural network

Publications (1)

Publication Number Publication Date
WO2021057148A1 true WO2021057148A1 (en) 2021-04-01

Family

ID=69548241

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098936 WO2021057148A1 (en) 2019-09-25 2020-06-29 Brain tissue layering method and device based on neural network, and computer device

Country Status (2)

Country Link
CN (1) CN110827236B (en)
WO (1) WO2021057148A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110827236B (en) * 2019-09-25 2024-04-05 平安科技(深圳)有限公司 Brain tissue layering method, device and computer equipment based on neural network
CN111754520B (en) * 2020-06-09 2023-09-15 江苏师范大学 Deep learning-based cerebral hematoma segmentation method and system
CN112102239A (en) * 2020-08-10 2020-12-18 北京工业大学 Image processing method and system for full-layer brain CT image
CN113768528A (en) * 2021-09-26 2021-12-10 华中科技大学 CT image cerebral hemorrhage auxiliary positioning system
CN116342603B (en) * 2023-05-30 2023-08-29 杭州脉流科技有限公司 Method for obtaining arterial input function

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10303979B2 (en) * 2016-11-16 2019-05-28 Phenomic Ai Inc. System and method for classifying and segmenting microscopy images with deep multiple instance learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190139216A1 (en) * 2017-11-03 2019-05-09 Siemens Healthcare Gmbh Medical Image Object Detection with Dense Feature Pyramid Network Architecture in Machine Learning
CN109493330A (en) * 2018-11-06 2019-03-19 电子科技大学 Nucleus instance segmentation method based on multi-task learning
CN110060244A (en) * 2019-04-15 2019-07-26 深圳市麦迪普科技有限公司 System and method for cell detection and segmentation based on a deep learning neural network
CN110263656A (en) * 2019-05-24 2019-09-20 南方科技大学 Cancer cell identification method, device and system
CN110827236A (en) * 2019-09-25 2020-02-21 平安科技(深圳)有限公司 Neural network-based brain tissue layering method and device, and computer equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HE, KAIMING; GKIOXARI, GEORGIA; DOLLÁR, PIOTR; GIRSHICK, ROSS: "Mask R-CNN", 2017 IEEE International Conference on Computer Vision (ICCV), 22 October 2017 (2017-10-22), pages 2980-2988, XP033283165, DOI: 10.1109/ICCV.2017.322 *
YIN, HANG: "Research and Application of Ventricular Segmentation Using Nuclear Magnetic Resonance Imaging Based on Deep Learning", Chinese Master's Theses Full-Text Database, 1 April 2019 (2019-04-01), pages 1-67, XP055795803, ISSN: 1674-0246 *

Also Published As

Publication number Publication date
CN110827236A (en) 2020-02-21
CN110827236B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
WO2021057148A1 (en) Brain tissue layering method and device based on neural network, and computer device
JP6843086B2 (en) Image processing systems, methods for performing multi-label semantic edge detection in images, and non-transitory computer-readable storage media
US11861829B2 (en) Deep learning based medical image detection method and related device
WO2022077917A1 (en) Instance segmentation model sample screening method and apparatus, computer device and medium
CN111524106B (en) Skull fracture detection and model training method, device, equipment and storage medium
CN112262395A (en) Classification based on annotation information
CN111369576B (en) Training method of image segmentation model, image segmentation method, device and equipment
WO2021139324A1 (en) Image recognition method and apparatus, computer-readable storage medium and electronic device
JP7026826B2 (en) Image processing methods, electronic devices and storage media
US10853409B2 (en) Systems and methods for image search
US20220189142A1 (en) Ai-based object classification method and apparatus, and medical imaging device and storage medium
CN109858333B (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN109389129A (en) Image processing method, electronic device and storage medium
JP7391267B2 (en) Medical image processing methods, devices, equipment, storage media and computer programs
WO2020190480A1 (en) Classifying an input data set within a data category using multiple data recognition tools
CN111199541A (en) Image quality evaluation method, image quality evaluation device, electronic device, and storage medium
US11893773B2 (en) Finger vein comparison method, computer equipment, and storage medium
CN110533046A (en) Image instance segmentation method and device
CN113902945A (en) Multi-modal breast magnetic resonance image classification method and system
CN113724185A (en) Model processing method and device for image classification and storage medium
CN113822846A (en) Method, apparatus, device and medium for determining region of interest in medical image
CN117373034A (en) Method and system for identifying background information
KR20200054555A (en) Apparatus for medical image processing
CN113408596B (en) Pathological image processing method and device, electronic equipment and readable storage medium
WO2022247448A1 (en) Data processing method and apparatus, computing device, and computer readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20870417; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20870417; Country of ref document: EP; Kind code of ref document: A1