WO2021189959A1 - Brain midline recognition method and apparatus, computer device, and storage medium - Google Patents


Info

Publication number
WO2021189959A1
WO2021189959A1 (PCT application PCT/CN2020/135333)
Authority
WO
WIPO (PCT)
Prior art keywords
brain
midline
image
feature
recognition result
Prior art date
Application number
PCT/CN2020/135333
Other languages
English (en)
French (fr)
Inventor
周鑫
徐尚良
章古月
陈凯星
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021189959A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30016 Brain

Definitions

  • This application relates to the technical field of artificial intelligence image classification, in particular to a brain midline recognition method, device, computer equipment and storage medium.
  • the midline structure in a brain CT image is usually related to the intracranial pressure of the brain. Recognizing the brain midline provides an important reference for determining the degree of brain mass effect and the degree of intracranial pressure increase, and it is currently a brain index that requires particular attention.
  • This application provides a brain midline recognition method, device, computer equipment, and storage medium that implement multi-scale extraction of midline features, feature fusion using a feature pyramid network model, and interpolation, weighted fusion, and midline segmentation using a weighted fusion model to identify the brain midline, and finally synthesize an image of the brain midline.
  • This application is suitable for smart medical and other fields and can further promote the construction of smart cities. It can quickly and accurately identify the brain midline automatically, improving both recognition accuracy and recognition efficiency.
  • a method for identifying the midline of the brain including:
  • the brain midline detection model includes a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model;
  • the classification and recognition result characterizes whether the brain midline can be segmented from the brain image;
  • all the feature maps to be processed are input into the feature pyramid network model, and feature fusion is performed on all the feature maps to be processed through the feature pyramid network model to generate at least one fused feature map group;
  • a brain midline recognition device including:
  • the acquisition module is used to acquire a brain image associated with a user identification code, and perform image preprocessing on the brain image to obtain an image to be recognized;
  • the input module is used to input the image to be recognized into a trained brain midline detection model;
  • the brain midline detection model includes a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model;
  • the extraction module is used to extract the midline feature of the image to be recognized through the multi-scale deep network model to generate at least one feature map to be processed and a classification and recognition result; the classification and recognition result characterizes whether the brain midline can be segmented from the brain image;
  • the fusion module is used to input all the feature maps to be processed into the feature pyramid network model when it is detected that the classification and recognition result is that the brain midline can be segmented, perform feature fusion on the feature maps to be processed, and generate at least one fused feature map group;
  • the segmentation module is used to input all the fused feature map groups into the weighted fusion model, use bilinear interpolation to perform interpolation and weighted fusion on all the fused feature map groups to generate the feature image to be segmented, and perform midline segmentation on the feature image to be segmented to obtain a brain midline segmentation recognition result;
  • the synthesis module is used to synthesize the brain image with the segmented recognition image in the brain midline segmentation recognition result to obtain a brain midline image, and to store the user identification code, the classification recognition result, and the brain midline image in association as the final brain midline recognition result.
  • a computer device includes a memory, a processor, and computer-readable instructions that are stored in the memory and can run on the processor; the processor implements the following steps when executing the computer-readable instructions:
  • the brain midline detection model includes a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model;
  • the classification and recognition result characterizes whether the brain midline can be segmented from the brain image;
  • all the feature maps to be processed are input into the feature pyramid network model, and feature fusion is performed on all the feature maps to be processed through the feature pyramid network model to generate at least one fused feature map group;
  • One or more readable storage media storing computer readable instructions, when the computer readable instructions are executed by one or more processors, the one or more processors execute the following steps:
  • the brain midline detection model includes a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model;
  • the classification and recognition result characterizes whether the brain midline can be segmented from the brain image;
  • all the feature maps to be processed are input into the feature pyramid network model, and feature fusion is performed on all the feature maps to be processed through the feature pyramid network model to generate at least one fused feature map group;
  • the brain midline recognition method, device, computer equipment, and storage medium provided in this application obtain a brain image associated with a user identification code and perform image preprocessing on the brain image to obtain an image to be recognized; the image to be recognized is input into a trained brain midline detection model, which includes a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model; midline feature extraction is performed on the image to be recognized through the multi-scale deep network model to generate at least one feature map to be processed and a classification recognition result; when it is detected that the classification recognition result is that the brain midline can be segmented, all the feature maps to be processed are input into the feature pyramid network model, which performs feature fusion on them to generate at least one fused feature map group; all the fused feature map groups are input into the weighted fusion model, which uses bilinear interpolation to interpolate and weight all the fused feature map groups to generate the feature image to be segmented; midline segmentation is performed on the feature image to be segmented to obtain the brain midline segmentation recognition result; and the brain image is synthesized with the segmented recognition image in the brain midline segmentation recognition result to obtain the brain midline image, after which the user identification code, the classification recognition result, and the brain midline image are stored in association as the final brain midline recognition result.
  • the brain image associated with the user identification code is image-preprocessed, and the midline feature is extracted through the multi-scale deep network model. When the brain midline can be segmented, feature fusion is performed through the feature pyramid network model, and the weighted fusion model is then used for interpolation, weighted fusion, and midline segmentation to obtain the brain midline segmentation recognition result; finally the brain midline image is synthesized, and the user identification code, the classification recognition result, and the brain midline image are stored in association as the final brain midline recognition result. The brain midline can thus be identified automatically, quickly, and accurately and marked in the brain image, which improves recognition accuracy, improves recognition efficiency, and makes the result easy to view.
  • FIG. 1 is a schematic diagram of the application environment of the brain midline recognition method in an embodiment of the present application;
  • FIG. 2 is a flowchart of a brain midline recognition method in an embodiment of the present application;
  • FIG. 3 is a flowchart of a brain midline recognition method in another embodiment of the present application;
  • FIG. 4 is a flowchart of step S10 of the brain midline recognition method in an embodiment of the present application;
  • FIG. 5 is a flowchart of step S50 of the brain midline recognition method in an embodiment of the present application.
  • Fig. 6 is a schematic block diagram of a brain midline recognition device in an embodiment of the present application.
  • Fig. 7 is a schematic diagram of a computer device in an embodiment of the present application.
  • the brain midline recognition method provided by this application can be applied in the application environment as shown in Fig. 1, in which the client (computer equipment) communicates with the server through the network.
  • the client includes, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, cameras, and portable wearable devices.
  • the server can be implemented as an independent server or a server cluster composed of multiple servers.
  • a method for recognizing the midline of the brain is provided, and the technical solution mainly includes the following steps S10-S60:
  • S10 Acquire a brain image associated with a user identification code, and perform image preprocessing on the brain image to obtain an image to be recognized.
  • the brain image is a CT image of the user's head scanned by a CT (Computed Tomography) device
  • the user identification code is a unique identification code assigned to the scanned user
  • the user identification code is associated with the brain image, indicating that the brain image is a CT image of the user's head associated with the user identification code
  • the image preprocessing is the process of sequentially performing re-sampling, window width and window level transformation, normalization, and effective extraction on the brain image; the re-sampling resamples CT images of different pixel sizes, or coarse- and fine-grained CT images, at the same isotropic resolution and outputs pixel images of the same size.
  • the resampling can unify all CT images into pixel images of the same dimensions, which is conducive to subsequent brain midline recognition.
  • the window width and window level transformation converts all images according to the same window width and window level parameters.
  • the effective extraction removes images without any image content (for example, the first few blank images of a scan), so that only images within the effective range are processed; removing the invalid images avoids processing them and improves subsequent recognition efficiency.
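  • As an illustrative sketch of the effective-extraction step (the variance threshold and the slice-first array layout are assumptions for this example, not values taken from the patent), blank slices can be dropped as follows:

```python
import numpy as np

def drop_blank_slices(volume, threshold=1e-6):
    """Keep only slices whose intensity variation suggests real content.

    `volume` is a (num_slices, H, W) array; slices that are (nearly)
    constant -- e.g. the first few blank images of a scan -- are removed.
    The variance threshold is an illustrative choice.
    """
    keep = [i for i in range(volume.shape[0]) if volume[i].std() > threshold]
    return volume[keep]

vol = np.zeros((4, 8, 8), dtype=np.float32)
vol[2, 3:5, 3:5] = 1.0   # only slices 2 and 3 contain content
vol[3, 1, 1] = 0.5
trimmed = drop_blank_slices(vol)   # the two blank leading slices are removed
```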
  • the image to be recognized is the image obtained after the image preprocessing, and using it can speed up subsequent recognition by the brain midline detection model.
  • step S10 that is, performing image preprocessing on the brain image to obtain the image to be recognized includes:
  • S101 Convert the brain image according to preset window width and window level parameters to obtain a transit image.
  • the window width is the range of CT values displayed on the CT image.
  • the tissues and lesions within this CT value range are displayed in different simulated gray scales; tissues and lesions with CT values above this range are displayed as white shadow with no gray scale difference, no matter how far above the range they are.
  • likewise, tissues below this range are displayed as black shadow with no gray scale difference, no matter how far below the range they are.
  • the window level is the center position of a window width range; for the same window width, different window levels cover different ranges of CT values.
  • the window width and window level parameters are parameters, chosen to be conducive to identifying the brain midline in the brain image, that set the window width and window level; they include the window width parameter and the window level parameter.
  • the conversion of the brain image includes performing the re-sampling and the window width and window level transformation on the brain image: first, the re-sampling is performed on the brain image; second, according to the window width and window level parameters, the window width and window level transformation is applied to the resampled brain image to output an image; finally, the transformed image is determined to be the transit image, an image whose window width and window level are helpful for identifying the brain midline.
  • S102 Perform normalization processing on the transfer image to obtain the image to be recognized.
  • the normalization process limits the data to be processed within a certain range after processing.
  • the normalization process facilitates subsequent recognition by the brain midline detection model; the data are normalized to values between 0 and 1, and the normalized transit image is then subjected to the effective extraction operation, that is, invalid images are removed to obtain the image to be recognized.
  • This application converts the brain image according to the preset window width and window level parameters to obtain the transit image and normalizes the transit image to obtain the image to be recognized, so that the useful information in the brain image can be extracted, which helps to speed up subsequent recognition by the brain midline detection model.
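  • The window transformation and normalization of steps S101 and S102 can be sketched together as follows; the (80, 40) brain window and the [0, 1] output range are assumed defaults for this example, since the patent leaves the concrete parameter values open:

```python
import numpy as np

def window_and_normalize(ct_hu, window_width=80, window_level=40):
    """Apply a window width/window level transform to raw CT values (HU)
    and normalize the result to [0, 1].

    Values above the window saturate to 1 (white, no gray scale
    difference) and values below it saturate to 0 (black), matching the
    windowing behaviour described in the text.
    """
    lo = window_level - window_width / 2.0
    hi = window_level + window_width / 2.0
    clipped = np.clip(ct_hu, lo, hi)      # out-of-window values saturate
    return (clipped - lo) / (hi - lo)     # normalize to [0, 1]

slice_hu = np.array([[-1000.0, 0.0],
                     [40.0, 3000.0]])     # air, water, mid-window, bone
out = window_and_normalize(slice_hu)
```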
  • the brain midline detection model includes a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model.
  • the brain midline detection model is a trained multi-model fusion neural network model, and the brain midline detection model combines the multi-scale deep network model, the feature pyramid network model, and the weighted fusion model , That is, the brain midline detection model includes the multi-scale deep network model, the feature pyramid network model, and the weighted fusion model.
  • the brain midline detection model can recognize the input image to be recognized, determine whether a brain midline is present, and identify the brain midline; the multi-scale deep network model extracts the midline feature from the image to be recognized at multiple scales and recognizes, according to the extracted midline feature, whether the image to be recognized contains a brain midline.
  • the network structure of the multi-scale deep network model can be set according to requirements.
  • the network structure of the multi-scale deep network model may be that of ResNet50, ResNet101, GoogLeNet, or VGG19.
  • preferably, the network structure of the multi-scale deep network model is that of ResNet50.
  • the feature pyramid network model is a deep neural network based on the BiFPN model.
  • the feature pyramid network model fuses high-level features (which carry stronger semantic information) to obtain more advanced features and predicts from the fused features; its network structure is the BiFPN network structure. The weighted fusion model is a neural network model that uses bilinear interpolation to generate multiple sets of images of the same size as the brain image, fuses the generated sets of images with weights, and predicts the brain midline.
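  • A toy stand-in for the multi-scale extraction: a real backbone such as ResNet50 would use learned convolutions, so only the five-level, successively halved shape contract is illustrated here, with average pooling as an assumed placeholder:

```python
import numpy as np

def multiscale_features(img, num_levels=5):
    """Produce feature maps at five successively halved scales.

    Average pooling stands in for the learned stages of a ResNet50-style
    backbone; only the multi-scale output structure (five levels, each
    half the resolution of the previous one) mirrors the text.
    """
    feats = []
    cur = img
    for _ in range(num_levels):
        feats.append(cur)
        h, w = cur.shape
        cur = cur.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # 2x2 pool
    return feats

levels = multiscale_features(np.ones((64, 64)))
```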
  • before step S20, that is, before inputting the image to be recognized into the trained brain midline detection model, the method includes:
  • the brain sample set includes a plurality of brain sample images; each brain sample image is associated with a brain midline identification label, and the brain midline identification label includes a brain midline binary classification label and a brain midline annotated image.
  • the brain sample set is a collection of all brain sample images
  • the brain sample image is a historically collected CT image of the head that has undergone the image preprocessing.
  • the brain sample image is associated with a brain midline identification label;
  • the brain midline identification label indicates whether the corresponding brain sample image has brain midline information;
  • the brain midline identification label includes the brain midline binary classification label and the brain midline annotated image;
  • the brain midline binary classification label indicates whether the brain sample image corresponding to the brain midline identification label has a brain midline; it includes two categories: one category is "brain midline can be segmented" (marked as 1 during model training), and the other is "brain midline cannot be segmented" (marked as 0 during model training). The brain midline annotated image is an image that marks the coordinate position of the brain midline for the brain sample image corresponding to the brain midline identification label, that is, the brain midline is annotated on the corresponding brain sample image.
  • S202 Input the brain sample image into an initial combined recognition model containing initial parameters; the initial combined recognition model includes an initial deep network model, an initial pyramid network model, and an initial weighted fusion model.
  • the combined recognition model is a multi-model fusion neural network model
  • the initial combined recognition model includes an initial deep network model, an initial pyramid network model, and an initial weighted fusion model
  • the initial combined recognition model includes the initial parameters, which comprise all parameters of the initial deep network model, the initial pyramid network model, and the initial weighted fusion model.
  • S203 Perform the midline feature extraction on the brain sample image through the initial deep network model, and generate at least one feature map of the sample to be processed and a sample classification recognition result.
  • the midline feature is a feature related to the midline of the brain in multiple dimensions
  • the midline feature includes the symmetry and continuity features of the midline of the brain
  • the sample feature map to be processed is the feature vector map with the midline feature obtained after extracting the midline feature, that is, a feature vector map obtained by convolving the brain sample image
  • the sample feature map to be processed includes features of multiple levels
  • the feature map of the sample to be processed includes feature vector maps outputted at five levels respectively
  • the sample classification recognition result is either "brain midline can be segmented" or "brain midline cannot be segmented"; when it is "brain midline can be segmented", the result indicates that the brain midline can be segmented from the brain sample image, together with the probability that the brain sample image contains a brain midline.
  • the multi-scale deep network model is obtained after the training of the initial deep network model is completed.
  • S204 Determine a first loss value according to the sample classification and recognition result and the brain midline binary classification label.
  • the sample classification recognition result and the brain midline binary classification label are input into the first loss function in the initial deep network model, and the first loss value, denoted L1, is calculated by the first loss function; the first loss function can be set according to requirements, for example a cross-entropy loss function. The first loss value indicates the gap between the sample classification recognition result corresponding to the brain sample image and the brain midline binary classification label, and the model can be moved continuously toward accurate recognition through the first loss value.
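  • If the first loss function is chosen to be cross-entropy, as the text suggests it may be, L1 could be computed as in this sketch (the concrete probabilities are made-up inputs for illustration):

```python
import math

def binary_cross_entropy(p, y):
    """Cross-entropy between the predicted probability p that the brain
    midline can be segmented and the binary classification label y
    (1 = can be segmented, 0 = cannot). One candidate form of the
    patent's first loss function L1."""
    eps = 1e-12  # guards against log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

# A confident correct prediction is penalized less than a wrong one.
loss_good = binary_cross_entropy(0.9, 1)
loss_bad = binary_cross_entropy(0.1, 1)
```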
  • S205 When it is detected that the sample classification recognition result is that the brain midline can be segmented, input all the sample feature maps to be processed into the initial pyramid network model, and fuse the sample feature maps to be processed through the initial pyramid network model to generate at least one fused sample feature map group.
  • when the sample classification recognition result of the brain sample image is that the brain midline can be segmented, the sample feature maps to be processed are input into the initial pyramid network model.
  • the initial pyramid network model is a deep neural network based on the BiFPN model.
  • the BiFPN model can better balance feature information of different scales.
  • the BiFPN model is based on an FPN model, in which a top-down channel merges the features output at multiple levels; BiFPN adds a bottom-up channel and an extra edge between features of the same level, so that more features can be merged at the same time without increasing the cost, and these structures are stacked repeatedly to obtain a more advanced feature fusion method.
  • the five levels of the sample feature maps to be processed are fused through the initial pyramid network model to generate the fused sample feature map groups corresponding to the five levels one-to-one, that is, five groups of the fused sample feature maps
  • the five groups of the fused sample feature map groups respectively indicate five levels of different scales of fused feature information, and the feature pyramid network model is obtained after the initial pyramid network model training is completed.
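  • A minimal sketch of the BiFPN-style fusion described above, with nearest-neighbour resizing and fixed 0.5 weights standing in for the model's learned resampling and learned fusion weights (both are assumptions made for this example):

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling (stand-in for the model's resize)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def downsample2x(x):
    """2x2 average pooling (stand-in for the model's downsampling)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def bifpn_like_fusion(feats):
    """One BiFPN-style pass over per-level feature maps, finest first:
    a top-down path followed by a bottom-up path with a same-level skip
    connection. In BiFPN the fusion weights are learned; here they are
    fixed to 0.5 purely for illustration."""
    # top-down: inject coarser (more semantic) levels into finer ones
    td = list(feats)
    for i in range(len(td) - 2, -1, -1):
        td[i] = 0.5 * td[i] + 0.5 * upsample2x(td[i + 1])
    # bottom-up: propagate refined fine levels back up (skip edge via td)
    out = list(td)
    for i in range(1, len(out)):
        out[i] = 0.5 * td[i] + 0.5 * downsample2x(out[i - 1])
    return out

levels = [np.ones((8, 8)), np.full((4, 4), 2.0), np.full((2, 2), 4.0)]
fused = bifpn_like_fusion(levels)
```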
  • S206 Determine a second loss value according to all the fused sample feature map groups and the brain midline annotated image.
  • the coordinate position of the brain midline can be predicted from all the fused sample feature map groups; the predicted coordinate position of the brain midline and the coordinate position of the brain midline in the brain midline annotated image are input into the second loss function, which calculates the gap between them to obtain the second loss value, denoted L2.
  • the bilinear interpolation (bilinear upsampling) method makes full use of the four pixels surrounding a pixel in the feature vector map to jointly determine the value of the corresponding pixel in the output target feature vector map. Using the bilinear interpolation method, the fused sample feature map group corresponding to each level is up-sampled to an image of the same size as the brain sample image and merged into a sample feature vector map to be fused corresponding to that fused sample feature map group; the weighted fusion then takes the weighted product of the sample feature vector maps to be fused corresponding to the five levels with the weight parameter of each level in the initial weighted fusion model and fuses them into one sample feature vector map to be segmented.
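  • The four-neighbour bilinear interpolation can be sketched as follows; the align-corners-style coordinate mapping is an implementation choice made for this example, not mandated by the patent:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Bilinear interpolation: each output pixel is a weighted mix of the
    four surrounding input pixels, as described for the weighted fusion
    model's upsampling step."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)   # output row -> input row coord
    xs = np.linspace(0, in_w - 1, out_w)   # output col -> input col coord
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

small = np.array([[0.0, 1.0],
                  [2.0, 3.0]])
big = bilinear_resize(small, 3, 3)   # centre pixel is the 4-neighbour mean
```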
  • the midline segmentation determines the coordinate position of the brain midline in the sample feature vector map to be segmented according to the value corresponding to each pixel in that map, that is, the process of identifying the probability that each pixel in the sample feature vector map to be segmented (of the same size as the brain sample image) is a point on the brain midline, marking the pixels whose probability exceeds the preset threshold as points on the brain midline, and segmenting out a sample segmentation image. The sample segmentation recognition result includes the sample segmentation image and the probability value that each pixel in the sample segmentation image corresponds to a point on the brain midline.
  • the third loss value is obtained according to the sample segmentation image and the brain midline annotated image in the sample segmentation recognition result, that is, the brain midline annotated image is distance transformed to generate a brain midline distance image
  • the distance transformation method can be set according to requirements.
  • the distance transformation method can be Euclidean distance transformation, Manhattan (cityblock) distance transformation, or Chebyshev distance transformation.
  • preferably, the distance transformation method is Euclidean distance transformation.
  • the brain midline distance image is an image with a distance field formed by the Euclidean distance from each point on the image to the coordinate position of the brain midline in the brain midline annotated image; the sample segmentation image and the brain midline distance image are input into the distance loss function, which calculates the third loss value. This introduces a loss based on the dimensionality of the distance field and can better measure the gap between the sample segmentation image and the brain midline annotated image.
  • step S208 that is, determining a third loss value according to the sample segmentation recognition result and the brain midline annotated image, includes:
  • S2081 Perform distance transformation on the brain midline labeled image to obtain a brain midline distance image.
  • the brain midline annotated image is converted into the brain midline distance image through the distance transformation method; for each pixel, the Euclidean distance from that pixel to the coordinate position of the brain midline in the brain midline annotated image is obtained as its distance field value, and the distance field values of all pixels constitute the brain midline distance image.
  • the distance loss function is L3 = mean(A × B), where:
  • L3 is the third loss value;
  • A is the sample segmentation image;
  • B is the brain midline distance image;
  • A × B is the pixel-wise product of the sample segmentation image and the brain midline distance image; that is, the probability value of each pixel in the sample segmentation image is multiplied by the distance field value of the same pixel in the brain midline distance image, and the average of all the products over the pixels is taken to obtain the third loss value.
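  • The third loss can be sketched as follows; the brute-force distance transform stands in for a library EDT to keep the example dependency-free, and the tiny masks are made-up inputs:

```python
import numpy as np

def euclidean_distance_image(midline_mask):
    """Brute-force Euclidean distance transform: each pixel's value is
    its distance to the nearest annotated midline pixel."""
    h, w = midline_mask.shape
    pts = np.argwhere(midline_mask > 0)              # midline coordinates
    yy, xx = np.mgrid[0:h, 0:w]
    d = np.sqrt((yy[..., None] - pts[:, 0]) ** 2 +
                (xx[..., None] - pts[:, 1]) ** 2)
    return d.min(axis=-1)

def distance_loss(pred_probs, midline_mask):
    """Third loss L3 = mean(A x B): predicted segmentation probabilities
    weighted pixel-wise by the distance field, so confident predictions
    far from the annotated midline are penalized most."""
    dist = euclidean_distance_image(midline_mask)
    return float((pred_probs * dist).mean())

gt = np.zeros((3, 3)); gt[:, 1] = 1          # annotated midline: column 1
perfect = gt.astype(float)                   # prediction on the midline
shifted = np.zeros((3, 3)); shifted[:, 2] = 1.0   # prediction one column off
loss_perfect = distance_loss(perfect, gt)
loss_shifted = distance_loss(shifted, gt)
```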
  • This application obtains the brain midline distance image by performing distance transformation on the brain midline annotated image, inputs the sample segmentation image in the sample segmentation recognition result and the brain midline distance image into the distance loss function, and calculates the third loss value through the distance loss function. Introducing a loss based on the dimensionality of the distance field better measures the gap between the sample segmentation image and the brain midline annotated image and moves the model closer to efficient and accurate recognition results, improving recognition accuracy.
  • the preset first loss weight, second loss weight, and third loss weight are obtained; the sum of the first loss weight, the second loss weight, and the third loss weight is 1, and the three weights can be continuously adjusted during training until they are fixed after convergence.
  • the first loss value, the second loss value, the third loss value, the first loss weight, the second loss weight, and the third loss weight are input into a weighting function to obtain the total loss value, where the weighting function is:
  • L is the total loss value
  • L 1 is the first loss value
  • L 2 is the second loss value
  • L 3 is the third loss value
  • ⁇ 1 is the first loss weight
  • ⁇ 2 is the second loss weight
  • ⁇ 3 is the third loss weight.
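A minimal sketch of the weighting function above. The concrete weight values passed in the usage are placeholders: the patent only requires that the three weights sum to 1 and be tuned during training.

```python
def total_loss(l1, l2, l3, a1, a2, a3):
    """L = α1·L1 + α2·L2 + α3·L3, with α1 + α2 + α3 = 1.

    l1: classification loss, l2: pyramid/coordinate loss,
    l3: distance-field loss; a1..a3 are their (trainable) weights.
    """
    assert abs(a1 + a2 + a3 - 1.0) < 1e-9, "loss weights must sum to 1"
    return a1 * l1 + a2 * l2 + a3 * l3

# Example with assumed placeholder weights:
# total_loss(1.0, 2.0, 3.0, 0.5, 0.25, 0.25)
```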
  • the convergence condition may be that the total loss value is small and no longer decreases after 2000 calculations: when the total loss value no longer falls, training stops, and the converged initial combined recognition model is recorded as the trained brain midline detection model. The convergence condition may also be that the total loss value is less than a set threshold: when the total loss value drops below the threshold, training stops, and the converged initial combined recognition model is likewise recorded as the trained brain midline detection model.
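The two convergence conditions above can be sketched as a simple early-stopping loop. This is illustrative only: `step_fn`, the patience value, and the threshold are assumptions standing in for one training iteration and the patent's "2000 calculations" / "set threshold" conditions.

```python
def train_until_converged(step_fn, threshold=None, patience=2000):
    """Run training steps until either the loss falls below `threshold`,
    or the best loss seen has not improved for `patience` consecutive
    steps (loss 'small and no longer dropping'). Returns the loss history."""
    best, stale, losses = float("inf"), 0, []
    while True:
        loss = step_fn()          # one optimization step; returns total loss
        losses.append(loss)
        if threshold is not None and loss < threshold:
            break                 # condition 2: below the set threshold
        if loss < best - 1e-9:
            best, stale = loss, 0
        else:
            stale += 1
        if stale >= patience:
            break                 # condition 1: no improvement for `patience` steps
    return losses
```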
  • the midline feature is a feature related to the midline of the brain in multiple dimensions
  • the midline feature includes the symmetry and continuity features of the midline of the brain
  • the feature maps to be processed are obtained after extracting the midline feature;
  • the feature maps to be processed carry the midline feature and include features at multiple levels;
  • preferably, the feature maps to be processed comprise the feature vector maps output by five levels.
  • the classification recognition result is one of two classes: the brain midline can be segmented (the output value approaches 1) or the brain midline cannot be segmented (the output value approaches 0). When the classification recognition result is that the brain midline can be segmented, it indicates both that the image to be recognized can have its brain midline segmented and the probability that the image to be recognized contains a brain midline.
  • after the midline feature extraction of the image to be recognized is performed by the multi-scale deep network model to generate at least one feature map to be processed and a classification recognition result, the method includes:
  • when the classification recognition result is detected to be that the brain midline cannot be segmented, the brain image lacks brain midline features and no midline can be segmented; the brain image is then marked as a brain-free midline image.
  • the brain-free midline image is associated with the user identification code. Marking brain-free midline images in this way supports the subsequent construction of a three-dimensional image supplied to the three-dimensional midline offset feature recognition model for determining the brain midline offset type.
  • the feature pyramid network model is a trained deep neural network based on the BiFPN model.
  • the BiFPN model can better balance feature information of different scales.
  • on top of the FPN top-down pathway used to fuse the features output at multiple levels, BiFPN adds a bottom-up pathway and an extra edge between features of the same level, fusing more features at the same time without increasing the loss; these blocks are then stacked repeatedly to obtain a more advanced feature fusion.
  • the feature maps to be processed at the five levels are fused through the feature pyramid network model, generating fused feature map groups in one-to-one correspondence with the five levels, i.e., five groups of fused feature maps, which respectively represent the fused feature information of the five levels at different scales.
  • the bilinear interpolation method makes full use of the four pixels surrounding a pixel in the feature vector map to jointly determine the corresponding value in the output target feature vector map.
  • each fused feature map group is upsampled into an amplified feature map of the same size as the brain image; the interpolation determines each pixel value of the amplified feature map using the bilinear interpolation method.
  • in the weighted fusion, the amplified feature maps corresponding to the five levels are multiplied by their weights and fused into one feature image to be segmented.
  • the midline segmentation determines the coordinate position of the brain midline in the feature image to be segmented from the value at each pixel: for each pixel of the feature image to be segmented (the same size as the brain image), the probability that it is a point on the brain midline is identified, pixels whose probability exceeds a preset threshold are marked as midline points, and a segmentation recognition image is separated out.
  • the segmentation recognition result includes the segmentation recognition image and, for each of its pixels, the probability of being a point on the brain midline; the segmentation recognition image is the predicted brain midline image corresponding to the brain image.
  • step S50 using bilinear interpolation to perform weighted fusion on all the fused feature map groups to generate the feature image to be segmented, including:
  • the bilinear interpolation method makes full use of the four pixels around the pixel in the feature vector image to jointly determine the value corresponding to the pixel in the output target feature vector image.
  • the fusion feature map group includes a plurality of the fusion feature maps, and the fusion feature map is the feature information embodied at one level after being fused by the feature pyramid model.
  • S502: all the amplified feature maps are weighted and fused through the weighted fusion model into one feature image to be segmented.
  • the weighted fusion multiplies the amplified feature maps of the five levels by the per-level weight parameters of the weighted fusion model (the parameters fixed after training) and fuses them into one feature image to be segmented, which has the same size as the brain image.
  • this application thus uses bilinear interpolation, through the weighted fusion model, to interpolate each fused feature map in a group into an amplified feature map of the same size as the brain image, and then weight-fuses all amplified feature maps into one feature image to be segmented.
  • interpolating and weighting the fused feature map groups of each level yields a feature image to be segmented that favors brain midline recognition, optimizes the weight of each level's scale, and improves the accuracy, reliability, and efficiency of recognition.
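The interpolation and weighted fusion of steps S501–S502 can be sketched as below. This is a minimal NumPy sketch, not the patent's implementation: it assumes 2-D single-channel feature maps and the align-corners convention for bilinear interpolation, and `weighted_fuse` stands in for the model's trained per-level weight parameters.

```python
import numpy as np

def bilinear_upsample(fmap, out_h, out_w):
    """Upsample a 2-D feature map with bilinear interpolation: each output
    pixel is a weighted mix of the four surrounding input pixels."""
    in_h, in_w = fmap.shape
    ys = np.linspace(0, in_h - 1, out_h)      # align-corners sampling grid
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    tl = fmap[np.ix_(y0, x0)]; tr = fmap[np.ix_(y0, x1)]   # four neighbors
    bl = fmap[np.ix_(y1, x0)]; br = fmap[np.ix_(y1, x1)]
    top = tl * (1 - wx) + tr * wx
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy

def weighted_fuse(amplified_maps, level_weights):
    """Weighted fusion: multiply each level's amplified map by its learned
    weight and sum into one feature image to be segmented."""
    return sum(w * m for w, m in zip(level_weights, amplified_maps))
```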
  • the brain image and the segmentation recognition image are synthesized by superimposing the segmentation recognition image on the brain image: the pixels of the brain image at the coordinate positions of the brain midline are replaced with the values at the same coordinate positions in the segmentation recognition image to obtain the brain midline image.
  • the user identification code, the classification recognition result, and the brain midline image are stored in association with each other as the final recognition result of the brain midline.
  • the final recognition result of the brain midline indicates whether the brain image has a brain midline and, if it does, marks the midline.
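The synthesis step above can be sketched as a simple overlay. An illustrative sketch under stated assumptions: the probability threshold of 0.5 and the array representations (`brain`, `seg`, `prob` as equal-sized 2-D arrays) are assumptions, not values fixed by the patent.

```python
import numpy as np

def synthesize_midline_image(brain, seg, prob, threshold=0.5):
    """Overlay the segmented midline on the brain image: pixels whose
    predicted midline probability exceeds `threshold` are replaced by the
    values at the same coordinates in the segmentation recognition image."""
    out = brain.copy()
    mask = prob > threshold           # pixels judged to lie on the midline
    out[mask] = seg[mask]
    return out
```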
  • This application obtains the image to be recognized by preprocessing the brain image associated with the user identification code; extracts midline features through the multi-scale deep network model to generate the feature maps to be processed and the classification recognition result; performs feature fusion on all the feature maps to be processed through the feature pyramid network model to generate the fused feature map groups; uses bilinear interpolation through the weighted fusion model to interpolate and weight-fuse all the fused feature map groups into the feature image to be segmented, and performs midline segmentation on it to obtain the brain midline segmentation recognition result; and synthesizes the brain image with the segmentation recognition image in that result to obtain the brain midline image, storing the user identification code, the classification recognition result, and the brain midline image in association as the final brain midline recognition result.
  • The multi-scale deep network model extracts midline features to identify whether the brain midline can be segmented, the feature pyramid network model performs feature fusion, and the weighted fusion model performs interpolation, weighted fusion, and midline segmentation; the brain midline is thus identified and marked in the brain image quickly, accurately, and automatically, improving recognition accuracy and efficiency and facilitating review and subsequent brain midline deviation recognition.
  • the method includes:
  • all the brain-free midline images and all the brain midline images are three-dimensionally reconstructed through the three-dimensional midline offset feature recognition model: the images are stacked in the vertical direction according to the scan sequence number corresponding to each image to form a three-dimensional image.
  • offset feature extraction is performed on this three-dimensional image through the three-dimensional midline offset feature recognition model; the offset features are features related to the deviation of the brain midline in three-dimensional space.
  • the extraction process first applies continuity processing to the brain midline in the three-dimensional image, smoothing all the midline values in three-dimensional space so that the identified midlines are correlated and better reflect the whole, and then cuts the processed three-dimensional image to isolate the overall brain midline and its surroundings.
  • S90 Recognizing the extracted offset features through the three-dimensional centerline offset feature recognition model to obtain a brain centerline offset result; the brain centerline offset result represents the brain centerline offset corresponding to the user identification code Shift type.
  • the three-dimensional centerline offset feature recognition model recognizes the extracted offset features, and the recognition process is to predict the brain centerline offset by fully connecting the extracted offset features.
  • the brain midline deviation result represents the brain midline deviation type corresponding to the user identification code, and the brain midline deviation type includes no deviation, slight deviation to the right, slight deviation to the left, Severe shift to the right and severe shift to the left.
  • This application obtains all the brain-free midline images and brain midline images associated with the same user identification code, inputs them into the three-dimensional midline offset feature recognition model, performs offset feature extraction on them through that model, and recognizes the extracted offset features to obtain the brain midline offset result.
  • From the brain-free midline images and brain midline images associated with one user identification code, the three-dimensional midline offset feature recognition model performs three-dimensional reconstruction, cutting, and offset feature extraction, providing a method that automatically identifies the brain midline deviation type corresponding to the user identification code quickly and accurately, facilitating subsequent medical action and improving recognition accuracy and reliability.
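The three-dimensional reconstruction step above can be sketched as stacking the per-slice results by scan sequence number. A minimal sketch, assuming slices are stored in a dict keyed by scan sequence number; the dict layout is an assumption for illustration, not the patent's data structure.

```python
import numpy as np

def reconstruct_volume(slices_by_scan_no):
    """Stack 2-D midline / brain-free-midline images into a 3-D volume in
    ascending scan-sequence order (the vertical stacking direction)."""
    order = sorted(slices_by_scan_no)
    return np.stack([slices_by_scan_no[k] for k in order], axis=0)
```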
  • a brain midline recognition device is provided, and the brain midline recognition device corresponds to the brain midline recognition method in the above-mentioned embodiment one-to-one.
  • the brain midline recognition device includes an acquisition module 11, an input module 12, an extraction module 13, a fusion module 14, a segmentation module 15 and a synthesis module 16.
  • the detailed description of each functional module is as follows:
  • the acquiring module 11 is configured to acquire a brain image associated with a user identification code, and perform image preprocessing on the brain image to obtain an image to be recognized;
  • the input module 12 is configured to input the image to be recognized into a trained brain midline detection model;
  • the brain midline detection model includes a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model;
  • the extraction module 13 is configured to extract the midline feature of the image to be recognized through the multi-scale deep network model, and generate at least one feature map to be processed and a classification and recognition result; the classification and recognition result characterizes whether the brain image is The midline of the brain can be segmented;
  • the fusion module 14 is used to input all the feature maps to be processed into the feature pyramid network model when detecting that the classification and recognition result is that the brain midline can be segmented. Perform feature fusion on the feature maps to be processed to generate at least one fused feature map group;
  • the segmentation module 15 is configured to input all the fused feature map groups into the weighted fusion model, use bilinear interpolation to perform interpolation and weighted fusion on all the fused feature map groups, and generate feature images to be segmented, and Performing midline segmentation on the feature image to be segmented to obtain a brain midline segmentation recognition result;
  • the synthesis module 16 is used to synthesize the brain image with the segmented recognition image in the brain midline segmentation recognition result to obtain a brain midline image, and combine the user identification code, the classification recognition result, and the brain
  • the midline image is associated and stored as the final recognition result of the midline of the brain.
  • Each module in the above-mentioned brain midline recognition device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 7.
  • the computer device includes a processor, a memory, a network interface, and a database connected through a system bus.
  • the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a readable storage medium and an internal memory.
  • the readable storage medium stores an operating system, computer readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer readable instructions in the readable storage medium.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer-readable instructions are executed by the processor to realize a brain midline recognition method.
  • the readable storage medium provided in this embodiment includes a non-volatile readable storage medium and a volatile readable storage medium.
  • one or more readable storage media storing computer readable instructions are provided.
  • the readable storage media provided in this embodiment include non-volatile readable storage media and volatile readable storage media; the readable storage media store computer readable instructions which, when executed by one or more processors, cause the one or more processors to implement the brain midline recognition method in the above-mentioned embodiment.
  • a computer-readable storage medium on which computer-readable instructions are stored, and when the computer-readable instructions are executed by a processor, the brain midline recognition method in the foregoing embodiment is implemented.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).


Abstract

This application relates to the field of artificial intelligence and provides a brain midline recognition method, apparatus, computer device, and storage medium. The method includes: obtaining an image to be recognized by preprocessing a brain image associated with a user identification code; extracting midline features through a multi-scale deep network model to generate feature maps to be processed and a classification recognition result; fusing all feature maps to be processed through a feature pyramid network model to generate fused feature map groups; using bilinear interpolation through a weighted fusion model to interpolate and weight-fuse all fused feature map groups into a feature image to be segmented, and performing midline segmentation on it to obtain a brain midline segmentation recognition result; and synthesizing a brain midline image and outputting the final recognition result. The method automatically identifies and marks the brain midline, is applicable to fields such as smart healthcare, and can further advance the construction of smart cities.

Description

Brain midline recognition method and apparatus, computer device, and storage medium
This application claims priority to Chinese patent application No. 202011138413.8, entitled "大脑中线识别方法、装置、计算机设备及存储介质" (Brain midline recognition method and apparatus, computer device, and storage medium), filed with the China National Intellectual Property Administration on October 22, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of image classification in artificial intelligence, and in particular to a brain midline recognition method, apparatus, computer device, and storage medium.
Background
The inventors have recognized that increased intracranial pressure or traumatic brain injury compressing the brainstem is currently the main cause of cerebral herniation. The midline structure in brain CT images is usually correlated with intracranial pressure, so identifying the brain midline provides an important reference for determining the degree of intracranial mass effect and pressure elevation, and is a brain indicator that currently warrants close attention.
Summary
This application provides a brain midline recognition method, apparatus, computer device, and storage medium that extract midline features at multiple scales, fuse features through a feature pyramid network model, and perform interpolation, weighted fusion, and midline segmentation through a weighted fusion model, thereby recognizing the brain midline and finally synthesizing a brain midline image. This application is suitable for fields such as smart healthcare, can further advance the construction of smart cities, and automatically identifies the brain midline quickly and accurately, improving recognition accuracy and efficiency.
A brain midline recognition method includes:
acquiring a brain image associated with a user identification code, and performing image preprocessing on the brain image to obtain an image to be recognized;
inputting the image to be recognized into a trained brain midline detection model, the brain midline detection model including a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model;
performing midline feature extraction on the image to be recognized through the multi-scale deep network model to generate at least one feature map to be processed and a classification recognition result, the classification recognition result characterizing whether a brain midline can be segmented from the brain image;
when the classification recognition result is detected to be that a brain midline can be segmented, inputting all the feature maps to be processed into the feature pyramid network model and performing feature fusion on them through the feature pyramid network model to generate at least one fused feature map group;
inputting all the fused feature map groups into the weighted fusion model, interpolating and weight-fusing all the fused feature map groups using bilinear interpolation to generate a feature image to be segmented, and performing midline segmentation on the feature image to be segmented to obtain a brain midline segmentation recognition result; and
synthesizing the brain image with the segmentation recognition image in the brain midline segmentation recognition result to obtain a brain midline image, and storing the user identification code, the classification recognition result, and the brain midline image in association as the final brain midline recognition result.
A brain midline recognition apparatus includes:
an acquisition module configured to acquire a brain image associated with a user identification code, and perform image preprocessing on the brain image to obtain an image to be recognized;
an input module configured to input the image to be recognized into a trained brain midline detection model, the brain midline detection model including a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model;
an extraction module configured to perform midline feature extraction on the image to be recognized through the multi-scale deep network model and generate at least one feature map to be processed and a classification recognition result, the classification recognition result characterizing whether a brain midline can be segmented from the brain image;
a fusion module configured to, when the classification recognition result is detected to be that a brain midline can be segmented, input all the feature maps to be processed into the feature pyramid network model and perform feature fusion on them to generate at least one fused feature map group;
a segmentation module configured to input all the fused feature map groups into the weighted fusion model, interpolate and weight-fuse all the fused feature map groups using bilinear interpolation to generate a feature image to be segmented, and perform midline segmentation on the feature image to be segmented to obtain a brain midline segmentation recognition result; and
a synthesis module configured to synthesize the brain image with the segmentation recognition image in the brain midline segmentation recognition result to obtain a brain midline image, and store the user identification code, the classification recognition result, and the brain midline image in association as the final brain midline recognition result.
A computer device includes a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer readable instructions:
acquiring a brain image associated with a user identification code, and performing image preprocessing on the brain image to obtain an image to be recognized;
inputting the image to be recognized into a trained brain midline detection model, the brain midline detection model including a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model;
performing midline feature extraction on the image to be recognized through the multi-scale deep network model to generate at least one feature map to be processed and a classification recognition result, the classification recognition result characterizing whether a brain midline can be segmented from the brain image;
when the classification recognition result is detected to be that a brain midline can be segmented, inputting all the feature maps to be processed into the feature pyramid network model and performing feature fusion on them to generate at least one fused feature map group;
inputting all the fused feature map groups into the weighted fusion model, interpolating and weight-fusing all the fused feature map groups using bilinear interpolation to generate a feature image to be segmented, and performing midline segmentation on the feature image to be segmented to obtain a brain midline segmentation recognition result; and
synthesizing the brain image with the segmentation recognition image in the brain midline segmentation recognition result to obtain a brain midline image, and storing the user identification code, the classification recognition result, and the brain midline image in association as the final brain midline recognition result.
One or more readable storage media storing computer readable instructions are provided which, when executed by one or more processors, cause the one or more processors to perform the following steps:
acquiring a brain image associated with a user identification code, and performing image preprocessing on the brain image to obtain an image to be recognized;
inputting the image to be recognized into a trained brain midline detection model, the brain midline detection model including a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model;
performing midline feature extraction on the image to be recognized through the multi-scale deep network model to generate at least one feature map to be processed and a classification recognition result, the classification recognition result characterizing whether a brain midline can be segmented from the brain image;
when the classification recognition result is detected to be that a brain midline can be segmented, inputting all the feature maps to be processed into the feature pyramid network model and performing feature fusion on them to generate at least one fused feature map group;
inputting all the fused feature map groups into the weighted fusion model, interpolating and weight-fusing all the fused feature map groups using bilinear interpolation to generate a feature image to be segmented, and performing midline segmentation on the feature image to be segmented to obtain a brain midline segmentation recognition result; and
synthesizing the brain image with the segmentation recognition image in the brain midline segmentation recognition result to obtain a brain midline image, and storing the user identification code, the classification recognition result, and the brain midline image in association as the final brain midline recognition result.
With the brain midline recognition method, apparatus, computer device, and storage medium provided by this application, a brain image associated with a user identification code is acquired and preprocessed to obtain an image to be recognized; the image to be recognized is input into a trained brain midline detection model comprising a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model; midline feature extraction through the multi-scale deep network model generates at least one feature map to be processed and a classification recognition result; when the classification recognition result is that a brain midline can be segmented, all the feature maps to be processed are fused through the feature pyramid network model into at least one fused feature map group; all the fused feature map groups are input into the weighted fusion model and interpolated and weight-fused using bilinear interpolation to generate a feature image to be segmented, on which midline segmentation yields the brain midline segmentation recognition result; and the brain image is synthesized with the segmentation recognition image in that result to obtain the brain midline image, with the user identification code, the classification recognition result, and the brain midline image stored in association as the final brain midline recognition result. In this way, the brain image associated with the user identification code is preprocessed, the multi-scale deep network model extracts midline features to identify whether a brain midline can be segmented, the feature pyramid network model performs feature fusion, and the weighted fusion model performs interpolation, weighted fusion, and midline segmentation, so the brain midline is identified and marked in the brain image quickly, accurately, and automatically, improving recognition accuracy and efficiency and facilitating review and subsequent brain midline deviation recognition.
Details of one or more embodiments of this application are set forth in the drawings and description below; other features and advantages of this application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described here show only some embodiments of this application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application environment of the brain midline recognition method in an embodiment of this application;
FIG. 2 is a flowchart of the brain midline recognition method in an embodiment of this application;
FIG. 3 is a flowchart of the brain midline recognition method in another embodiment of this application;
FIG. 4 is a flowchart of step S10 of the brain midline recognition method in an embodiment of this application;
FIG. 5 is a flowchart of step S50 of the brain midline recognition method in an embodiment of this application;
FIG. 6 is a functional block diagram of the brain midline recognition apparatus in an embodiment of this application;
FIG. 7 is a schematic diagram of a computer device in an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of this application; all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.
The brain midline recognition method provided by this application can be applied in the environment shown in FIG. 1, in which a client (computer device) communicates with a server over a network. Clients (computer devices) include, but are not limited to, personal computers, notebook computers, smartphones, tablet computers, cameras, and portable wearable devices. The server may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 2, a brain midline recognition method is provided, whose technical solution mainly includes the following steps S10-S60:
S10: Acquire a brain image associated with a user identification code, and perform image preprocessing on the brain image to obtain an image to be recognized.
Understandably, the brain image is a CT (Computed Tomography) image of the user's head scanned by a CT device, and the user identification code is a unique code assigned to the scanned user; its association with the brain image indicates that the brain image is a head CT image of that user. The image preprocessing applies, in sequence, resampling, window-width/window-level transformation, normalization, and valid-image extraction to the brain image. Resampling re-samples CT images of different pixel sizes or granularities at the same isotropic resolution and outputs pixel images of the same size, unifying all CT images to one pixel dimension, which benefits subsequent brain midline recognition. The window-width/window-level transformation converts the images according to the same window-width/window-level parameters and normalizes the converted images. Valid-image extraction removes images without any image content (for example, the first few blank scan images), so that only images in the valid range are processed; removing invalid images reduces their processing and improves subsequent recognition efficiency. The image to be recognized is the image after this preprocessing and speeds up recognition by the subsequent brain midline detection model.
In one embodiment, as shown in FIG. 4, performing image preprocessing on the brain image to obtain the image to be recognized in step S10 includes:
S101: Convert the brain image according to preset window-width/window-level parameters to obtain an intermediate image.
Understandably, the window width is the range of CT values displayed on a CT image: tissues and lesions within this range are displayed in different simulated gray levels, while those above the range appear as white regardless of how far above they are, with no further gray-level difference, and those below the range appear as black, likewise without gray-level difference. The window level is the center of a window-width range; for the same window width, different window levels cover different CT value ranges. For example, with a window width w = 80, a window level L = 0 gives a CT value range of -40 to +40, while a window level of +20 gives -20 to +60. The window-width/window-level parameters are set to favor recognition of the brain midline in the brain image and include both the window-width parameter and the window-level parameter.
Converting the brain image includes the resampling and the window-width/window-level transformation: first, the brain image is resampled; second, the resampled brain image is transformed according to the window-width/window-level parameters; finally, the transformed image is taken as the intermediate image, i.e., an image after a window transformation favorable for recognizing the brain midline.
S102: Normalize the intermediate image to obtain the image to be recognized.
Understandably, normalization limits the processed data to a certain range, which facilitates recognition by the subsequent brain midline detection model; it may normalize to a probability distribution between 0 and 1. Valid-image extraction, i.e., removing invalid images, is applied to the normalized intermediate image to obtain the image to be recognized.
This application converts the brain image according to preset window-width/window-level parameters to obtain the intermediate image, and normalizes the intermediate image to obtain the image to be recognized, thereby extracting the useful information in the brain image and accelerating recognition by the subsequent brain midline detection model.
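As an illustrative sketch of steps S101-S102 (not the patent's implementation), the window transform and normalization might look like the following; the brain-window values W = 80, L = 35 are assumed defaults for illustration, since the patent leaves the exact parameters configurable:

```python
import numpy as np

def window_and_normalize(ct_hu, center=35.0, width=80.0):
    """Window-width/window-level transform followed by normalization to
    [0, 1]: CT values are clipped to [center - width/2, center + width/2]
    and the clipped range is linearly rescaled to [0, 1]."""
    low, high = center - width / 2.0, center + width / 2.0
    clipped = np.clip(ct_hu, low, high)
    return (clipped - low) / (high - low)
```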
S20: Input the image to be recognized into the trained brain midline detection model, which includes a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model.
Understandably, the brain midline detection model is a trained multi-model fusion neural network that fuses the multi-scale deep network model, the feature pyramid network model, and the weighted fusion model. It recognizes whether the input image to be recognized contains a brain midline and marks the midline. The multi-scale deep network model extracts midline features from the image to be recognized at multiple scales and identifies from the extracted features whether the image has a brain midline; its network structure can be chosen as required, for example ResNet50, ResNet101, GoogleNet, or VGG19, with ResNet50 preferred. The feature pyramid network model is a deep neural network based on the BiFPN model that fuses higher-level features (with stronger semantic information) into more advanced features and predicts from the fused features; its network structure is that of BiFPN. The weighted fusion model uses bilinear interpolation to generate multiple groups of images of the same size as the brain image, weight-fuses the generated groups, and predicts the brain midline. In one embodiment, before step S20, i.e., before inputting the image to be recognized into the trained brain midline detection model, the method includes:
S201: Acquire a brain sample set; the brain sample set includes multiple brain sample images, each associated with a brain midline identification label that includes a brain midline binary classification label and a brain midline annotation image.
Understandably, the brain sample set is the set of all brain sample images, which are historically collected head CT images that have undergone the image preprocessing. Each brain sample image is associated with a brain midline identification label indicating whether the corresponding sample contains a brain midline. The binary classification label has two classes: a brain midline can be segmented (which may be marked 1 during training) or a brain midline cannot be segmented (which may be marked 0). The brain midline annotation image marks the coordinate position of the brain midline for the corresponding brain sample image, i.e., an image generated by marking the midline in the sample and transferring the midline's coordinate positions onto a blank image.
S202: Input the brain sample image into an initial combined recognition model with initial parameters; the initial combined recognition model includes an initial deep network model, an initial pyramid network model, and an initial weighted fusion model.
Understandably, the combined recognition model is a multi-model fusion neural network; the initial parameters comprise all parameters of the initial deep network model, the initial pyramid network model, and the initial weighted fusion model.
S203: Perform midline feature extraction on the brain sample image through the initial deep network model to generate at least one sample feature map to be processed and a sample classification recognition result.
Understandably, the midline features are features related to the brain midline in multiple dimensions, including the symmetry and continuity of the midline. A sample feature map to be processed is a feature vector map with midline features obtained after extraction, i.e., a feature vector map obtained by convolving the brain sample image, containing features at multiple levels; preferably, the sample feature maps to be processed comprise the feature vector maps output by five levels. The sample classification recognition result is either that a brain midline can be segmented or that it cannot; when a midline can be segmented, the result indicates both that the brain sample image can have its midline segmented and the probability that the sample contains a brain midline.
After the initial deep network model is trained, the multi-scale deep network model is obtained.
S204: Determine a first loss value from the sample classification recognition result and the brain midline binary classification label.
Understandably, the sample classification recognition result and the binary classification label are input into the first loss function of the initial deep network model, which computes the first loss value L1. The first loss function can be chosen as required, for example a cross-entropy loss. The first loss value measures the gap between the sample classification recognition result for the brain sample image and the binary classification label, and through it the model can continuously move toward accurate recognition.
S205: When the sample classification recognition result is detected to be that a brain midline can be segmented, input all the sample feature maps to be processed into the initial pyramid network model and fuse them through the initial pyramid network model to generate at least one fused sample feature map group.
Understandably, when the sample classification recognition result of the brain sample image is that a midline can be segmented, the brain sample image can be partitioned by a midline, and all the sample feature maps to be processed are input into the initial pyramid network model, a deep neural network based on the BiFPN model. BiFPN better balances feature information at different scales: on top of the FPN top-down pathway that fuses features output at multiple levels, it adds a bottom-up pathway and an extra edge between features of the same level, fusing more features without increasing the loss, and stacks these blocks repeatedly to obtain a more advanced feature fusion.
The initial pyramid network model fuses the sample feature maps to be processed of the five levels, generating fused sample feature map groups in one-to-one correspondence with the five levels, i.e., five groups that respectively represent the fused feature information of the five levels at different scales. After training, the initial pyramid network model yields the feature pyramid network model.
S206: Determine a second loss value from all the fused sample feature map groups and the brain midline annotation image.
Understandably, the coordinate position of the brain midline can be predicted from all the fused sample feature map groups; the predicted midline coordinates and the annotated midline coordinates in the brain midline annotation image are input into a second loss function, which computes their difference to obtain the second loss value L2.
S207: Using bilinear interpolation, perform weighted fusion and midline segmentation on all the fused sample feature map groups through the initial weighted fusion model to obtain a sample segmentation recognition result.
Understandably, bilinear upsampling makes full use of the four pixels around a pixel in a feature vector map to jointly determine the corresponding pixel value in the output target feature vector map. Using it, each level's fused sample feature map group is upsampled to the size of the brain sample image and merged into one sample feature vector map to be fused. The weighted fusion multiplies the five levels' sample feature vector maps to be fused by the per-level weight parameters of the initial weighted fusion model and fuses them into one sample feature vector map to be segmented. The midline segmentation determines the midline's coordinate position in this map from the value at each pixel: the probability that each pixel (in a map the size of the brain sample image) is a point on the brain midline is identified, pixels whose probability exceeds a preset threshold are marked as midline points, and a sample segmentation image is separated out. The sample segmentation recognition result includes the sample segmentation image and, for each of its pixels, the probability of being a point on the brain midline.
S208: Determine a third loss value from the sample segmentation recognition result and the brain midline annotation image.
Understandably, the third loss value is obtained from the sample segmentation image in the sample segmentation recognition result and the brain midline annotation image: the annotation image undergoes a distance transform to generate a brain midline distance image. The distance transform can be chosen as required, for example the Euclidean, Manhattan (cityblock), or Chebyshev distance transform, with the Euclidean distance transform preferred. The brain midline distance image is an image with a distance field formed by the Euclidean distance from each point to the coordinate positions of the annotated midline. The sample segmentation image and the distance image are input into a distance loss function, which computes the third loss value. The distance transform thus converts the annotation image into an image with a distance field, and the distance loss introduces a loss based on the distance-field dimension that better measures the gap between the sample segmentation image and the annotation image.
In one embodiment, determining the third loss value from the sample segmentation recognition result and the brain midline annotation image in step S208 includes:
S2081: Apply a distance transform to the brain midline annotation image to obtain a brain midline distance image.
Understandably, the distance transform converts the annotation image into the brain midline distance image, an image with a distance field formed by the Euclidean distance from each point to the coordinate positions of the annotated midline: the Euclidean distance from a pixel to the annotated midline gives that pixel's distance field, and the distance fields of all pixels constitute the brain midline distance image.
S2082: Input the sample segmentation image from the sample segmentation recognition result and the brain midline distance image into the distance loss function, which computes the third loss value; the distance loss function is:
L3 = mean(A ⊙ B)
where:
L3 is the third loss value;
A is the sample segmentation image;
B is the brain midline distance image;
A ⊙ B is the element-wise product of the sample segmentation image and the brain midline distance image.
Understandably, the probability value at each pixel of the sample segmentation image is multiplied by the distance field at the same pixel, and the products are averaged over all pixels to obtain the third loss value.
This application obtains the brain midline distance image by applying a distance transform to the brain midline annotation image, and computes the third loss value by inputting the sample segmentation image from the sample segmentation recognition result and the distance image into the distance loss function. Introducing a loss based on the distance-field dimension better measures the gap between the sample segmentation image and the annotation image, lets the model converge more efficiently toward accurate recognition results, and improves recognition accuracy.
S209: Weight the first loss value, the second loss value, and the third loss value to obtain a total loss value.
Understandably, preset first, second, and third loss weights are obtained; their sum is 1, and they can be adjusted continuously during training until fixed after convergence. The first, second, and third loss values and the first, second, and third loss weights are input into a weighting function to obtain the total loss value, where the weighting function is:
L = α1·L1 + α2·L2 + α3·L3
where:
L is the total loss value;
L1 is the first loss value;
L2 is the second loss value;
L3 is the third loss value;
α1 is the first loss weight;
α2 is the second loss weight;
α3 is the third loss weight.
In this way, optimizing brain midline recognition by jointly considering the loss values of three dimensions enables more efficient and accurate recognition.
S210: When the total loss value has not reached a preset convergence condition, iteratively update the initial parameters of the initial combined recognition model until the total loss value reaches the convergence condition, and record the converged initial combined recognition model as the trained brain midline detection model.
Understandably, the convergence condition may be that the total loss value is small and no longer decreases after 2000 calculations: when it no longer falls, training stops, and the converged initial combined recognition model is recorded as the trained brain midline detection model. The convergence condition may also be that the total loss value is less than a set threshold, at which point training stops and the converged model is likewise recorded as the trained brain midline detection model. When the total loss value has not reached the convergence condition, the initial parameters of the initial combined recognition model are continuously updated and the step of performing midline feature extraction on the brain sample image through the initial deep network model to generate at least one sample feature map to be processed and a sample classification recognition result is triggered again, so the model keeps moving toward accurate results and recognition accuracy keeps improving.
S30: Perform midline feature extraction on the image to be recognized through the multi-scale deep network model to generate at least one feature map to be processed and a classification recognition result; the classification recognition result characterizes whether a brain midline can be segmented from the brain image.
Understandably, the midline features are features related to the brain midline in multiple dimensions, including its symmetry and continuity. The feature maps to be processed are obtained after extracting the midline features, carry the midline features, and contain features at multiple levels, comprising the feature vector maps output by five levels. The classification recognition result is either that a brain midline can be segmented (the output value approaches 1) or that it cannot (the output value approaches 0); when a midline can be segmented, the result indicates both that the image to be recognized can have its midline segmented and the probability that the image contains a brain midline.
In one embodiment, as shown in FIG. 3, after step S30, i.e., after performing midline feature extraction on the image to be recognized through the multi-scale deep network model to generate at least one feature map to be processed and a classification recognition result, the method includes:
S301: When the classification recognition result is detected to be that a brain midline cannot be segmented, mark the brain image as a brain-free midline image associated with the user identification code.
Understandably, when the classification recognition result is that no midline can be segmented, the brain image lacks brain midline features and no midline can be segmented; the brain image is marked as a brain-free midline image and associated with the user identification code. Marking brain-free midline images in this way supports the subsequent construction of a three-dimensional image supplied to the three-dimensional midline offset feature recognition model for determining the brain midline offset type.
S40: When the classification recognition result is detected to be that a brain midline can be segmented, input all the feature maps to be processed into the feature pyramid network model and perform feature fusion on them through the feature pyramid network model to generate at least one fused feature map group.
Understandably, when the classification recognition result of the image to be recognized is that a midline can be segmented, the image can be partitioned by a midline, and all the feature maps to be processed are input into the feature pyramid network model, a trained deep neural network based on the BiFPN model. BiFPN better balances feature information at different scales: on top of the FPN top-down pathway that fuses features output at multiple levels, it adds a bottom-up pathway and an extra edge between features of the same level, fusing more features without increasing the loss, and stacks these blocks repeatedly to obtain a more advanced feature fusion.
The feature pyramid network model fuses the feature maps to be processed of the five levels, generating fused feature map groups in one-to-one correspondence with the five levels, i.e., five groups that respectively represent the fused feature information of the five levels at different scales.
S50: input all the fused feature map groups into the weighted fusion model, apply bilinear interpolation to interpolate and weight-fuse all the fused feature map groups to generate a to-be-segmented feature image, and perform midline segmentation on the to-be-segmented feature image to obtain a brain midline segmentation recognition result.
Understandably, bilinear interpolation (bilinear upsampling) is an interpolation method in which the four pixels surrounding a location in the input feature-vector map jointly determine the corresponding value in the output target feature-vector map. Using bilinear interpolation, the fused feature map group of each level is upsampled to an image of the same size as the brain image and merged into one augmented feature map corresponding to that group; interpolation here means determining each pixel value of the augmented feature map by bilinear interpolation. Weighted fusion means multiplying the augmented feature maps of the five levels by the per-level weight parameters of the weighted fusion model and fusing them into one to-be-segmented feature image. Midline segmentation means determining the coordinate positions of the brain midline in the to-be-segmented feature image from its per-pixel values, i.e., recognizing, for each pixel of the to-be-segmented feature image (equal in size to the brain image), the probability that it is a point on the brain midline, marking the pixels whose probability exceeds the preset threshold as midline points, and segmenting out a segmentation recognition image. The segmentation recognition result comprises the segmentation recognition image and, for each of its pixels, the probability of being a point on the brain midline; the segmentation recognition image is the predicted brain-midline image corresponding to the brain image.
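The midline-segmentation step described above, thresholding a per-pixel midline probability map, can be sketched as follows; the probability map and the 0.5 threshold are illustrative assumptions.

```python
import numpy as np

# Sketch of midline segmentation in S50: each pixel of the to-be-segmented
# feature image carries a probability of lying on the midline; pixels above
# a preset threshold are marked as midline points.
prob_map = np.array([[0.1, 0.9, 0.2],
                     [0.2, 0.8, 0.1],
                     [0.1, 0.7, 0.3]])   # toy per-pixel midline probabilities
mask = prob_map > 0.5                    # boolean midline mask
midline_coords = np.argwhere(mask)       # (row, col) positions of midline points
```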
In one embodiment, as shown in FIG. 5, step S50, i.e., applying bilinear interpolation to interpolate and weight-fuse all the fused feature map groups to generate a to-be-segmented feature image, includes:
S501: using bilinear interpolation, interpolate, through the weighted fusion model, each fused feature map in a fused feature map group into an augmented feature map corresponding to that fused feature map group, the augmented feature map having the same size as the brain image.
Understandably, bilinear interpolation (bilinear upsampling) is an interpolation method in which the four pixels surrounding a location in the input feature-vector map jointly determine the corresponding value in the output target feature-vector map. Using bilinear interpolation, the fused feature map group of each level is upsampled to an image of the same size as the brain image and merged into one augmented feature map corresponding to that group.
Here, each fused feature map group contains multiple fused feature maps; a fused feature map is the feature information expressed at one level after fusion by the feature pyramid model.
S502: weight-fuse all the augmented feature maps through the weighted fusion model into one to-be-segmented feature image.
Understandably, weighted fusion means multiplying the augmented feature maps of the five levels by the per-level weight parameters of the weighted fusion model (parameters fixed after training) and fusing them into one to-be-segmented feature image of the same size as the brain image.
The present application thus, using bilinear interpolation through the weighted fusion model, interpolates each fused feature map in a fused feature map group into an augmented feature map of the same size as the brain image, and weight-fuses all the augmented feature maps into one to-be-segmented feature image. By applying bilinear interpolation and weighted fusion, the fused feature map groups of all levels are interpolated and fused into a to-be-segmented feature image favorable to brain midline recognition; the per-scale weights of the levels are optimized, improving the accuracy, reliability, and efficiency of recognition.
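The two-step flow above (bilinear upsampling to the brain-image size, then per-level weighted fusion) can be sketched as follows; the `bilinear_resize` helper, the toy two-level pyramid, and the level weights are illustrative assumptions, not the trained model's values.

```python
import numpy as np

# Sketch of S501-S502: upsample each level's fused feature map bilinearly,
# then combine the levels with per-level weights into one feature image.
def bilinear_resize(img, out_h, out_w):
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    out = np.empty((out_h, out_w))
    for i, y in enumerate(ys):
        y0 = int(np.floor(y)); y1 = min(y0 + 1, in_h - 1); dy = y - y0
        for j, x in enumerate(xs):
            x0 = int(np.floor(x)); x1 = min(x0 + 1, in_w - 1); dx = x - x0
            # the four surrounding pixels jointly determine the output value
            out[i, j] = (img[y0, x0] * (1 - dy) * (1 - dx) +
                         img[y0, x1] * (1 - dy) * dx +
                         img[y1, x0] * dy * (1 - dx) +
                         img[y1, x1] * dy * dx)
    return out

levels = [np.ones((2, 2)) * k for k in (1.0, 2.0)]  # two toy pyramid levels
weights = [0.25, 0.75]                              # assumed trained weights
upsampled = [bilinear_resize(f, 4, 4) for f in levels]
fused = sum(w * f for w, f in zip(weights, upsampled))
```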
S60: composite the brain image with the segmentation recognition image in the brain midline segmentation recognition result to obtain a brain midline image, and store the user identification code, the classification recognition result, and the brain midline image in association as a final brain midline recognition result.
Understandably, the brain image and the segmentation recognition image are composited; compositing means superimposing the segmentation recognition image on the brain image, i.e., replacing the pixels of the brain image at the coordinate positions of the brain midline in the segmentation recognition image with the values at the same coordinate positions in the segmentation recognition image, yielding the brain midline image. The user identification code, the classification recognition result, and the brain midline image are then stored in association with one another as the final brain midline recognition result, which indicates whether the brain image contains a brain midline and, where it does, marks the midline.
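The compositing rule above can be sketched as follows; the toy brain image and segmentation image are illustrative stand-ins.

```python
import numpy as np

# Sketch of the compositing step in S60: pixels at the midline coordinates
# of the segmentation image replace the corresponding pixels of the brain
# image, yielding the brain-midline image.
brain = np.zeros((3, 3))
seg = np.array([[0, 255, 0],
                [0, 255, 0],
                [0, 255, 0]])        # predicted midline down the centre column
midline_img = brain.copy()
midline_img[seg > 0] = seg[seg > 0]  # overlay the segmented midline onto the brain image
```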
The present application obtains a to-be-recognized image by preprocessing the brain image associated with a user identification code; extracts midline features through the multi-scale deep network model to generate to-be-processed feature maps and a classification recognition result; fuses all the to-be-processed feature maps through the feature pyramid network model to generate fused feature map groups; applies bilinear interpolation through the weighted fusion model to interpolate and weight-fuse all the fused feature map groups into a to-be-segmented feature image, and performs midline segmentation on it to obtain a brain midline segmentation recognition result; and composites the brain image with the segmentation recognition image in the brain midline segmentation recognition result to obtain a brain midline image, storing the user identification code, the classification recognition result, and the brain midline image in association as the final brain midline recognition result. Thus, by preprocessing the brain image associated with a user identification code, recognizing through the multi-scale deep network model whether a midline can be segmented, fusing features through the feature pyramid network model, then interpolating, weight-fusing, and midline-segmenting through the weighted fusion model, and finally compositing the brain midline image and storing it in association with the user identification code and classification recognition result, the brain midline can be recognized automatically, quickly, and accurately and marked in the brain image, which improves recognition accuracy and efficiency and facilitates review and subsequent brain midline shift recognition.
In one embodiment, after step S60, i.e., after storing the user identification code, the classification recognition result, and the brain midline image in association as the final brain midline recognition result, the method includes:
S70: obtain all the no-midline images and brain midline images associated with the same user identification code.
S80: input all the no-midline images and all the brain midline images into a three-dimensional midline-shift feature recognition model, and extract shift features from all the no-midline images and brain midline images through the three-dimensional midline-shift feature recognition model.
Understandably, the three-dimensional midline-shift feature recognition model reconstructs all the no-midline images and all the brain midline images in three dimensions, i.e., stacks them vertically in the order of each image's scan sequence number to form a volumetric three-dimensional image, and then extracts the shift features from this three-dimensional image. The shift features are features related to the displacement of the brain midline in three-dimensional space. The shift-feature extraction process is: applying continuity processing to the brain midline in the three-dimensional image, i.e., smoothing the midline values in three-dimensional space so that the recognized midlines of the individual slices are linked to one another and the overall midline is better expressed; cutting the continuity-processed three-dimensional image, around the overall midline and its surroundings, into cubic block images of a preset size; computing a voxel value for each of the cut cubic block images; and extracting the shift features from the volumetric feature map formed by all the voxel values.
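The stacking-in-scan-order reconstruction can be sketched as follows; the slice values and the moving-average smoothing are illustrative stand-ins for the continuity processing described above.

```python
import numpy as np

# Sketch of the 3-D reconstruction in S80: per-slice images are stacked
# vertically in scan-sequence order to form a volume, and a simple moving
# average along the scan axis stands in for the continuity smoothing.
slices = {2: np.full((2, 2), 3.0),   # scan sequence number -> slice image
          1: np.full((2, 2), 1.0),
          3: np.full((2, 2), 5.0)}
volume = np.stack([slices[k] for k in sorted(slices)], axis=0)  # (depth, H, W)
smoothed = (volume[:-1] + volume[1:]) / 2.0  # toy smoothing along the scan axis
```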
S90: recognize the extracted shift features through the three-dimensional midline-shift feature recognition model to obtain a brain midline shift result; the brain midline shift result characterizes the brain midline shift type corresponding to the user identification code.
Understandably, the three-dimensional midline-shift feature recognition model recognizes the extracted shift features; the recognition process fully connects the extracted shift features and then makes a prediction, predicting the brain midline shift result, which characterizes the brain midline shift type corresponding to the user identification code. The brain midline shift types include no shift, slight rightward shift, slight leftward shift, severe rightward shift, and severe leftward shift.
The present application obtains all the no-midline images and brain midline images associated with the same user identification code; inputs all of them into the three-dimensional midline-shift feature recognition model for shift-feature extraction; and recognizes the extracted shift features through that model to obtain the brain midline shift result. In this way, by obtaining the no-midline images and brain midline images associated with the same user identification code and performing three-dimensional reconstruction, cutting, and shift-feature extraction through the three-dimensional midline-shift feature recognition model, a method is provided for automatically recognizing the brain midline shift type corresponding to a user identification code quickly and accurately, which facilitates subsequent medical action and improves recognition accuracy and reliability.
In one embodiment, a brain midline recognition apparatus is provided, corresponding one-to-one to the brain midline recognition method of the above embodiments. As shown in FIG. 6, the brain midline recognition apparatus includes an obtaining module 11, an input module 12, an extraction module 13, a fusion module 14, a segmentation module 15, and a compositing module 16. The functional modules are described in detail as follows:
the obtaining module 11 is configured to obtain a brain image associated with a user identification code and preprocess the brain image to obtain a to-be-recognized image;
the input module 12 is configured to input the to-be-recognized image into a trained brain midline detection model, the brain midline detection model comprising a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model;
the extraction module 13 is configured to extract midline features from the to-be-recognized image through the multi-scale deep network model to generate at least one to-be-processed feature map and a classification recognition result, the classification recognition result characterizing whether a brain midline can be segmented from the brain image;
the fusion module 14 is configured to, when the classification recognition result is detected to be that a brain midline can be segmented, input all the to-be-processed feature maps into the feature pyramid network model and fuse all the to-be-processed feature maps through the feature pyramid network model to generate at least one fused feature map group;
the segmentation module 15 is configured to input all the fused feature map groups into the weighted fusion model, apply bilinear interpolation to interpolate and weight-fuse all the fused feature map groups to generate a to-be-segmented feature image, and perform midline segmentation on the to-be-segmented feature image to obtain a brain midline segmentation recognition result;
the compositing module 16 is configured to composite the brain image with the segmentation recognition image in the brain midline segmentation recognition result to obtain a brain midline image, and store the user identification code, the classification recognition result, and the brain midline image in association as the final brain midline recognition result.
For specific limitations of the brain midline recognition apparatus, reference may be made to the limitations of the brain midline recognition method above, which are not repeated here. The modules of the brain midline recognition apparatus may be implemented wholly or partly in software, hardware, or a combination thereof; the modules may be embedded in or independent of a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in FIG. 7. The computer device includes a processor, a memory, a network interface, and a database connected via a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a readable storage medium and an internal memory; the readable storage medium stores an operating system, computer-readable instructions, and a database, and the internal memory provides an environment for the operation of the operating system and computer-readable instructions in the readable storage medium. The network interface of the computer device communicates with external terminals over a network connection. The computer-readable instructions, when executed by the processor, implement a brain midline recognition method. The readable storage media provided in this embodiment include non-volatile readable storage media and volatile readable storage media.
In one embodiment, one or more readable storage media storing computer-readable instructions are provided; the readable storage media provided in this embodiment include non-volatile readable storage media and volatile readable storage media. The readable storage media store computer-readable instructions which, when executed by one or more processors, cause the one or more processors to implement the brain midline recognition method of the above embodiments.
In one embodiment, a computer-readable storage medium storing computer-readable instructions is provided; the computer-readable instructions, when executed by a processor, implement the brain midline recognition method of the above embodiments.
Those of ordinary skill in the art will understand that all or part of the processes of the methods of the above embodiments can be accomplished by computer-readable instructions instructing the relevant hardware; the computer-readable instructions may be stored in a non-volatile computer-readable storage medium or a volatile readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Those skilled in the art will clearly appreciate that, for convenience and brevity of description, only the above division into functional units and modules is given as an example; in practical application, the above functions may be allocated to different functional units and modules as needed, i.e., the internal structure of the apparatus may be divided into different functional units or modules to accomplish all or part of the functions described above.
The above embodiments are intended only to illustrate, not to limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features replaced with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application.

Claims (20)

  1. A brain midline recognition method, comprising:
    obtaining a brain image associated with a user identification code, and preprocessing the brain image to obtain a to-be-recognized image;
    inputting the to-be-recognized image into a trained brain midline detection model, the brain midline detection model comprising a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model;
    extracting midline features from the to-be-recognized image through the multi-scale deep network model to generate at least one to-be-processed feature map and a classification recognition result, the classification recognition result characterizing whether a brain midline can be segmented from the brain image;
    when the classification recognition result is detected to be that a brain midline can be segmented, inputting all the to-be-processed feature maps into the feature pyramid network model, and fusing all the to-be-processed feature maps through the feature pyramid network model to generate at least one fused feature map group;
    inputting all the fused feature map groups into the weighted fusion model, applying bilinear interpolation to interpolate and weight-fuse all the fused feature map groups to generate a to-be-segmented feature image, and performing midline segmentation on the to-be-segmented feature image to obtain a brain midline segmentation recognition result; and
    compositing the brain image with a segmentation recognition image in the brain midline segmentation recognition result to obtain a brain midline image, and storing the user identification code, the classification recognition result, and the brain midline image in association as a final brain midline recognition result.
  2. The brain midline recognition method of claim 1, wherein after extracting midline features from the to-be-recognized image through the multi-scale deep network model to generate at least one to-be-processed feature map and a classification recognition result, the method comprises:
    when the classification recognition result is detected to be that no brain midline can be segmented, marking the brain image as a no-midline image associated with the user identification code.
  3. The brain midline recognition method of claim 2, wherein after storing the user identification code, the classification recognition result, and the brain midline image in association as the final brain midline recognition result, the method comprises:
    obtaining all the no-midline images and brain midline images associated with the same user identification code;
    inputting all the no-midline images and all the brain midline images into a three-dimensional midline-shift feature recognition model, and extracting shift features from all the no-midline images and brain midline images through the three-dimensional midline-shift feature recognition model; and
    recognizing the extracted shift features through the three-dimensional midline-shift feature recognition model to obtain a brain midline shift result, the brain midline shift result characterizing a brain midline shift type corresponding to the user identification code.
  4. The brain midline recognition method of claim 1, wherein preprocessing the brain image to obtain the to-be-recognized image comprises:
    converting the brain image according to preset window-width and window-level parameters to obtain an intermediate image; and
    normalizing the intermediate image to obtain the to-be-recognized image.
  5. The brain midline recognition method of claim 1, wherein before inputting the to-be-recognized image into the trained brain midline detection model, the method comprises:
    obtaining a brain sample set, the brain sample set comprising a plurality of brain sample images, each brain sample image being associated with a brain midline identification label comprising a brain midline binary classification label and a brain midline annotation image;
    inputting the brain sample images into an initial combined recognition model containing initial parameters, the initial combined recognition model comprising an initial deep network model, an initial pyramid network model, and an initial weighted fusion model;
    extracting the midline features from the brain sample image through the initial deep network model to generate at least one to-be-processed sample feature map and a sample classification recognition result;
    determining a first loss value from the sample classification recognition result and the brain midline binary classification label;
    when the sample classification recognition result is detected to be that a brain midline can be segmented, inputting all the to-be-processed sample feature maps into the initial pyramid network model, and fusing the to-be-processed sample feature maps through the initial pyramid network model to generate at least one fused sample feature map group;
    determining a second loss value from all the fused sample feature map groups and the brain midline annotation image;
    applying bilinear interpolation, through the initial weighted fusion model, to weight-fuse and midline-segment all the fused sample feature map groups to obtain a sample segmentation recognition result;
    determining a third loss value from the sample segmentation recognition result and the brain midline annotation image;
    weighting the first loss value, the second loss value, and the third loss value to obtain a total loss value; and
    when the total loss value has not reached a preset convergence condition, iteratively updating the initial parameters of the initial combined recognition model until the total loss value reaches the preset convergence condition, and recording the converged initial combined recognition model as the trained brain midline detection model.
  6. The brain midline recognition method of claim 5, wherein determining the third loss value from the sample segmentation recognition result and the brain midline annotation image comprises:
    applying a distance transform to the brain midline annotation image to obtain a brain midline distance image; and
    inputting a sample segmentation image in the sample segmentation recognition result and the brain midline distance image into a distance loss function, and computing the third loss value through the distance loss function, the distance loss function being:
    L3 = mean(A⊙B)
    where:
    L3 is the third loss value;
    A is the sample segmentation image;
    B is the brain midline distance image; and
    A⊙B is the element-wise product of the pixels of the sample segmentation image and the brain midline distance image.
  7. The brain midline recognition method of claim 1, wherein applying bilinear interpolation to weight-fuse all the fused feature map groups to generate the to-be-segmented feature image comprises:
    using bilinear interpolation, interpolating, through the weighted fusion model, each fused feature map in a fused feature map group into an augmented feature map corresponding to that fused feature map group, the augmented feature map having the same size as the brain image; and
    weight-fusing all the augmented feature maps through the weighted fusion model into one to-be-segmented feature image.
  8. A brain midline recognition apparatus, comprising:
    an obtaining module, configured to obtain a brain image associated with a user identification code and preprocess the brain image to obtain a to-be-recognized image;
    an input module, configured to input the to-be-recognized image into a trained brain midline detection model, the brain midline detection model comprising a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model;
    an extraction module, configured to extract midline features from the to-be-recognized image through the multi-scale deep network model to generate at least one to-be-processed feature map and a classification recognition result, the classification recognition result characterizing whether a brain midline can be segmented from the brain image;
    a fusion module, configured to, when the classification recognition result is detected to be that a brain midline can be segmented, input all the to-be-processed feature maps into the feature pyramid network model and fuse all the to-be-processed feature maps through the feature pyramid network model to generate at least one fused feature map group;
    a segmentation module, configured to input all the fused feature map groups into the weighted fusion model, apply bilinear interpolation to interpolate and weight-fuse all the fused feature map groups to generate a to-be-segmented feature image, and perform midline segmentation on the to-be-segmented feature image to obtain a brain midline segmentation recognition result; and
    a compositing module, configured to composite the brain image with a segmentation recognition image in the brain midline segmentation recognition result to obtain a brain midline image, and store the user identification code, the classification recognition result, and the brain midline image in association as a final brain midline recognition result.
  9. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the following steps:
    obtaining a brain image associated with a user identification code, and preprocessing the brain image to obtain a to-be-recognized image;
    inputting the to-be-recognized image into a trained brain midline detection model, the brain midline detection model comprising a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model;
    extracting midline features from the to-be-recognized image through the multi-scale deep network model to generate at least one to-be-processed feature map and a classification recognition result, the classification recognition result characterizing whether a brain midline can be segmented from the brain image;
    when the classification recognition result is detected to be that a brain midline can be segmented, inputting all the to-be-processed feature maps into the feature pyramid network model, and fusing all the to-be-processed feature maps through the feature pyramid network model to generate at least one fused feature map group;
    inputting all the fused feature map groups into the weighted fusion model, applying bilinear interpolation to interpolate and weight-fuse all the fused feature map groups to generate a to-be-segmented feature image, and performing midline segmentation on the to-be-segmented feature image to obtain a brain midline segmentation recognition result; and
    compositing the brain image with a segmentation recognition image in the brain midline segmentation recognition result to obtain a brain midline image, and storing the user identification code, the classification recognition result, and the brain midline image in association as a final brain midline recognition result.
  10. The computer device of claim 9, wherein after extracting midline features from the to-be-recognized image through the multi-scale deep network model to generate at least one to-be-processed feature map and a classification recognition result, the processor, when executing the computer-readable instructions, further implements the following step:
    when the classification recognition result is detected to be that no brain midline can be segmented, marking the brain image as a no-midline image associated with the user identification code.
  11. The computer device of claim 10, wherein after storing the user identification code, the classification recognition result, and the brain midline image in association as the final brain midline recognition result, the processor, when executing the computer-readable instructions, further implements the following steps:
    obtaining all the no-midline images and brain midline images associated with the same user identification code;
    inputting all the no-midline images and all the brain midline images into a three-dimensional midline-shift feature recognition model, and extracting shift features from all the no-midline images and brain midline images through the three-dimensional midline-shift feature recognition model; and
    recognizing the extracted shift features through the three-dimensional midline-shift feature recognition model to obtain a brain midline shift result, the brain midline shift result characterizing a brain midline shift type corresponding to the user identification code.
  12. The computer device of claim 9, wherein preprocessing the brain image to obtain the to-be-recognized image comprises:
    converting the brain image according to preset window-width and window-level parameters to obtain an intermediate image; and
    normalizing the intermediate image to obtain the to-be-recognized image.
  13. The computer device of claim 9, wherein before inputting the to-be-recognized image into the trained brain midline detection model, the processor, when executing the computer-readable instructions, further implements the following steps:
    obtaining a brain sample set, the brain sample set comprising a plurality of brain sample images, each brain sample image being associated with a brain midline identification label comprising a brain midline binary classification label and a brain midline annotation image;
    inputting the brain sample images into an initial combined recognition model containing initial parameters, the initial combined recognition model comprising an initial deep network model, an initial pyramid network model, and an initial weighted fusion model;
    extracting the midline features from the brain sample image through the initial deep network model to generate at least one to-be-processed sample feature map and a sample classification recognition result;
    determining a first loss value from the sample classification recognition result and the brain midline binary classification label;
    when the sample classification recognition result is detected to be that a brain midline can be segmented, inputting all the to-be-processed sample feature maps into the initial pyramid network model, and fusing the to-be-processed sample feature maps through the initial pyramid network model to generate at least one fused sample feature map group;
    determining a second loss value from all the fused sample feature map groups and the brain midline annotation image;
    applying bilinear interpolation, through the initial weighted fusion model, to weight-fuse and midline-segment all the fused sample feature map groups to obtain a sample segmentation recognition result;
    determining a third loss value from the sample segmentation recognition result and the brain midline annotation image;
    weighting the first loss value, the second loss value, and the third loss value to obtain a total loss value; and
    when the total loss value has not reached a preset convergence condition, iteratively updating the initial parameters of the initial combined recognition model until the total loss value reaches the preset convergence condition, and recording the converged initial combined recognition model as the trained brain midline detection model.
  14. The computer device of claim 13, wherein determining the third loss value from the sample segmentation recognition result and the brain midline annotation image comprises:
    applying a distance transform to the brain midline annotation image to obtain a brain midline distance image; and
    inputting a sample segmentation image in the sample segmentation recognition result and the brain midline distance image into a distance loss function, and computing the third loss value through the distance loss function, the distance loss function being:
    L3 = mean(A⊙B)
    where:
    L3 is the third loss value;
    A is the sample segmentation image;
    B is the brain midline distance image; and
    A⊙B is the element-wise product of the pixels of the sample segmentation image and the brain midline distance image.
  15. One or more readable storage media storing computer-readable instructions, wherein the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps:
    obtaining a brain image associated with a user identification code, and preprocessing the brain image to obtain a to-be-recognized image;
    inputting the to-be-recognized image into a trained brain midline detection model, the brain midline detection model comprising a multi-scale deep network model, a feature pyramid network model, and a weighted fusion model;
    extracting midline features from the to-be-recognized image through the multi-scale deep network model to generate at least one to-be-processed feature map and a classification recognition result, the classification recognition result characterizing whether a brain midline can be segmented from the brain image;
    when the classification recognition result is detected to be that a brain midline can be segmented, inputting all the to-be-processed feature maps into the feature pyramid network model, and fusing all the to-be-processed feature maps through the feature pyramid network model to generate at least one fused feature map group;
    inputting all the fused feature map groups into the weighted fusion model, applying bilinear interpolation to interpolate and weight-fuse all the fused feature map groups to generate a to-be-segmented feature image, and performing midline segmentation on the to-be-segmented feature image to obtain a brain midline segmentation recognition result; and
    compositing the brain image with a segmentation recognition image in the brain midline segmentation recognition result to obtain a brain midline image, and storing the user identification code, the classification recognition result, and the brain midline image in association as a final brain midline recognition result.
  16. The readable storage media of claim 15, wherein after extracting midline features from the to-be-recognized image through the multi-scale deep network model to generate at least one to-be-processed feature map and a classification recognition result, the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following step:
    when the classification recognition result is detected to be that no brain midline can be segmented, marking the brain image as a no-midline image associated with the user identification code.
  17. The readable storage media of claim 16, wherein after storing the user identification code, the classification recognition result, and the brain midline image in association as the final brain midline recognition result, the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps:
    obtaining all the no-midline images and brain midline images associated with the same user identification code;
    inputting all the no-midline images and all the brain midline images into a three-dimensional midline-shift feature recognition model, and extracting shift features from all the no-midline images and brain midline images through the three-dimensional midline-shift feature recognition model; and
    recognizing the extracted shift features through the three-dimensional midline-shift feature recognition model to obtain a brain midline shift result, the brain midline shift result characterizing a brain midline shift type corresponding to the user identification code.
  18. The readable storage media of claim 15, wherein preprocessing the brain image to obtain the to-be-recognized image comprises:
    converting the brain image according to preset window-width and window-level parameters to obtain an intermediate image; and
    normalizing the intermediate image to obtain the to-be-recognized image.
  19. The readable storage media of claim 15, wherein before inputting the to-be-recognized image into the trained brain midline detection model, the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps:
    obtaining a brain sample set, the brain sample set comprising a plurality of brain sample images, each brain sample image being associated with a brain midline identification label comprising a brain midline binary classification label and a brain midline annotation image;
    inputting the brain sample images into an initial combined recognition model containing initial parameters, the initial combined recognition model comprising an initial deep network model, an initial pyramid network model, and an initial weighted fusion model;
    extracting the midline features from the brain sample image through the initial deep network model to generate at least one to-be-processed sample feature map and a sample classification recognition result;
    determining a first loss value from the sample classification recognition result and the brain midline binary classification label;
    when the sample classification recognition result is detected to be that a brain midline can be segmented, inputting all the to-be-processed sample feature maps into the initial pyramid network model, and fusing the to-be-processed sample feature maps through the initial pyramid network model to generate at least one fused sample feature map group;
    determining a second loss value from all the fused sample feature map groups and the brain midline annotation image;
    applying bilinear interpolation, through the initial weighted fusion model, to weight-fuse and midline-segment all the fused sample feature map groups to obtain a sample segmentation recognition result;
    determining a third loss value from the sample segmentation recognition result and the brain midline annotation image;
    weighting the first loss value, the second loss value, and the third loss value to obtain a total loss value; and
    when the total loss value has not reached a preset convergence condition, iteratively updating the initial parameters of the initial combined recognition model until the total loss value reaches the preset convergence condition, and recording the converged initial combined recognition model as the trained brain midline detection model.
  20. The readable storage media of claim 19, wherein determining the third loss value from the sample segmentation recognition result and the brain midline annotation image comprises:
    applying a distance transform to the brain midline annotation image to obtain a brain midline distance image; and
    inputting a sample segmentation image in the sample segmentation recognition result and the brain midline distance image into a distance loss function, and computing the third loss value through the distance loss function, the distance loss function being:
    L3 = mean(A⊙B)
    where:
    L3 is the third loss value;
    A is the sample segmentation image;
    B is the brain midline distance image; and
    A⊙B is the element-wise product of the pixels of the sample segmentation image and the brain midline distance image.
PCT/CN2020/135333 2020-10-22 2020-12-10 Brain midline recognition method and apparatus, computer device, and storage medium WO2021189959A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011138413.8 2020-10-22
CN202011138413.8A CN112241952B (zh) Brain midline recognition method and apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021189959A1 true WO2021189959A1 (zh) 2021-09-30

Family

ID=74169662

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/135333 WO2021189959A1 (zh) 2020-10-22 2020-12-10 Brain midline recognition method and apparatus, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN112241952B (zh)
WO (1) WO2021189959A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115690189A (zh) * 2022-11-07 2023-02-03 北京安德医智科技有限公司 脑中线偏移量的检测方法、装置、设备及介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762412B (zh) * 2021-09-26 2023-04-18 国网四川省电力公司电力科学研究院 一种配电网单相接地故障识别方法、系统、终端及介质
CN114419031B (zh) * 2022-03-14 2022-06-14 深圳科亚医疗科技有限公司 一种脑中线的自动定位方法及其装置
CN115294104B (zh) * 2022-09-28 2023-01-10 杭州健培科技有限公司 基于三维脑部ct图像的脑中线预测模型、方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150254843A1 * 2012-09-13 2015-09-10 The Regents Of The University Of California Lung, lobe, and fissure imaging systems and methods
CN110321920A * 2019-05-08 2019-10-11 腾讯科技(深圳)有限公司 Image classification method and apparatus, computer-readable storage medium, and computer device
CN110473172A * 2019-07-24 2019-11-19 上海联影智能医疗科技有限公司 Method for determining an anatomical midline in medical images, computer device, and storage medium
CN110956636A * 2019-11-28 2020-04-03 北京推想科技有限公司 Image processing method and apparatus
CN111489324A * 2020-06-05 2020-08-04 华侨大学 Cervical cancer lesion diagnosis method fusing multimodal prior pathological deep features

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102499676B (zh) * 2011-11-03 2014-01-29 北京工业大学 EEG signal classification system and method based on effective time series and electrode recombination
CN104834935A (zh) * 2015-04-27 2015-08-12 电子科技大学 Stable unsupervised disease-taxonomy imaging method for brain tumors
CN104825196A (zh) * 2015-05-26 2015-08-12 昆明医科大学第二附属医院 Handheld ultrasound measurement device for detecting cerebral edema after decompressive craniectomy
CN109872306B (zh) * 2019-01-28 2021-01-08 腾讯科技(深圳)有限公司 Medical image segmentation method, apparatus, and storage medium
CN110443808B (zh) * 2019-07-04 2022-04-01 杭州深睿博联科技有限公司 Medical image processing method, apparatus, device, and storage medium for brain midline detection
CN110464380B (zh) * 2019-09-12 2021-10-29 李肯立 Method for quality control of ultrasound section images of fetuses in the second and third trimesters
CN111144285B (zh) * 2019-12-25 2024-06-14 中国平安人寿保险股份有限公司 Method, apparatus, device, and medium for recognizing degree of body fatness
CN111667464B (zh) * 2020-05-21 2024-02-02 平安科技(深圳)有限公司 Three-dimensional image detection method and apparatus for dangerous goods, computer device, and storage medium

Also Published As

Publication number Publication date
CN112241952A (zh) 2021-01-19
CN112241952B (zh) 2023-09-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20926799

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20926799

Country of ref document: EP

Kind code of ref document: A1