CN111008974A - Multi-model fusion femoral neck fracture region positioning and segmentation method and system - Google Patents

Multi-model fusion femoral neck fracture region positioning and segmentation method and system

Info

Publication number
CN111008974A
Authority
CN
China
Prior art keywords: femoral neck, region, picture, segmentation, feature
Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion)
Application number
CN201911156864.1A
Other languages
Chinese (zh)
Inventor
Hao Pengyi
Ye Taotao
Zhang Yuehua
Current Assignee (the listed assignees may be inaccurate)
Zhejiang Feitu Imaging Technology Co ltd
Original Assignee
Zhejiang Feitu Imaging Technology Co ltd
Priority date: 2019-11-22 (the priority date is an assumption and is not a legal conclusion)
Filing date: 2019-11-22
Publication date: 2020-04-14
Application filed by Zhejiang Feitu Imaging Technology Co ltd filed Critical Zhejiang Feitu Imaging Technology Co ltd
Priority to CN201911156864.1A
Publication of CN111008974A

Classifications

    • G06T 7/11 Region-based segmentation (Image analysis; Segmentation; Edge detection)
    • G06N 3/045 Combinations of networks (Neural networks; Architecture)
    • G06T 2207/10116 X-ray image (Image acquisition modality)
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G06T 2207/30008 Bone (Biomedical image processing)


Abstract

The invention provides a multi-model fusion femoral neck fracture region positioning and segmentation method and system. The method comprises the following steps: preprocessing the obtained femoral neck X-ray film; inputting the preprocessed X-ray image into a constructed and trained detection neural network for detection to obtain a femoral neck region image Picture_origin; inputting the femoral neck region image Picture_origin into a constructed and trained segmentation network to obtain a segmentation effect map Picture_binary; and fusing the obtained segmentation effect map Picture_binary with the femoral neck region image Picture_origin to form a thermodynamic diagram that characterizes the fracture region and the extent of the fracture.

Description

Multi-model fusion femoral neck fracture region positioning and segmentation method and system
Technical Field
The invention relates to the field of computers, in particular to a multi-model fused femoral neck fracture region positioning and segmenting method and system.
Background
Femoral neck fracture and its post-operative sequelae are among today's serious public health problems; femoral neck fractures account for 3.58% of all body fractures. Femoral neck fracture is one of the main injuries of elderly patients, and the number of cases will rise as human lifespans lengthen and the elderly population grows; meanwhile, the incidence of femoral neck fracture among young adults is also increasing gradually with the rapid development of China's transportation and construction industries. As many as 20% to 30% of femoral neck fracture patients die within the following year. Treatment of femoral neck fractures in young adults is comparatively difficult because patients' expectations are high, and the high incidence of femoral head necrosis, non-union, and malunion seriously affects treatment outcomes.
Earlier diagnosis and treatment can not only preserve joint function but also ensure the mobility and quality of life of the patient. Anteroposterior pelvic X-ray films (PXRs) are an important means of diagnosing femoral neck fractures. However, the sensitivity of PXRs to femoral neck fractures is not ideal. Studies have shown that the early misdiagnosis rate is as high as 7% to 14%, and occult fractures of the femoral neck are difficult to diagnose, especially for emergency physicians without orthopedic specialization and for young orthopedic physicians; delayed diagnosis and treatment worsens the eventual outcome. Moreover, experienced physicians are concentrated mainly in cities, and township-level hospitals lack doctors able to evaluate femoral neck fractures. In recent years, traditional computer-aided diagnosis methods have assisted experts in evaluating femoral neck X-ray films, mostly by training models on features such as texture and shape extracted from the X-ray film; however, extracting such features places high demands on film quality, and sample quality easily affects the training result of the model. These factors make it difficult for traditional methods to achieve high performance.
Disclosure of Invention
To overcome the high difficulty, low efficiency, and low precision of prior-art femoral neck fracture diagnosis methods, the invention provides a multi-model fusion femoral neck fracture region positioning and segmentation method and system. The method can automatically analyze a femoral neck X-ray film, accurately locate the femoral neck fracture region, segment the fracture region, and render a thermodynamic diagram, assisting a doctor in quickly locating the fracture region and diagnosing the degree of fracture.
In order to achieve the above object, the present invention provides a method for locating and segmenting a femoral neck fracture region by multi-model fusion, comprising:
preprocessing the acquired femoral neck X-ray image;
inputting the preprocessed X-ray image into a constructed and trained detection neural network for detection to obtain a femoral neck region image Picture_origin;
inputting the femoral neck region image Picture_origin into a constructed and trained segmentation network to obtain a segmentation effect map Picture_binary;
fusing the obtained segmentation effect map Picture_binary with the femoral neck region image Picture_origin to form a thermodynamic diagram that characterizes the fracture region and the extent of the fracture.
According to an embodiment of the present invention, the step of preprocessing the obtained X-ray film containing the femoral neck comprises:
calculating the maximum value max and the minimum value min of the pixels in the obtained X-ray film image;
carrying out normalization treatment, wherein the normalization formula is as follows:
Picture_norm = (Picture - min) / (max - min)
according to an embodiment of the invention, the step of generating the thermodynamic diagram comprises:
multiplying the segmentation effect map Picture_binary by 255 and expanding it to three channels to form a segmentation process map Picture_after;
fusing the femoral neck region image Picture_origin and the segmentation process map Picture_after through a fusion formula to generate the thermodynamic diagram Picture_heat, the fusion formula being:
Picture_heat = alpha * Picture_origin + beta * Picture_after + gamma
where alpha and beta are weights whose sum equals 1, and gamma is an offset value.
According to an embodiment of the invention, when the detection neural network is constructed or used for detection, the following steps are adopted to obtain the detection feature F_detection of the femoral neck X-ray film:
Step 2.1: inputting a group of femoral neck X-ray film samples;
step 2.2: performing a 7 × 7 convolution operation followed by batch normalization and then a ReLU activation function operation;
step 2.3: extracting features through a max pooling operation;
step 2.4: passing the extracted features through a residual convolution module comprising 2 groups of 3 × 3 convolution operations and batch normalization;
step 2.5: repeating step 2.4 three times (four residual convolution modules in total) to obtain the detection feature F_detection of the femoral neck X-ray film sample.
According to an embodiment of the invention, when the detection neural network is constructed or used for detection, the following steps are adopted to obtain the femoral neck region image Picture_origin:
step 2.6: inputting the feature F_detection obtained in step 2.5 into a region candidate module to obtain network-suggested candidate boxes;
step 2.7: inputting the feature F_detection into a region-of-interest pooling module to obtain a feature F_ROI;
step 2.8: inputting the feature F_ROI into a fully connected layer to obtain a feature F_fc;
step 2.9: passing the feature F_fc through a fully connected layer containing N hidden neural units and a fully connected layer containing N × 4 hidden units to obtain, respectively, a feature F_cls of size N characterizing the class of each candidate region and a feature F_regression of size N × 4 characterizing the exact position of each candidate region in the image.
According to an embodiment of the present invention, in step 2.6, the region candidate module construction process is:
step 2.6.1: obtaining a feature F_rpn from the feature F_detection of step 2.5 through a 3 × 3 convolution operation;
step 2.6.2: passing the feature F_rpn through two separate 1 × 1 convolution operations to obtain a feature F_rpn_cls and a feature F_rpn_reg, where F_rpn_reg represents the RPN-suggested candidate boxes and F_rpn_cls represents the probability that each candidate box in F_rpn_reg is foreground or background;
step 2.6.3: retaining the candidate boxes in F_rpn_reg whose F_rpn_cls value is greater than or equal to 0.5, keeping only the foreground boxes and removing the background boxes.
According to an embodiment of the present invention, in step 2.7, the region of interest pooling module construction process is:
step 2.7.1: cropping the feature F_detection from step 2.5 according to the candidate boxes obtained in step 2.6 to obtain feature maps F_cropped;
step 2.7.2: inputting the cropped feature maps F_cropped into a 2 × 2 global pooling layer;
step 2.7.3: in the global pooling layer, pooling the feature map F_cropped into a one-dimensional vector feature F_ROI of size 512 × 2 × 2, where the pooling window sizes and strides are:
size_w = ⌈W / 2⌉, size_h = ⌈H / 2⌉
s_w = ⌊W / 2⌋, s_h = ⌊H / 2⌋
where H and W respectively represent the height and width of the feature map F_cropped; size_w and size_h represent the pooling window sizes; s_w and s_h represent the strides of the pooling window over the width and height of the feature map; and the symbols ⌊·⌋ and ⌈·⌉ denote rounding down and rounding up, respectively.
According to an embodiment of the present invention, the segmentation network includes a feature extraction model and a segmentation network model, wherein the construction of the feature extraction model includes:
step 3.1: inputting a group of femoral neck region image samples processed by the detection neural network;
step 3.2: performing a 7 × 7 convolution operation followed by batch normalization and a ReLU activation function operation;
step 3.3: extracting features through a max pooling operation;
step 3.4: passing the extracted features through a dense convolution module containing 4 groups of batch normalization, ReLU activation function operations, 1 × 1 convolution operations, and 3 × 3 convolution operations;
step 3.5: inputting the features from step 3.4 into a 1 × 1 convolution operation and an average pooling layer operation with a 2 × 2 convolution kernel to obtain a feature F_1;
step 3.6: taking the feature F_1 as input and repeating step 3.4 and step 3.5 twice in sequence to obtain features F_2 and F_3 respectively; taking the feature F_3 as input and performing step 3.4 once more to obtain a feature F_4.
According to an embodiment of the present invention, the segmentation network model comprises:
step 3.7: upsampling the feature F_4 obtained in step 3.6 and fusing it with the feature F_3 obtained in step 3.6 to obtain a feature F_upsample1;
step 3.8: upsampling the feature F_upsample1 and fusing it with the feature F_2 obtained in step 3.6 to obtain a feature F_upsample2;
step 3.9: upsampling the feature F_upsample2 and fusing it with the feature F_1 obtained in step 3.5 to obtain a feature F_upsample3;
step 3.10: upsampling the feature F_upsample3 to obtain a feature F_upsample4, and obtaining the segmentation effect map Picture_binary through a sigmoid activation function.
Correspondingly, the invention also provides a multi-model fusion femoral neck fracture region positioning and segmentation system, which comprises a preprocessing unit, a detection unit, a segmentation unit, and a thermodynamic diagram generation unit. The preprocessing unit preprocesses the acquired femoral neck X-ray image. The detection unit inputs the preprocessed X-ray image into the constructed and trained detection neural network for detection to obtain the femoral neck region image Picture_origin. The segmentation unit inputs the femoral neck region image Picture_origin into the constructed and trained segmentation network to obtain the segmentation effect map Picture_binary. The thermodynamic diagram generation unit fuses the obtained segmentation effect map Picture_binary with the femoral neck region image Picture_origin to form a thermodynamic diagram that characterizes the fracture region and the extent of the fracture.
In summary, the multi-model fusion femoral neck fracture region positioning and segmentation method and system provided by the invention locate the femoral neck fracture region by extracting features from the femoral neck X-ray image, segment the fracture region, and obtain a thermodynamic diagram representing the fracture severity and the fracture region, assisting doctors in quickly locating the fracture region and diagnosing the degree of fracture. Compared with traditional computer-aided diagnosis methods, the method places lower demands on the quality of X-ray images and achieves higher recognition efficiency.
In order to make the aforementioned and other objects, features and advantages of the invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a flowchart illustrating a method for locating and segmenting a femoral neck fracture region through multi-model fusion according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a network structure for positioning a femoral neck fracture region.
Fig. 3 is a schematic diagram of a network structure for segmenting a femoral neck fracture region.
Fig. 4 is a functional block diagram of a multi-model fused femoral neck fracture region locating and segmenting system according to an embodiment of the present invention.
Detailed Description
As shown in fig. 1, the multi-model fusion femoral neck fracture region positioning and segmentation method provided by this embodiment comprises: preprocessing the acquired X-ray image of the femoral neck (step S1); inputting the preprocessed X-ray image into the constructed and trained detection neural network for detection to obtain the femoral neck region image Picture_origin (step S2); inputting the femoral neck region image Picture_origin into the constructed and trained segmentation network to obtain the segmentation effect map Picture_binary (step S3); and fusing the obtained segmentation effect map Picture_binary with the femoral neck region image Picture_origin to form a thermodynamic diagram characterizing the fracture region and the extent of the fracture (step S4).
The multi-model fused femoral neck fracture region locating and segmenting method provided by the embodiment starts at step S1. In this step, the acquired femoral neck radiograph image is preprocessed. The specific pretreatment comprises the following steps:
step S1.1: calculating the maximum value max and the minimum value min of the pixels in the obtained X-ray film image;
step S1.2: carrying out normalization treatment, wherein the normalization formula is as follows:
Picture_norm = (Picture - min) / (max - min)
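This min-max normalization can be sketched as follows (a minimal illustration, assuming the X-ray film has been loaded as a NumPy array; the function name is ours, not the patent's):

    import numpy as np

    def normalize_xray(image: np.ndarray) -> np.ndarray:
        """Min-max normalize pixel values to [0, 1] (steps S1.1 and S1.2)."""
        pixel_min = float(image.min())  # min from step S1.1
        pixel_max = float(image.max())  # max from step S1.1
        if pixel_max == pixel_min:      # guard against a constant image
            return np.zeros_like(image, dtype=np.float32)
        return ((image - pixel_min) / (pixel_max - pixel_min)).astype(np.float32)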
After the preprocessing is finished, step S2 is executed: the preprocessed X-ray image is input into the constructed and trained detection neural network for detection to obtain the femoral neck region image Picture_origin. In this step, the detection neural network architecture consists of four parts; the first part is the extraction of the image detection feature F_detection, with the following specific steps:
step 2.1: inputting a group of femoral neck X-ray image samples or a femoral neck X-ray image to be detected;
step 2.2: performing a 7 × 7 convolution operation followed by batch normalization and then a ReLU activation function operation;
step 2.3: extracting features through a max pooling operation;
step 2.4: passing the extracted features through a residual convolution module comprising 2 groups of 3 × 3 convolution operations and batch normalization;
step 2.5: repeating step 2.4 three times (four residual convolution modules in total) to obtain the detection feature F_detection of the femoral neck X-ray film sample, as sketched in the code below.
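A hedged PyTorch sketch of this feature extractor (the stem of steps 2.2-2.3 plus the residual module of step 2.4; for brevity the sketch keeps a constant channel width of 64, whereas the embodiment below widens the modules to 64, 128, 256, and 512):

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """One residual module: 2 groups of 3x3 convolution + batch norm (step 2.4)."""
        def __init__(self, channels: int):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + x)  # shortcut added after the second convolution

    stem = nn.Sequential(  # steps 2.2-2.3
        nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False),
        nn.BatchNorm2d(64),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
    )
    blocks = nn.Sequential(*[ResidualBlock(64) for _ in range(4)])  # steps 2.4-2.5
    F_detection = blocks(stem(torch.randn(1, 1, 512, 512)))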
Then, the second part (the region candidate module, or RPN module), the third part (the region-of-interest pooling module, or ROI Pool module), and the fourth part (the fully connected layers) are executed in sequence to obtain the femoral neck region image Picture_origin, with the following specific steps:
step 2.6: inputting the feature F_detection obtained in step 2.5 into the region candidate module to obtain network-suggested candidate boxes;
step 2.7: inputting the feature F_detection into the region-of-interest pooling module to obtain a feature F_ROI;
step 2.8: inputting the feature F_ROI into a fully connected layer to obtain a feature F_fc;
step 2.9: passing the feature F_fc through a fully connected layer containing N hidden neural units and a fully connected layer containing N × 4 hidden units to obtain, respectively, a feature F_cls of size N characterizing the class of each candidate region and a feature F_regression of size N × 4 characterizing the exact position of each candidate region in the image; a sketch of these heads follows.
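A minimal sketch of steps 2.8-2.9 (the 512 × 2 × 2 input size follows from step 2.7.3 below; the hidden width of 1024 and the value of N are illustrative assumptions, not fixed by the patent):

    import torch
    import torch.nn as nn

    N = 2  # illustrative number of candidate regions / classes
    fc = nn.Linear(512 * 2 * 2, 1024)   # step 2.8: F_ROI -> F_fc (1024 is assumed)
    fc_cls = nn.Linear(1024, N)         # step 2.9: F_cls, class of each candidate
    fc_reg = nn.Linear(1024, N * 4)     # step 2.9: F_regression, box coordinates

    f_roi = torch.randn(1, 512 * 2 * 2)  # pooled vector from step 2.7
    f_fc = torch.relu(fc(f_roi))
    f_cls, f_regression = fc_cls(f_fc), fc_reg(f_fc)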
In this embodiment, the process of constructing the region candidate module in step 2.6 is as follows:
step 2.6.1: obtaining a feature F_rpn from the feature F_detection of step 2.5 through a 3 × 3 convolution operation;
step 2.6.2: passing the feature F_rpn through two separate 1 × 1 convolution operations to obtain a feature F_rpn_cls and a feature F_rpn_reg, where F_rpn_reg represents the RPN-suggested candidate boxes and F_rpn_cls represents the probability that each candidate box in F_rpn_reg is foreground or background;
step 2.6.3: retaining the candidate boxes in F_rpn_reg whose F_rpn_cls value is greater than or equal to 0.5, keeping only the foreground boxes and removing the background boxes (see the sketch below).
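A hedged sketch of these RPN heads (a single anchor per location is assumed for brevity; a real RPN typically uses several anchors per location):

    import torch
    import torch.nn as nn

    class RegionProposalHead(nn.Module):
        """RPN heads of steps 2.6.1-2.6.3."""
        def __init__(self, in_channels: int = 512, num_anchors: int = 1):
            super().__init__()
            self.conv = nn.Conv2d(in_channels, in_channels, 3, padding=1)  # F_rpn
            self.cls = nn.Conv2d(in_channels, num_anchors, 1)              # F_rpn_cls
            self.reg = nn.Conv2d(in_channels, num_anchors * 4, 1)          # F_rpn_reg

        def forward(self, f_detection):
            f_rpn = torch.relu(self.conv(f_detection))
            fg_prob = torch.sigmoid(self.cls(f_rpn))  # foreground probability
            boxes = self.reg(f_rpn)                   # candidate box offsets
            keep = fg_prob >= 0.5                     # step 2.6.3: foreground only
            return boxes, fg_prob, keep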
In this embodiment, in step 2.7, the region-of-interest pooling module construction process is:
step 2.7.1: cropping the feature F_detection from step 2.5 according to the candidate boxes obtained in step 2.6 to obtain feature maps F_cropped;
step 2.7.2: inputting the cropped feature maps F_cropped into a 2 × 2 global pooling layer;
step 2.7.3: in the global pooling layer, pooling the feature map F_cropped into a one-dimensional vector feature F_ROI of size 512 × 2 × 2, where the pooling window sizes and strides are:
size_w = ⌈W / 2⌉, size_h = ⌈H / 2⌉
s_w = ⌊W / 2⌋, s_h = ⌊H / 2⌋
where H and W respectively represent the height and width of the feature map F_cropped; size_w and size_h represent the pooling window sizes; s_w and s_h represent the strides of the pooling window over the width and height of the feature map; and the symbols ⌊·⌋ and ⌈·⌉ denote rounding down and rounding up, respectively. A sketch of this pooling follows.
In step S2, the detection neural network architecture consists of four parts. ① Image feature extraction: mainly 1 convolutional layer, 1 max pooling layer, and 4 residual convolution modules; each residual convolution module contains two convolutional layers and one shortcut branch, where the shortcut starts at the module input and ends after the second convolutional layer, so that the input features can be added numerically to the features extracted by the second convolutional layer. ② The region candidate module: mainly 3 convolutional layers, each of which normalizes its features and passes them through a ReLU activation function. ③ The region-of-interest pooling module: crops feature maps according to the detection boxes obtained from the region candidate module; these cropped feature maps of different sizes are pooled globally, converting variable-sized inputs into fixed-sized outputs. ④ The fully connected layers: followed by a dropout layer with rate 0.5; the fully connected layers for classification and regression then output, respectively, the class to which each candidate region belongs (including the fracture class label) and the exact position of the fracture region.
After the femoral neck region image Picture_origin is acquired in step S2, step S3 is performed: the femoral neck region image Picture_origin is input into the constructed and trained segmentation network to obtain the segmentation effect map Picture_binary.
In this embodiment, the segmentation network includes a feature extraction model and a segmentation network model, where the construction of the feature extraction model includes:
step 3.1: inputting a group of femoral neck region image samples processed by the detection neural network;
step 3.2: performing a 7 × 7 convolution operation followed by batch normalization and a ReLU activation function operation;
step 3.3: extracting features through a max pooling operation;
step 3.4: passing the extracted features through a dense convolution module containing 4 groups of batch normalization, ReLU activation function operations, 1 × 1 convolution operations, and 3 × 3 convolution operations;
step 3.5: inputting the features from step 3.4 into a 1 × 1 convolution operation and an average pooling layer operation with a 2 × 2 convolution kernel to obtain a feature F_1;
step 3.6: taking the feature F_1 as input and repeating step 3.4 and step 3.5 twice in sequence to obtain features F_2 and F_3 respectively; taking the feature F_3 as input and performing step 3.4 once more to obtain a feature F_4. One dense module and one transition are sketched below.
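A hedged sketch of steps 3.4-3.5 (the growth rate of 32 and the 4x bottleneck width are DenseNet-style assumptions; the patent fixes only the 4 groups per module and the overall kernel counts):

    import torch
    import torch.nn as nn

    class DenseLayer(nn.Module):
        """One BN - ReLU - 1x1 conv - 3x3 conv group of step 3.4."""
        def __init__(self, in_channels: int, growth: int = 32):
            super().__init__()
            self.body = nn.Sequential(
                nn.BatchNorm2d(in_channels), nn.ReLU(inplace=True),
                nn.Conv2d(in_channels, 4 * growth, 1, bias=False),
                nn.BatchNorm2d(4 * growth), nn.ReLU(inplace=True),
                nn.Conv2d(4 * growth, growth, 3, padding=1, bias=False),
            )

        def forward(self, x):
            return torch.cat([x, self.body(x)], dim=1)  # fuse with earlier features

    def dense_block(in_channels: int, growth: int = 32, layers: int = 4):
        mods, ch = [], in_channels
        for _ in range(layers):  # the 4 groups of step 3.4
            mods.append(DenseLayer(ch, growth))
            ch += growth
        return nn.Sequential(*mods), ch

    def transition(in_channels: int, out_channels: int) -> nn.Sequential:
        """Step 3.5: a 1x1 convolution followed by 2x2 average pooling."""
        return nn.Sequential(nn.Conv2d(in_channels, out_channels, 1, bias=False),
                             nn.AvgPool2d(2))

    block1, ch = dense_block(64)
    trans1 = transition(ch, 128)  # the output of this pair corresponds to F_1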
The segmentation network model comprises:
step 3.7: upsampling the feature F_4 obtained in step 3.6 and fusing it with the feature F_3 obtained in step 3.6 to obtain a feature F_upsample1;
step 3.8: upsampling the feature F_upsample1 and fusing it with the feature F_2 obtained in step 3.6 to obtain a feature F_upsample2;
step 3.9: upsampling the feature F_upsample2 and fusing it with the feature F_1 obtained in step 3.5 to obtain a feature F_upsample3;
step 3.10: upsampling the feature F_upsample3 to obtain a feature F_upsample4, and obtaining the segmentation effect map Picture_binary through a sigmoid activation function; one decoder step is sketched below.
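A hedged sketch of one upsample-and-fuse step and the final activation (element-wise addition is assumed for the fusion, which the patent leaves unspecified; the 4 × 4 stride-2 transposed convolution matches the embodiment described later):

    import torch.nn as nn

    class UpsampleFuse(nn.Module):
        """One decoder step (steps 3.7-3.9): upsample, then fuse a skip feature."""
        def __init__(self, in_ch: int, out_ch: int):
            super().__init__()
            self.up = nn.ConvTranspose2d(in_ch, out_ch,
                                         kernel_size=4, stride=2, padding=1)

        def forward(self, x, skip):
            return self.up(x) + skip  # fusion with F_3 / F_2 / F_1 (assumed addition)

    up1 = UpsampleFuse(512, 256)  # F_4 fused with F_3 -> F_upsample1
    head = nn.Sequential(         # step 3.10: final upsampling and sigmoid
        nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),
        nn.Sigmoid(),
    )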
In step S3, the feature extraction model mainly comprises an image feature extraction part consisting of 1 convolutional layer, 1 max pooling layer, and 4 dense convolution modules; each dense convolution module contains 4 groups of batch normalization-ReLU-convolution operations, and the features produced by each group are fused through branches with the features from the preceding groups.
In step S3, the segmentation network model consists of two parts. ① The upsampling part: the image features are upsampled three times, and in each upsampling step they are fused with the feature map of the corresponding stage of the feature extraction model. ② The segmentation map calculation part: each value in the final feature map passes through a sigmoid activation function; values greater than or equal to 0.5 are set to 1 and values below 0.5 are set to 0, finally yielding the segmentation effect map Picture_binary, as in the thresholding sketch below.
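A one-line thresholding sketch of part ② (applied to the sigmoid output):

    import torch

    def binarize(prob_map: torch.Tensor) -> torch.Tensor:
        """Values >= 0.5 become 1, values below 0.5 become 0 (Picture_binary)."""
        return (prob_map >= 0.5).float()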
Finally, step S4 is executed: the obtained segmentation effect map Picture_binary and the femoral neck region image Picture_origin are fused to form a thermodynamic diagram that characterizes the fracture region and the extent of the fracture. The step of generating the thermodynamic diagram comprises:
multiplying the segmentation effect map Picture_binary by 255 and expanding it to three channels to form a segmentation process map Picture_after;
fusing the femoral neck region image Picture_origin and the segmentation process map Picture_after through a fusion formula to generate the thermodynamic diagram Picture_heat, the fusion formula being:
Picture_heat = alpha * Picture_origin + beta * Picture_after + gamma
where alpha and beta are weights whose sum equals 1, and gamma is an offset value. A fusion sketch follows.
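A hedged OpenCV sketch of this fusion (cv2.addWeighted computes exactly alpha*src1 + beta*src2 + gamma; the weights 0.6/0.4 are illustrative, and Picture_origin is assumed to be an 8-bit three-channel image of the same size as the mask):

    import cv2
    import numpy as np

    def make_heatmap(picture_binary: np.ndarray, picture_origin: np.ndarray,
                     alpha: float = 0.6, beta: float = 0.4,
                     gamma: float = 0.0) -> np.ndarray:
        """Fuse the binary mask with the femoral neck crop (alpha + beta == 1)."""
        picture_after = cv2.cvtColor((picture_binary * 255).astype(np.uint8),
                                     cv2.COLOR_GRAY2BGR)  # x255, three channels
        return cv2.addWeighted(picture_origin, alpha, picture_after, beta, gamma)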
Specific examples of the construction and training of the detection network and the segmentation network are as follows:
① For the detection network model, 1327 femoral neck X-ray film samples were collected; 331 of them were taken as the test set and the remaining 996 as the training set. ② For the segmentation network, 2440 cropped femoral neck X-ray images were collected; 1954 of them were taken as the training set and the remaining 486 as the test set.
The detection network model consists of four parts: ① image feature extraction, ② the region candidate module, ③ the region-of-interest pooling module, and ④ the fully connected layers. Image feature extraction comprises 1 convolutional layer, 1 max pooling layer, and 4 residual convolution modules; the region candidate module comprises 3 convolutional layers; the region-of-interest pooling module comprises one global pooling layer; and the fully connected part comprises two fully connected layers.
First, the convolution kernel size of the first convolutional layer is 7 × 7 with a sliding step size of 2. The convolution kernels in the residual convolution modules are all 3 × 3; the sliding step size of the first residual convolution module is 1 and that of the rest is 2, and the convolution kernels connecting the residual convolution modules are all 1 × 1. The number of convolution kernels increases from module to module: 64, 128, 256, and 512 respectively. In the region candidate module, the first convolutional layer has a 3 × 3 kernel with a sliding step size of 2, and the last two convolutional layers are both 1 × 1 with a sliding step size of 1.
Secondly, all parameter weights in the convolutional layers are initialized as random orthogonal matrices with L2 weight regularization, and the bias values are initialized to 0. In the fully connected layers, the weights are initialized from a random normal distribution with L2 weight regularization, and the bias values are initialized to 0.
Then, the detection network model is trained, specifically in a batch training mode. The batch size of the training set generator and the validation set generator is 1; after each training round, the validation generator is run 5 times and the validation loss is calculated. The loss functions are cross entropy and Smooth L1. The model optimizer is stochastic gradient descent with a learning rate of 0.001, and the learning rate is divided by 10 every 5 rounds. The maximum number of training rounds is 20; training stops once the validation and training losses converge, and the model is saved. This schedule can be sketched as below.
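A hedged sketch of this optimizer and learning-rate schedule (the stand-in model and the empty loop body are placeholders for the detection network and its loss computation):

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 2)  # stand-in for the detection network
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)

    for epoch in range(20):  # maximum of 20 training rounds
        # ... one pass over batches of size 1 with cross-entropy + Smooth L1 ...
        scheduler.step()     # learning rate divided by 10 every 5 rounds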
Third, the segmentation network model is constructed and trained. The model mainly comprises ① an image feature extraction part and ② an upsampling part. Image feature extraction comprises 1 convolutional layer, 1 max pooling layer, and 4 dense convolution modules; the upsampling part performs 3 upsampling operations and 3 fusion operations.
First, the convolution kernel size of the first convolutional layer of the feature extraction architecture is 7 × 7 with a sliding step size of 2. In each dense convolution module, the first convolutional layer has a 1 × 1 kernel with a sliding step size of 1, and the second convolutional layers all have 3 × 3 kernels with a sliding step size of 1. The number of convolution kernels increases from module to module: 64, 128, 256, and 512 respectively.
Secondly, the first upsampling layer of the segmentation network model has a 16 × 16 convolution kernel with a sliding step size of 8; the other upsampling layers have 4 × 4 kernels with a sliding step size of 2. The number of convolution kernels decreases with each upsampling layer: 256, 128, and 64 respectively.
Next, all parameter weights in the convolutional layers are initialized as random orthogonal matrices with L2 weight regularization, and the bias values are initialized to 0. In the fully connected layers, the weights are initialized from a random normal distribution with L2 weight regularization, and the bias values are initialized to 0.
Finally, the model is trained in a batch training mode. The batch size of the training set generator and the validation set generator is 32; after each training round, the validation generator is run 5 times and the validation loss is calculated. The loss functions are cross entropy and the sigmoid loss function. The model optimizer is stochastic gradient descent with a learning rate of 0.01, and the learning rate is divided by 10 every 10 rounds. The maximum number of training rounds is 60; training stops once the validation and training losses converge, and the model is saved.
Fourth, the detection network model and the segmentation network are tested: the models are loaded, and the preprocessed femoral neck X-ray film test set samples are input into them for analysis and testing.
Correspondingly, as shown in fig. 4, this embodiment further provides a multi-model fusion femoral neck fracture region positioning and segmentation system, which comprises a preprocessing unit 1, a detection unit 2, a segmentation unit 3, and a thermodynamic diagram generation unit 4. The preprocessing unit 1 preprocesses the acquired femoral neck X-ray image. The detection unit 2 inputs the preprocessed X-ray image into the constructed and trained detection neural network for detection to obtain the femoral neck region image Picture_origin. The segmentation unit 3 inputs the femoral neck region image Picture_origin into the constructed and trained segmentation network to obtain the segmentation effect map Picture_binary. The thermodynamic diagram generation unit 4 fuses the obtained segmentation effect map Picture_binary with the femoral neck region image Picture_origin to form a thermodynamic diagram that characterizes the fracture region and the extent of the fracture.
The working principle of the multi-model fusion femoral neck fracture region positioning and segmentation system provided in this embodiment is as described in steps S1 to S4, which are not described herein again.
In summary, the multi-model fusion femoral neck fracture region positioning and segmentation method and system provided by the invention locate the femoral neck fracture region by extracting features from the femoral neck X-ray image, segment the fracture region, and obtain a thermodynamic diagram representing the fracture severity and the fracture region, assisting doctors in quickly locating the fracture region and diagnosing the degree of fracture. Compared with traditional computer-aided diagnosis methods, the method places lower demands on the quality of X-ray images and achieves higher recognition efficiency.
Although the present invention has been described with reference to the preferred embodiments, it should be understood that various changes and modifications can be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A multi-model fused femoral neck fracture region positioning and segmentation method is characterized by comprising the following steps:
preprocessing the acquired femoral neck X-ray image;
inputting the preprocessed X-ray image into a constructed and trained detection neural network for detection to obtain a femoral neck region image Picture_origin;
inputting the femoral neck region image Picture_origin into a constructed and trained segmentation network to obtain a segmentation effect map Picture_binary;
fusing the obtained segmentation effect map Picture_binary with the femoral neck region image Picture_origin to form a thermodynamic diagram that characterizes the fracture region and the extent of the fracture.
2. The method for multi-model fused femoral neck fracture area localization and segmentation of claim 1, wherein the step of pre-processing the acquired X-ray film containing the femoral neck comprises:
calculating the maximum value max and the minimum value min of the pixels in the obtained X-ray film image;
carrying out normalization treatment, wherein the normalization formula is as follows:
Picture_norm = (Picture - min) / (max - min)
3. The method for multi-model fused femoral neck fracture region localization and segmentation of claim 1, wherein the step of generating the thermodynamic diagram comprises:
multiplying the segmentation effect map Picture_binary by 255 and expanding it to three channels to form a segmentation process map Picture_after;
fusing the femoral neck region image Picture_origin and the segmentation process map Picture_after through a fusion formula to generate the thermodynamic diagram Picture_heat, the fusion formula being:
Picture_heat = alpha * Picture_origin + beta * Picture_after + gamma
where alpha and beta are weights whose sum equals 1, and gamma is an offset value.
4. The method for positioning and segmenting the multi-model fused femoral neck fracture area according to claim 1, wherein the following steps are adopted to obtain the detection feature F_detection of the femoral neck X-ray film when the detection neural network is constructed or used for detection:
Step 2.1: inputting a group of femoral neck X-ray film samples;
step 2.2: performing a 7 × 7 convolution operation followed by batch normalization and then a ReLU activation function operation;
step 2.3: extracting features through a max pooling operation;
step 2.4: passing the extracted features through a residual convolution module comprising 2 groups of 3 × 3 convolution operations and batch normalization;
step 2.5: repeating step 2.4 three times (four residual convolution modules in total) to obtain the detection feature F_detection of the femoral neck X-ray film sample.
5. The method for positioning and segmenting the femoral neck fracture region through multi-model fusion according to claim 4, wherein the following steps are adopted to obtain the femoral neck region image Picture_origin when the detection neural network is constructed or used for detection:
step 2.6: inputting the feature F_detection obtained in step 2.5 into a region candidate module to obtain network-suggested candidate boxes;
step 2.7: inputting the feature F_detection into a region-of-interest pooling module to obtain a feature F_ROI;
step 2.8: inputting the feature F_ROI into a fully connected layer to obtain a feature F_fc;
step 2.9: passing the feature F_fc through a fully connected layer containing N hidden neural units and a fully connected layer containing N × 4 hidden units to obtain, respectively, a feature F_cls of size N characterizing the class of each candidate region and a feature F_regression of size N × 4 characterizing the exact position of each candidate region in the image.
6. The method for locating and segmenting a multi-model fused femoral neck fracture region according to claim 5, wherein in step 2.6 the region candidate module construction process is as follows:
step 2.6.1: obtaining a feature F_rpn from the feature F_detection of step 2.5 through a 3 × 3 convolution operation;
step 2.6.2: passing the feature F_rpn through two separate 1 × 1 convolution operations to obtain a feature F_rpn_cls and a feature F_rpn_reg, where F_rpn_reg represents the RPN-suggested candidate boxes and F_rpn_cls represents the probability that each candidate box in F_rpn_reg is foreground or background;
step 2.6.3: retaining the candidate boxes in F_rpn_reg whose F_rpn_cls value is greater than or equal to 0.5, keeping only the foreground boxes and removing the background boxes.
7. The method for locating and segmenting the multi-model fused femoral neck fracture region according to claim 5, wherein in the step 2.7, the region of interest pooling module construction process is as follows:
step 2.7.1: cropping the feature F_detection from step 2.5 according to the candidate boxes obtained in step 2.6 to obtain feature maps F_cropped;
step 2.7.2: inputting the cropped feature maps F_cropped into a 2 × 2 global pooling layer;
step 2.7.3: in the global pooling layer, pooling the feature map F_cropped into a one-dimensional vector feature F_ROI of size 512 × 2 × 2, where the pooling window sizes and strides are:
size_w = ⌈W / 2⌉, size_h = ⌈H / 2⌉
s_w = ⌊W / 2⌋, s_h = ⌊H / 2⌋
where H and W respectively represent the height and width of the feature map F_cropped; size_w and size_h represent the pooling window sizes; s_w and s_h represent the strides of the pooling window over the width and height of the feature map; and the symbols ⌊·⌋ and ⌈·⌉ denote rounding down and rounding up, respectively.
8. The multi-model fused femoral neck fracture region localization and segmentation method of claim 1, wherein the segmentation network comprises a feature extraction model and a segmentation network model, wherein the construction of the feature extraction model comprises:
step 3.1: inputting a group of femoral neck region image samples processed by the detection neural network;
step 3.2: performing a 7 × 7 convolution operation followed by batch normalization and a ReLU activation function operation;
step 3.3: extracting features through a max pooling operation;
step 3.4: passing the extracted features through a dense convolution module containing 4 groups of batch normalization, ReLU activation function operations, 1 × 1 convolution operations, and 3 × 3 convolution operations;
step 3.5: inputting the features from step 3.4 into a 1 × 1 convolution operation and an average pooling layer operation with a 2 × 2 convolution kernel to obtain a feature F_1;
step 3.6: taking the feature F_1 as input and repeating step 3.4 and step 3.5 twice in sequence to obtain features F_2 and F_3 respectively; taking the feature F_3 as input and performing step 3.4 once more to obtain a feature F_4.
9. The multi-model fused femoral neck fracture region localization and segmentation method of claim 8, wherein the segmentation network model comprises:
step 3.7: upsampling the feature F_4 obtained in step 3.6 and fusing it with the feature F_3 obtained in step 3.6 to obtain a feature F_upsample1;
step 3.8: upsampling the feature F_upsample1 and fusing it with the feature F_2 obtained in step 3.6 to obtain a feature F_upsample2;
step 3.9: upsampling the feature F_upsample2 and fusing it with the feature F_1 obtained in step 3.5 to obtain a feature F_upsample3;
step 3.10: upsampling the feature F_upsample3 to obtain a feature F_upsample4, and obtaining the segmentation effect map Picture_binary through a sigmoid activation function.
10. A multi-model fused femoral neck fracture region localization and segmentation system, comprising:
a preprocessing unit, used for preprocessing the acquired femoral neck X-ray image;
a detection unit, which inputs the preprocessed X-ray image into a constructed and trained detection neural network for detection to obtain a femoral neck region image Picture_origin;
a segmentation unit, which inputs the femoral neck region image Picture_origin into a constructed and trained segmentation network to obtain a segmentation effect map Picture_binary; and
a thermodynamic diagram generation unit, which fuses the obtained segmentation effect map Picture_binary with the femoral neck region image Picture_origin to form a thermodynamic diagram that characterizes the fracture region and the extent of the fracture.
CN201911156864.1A 2019-11-22 2019-11-22 Multi-model fusion femoral neck fracture region positioning and segmentation method and system Withdrawn CN111008974A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911156864.1A CN111008974A (en) 2019-11-22 2019-11-22 Multi-model fusion femoral neck fracture region positioning and segmentation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911156864.1A CN111008974A (en) 2019-11-22 2019-11-22 Multi-model fusion femoral neck fracture region positioning and segmentation method and system

Publications (1)

Publication Number Publication Date
CN111008974A 2020-04-14

Family

ID=70113775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911156864.1A Withdrawn CN111008974A (en) 2019-11-22 2019-11-22 Multi-model fusion femoral neck fracture region positioning and segmentation method and system

Country Status (1)

Country Link
CN (1) CN111008974A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108492297A (en) * 2017-12-25 2018-09-04 重庆理工大学 The MRI brain tumors positioning for cascading convolutional network based on depth and dividing method in tumor
CN109949270A (en) * 2019-01-28 2019-06-28 西北工业大学 Multispectral and full-colour image based on region convolutional network merges space quality evaluation method
CN110232380A (en) * 2019-06-13 2019-09-13 应急管理部天津消防研究所 Fire night scenes restored method based on Mask R-CNN neural network
CN110310292A (en) * 2019-06-28 2019-10-08 浙江工业大学 A kind of wrist portion reference bone dividing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Zhixi et al., Xuzhou: China University of Mining and Technology Press *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111714145A (en) * 2020-05-27 2020-09-29 浙江飞图影像科技有限公司 Femoral neck fracture detection method and system based on weak supervision segmentation
CN111667474A (en) * 2020-06-08 2020-09-15 杨天潼 Fracture identification method, apparatus, device and computer readable storage medium
CN111915554A (en) * 2020-06-19 2020-11-10 杭州深睿博联科技有限公司 Fracture detection and positioning integrated method and device based on X-ray image
CN111915554B (en) * 2020-06-19 2024-05-14 杭州深睿博联科技有限公司 Fracture detection and positioning integrated method and device based on X-ray image
CN113705613A (en) * 2021-07-27 2021-11-26 浙江工业大学 X-ray sheet distal radius fracture classification method based on spatial position guidance
CN113705613B (en) * 2021-07-27 2024-02-02 浙江工业大学 X-ray radius distal fracture classification method based on spatial position guidance
CN113706695A (en) * 2021-09-01 2021-11-26 杭州柳叶刀机器人有限公司 System and method for performing 3D femoral head modeling through deep learning and storage medium
CN113706695B (en) * 2021-09-01 2023-06-23 杭州柳叶刀机器人有限公司 System and method for deep learning 3D femoral head modeling and storage medium
CN113850827A (en) * 2021-11-29 2021-12-28 广东电网有限责任公司肇庆供电局 Method for detecting and processing broken strand image caused by abrasion of ground wire and horizontal iron
CN114494192A (en) * 2022-01-26 2022-05-13 西南交通大学 Deep learning-based thoracolumbar fracture identification, segmentation, detection and positioning method
CN117152256A (en) * 2023-10-30 2023-12-01 中国人民解放军总医院第一医学中心 Pelvis model channel positioning method and device based on templates
CN117152256B (en) * 2023-10-30 2024-02-13 中国人民解放军总医院第一医学中心 Pelvis model channel positioning method and device based on templates


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhang Yuehua

Inventor after: Ye Taotao

Inventor before: Hao Pengyi

Inventor before: Ye Taotao

Inventor before: Zhang Yuehua

WW01 Invention patent application withdrawn after publication

Application publication date: 20200414