CN116051813A - Full-automatic intelligent lumbar vertebra positioning and identifying method and application - Google Patents


Info

Publication number
CN116051813A
CN116051813A
Authority
CN
China
Prior art keywords
lumbar vertebra
lumbar
key point
segmentation
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310079137.XA
Other languages
Chinese (zh)
Inventor
张逸凌
刘星宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Longwood Valley Medtech Co Ltd
Original Assignee
Longwood Valley Medtech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Longwood Valley Medtech Co Ltd filed Critical Longwood Valley Medtech Co Ltd
Priority to CN202310079137.XA priority Critical patent/CN116051813A/en
Publication of CN116051813A publication Critical patent/CN116051813A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/24 - Aligning, centring, orientation detection or correction of the image
    • G06V 10/245 - Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 - Recognition of patterns in medical or anatomical images
    • G06V 2201/033 - Recognition of patterns in medical or anatomical images of skeletal patterns
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a fully automatic intelligent lumbar vertebra positioning and identification method and its application. The method comprises the following steps: obtaining a two-dimensional spine image by projection in the sagittal plane direction, and obtaining the lumbar vertebra key point positions of the two-dimensional spine image using a deep-learning-based lumbar vertebra key point extraction model; fitting a lumbar vertebra curve based on the key point positions, thereby restoring the actual lumbar vertebra position in the two-dimensional spine image; extracting single-vertebra regions from the two-dimensional spine image based on the key point positions and the lumbar vertebra curve to obtain single-vertebra images; segmenting the lumbar vertebra portion in each single-vertebra image using a deep-learning-based lumbar vertebra segmentation model to obtain an initial lumbar vertebra segmentation result; and separating the adhesion regions between vertebrae in the initial segmentation result using a concave-point detection segmentation algorithm to obtain the target lumbar vertebra segmentation result.

Description

Full-automatic intelligent lumbar vertebra positioning and identifying method and application
Technical Field
The application relates to the technical field of image processing, and in particular to a fully automatic intelligent lumbar vertebra positioning and identification method and its application.
Background
In recent years, segmentation of vertebral bodies from spinal CT images has become critical for pathological diagnosis, surgical planning, and post-operative evaluation. However, pathological anatomical variation, noise introduced by screws and implants, and widely varying fields of view make automatic segmentation of lumbar CT images difficult. The high similarity between adjacent vertebrae can interfere with surgical planning, so accurate segmentation of individual vertebral bodies is of great significance. The lumbar segmentation task aims to segment the lumbar vertebrae from multimodal CT images. Manual segmentation, however, is cumbersome, time-consuming, and yields poor segmentation accuracy.
Disclosure of Invention
The embodiment of the application aims to provide a fully automatic intelligent lumbar vertebra positioning and identification method and its application, so as to solve the problems of cumbersome procedure, long processing time, and poor segmentation accuracy in existing lumbar segmentation methods.
In order to achieve the above objective, an embodiment of the present application provides a fully automatic intelligent lumbar vertebra positioning and identification method, comprising the steps of: obtaining a two-dimensional spine image by projection in the sagittal plane direction, and obtaining the lumbar vertebra key point positions of the two-dimensional spine image using a deep-learning-based lumbar vertebra key point extraction model;
fitting a lumbar vertebra curve based on the lumbar vertebra key point positions, thereby restoring the actual lumbar vertebra position in the two-dimensional spine image;
extracting a single-vertebra region from the two-dimensional spine image based on the lumbar vertebra key point positions and the lumbar vertebra curve to obtain a single-vertebra image;
segmenting the lumbar vertebra portion in the single-vertebra image using a deep-learning-based lumbar vertebra segmentation model to obtain an initial lumbar vertebra segmentation result;
and segmenting the adhesion region of the lumbar vertebra portion in the initial segmentation result using a concave-point detection segmentation algorithm to obtain a target lumbar vertebra segmentation result.
Optionally, the lumbar vertebra key point extraction model includes:
the HigherHRNet neural network as the model backbone, with average pooling, a multi-class loss function, the Adam optimization function, the ReLU activation function, and a softmax classifier for classification.
Optionally, before obtaining the lumbar vertebra key point positions of the two-dimensional spine image using the deep-learning-based key point extraction model, the method further comprises:
acquiring a spinal medical image dataset for model construction, manually annotating the lumbar vertebra key point positions to obtain annotation files, and dividing the spinal medical image data and the corresponding annotation files, both converted into picture format, into a training set, a validation set and/or a test set;
and training an initial HigherHRNet neural network using the training set, validation set and/or test set to obtain the lumbar vertebra key point extraction model.
Optionally, the lumbar vertebra segmentation model includes:
an encoder and decoder based on the HI-Net neural network structure, wherein each residual block in the lumbar segmentation model has two 3D convolutional layers.
Optionally, before segmenting the lumbar vertebra portion in the single-vertebra image using the deep-learning-based lumbar vertebra segmentation model, the method further comprises:
acquiring a spinal medical image dataset for model construction, manually annotating it, extracting the labels containing the lumbar portions as annotation files, and dividing the spinal medical image data and the corresponding annotation files, both converted into picture format, into a training set, a validation set and/or a test set;
and training an initial HI-Net neural network using the training set, validation set and/or test set to obtain the lumbar vertebra segmentation model.
Optionally, extracting the single-vertebra region from the two-dimensional spine image based on the lumbar vertebra key point positions and the lumbar vertebra curve includes:
setting the distance between two adjacent lumbar vertebra key points as the length of a bounding box, drawing a line perpendicular to the fitted lumbar vertebra curve at the midpoint between the two adjacent key points, extending the perpendicular to the bounding-box length, and extracting the single-vertebra region from the two-dimensional spine image based on the perpendicular and the lumbar vertebra curve.
Optionally, segmenting the adhesion region of the lumbar vertebra portion in the initial segmentation result using the concave-point detection segmentation algorithm includes:
obtaining the minimal convex hull of the initial lumbar vertebra segmentation result;
subtracting the original concave shape from its convex hull to obtain the concave regions;
extracting the contours of the concave regions, selecting the two largest regions (weighted by area) as the regions containing the concave points, traversing these two regions, and taking the two points with the shortest mutual distance as the concave points;
and segmenting based on the two concave points.
Optionally, the method further comprises: performing three-dimensional reconstruction on the target lumbar vertebra segmentation result to obtain a three-dimensional image of the lumbar region.
In order to achieve the above-mentioned purpose, the application further provides a fully automatic intelligent lumbar vertebra positioning and identification device, comprising:
a lumbar vertebra key point extraction module, used for obtaining a two-dimensional spine image by projection in the sagittal plane direction and obtaining the lumbar vertebra key point positions of the two-dimensional spine image using a deep-learning-based lumbar vertebra key point extraction model;
a lumbar vertebra curve fitting module, used for fitting a lumbar vertebra curve based on the lumbar vertebra key point positions, thereby restoring the actual lumbar vertebra position in the two-dimensional spine image;
a single-vertebra region extraction module, used for extracting a single-vertebra region from the two-dimensional spine image based on the key point positions and the lumbar vertebra curve to obtain a single-vertebra image;
an initial lumbar vertebra segmentation module, used for segmenting the lumbar vertebra portion in the single-vertebra image using a deep-learning-based lumbar vertebra segmentation model to obtain an initial lumbar vertebra segmentation result;
and a concave-point segmentation module, used for segmenting the adhesion region of the lumbar vertebra portion in the initial segmentation result using a concave-point detection segmentation algorithm to obtain a target lumbar vertebra segmentation result.
To achieve the above object, the present application also provides a computer storage medium having stored thereon a computer program which, when executed by a machine, implements the steps of the method as described above.
The embodiment of the application has the following advantages:
1. The embodiment of the application provides a fully automatic intelligent lumbar vertebra positioning and identification method, comprising the following steps: obtaining a two-dimensional spine image by projection in the sagittal plane direction, and obtaining the lumbar vertebra key point positions of the two-dimensional spine image using a deep-learning-based lumbar vertebra key point extraction model; fitting a lumbar vertebra curve based on the key point positions, thereby restoring the actual lumbar vertebra position in the two-dimensional spine image; extracting single-vertebra regions from the two-dimensional spine image based on the key point positions and the lumbar vertebra curve to obtain single-vertebra images; segmenting the lumbar vertebra portion in each single-vertebra image using a deep-learning-based lumbar vertebra segmentation model to obtain an initial segmentation result; and separating the adhesion regions in the initial segmentation result using a concave-point detection segmentation algorithm to obtain the target lumbar vertebra segmentation result.
According to the method, the lumbar vertebra key point positions of the two-dimensional spine image are obtained using a deep-learning-based key point extraction model, and the lumbar vertebra portion in each single-vertebra image is segmented using a deep-learning-based segmentation model. Neither manual marking of key points nor manual segmentation of the lumbar vertebrae is required, which overcomes the cumbersome and time-consuming nature of manual segmentation.
2. Further, the lumbar vertebra segmentation model includes an encoder and decoder based on the HI-Net neural network structure, wherein each residual block in the model has two 3D convolutional layers.
In order to extract more detailed information from the CT image and alleviate under-segmentation of low-contrast edges, 3D convolution kernels and a multi-scale residual structure are introduced into the HI-Net network. This structure can learn more complex features by exploiting dense connections within different orthogonal views, so that, combined with the concave-point detection segmentation algorithm in the subsequent step, the edges of adjacent vertebral bodies are clearly distinguished and the accuracy of the segmentation result is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application and the technical solutions in the prior art, the drawings used in their description are briefly introduced below. It will be apparent to those of ordinary skill in the art that the drawings in the following description are exemplary only, and that other implementations can be derived from them without inventive effort.
Fig. 1 is a flowchart of a full-automatic intelligent lumbar vertebra positioning and identifying method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a segmentation result of a full-automatic intelligent lumbar positioning and identifying method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of the HigherHRNet neural network of the full-automatic intelligent lumbar positioning and identifying method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of an HI-Net neural network structure of a full-automatic intelligent lumbar vertebra positioning and identifying method according to an embodiment of the present application;
fig. 5 is a schematic diagram of the residual inception block structure of the full-automatic intelligent lumbar vertebra positioning and identifying method according to an embodiment of the present application;
fig. 6 is a three-dimensional reconstruction effect diagram of a full-automatic intelligent lumbar vertebra positioning and identifying method according to an embodiment of the present application;
fig. 7 is a block diagram of a full-automatic intelligent lumbar positioning and identifying device according to an embodiment of the present application;
fig. 8 is a block diagram of a full-automatic intelligent lumbar positioning and identifying electronic device according to an embodiment of the present application.
Detailed Description
Other advantages and benefits of the present application will become apparent to those skilled in the art from the following description of specific embodiments, which describes some, but not all, of the possible embodiments. All other embodiments obtainable by one of ordinary skill in the art without inventive effort based on the present disclosure fall within the scope of protection of the present application.
In addition, the technical features described below in the different embodiments of the present application may be combined with each other as long as they do not collide with each other.
An embodiment of the present application provides a fully automatic intelligent lumbar positioning and identification method. Referring to fig. 1, fig. 1 is a flowchart of the method provided in an embodiment of the present application. It should be understood that the method may include additional blocks not shown, and/or that shown blocks may be omitted; the scope of the present application is not limited in this respect.
At step 101, a two-dimensional spine image projected in the sagittal plane direction is acquired, and the lumbar vertebra key point positions of the two-dimensional spine image are obtained using a deep-learning-based lumbar vertebra key point extraction model. Reference is made to fig. 2, a schematic diagram of the segmentation results of the method. The pipeline in fig. 2 comprises sagittal plane projection to obtain a two-dimensional spine image, lumbar vertebra positioning, curve fitting to obtain detection bounding boxes, extraction of the individual vertebrae, initial lumbar segmentation and concave-point segmentation, and finally three-dimensional reconstruction, as described in detail in the following embodiments.
In some embodiments, the lumbar vertebra key point extraction model comprises:
the HigherHRNet neural network as the model backbone, with average pooling, a multi-class loss function, the Adam optimization function, the ReLU activation function, and a softmax classifier for classification.
Specifically, HigherHRNet is used as the model backbone for lumbar key point extraction.
Mainstream bottom-up key point detection methods predict heatmaps at 1/4 of the input resolution, which limits key point localization accuracy. This embodiment therefore uses the HigherHRNet key point detection neural network, which produces higher-resolution features.
The HigherHRNet neural network uses a high-resolution feature pyramid to address the multi-scale problem. Conventional feature pyramids typically start from a small resolution and obtain 1/4-resolution features through a series of upsampling operations; the high-resolution feature pyramid used by HigherHRNet instead starts from the 1/4-resolution features and obtains higher-resolution features through transposed convolution. During training, multi-resolution supervision is used so that features at different levels learn information at different scales. At the same time, multi-resolution fusion uniformly upsamples the heatmaps of different resolutions to the original size and fuses them together, yielding scale-sensitive features.
Transposed convolution (deconvolution) is the inverse of the convolution operation. The theoretical basis of convolution is translation invariance, and convolution reduces dimensionality; deconvolution takes image features as input and outputs an image, playing a restoring role. The main purpose of deconvolution in HigherHRNet is to generate higher-resolution features to improve accuracy. The HigherHRNet network structure is shown in fig. 3, a schematic structural diagram of the HigherHRNet neural network of the method according to an embodiment of the present application. The architecture includes three network branches, where each lower branch is half the size of the branch above it, and each branch consists of convolutional layers. The feature maps are repeatedly convolved to reduce resolution and then repeatedly upsampled to restore resolution, while repeated cross-scale additions maintain the high-resolution features and reduce feature loss.
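The upsampling role of transposed convolution can be illustrated with the zero-insertion view: insert zeros between input samples, then apply an ordinary convolution. The sketch below is an illustrative 1-D NumPy toy under that interpretation, not the HigherHRNet implementation.

```python
import numpy as np

def transposed_conv1d(x, kernel, stride=2):
    """Upsample x by inserting (stride - 1) zeros between samples,
    then apply an ordinary 'full' convolution with the kernel.
    This is the zero-insertion view of transposed convolution."""
    up = np.zeros(len(x) * stride - (stride - 1))
    up[::stride] = x
    return np.convolve(up, kernel, mode="full")

feat = np.array([1.0, 2.0, 3.0])               # a low-resolution feature row
out = transposed_conv1d(feat, np.array([1.0, 1.0]))
# out has shape (6,): the output resolution is higher than the input's
```

With this toy kernel each input value is simply duplicated, which makes the resolution increase easy to see; a learned kernel would instead produce a smooth, trainable upsampling.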
In some embodiments, the method of constructing the lumbar vertebra key point extraction model comprises:
acquiring a spinal medical image dataset for model construction, manually annotating the lumbar vertebra key point positions to obtain annotation files, and dividing the spinal medical image data and the corresponding annotation files, both converted into picture format, into a training set, a validation set and/or a test set;
and training an initial HigherHRNet neural network using the training set, validation set and/or test set to obtain the lumbar vertebra key point extraction model.
Specifically, a spinal medical image dataset is acquired and manually annotated; for the lumbar positioning labels, the lumbar key point positions are marked to build a recognition database. The two-dimensional cross-sectional DICOM data are converted into PNG images, the label masks (annotation files) are likewise converted into PNG images, and after shuffling, the images are divided into a training set, a validation set and a test set in a 6:2:2 ratio.
The two-dimensional images and their corresponding masks are then fed into the HigherHRNet neural network for training, which finally yields the lumbar vertebra key point positions.
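The shuffled 6:2:2 split described above can be sketched as follows. The file names are hypothetical placeholders; the real pipeline operates on PNG-converted DICOM slices and their mask files.

```python
import random

def split_dataset(samples, ratios=(0.6, 0.2, 0.2), seed=42):
    """Shuffle and split samples into train/val/test sets
    in the 6:2:2 ratio described in the text."""
    items = list(samples)
    random.Random(seed).shuffle(items)       # deterministic shuffle
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# hypothetical PNG slice/mask file-name pairs
pairs = [(f"slice_{i:03d}.png", f"mask_{i:03d}.png") for i in range(100)]
train, val, test = split_dataset(pairs)
# len(train), len(val), len(test) -> 60, 20, 20
```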
At step 102, a lumbar curve is fitted based on the lumbar keypoint locations, thereby restoring the actual location of the lumbar spine in the two-dimensional spinal image. Reference is made to fig. 2.
Specifically, a lumbar curve must be fitted through the detected lumbar key points in order to derive the lumbar detection boxes; this embodiment uses cubic spline interpolation to fit the curve.
Cubic spline interpolation is a common interpolation method characterized by a simple construction, ease of use, accurate fitting, and the important convexity-preserving property. It fits the lumbar curve well and restores the actual position of the lumbar spine.
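As a sketch of this curve-fitting step, `scipy.interpolate.CubicSpline` fits a cubic spline through the detected key points. The coordinates below are made-up stand-ins for the L1 to L5 vertebra centres on the sagittal projection, not values from the patent.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# hypothetical lumbar key points: (row, column) centres of L1-L5
rows = np.array([120.0, 160.0, 205.0, 255.0, 310.0])
cols = np.array([240.0, 250.0, 255.0, 252.0, 240.0])

# fit the column position as a cubic spline of the row coordinate
curve = CubicSpline(rows, cols)

# sample the fitted curve densely; it passes exactly through every key point
dense_rows = np.linspace(rows[0], rows[-1], 200)
dense_cols = curve(dense_rows)
```

The dense samples can then be used to take midpoints and perpendiculars for the bounding-box construction of the next step.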
At step 103, a single-vertebra region is extracted from the two-dimensional spine image based on the lumbar key point positions and the lumbar curve, resulting in a single-vertebra image.
In some embodiments, the method of extracting the single-vertebra region from the two-dimensional spine image based on the key point positions and the lumbar curve comprises:
setting the distance between two adjacent lumbar key points as the bounding-box length, drawing a line perpendicular to the fitted lumbar curve at the midpoint between the two key points, extending the perpendicular to the bounding-box length, and extracting the single-vertebra region from the two-dimensional spine image based on the perpendicular and the curve. Reference is made to fig. 2.
Specifically, the distance between two adjacent lumbar key points is calculated as the bounding-box length, a perpendicular to the fitted lumbar curve is drawn at the midpoint between the two key points and extended to the bounding-box length, and the single-vertebra image obtained from the perpendicular and the curve is used for per-vertebra segmentation of the lumbar spine.
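The bounding-box geometry just described (box length equal to the key-point spacing, perpendicular taken at the midpoint) can be sketched as below. For simplicity the perpendicular is taken against the chord between the two key points rather than the fitted spline, which is a reasonable local approximation.

```python
import numpy as np

def vertebra_box(p1, p2):
    """Sketch of the bounding-box construction: the distance between two
    adjacent key points is the box length; a perpendicular of that same
    length, centred at the segment midpoint, gives the box's width axis."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    length = np.linalg.norm(p2 - p1)
    mid = (p1 + p2) / 2
    direction = (p2 - p1) / length
    normal = np.array([-direction[1], direction[0]])  # perpendicular unit vector
    half = length / 2
    # four corners of the oriented square box
    return [mid + half * (s1 * direction + s2 * normal)
            for s1, s2 in ((-1, -1), (-1, 1), (1, 1), (1, -1))]

# two hypothetical adjacent key points, 10 px apart along the y axis
corners = vertebra_box((0, 0), (0, 10))
# corners span a 10 x 10 box centred between the key points
```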
At step 104, the lumbar vertebra portion in the single-vertebra image is segmented using a deep-learning-based lumbar vertebra segmentation model, resulting in an initial lumbar segmentation result. Reference is made to fig. 2.
In some embodiments, the lumbar segmentation model comprises:
an encoder and decoder based on the HI-Net neural network structure, wherein each residual block in the lumbar segmentation model has two 3D convolutional layers.
Specifically, this embodiment uses the high-density HI-Net neural network to segment the lumbar vertebra in the single-vertebra image; it captures multi-scale information by factorizing the 3D weighted convolutional layers in the residual inception blocks. Exploiting feature reuse, this embodiment uses ultra-dense connections between the deconvolution layers to extract more contextual information, while the DICE loss function handles class imbalance.
In order to extract more detailed information from the CT image and alleviate under-segmentation of low-contrast edges, 3D convolution kernels and a multi-scale residual structure are introduced into the HI-Net network. This structure can learn more complex features by exploiting dense connections within different orthogonal views, so that, combined with the concave-point detection segmentation algorithm in the subsequent step, the edges of adjacent vertebral bodies are clearly distinguished and the accuracy of the segmentation result is improved.
Fig. 4 shows the HI-Net network architecture. The left side of the network in fig. 4 serves as an encoder that extracts features at different levels, while the right side serves as a decoder that aggregates the features and produces the segmentation masks. The modified residual inception block of the encoder-decoder subnetwork has two 3D convolutional layers, each following the structure of fig. 5. In fig. 5, the block comprises four branches: the first three branches extract features through convolution kernels of different scales, and the fourth branch is a skip connection that passes the input features through unchanged. Finally, the outputs of the four branches are fused by a 1 x 1 convolution. This allows more complex features to be learned through dense connections within different orthogonal views.
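A minimal PyTorch sketch of such a four-branch block is given below. The kernel sizes (1, 3, 5) are assumptions, since the patent does not specify them; this illustrates the branch-and-fuse pattern, not the actual HI-Net code.

```python
import torch
import torch.nn as nn

class InceptionStyleBlock3D(nn.Module):
    """Sketch of the four-branch block described in the text: three
    branches with different 3D kernel scales (assumed sizes), a fourth
    identity/skip branch, all fused by a 1x1x1 convolution."""
    def __init__(self, channels):
        super().__init__()
        self.b1 = nn.Conv3d(channels, channels, kernel_size=1)
        self.b2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.b3 = nn.Conv3d(channels, channels, kernel_size=5, padding=2)
        self.fuse = nn.Conv3d(4 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # fourth branch is the input itself (skip cascade)
        branches = [self.b1(x), self.b2(x), self.b3(x), x]
        return self.act(self.fuse(torch.cat(branches, dim=1)))

block = InceptionStyleBlock3D(8)
out = block(torch.randn(1, 8, 16, 16, 16))  # shape is preserved
```

Because every branch preserves the spatial size, two such blocks can be stacked directly, matching the "two 3D convolutional layers per residual block" description.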
In the encoding stage, the encoder extracts features at multiple scales and generates feature maps from fine to coarse. The fine feature maps contain lower-level features and more spatial information, while the coarse feature maps provide the opposite. Skip connections combine the fine and coarse feature maps to achieve accurate segmentation.
In some embodiments, the method of constructing the initial lumbar segmentation model comprises:
acquiring a spine medical image data set for model construction, carrying out manual labeling, extracting a label containing a lumbar part as a labeling file, and dividing the spine medical image data converted into a picture format and the corresponding labeling file converted into the picture format into a training set, a verification set and/or a test set;
And training the initial HI-Net neural network by using the training set, the verification set and/or the test set to obtain the lumbar vertebra segmentation model.
Specifically, a spine medical image dataset is obtained and the lumbar regions are manually annotated; only the labels containing the lumbar portions are extracted as segmentation masks (annotation files) to build a segmentation database. The DICOM data of the two-dimensional cross-sections and the corresponding masks are both converted into PNG images, which are shuffled and then divided into a training set, a verification set and a test set at a ratio of 6:2:2.
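The shuffle-and-split step can be sketched as follows; a minimal Python illustration assuming paired image/mask PNG filenames (the names and fixed seed are illustrative, not from the patent):

```python
import random

def split_dataset(items, ratios=(0.6, 0.2, 0.2), seed=42):
    """Shuffle the paired image/mask file list, then divide it into
    training, validation and test sets at the stated 6:2:2 ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)  # shuffle before splitting
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# Hypothetical paired slice/mask filenames
pairs = [(f"slice_{i:04d}.png", f"mask_{i:04d}.png") for i in range(100)]
train, val, test = split_dataset(pairs)
print(len(train), len(val), len(test))  # 60 20 20
```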
Each single vertebra is input into the HI-Net network for training to obtain the MASK of that vertebra, and the MASKs of the individual vertebrae are combined to obtain the MASK of the lumbar region.
During model training, the batch_size is 64 and the initial learning rate is set to 1e-4, with a learning-rate decay strategy in which the learning rate is multiplied by a decay factor of 0.9 at each decay step. The optimizer is Adam and the loss function is DICE loss. Validation on the training set and the verification set is performed once every 1000 iterations, and the early-stopping method is used to decide when to stop network training, yielding the final model.
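The DICE loss and decay schedule above can be sketched as follows; a minimal numpy illustration, where the assumption that the 0.9 decay is applied once per 1000-iteration validation cycle is ours (the text is ambiguous about the decay interval):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """DICE loss: 1 - 2|P∩T| / (|P| + |T|) on binary (or soft) masks."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def learning_rate(step, base_lr=1e-4, decay=0.9, every=1000):
    """Learning rate after `step` iterations, multiplied by 0.9 per decay
    step (assumed here to coincide with the 1000-iteration validation)."""
    return base_lr * decay ** (step // every)

mask = np.zeros((64, 64))
mask[20:40, 20:40] = 1.0
print(dice_loss(mask, mask))   # 0.0: perfect overlap gives zero loss
print(learning_rate(2000))     # 1e-4 * 0.9**2
```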
At step 105, the adhesion regions of the lumbar portion in the initial lumbar segmentation result are segmented using a pit detection segmentation algorithm, yielding the target lumbar segmentation result. Reference is made to fig. 2.
In particular, adhesion between vertebral bodies in lumbar images leaves the boundaries unclear, which easily degrades segmentation. Starting from the initial lumbar segmentation result produced by HI-Net, a pit detection method combined with gradients is introduced: edges are detected using gradient information, the detected edge points are stored, the pits of each connected region are detected, and the pits are then matched. Through this segmentation, the edges of the individual vertebral bodies can be clearly resolved.
In some embodiments, the method for segmenting the adhesion region of the lumbar portion in the initial lumbar segmentation result using the pit detection segmentation algorithm comprises:
acquiring a minimum convex closure in the initial lumbar vertebrae segmentation result;
subtracting the original concave shape from the minimum convex closure to obtain the concave regions;
extracting the contours of the concave regions, selecting the two largest regions (weighted by area) as the regions where the pits are located, traversing these two regions, and taking the two points with the shortest distance between them as the pits;
Segmentation is based on two of said pits.
Specifically, first, the minimum convex closure of the image is found;
secondly, the concave shape is subtracted from its convex closure to obtain the concave regions;
then, the contours of the concave regions are extracted, the two largest regions (weighted by area) are selected as the regions where the pits are located, and the two regions are traversed to find the two points with the shortest distance, which are taken as the pits;
finally, the segmentation is performed based on the two pits.
For the adhesion portion, it is marked by the concave-boundary method so as to distinguish the vertebral-body regions.
The MASK of HI-Net segmentation is subjected to pit segmentation for distinguishing adhesion areas and precisely segmenting vertebral edge areas.
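The pit-matching and splitting steps above can be sketched as follows. This is a minimal numpy illustration assuming the two largest concave regions have already been obtained by subtracting the mask from its convex hull; `find_pits` and `split_mask` are illustrative names, not from the patent:

```python
import numpy as np

def find_pits(region_a, region_b):
    """Given the two largest concave regions as (N, 2) pixel-coordinate
    arrays, return the pair of points with the shortest distance between
    them -- the two matched pits."""
    d = np.linalg.norm(region_a[:, None, :] - region_b[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return region_a[i], region_b[j]

def split_mask(mask, p, q):
    """Cut the adhesion by zeroing the pixels along the line p-q."""
    n = int(np.hypot(*(q - p)) * 2) + 1        # oversample so no pixel is skipped
    rr = np.linspace(p[0], q[0], n).round().astype(int)
    cc = np.linspace(p[1], q[1], n).round().astype(int)
    out = mask.copy()
    out[rr, cc] = 0
    return out

# Toy example: a 5x5 adhesion mask whose concave regions meet at column 2
mask = np.ones((5, 5), dtype=int)
region_a = np.array([[0, 2], [0, 3]])   # pixels of the largest concave region
region_b = np.array([[4, 2], [4, 0]])   # pixels of the second-largest one
p, q = find_pits(region_a, region_b)
separated = split_mask(mask, p, q)
```

In practice the splitting line would separate the MASK into two connected components, one per vertebral body.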
In some embodiments, further comprising:
and carrying out three-dimensional reconstruction on the target lumbar vertebra segmentation result to obtain a three-dimensional image of the lumbar vertebra part.
Specifically, referring to fig. 6, DICOM data is reconstructed in three dimensions. In fig. 6, the first image represents CT data, the second image represents a target lumbar vertebrae segmentation result, and the third image represents a three-dimensional image result obtained by three-dimensionally reconstructing the segmentation result.
According to the method of the present application, the lumbar key point positions of the two-dimensional spine image are obtained using the deep-learning-based lumbar key point extraction model, and the lumbar portions in the single-vertebra images are segmented using the deep-learning-based lumbar segmentation model. The key point positions no longer need to be determined by manual marking, and the lumbar portions no longer need to be segmented manually, overcoming the tedious and time-consuming nature of manual segmentation.
Further, the lumbar single-vertebral-body positioning method comprises the following steps: projecting the CT image in the sagittal plane direction, locating the lumbar position through a HigherHRNet network, fitting the lumbar curve by cubic spline interpolation, and detecting the single-vertebra positions from the fitted curve combined with the lumbar key points.
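The cubic-spline curve fitting can be sketched as follows. Since the patent names no library, this is a from-scratch natural cubic spline in numpy standing in for the interpolation step; the key point coordinates are illustrative:

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Return a callable natural cubic spline through the key points (x, y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    h = np.diff(x)
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0            # natural boundary: S'' = 0 at the ends
    for i in range(1, n - 1):            # tridiagonal continuity conditions
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)          # second derivatives at the knots

    def s(t):
        i = int(np.clip(np.searchsorted(x, t) - 1, 0, n - 2))
        hi, dx0, dx1 = h[i], t - x[i], x[i + 1] - t
        return ((M[i] * dx1**3 + M[i + 1] * dx0**3) / (6 * hi)
                + (y[i] / hi - M[i] * hi / 6) * dx1
                + (y[i + 1] / hi - M[i + 1] * hi / 6) * dx0)
    return s

# Illustrative sagittal-plane key points (vertical position, horizontal position)
ys = [10.0, 35.0, 60.0, 88.0, 115.0]
xs = [52.0, 55.0, 50.0, 47.0, 53.0]
curve = natural_cubic_spline(ys, xs)     # lumbar curve x = f(y)
```

The fitted curve passes through every key point and restores a smooth lumbar centreline between them.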
Further, the lumbar single-vertebral-body segmentation method comprises the following steps: to extract more detailed information from the CT image and alleviate under-segmentation of low-contrast edges, 3D convolution kernels and a multi-scale residual structure are introduced into the HI-Net network; this structure learns more complex features through dense connections among different orthogonal views. The identified MASK is then combined with the image pit-segmentation method to clearly distinguish the edges of adjacent vertebral bodies, improving the accuracy of the segmentation result.
Referring to fig. 7, the present application further provides a full-automatic intelligent lumbar vertebra positioning and identifying device, including:
lumbar vertebrae key point extraction module 201: used for obtaining a two-dimensional spine image obtained by projection in the sagittal plane direction, and obtaining the lumbar vertebra key point position of the two-dimensional spine image by using a lumbar vertebra key point extraction model based on deep learning;
lumbar curve fitting module 202: fitting a lumbar vertebra curve based on the lumbar vertebra key point positions, so as to restore the actual lumbar vertebra positions in the two-dimensional spine image;
Single cone region extraction module 203: the method is used for extracting a single cone region from the two-dimensional spine image based on the lumbar vertebra key point position and the lumbar vertebra curve to obtain a single cone image;
initial lumbar segmentation module 204: the method comprises the steps of dividing lumbar vertebra parts in a single cone image by using a lumbar vertebra division model based on deep learning to obtain an initial lumbar vertebra division result;
concave segmentation module 205: and the method is used for segmenting the adhesion area of the lumbar vertebra part in the initial lumbar vertebra segmentation result by using a pit detection segmentation algorithm to obtain a target lumbar vertebra segmentation result.
In some embodiments, the lumbar spine keypoint extraction module 201 further comprises a lumbar spine keypoint extraction model building unit: the method comprises the steps of obtaining a spine medical image data set for model construction, manually marking the lumbar vertebra key point positions to obtain a labeling file, and dividing spine medical image data converted into a picture format and the corresponding labeling file converted into the picture format into a training set, a verification set and/or a test set;
and training the initial HigherHRNet neural network by using the training set, the verification set and/or the test set to obtain the lumbar vertebrae key point extraction model.
In some embodiments, the initial lumbar segmentation module 204 further includes a lumbar segmentation model construction unit: the method comprises the steps of acquiring a spine medical image data set for model construction, manually labeling, extracting a label containing a lumbar part as a labeling file, and dividing the spine medical image data converted into a picture format and the corresponding labeling file converted into the picture format into a training set, a verification set and/or a test set;
and training the initial HI-Net neural network by using the training set, the verification set and/or the test set to obtain the lumbar vertebrae segmentation model.
In some embodiments, the system further includes a three-dimensional reconstruction module, configured to perform three-dimensional reconstruction on the target lumbar vertebra segmentation result, so as to obtain a three-dimensional image of the lumbar vertebra portion.
Reference is made to the foregoing method embodiments for specific implementation methods, and details are not repeated here.
Fig. 8 is a block diagram of a full-automatic intelligent lumbar positioning and identifying electronic device according to an embodiment of the present application. The electronic device includes:
a memory 301; and a processor 302 connected to the memory 301, the processor 302 being configured to: obtain a two-dimensional spine image obtained by projection in the sagittal plane direction, and obtain the lumbar vertebra key point position of the two-dimensional spine image by using a lumbar vertebra key point extraction model based on deep learning;
Fitting a lumbar vertebra curve based on the lumbar vertebra key point positions, so as to restore the actual lumbar vertebra positions in the two-dimensional spine image;
based on the lumbar vertebra key point position and the lumbar vertebra curve, extracting a single cone region from the two-dimensional spine image to obtain a single cone image;
dividing lumbar vertebra parts in the single cone image by using a lumbar vertebra segmentation model based on deep learning to obtain an initial lumbar vertebra segmentation result;
and dividing the adhesion area of the lumbar vertebra part in the initial lumbar vertebra dividing result by using a pit detection dividing algorithm to obtain a target lumbar vertebra dividing result.
In some embodiments, the processor 302 is further configured to: the lumbar vertebrae key point extraction model comprises:
and taking the HigherHRNet neural network as the model framework of the lumbar key point extraction model, with average pooling, a multi-class classification loss function, the Adam optimization function, a ReLU activation function, and classification by a softmax classifier.
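The patent does not detail the HigherHRNet internals, but the softmax classification stage over the key point heatmaps can be sketched as follows; a minimal numpy illustration in which the five-keypoint (L1-L5) heatmap layout and the peak positions are assumed for demonstration:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a flattened score vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def decode_keypoints(heatmaps):
    """For each keypoint heatmap, apply softmax over all pixels and take
    the most probable location as the keypoint coordinate."""
    pts = []
    for hm in heatmaps:
        p = softmax(hm.ravel()).reshape(hm.shape)
        r, c = np.unravel_index(np.argmax(p), p.shape)
        pts.append((int(r), int(c)))
    return pts

hms = np.zeros((5, 64, 64))          # five lumbar keypoints, assumed L1..L5
for k in range(5):
    hms[k, 10 + 10 * k, 32] = 5.0    # synthetic network responses
print(decode_keypoints(hms))  # [(10, 32), (20, 32), (30, 32), (40, 32), (50, 32)]
```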
In some embodiments, the processor 302 is further configured to: the method for constructing the lumbar vertebra key point extraction model comprises the following steps:
acquiring a spine medical image data set for model construction, manually marking the lumbar vertebra key point position to obtain a labeling file, and dividing the spine medical image data converted into a picture format and the corresponding labeling file converted into a picture format into a training set, a verification set and/or a test set;
And training the initial HigherHRNet neural network by using the training set, the verification set and/or the test set to obtain the lumbar vertebrae key point extraction model.
In some embodiments, the processor 302 is further configured to: the lumbar vertebrae segmentation model comprises:
an encoder and a decoder based on the HI-Net neural network structure, wherein the residual block in the lumbar segmentation model has two 3D convolution layers.
In some embodiments, the processor 302 is further configured to: the method for constructing the initial lumbar vertebrae segmentation model comprises the following steps:
acquiring a spine medical image data set for model construction, carrying out manual labeling, extracting a label containing a lumbar part as a labeling file, and dividing the spine medical image data converted into a picture format and the corresponding labeling file converted into the picture format into a training set, a verification set and/or a test set;
and training the initial HI-Net neural network by using the training set, the verification set and/or the test set to obtain the lumbar vertebra segmentation model.
In some embodiments, the processor 302 is further configured to: based on the lumbar spine key point position and the lumbar spine curve, the method for extracting the single cone region from the two-dimensional spine image comprises the following steps:
And setting the distance between two adjacent lumbar vertebra key points as the length of a boundary frame, making a vertical line with the fitted lumbar vertebra curve at the midpoint position of the two adjacent lumbar vertebra key points, prolonging the vertical line to the length of the boundary frame, and extracting the single-vertebra region from the two-dimensional spine image based on the vertical line and the lumbar vertebra curve.
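The bounding-box construction above can be sketched as follows; a minimal numpy illustration assuming the curve tangent at the midpoint is available from the fitted spline (the function name and coordinates are illustrative):

```python
import numpy as np

def single_vertebra_line(p1, p2, tangent):
    """Between two adjacent lumbar keypoints: use their distance as the
    bounding-box edge length, and return the endpoints of the line drawn
    through their midpoint perpendicular to the fitted lumbar curve,
    extended to that length."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    L = np.linalg.norm(p2 - p1)                 # bounding-box edge length
    mid = (p1 + p2) / 2.0                       # midpoint of the two keypoints
    t = np.asarray(tangent, float)
    t = t / np.linalg.norm(t)                   # unit tangent of the curve
    n = np.array([-t[1], t[0]])                 # unit normal (perpendicular)
    half = 0.5 * L * n
    return mid - half, mid + half               # endpoints of the vertical line

a, b = single_vertebra_line((0.0, 0.0), (0.0, 30.0), tangent=(0.0, 1.0))
```

The two perpendicular lines at consecutive midpoints, together with the curve, bound one single-vertebra region to crop from the spine image.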
In some embodiments, the processor 302 is further configured to: the method for segmenting the adhesion area of the lumbar vertebra part in the initial lumbar vertebra segmentation result by utilizing the pit detection segmentation algorithm comprises the following steps:
acquiring a minimum convex closure in the initial lumbar vertebrae segmentation result;
subtracting the original concave shape from the minimum convex closure to obtain the concave regions;
extracting the outline of the concave region, selecting the largest two regions as the regions where the concave points are located according to the area size of the region as the weight, traversing the largest two regions, and acquiring the two points with the shortest distance as the concave points;
segmentation is based on two of said pits.
In some embodiments, the processor 302 is further configured to: further comprises:
and carrying out three-dimensional reconstruction on the target lumbar vertebra segmentation result to obtain a three-dimensional image of the lumbar vertebra part.
Reference is made to the foregoing method embodiments for specific implementation methods, and details are not repeated here.
The present application may be a method, apparatus, system, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for performing the various aspects of the present application.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions stored thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., light pulses through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object oriented programming languages such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present application are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of the computer readable program instructions, the electronic circuitry executing the computer readable program instructions.
Various aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Note that all features disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is only one example of a generic set of equivalent or similar features. Where the words "further", "preferably", "still further" or "more preferably" are used, the brief description that follows is given on the basis of the foregoing embodiment, and the content following the word is combined with the foregoing embodiment to form a complete further embodiment. Several such "further", "preferably", "still further" or "more preferably" arrangements following the same embodiment may be combined arbitrarily.
While the application has been described in detail through the general description and the specific embodiments above, it will be apparent to those skilled in the art that certain modifications and improvements may be made on the basis of the application. Accordingly, such modifications or improvements made without departing from the spirit of the application fall within the scope of the invention as claimed.

Claims (10)

1. The full-automatic intelligent lumbar vertebra positioning and identifying method is characterized by comprising the following steps of:
obtaining a two-dimensional spine image obtained by projection in the sagittal plane direction, and obtaining the lumbar vertebra key point position of the two-dimensional spine image by using a lumbar vertebra key point extraction model based on deep learning;
fitting a lumbar vertebra curve based on the lumbar vertebra key point positions, so as to restore the actual lumbar vertebra positions in the two-dimensional spine image;
based on the lumbar vertebra key point position and the lumbar vertebra curve, extracting a single cone region from the two-dimensional spine image to obtain a single cone image;
dividing lumbar vertebra parts in the single cone image by using a lumbar vertebra segmentation model based on deep learning to obtain an initial lumbar vertebra segmentation result;
and dividing the adhesion area of the lumbar vertebra part in the initial lumbar vertebra dividing result by using a pit detection dividing algorithm to obtain a target lumbar vertebra dividing result.
2. The fully automatic intelligent lumbar vertebra positioning and identifying method according to claim 1, wherein the lumbar vertebra key point extraction model comprises:
and taking the HigherHRNet neural network as a model framework of the lumbar key point extraction model, using average pooling, taking a loss function as a multi-classification loss function, taking an optimization function as Adam, using a ReLU activation function, and classifying by a softmax classifier.
3. The fully automatic intelligent lumbar positioning and identification method according to claim 1 or 2, wherein prior to the obtaining the lumbar keypoint locations of the two-dimensional spine image using the deep learning based lumbar keypoint extraction model, the method further comprises:
acquiring a spine medical image data set for model construction, manually marking the lumbar vertebra key point position to obtain a labeling file, and dividing the spine medical image data converted into a picture format and the corresponding labeling file converted into a picture format into a training set, a verification set and/or a test set;
and training the initial HigherHRNet neural network by using the training set, the verification set and/or the test set to obtain the lumbar vertebrae key point extraction model.
4. The fully automatic intelligent lumbar vertebrae positioning and recognition method according to claim 1, wherein the lumbar vertebrae segmentation model comprises:
an encoder and a decoder based on the HI-Net neural network structure, wherein the residual block in the lumbar segmentation model has two 3D convolution layers.
5. The fully automatic intelligent lumbar positioning and identification method according to claim 1 or 4, further comprising, prior to said segmenting lumbar portions in said single cone image using said lumbar segmentation model based on deep learning:
Acquiring a spine medical image data set for model construction, carrying out manual labeling, extracting a label containing a lumbar part as a labeling file, and dividing the spine medical image data converted into a picture format and the corresponding labeling file converted into the picture format into a training set, a verification set and/or a test set;
and training the initial HI-Net neural network by using the training set, the verification set and/or the test set to obtain the lumbar vertebrae segmentation model.
6. The fully automatic intelligent lumbar positioning and identification method according to claim 1, wherein the extracting the single cone region from the two-dimensional spine image based on the lumbar key point position and the lumbar curve comprises:
and setting the distance between two adjacent lumbar vertebra key points as the length of a boundary frame, making a vertical line with the fitted lumbar vertebra curve at the midpoint position of the two adjacent lumbar vertebra key points, prolonging the vertical line to the length of the boundary frame, and extracting the single-vertebra region from the two-dimensional spine image based on the vertical line and the lumbar vertebra curve.
7. The method for fully automatic intelligent lumbar positioning and identification according to claim 1, wherein the segmenting the adhesion area of the lumbar portion in the initial lumbar segmentation result by using the pit detection segmentation algorithm comprises:
Acquiring a minimum convex closure in the initial lumbar vertebrae segmentation result;
subtracting the original concave shape from the minimum convex closure to obtain the concave regions;
extracting the outline of the concave region, selecting the largest two regions as the regions where the concave points are located according to the area size of the region as the weight, traversing the largest two regions, and acquiring the two points with the shortest distance as the concave points;
segmentation is based on two of said pits.
8. The fully automatic intelligent lumbar positioning and identification method according to claim 1, further comprising:
and carrying out three-dimensional reconstruction on the target lumbar vertebra segmentation result to obtain a three-dimensional image of the lumbar vertebra part.
9. Full-automatic intelligent lumbar vertebrae location and recognition device, its characterized in that includes:
lumbar vertebra key point extraction module: used for obtaining a two-dimensional spine image obtained by projection in the sagittal plane direction, and obtaining the lumbar vertebra key point position of the two-dimensional spine image by using a lumbar vertebra key point extraction model based on deep learning;
lumbar vertebra curve fitting module: fitting a lumbar vertebra curve based on the lumbar vertebra key point positions, so as to restore the actual lumbar vertebra positions in the two-dimensional spine image;
single cone region extraction module: the method is used for extracting a single cone region from the two-dimensional spine image based on the lumbar vertebra key point position and the lumbar vertebra curve to obtain a single cone image;
Initial lumbar vertebrae segmentation module: the method comprises the steps of dividing lumbar vertebra parts in a single cone image by using a lumbar vertebra division model based on deep learning to obtain an initial lumbar vertebra division result;
concave segmentation module: and the method is used for segmenting the adhesion area of the lumbar vertebra part in the initial lumbar vertebra segmentation result by using a pit detection segmentation algorithm to obtain a target lumbar vertebra segmentation result.
10. A computer storage medium having stored thereon a computer program, which when executed by a machine performs the steps of the method according to any of claims 1 to 8.
CN202310079137.XA 2023-01-18 2023-01-18 Full-automatic intelligent lumbar vertebra positioning and identifying method and application Pending CN116051813A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310079137.XA CN116051813A (en) 2023-01-18 2023-01-18 Full-automatic intelligent lumbar vertebra positioning and identifying method and application


Publications (1)

Publication Number Publication Date
CN116051813A 2023-05-02

Family

ID=86121822



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100176 Beijing Daxing District, Beijing Economic and Technological Development Zone (Tongzhou), No. 3 Jinghai Fifth Road, Building 19-5, Floors 8-101
Applicant after: Beijing Changmugu Medical Technology Co.,Ltd.
Applicant after: Zhang Yiling
Address before: 100176 2201, 22 / F, building 1, yard 2, Ronghua South Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing
Applicant before: BEIJING CHANGMUGU MEDICAL TECHNOLOGY Co.,Ltd.
Applicant before: Zhang Yiling