CN111401417B - Training method and device for spine fracture area analysis model - Google Patents

Training method and device for spine fracture area analysis model

Info

Publication number
CN111401417B
CN111401417B
Authority
CN
China
Prior art keywords
vertebra
bone
output frame
feature map
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010147315.4A
Other languages
Chinese (zh)
Other versions
CN111401417A (en)
Inventor
颜立峰
何福金
刘小青
俞益洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shenrui Bolian Technology Co Ltd
Shenzhen Deepwise Bolian Technology Co Ltd
Original Assignee
Beijing Shenrui Bolian Technology Co Ltd
Shenzhen Deepwise Bolian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shenrui Bolian Technology Co Ltd, Shenzhen Deepwise Bolian Technology Co Ltd filed Critical Beijing Shenrui Bolian Technology Co Ltd
Priority to CN202010147315.4A priority Critical patent/CN111401417B/en
Publication of CN111401417A publication Critical patent/CN111401417A/en
Application granted granted Critical
Publication of CN111401417B publication Critical patent/CN111401417B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/25 — Fusion techniques
    • G06F 18/253 — Fusion techniques of extracted features
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 — Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 — Recognition of patterns in medical or anatomical images
    • G06V 2201/033 — Recognition of patterns in medical or anatomical images of skeletal patterns
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 — Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 — Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The embodiments of the application provide a training method and device for a spine fracture area analysis model, which solve the problems of low accuracy and low efficiency of existing spine fracture area analysis approaches. The method comprises: inputting a basic frame into a bone vertebra backbone network to obtain a first bone vertebra extraction feature map; inputting the first bone vertebra extraction feature map into a first bone vertebra progressive layer of the N bone vertebra progressive layers to obtain a first bone vertebra output frame; adjusting network parameters of a second bone vertebra progressive layer according to the difference between the first bone vertebra output frame and fracture area standard reference data, and inputting the first bone vertebra output frame into the bone vertebra backbone network to obtain a second extraction feature map; inputting an m-th bone vertebra extraction feature map output by the bone vertebra backbone network into an m-th bone vertebra progressive layer of the N bone vertebra progressive layers to obtain an m-th output frame; and adjusting network parameters of an (m+1)-th bone vertebra progressive layer according to the difference between the m-th output frame and vertebral body standard reference data, and obtaining an (m+1)-th extraction feature map based on the bone vertebra backbone network according to the m-th output frame.

Description

Training method and device for spine fracture area analysis model
Technical Field
The application relates to the technical field of image analysis, in particular to a spine fracture area analysis model training method, a spine fracture area analysis device, electronic equipment and a computer readable storage medium.
Background
The application of deep learning to assisted radiological diagnosis is one of the research hotspots in the field of artificial intelligence. Fractures are a high-incidence condition: orthopedists routinely review large numbers of radiographic images in daily clinical work, and night shifts add a very large reading workload, which has become an urgent problem to solve. With the rapid development of computer and medical technology, the possibility of assisting doctors with artificial intelligence is receiving increasing attention from orthopedists and researchers. On the one hand, a vertebral X-ray film contains many other organs from various parts of the body; these organs can occlude the vertebral bodies and make them hard to distinguish, and many low-density shadows are easily confused with the low-density shadows of a fracture. On the other hand, the types of vertebral fracture and dislocation are diverse: compression fractures and dislocations, for example, have no obvious fracture line, yet the shape and relative position of the vertebral body differ from the normal structure, which is quite different from the signs of a common fracture. In clinical practice, the accuracy of identifying vertebral fracture and dislocation from X-rays alone is therefore lower than for other body parts, while CT-based diagnosis causes more harm to patients and costs more. Studies have shown that deep learning techniques have an inherent advantage over humans in distinguishing overlapping objects and in sensitivity. Therefore, building a device that automatically identifies vertebral fracture and dislocation in radiographic plain films with deep learning methods can help doctors improve the accuracy of vertebral fracture area analysis.
Some automated devices employing deep learning techniques have already been used to provide cues that help physicians detect suspicious lesion areas. At present, detection approaches based on deep learning, such as object detection and instance segmentation, achieve considerably higher accuracy than other machine learning methods.
However, deep learning techniques have not yet been applied to the identification of vertebral fractures. With existing deep learning methods, the spinal fracture lesion area could simply be used as a single training target and fed into an artificial neural network for training. But a spinal plain film is quite variable in content: besides soft tissue, it often includes other parts of the body such as shoulder bones, ribs, and hip bones. Fractures, or low-density shadows resembling fractures, appear in these tissues on spinal plain films with such high frequency that they constitute strong noise relative to the features of a true spinal fracture, which affects the effectiveness and accuracy of algorithm training.
Disclosure of Invention
In view of the above, the embodiments of the present application provide a training method and device for a spine fracture area analysis model, which solve the problems of low accuracy and low efficiency of existing spine fracture area analysis approaches.
According to one aspect of the present application, an embodiment of the present application provides a training method for a spine fracture area analysis model. The spine fracture area analysis model comprises a bone vertebra backbone network for extracting feature maps and N bone vertebra progressive layers respectively connected with the bone vertebra backbone network, the bone vertebra progressive layers being configured to output, based on an input feature map, an output frame comprising a fracture area prediction result, wherein N is an integer greater than or equal to 2. The training method comprises the following steps: inputting a basic frame into the bone vertebra backbone network to obtain a first bone vertebra extraction feature map; inputting the first bone vertebra extraction feature map into a first bone vertebra progressive layer of the N bone vertebra progressive layers to obtain a first bone vertebra output frame; adjusting network parameters of a second bone vertebra progressive layer according to the difference between the first bone vertebra output frame and fracture area standard reference data, and inputting the first bone vertebra output frame into the bone vertebra backbone network to obtain a second extraction feature map; inputting an m-th bone vertebra extraction feature map output by the bone vertebra backbone network into an m-th bone vertebra progressive layer of the N bone vertebra progressive layers to obtain an m-th output frame, wherein m is an integer variable with N ≥ m ≥ 2; and adjusting network parameters of an (m+1)-th bone vertebra progressive layer according to the difference between the m-th output frame and vertebral body standard reference data, and obtaining an (m+1)-th extraction feature map based on the bone vertebra backbone network according to the m-th output frame.
In an embodiment of the present application, obtaining the (m+1)-th extraction feature map based on the bone vertebra backbone network according to the m-th output frame comprises: generating a fusion feature map according to the m-th output frame and a q-th output frame output by a q-th vertebra progressive layer; and inputting the fusion feature map into the bone vertebra backbone network to obtain the (m+1)-th extraction feature map. The q-th output frame is obtained based on the training process of another spine fracture area analysis model, which comprises a vertebra backbone network for extracting feature maps and P vertebra progressive layers respectively connected with the vertebra backbone network, the vertebra progressive layers being configured to output, based on an input feature map, an output frame comprising a vertebral body area prediction result, where P is an integer greater than or equal to 2. The training process of the other spine fracture area analysis model comprises the following steps: inputting a basic frame into the vertebra backbone network to obtain a first vertebra extraction feature map; inputting the first vertebra extraction feature map into a first vertebra progressive layer of the P vertebra progressive layers to obtain a first vertebra output frame; adjusting network parameters of a second vertebra progressive layer according to the difference between the first vertebra output frame and vertebral body standard reference data, and inputting the first vertebra output frame into the vertebra backbone network to obtain a second extraction feature map; inputting a q-th vertebra extraction feature map output by the vertebra backbone network into the q-th vertebra progressive layer of the P vertebra progressive layers to obtain the q-th output frame, where q is an integer variable with P ≥ q ≥ 2; and adjusting network parameters of a (q+1)-th vertebra progressive layer according to the difference between the q-th output frame and fracture area standard reference data, and obtaining a (q+1)-th extraction feature map based on the vertebra backbone network according to the q-th output frame.
In an embodiment of the present application, generating the fusion feature map according to the m-th output frame and the q-th output frame output by the q-th vertebra progressive layer comprises: fusing the m-th output frame and the q-th output frame by feature superposition or feature addition to generate the fusion feature map.
In one embodiment of the present application, the method further comprises: determining the basic frame based on the radiographic plain film, wherein the basic frame is a frame centered on each pixel point.
In one embodiment of the present application, the method further comprises: screening raw data to obtain radiographic plain films with a unified data format; marking the radiographic plain films with the unified data format; and converting the marked radiographic plain films from the unified data format into a natural image format that meets computer recognition and processing requirements.
According to one aspect of the present application, an embodiment of the present application provides a spine fracture area analysis method, comprising: inputting a radiographic plain film into a spine fracture area analysis model built by training according to any one of the methods described above; and taking the output frame output by the last progressive layer of the N bone vertebra progressive layers as the final fracture area prediction result.
According to one aspect of the present application, an embodiment of the present application provides a spine fracture area analysis model training device, where the spine fracture area analysis model comprises a bone vertebra backbone network for extracting feature maps and N bone vertebra progressive layers respectively connected with the bone vertebra backbone network, the bone vertebra progressive layers being configured to output, based on an input feature map, an output frame comprising a fracture area prediction result, where N is an integer greater than or equal to 2. The training device comprises: a feature extraction module configured to input a basic frame into the bone vertebra backbone network to obtain a first bone vertebra extraction feature map; a prediction module configured to input the first bone vertebra extraction feature map into a first bone vertebra progressive layer of the N bone vertebra progressive layers to obtain a first bone vertebra output frame; and an adjustment module configured to adjust network parameters of a second bone vertebra progressive layer according to the difference between the first bone vertebra output frame and fracture area standard reference data. The feature extraction module is further configured to input the first bone vertebra output frame into the bone vertebra backbone network to obtain a second extraction feature map; the prediction module is further configured to input an m-th bone vertebra extraction feature map output by the bone vertebra backbone network into an m-th bone vertebra progressive layer of the N bone vertebra progressive layers to obtain an m-th output frame, where m is an integer variable with N ≥ m ≥ 2; the adjustment module is further configured to adjust network parameters of the (m+1)-th bone vertebra progressive layer according to the difference between the m-th output frame and the vertebral body standard reference data; and the feature extraction module is further configured to obtain an (m+1)-th extraction feature map based on the bone vertebra backbone network according to the m-th output frame.
In an embodiment of the present application, the feature extraction module comprises: a fusion unit configured to generate a fusion feature map according to the m-th output frame and a q-th output frame output by a q-th vertebra progressive layer; and a feature extraction execution unit configured to input the fusion feature map into the bone vertebra backbone network to obtain the (m+1)-th extraction feature map. The q-th output frame is obtained based on the training process of another spine fracture area analysis model, which comprises a vertebra backbone network for extracting feature maps and P vertebra progressive layers respectively connected with the vertebra backbone network, the vertebra progressive layers being configured to output, based on an input feature map, an output frame comprising a vertebral body area prediction result, where P is an integer greater than or equal to 2. The training process of the other spine fracture area analysis model comprises the following steps: inputting a basic frame into the vertebra backbone network to obtain a first vertebra extraction feature map; inputting the first vertebra extraction feature map into a first vertebra progressive layer of the P vertebra progressive layers to obtain a first vertebra output frame; adjusting network parameters of a second vertebra progressive layer according to the difference between the first vertebra output frame and vertebral body standard reference data, and inputting the first vertebra output frame into the vertebra backbone network to obtain a second extraction feature map; inputting a q-th vertebra extraction feature map output by the vertebra backbone network into the q-th vertebra progressive layer of the P vertebra progressive layers to obtain the q-th output frame, where q is an integer variable with P ≥ q ≥ 2; and adjusting network parameters of a (q+1)-th vertebra progressive layer according to the difference between the q-th output frame and fracture area standard reference data, and obtaining a (q+1)-th extraction feature map based on the vertebra backbone network according to the q-th output frame.
In an embodiment of the application, the fusion unit is further configured to fuse the m-th output frame and the q-th output frame by feature superposition or feature addition to generate the fusion feature map.
In one embodiment of the present application, the apparatus further comprises: a basic frame acquisition module configured to determine the basic frame based on the radiographic plain film, wherein the basic frame is a frame centered on each pixel point.
In one embodiment of the present application, the apparatus further comprises: a screening module configured to screen raw data to obtain radiographic plain films with a unified data format; a marking module configured to mark the radiographic plain films with the unified data format; and a format conversion module configured to convert the marked radiographic plain films from the unified data format into a natural image format that meets computer recognition and processing requirements.
According to one aspect of the present application, an embodiment of the present application provides a spine fracture area analysis apparatus, comprising: an input module configured to input a radiographic plain film into a spine fracture area analysis model built by training according to any one of the methods described above; and an output module configured to take the output frame output by the last progressive layer of the N bone vertebra progressive layers as the final fracture area prediction result.
According to one aspect of the present application, an electronic device according to an embodiment of the present application comprises: a processor; and a memory in which computer program instructions are stored which, when executed by the processor, cause the processor to perform any of the methods described above.
According to one aspect of the application, an embodiment of the application provides a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform any of the methods described above.
The embodiments of the application provide a spine fracture area analysis model training method, a spine fracture area analysis device, an electronic device, and a computer-readable storage medium, in which the fracture area standard reference data serves as the learning target of the first bone vertebra progressive layer, and the vertebral body standard reference data serves as the learning target from the second bone vertebra progressive layer onward. The spine fracture area analysis model trained according to the embodiments of the application can therefore start from the fracture area via the first bone vertebra progressive layer and then, from the second bone vertebra progressive layer onward, progressively search for nearby vertebral bodies, i.e., search for the vertebral body starting from the fracture area. This makes the model sensitive to changes in lesion area and scale and effectively improves the detection rate of the algorithm.
Drawings
Fig. 1 is a schematic flow chart of a training method for a vertebral fracture area analysis model according to an embodiment of the application.
Fig. 2 is a schematic flow chart illustrating a process of establishing a threshold database in a training method of a vertebral fracture area analysis model according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of a data preprocessing process in a training method of a vertebral fracture area analysis model according to an embodiment of the present application.
Fig. 4 is a schematic flow chart of a training process of another vertebral fracture area analysis model in the training method of the vertebral fracture area analysis model according to an embodiment of the present application.
Fig. 5 is a schematic diagram illustrating a training process of another vertebral fracture area analysis model in the training method of the vertebral fracture area analysis model according to an embodiment of the present application.
Fig. 6 is a schematic flow chart of the radiographic plain film preprocessing process in the training method of the spine fracture area analysis model according to an embodiment of the application.
Fig. 7 is a schematic structural diagram of a spine fracture area analysis model training apparatus according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a spine fracture area analysis model training apparatus according to another embodiment of the present application.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 is a flow chart of a training method for a spine fracture area analysis model according to an embodiment of the present application. The spine fracture area analysis model comprises a bone vertebra backbone network for extracting feature maps and N bone vertebra progressive layers respectively connected with the bone vertebra backbone network, where the bone vertebra progressive layers are configured to output, based on an input feature map, an output frame comprising a fracture area prediction result, and N is an integer greater than or equal to 2. As shown in fig. 1, the training method includes:
step 101: and inputting the basic frame into a bone backbone network to obtain a first bone vertebra extraction feature map.
The bone vertebra backbone network is the first part of the network structure and extracts features from images; it can be composed of convolutions, pooling, normalization functions, activation functions, and the like. Various backbone networks such as ResNet, DenseNet and EfficientNet can be used. For most tasks other than classification, an FPN structure is added after the backbone to cope with the feature-scale diversity caused by changes in image scale. To state the technical solution of the embodiments of the present application more clearly, the qualifier "bone vertebra" in "bone vertebra backbone network" and "first bone vertebra extraction feature map" indicates that the spine fracture area analysis model provided by the embodiments of the present application aims to search for the vertebral body starting from the fracture area, as opposed to the later qualifier "vertebra", which refers to searching for the fracture area starting from the vertebral body.
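As a minimal, hedged sketch (not the patented network itself), a bone vertebra backbone could be assembled from exactly the components named above; all layer sizes below are illustrative assumptions:

```python
# A minimal sketch of a bone vertebra backbone built from the components named
# above (convolution, normalization, activation, pooling); all layer sizes are
# illustrative assumptions, not values from the patent.
import torch
import torch.nn as nn

class TinyBoneVertebraBackbone(nn.Module):
    """Toy stand-in for the bone vertebra backbone network."""
    def __init__(self, in_ch: int = 1, feat_ch: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=3, padding=1),   # convolution
            nn.BatchNorm2d(32),                               # normalization
            nn.ReLU(inplace=True),                            # activation
            nn.MaxPool2d(2),                                  # pooling (2x downsampling)
            nn.Conv2d(32, feat_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(feat_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x)

# Usage: a single-channel radiograph tensor of shape (B, 1, H, W)
backbone = TinyBoneVertebraBackbone()
feature_map = backbone(torch.randn(1, 1, 256, 256))  # -> shape (1, 64, 128, 128)
```

In practice a deeper backbone such as ResNet, DenseNet or EfficientNet with an FPN, as mentioned above, would replace this toy module.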
The basic frame may be determined based on the radiographic plain film; in one embodiment of the application, the basic frame is a frame centered on each pixel point.
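The following sketch illustrates one possible way (our assumption; the patent does not fix a box size) to enumerate such basic frames, one centered on every pixel of the plain film:

```python
# One possible enumeration of the basic frames: a fixed-size box centered on
# every pixel of the plain film. The 32-pixel box size is an illustrative
# assumption; the patent does not specify it.
import numpy as np

def base_boxes(height: int, width: int, box_size: int = 32) -> np.ndarray:
    """Return an (H*W, 4) array of boxes in (x1, y1, x2, y2) form, one per pixel."""
    ys, xs = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    half = box_size / 2.0
    cx, cy = xs.ravel(), ys.ravel()
    return np.stack([cx - half, cy - half, cx + half, cy + half], axis=1)

boxes = base_boxes(256, 256)  # one candidate frame per pixel of a 256x256 film
```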
Step 102: Input the first bone vertebra extraction feature map into the first bone vertebra progressive layer of the N bone vertebra progressive layers to obtain a first bone vertebra output frame.
As shown in fig. 2, the training method includes N bone vertebra progressive layers, where N is a settable parameter. Each bone vertebra progressive layer extracts, from the extraction feature map, the region corresponding to its input frame and predicts the offset between the output frame and the input frame through several convolution layers. Taking a one-dimensional coordinate as an example: if the input frame coordinate is x, the feature extraction operation is f(x) and the convolution operation is g(·), then the convolution outputs an offset Δx = g(f(x)), and the next coordinate predicted from the original coordinate is x + Δx.
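A toy version of one such progressive layer is sketched below; the use of RoIAlign as the region-extraction operation f(·) and the small convolutional head g(·) are our assumptions, but the structure follows the description above: extract features for the input frame, predict the offset Δx = g(f(x)), and output x + Δx.

```python
# A toy bone vertebra progressive layer. RoIAlign as the region-extraction
# operation f(.) and the small convolutional head g(.) are our assumptions;
# the layer outputs the input frame shifted by the predicted offset, x + Δx.
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class ProgressiveLayer(nn.Module):
    def __init__(self, feat_ch: int = 64, roi: int = 7, spatial_scale: float = 0.5):
        super().__init__()
        # spatial_scale = 0.5 matches the single 2x pooling of the toy backbone above
        self.roi, self.spatial_scale = roi, spatial_scale
        self.head = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(feat_ch, 4),          # Δ(x1, y1, x2, y2)
        )

    def forward(self, feature_map: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # boxes: (K, 5) tensor of (batch_index, x1, y1, x2, y2) in image coordinates
        region = roi_align(feature_map, boxes, output_size=self.roi,
                           spatial_scale=self.spatial_scale)            # f(x)
        delta = self.head(region)                                       # Δx = g(f(x))
        return torch.cat([boxes[:, :1], boxes[:, 1:] + delta], dim=1)   # x + Δx
```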
Step 103: Adjust network parameters of the second bone vertebra progressive layer according to the difference between the first bone vertebra output frame and the fracture area standard reference data (e.g., the fracture gold standard shown in fig. 2), and input the first bone vertebra output frame into the bone vertebra backbone network to obtain a second extraction feature map.
The output frame of a progressive layer is driven to learn toward the area given by the standard reference data, so the difference between x + Δx and the standard reference data gt serves as the loss of the network; this loss value is used to adjust the network parameters of the next bone vertebra progressive layer, and the final goal is x + Δx = gt. For the first bone vertebra progressive layer, the learning target is the fracture area standard reference data.
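Building on the ProgressiveLayer sketch above, a hedged training-step sketch could look as follows; the smooth L1 loss is our choice, since the patent only states that the difference between x + Δx and gt drives the parameter adjustment:

```python
# A hedged training-step sketch: the difference between the predicted frame
# x + Δx and the reference data gt is turned into a loss (smooth L1 is our
# assumption) whose gradient updates the layer parameters so that eventually
# x + Δx = gt. For simplicity we update the layer that produced the output;
# the patent wording routes this loss to the next bone vertebra progressive
# layer's parameters.
import torch
import torch.nn.functional as F

def progressive_step(layer, optimizer, feature_map, in_boxes, gt_boxes):
    out_boxes = layer(feature_map, in_boxes)              # x + Δx, shape (K, 5)
    loss = F.smooth_l1_loss(out_boxes[:, 1:], gt_boxes)   # difference to gt (K, 4)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return out_boxes.detach(), loss.item()
```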
Step 104: Input the m-th bone vertebra extraction feature map output by the bone vertebra backbone network into the m-th bone vertebra progressive layer of the N bone vertebra progressive layers to obtain an m-th output frame, where m is an integer variable with N ≥ m ≥ 2.
As shown in fig. 2, starting from the second bone vertebra progressive layer, each bone vertebra progressive layer still extracts the region of the extraction feature map corresponding to its input frame and predicts, through several convolution layers, the offset between the output frame and the input frame.
Step 105: Adjust network parameters of the (m+1)-th bone vertebra progressive layer according to the difference between the m-th output frame and the vertebral body standard reference data (e.g., the vertebral body gold standard shown in fig. 2), and obtain an (m+1)-th extraction feature map based on the bone vertebra backbone network according to the m-th output frame.
Starting from the second bone vertebra progressive layer, the learning target changes from the fracture area standard reference data to the vertebral body standard reference data. This process is iterated N times, and the output frame of the N-th layer can finally be used as the output result of the final fracture area prediction.
Therefore, in the spine fracture area analysis model training method provided by the embodiment of the application, the fracture area standard reference data is used as the learning target of the first bone vertebra progressive layer, and the vertebral body standard reference data is used as the learning target from the second bone vertebra progressive layer onward. The trained model can thus start from the fracture area via the first bone vertebra progressive layer and then progressively search for nearby vertebral bodies, i.e., it searches for the vertebral body starting from the fracture area, which makes it sensitive to changes in lesion area and scale and effectively improves the detection rate of the algorithm.
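Putting the pieces together, the toy sketch below (our simplification, not the patent's full implementation) iterates the progressive layers N times with the switching learning targets described above and returns the N-th output frame as the final prediction:

```python
# A simplified cascade sketch reusing the toy modules above: layer 1 is
# supervised by the fracture-area reference data, layers 2..N by the vertebral
# body reference data, and the N-th output frame is the final prediction.
# Two deliberate simplifications: only the progressive layers are trained, and
# each new extraction feature map is obtained by re-running the backbone on the
# whole image rather than on the fed-back output frame (optionally fused with
# the other branch's output) as described in the patent.
import torch

def train_cascade(backbone, layers, optimizers, image, base_boxes,
                  fracture_gt, vertebra_gt):
    # base_boxes: (K, 5) float tensor of (batch_index, x1, y1, x2, y2)
    boxes = base_boxes
    with torch.no_grad():
        feature_map = backbone(image)                   # 1st extraction feature map
    for i, (layer, opt) in enumerate(zip(layers, optimizers)):
        gt = fracture_gt if i == 0 else vertebra_gt     # switch the learning target
        boxes, _ = progressive_step(layer, opt, feature_map, boxes, gt)
        if i + 1 < len(layers):
            with torch.no_grad():
                feature_map = backbone(image)           # next extraction feature map
    return boxes[:, 1:]                                 # N-th output frame(s)
```

Here layers would be N ProgressiveLayer instances and optimizers could be, for example, one torch.optim.Adam(layer.parameters()) per layer.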
In another embodiment of the present application, the network structure can also be designed from two directions to solve the problems of low accuracy and low efficiency of existing spine fracture area analysis approaches. In the first direction, when the data are annotated, the embodiment of the application marks not only the fracture area but also each vertebral body segment. The network can then be designed in two stages: the first stage identifies all vertebral body segments, and the second analyzes and learns the image features of fractures near each vertebral body segment, so that the network pays more attention to image features near the vertebral bodies and can exclude noise interference from bone fragments far from the spine. In the other direction, the embodiment of the application builds a method that starts from fracture identification: beginning with suspected fracture regions, it analyzes and learns the image features of the surrounding adjacent vertebral bodies, and if a region is not adjacent to a vertebral body, a penalty gradient is propagated back along this branch, so that the network learns fracture features in a more sensitive and finer-grained manner than in the first direction while also excluding fractures outside the vertebral body region. These two directions each have their own emphasis and advantages, so the embodiment of the present application combines the two networks and trains them by interactive iteration so that they complement each other.
In an embodiment of the present application, as shown in fig. 3, obtaining the (m+1)-th extraction feature map based on the bone vertebra backbone network according to the m-th output frame may specifically include the following steps:
step 301: and generating a fusion characteristic diagram according to the m output frame and the q output frame output by the q vertebra progressive layer.
The q-th output frame is obtained based on the training process of another spine fracture area analysis model. This other model comprises a vertebra backbone network for extracting feature maps and P vertebra progressive layers respectively connected with the vertebra backbone network, the vertebra progressive layers being configured to output, based on an input feature map, an output frame comprising a vertebral body area prediction result, where P is an integer greater than or equal to 2. This other model is configured to search for the fracture area starting from the vertebral body prediction, which helps eliminate incidental noise interference around the vertebral body and effectively reduces false positives of the algorithm.
In an embodiment of the present application, the m-th output frame and the q-th output frame may be fused by feature superposition (e.g., channel concatenation) or feature addition to generate the fusion feature map. In essence, the intermediate feature maps corresponding to the progressive layers in the training processes of the two spine fracture area analysis models are extracted, fused, and passed back to both training processes for continued use, so that the two branches can simultaneously perform progressive inference and learn from each other. It should be understood, however, that the fusion of the m-th output frame and the q-th output frame is not limited to the specific manners given above, and the present application does not strictly limit the specific fusion manner.
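The two fusion options named above can be sketched as follows, assuming the m-th and q-th output frames have already been rendered as feature tensors of equal spatial size:

```python
# The two fusion options named above, assuming the m-th and q-th output frames
# have already been turned into feature tensors of identical spatial size.
import torch

def fuse(feat_m: torch.Tensor, feat_q: torch.Tensor, mode: str = "concat") -> torch.Tensor:
    if mode == "concat":       # feature superposition (channel concatenation)
        return torch.cat([feat_m, feat_q], dim=1)
    if mode == "add":          # feature addition (element-wise sum)
        return feat_m + feat_q
    raise ValueError(f"unknown fusion mode: {mode}")
```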
Step 302: Input the fusion feature map into the bone vertebra backbone network to obtain the (m+1)-th extraction feature map.
In this way, the (m+1)-th extraction feature map contains feature information from the q-th output frame, realizing the interactive iterative training of the two spine fracture area analysis models so that they complement each other.
Fig. 4 is a schematic flow chart of the training process of the other spine fracture area analysis model in the training method of the spine fracture area analysis model according to an embodiment of the present application. Fig. 5 is a schematic diagram illustrating this training process. As shown in fig. 4 and fig. 5, the training process of the other spine fracture area analysis model includes:
step 401: the basic box is input into a first vertebra extraction feature map obtained from a vertebra backbone network.
Step 402: the first vertebra extraction profile is input to a first vertebra progression level of the P vertebra progression levels to obtain a first vertebra output frame.
Step 403: network parameters of the progressive lamina of the second vertebra are adjusted according to the difference between the first vertebra output frame and the vertebral body standard reference data (the vertebral body gold standard shown in fig. 5), and the first vertebra output frame is input into the vertebral backbone network to obtain a second extraction feature map.
Step 404: inputting a q-th vertebra extraction feature map output by a vertebra backbone network into a q-th vertebra progressive layer in P vertebra progressive layers to obtain a q-th output frame, wherein q is an integer variable with P not less than q not less than 2.
Step 405: network parameters of the (q+1) th output frame are adjusted according to the difference between the q (th) output frame and fracture area standard reference data (bone fracture gold standard as shown in fig. 5), and the (q+1) th extraction feature map is obtained based on the vertebral backbone network according to the q (th) output frame.
This other spine fracture area analysis model likewise comprises P progressive layers, where P is a settable parameter. Each vertebra progressive layer operates in the same manner and with the same formulas as the bone vertebra progressive layers described above. The difference, opposite to the training process of fig. 1, is that the output frame of the first vertebra progressive layer learns toward the vertebral body standard reference data, while the output frames from the second to the P-th vertebra progressive layers learn toward the fracture standard reference data. The training of this other model is therefore a progressive process that infers gradually from the vertebral body toward the fracture area. A spine fracture area analysis model built with this training method searches for the fracture area starting from the vertebral body, which eliminates much of the incidental noise interference around the vertebral body, effectively reduces false positives of the algorithm, and improves the accuracy and efficiency of spine fracture area analysis.
Fig. 6 is a schematic flow chart of the radiographic plain film preprocessing process in the training method of the spine fracture area analysis model according to an embodiment of the application.
As shown in fig. 6, before the basic frame is determined based on the radiographic plain film, the preprocessing of the radiographic plain film includes the following steps:
step 601: the raw data is filtered to obtain a radial flat having a uniform data format.
A radiographic plain film here refers to data collected from corporate data sources that conforms to the DICOM (Digital Imaging and Communications in Medicine) standard. The acquired raw data undergoes compliance screening to remove poor-quality or damaged portions, and the remainder is used as the radiographic plain films.
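Purely as an illustration of such compliance screening (the concrete checks and the *.dcm file layout are our assumptions, not the patent's), one might keep only readable DICOM files that actually contain pixel data:

```python
# An illustrative compliance-screening sketch; the concrete checks (readable
# file, pixel data present) and the *.dcm layout are assumptions on our part.
from pathlib import Path
import pydicom

def screen_raw_data(folder: str):
    kept = []
    for path in Path(folder).glob("*.dcm"):
        try:
            ds = pydicom.dcmread(str(path))
        except Exception:
            continue                    # unreadable or damaged file: discard
        if "PixelData" in ds:           # must actually contain an image
            kept.append(path)
    return kept
```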
Step 602: Mark the radiographic plain films with the unified data format.
Marking a radiographic plain film means that, for each piece of DICOM plain-film data, a doctor delineates the lesion areas. The delineation may take the form of, but is not limited to, rectangular boxes, circular boxes, segmentation contours (closed or open smooth curves), closed polygons, line segments, and the like. Lesion signs include fracture lines, fracture segments, and the soft and hard tissues around the fracture. Each film is marked by one or more doctors, and the marks are finally given a unified and complete review by a senior doctor.
Step 603: Convert the marked radiographic plain films from the unified data format into a natural image format that meets computer recognition and processing requirements.
The data format conversion converts the DICOM data into a natural image format that the computer can conveniently recognize and process. Each pixel of a DICOM image is a signed 16-bit integer (int16) with a numerical range on the order of -4096 to 4096, whereas a natural image uses unsigned 8-bit integers with values from 0 to 255. The window width and window level can be obtained from the window width/level information stored in the DICOM file or computed from the image with computer vision algorithms. Let the window width be ww, the window level be wc, the original DICOM pixel value be x, the value of the computer-recognized image be y, and the intermediate value be y*. Using the standard window width/level mapping, the conversion is:
y* = 255 · (x − (wc − ww/2)) / ww
y = 0 if y* < 0
y = y* if 0 ≤ y* ≤ 255
y = 255 if y* > 255
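A small numpy sketch of this conversion, using the window width/level mapping for y* and the clamp above (the example window values are illustrative, not taken from the patent):

```python
# Window width / window level conversion as described above: map signed 16-bit
# DICOM values x to 8-bit values y via the intermediate value y*, then clamp.
import numpy as np

def window_to_uint8(x: np.ndarray, ww: float, wc: float) -> np.ndarray:
    y_star = (x.astype(np.float32) - (wc - ww / 2.0)) / ww * 255.0
    return np.clip(y_star, 0, 255).astype(np.uint8)

# Example window values are illustrative, not taken from the patent.
dicom_pixels = np.random.randint(-1024, 3000, (512, 512), dtype=np.int16)
img8 = window_to_uint8(dicom_pixels, ww=2000.0, wc=500.0)
```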
The embodiment of the application further provides a spine fracture area analysis method, comprising: inputting a radiographic plain film into a spine fracture area analysis model built by training according to the method of any of the above embodiments; and taking the output frame output by the last progressive layer of the N bone vertebra progressive layers as the final fracture area prediction result. With this spine fracture area analysis method, the fracture area is searched for starting from the vertebral body, which eliminates much of the incidental noise interference around the vertebral body, effectively reduces false positives of the algorithm, and improves the accuracy and efficiency of spine fracture area analysis.
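Reusing the toy modules from the training sketches above, inference would then amount to the following (again a simplified sketch, not the patented implementation):

```python
# Inference with the toy modules above: run the plain film through the trained
# cascade and keep the output frame of the last (N-th) bone vertebra
# progressive layer as the fracture-area prediction.
import torch

@torch.no_grad()
def predict_fracture_areas(backbone, layers, image, base_boxes):
    feature_map = backbone(image)
    boxes = base_boxes                  # (K, 5) = (batch_index, x1, y1, x2, y2)
    for layer in layers:
        boxes = layer(feature_map, boxes)
    return boxes[:, 1:]                 # final output frame(s) from the N-th layer
```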
Fig. 7 is a schematic structural diagram of a spine fracture area analysis model training apparatus according to an embodiment of the present application. The spine fracture area analysis model comprises a bone vertebra backbone network for extracting feature maps and N bone vertebra progressive layers respectively connected with the bone vertebra backbone network, the bone vertebra progressive layers being configured to output, based on an input feature map, an output frame comprising a fracture area prediction result, where N is an integer greater than or equal to 2. As shown in fig. 7, the spine fracture area analysis model training apparatus 70 includes:
a feature extraction module 701 configured to input a basic frame into the bone vertebra backbone network to obtain a first bone vertebra extraction feature map;
a prediction module 702 configured to input a first bone-vertebra extraction profile into a first bone-vertebra progression layer of the N bone-vertebra progression layers to obtain a first bone-vertebra output frame; and
an adjustment module 703 configured to adjust network parameters of the second bone vertebral progression layer based on differences between the first bone vertebral output frame and the fracture region standard reference data;
wherein the feature extraction module 701 is further configured to input the first bone vertebra output frame into the bone vertebra backbone network to obtain a second extraction feature map;
the prediction module 702 is further configured to input an m-th bone vertebra extraction feature map output by the bone vertebra backbone network into an m-th bone vertebra progressive layer of the N bone vertebra progressive layers to obtain an m-th output frame, where m is an integer variable with N ≥ m ≥ 2; and
the adjustment module 703 is further configured to adjust network parameters of the (m+1)-th bone vertebra progressive layer according to the difference between the m-th output frame and the vertebral body standard reference data;
wherein the feature extraction module 701 is further configured to obtain an (m+1)-th extraction feature map based on the bone vertebra backbone network according to the m-th output frame.
In one embodiment of the present application, as shown in fig. 8, the feature extraction module 701 includes:
a fusion unit 7011 configured to generate a fusion feature map according to the m-th output frame and the q-th output frame output by the q-th vertebra progressive layer; and
a feature extraction execution unit 7012 configured to input the fusion feature map into the bone vertebra backbone network to obtain the (m+1)-th extraction feature map;
wherein the q-th output frame is obtained based on the training process of another spine fracture area analysis model, which comprises a vertebra backbone network for extracting feature maps and P vertebra progressive layers respectively connected with the vertebra backbone network, the vertebra progressive layers being configured to output, based on an input feature map, an output frame comprising a vertebral body area prediction result, where P is an integer greater than or equal to 2; the training process of this other spine fracture area analysis model comprises the following steps:
inputting a basic frame into the vertebra backbone network to obtain a first vertebra extraction feature map;
Inputting the first vertebra extraction feature map into a first vertebra progressive layer of the P vertebra progressive layers to obtain a first vertebra output frame;
according to the difference between the first vertebra output frame and the vertebral body standard reference data, adjusting network parameters of a second vertebra progressive layer, and inputting the first vertebra output frame into a vertebra backbone network to obtain a second extraction feature map;
inputting a q-th vertebra extraction feature map output by the vertebra backbone network into the q-th vertebra progressive layer of the P vertebra progressive layers to obtain the q-th output frame, where q is an integer variable with P ≥ q ≥ 2; and
adjusting network parameters of a (q+1)-th vertebra progressive layer according to the difference between the q-th output frame and the fracture area standard reference data, and obtaining a (q+1)-th extraction feature map based on the vertebra backbone network according to the q-th output frame.
In an embodiment of the application, the fusion unit is further configured to fuse the m-th output frame and the q-th output frame by feature superposition or feature addition to generate the fusion feature map.
In one embodiment of the present application, as shown in FIG. 8, the apparatus 70 further comprises:
a basic frame acquisition module 704 configured to determine the basic frame based on the radiographic plain film, wherein the basic frame is a frame centered on each pixel point.
In one embodiment of the present application, as shown in FIG. 8, the apparatus 70 further comprises:
a screening module 705 configured to screen the raw data to obtain radiographic plain films with a unified data format;
a marking module 706 configured to mark the radiographic plain films with the unified data format; and
a format conversion module 707 configured to convert the marked radiographic plain films from the unified data format into a natural image format that meets computer recognition and processing requirements.
Another embodiment of the present application further provides a spine fracture area analysis apparatus, comprising: an input module configured to input a radiographic plain film into a spine fracture area analysis model built by training with any of the spine fracture area analysis model training methods above; and an output module configured to take the output frame output by the last progressive layer of the N bone vertebra progressive layers as the final fracture area prediction result.
The specific functions and operations of the respective modules in the above-described vertebral fracture region analysis model training apparatus 70 have been described in detail in the vertebral fracture region analysis model training method described above with reference to fig. 1 to 6. Therefore, a repetitive description thereof will be omitted herein.
It should be noted that the vertebral fracture region analysis model training apparatus 70 according to the embodiment of the present application may be integrated into the electronic device 90 as a software module and/or a hardware module, in other words, the electronic device 90 may include the vertebral fracture region analysis model training apparatus 70. For example, the spine fracture area analysis model training apparatus 70 may be a software module in the operating system of the electronic device 90 or may be an application developed for it; of course, the vertebral fracture region analysis model training apparatus 70 can also be one of a number of hardware modules of the electronic device 90.
In another embodiment of the present application, the spine fracture area analysis model training apparatus 70 and the electronic device 90 may also be separate devices (e.g., servers), and the training apparatus 70 may be connected to the electronic device 90 through a wired and/or wireless network and transmit interactive information according to an agreed data format.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the application. As shown in fig. 9, the electronic device 90 includes: one or more processors 901 and memory 902; and computer program instructions stored in the memory 902 that, when executed by the processor 901, cause the processor 901 to perform the spinal fracture region analysis model training method of any one of the embodiments described above.
The processor 901 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities and may control other components in the electronic device to perform desired functions.
The memory 902 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, random access memory (RAM) and/or cache memory. Non-volatile memory may include, for example, read-only memory (ROM), hard disks, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 901 may execute the program instructions to implement the steps of the spine fracture area analysis model training method of the various embodiments of the present application described above and/or other desired functions. Information such as light intensity, compensation light intensity, and the position of a filter may also be stored in the computer-readable storage medium.
In one example, the electronic device 90 may further include an input device 903 and an output device 904, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown in fig. 9).
For example, where the electronic device is a robot, such as on an industrial line, the input device 903 may be a camera for capturing the position of the part to be processed. When the electronic device is a stand-alone device, the input means 903 may be a communication network connector for receiving the acquired input signal from an external, removable device. In addition, the input device 903 may also include, for example, a keyboard, a mouse, a microphone, and the like.
The output device 904 may output various information to the outside, and may include, for example, a display, a speaker, a printer, and a communication network and a remote output apparatus connected thereto, and the like.
Of course, only some of the components of the electronic device 90 that are relevant to the present application are shown in fig. 9 for simplicity, components such as buses, input/output interfaces, etc. are omitted. In addition, the electronic device 90 may include any other suitable components depending on the particular application.
In addition to the methods and apparatus described above, embodiments of the application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in a vertebral fracture region analysis model training method as in any of the embodiments described above.
The computer program product may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform the steps in the vertebral fracture region analysis model training method according to the various embodiments of the present application described in the above section of the description of the exemplary vertebral fracture region analysis model training method.
A computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not intended to be limiting, and these advantages, benefits, effects, etc. are not to be considered as essential to the various embodiments of the present application. Furthermore, the specific details disclosed herein are merely for purposes of example and understanding, and are not intended to limit the application to the specific details described.
The block diagrams of the devices, apparatuses, devices, systems referred to in the present application are only illustrative examples and are not intended to require or imply that the connections, arrangements, configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, the devices, apparatuses, devices, systems may be connected, arranged, configured in any manner. Words such as "including," "comprising," "having," and the like are words of openness and mean "including but not limited to," and are used interchangeably therewith. The terms "or" and "as used herein refer to and are used interchangeably with the term" and/or "unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to.
It is also noted that in the apparatus, devices and methods of the present application, the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent aspects of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.
The foregoing description of the preferred embodiments of the application is not intended to be limiting, but rather is to be construed as including any modifications, equivalents, and alternatives falling within the spirit and principles of the application.

Claims (9)

1. A spine fracture area analysis model training method, characterized in that the spine fracture area analysis model comprises a bone vertebra backbone network for extracting feature maps and N bone vertebra progressive layers respectively connected with the bone vertebra backbone network, wherein the bone vertebra progressive layers are configured to output, based on an input feature map, an output frame comprising a fracture area prediction result, and N is an integer greater than or equal to 2;
wherein, the training method comprises the following steps:
inputting a basic frame into a first bone vertebra extraction feature map obtained by the bone vertebra trunk network;
inputting the first bone vertebra extraction feature map into a first bone vertebra progressive layer of the N bone vertebra progressive layers to obtain a first bone vertebra output frame;
according to the difference between the first bone vertebra output frame and fracture area standard reference data, adjusting network parameters of a second bone vertebra progressive layer, and inputting the first bone vertebra output frame into the bone vertebra backbone network to obtain a second bone vertebra extraction feature map;
inputting an m-th bone vertebra extraction feature map output by the bone vertebra backbone network into an m-th bone vertebra progressive layer of the N bone vertebra progressive layers to obtain an m-th output frame, wherein m is an integer variable satisfying N ≥ m ≥ 2; and
according to the difference between the m-th output frame and vertebral body standard reference data, adjusting network parameters of an (m+1)-th bone vertebra progressive layer, and obtaining an (m+1)-th extraction feature map based on the bone vertebra backbone network according to the m-th output frame;
wherein the obtaining an (m+1)-th extraction feature map based on the bone vertebra backbone network according to the m-th output frame comprises:
generating a fusion feature map according to the m-th output frame and a q-th output frame output by a q-th vertebra progressive layer; and
inputting the fusion feature map into the bone vertebra backbone network to obtain the (m+1)-th extraction feature map;
wherein the q-th output frame is obtained from a training process of another vertebra fracture area analysis model, the another vertebra fracture area analysis model comprising a vertebra backbone network for extracting feature maps and P vertebra progressive layers respectively connected with the vertebra backbone network, the vertebra progressive layers being configured to output, based on an input feature map, an output frame comprising a vertebral body area prediction result, where P is an integer greater than or equal to 2; and the training process of the another vertebra fracture area analysis model comprises the following steps:
inputting a basic frame into the vertebra backbone network to obtain a first vertebra extraction feature map;
inputting the first vertebra extraction feature map into a first vertebra progressive layer of the P vertebra progressive layers to obtain a first vertebra output frame;
according to the difference between the first vertebra output frame and vertebral body standard reference data, adjusting network parameters of a second vertebra progressive layer, and inputting the first vertebra output frame into the vertebra backbone network to obtain a second vertebra extraction feature map;
inputting a q-th vertebra extraction feature map output by the vertebra backbone network into the q-th vertebra progressive layer of the P vertebra progressive layers to obtain the q-th output frame, wherein q is an integer variable satisfying P ≥ q ≥ 2; and
according to the difference between the q-th output frame and fracture area standard reference data, adjusting network parameters of a (q+1)-th vertebra progressive layer, and obtaining a (q+1)-th extraction feature map based on the vertebra backbone network according to the q-th output frame;
wherein the bone vertebra indicates seeking the vertebral body from the fracture area, and the vertebra indicates seeking the fracture area from the vertebral body.
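[Note: the following sketch is illustrative only and is not part of the claims. It gives a minimal, PyTorch-style rendering of the progressive training loop of claim 1, with invented module names (ProgressiveLayer), dummy tensors, and an arbitrary loss. For simplicity it updates whatever parameters receive gradients at each step rather than reproducing the claim's exact rule of adjusting the following progressive layer; the patent itself fixes none of these details.]

import torch
import torch.nn as nn

class ProgressiveLayer(nn.Module):
    """Maps an extraction feature map to an output frame (here a single-channel region map)."""
    def __init__(self, channels):
        super().__init__()
        self.head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feature_map):
        return self.head(feature_map)

# Stand-in for the bone vertebra backbone network (single-channel input).
backbone = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
)
progressive_layers = nn.ModuleList([ProgressiveLayer(32) for _ in range(3)])  # N = 3
criterion = nn.SmoothL1Loss()
optimizer = torch.optim.Adam(
    list(backbone.parameters()) + list(progressive_layers.parameters()), lr=1e-4)

basic_frame = torch.randn(1, 1, 256, 256)   # basic frame derived from the plain film (dummy data)
reference = torch.rand(1, 1, 256, 256)      # standard reference data (dummy data)

feature_map = backbone(basic_frame)         # first bone vertebra extraction feature map
for layer in progressive_layers:
    output_frame = layer(feature_map)            # m-th output frame
    loss = criterion(output_frame, reference)    # difference to the reference data
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                             # updates the parameters that received gradients
    # Feed the m-th output frame back into the backbone to obtain the (m+1)-th feature map.
    feature_map = backbone(output_frame.detach())
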
2. The method of claim 1, wherein the generating a fusion feature map according to the m-th output frame and the q-th output frame output by the q-th vertebra progressive layer comprises:
fusing the m-th output frame and the q-th output frame by feature superposition or feature addition to generate the fusion feature map.
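[Note: a hypothetical illustration, not part of the claims. The helper below shows the two fusion modes named in claim 2, assuming both output frames are tensors of identical spatial size; the function name fuse and the tensor shapes are invented.]

import torch

def fuse(output_frame_m: torch.Tensor, output_frame_q: torch.Tensor, mode: str = "add") -> torch.Tensor:
    if mode == "add":        # feature addition: element-wise sum
        return output_frame_m + output_frame_q
    if mode == "concat":     # feature superposition: stacking along the channel dimension
        return torch.cat([output_frame_m, output_frame_q], dim=1)
    raise ValueError(f"unknown fusion mode: {mode}")

fused = fuse(torch.randn(1, 1, 256, 256), torch.randn(1, 1, 256, 256), mode="concat")
print(fused.shape)           # torch.Size([1, 2, 256, 256])
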
3. The method as recited in claim 1, further comprising:
determining the basic frame based on a radiographic plain film, wherein the basic frame is a frame centered on each pixel point.
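[Note: a hypothetical illustration, not part of the claims. Claim 3 defines the basic frame as a frame centered on each pixel point; the snippet below builds one fixed-size box per pixel, where the box size of 32 is an assumed value.]

import numpy as np

def basic_frames(height: int, width: int, box: float = 32.0) -> np.ndarray:
    """Return an (H*W, 4) array of [x0, y0, x1, y1] boxes, one centered on each pixel."""
    ys, xs = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    half = box / 2.0
    return np.stack(
        [xs.ravel() - half, ys.ravel() - half, xs.ravel() + half, ys.ravel() + half],
        axis=1,
    )

frames = basic_frames(256, 256)
print(frames.shape)   # (65536, 4)
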
4. The method as recited in claim 2, further comprising:
screening original data to obtain radiographic plain films in a unified data format;
annotating the radiographic plain films in the unified data format; and
converting the annotated radiographic plain films from the unified data format into a natural image format meeting computer recognition and processing requirements.
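[Note: a hypothetical illustration, not part of the claims. Claim 4's last step converts the annotated plain films into a natural image format; the sketch below assumes DICOM input and PNG output and relies on the pydicom and Pillow libraries, none of which are named in the patent.]

import numpy as np
import pydicom
from PIL import Image

def dicom_to_png(dicom_path: str, png_path: str) -> None:
    """Read a plain film in DICOM format and save it as an 8-bit PNG."""
    pixels = pydicom.dcmread(dicom_path).pixel_array.astype(np.float32)
    # Normalise to 0-255 so ordinary computer-vision tooling can process the image.
    pixels = (pixels - pixels.min()) / max(float(pixels.max() - pixels.min()), 1e-6) * 255.0
    Image.fromarray(pixels.astype(np.uint8)).save(png_path)
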
5. A spine fracture area analysis method, comprising:
inputting a radiographic plain film into a spine fracture area analysis model trained according to the method of any one of claims 1 to 4; and taking the output frame output by the last of the N bone vertebra progressive layers as a final fracture area prediction result.
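[Note: a hypothetical illustration, not part of the claims. The function below sketches claim 5's inference step, reusing the backbone and progressive_layers names from the sketch after claim 1 and keeping only the last progressive layer's output frame.]

import torch

@torch.no_grad()
def analyse(plain_film: torch.Tensor, backbone, progressive_layers) -> torch.Tensor:
    feature_map = backbone(plain_film)
    output_frame = None
    for layer in progressive_layers:
        output_frame = layer(feature_map)      # intermediate output frame
        feature_map = backbone(output_frame)   # re-extract features for the next layer
    return output_frame                        # final fracture area prediction result
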
6. A spine fracture area analysis model training device, characterized in that the spine fracture area analysis model comprises a bone vertebra backbone network for extracting feature maps and N bone vertebra progressive layers respectively connected with the bone vertebra backbone network, wherein the bone vertebra progressive layers are configured to output, based on an input feature map, an output frame comprising a fracture area prediction result, and N is an integer greater than or equal to 2;
wherein the training device comprises:
a feature extraction module configured to input a basic frame into the bone vertebra backbone network to obtain a first bone vertebra extraction feature map;
a prediction module configured to input the first bone vertebra extraction feature map into a first bone vertebra progressive layer of the N bone vertebra progressive layers to obtain a first bone vertebra output frame; and
an adjustment module configured to adjust network parameters of a second bone vertebra progressive layer according to the difference between the first bone vertebra output frame and fracture area standard reference data;
the feature extraction module is further configured to input the first bone vertebra output frame into the bone vertebra backbone network to obtain a second bone vertebra extraction feature map;
the prediction module is further configured to input an m-th bone vertebra extraction feature map output by the bone vertebra backbone network into an m-th bone vertebra progressive layer of the N bone vertebra progressive layers to obtain an m-th output frame, wherein m is an integer variable satisfying N ≥ m ≥ 2; and
the adjustment module is further configured to adjust network parameters of an (m+1)-th bone vertebra progressive layer according to the difference between the m-th output frame and vertebral body standard reference data;
wherein the feature extraction module is further configured to obtain an (m+1)-th extraction feature map based on the bone vertebra backbone network according to the m-th output frame;
the obtaining an (m+1)-th extraction feature map based on the bone vertebra backbone network according to the m-th output frame comprises:
generating a fusion feature map according to the m-th output frame and a q-th output frame output by a q-th vertebra progressive layer; and
inputting the fusion feature map into the bone vertebra backbone network to obtain the (m+1)-th extraction feature map;
wherein the q-th output frame is obtained from a training process of another vertebra fracture area analysis model, the another vertebra fracture area analysis model comprising a vertebra backbone network for extracting feature maps and P vertebra progressive layers respectively connected with the vertebra backbone network, the vertebra progressive layers being configured to output, based on an input feature map, an output frame comprising a vertebral body area prediction result, where P is an integer greater than or equal to 2; and the training process of the another vertebra fracture area analysis model comprises the following steps:
inputting a basic frame into the vertebra backbone network to obtain a first vertebra extraction feature map;
inputting the first vertebra extraction feature map into a first vertebra progressive layer of the P vertebra progressive layers to obtain a first vertebra output frame;
according to the difference between the first vertebra output frame and vertebral body standard reference data, adjusting network parameters of a second vertebra progressive layer, and inputting the first vertebra output frame into the vertebra backbone network to obtain a second vertebra extraction feature map;
inputting a q-th vertebra extraction feature map output by the vertebra backbone network into the q-th vertebra progressive layer of the P vertebra progressive layers to obtain the q-th output frame, wherein q is an integer variable satisfying P ≥ q ≥ 2; and
according to the difference between the q-th output frame and fracture area standard reference data, adjusting network parameters of a (q+1)-th vertebra progressive layer, and obtaining a (q+1)-th extraction feature map based on the vertebra backbone network according to the q-th output frame;
wherein the bone vertebra indicates seeking the vertebral body from the fracture area, and the vertebra indicates seeking the fracture area from the vertebral body.
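[Note: a hypothetical illustration, not part of the claims. Claim 6 splits the same training procedure into a feature extraction module, a prediction module, and an adjustment module; the classes below are invented wrappers around the components from the sketch after claim 1.]

class FeatureExtractionModule:
    """Wraps the bone vertebra backbone network."""
    def __init__(self, backbone):
        self.backbone = backbone
    def __call__(self, frame):
        return self.backbone(frame)

class PredictionModule:
    """Wraps the N bone vertebra progressive layers."""
    def __init__(self, progressive_layers):
        self.layers = progressive_layers
    def __call__(self, index, feature_map):
        return self.layers[index](feature_map)

class AdjustmentModule:
    """Updates parameters from the difference between an output frame and reference data."""
    def __init__(self, optimizer, criterion):
        self.optimizer, self.criterion = optimizer, criterion
    def __call__(self, output_frame, reference):
        loss = self.criterion(output_frame, reference)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return loss.item()
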
7. A spine fracture area analysis device, comprising:
an input module configured to input a radiographic plain film into a spine fracture area analysis model trained according to the method of any one of claims 1 to 4; and
an output module configured to take the output frame output by the last of the N bone vertebra progressive layers as a final fracture area prediction result.
8. An electronic device, comprising:
a processor; and
a memory in which computer program instructions are stored which, when executed by the processor, cause the processor to perform the method of any one of claims 1 to 5.
9. A computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1 to 5.
CN202010147315.4A 2020-03-05 2020-03-05 Training method and device for spine fracture area analysis model Active CN111401417B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010147315.4A CN111401417B (en) 2020-03-05 2020-03-05 Training method and device for spine fracture area analysis model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010147315.4A CN111401417B (en) 2020-03-05 2020-03-05 Training method and device for spine fracture area analysis model

Publications (2)

Publication Number Publication Date
CN111401417A CN111401417A (en) 2020-07-10
CN111401417B (en) 2023-10-27

Family

ID=71413271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010147315.4A Active CN111401417B (en) 2020-03-05 2020-03-05 Training method and device for spine fracture area analysis model

Country Status (1)

Country Link
CN (1) CN111401417B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977971A (en) * 2017-11-09 2018-05-01 哈尔滨理工大学 The method of vertebra positioning based on convolutional neural networks
CN109859233A (en) * 2018-12-28 2019-06-07 上海联影智能医疗科技有限公司 The training method and system of image procossing, image processing model

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090169087A1 (en) * 2006-03-24 2009-07-02 Kunio Doi Method for detection of vertebral fractures on lateral chest radiographs
US10588589B2 (en) * 2014-07-21 2020-03-17 Zebra Medical Vision Ltd. Systems and methods for prediction of osteoporotic fracture risk
CN107072623A (en) * 2014-08-21 2017-08-18 哈利法克斯生物医药有限公司 System and method for measuring and assessing spinal instability
US10366491B2 (en) * 2017-03-08 2019-07-30 Siemens Healthcare Gmbh Deep image-to-image recurrent network with shape basis for automatic vertebra labeling in large-scale 3D CT volumes
US11166764B2 (en) * 2017-07-27 2021-11-09 Carlsmed, Inc. Systems and methods for assisting and augmenting surgical procedures
JP2020025786A (en) * 2018-08-14 2020-02-20 富士フイルム株式会社 Image processing apparatus, method and program

Also Published As

Publication number Publication date
CN111401417A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
CN113052795B (en) X-ray chest radiography image quality determination method and device
Mansour et al. Internet of things and synergic deep learning based biomedical tongue color image analysis for disease diagnosis and classification
CN111325745A (en) Fracture region analysis method and device, electronic device and readable storage medium
Attallah RADIC: A tool for diagnosing COVID-19 from chest CT and X-ray scans using deep learning and quad-radiomics
CN112528782A (en) Underwater fish target detection method and device
CN110930373A (en) Pneumonia recognition device based on neural network
CN112418299B (en) Coronary artery segmentation model training method, coronary artery segmentation method and device
CN115880266B (en) Intestinal polyp detection system and method based on deep learning
CN111414939B (en) Training method and device for spine fracture area analysis model
Khan Identification of lung cancer using convolutional neural networks based classification
CN113850796A (en) Lung disease identification method and device based on CT data, medium and electronic equipment
CN111401417B (en) Training method and device for spine fracture area analysis model
Fitriyah et al. Pulmonary Disease Pattern Recognition on X-Ray Radiography Image Using Artificial Neural Network (ANN) Method
CN111340760B (en) Knee joint positioning method based on multitask two-stage convolution neural network
Suneetha et al. Brain tumor detection in MR imaging using DW-MTM filter and region-growing segmentation approach
JP2005198887A (en) Method, apparatus and program for detecting anatomical structure, structure removal picture generation apparatus, and abnormal shadow detector
CN111415333B (en) Mammary gland X-ray image antisymmetric generation analysis model training method and device
CN113837192B (en) Image segmentation method and device, and neural network training method and device
CN111415741B (en) Mammary gland X-ray image classification model training method based on implicit apparent learning
Hilal et al. Design of Intelligent Alzheimer Disease Diagnosis Model on CIoT Environment
CN114155234A (en) Method and device for identifying position of lung segment of focus, storage medium and electronic equipment
Deo et al. A survey on bone fracture detection methods using image processing and artificial intelligence (AI) approaches
Marrocco et al. Mammogram denoising to improve the calcification detection performance of convolutional nets
Kumar et al. Robust Medical X-Ray Image Classification by Deep Learning with Multi-Versus Optimizer
Mannepalli et al. A cad system design based on HybridMultiscale convolutional Mantaray network for pneumonia diagnosis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant