WO2022111383A1 - Automatic CT rib counting method and device - Google Patents

Automatic CT rib counting method and device (一种CT肋骨自动计数方法及装置)

Info

Publication number
WO2022111383A1
Authority
WO
WIPO (PCT)
Prior art keywords
rib
point cloud
contour
point
layer
Prior art date
Application number
PCT/CN2021/131649
Other languages
English (en)
French (fr)
Inventor
刘锋
吴子丰
周振
俞益洲
李一鸣
乔昕
Original Assignee
北京深睿博联科技有限责任公司
杭州深睿博联科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京深睿博联科技有限责任公司 and 杭州深睿博联科技有限公司
Publication of WO2022111383A1 publication Critical patent/WO2022111383A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06M COUNTING MECHANISMS; COUNTING OF OBJECTS NOT OTHERWISE PROVIDED FOR
    • G06M1/00 Design features of general application
    • G06M1/27 Design features of general application for representing the result of count in the form of electric signals, e.g. by sensing markings on the counter drum
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30008 Bone
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30242 Counting objects in image

Definitions

  • The present invention claims priority from the application filed by the applicant on November 27, 2020, with application number CN202011356450.6, entitled "A CT Rib Automatic Counting Method and Device". The entire content of that application is incorporated herein by reference.
  • the present application relates to the field of medical image analysis, and in particular, to a CT rib automatic counting method and device.
  • The first is the rule-based method, which usually segments the ribs with a threshold or a deep learning method, extracts the rib region, processes it with some morphological regularization, and finally computes connected components, assigning each connected component a class label from top to bottom by position.
  • However, this method does not take the morphological information of the ribs into account.
  • If the first pair of ribs is not covered by the CT scan, it gives an incorrect count; and in cases of severe fracture or segmentation failure, each connected component no longer corresponds to a single rib, so it is difficult to design reasonable rules for assigning a rib number to each region.
  • The second is the voxel-based segmentation method, which usually treats rib counting as a segmentation problem and uses a deep-learning-based 2D or 3D segmentation model to predict each rib as an independent class.
  • This method can learn from a large amount of annotated rib-counting data and avoids manually designed rules.
  • However, constrained by GPU memory and computation, the model can only take part of the CT data as input, so segmentation becomes inaccurate for lack of sufficient context.
  • The segmentation network also has a large number of parameters and requires a large amount of CT data with corresponding counting labels; since fracture types and locations vary widely and individual differences between people are large, it is difficult in practice to collect training data of sufficient diversity to ensure the stability of the model.
  • Finally, because the segmentation network operates on the raw data, it has extremely high computational complexity and a huge resource overhead in actual deployment.
  • the present invention aims to provide a CT rib automatic counting method and device that overcomes the above problems or at least partially solves the above problems.
  • One aspect of the present invention provides an automatic CT rib counting method, including: segmenting the ribs in CT to obtain a rib mask corresponding to the CT; traversing each slice of the mask, treating each slice of the mask as a binary image, and extracting the rib contours; converting each rib contour in each slice into a point cloud; predicting the rib numbers with a point-cloud graph neural network to obtain point-cloud rib numbers; and inversely mapping the point-cloud rib numbers back onto the rib contours to obtain the rib number of each rib contour in each slice, completing the rib count.
  • Converting the contour of each rib in each slice into a point cloud includes converting each contour with the formula p_i^z = [φ(C_i^z), z], where p is the three-dimensional coordinate, z is the slice index, C_i^z is the contour, and φ is the mapping from a contour to a key point.
  • The mapping from a contour to a key point includes, but is not limited to: the center of gravity, the centroid, and the center point of the bounding rectangle.
  • Predicting the rib numbers with the point-cloud graph neural network to obtain the point-cloud rib numbers includes: for a point cloud containing N points, taking the point coordinates as input, passing each point through a multi-layer perceptron neural network model to obtain its feature-space representation, and finally pooling the feature vectors of the N points to form the global feature representation of the point cloud.
  • The global feature of the point cloud is concatenated with each local feature and passed through several multi-layer perceptron models to obtain the code prediction for each point.
  • The method further includes training the point-cloud graph neural network, which includes: training with annotated data, where the training process uses gradient descent to compute a loss between the predicted and true results and optimize the model parameters.
  • Part of the annotated data is obtained by editing the coordinates of real point-cloud data; the editing includes at least one of the following: simulating fractures, flipping, simulating the scanning position, simulating cervical ribs, and simulating lumbar ribs.
  • Another aspect provides an automatic CT rib counting device, comprising: a segmentation module for segmenting the ribs in CT to obtain a rib mask corresponding to the CT; an extraction module for traversing each slice of the mask, treating each slice of the mask as a binary image, and extracting the rib contours; a conversion module for converting each rib contour in each slice into a point cloud; a prediction module for predicting the rib numbers with a point-cloud graph neural network to obtain point-cloud rib numbers; and an inverse mapping module for inversely mapping the point-cloud rib numbers back onto the rib contours, obtaining the rib number of each rib contour in each slice and completing the rib count.
  • The conversion module converts the contour of each rib in each slice into a point cloud as follows: it is specifically configured to convert each contour with the formula p_i^z = [φ(C_i^z), z], where p is the three-dimensional coordinate, z is the slice index, C_i^z is the contour, and φ is the mapping from a contour to a key point.
  • The mapping from a contour to a key point includes, but is not limited to: the center of gravity, the centroid, and the center point of the bounding rectangle.
  • The prediction module predicts the rib numbers with the point-cloud graph neural network as follows, obtaining the point-cloud rib numbers: it is specifically configured to, for a point cloud containing N points, take the point coordinates as input, pass each point through a multi-layer perceptron neural network model to obtain its feature-space representation, pool the feature vectors of the N points to form the global feature representation of the point cloud, concatenate the global feature of the point cloud with each local feature, and obtain the code prediction for each point through several multi-layer perceptron models.
  • The device further includes a training module for training the point-cloud graph neural network; the training module trains the network as follows: it is specifically configured to train with annotated data, where the training process uses gradient descent to compute a loss between the predicted and true results and optimize the model parameters.
  • Part of the annotated data is obtained by editing the coordinates of real point-cloud data; the editing includes at least one of the following: simulating fractures, flipping, simulating the scanning position, simulating cervical ribs, and simulating lumbar ribs.
  • Another aspect of the present invention provides a readable medium including execution instructions; when a processor of an electronic device executes the execution instructions, the electronic device performs the above method.
  • Another aspect of the present invention provides an electronic device including a processor and a memory storing execution instructions; when the processor executes the execution instructions stored in the memory, the processor performs the above method.
  • In the counting model, the rib contours are first segmented with a segmentation model, key points are then extracted from the rib contours and converted into a point cloud, and counting is performed with a point-cloud segmentation approach.
  • Being learning-based, the method can learn from annotated data without manually designed rules, which reduces development difficulty; only a small number of points need to be computed, giving high processing efficiency; converting the rib contours into a point cloud makes it possible to simulate fractures and some congenital deformities by operating on points, reducing the dependence on the amount of training data; and once converted into a point cloud, all contours can be fed into the neural network for inference at the same time, so the relationships between different ribs are easier to model, yielding higher accuracy.
  • Fig. 1 is a flowchart of the automatic CT rib counting method provided by an embodiment of the present invention;
  • Fig. 2 is a schematic diagram of the rib counting model based on a point-cloud graph neural network in the automatic CT rib counting method provided by an embodiment of the present invention;
  • Fig. 3 is a schematic structural diagram of the automatic CT rib counting device provided by an embodiment of the present invention.
  • The present invention proposes a new rib counting method that is learnable, has a low dependence on the amount of annotated data, and is accurate and efficient.
  • The input of the present invention is a CT scan (i.e. a CT image), and the output is the contour of each rib in the CT and its corresponding rib number.
  • The present invention is implemented as follows:
  • FIG. 1 shows a flowchart of a CT rib automatic counting method provided by an embodiment of the present invention.
  • the CT rib automatic counting method provided by an embodiment of the present invention includes:
  • A traditional or deep-learning-based method is first used to segment the ribs in the CT, obtaining a rib mask M corresponding to the CT, where 1 denotes rib and 0 denotes everything else.
  • This step extracts the rib contours. Specifically, traverse each slice z of the mask, treat each slice M_{z,:,:} of the mask as a binary image, and extract the outer contours of the ribs C^z = {C_1^z, ..., C_{N_z}^z}, where N_z is the number of independent contours in that slice and each contour C_i^z is a set of point coordinates.
  • Converting the contour of each rib in each slice into a point cloud includes converting each contour with the formula p_i^z = [φ(C_i^z), z], where p is the three-dimensional coordinate, z is the slice index, C_i^z is the contour, and φ is the mapping from a contour to a key point. The mapping from a contour to a key point includes, but is not limited to: the center of gravity, the centroid, and the center point of the bounding rectangle.
  • This step converts the rib contours into a point cloud. Specifically, each rib contour in each slice is mapped to a three-dimensional coordinate p, computed from the slice index z and the contour as the concatenation of the contour key point and the slice coordinate z: p_i^z = [φ(C_i^z), z].
  • There are several ways to compute the mapping φ from a contour to a key point, such as the center of gravity, the centroid, or the center point of the bounding rectangle.
  • The present invention preferably uses the center point of the bounding rectangle as the key-point mapping, i.e. the midpoint of the minimum and maximum x and y coordinates of the contour.
  • Each contour can then be represented by a corresponding point, and all rib contour points in the CT together form a point cloud.
  • A point-cloud graph neural network can then be used to predict the rib numbers.
  • For this, the rib numbers first need to be mapped to codes, the two being in one-to-one correspondence.
  • The point-cloud rib number prediction model proposed by the present invention is independent of the specific coding scheme, and any coding should fall within the protection scope of the present invention.
  • Predicting the rib numbers with the point-cloud graph neural network to obtain the point-cloud rib numbers includes: for a point cloud containing N points, taking the point coordinates as input, passing each point through a multi-layer perceptron neural network model to obtain its feature-space representation, pooling the feature vectors of the N points to form the global feature representation of the point cloud, concatenating the global feature of the point cloud with each local feature, and obtaining the code prediction for each point through several multi-layer perceptron models.
  • The present invention uses a point-cloud-based graph neural network to predict the rib codes.
  • A specific network is used as an example to introduce the code prediction process, but since graph neural networks have different implementation forms, changing the computation units or adding or removing layers still falls within the protection scope of the present invention.
  • The network takes the point coordinates as input, and each point is passed through a multi-layer perceptron neural network model to obtain its feature-space representation.
  • The multi-layer perceptron model can be stacked several times, and the feature vectors of the N points are finally pooled to form the global feature representation of the point cloud; the global feature of the point cloud is concatenated with each local feature and passed through several further multi-layer perceptron models to obtain the code prediction for each point. Decoding the code gives the rib number corresponding to each point.
  • The automatic CT rib counting method further includes training the point-cloud graph neural network, which includes training with annotated data; the training process uses gradient descent to compute a loss between the predicted and true results and optimize the model parameters.
  • Part of the annotated data is obtained by editing the coordinates of real point-cloud data; the editing includes, but is not limited to: simulating fractures, flipping, simulating the scanning position, simulating cervical ribs, and simulating lumbar ribs.
  • The point-cloud graph neural network needs to be trained with annotated data, and only the trained parameters can be used for rib code prediction.
  • The training process uses gradient descent, computing a loss between the model's predictions and the true results and optimizing the model parameters.
  • The present invention proposes, on the basis of the actually extracted point-cloud data, to edit the point-cloud coordinates, simulate abnormal conditions, and thereby augment the training data.
  • The implementation includes, but is not limited to, simulating fractures, flipping, simulating the scanning position, simulating cervical ribs, and simulating lumbar ribs, as detailed in the embodiments below.
  • This step maps the rib numbers of the point cloud back onto the rib contours. Specifically, since each point in the point cloud corresponds one-to-one to the slice and contour of a rib, assigning the rib number of a point to its rib contour gives, for each slice, the rib number to which each rib contour belongs, completing the rib count.
  • The ribs are first segmented, the rib contours are extracted from the segmentation, the contours are then converted into a point cloud, and a graph neural network is used for inference and prediction.
  • Being learning-based, the present invention can learn automatically from annotated data, so post-processing rules do not need to be designed by hand; this increases the stability of rib counting, improves development efficiency, and reduces maintenance costs.
  • Because the graph neural network learns from the rib segmentation results, the model can still count correctly when the rib segmentation is abnormal, making the system more stable in actual operation.
  • The coordinate editing method of the present invention can simulate situations that are rare in practice, such as fractures, abnormal scanning positions and congenital deformities, improving the generalization ability of the model and reducing the cost of data collection and annotation.
  • The counting model only uses operations such as multi-layer perceptrons, and the number of points in the point cloud is small (6,000 points on average), so it runs extremely efficiently and reduces deployment costs.
  • FIG. 3 shows a schematic structural diagram of a CT rib automatic counting device provided by an embodiment of the present invention.
  • The CT rib automatic counting device applies the above method. Only the structure of the device is briefly described below; for other matters, refer to the related description in the above automatic CT rib counting method. Referring to FIG. 3, the CT rib automatic counting device provided by the embodiment of the present invention includes:
  • the segmentation module is used to segment the rib in the CT to obtain the mask of the rib corresponding to the CT;
  • the extraction module is used to traverse each layer of the mask, use each layer of the mask as a binary image, and extract the rib contour;
  • the conversion module is used to convert each rib contour in each slice into a point cloud;
  • the prediction module is used to predict the rib numbers with the point-cloud graph neural network to obtain the point-cloud rib numbers;
  • the inverse mapping module is used to inversely map the rib number of the point cloud and map it back to the rib contour to obtain the rib number to which each rib contour belongs to at each level, and complete the rib count.
  • The conversion module converts the contour of each rib in each slice into a point cloud as follows: it is specifically configured to convert each contour with the formula p_i^z = [φ(C_i^z), z], where p is the three-dimensional coordinate, z is the slice index, C_i^z is the contour, and φ is the mapping from a contour to a key point.
  • The mapping from a contour to a key point includes, but is not limited to, the center of gravity, the centroid, and the center point of the bounding rectangle.
  • The prediction module predicts the rib numbers with the point-cloud graph neural network as follows, obtaining the point-cloud rib numbers: it is specifically configured to, for a point cloud containing N points, take the point coordinates as input, pass each point through a multi-layer perceptron neural network model to obtain its feature-space representation, pool the feature vectors of the N points to form the global feature representation of the point cloud, concatenate the global feature of the point cloud with each local feature, and obtain the code prediction for each point through several multi-layer perceptron models.
  • The CT rib automatic counting device provided by the embodiment of the present invention further includes a training module for training the point-cloud graph neural network; the training module trains the network as follows: it is specifically configured to train with annotated data.
  • The training process uses gradient descent, computing a loss between the predicted and true results and optimizing the model parameters.
  • Part of the annotated data is obtained by editing the coordinates of real point-cloud data; the editing includes, but is not limited to: simulating fractures, flipping, simulating the scanning position, simulating cervical ribs, and simulating lumbar ribs.
  • In the device, the ribs are first segmented, the rib contours are extracted from the segmentation, the contours are converted into a point cloud, and a graph neural network is used for inference and prediction.
  • Being learning-based, the present invention can learn automatically from annotated data, so post-processing rules do not need to be designed by hand; this increases the stability of rib counting, improves development efficiency, and reduces maintenance costs.
  • Because the graph neural network learns from the rib segmentation results, the model can still count correctly when the rib segmentation is abnormal, making the system more stable in actual operation.
  • The coordinate editing method of the present invention can simulate situations that are rare in practice, such as fractures, abnormal scanning positions and congenital deformities, improving the generalization ability of the model and reducing the cost of data collection and annotation.
  • The counting model only uses operations such as multi-layer perceptrons, and the number of points in the point cloud is small (6,000 points on average), so it runs extremely efficiently and reduces deployment costs.
  • Embodiments of the present application provide an electronic device.
  • the electronic device includes a processor, and optionally an internal bus, a network interface, and a memory.
  • The memory may include RAM (high-speed random-access memory) and may also include non-volatile memory, such as at least one disk memory.
  • The electronic device may also include hardware required by other services.
  • The processor, the network interface and the memory can be connected to each other through an internal bus, which can be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like.
  • The memory is used to store execution instructions, i.e. a computer program that can be executed.
  • The memory may include RAM and non-volatile memory, and provides execution instructions and data to the processor.
  • a processor may be an integrated circuit chip with signal processing capabilities.
  • each step of the above-mentioned method can be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • The processor reads the corresponding execution instructions from the non-volatile memory into the memory and then runs them, and may also obtain the corresponding execution instructions from other devices, so as to form the automatic CT rib counting method at the logic level.
  • the processor executes the execution instructions stored in the memory, so as to implement the CT rib automatic counting method provided in any embodiment of the present application through the executed execution instructions.
  • the process performed by the CT rib automatic counting method provided in the above embodiment may be applied to a processor, or implemented by a processor.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • An embodiment of the present application also provides a readable medium storing execution instructions; when the stored execution instructions are executed by the processor of an electronic device, the electronic device can execute the automatic CT rib counting method provided in any embodiment of the present application.
  • The readable medium is specifically used to perform the above automatic CT rib counting method.
  • the electronic device described in each of the foregoing embodiments may be a computer.
  • the embodiments of the present application may be provided as a method or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware.
  • The CT rib automatic counting method and device use image recognition technology combined with a point-cloud segmentation model to determine the positions of the ribs and the information between them, and make full use of the automated program processing available in computer technology, so that fracture types, fracture sites and congenital deformities can be located and assessed much more efficiently.
  • The resulting products can be mass-produced and quickly applied to systems or scenarios with a high demand for fracture diagnosis.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Medical Informatics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

An automatic CT rib counting method and device. The method includes: segmenting the ribs in a CT scan to obtain a rib mask corresponding to the CT; traversing each slice of the mask, treating each slice of the mask as a binary image and extracting the rib contours; converting each rib contour in each slice into a point cloud; predicting the rib numbers with a point-cloud graph neural network to obtain point-cloud rib numbers; and inversely mapping the point-cloud rib numbers back onto the rib contours to obtain the rib number of each rib contour in each slice, completing the rib count.

Description

Automatic CT rib counting method and device
The present invention claims priority from the application filed by the applicant on November 27, 2020, with application number CN202011356450.6, entitled "一种CT肋骨自动计数方法及装置" (A CT rib automatic counting method and device). The entire content of that application is incorporated herein by reference.
Technical Field
The present application relates to the field of medical image analysis, and in particular to an automatic CT rib counting method and device.
Background of the Invention
Diagnosing, describing and reporting fractures (and other bone lesions) found in CT (Computed Tomography) is one of the important tasks of radiologists when reading scans. When a fracture (lesion) is found, the lesion must be described according to its anatomical location for follow-up analysis or for reference by other departments. With the spread of thin-slice CT, physicians can detect subtle fractures (lesions), but because of the increased number of slices, confirming the location of a lesion has become a difficult problem, especially when describing the ribs. A person usually has 12 pairs of ribs, and each rib has its own number; from top to bottom they are the 1st rib, ..., the 12th rib. Since there is no reliable reference point, a physician has to page from the first slice of the CT down to the slice containing the lesion to determine the location of a single lesion, and has to repeat this if there are multiple lesions. This process is highly error-prone and seriously affects reading efficiency, so an automatic rib counting method is essential for improving physicians' efficiency and the quality of diagnosis and treatment.
There are currently two main types of automatic rib counting methods. The first is rule-based: the ribs are usually segmented by thresholding or a deep learning method to extract the rib region, which is then processed with some morphological regularization, and finally connected components are computed and each connected component is assigned a class label from top to bottom by position. However, this approach does not consider the morphological information of the ribs: when the first pair of ribs is not covered by the CT scan it gives an incorrect count, and in cases of severe fracture or segmentation failure each connected component no longer corresponds to a single rib, so it is difficult to design reasonable rules for assigning a rib number to each region. (An illustrative sketch of this baseline follows.)
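For comparison, a minimal sketch of the rule-based baseline described above (the HU threshold, the use of scipy.ndimage and the top-down sorting heuristic are the editor's assumptions, not taken from this disclosure):

```python
import numpy as np
from scipy import ndimage

def rule_based_rib_count(ct_volume, bone_threshold=200):
    """Baseline: threshold -> morphological cleanup -> connected components -> top-down labels."""
    mask = ct_volume > bone_threshold                   # rough bone mask by HU threshold
    mask = ndimage.binary_opening(mask, iterations=1)   # simple morphological regularization
    labels, num = ndimage.label(mask)                   # each component is assumed to be one rib
    centroids = ndimage.center_of_mass(mask, labels, range(1, num + 1))
    order = np.argsort([c[0] for c in centroids])       # sort components from top to bottom (axis 0)
    rib_ids = {int(comp) + 1: rank + 1 for rank, comp in enumerate(order)}
    return labels, rib_ids  # breaks when components no longer correspond one-to-one to ribs
```

The failure modes noted above (a missing first rib, or fractures splitting one rib into several components) are exactly what this one-component-per-rib assignment cannot handle.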
The second type is voxel-segmentation-based: rib counting is treated as a segmentation problem, and a deep-learning-based 2D or 3D segmentation model predicts each rib as an independent class. This approach can learn from a large amount of annotated rib-counting data, avoiding hand-designed rules. However, constrained by GPU memory and computation, the model can only take part of the CT data as input, so segmentation becomes inaccurate for lack of sufficient context. The segmentation network also has a large number of parameters and requires a large amount of CT data with corresponding counting annotations; given the wide variety of fracture types and locations and the large differences between individuals, it is difficult in practice to collect training data of sufficient diversity to guarantee model stability. Finally, because the segmentation network operates on the raw data, it has extremely high computational complexity and incurs a huge resource overhead in actual deployment.
Summary of the Invention
The present invention aims to provide an automatic CT rib counting method and device that overcomes the above problems, or at least partially solves them.
To achieve the above objective, the technical solution of the present invention is specifically implemented as follows:
One aspect of the present invention provides an automatic CT rib counting method, including: segmenting the ribs in a CT scan to obtain a rib mask corresponding to the CT; traversing each slice of the mask, treating each slice of the mask as a binary image and extracting the rib contours; converting each rib contour in each slice into a point cloud; predicting the rib numbers with a point-cloud graph neural network to obtain point-cloud rib numbers; and inversely mapping the point-cloud rib numbers back onto the rib contours to obtain the rib number of each rib contour in each slice, completing the rib count.
Converting the contour of each rib in each slice into a point cloud includes converting each contour with the formula p_i^z = [φ(C_i^z), z], where p is the three-dimensional coordinate, z is the slice index, C_i^z is the contour, and φ is the mapping from a contour to a key point.
The mapping from a contour to a key point includes, but is not limited to: the center of gravity, the centroid, and the center point of the bounding rectangle.
Predicting the rib numbers with a point-cloud graph neural network to obtain the point-cloud rib numbers includes: for a point cloud containing N points, taking the point coordinates as input, passing each point through a multi-layer perceptron neural network model to obtain its feature-space representation, pooling the feature vectors of the N points to form the global feature representation of the point cloud, concatenating the global feature of the point cloud with each local feature, and obtaining the code prediction for each point through several multi-layer perceptron models.
The method further includes training the point-cloud graph neural network, which includes training with annotated data; the training process uses gradient descent, computing a loss between the predicted and true results and optimizing the model parameters.
Part of the annotated data is obtained by editing the coordinates of real point-cloud data; the editing includes at least one of the following: simulating fractures, flipping, simulating the scanning position, simulating cervical ribs, and simulating lumbar ribs.
Another aspect of the present invention provides an automatic CT rib counting device, including: a segmentation module for segmenting the ribs in a CT scan to obtain a rib mask corresponding to the CT; an extraction module for traversing each slice of the mask, treating each slice of the mask as a binary image and extracting the rib contours; a conversion module for converting each rib contour in each slice into a point cloud; a prediction module for predicting the rib numbers with a point-cloud graph neural network to obtain point-cloud rib numbers; and an inverse mapping module for inversely mapping the point-cloud rib numbers back onto the rib contours to obtain the rib number of each rib contour in each slice, completing the rib count.
The conversion module converts the contour of each rib in each slice into a point cloud as follows: it is specifically configured to convert each contour with the formula p_i^z = [φ(C_i^z), z], where p is the three-dimensional coordinate, z is the slice index, C_i^z is the contour, and φ is the mapping from a contour to a key point.
The mapping from a contour to a key point includes, but is not limited to: the center of gravity, the centroid, and the center point of the bounding rectangle.
The prediction module predicts the rib numbers with the point-cloud graph neural network as follows, obtaining the point-cloud rib numbers: it is specifically configured to, for a point cloud containing N points, take the point coordinates as input, pass each point through a multi-layer perceptron neural network model to obtain its feature-space representation, pool the feature vectors of the N points to form the global feature representation of the point cloud, concatenate the global feature of the point cloud with each local feature, and obtain the code prediction for each point through several multi-layer perceptron models.
The device further includes a training module for training the point-cloud graph neural network; the training module trains the network as follows: it is specifically configured to train with annotated data, where the training process uses gradient descent, computing a loss between the predicted and true results and optimizing the model parameters.
Part of the annotated data is obtained by editing the coordinates of real point-cloud data; the editing includes at least one of the following: simulating fractures, flipping, simulating the scanning position, simulating cervical ribs, and simulating lumbar ribs.
Another aspect of the present invention provides a readable medium including execution instructions; when a processor of an electronic device executes the execution instructions, the electronic device performs the above method.
Another aspect of the present invention provides an electronic device including a processor and a memory storing execution instructions; when the processor executes the execution instructions stored in the memory, the processor performs the above method.
It can thus be seen that, with the automatic CT rib counting method and device provided by the present invention, the learning-based counting model first segments the rib contours with a segmentation model, then extracts key points from the rib contours and converts them into a point cloud, and finally counts using a point-cloud segmentation approach. Being learning-based, the method can learn from annotated data without manually designed rules, which reduces development difficulty; only a small number of points need to be computed, giving high processing efficiency; converting the rib contours into a point cloud makes it possible to simulate fractures and some congenital deformities by operating on points, reducing the dependence on the amount of training data; and once converted into a point cloud, all contours can be fed into the neural network for inference at the same time, so the relationships between different ribs are easier to model, yielding higher accuracy.
Further effects of the above non-conventional embodiments will be described below in combination with the specific embodiments.
Brief Description of the Drawings
To explain the embodiments of the present application or the existing technical solutions more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in the present application; other drawings can be obtained from them by those of ordinary skill in the art without creative work.
Fig. 1 is a flowchart of the automatic CT rib counting method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the rib counting model based on a point-cloud graph neural network in the automatic CT rib counting method provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of the automatic CT rib counting device provided by an embodiment of the present invention.
Modes of Carrying Out the Invention
To make the technical objectives, technical solutions and beneficial effects of the present invention clearer, the specific embodiments of the present invention are described clearly and completely below with reference to the drawings. The described embodiments are only some of the embodiments of the present invention, not all of them; based on the specific embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative work fall within the protection scope of the present invention.
The present invention proposes a new rib counting method that is learnable, has a low dependence on the amount of annotated data, and is accurate and efficient. The input of the present invention is a CT scan (i.e. a CT image), and the output is the contour of each rib in the CT together with its corresponding rib number. Specifically, the present invention is implemented as follows.
Fig. 1 shows a flowchart of the automatic CT rib counting method provided by an embodiment of the present invention. Referring to Fig. 1, the method provided by the embodiment includes:
S1: segment the ribs in the CT to obtain the rib mask corresponding to the CT.
Specifically, a traditional or deep-learning-based method is first used to segment the ribs in the CT, obtaining a rib mask M corresponding to the CT, where 1 denotes rib and 0 denotes everything else.
S2: traverse each slice of the mask, treat each slice of the mask as a binary image, and extract the rib contours.
This step extracts the rib contours. Specifically, traverse each slice z of the mask, treat each slice M_{z,:,:} of the mask as a binary image, and extract the outer contours of the ribs C^z = {C_1^z, ..., C_{N_z}^z}, where N_z is the number of independent contours in that slice and each contour C_i^z is a set of point coordinates {(x_1, y_1), (x_2, y_2), ...}.
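As an illustration only (not part of the original disclosure), the per-slice contour extraction of S2 could be sketched as follows in Python, assuming the rib mask has already been produced by a segmentation model and that OpenCV 4 is used for contour finding; all function and variable names here are the editor's assumptions:

```python
import cv2
import numpy as np

def extract_rib_contours(rib_mask):
    """rib_mask: (Z, H, W) binary array, 1 = rib, 0 = other.
    Returns {z: [contour_0, contour_1, ...]}, each contour a (K, 2) array of (x, y) points."""
    contours_per_slice = {}
    for z in range(rib_mask.shape[0]):
        slice_mask = (rib_mask[z] > 0).astype(np.uint8)
        # Outer contours only; each rib cross-section is treated as one closed region.
        contours, _ = cv2.findContours(slice_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        contours_per_slice[z] = [c.reshape(-1, 2) for c in contours]
    return contours_per_slice
```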
S3: convert each rib contour in each slice into a point cloud.
As an optional implementation of the embodiment of the present invention, converting the contour of each rib in each slice into a point cloud includes converting each contour with the formula p_i^z = [φ(C_i^z), z], where p is the three-dimensional coordinate, z is the slice index, C_i^z is the contour, and φ is the mapping from a contour to a key point. The mapping from a contour to a key point includes, but is not limited to: the center of gravity, the centroid, and the center point of the bounding rectangle.
This step converts the rib contours into a point cloud. Specifically, each rib contour in each slice is mapped to a three-dimensional coordinate p, which is computed from the slice index z and the contour C_i^z and is the concatenation of the contour key point and the slice coordinate z: p_i^z = [φ(C_i^z), z].
There are several ways to compute the contour-to-key-point mapping φ, such as the center of gravity, the centroid, or the center point of the bounding rectangle. The present invention preferably takes the center of the bounding rectangle as the key-point mapping, i.e. the key point k_i^z = φ(C_i^z) with x_k = (min_j x_j + max_j x_j) / 2 and y_k = (min_j y_j + max_j y_j) / 2, where (x_j, y_j) are the points of the contour C_i^z.
In this way, each contour can be represented by a corresponding point, and all rib contour points in the CT together form a point cloud.
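A minimal sketch of this conversion, using the bounding-rectangle center as the key point (illustrative only; the names and the returned (z, contour id) index are the editor's assumptions, kept so that predictions can later be mapped back):

```python
import numpy as np

def contours_to_point_cloud(contours_per_slice):
    """Map every contour (slice z, contour C_i^z) to one 3D point p = [phi(C), z]."""
    points, index = [], []          # index records (z, contour id) for the later inverse mapping
    for z, contours in contours_per_slice.items():
        for i, contour in enumerate(contours):
            x_min, y_min = contour.min(axis=0)
            x_max, y_max = contour.max(axis=0)
            # phi: center point of the bounding rectangle of the contour
            points.append(((x_min + x_max) / 2.0, (y_min + y_max) / 2.0, float(z)))
            index.append((z, i))
    return np.asarray(points, dtype=np.float32), index
```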
S4: predict the rib numbers with a point-cloud graph neural network to obtain the point-cloud rib numbers.
This step uses a point-cloud graph neural network to predict the rib numbers. It first requires mapping the rib numbers to codes, the two being in one-to-one correspondence. The coding can be implemented in several ways: for example, the 24 ribs can be coded as a 24-dimensional one-hot code, where the position of the element with value 1 indicates the rib number, the left 1st-12th ribs being numbered 1-12 and the right 1st-12th ribs being numbered 13-24; a 13-dimensional code can also be used, in which the first 12 dimensions indicate the rib number and the last dimension indicates the left or right side. The point-cloud rib number prediction model proposed by the present invention is independent of the specific coding scheme, and any coding should fall within the protection scope of the present invention.
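For concreteness, a small sketch of the 24-way one-hot coding described above (illustrative only; the method itself is coding-agnostic):

```python
import numpy as np

NUM_CLASSES = 24  # left ribs 1-12 -> classes 0-11, right ribs 1-12 -> classes 12-23

def encode_rib(rib_number, side):
    """rib_number in 1..12, side in {'left', 'right'} -> 24-dimensional one-hot code."""
    cls = (rib_number - 1) + (0 if side == "left" else 12)
    code = np.zeros(NUM_CLASSES, dtype=np.float32)
    code[cls] = 1.0
    return code

def decode_rib(code):
    """Argmax of a predicted 24-dimensional code -> (rib_number, side)."""
    cls = int(np.argmax(code))
    return (cls % 12) + 1, ("left" if cls < 12 else "right")
```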
As an optional implementation of the embodiment of the present invention, predicting the rib numbers with a point-cloud graph neural network to obtain the point-cloud rib numbers includes: for a point cloud containing N points, taking the point coordinates as input, passing each point through a multi-layer perceptron neural network model to obtain its feature-space representation, pooling the feature vectors of the N points to form the global feature representation of the point cloud, concatenating the global feature of the point cloud with each local feature, and obtaining the code prediction for each point through several multi-layer perceptron models.
For a point cloud containing N points, the present invention uses a point-cloud-based graph neural network to predict the rib codes. The specific code prediction process is introduced below with reference to Fig. 2, using a specific network as an example; since graph neural networks have different implementation forms, changes to the computation units or to the number of layers still fall within the protection scope of the present invention.
Referring to Fig. 2, the network takes the point coordinates as input, and each point is passed through a multi-layer perceptron neural network model to obtain its feature-space representation; the multi-layer perceptron model can be stacked several times, and the feature vectors of the N points are finally pooled to form the global feature representation of the point cloud. The global feature of the point cloud is concatenated with each local feature and then passed through several further multi-layer perceptron models to obtain the code prediction for each point. Decoding the code gives the rib number corresponding to each point.
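A minimal PyTorch sketch of a network with this shared-MLP / pooling / concatenation structure (the layer widths, the use of max pooling and of PyTorch itself are the editor's assumptions; the description fixes only the overall structure, not a specific implementation):

```python
import torch
import torch.nn as nn

class PointCloudRibNet(nn.Module):
    """Per-point MLP -> global pooling -> concatenate global feature -> per-point code prediction."""
    def __init__(self, num_classes=24):
        super().__init__()
        self.local_mlp = nn.Sequential(          # shared MLP applied independently to every point
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
        )
        self.head = nn.Sequential(               # MLPs on the concatenated [local, global] features
            nn.Linear(256 + 256, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, points):                   # points: (B, N, 3)
        local = self.local_mlp(points)           # per-point features, (B, N, 256)
        global_feat = local.max(dim=1).values    # pooled global feature, (B, 256)
        global_feat = global_feat.unsqueeze(1).expand(-1, local.size(1), -1)
        return self.head(torch.cat([local, global_feat], dim=-1))  # (B, N, num_classes)
```

Training such a sketch with gradient descent against the annotated codes (e.g. with a standard classification loss, which the original text does not fix) corresponds to the training procedure described below.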
As an optional implementation of the embodiment of the present invention, the automatic CT rib counting method provided by the embodiment of the present invention further includes training the point-cloud graph neural network, which includes training with annotated data; the training process uses gradient descent, computing a loss between the predicted and true results and optimizing the model parameters. Part of the annotated data is obtained by editing the coordinates of real point-cloud data; the editing includes, but is not limited to: simulating fractures, flipping, simulating the scanning position, simulating cervical ribs, and simulating lumbar ribs.
Specifically, the point-cloud graph neural network needs to be trained with annotated data, and only the trained parameters can be used for rib code prediction. The training process uses gradient descent, computing a loss between the model's predictions and the true results and optimizing the model parameters.
To improve the generalization ability of the point-cloud graph neural network and save annotation cost, the present invention proposes, on the basis of the actually extracted point-cloud data, to edit the point-cloud coordinates, simulate abnormal conditions and thereby augment the training data. The implementation includes, but is not limited to, the following items (a brief code sketch follows the list):
1. Simulating fractures: randomly select some adjacent points of a rib and change their positions with a random rotation and translation;
2. Flipping: implemented as follows: a. compute the mean x̄ of the x coordinates of all points; b. subtract the mean x̄ from the x coordinate of every point; c. negate the x coordinate of every point; d. add the mean x̄ back to every point; e. swap the left and right sides in the class codes;
3. Simulating the scanning position: randomly rotate the whole point cloud about the three coordinate axes;
4. Simulating cervical ribs: copy some points of the first rib and add them above the first rib by subtracting a random value from the z coordinate of the selected points;
5. Simulating lumbar ribs: copy some points of the 12th rib and add them below the 12th rib by adding a random value to the z coordinate of the selected points.
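An illustrative sketch of two of these edits, flipping and simulating the scanning position, on a point array (the rotation range and helper names are the editor's assumptions):

```python
import numpy as np

def flip_left_right(points, labels):
    """points: (N, 3) array of (x, y, z); labels: (N,) class ids, left ribs 0-11, right ribs 12-23."""
    x_mean = points[:, 0].mean()                          # a. mean of all x coordinates
    flipped = points.copy()
    flipped[:, 0] = -(flipped[:, 0] - x_mean) + x_mean    # b-d. mirror x about the mean
    swapped = np.where(labels < 12, labels + 12, labels - 12)  # e. swap left/right class codes
    return flipped, swapped

def random_scan_pose(points, max_angle_deg=10.0):
    """Simulate the scanning position: small random rotations about the three coordinate axes."""
    ax, ay, az = np.deg2rad(np.random.uniform(-max_angle_deg, max_angle_deg, size=3))
    rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    return points @ (rz @ ry @ rx).T
```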
S5: inversely map the point-cloud rib numbers back onto the rib contours to obtain the rib number of each rib contour in each slice, completing the rib count.
This step maps the rib numbers of the point cloud back onto the rib contours. Specifically, since each point in the point cloud corresponds one-to-one to the slice and contour of a rib, assigning the rib number of a point to its rib contour gives, for each slice, the rib number to which each rib contour belongs, completing the rib count.
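Continuing the earlier sketches, the inverse mapping only needs the (slice, contour) index recorded when the point cloud was built (illustrative only):

```python
def map_labels_back(index, point_labels):
    """index: list of (z, contour_id) pairs from contours_to_point_cloud;
    point_labels: per-point predicted codes. Returns {z: {contour_id: rib_label}}."""
    contour_labels = {}
    for (z, contour_id), label in zip(index, point_labels):
        contour_labels.setdefault(z, {})[contour_id] = int(label)
    return contour_labels
```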
It can thus be seen that the automatic CT rib counting method provided by the embodiment of the present invention first segments the ribs, extracts the rib contours from the segmentation, converts the contours into a point cloud, and uses a graph neural network for inference and prediction. Being learning-based, the present invention can learn automatically from annotated data, so no post-processing rules need to be designed by hand, which increases the stability of rib counting, improves development efficiency and reduces maintenance costs.
In addition, because the graph neural network learns from the rib segmentation results, the graph neural network model can still count correctly when the rib segmentation is abnormal, making the system more stable in actual operation.
Since the point cloud consists only of coordinates, the coordinate editing method of the present invention can simulate situations that are rare in practice, such as fractures, abnormal scanning positions and congenital deformities, improving the generalization ability of the model and reducing the cost of data collection and annotation. The counting model only uses operations such as multi-layer perceptrons, and the number of points in the point cloud is small (6,000 points on average), so it runs extremely efficiently and reduces deployment costs.
Fig. 3 shows a schematic structural diagram of the automatic CT rib counting device provided by an embodiment of the present invention. The device applies the above method; only its structure is briefly described below, and for other matters reference may be made to the related description in the automatic CT rib counting method above. Referring to Fig. 3, the automatic CT rib counting device provided by the embodiment of the present invention includes:
a segmentation module for segmenting the ribs in a CT scan to obtain the rib mask corresponding to the CT;
an extraction module for traversing each slice of the mask, treating each slice of the mask as a binary image and extracting the rib contours;
a conversion module for converting each rib contour in each slice into a point cloud;
a prediction module for predicting the rib numbers with a point-cloud graph neural network to obtain point-cloud rib numbers;
an inverse mapping module for inversely mapping the point-cloud rib numbers back onto the rib contours to obtain the rib number of each rib contour in each slice, completing the rib count.
As an optional implementation of the embodiment of the present invention, the conversion module converts the contour of each rib in each slice into a point cloud as follows: the conversion module is specifically configured to convert each contour with the formula p_i^z = [φ(C_i^z), z], where p is the three-dimensional coordinate, z is the slice index, C_i^z is the contour, and φ is the mapping from a contour to a key point.
As an optional implementation of the embodiment of the present invention, the mapping from a contour to a key point includes, but is not limited to: the center of gravity, the centroid, and the center point of the bounding rectangle.
As an optional implementation of the embodiment of the present invention, the prediction module predicts the rib numbers with the point-cloud graph neural network as follows, obtaining the point-cloud rib numbers: the prediction module is specifically configured to, for a point cloud containing N points, take the point coordinates as input, pass each point through a multi-layer perceptron neural network model to obtain its feature-space representation, pool the feature vectors of the N points to form the global feature representation of the point cloud, concatenate the global feature of the point cloud with each local feature, and obtain the code prediction for each point through several multi-layer perceptron models.
As an optional implementation of the embodiment of the present invention, the automatic CT rib counting device provided by the embodiment of the present invention further includes a training module for training the point-cloud graph neural network; the training module trains the network as follows: the training module is specifically configured to train with annotated data, where the training process uses gradient descent, computing a loss between the predicted and true results and optimizing the model parameters.
As an optional implementation of the embodiment of the present invention, part of the annotated data is obtained by editing the coordinates of real point-cloud data; the editing includes, but is not limited to: simulating fractures, flipping, simulating the scanning position, simulating cervical ribs, and simulating lumbar ribs.
It can thus be seen that the automatic CT rib counting device provided by the embodiment of the present invention first segments the ribs, extracts the rib contours from the segmentation, converts the contours into a point cloud, and uses a graph neural network for inference and prediction. Being learning-based, the present invention can learn automatically from annotated data, so no post-processing rules need to be designed by hand, which increases the stability of rib counting, improves development efficiency and reduces maintenance costs.
In addition, because the graph neural network learns from the rib segmentation results, the graph neural network model can still count correctly when the rib segmentation is abnormal, making the system more stable in actual operation.
Since the point cloud consists only of coordinates, the coordinate editing method of the present invention can simulate situations that are rare in practice, such as fractures, abnormal scanning positions and congenital deformities, improving the generalization ability of the model and reducing the cost of data collection and annotation. The counting model only uses operations such as multi-layer perceptrons, and the number of points in the point cloud is small (6,000 points on average), so it runs extremely efficiently and reduces deployment costs.
An embodiment of the present application provides an electronic device. At the hardware level, the electronic device includes a processor and, optionally, an internal bus, a network interface and a memory. The memory may include RAM (high-speed random-access memory) and may also include non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required by other services.
The processor, the network interface and the memory can be connected to each other through the internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus, and so on.
The memory is used to store execution instructions, i.e. a computer program that can be executed. The memory may include RAM and non-volatile memory and provides execution instructions and data to the processor.
The processor may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above method can be completed by a hardware integrated logic circuit in the processor or by instructions in software form. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. It can implement or execute the methods, steps and logic block diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and so on.
In one possible implementation, the processor reads the corresponding execution instructions from the non-volatile memory into the memory and then runs them, and may also obtain the corresponding execution instructions from other devices, so as to form the automatic CT rib counting method at the logic level. The processor executes the execution instructions stored in the memory, so as to implement, through the executed instructions, the automatic CT rib counting method provided in any embodiment of the present application. The process performed by the automatic CT rib counting method provided in the above embodiment can be applied to, or implemented by, a processor.
The steps of the method disclosed in combination with the embodiments of the present application can be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software modules can be located in a storage medium mature in the art, such as random-access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory or registers. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
An embodiment of the present application also provides a readable medium. The readable storage medium stores execution instructions, and when the stored execution instructions are executed by the processor of an electronic device, the electronic device can execute the automatic CT rib counting method provided in any embodiment of the present application, and specifically performs the automatic CT rib counting method described above.
The electronic device described in each of the foregoing embodiments may be a computer.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware.
Each embodiment in the present application is described in a progressive manner; for the same or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the device embodiment is basically similar to the method embodiment, its description is relatively simple, and reference may be made to the description of the method embodiment for the relevant parts.
It should also be noted that the terms "comprise", "include" or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.
The above are only embodiments of the present application and are not intended to limit the present application. Various modifications and changes can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.
Industrial Applicability
The automatic CT rib counting method and device provided by the present invention use image recognition technology combined with a point-cloud segmentation model to determine the positions of the ribs and the information between them, and make full use of the automated program processing available in computer technology, so that fracture types, fracture sites and congenital deformities can be located and assessed much more efficiently. The resulting products can be mass-produced and quickly applied to systems or scenarios with a high demand for fracture diagnosis.

Claims (14)

  1. An automatic CT rib counting method, characterized by comprising:
    segmenting the ribs in a CT scan to obtain a rib mask corresponding to the CT;
    traversing each slice of the mask, treating each slice of the mask as a binary image, and extracting rib contours;
    converting each rib contour in each slice into a point cloud;
    predicting the rib numbers with a point-cloud graph neural network to obtain point-cloud rib numbers;
    inversely mapping the point-cloud rib numbers back onto the rib contours to obtain the rib number of each rib contour in each slice, completing the rib count.
  2. The method according to claim 1, characterized in that converting the contour of each rib in each slice into a point cloud comprises:
    converting the contour of each rib in each slice into a point cloud with the formula p_i^z = [φ(C_i^z), z],
    where p is the three-dimensional coordinate, z is the slice index, C_i^z is the contour, and φ is the mapping from a contour to a key point.
  3. The method according to claim 2, characterized in that the mapping from a contour to a key point includes, but is not limited to: the center of gravity, the centroid, and the center point of the bounding rectangle.
  4. The method according to claim 1, characterized in that predicting the rib numbers with a point-cloud graph neural network to obtain point-cloud rib numbers comprises:
    for a point cloud containing N points, taking the point coordinates as input, obtaining the feature-space representation of each point through a multi-layer perceptron neural network model, pooling the feature vectors of the N points to form the global feature representation of the point cloud, concatenating the global feature of the point cloud with each local feature, and obtaining the code prediction for each point through several multi-layer perceptron models.
  5. The method according to claim 1 or 4, characterized by further comprising: training the point-cloud graph neural network;
    wherein training the point-cloud graph neural network comprises: training with annotated data, the training process using gradient descent, computing a loss between the predicted and true results, and optimizing the model parameters.
  6. The method according to claim 5, characterized in that part of the annotated data is obtained by editing the coordinates of real point-cloud data; wherein editing the point cloud includes at least one of the following: simulating fractures, simulating flips, simulating the scanning position, simulating cervical ribs, and simulating lumbar ribs.
  7. An automatic CT rib counting device, characterized by comprising:
    a segmentation module for segmenting the ribs in a CT scan to obtain a rib mask corresponding to the CT;
    an extraction module for traversing each slice of the mask, treating each slice of the mask as a binary image, and extracting rib contours;
    a conversion module for converting each rib contour in each slice into a point cloud;
    a prediction module for predicting the rib numbers with a point-cloud graph neural network to obtain point-cloud rib numbers;
    an inverse mapping module for inversely mapping the point-cloud rib numbers back onto the rib contours to obtain the rib number of each rib contour in each slice, completing the rib count.
  8. The device according to claim 7, characterized in that the conversion module converts the contour of each rib in each slice into a point cloud as follows:
    the conversion module is specifically configured to convert the contour of each rib in each slice into a point cloud with the formula p_i^z = [φ(C_i^z), z],
    where p is the three-dimensional coordinate, z is the slice index, C_i^z is the contour, and φ is the mapping from a contour to a key point.
  9. The device according to claim 8, characterized in that the mapping from a contour to a key point includes, but is not limited to: the center of gravity, the centroid, and the center point of the bounding rectangle.
  10. The device according to claim 7, characterized in that the prediction module predicts the rib numbers with a point-cloud graph neural network as follows, obtaining the point-cloud rib numbers:
    the prediction module is specifically configured to, for a point cloud containing N points, take the point coordinates as input, obtain the feature-space representation of each point through a multi-layer perceptron neural network model, pool the feature vectors of the N points to form the global feature representation of the point cloud, concatenate the global feature of the point cloud with each local feature, and obtain the code prediction for each point through several multi-layer perceptron models.
  11. The device according to claim 7 or 10, characterized by further comprising: a training module for training the point-cloud graph neural network;
    the training module trains the point-cloud graph neural network as follows: the training module is specifically configured to train with annotated data, the training process using gradient descent, computing a loss between the predicted and true results, and optimizing the model parameters.
  12. The device according to claim 11, characterized in that part of the annotated data is obtained by editing the coordinates of real point-cloud data; wherein editing the point cloud includes at least one of the following: simulating fractures, flipping, simulating the scanning position, simulating cervical ribs, and simulating lumbar ribs.
  13. A readable medium, characterized in that the readable medium comprises execution instructions, and when a processor of an electronic device executes the execution instructions, the electronic device performs the method according to any one of claims 1 to 6.
  14. An electronic device, characterized in that the electronic device comprises a processor and a memory storing execution instructions, and when the processor executes the execution instructions stored in the memory, the processor performs the method according to any one of claims 1 to 6.
PCT/CN2021/131649 2020-11-27 2021-11-19 Automatic CT rib counting method and device WO2022111383A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011356450.6A CN112529849B (zh) 2020-11-27 2020-11-27 Automatic CT rib counting method and device
CN202011356450.6 2020-11-27

Publications (1)

Publication Number Publication Date
WO2022111383A1 (zh)

Family

ID=74994054

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/131649 WO2022111383A1 (zh) 2020-11-27 2021-11-19 Automatic CT rib counting method and device

Country Status (2)

Country Link
CN (1) CN112529849B (zh)
WO (1) WO2022111383A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529849B (zh) * 2020-11-27 2024-01-19 北京深睿博联科技有限责任公司 一种ct肋骨自动计数方法及装置
CN114049358A (zh) * 2021-11-17 2022-02-15 苏州体素信息科技有限公司 肋骨实例分割、计数与定位的方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555860A (zh) * 2018-06-04 2019-12-10 青岛海信医疗设备股份有限公司 医学图像中肋骨区域标注的方法、电子设备和存储介质
CN110992376A (zh) * 2019-11-28 2020-04-10 北京推想科技有限公司 基于ct图像的肋骨分割方法、装置、介质及电子设备
US20200334897A1 (en) * 2019-04-18 2020-10-22 Zebra Medical Vision Ltd. Systems and methods for reconstruction of 3d anatomical images from 2d anatomical images
CN111915620A (zh) * 2020-06-19 2020-11-10 杭州深睿博联科技有限公司 一种ct肋骨分割方法及装置
CN112529849A (zh) * 2020-11-27 2021-03-19 北京深睿博联科技有限责任公司 一种ct肋骨自动计数方法及装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101452577B (zh) * 2008-11-26 2010-12-29 沈阳东软医疗系统有限公司 一种肋骨自动标定的方法及装置
WO2019041262A1 (en) * 2017-08-31 2019-03-07 Shenzhen United Imaging Healthcare Co., Ltd. SYSTEM AND METHOD FOR IMAGE SEGMENTATION
CN110866905A (zh) * 2019-11-12 2020-03-06 苏州大学 一种肋骨识别与标注方法
CN111091605B (zh) * 2020-03-19 2020-07-07 南京安科医疗科技有限公司 一种肋骨可视化方法、识别方法及计算机可读存储介质


Also Published As

Publication number Publication date
CN112529849B (zh) 2024-01-19
CN112529849A (zh) 2021-03-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21896884

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21896884

Country of ref document: EP

Kind code of ref document: A1