CN112200780B - Bone tissue positioning method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112200780B
CN112200780B (application CN202011052786.3A)
Authority
CN
China
Prior art keywords
point cloud
target
dimensional scanning
boundary
image
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN202011052786.3A
Other languages
Chinese (zh)
Other versions
CN112200780A (en)
Inventor
翁馨
Current Assignee (listed assignees may be inaccurate)
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN202011052786.3A priority Critical patent/CN112200780B/en
Publication of CN112200780A publication Critical patent/CN112200780A/en
Application granted granted Critical
Publication of CN112200780B publication Critical patent/CN112200780B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5229Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B6/5235Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from the same or different ionising radiation imaging techniques, e.g. PET and CT
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Optics & Photonics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)

Abstract

The application relates to a bone tissue positioning method and apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring target volume data, where the target volume data includes a plurality of two-dimensional scan images and each two-dimensional scan image includes multiple types of bone tissue; converting the two-dimensional scan images to obtain a target point cloud corresponding to the bone tissue, where the target point cloud consists of point cloud points corresponding to the multiple types of bone tissue; and identifying bone tissue according to the target point cloud to obtain the target areas where the various bone tissues are located. The method can shorten bone tissue positioning time.

Description

Bone tissue positioning method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of tissue positioning technologies, and in particular, to a bone tissue positioning method, an apparatus, a computer device, and a storage medium.
Background
With the development of medical imaging technology, organ localization serves as a basic preprocessing step for medical images and plays a very important role in applications such as image registration, organ segmentation, and lesion detection.
In the related art, two-dimensional slice images of a CT (Computed Tomography) scan are typically classified, and organ positioning is performed based on the classification results. However, classification near organ edges is often not accurate enough, and the process takes a long time because slices in three directions (axial, coronal, and sagittal) all need to be classified.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a bone tissue positioning method, apparatus, computer device, and storage medium that can shorten the positioning time.
A method of bone tissue localization, the method comprising:
acquiring target volume data; the target volume data includes a plurality of two-dimensional scan images; each two-dimensional scanning image comprises multiple types of bone tissues;
converting the two-dimensional scanning images to obtain target point clouds corresponding to various bone tissues; the target point cloud consists of point cloud points corresponding to multiple types of bone tissues;
and identifying bone tissues according to the target point cloud to obtain target areas where various bone tissues are located.
In one embodiment, the converting the plurality of two-dimensional scan images to obtain the target point cloud corresponding to the plurality of bone tissues includes:
threshold dividing processing is carried out on each two-dimensional scanning image, so that mask images corresponding to each two-dimensional scanning image are obtained;
respectively extracting boundaries of each mask image to obtain boundary points of bone tissues in each mask image;
and respectively carrying out coordinate conversion processing on each boundary point according to a preset mapping relation to obtain point cloud points corresponding to each boundary point, and forming a target point cloud by a plurality of point cloud points.
In one embodiment, the performing the thresholding processing on each two-dimensional scan image to obtain a mask image corresponding to each two-dimensional scan image includes:
setting the pixel value of a first pixel point in each two-dimensional scanning image as a first value, and setting the pixel value of a second pixel point in each two-dimensional scanning image as a second value;
and generating mask images corresponding to the two-dimensional scanning images according to the pixel values of each pixel point in the two-dimensional scanning images.
In one embodiment, the extracting the boundary of each mask image to obtain the boundary point of the bone tissue in each mask image includes:
carrying out connected-component processing on pixel points with pixel values of a first value in each mask image to obtain a connected region in each mask image;
and extracting the boundary of the connected region in each mask image to obtain boundary points of bone tissues in each mask image.
In one embodiment, before the coordinate conversion processing is performed on each boundary point according to the preset mapping relationship, the method further includes:
sampling the plurality of boundary points to obtain a preset number of target boundary points;
correspondingly, coordinate conversion processing is respectively carried out on each boundary point according to a preset mapping relation to obtain point cloud points corresponding to each boundary point, and a plurality of point cloud points form a target point cloud, and the method comprises the following steps:
And respectively carrying out coordinate conversion processing on each target boundary point according to the mapping relation to obtain point cloud points corresponding to each target boundary point, and forming a target point cloud by a plurality of point cloud points.
In one embodiment, before the coordinate conversion processing is performed on each boundary point according to the preset mapping relationship, the method further includes:
and establishing a mapping relation according to the inter-layer resolution among the two-dimensional scanning images and the intra-layer resolution of each two-dimensional scanning image.
In one embodiment, the performing bone tissue recognition according to the target point cloud to obtain the target area where the various bone tissues are located includes:
inputting the target point cloud into a pre-trained recognition network to obtain a recognition result output by the recognition network; the identification result comprises identification frames and classification marks corresponding to various bone tissues;
and carrying out coordinate conversion processing on the identification frames corresponding to the various bone tissues according to the mapping relation to obtain target areas where the various bone tissues are located in the target volume data.
In one embodiment, the method further comprises:
and adjusting the target area where each bone tissue is positioned according to the size of the two-dimensional scanning image so as to enable the target area where each bone tissue is positioned to be matched with the size of the two-dimensional scanning image.
A bone tissue positioning device, the device comprising:
the volume data acquisition module is used for acquiring target volume data; the target volume data includes a plurality of two-dimensional scan images; each two-dimensional scanning image comprises multiple types of bone tissues;
the point cloud conversion module is used for carrying out conversion processing on the plurality of two-dimensional scanning images to obtain target point clouds corresponding to various bone tissues; the target point cloud consists of point cloud points corresponding to various bone tissues;
and the positioning module is used for identifying bone tissues according to the target point cloud to obtain target areas where various bone tissues are located.
In one embodiment, the point cloud conversion module includes:
the threshold dividing sub-module is used for carrying out threshold dividing processing on each two-dimensional scanning image to obtain mask images corresponding to each two-dimensional scanning image;
the boundary extraction submodule is used for respectively carrying out boundary extraction on each mask image to obtain boundary points of bone tissues in each mask image;
and the coordinate conversion sub-module is used for respectively carrying out coordinate conversion processing on each boundary point according to a preset mapping relation to obtain point cloud points corresponding to each boundary point, and forming a target point cloud by a plurality of point cloud points.
In one embodiment, the threshold dividing sub-module is specifically configured to set a pixel value of a first pixel point in each two-dimensional scanned image to a first value, and set a pixel value of a second pixel point in each two-dimensional scanned image to a second value; and generating mask images corresponding to the two-dimensional scanning images according to the pixel values of each pixel point in the two-dimensional scanning images.
In one embodiment, the boundary extraction submodule is specifically configured to perform connected-component processing on the pixel points whose pixel value is the first value in each mask image, so as to obtain a connected region in each mask image; and to extract the boundary of the connected region in each mask image to obtain boundary points of bone tissue in each mask image.
In one embodiment, the method further comprises:
the sampling processing module is used for sampling the plurality of boundary points to obtain a preset number of target boundary points;
correspondingly, the point cloud conversion module is specifically configured to perform coordinate conversion processing on each target boundary point according to the mapping relationship, obtain a point cloud point corresponding to each target boundary point, and form a target point cloud by a plurality of point cloud points.
In one embodiment, the method further comprises:
and the mapping relation establishing module is used for establishing a mapping relation according to the interlayer resolution among the two-dimensional scanning images and the intra-layer resolution of each two-dimensional scanning image.
In one embodiment, the positioning module is specifically configured to input the target point cloud into a pre-trained recognition network, so as to obtain a recognition result output by the recognition network; the identification result comprises identification frames and classification marks corresponding to various bone tissues; and carrying out coordinate conversion processing on the identification frames corresponding to the various bone tissues according to the mapping relation to obtain target areas where the various bone tissues are located in the target volume data.
In one embodiment, the method further comprises:
the region adjustment module is used for adjusting the target region where each bone tissue is located according to the size of the two-dimensional scanning image so as to enable the target region where each bone tissue is located to be matched with the size of the two-dimensional scanning image.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring target volume data; the target volume data includes a plurality of two-dimensional scan images; each two-dimensional scanning image comprises multiple types of bone tissues;
converting the two-dimensional scanning images to obtain target point clouds corresponding to various bone tissues; the target point cloud consists of point cloud points corresponding to multiple types of bone tissues;
and identifying bone tissues according to the target point cloud to obtain target areas where various bone tissues are located.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring target volume data; the target volume data includes a plurality of two-dimensional scan images; each two-dimensional scanning image comprises multiple types of bone tissues;
converting the two-dimensional scanning images to obtain target point clouds corresponding to various bone tissues; the target point cloud consists of point cloud points corresponding to multiple types of bone tissues;
And identifying bone tissues according to the target point cloud to obtain target areas where various bone tissues are located.
In the bone tissue positioning method, apparatus, computer device, and storage medium, the computer device acquires target volume data, where the target volume data includes a plurality of two-dimensional scan images and each two-dimensional scan image includes multiple types of bone tissue; then, the computer device converts the two-dimensional scan images to obtain a target point cloud corresponding to the various bone tissues; and finally, the computer device performs bone tissue identification according to the target point cloud to obtain the target areas where the various bone tissues are located. In the embodiment of the disclosure, since the multiple types of bone tissue in the two-dimensional scan images are converted into a three-dimensional point cloud, bone tissue identification can be performed on the three-dimensional point cloud, so that multiple types of bone tissue can be accurately positioned in a single identification pass, which shortens the positioning time and reduces the amount of computing resources occupied.
Drawings
FIG. 1 is a diagram of an application environment of a bone tissue positioning method in one embodiment;
FIG. 2 is a flow chart of a bone tissue positioning method according to one embodiment;
FIG. 3 is a flow chart illustrating the steps of converting a plurality of two-dimensional scanned images in one embodiment;
FIG. 4a is a schematic diagram of a two-dimensional scanned image in one embodiment;
FIG. 4b is a schematic illustration of a mask image in one embodiment;
FIG. 4c is a schematic diagram of a point cloud image in one embodiment;
FIG. 5 is a flowchart illustrating a bone tissue identification procedure according to a target point cloud in one embodiment;
FIG. 6 is a schematic diagram of an architecture of an identification network in one embodiment;
FIG. 7a is a schematic diagram of an identification box in a point cloud image in one embodiment;
FIG. 7b is a schematic diagram of a target region in a two-dimensional scanned image in one embodiment;
FIG. 8 is a flow chart of a bone tissue positioning method according to another embodiment;
FIG. 9 is a block diagram of a bone tissue positioning device according to one embodiment;
fig. 10 is an internal structural view of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The bone tissue positioning method provided by the application can be applied to the application environment shown in FIG. 1. The application environment may include various medical imaging systems, such as a CT system or an MR (Magnetic Resonance) system. The medical imaging system comprises a scanning device 101 and a computer device 102. The scanning device 101 may include, but is not limited to, an X-ray tube, a detector, and a gantry; the computer device 102 may include at least one terminal, which may include a processor, a display, and a memory.
In one embodiment, as shown in fig. 2, there is provided a bone tissue positioning method, which is exemplified by the application of the method to the computer device in fig. 1, and includes the following steps:
in step 201, target volume data is acquired.
Wherein the target volume data comprises a plurality of two-dimensional scan images; each two-dimensional scan image includes multiple types of bone tissue.
The computer equipment controls the scanning equipment to scan, then the computer equipment performs image reconstruction according to the scanning data of the scanning equipment to obtain a plurality of two-dimensional scanning images, and the two-dimensional scanning images are sequenced according to a certain sequence to form target volume data.
The scanning device may perform a CT scan or a Magnetic Resonance (MR) scan. Correspondingly, the two-dimensional scan image may be a CT image or a Magnetic Resonance (MR) image. The embodiments of the present disclosure do not limit the scanning manner and the scanned image in detail.
Step 202, performing conversion processing on the plurality of two-dimensional scanning images to obtain target point clouds corresponding to the bone tissues of multiple classes.
The target point cloud consists of point cloud points corresponding to multiple types of bone tissues.
After the computer equipment acquires a plurality of two-dimensional scanning images, conversion processing is carried out on each two-dimensional scanning image. Specifically, images of multiple types of bone tissues are extracted from each two-dimensional scanning image, then pixel points in the images of the various types of bone tissues are converted into point cloud points corresponding to the various types of bone tissues, and point cloud points corresponding to the multiple types of bone tissues form a target point cloud.
For example, the computer device acquires two-dimensional scan image 1 to two-dimensional scan image 10; then, images of bone tissues such as spine, rib, arm bone, and hand bone are extracted from the two-dimensional scan image 1 to the two-dimensional scan image 10, respectively. And then, converting pixel points in images of various bone tissues into point cloud points to obtain the point cloud points of bone tissues such as the spine, the ribs, the arm bones, the hand bones and the like, and forming a target point cloud.
And 203, identifying bone tissues according to the target point cloud to obtain target areas where various bone tissues are located.
The computer equipment can preset a target detection model, and after the target point cloud is obtained, bone tissue identification is carried out by using the target detection model, so that target areas where various bone tissues are located are obtained, namely, the positioning of the bone tissues is realized, wherein the target areas where the various bone tissues are located are three-dimensional areas. The target detection model may employ a deep learning model, a neural network model, or the like. The embodiment of the disclosure does not limit the identification mode.
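The patent does not fix a particular detection model, but once the model outputs an identification box in point-cloud (physical) coordinates, that box must be mapped back into the volume to give the three-dimensional target area. The sketch below illustrates this back-mapping under assumed, purely illustrative pixel and slice spacings; the function name and defaults are hypothetical, not from the patent:

```python
def box_to_voxel(box_min, box_max, pixel_spacing=(0.8, 0.8), slice_spacing=1.5):
    """Invert the voxel-to-physical mapping used when building the point
    cloud: convert one detected 3-D box (physical x, y, z corners) back
    to (column, row, slice) index ranges in the target volume data.
    The spacing defaults are illustrative assumptions."""
    def to_index(point):
        x, y, z = point
        return (round(x / pixel_spacing[1]),   # column index within a slice
                round(y / pixel_spacing[0]),   # row index within a slice
                round(z / slice_spacing))      # slice index in the volume
    return to_index(box_min), to_index(box_max)
```

With the default spacings, a physical box corner at (8.0, 8.0, 15.0) maps to voxel indices (10, 10, 10).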
In the bone tissue positioning method, the computer device acquires target volume data, where the target volume data includes a plurality of two-dimensional scan images and each two-dimensional scan image includes multiple types of bone tissue; then, the computer device converts the two-dimensional scan images to obtain a target point cloud corresponding to the various bone tissues; and finally, the computer device performs bone tissue identification according to the target point cloud to obtain the target areas where the various bone tissues are located. In the embodiment of the disclosure, since the multiple types of bone tissue in the two-dimensional scan images are converted into a three-dimensional point cloud, bone tissue identification can be performed on the three-dimensional point cloud, so that multiple types of bone tissue can be accurately positioned in a single identification pass, which shortens the positioning time and reduces the amount of computing resources occupied.
In one embodiment, as shown in fig. 3, the step of converting the plurality of two-dimensional scan images to obtain the target point cloud corresponding to the plurality of types of bone tissue may include:
step 301, performing threshold value division processing on each two-dimensional scanning image to obtain mask images corresponding to each two-dimensional scanning image.
In the process of converting the two-dimensional scanning images by the computer equipment, threshold dividing processing of pixel values is carried out on each two-dimensional scanning image. In practical application, different thresholds can be selected according to imaging information such as manufacturers of medical image equipment and reconstruction kernels of two-dimensional scanning images; the adaptive threshold may also be calculated based on algorithms such as statistical mean and standard deviation.
In one embodiment, the computer device determines a first pixel point and a second pixel point in each two-dimensional scanned image according to a preset threshold. The computer device then sets the pixel value of the first pixel in each two-dimensional scanned image to a first value and the pixel value of the second pixel in each two-dimensional scanned image to a second value. Finally, the computer equipment generates mask images corresponding to the two-dimensional scanning images according to the pixel values of each pixel point in the two-dimensional scanning images.
For CT scan images, the ray absorption value of the first pixel point is greater than the preset threshold and the ray absorption value of the second pixel point is less than or equal to the preset threshold; the HU (Hounsfield Unit) value reflects the degree to which tissue such as muscle and bone absorbs X-rays. For other scan images, for example a PET image, the opposite convention may apply: the image intensity value of the first pixel point is smaller than the preset threshold and that of the second pixel point is greater than or equal to the preset threshold.
In practical application, for the two-dimensional scan image shown in fig. 4a, if the ray absorption value of the pixel point 1 is greater than the preset threshold, the pixel point 1 is the first pixel point; if the ray absorption value of the pixel 100 is less than or equal to the preset threshold, the pixel 100 is a second pixel. Similarly, whether each pixel point in the two-dimensional scanning image is a first pixel point or a second pixel point is determined. After that, the computer device sets the pixel value of the first pixel point to 255 and the pixel value of the second pixel point to 0. Next, a mask image corresponding to the two-dimensional scan image is generated from the set pixel values, as shown in fig. 4 b. And for other two-dimensional scanning images, the same processing mode is adopted, so that corresponding mask images can be obtained.
The embodiment of the disclosure does not limit the preset threshold value, the first value and the second value, and can be selected according to actual conditions.
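The thresholding step described above can be sketched as follows. The 150 HU threshold and the 255/0 first/second values are illustrative assumptions; as stated above, the patent does not fix these values:

```python
import numpy as np

def threshold_mask(slice_hu, threshold=150.0, first_value=255, second_value=0):
    """Binarize one two-dimensional scan slice: pixels whose HU value
    exceeds the threshold (candidate bone tissue) are set to the first
    value, all other pixels to the second value, yielding the mask image.
    Threshold and mask values are illustrative assumptions."""
    mask = np.where(np.asarray(slice_hu) > threshold, first_value, second_value)
    return mask.astype(np.uint8)
```

For example, a slice value of 200 HU becomes 255 in the mask, while 10 HU becomes 0.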
And step 302, respectively extracting boundaries of each mask image to obtain boundary points of bone tissues in each mask image.
After the computer device obtains the mask images corresponding to the two-dimensional scan images, boundary extraction is performed on each mask image. The specific process includes the following steps: the computer performs connected-component processing on the pixel points whose pixel value is the first value in each mask image to obtain a connected region in each mask image; and then extracts the boundary of the connected region in each mask image to obtain boundary points of bone tissue in each mask image.
For the mask image shown in fig. 4b, the pixel points with pixel value 255 are subjected to connected-component processing, so that a connected region in the mask image is obtained; then, boundary extraction is performed on the connected region to obtain the boundary points of the bone tissue in the mask image.
In practice, the two-dimensional scan image may be a contrast-enhanced image, in which tissue enhanced by the contrast agent can also appear in the connected region of the mask image. If boundary points of the connected region were not extracted, the pixels whose value is the first value would include not only pixels of bone tissue but also contrast-enhanced tissue such as the heart and kidneys, so those pixels could not represent the surface shape of the bone tissue well. Extracting only the boundary points of the connected region reduces the proportion of contrast-enhanced tissue in the target point cloud, so the subsequently acquired point cloud points better characterize the surface shape of the bone tissue.
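The boundary-point extraction can be sketched with a simplified 4-neighbourhood test: a foreground pixel is a boundary point if at least one of its four neighbours is background. For brevity this sketch omits the explicit connected-component labeling step (which could be done, e.g., with `scipy.ndimage.label`):

```python
import numpy as np

def boundary_points(mask, first_value=255):
    """Return the (row, col) coordinates of foreground pixels that touch
    at least one background pixel in their 4-neighbourhood; interior
    pixels of the foreground region are discarded, keeping only the
    bone-tissue boundary. Simplified sketch without component labeling."""
    fg = np.asarray(mask) == first_value
    padded = np.pad(fg, 1, constant_values=False)
    # A pixel is interior when all four of its neighbours are foreground.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere(fg & ~interior)
```

On a solid 5x5 foreground square, only the 16 perimeter pixels are returned; the centre pixel is interior and discarded.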
And 303, respectively carrying out coordinate conversion processing on each boundary point according to a preset mapping relation to obtain point cloud points corresponding to each boundary point, and forming a target point cloud by a plurality of point cloud points.
The mapping relation is used for representing the coordinate conversion relation between the boundary points and the point cloud points.
The computer equipment sets a mapping relation in advance according to the coordinate conversion relation between the boundary points and the point cloud points. After the boundary points of the bone tissues in the mask image are extracted, the computer equipment performs coordinate conversion processing on each boundary point according to the mapping relation, and converts the coordinates of the boundary points into coordinates of point cloud points, so that the point cloud points corresponding to each boundary point are obtained. Finally, the target point cloud is composed of a plurality of point cloud points, as shown in fig. 4 c.
The target point cloud may be stored in an N×3 format, where N is the number of point cloud points and 3 corresponds to the three-dimensional coordinates of each point cloud point.
In the step of performing conversion processing on the plurality of two-dimensional scanning images to obtain the target point cloud corresponding to the multiple classes of bone tissue, threshold division processing is performed on each two-dimensional scanning image to obtain the mask image corresponding to each two-dimensional scanning image; boundary extraction is performed on each mask image to obtain the boundary points of the bone tissue in each mask image; and coordinate conversion processing is performed on each boundary point according to a preset mapping relation to obtain the point cloud point corresponding to each boundary point, a plurality of point cloud points forming the target point cloud. Through the embodiment of the disclosure, coordinate conversion processing is performed on the boundary points extracted from the two-dimensional scanning images, so that a target point cloud corresponding to multiple types of bone tissue is obtained, and bone tissue positioning can subsequently be performed on the target point cloud by using a neural network. Compared with the prior art, the positioning time can be shortened and the occupation of computing resources reduced.
In one embodiment, as shown in fig. 5, the step of identifying bone tissue according to the target point cloud to obtain the target area where the various bone tissues are located includes:
step 401, inputting the target point cloud into a pre-trained recognition network to obtain a recognition result output by the recognition network.
The identification result comprises identification frames and classification identifiers corresponding to the various bone tissues; the classification identifier is used to characterize the class of the bone tissue.
The computer device pre-trains the recognition network, which can use VoteNet. As shown in FIG. 6, the network structure is divided into two parts: the first part generates votes from the input points, and the second part derives box proposals and their categories from the votes; finally, 3D non-maximum suppression (NMS) yields the final result. The identification network may also be another neural network; the embodiment of the disclosure does not limit the structure of the identification network.
After the target point cloud is obtained, the computer equipment inputs the target point cloud in the N×3 format into the recognition network, and the recognition network outputs the identification frames and classification identifiers corresponding to the various bone tissues. The identification frames corresponding to the various bone tissues are three-dimensional identification frames; the computer equipment projects the three-dimensional identification frames of the various bone tissues onto the coronal and sagittal planes, obtaining an image as shown in fig. 7a, where the numbers at the edges of the identification frames are the classification identifiers.
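Projecting a three-dimensional identification frame onto the coronal or sagittal plane amounts to dropping one axis of the box. A minimal sketch follows; the axis convention (x: left-right, y: front-back, z: head-foot) and the (x0, y0, z0, x1, y1, z1) box layout are assumptions, since the patent does not fix a coordinate convention:

```python
def project_box(box3d, plane):
    """Project a 3-D identification frame onto the coronal or sagittal plane.

    box3d: (x0, y0, z0, x1, y1, z1) corner coordinates (assumed layout).
    Returns a 2-D box (u0, v0, u1, v1) in the chosen plane.
    """
    x0, y0, z0, x1, y1, z1 = box3d
    if plane == "coronal":    # drop the front-back (y) axis
        return (x0, z0, x1, z1)
    if plane == "sagittal":   # drop the left-right (x) axis
        return (y0, z0, y1, z1)
    raise ValueError(f"unknown plane: {plane}")
```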
And step 402, performing coordinate conversion processing on the identification frames corresponding to the various bone tissues according to the mapping relation to obtain target areas where the various bone tissues are located in the target volume data.
After the recognition network outputs the recognition frames of the bone tissues of various types, the computer equipment performs coordinate conversion processing on the three-dimensional recognition frames in the target point cloud according to the preset mapping relation, and converts the point cloud points of the three-dimensional recognition frames into pixel points in the target volume data to obtain target areas where the bone tissues of various types are located in the target volume data. Wherein the target area where each kind of bone tissue is located is a three-dimensional area, and the computer device performs the projection processing of coronal and sagittal positions on the three-dimensional area where each kind of bone tissue is located, so that an image as shown in fig. 7b can be obtained.
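The conversion of the recognition frames back into pixel positions of the target volume data can be sketched as the inverse of a linear, spacing-based mapping; the linear form and the rounding to the nearest index are assumptions, as the patent only states that the same mapping relation is reused:

```python
def point_to_pixel(point, in_plane_spacing, inter_layer_spacing):
    """Inverse coordinate conversion: point cloud coordinate -> (row, col, slice).

    in_plane_spacing: (row_mm, col_mm) intra-layer resolution (assumed linear);
    inter_layer_spacing: distance between adjacent scan images in mm.
    """
    x, y, z = point
    col = round(x / in_plane_spacing[1])        # nearest pixel column
    row = round(y / in_plane_spacing[0])        # nearest pixel row
    slice_index = round(z / inter_layer_spacing)  # nearest scan image
    return row, col, slice_index
```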
In the step of identifying the bone tissue according to the target point cloud to obtain the target areas where the various bone tissues are located, the computer equipment inputs the target point cloud into a pre-trained recognition network to obtain the recognition result output by the recognition network, and performs coordinate conversion processing on the identification frames corresponding to the various bone tissues according to the mapping relation to obtain the target areas where the various bone tissues are located in the target volume data. In the embodiment of the disclosure, the bone tissue is identified by the recognition network, so that multiple types of bone tissue can be accurately positioned in a single identification pass, shortening the positioning time and reducing the occupation of computing resources.
In one embodiment, as shown in fig. 8, there is provided a bone tissue positioning method, which is exemplified by the application of the method to the computer device in fig. 1, including the steps of:
in step 501, target volume data is acquired.
Wherein the target volume data comprises a plurality of two-dimensional scan images; each two-dimensional scan image includes multiple types of bone tissue.
Step 502, performing threshold value division processing on each two-dimensional scanning image to obtain mask images corresponding to each two-dimensional scanning image.
In one embodiment, a pixel value of a first pixel point in each two-dimensional scanning image is set to a first value, and a pixel value of a second pixel point in each two-dimensional scanning image is set to a second value; and generating mask images corresponding to the two-dimensional scanning images according to the pixel values of each pixel point in the two-dimensional scanning images.
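A minimal sketch of this threshold division step, assuming a fixed bone threshold of 150 HU and the common choice of 255/0 for the first and second values (the patent leaves all three unspecified and device-dependent):

```python
import numpy as np

BONE_THRESHOLD = 150          # assumed threshold in HU; device-dependent in practice
FIRST_VALUE, SECOND_VALUE = 255, 0   # assumed first/second pixel values

def make_mask(scan_slice):
    """Generate the mask image for one two-dimensional scan image:
    pixels above the threshold become FIRST_VALUE, the rest SECOND_VALUE."""
    mask = np.where(scan_slice > BONE_THRESHOLD, FIRST_VALUE, SECOND_VALUE)
    return mask.astype(np.uint8)

slice_2d = np.array([[100, 200], [300, 50]], dtype=np.int16)
mask = make_mask(slice_2d)
```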
In step 503, boundary extraction is performed on each mask image, so as to obtain boundary points of bone tissue in each mask image.
In one embodiment, the pixel points with the pixel value of the first value in each mask image are subjected to connected-component processing to obtain the connected regions in each mask image; and boundary extraction is performed on the connected region in each mask image to obtain the boundary points of the bone tissue in each mask image.
And 504, sampling the plurality of boundary points to obtain a preset number of target boundary points.
Because the number of boundary points is large, performing the subsequent coordinate conversion processing on all of them would involve an excessive amount of data. Therefore, the plurality of boundary points are sampled to obtain a preset number of target boundary points, reducing the amount of data processed during coordinate conversion.
For example, sampling processing is performed on a plurality of boundary points, resulting in 80000 target boundary points.
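One plausible way to obtain a preset number of target boundary points is uniform random sampling without replacement; the patent does not name a sampling scheme, so this strategy (and the fixed seed used for reproducibility) is an assumption:

```python
import numpy as np

def sample_boundary_points(points, target_count=80000, seed=0):
    """Sample a preset number of target boundary points from an (M, k) array.

    If fewer than target_count points exist, all points are kept.
    """
    rng = np.random.default_rng(seed)
    if len(points) <= target_count:
        return points
    idx = rng.choice(len(points), size=target_count, replace=False)
    return points[idx]

pts = np.arange(300).reshape(100, 3)          # 100 toy boundary points
sampled = sample_boundary_points(pts, target_count=40)
```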
In step 505, a mapping relationship is established according to the inter-layer resolution between the plurality of two-dimensional scan images and the intra-layer resolution of each two-dimensional scan image.
An inter-layer resolution exists between the plurality of two-dimensional scan images in the target volume data, and an intra-layer resolution exists within each two-dimensional scan image. For each pixel point, the computer equipment can obtain the pixel coordinates and the three-dimensional coordinates of the pixel point according to the inter-layer resolution and the intra-layer resolution, and then establish the coordinate conversion relation of the pixel point from the pixel coordinates and the three-dimensional coordinates. Finally, the computer equipment establishes the mapping relation according to the coordinate conversion relations of the plurality of pixel points.
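Under the common assumption that the mapping is linear in the intra-layer pixel spacing and the inter-layer spacing (the patent only states that the mapping is built from these two resolutions), the coordinate conversion relation for a single pixel point can be sketched as:

```python
import numpy as np

def pixel_to_point(row, col, slice_index, in_plane_spacing, inter_layer_spacing):
    """Map a boundary pixel to a three-dimensional point cloud coordinate.

    in_plane_spacing: (row_mm, col_mm) intra-layer resolution;
    inter_layer_spacing: distance between adjacent scan images in mm.
    The linear form and axis ordering are assumptions for illustration.
    """
    x = col * in_plane_spacing[1]          # in-plane column -> x (mm)
    y = row * in_plane_spacing[0]          # in-plane row -> y (mm)
    z = slice_index * inter_layer_spacing  # scan image index -> z (mm)
    return np.array([x, y, z])
```

Applying this to every target boundary point and stacking the results yields the N×3 target point cloud described above.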
And step 506, respectively carrying out coordinate conversion processing on each target boundary point according to the mapping relation to obtain point cloud points corresponding to each target boundary point, and forming a target point cloud by a plurality of point cloud points.
And the computer equipment performs coordinate conversion processing on the extracted target boundary points according to the mapping relation, namely, converting the pixel coordinates of the target boundary points into three-dimensional coordinates to obtain corresponding point cloud points. And finally, forming a target point cloud by a plurality of point cloud points.
And 507, inputting the target point cloud into a pre-trained recognition network to obtain a recognition result output by the recognition network.
The identification result comprises identification frames and classification identifiers corresponding to the various bone tissues; the classification identifier is used to characterize the class of the bone tissue.
And step 508, performing coordinate conversion processing on the identification frames corresponding to the various bone tissues according to the mapping relation to obtain a target area where the various bone tissues are located in the target volume data.
Step 509, adjusting the target area where each bone tissue is located according to the size of the two-dimensional scan image, so as to match the target area where each bone tissue is located with the size of the two-dimensional scan image.
The target area where each kind of bone tissue is located, as obtained by the computer equipment, may partially exceed the boundary of the two-dimensional scanning image. The target area where each kind of bone tissue is located is therefore clipped according to the size of the two-dimensional scanning image so that it matches the size of the two-dimensional scanning image. As shown in fig. 7b, the target area where the skull is located partially exceeds the boundary of the CT image, so the target area where the skull is located is clipped.
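This adjustment can be sketched as a simple clamp of the box corners to the volume bounds; the (z0, y0, x0, z1, y1, x1) box layout and the 0-based inclusive index range are assumed conventions:

```python
import numpy as np

def clip_region(box, image_shape):
    """Clip a 3-D target region so it does not exceed the scanned volume.

    box: (z0, y0, x0, z1, y1, x1) corner indices (assumed layout);
    image_shape: (depth, height, width) of the target volume data.
    """
    lo = np.maximum(box[:3], 0)                          # clamp lower corner
    hi = np.minimum(box[3:], np.array(image_shape) - 1)  # clamp upper corner
    return np.concatenate([lo, hi])

clipped = clip_region(np.array([-2, 10, 5, 40, 600, 520]), (30, 512, 512))
```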
It should be understood that, although the steps in the flowcharts of figs. 2-8 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least a portion of the steps in figs. 2-8 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times; these sub-steps or stages are not necessarily performed sequentially, and may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
It should be understood that, although the above embodiments of the present application take bone tissue as the processing object, the method of the present application is equally applicable to various soft-tissue-related organs of humans or animals; the specific processing is not described in detail here.
In one embodiment, as shown in fig. 9, there is provided a bone tissue positioning device comprising:
a volume data acquisition module 601, configured to acquire target volume data; the target volume data includes a plurality of two-dimensional scan images; each two-dimensional scanning image comprises multiple types of bone tissues;
The point cloud conversion module 602 is configured to perform conversion processing on the plurality of two-dimensional scan images to obtain a target point cloud corresponding to the plurality of bone tissues; the target point cloud consists of point cloud points corresponding to various bone tissues;
the positioning module 603 is configured to identify bone tissue according to the target point cloud, and obtain a target area where various bone tissues are located.
In one embodiment, the point cloud conversion module 602 includes:
the threshold dividing sub-module is used for carrying out threshold dividing processing on each two-dimensional scanning image to obtain mask images corresponding to each two-dimensional scanning image;
the boundary extraction submodule is used for respectively carrying out boundary extraction on each mask image to obtain boundary points of bone tissues in each mask image;
the coordinate conversion sub-module is used for respectively carrying out coordinate conversion processing on each boundary point according to a preset mapping relation to obtain point cloud points corresponding to each boundary point, and forming a target point cloud by a plurality of point cloud points; the mapping relation is used for representing the coordinate conversion relation between the boundary points and the point cloud points.
In one embodiment, the threshold dividing sub-module is specifically configured to set a pixel value of a first pixel point in each two-dimensional scanned image to a first value, and set a pixel value of a second pixel point in each two-dimensional scanned image to a second value; and generating mask images corresponding to the two-dimensional scanning images according to the pixel values of each pixel point in the two-dimensional scanning images.
In one embodiment, the boundary extraction submodule is specifically configured to perform connected-component processing on the pixel points with the pixel value of the first value in each mask image to obtain the connected region in each mask image, and to extract the boundary of the connected region in each mask image to obtain the boundary points of the bone tissue in each mask image.
In one embodiment, the method further comprises:
the sampling processing module is used for sampling the plurality of boundary points to obtain a preset number of target boundary points;
correspondingly, the point cloud conversion module is specifically configured to perform coordinate conversion processing on each target boundary point according to the mapping relationship, obtain a point cloud point corresponding to each target boundary point, and form a target point cloud by a plurality of point cloud points.
In one embodiment, the method further comprises:
and the mapping relation establishing module is used for establishing a mapping relation according to the interlayer resolution among the two-dimensional scanning images and the intra-layer resolution of each two-dimensional scanning image.
In one embodiment, the positioning module 603 is specifically configured to input the target point cloud into a pre-trained recognition network to obtain a recognition result output by the recognition network, the recognition result comprising identification frames and classification identifiers corresponding to the various bone tissues, the classification identifier being used to characterize the class of the bone tissue; and to perform coordinate conversion processing on the identification frames corresponding to the various bone tissues according to the mapping relation to obtain the target areas where the various bone tissues are located in the target volume data.
In one embodiment, the method further comprises:
the region adjustment module is used for adjusting the target region where each bone tissue is located according to the size of the two-dimensional scanning image so as to enable the target region where each bone tissue is located to be matched with the size of the two-dimensional scanning image.
For specific limitations of the bone tissue positioning device, reference is made to the above limitations of the bone tissue positioning method, and no further description is given here. The various modules in the bone tissue positioning device described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and an internal structure diagram thereof may be as shown in fig. 10. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a bone tissue positioning method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 10 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring target volume data; the target volume data includes a plurality of two-dimensional scan images; each two-dimensional scanning image comprises multiple types of bone tissues;
converting the two-dimensional scanning images to obtain target point clouds corresponding to various bone tissues; the target point cloud consists of point cloud points corresponding to multiple types of bone tissues;
and identifying bone tissues according to the target point cloud to obtain target areas where various bone tissues are located.
In one embodiment, the processor when executing the computer program further performs the steps of:
threshold dividing processing is carried out on each two-dimensional scanning image, so that mask images corresponding to each two-dimensional scanning image are obtained;
Respectively extracting boundaries of each mask image to obtain boundary points of bone tissues in each mask image;
and respectively carrying out coordinate conversion processing on each boundary point according to a preset mapping relation to obtain point cloud points corresponding to each boundary point, and forming a target point cloud by a plurality of point cloud points.
In one embodiment, the processor when executing the computer program further performs the steps of:
setting the pixel value of a first pixel point in each two-dimensional scanning image as a first value, and setting the pixel value of a second pixel point in each two-dimensional scanning image as a second value;
and generating mask images corresponding to the two-dimensional scanning images according to the pixel values of each pixel point in the two-dimensional scanning images.
In one embodiment, the processor when executing the computer program further performs the steps of:
carrying out connected-component processing on pixel points with pixel values of a first value in each mask image to obtain a connected region in each mask image;
and extracting the boundary of the connected region in each mask image to obtain boundary points of bone tissues in each mask image.
In one embodiment, the processor when executing the computer program further performs the steps of:
sampling the plurality of boundary points to obtain a preset number of target boundary points;
And respectively carrying out coordinate conversion processing on each target boundary point according to the mapping relation to obtain point cloud points corresponding to each target boundary point, and forming a target point cloud by a plurality of point cloud points.
In one embodiment, the processor when executing the computer program further performs the steps of:
and establishing a mapping relation according to the inter-layer resolution among the two-dimensional scanning images and the intra-layer resolution of each two-dimensional scanning image.
In one embodiment, the processor when executing the computer program further performs the steps of:
inputting the target point cloud into a pre-trained recognition network to obtain a recognition result output by the recognition network; the identification result comprises identification frames and classification marks corresponding to various bone tissues;
and carrying out coordinate conversion processing on the identification frames corresponding to the various bone tissues according to the mapping relation to obtain target areas where the various bone tissues are located in the target volume data.
In one embodiment, the processor when executing the computer program further performs the steps of:
and adjusting the target area where each bone tissue is positioned according to the size of the two-dimensional scanning image so as to enable the target area where each bone tissue is positioned to be matched with the size of the two-dimensional scanning image.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
Acquiring target volume data; the target volume data includes a plurality of two-dimensional scan images; each two-dimensional scanning image comprises multiple types of bone tissues;
converting the two-dimensional scanning images to obtain target point clouds corresponding to various bone tissues; the target point cloud consists of point cloud points corresponding to multiple types of bone tissues;
and identifying bone tissues according to the target point cloud to obtain target areas where various bone tissues are located.
In one embodiment, the computer program when executed by the processor further performs the steps of:
threshold dividing processing is carried out on each two-dimensional scanning image, so that mask images corresponding to each two-dimensional scanning image are obtained;
respectively extracting boundaries of each mask image to obtain boundary points of bone tissues in each mask image;
coordinate conversion processing is carried out on each boundary point according to a preset mapping relation, point cloud points corresponding to each boundary point are obtained, and a plurality of point cloud points form a target point cloud; the mapping relation is used for representing the coordinate conversion relation between the boundary points and the point cloud points.
In one embodiment, the computer program when executed by the processor further performs the steps of:
setting the pixel value of a first pixel point in each two-dimensional scanning image as a first value, and setting the pixel value of a second pixel point in each two-dimensional scanning image as a second value;
And generating mask images corresponding to the two-dimensional scanning images according to the pixel values of each pixel point in the two-dimensional scanning images.
In one embodiment, the computer program when executed by the processor further performs the steps of:
carrying out connected-component processing on pixel points with pixel values of a first value in each mask image to obtain a connected region in each mask image;
and extracting the boundary of the connected region in each mask image to obtain boundary points of bone tissues in each mask image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
sampling the plurality of boundary points to obtain a preset number of target boundary points;
and respectively carrying out coordinate conversion processing on each target boundary point according to the mapping relation to obtain point cloud points corresponding to each target boundary point, and forming a target point cloud by a plurality of point cloud points.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and establishing a mapping relation according to the inter-layer resolution among the two-dimensional scanning images and the intra-layer resolution of each two-dimensional scanning image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Inputting the target point cloud into a pre-trained recognition network to obtain a recognition result output by the recognition network; the identification result comprises identification frames and classification identifiers corresponding to various bone tissues; the classification identifier is used to characterize the class of the bone tissue;
and carrying out coordinate conversion processing on the identification frames corresponding to the various bone tissues according to the mapping relation to obtain target areas where the various bone tissues are located in the target volume data.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and adjusting the target area where each bone tissue is positioned according to the size of the two-dimensional scanning image so as to enable the target area where each bone tissue is positioned to be matched with the size of the two-dimensional scanning image.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration, and not limitation, RAM is available in a variety of forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), and the like.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (8)

1. A method of bone tissue localization, the method comprising:
acquiring target volume data; the target volume data includes a plurality of two-dimensional scan images; each two-dimensional scanning image comprises multiple types of bone tissues;
threshold dividing processing is carried out on each two-dimensional scanning image, so that mask images corresponding to each two-dimensional scanning image are obtained;
carrying out connected-component processing on pixel points with pixel values of a first value in each mask image to obtain a connected region in each mask image;
Extracting the boundary of the connected region in each mask image to obtain boundary points of bone tissues in each mask image;
performing coordinate conversion processing on each boundary point according to a preset mapping relation to obtain a point cloud point corresponding to each boundary point, and forming a target point cloud by a plurality of point cloud points;
inputting the target point cloud into a pre-trained recognition network to obtain a recognition result output by the recognition network; the identification result comprises identification frames and classification marks corresponding to various bone tissues;
and carrying out coordinate conversion processing on the identification frames corresponding to the various bone tissues according to the mapping relation to obtain target areas where the various bone tissues are located in the target volume data.
2. The method according to claim 1, wherein the thresholding each of the two-dimensional scan images to obtain a mask image corresponding to each of the two-dimensional scan images includes:
setting a pixel value of a first pixel point in each two-dimensional scanning image to be the first value, and setting a pixel value of a second pixel point in each two-dimensional scanning image to be the second value;
and generating mask images corresponding to the two-dimensional scanning images according to the pixel values of each pixel point in the two-dimensional scanning images.
3. The method according to claim 2, wherein the method further comprises:
different thresholds are selected according to manufacturers of medical image equipment and reconstruction kernels of the two-dimensional scanning images;
alternatively, an adaptive threshold is calculated according to at least one of a statistical-mean algorithm and a standard-deviation algorithm.
4. The method according to claim 1, further comprising, before the coordinate conversion processing is performed on each boundary point according to the preset mapping relation:
sampling the boundary points to obtain a preset number of target boundary points;
correspondingly, the performing coordinate conversion processing on each boundary point according to the preset mapping relation to obtain the point cloud point corresponding to each boundary point, a plurality of point cloud points forming the target point cloud, comprises:
performing coordinate conversion processing on each target boundary point according to the mapping relation to obtain the point cloud point corresponding to each target boundary point, the plurality of point cloud points forming the target point cloud.
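The sampling step of claim 4 fixes the number of points the recognition network receives. It could be sketched as follows; the uniform random choice and the pad-by-repetition behavior for undersized inputs are assumptions, not claim language:

```python
import numpy as np

def sample_boundary(points: np.ndarray, n_target: int, seed: int = 0) -> np.ndarray:
    """Subsample (or pad by repetition) boundary points to a preset count so
    the target point cloud fed to the network has a fixed size."""
    rng = np.random.default_rng(seed)
    replace = len(points) < n_target        # repeat points only if too few
    idx = rng.choice(len(points), size=n_target, replace=replace)
    return points[idx]

pts = np.arange(20).reshape(10, 2)          # 10 boundary points as (x, y)
target = sample_boundary(pts, n_target=4)   # preset number = 4
```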
5. The method according to claim 1, wherein before the coordinate conversion processing is performed on each boundary point according to the preset mapping relation, the method further comprises:
establishing the mapping relation according to the inter-layer resolution between the two-dimensional scan images and the intra-layer resolution of each two-dimensional scan image.
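The mapping relation of claim 5 can be illustrated by a simple index-to-physical conversion, where the in-plane (intra-layer) pixel spacing and the inter-layer spacing are assumed example numbers:

```python
def index_to_physical(i, j, k, in_plane_mm=(0.7, 0.7), slice_gap_mm=1.25):
    """Map a pixel (row i, col j) on slice k to physical coordinates using
    the intra-layer resolution and the inter-layer spacing."""
    sy, sx = in_plane_mm
    return (j * sx, i * sy, k * slice_gap_mm)

# Pixel (row 2, col 4) on slice 3 -> (x, y, z) in millimetres
p = index_to_physical(2, 4, 3)
```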
6. A bone tissue positioning device, the device comprising:
a volume data acquisition module, configured to acquire target volume data; the target volume data comprises a plurality of two-dimensional scan images, each containing multiple types of bone tissue;
a point cloud conversion module, configured to perform thresholding on each two-dimensional scan image to obtain the mask image corresponding to each two-dimensional scan image; perform connected-component processing on the pixel points whose pixel value is the first value in each mask image to obtain the connected regions in each mask image; extract the boundaries of the connected regions in each mask image to obtain boundary points of bone tissue in each mask image; and perform coordinate conversion processing on each boundary point according to a preset mapping relation to obtain the point cloud point corresponding to each boundary point, a plurality of point cloud points forming a target point cloud;
a positioning module, configured to input the target point cloud into a pre-trained recognition network to obtain a recognition result output by the recognition network, the recognition result comprising recognition boxes and classification labels corresponding to the various types of bone tissue; and perform coordinate conversion processing on the recognition boxes corresponding to the various types of bone tissue according to the mapping relation to obtain target regions where the various types of bone tissue are located in the target volume data.
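The boundary-extraction step performed by the point cloud conversion module can be sketched as follows — an illustrative Python version, not the patented implementation, in which a foreground pixel counts as a boundary point when any of its 4-neighbors is background or falls outside the image:

```python
import numpy as np

def boundary_points(mask: np.ndarray) -> list:
    """Return the (row, col) foreground pixels of a mask slice that touch
    the background, i.e. the boundary of the connected bone regions."""
    h, w = mask.shape
    pts = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] != 1:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < h and 0 <= nj < w) or mask[ni, nj] == 0:
                    pts.append((i, j))
                    break
    return pts

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1                 # one 3x3 filled connected region
pts = boundary_points(mask)        # its 8 perimeter pixels; the center is interior
```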
7. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 5 when executing the computer program.
8. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 5.
CN202011052786.3A 2020-09-29 2020-09-29 Bone tissue positioning method, device, computer equipment and storage medium Active CN112200780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011052786.3A CN112200780B (en) 2020-09-29 2020-09-29 Bone tissue positioning method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011052786.3A CN112200780B (en) 2020-09-29 2020-09-29 Bone tissue positioning method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112200780A CN112200780A (en) 2021-01-08
CN112200780B true CN112200780B (en) 2023-09-29

Family

ID=74007965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011052786.3A Active CN112200780B (en) 2020-09-29 2020-09-29 Bone tissue positioning method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112200780B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801915A (en) * 2021-02-21 2021-05-14 张燕 DR four-limb image bone-meat separation optimization image development method
CN113724310A (en) * 2021-08-02 2021-11-30 卡本(深圳)医疗器械有限公司 Spine point cloud extraction algorithm based on three-dimensional CT

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318563A (en) * 2014-10-22 2015-01-28 北京航空航天大学 Organ skeleton extraction method based on medical images
CN105139442A (en) * 2015-07-23 2015-12-09 昆明医科大学第一附属医院 Method for establishing human knee joint three-dimensional simulation model in combination with CT (Computed Tomography) and MRI (Magnetic Resonance Imaging)
CN111127485A (en) * 2019-12-25 2020-05-08 东软集团股份有限公司 Method, device and equipment for extracting target region in CT image
CN111402216A (en) * 2020-03-10 2020-07-10 河海大学常州校区 Three-dimensional broken bone segmentation method and device based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318563A (en) * 2014-10-22 2015-01-28 北京航空航天大学 Organ skeleton extraction method based on medical images
CN105139442A (en) * 2015-07-23 2015-12-09 昆明医科大学第一附属医院 Method for establishing human knee joint three-dimensional simulation model in combination with CT (Computed Tomography) and MRI (Magnetic Resonance Imaging)
CN111127485A (en) * 2019-12-25 2020-05-08 东软集团股份有限公司 Method, device and equipment for extracting target region in CT image
CN111402216A (en) * 2020-03-10 2020-07-10 河海大学常州校区 Three-dimensional broken bone segmentation method and device based on deep learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Deep Hough Voting for 3D Object Detection in Point Clouds; Charles R. Qi et al.; 2019 IEEE/CVF International Conference on Computer Vision; 20191231; Sections 3-5 of the main text *
THREE DIMENSIONAL POINT CLOUD GENERATIONS FROM CT SCAN IMAGES FOR BIO-CAD MODELING; Vikas N. Chougule et al.; International Conference on Additive Manufacturing Technologies; 20131031; pp. 3-5 of the main text, Figs. 2-3 *
Sun Shuifa et al., Video Foreground Detection and Its Application in Hydropower Engineering Monitoring, National Defense Industry Press, 2014, pp. 14-15. *
Xie Xiaozhu et al. (trans.), Video Surveillance Algorithms and Architectures for Sensor Platforms, National Defense Industry Press, 2018, p. 123. *

Also Published As

Publication number Publication date
CN112200780A (en) 2021-01-08

Similar Documents

Publication Publication Date Title
CN111862044B (en) Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium
US20210225003A1 (en) Image processing method and apparatus, server, and storage medium
EP3611699A1 (en) Image segmentation using deep learning techniques
CN111047572A (en) Automatic spine positioning method in medical image based on Mask RCNN
EP2715663B1 (en) Apparatus for generating assignments between image regions of an image and element classes
CN111488872B (en) Image detection method, image detection device, computer equipment and storage medium
CN112200780B (en) Bone tissue positioning method, device, computer equipment and storage medium
CN111325714B (en) Method for processing region of interest, computer device and readable storage medium
CN112861961B (en) Pulmonary blood vessel classification method and device, storage medium and electronic equipment
US20230177698A1 (en) Method for image segmentation, and electronic device
KR20150073628A (en) System and method for adapting diagnosis model of computer aided diagnosis
CN110916707B (en) Two-dimensional bone image acquisition method, system and device
KR20200137768A (en) A Method and Apparatus for Segmentation of Orbital Bone in Head and Neck CT image by Using Deep Learning and Multi-Graylevel Network
CN111568451A (en) Exposure dose adjusting method and system
KR101885562B1 (en) Method for mapping region of interest in first medical image onto second medical image and apparatus using the same
CN112102235A (en) Human body part recognition method, computer device, and storage medium
CN111462139A (en) Medical image display method, medical image display device, computer equipment and readable storage medium
CN113160199B (en) Image recognition method and device, computer equipment and storage medium
CN111192268A (en) Medical image segmentation model construction method and CBCT image bone segmentation method
CN115063397A (en) Computer-aided image analysis method, computer device and storage medium
CN109712186B (en) Method, computer device and storage medium for delineating a region of interest in an image
CN112330640A (en) Segmentation method, device and equipment for nodule region in medical image
CN110811662A (en) Method, device and equipment for modulating scanning dose and storage medium
CN116168097A (en) Method, device, equipment and medium for constructing CBCT sketching model and sketching CBCT image
CN113674254B (en) Medical image outlier recognition method, apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant