CN115457056A - Skeleton image segmentation method, device, equipment and storage medium - Google Patents

Skeleton image segmentation method, device, equipment and storage medium

Info

Publication number
CN115457056A
Authority
CN
China
Prior art keywords
image
bone
model
data
projection
Prior art date
Legal status
Pending
Application number
CN202211141687.1A
Other languages
Chinese (zh)
Inventor
刘豆豆 (Liu Doudou)
李宗阳 (Li Zongyang)
代昂然 (Dai Angran)
郭双双 (Guo Shuangshuang)
Current Assignee
Beijing Weigao Intelligent Technology Co ltd
Original Assignee
Beijing Weigao Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Weigao Intelligent Technology Co ltd filed Critical Beijing Weigao Intelligent Technology Co ltd
Priority to CN202211141687.1A
Publication of CN115457056A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/10Image enhancement or restoration using non-spatial domain filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/35Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38Registration of image sequences
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20068Projection on vertical or horizontal image axis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Geometry (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a method, an apparatus, a device and a storage medium for segmenting a bone image. The method comprises: acquiring object bounding box data corresponding to at least one bone object in the bone image; determining a distance proportion data set corresponding to each bone object based on the bone image and each piece of object bounding box data; and determining a bone segmentation result corresponding to the bone image based on each distance proportion data set and a pre-trained target image segmentation model. The object bounding box data represent spatial information of the minimum circumscribed cuboid corresponding to a bone object in the bone image; the distance proportion data set comprises at least one piece of distance proportion data, each representing the distance proportion of an image pixel point in the bone image relative to the minimum circumscribed cuboid; and the bone segmentation result comprises segmented images respectively corresponding to the at least one bone object. The embodiment of the invention solves the problem of the poor segmentation effect of existing image segmentation models.

Description

Skeleton image segmentation method, device, equipment and storage medium
Technical Field
The present invention relates to the field of medical image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for segmenting a bone image.
Background
Information technologies represented by artificial intelligence, together with high-end medical imaging technology, are developing continuously, and machine learning is receiving more and more attention in the field of medical image processing. Machine learning algorithms can mine implicit rules from massive medical images and use them to predict useful medical information, such as image segmentation results, image classification results and image localization results.
For a bone image, a conventional machine learning algorithm directly uses the bone image as the input data of an image segmentation model and trains the model so that, after training, it sequentially outputs segmented images corresponding to one or more bone objects in the bone image. However, because the input data of the conventional algorithm are too simple, the segmented images output by the trained model are of poor quality, especially at bone boundaries. In addition, the bone object class assigned to each output segmented image has a high error rate.
Disclosure of Invention
The embodiment of the invention provides a method, a device, equipment and a storage medium for segmenting a bone image, which are used for solving the problem of poor segmentation effect of the conventional image segmentation model, improving the accuracy of the segmented image output by the image segmentation model and reducing the error rate of the image segmentation model in judging the bone object.
According to an embodiment of the present invention, a method for segmenting a bone image is provided, the method comprising:
acquiring object boundary box data corresponding to at least one bone object in a bone image;
determining a distance proportion data set corresponding to each bone object based on the bone image and the data of each object boundary box;
determining a bone segmentation result corresponding to the bone image based on each distance proportion data set and a pre-trained target image segmentation model;
the object bounding box data represent spatial information of the minimum circumscribed cuboid corresponding to a bone object in the bone image; the distance proportion data set comprises at least one piece of distance proportion data, each representing the distance proportion of an image pixel point in the bone image relative to the minimum circumscribed cuboid; and the bone segmentation result comprises segmented images respectively corresponding to the at least one bone object.
According to another embodiment of the present invention, there is provided a bone image segmentation apparatus including:
an object bounding box data acquisition module, configured to acquire object bounding box data corresponding to at least one bone object in a bone image;
a distance proportion data set determining module, configured to determine, based on the bone image and the data of each object bounding box, a distance proportion data set corresponding to each bone object;
a bone segmentation result determining module, configured to determine a bone segmentation result corresponding to the bone image based on each of the distance proportion data sets and a pre-trained target image segmentation model;
the object bounding box data represent spatial information of the minimum circumscribed cuboid corresponding to a bone object in the bone image; the distance proportion data set comprises at least one piece of distance proportion data, each representing the distance proportion of an image pixel point in the bone image relative to the minimum circumscribed cuboid; and the bone segmentation result comprises segmented images respectively corresponding to the at least one bone object.
According to another embodiment of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform a method for segmenting a bone image according to any of the embodiments of the present invention.
According to another embodiment of the present invention, a computer-readable storage medium is provided, which stores computer instructions for causing a processor to implement a method for segmenting a bone image according to any of the embodiments of the present invention when the computer instructions are executed.
According to the technical solution of embodiments of the present invention, object bounding box data corresponding to at least one bone object in a bone image are acquired; a distance proportion data set corresponding to each bone object is determined based on the bone image and each piece of object bounding box data; and a bone segmentation result corresponding to the bone image is determined based on each distance proportion data set and a pre-trained target image segmentation model. The object bounding box data represent spatial information of the minimum circumscribed cuboid corresponding to the bone object in the bone image, the distance proportion data in the distance proportion data set represent the distance proportion of image pixel points in the bone image relative to the minimum circumscribed cuboid, and the bone segmentation result comprises segmented images respectively corresponding to the at least one bone object. In this way, the embodiments solve the problem of the poor segmentation effect of existing image segmentation models, improve the accuracy of the segmented images output by the model, and reduce the error rate in determining the bone object to which each segmented image belongs.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flowchart of a method for segmenting a bone image according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for segmenting a bone image according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a method for determining a mean model of a target bone according to a second embodiment of the present invention;
FIG. 4 is a schematic diagram of model views of a femur mean model provided in the second embodiment of the present invention;
FIG. 5 is a flowchart illustrating a method for segmenting a bone image according to a third embodiment of the present invention;
FIG. 6 is a schematic view of a bone projection image provided by a third embodiment of the present invention;
FIG. 7 is a schematic diagram of a skeleton blurred image according to a third embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a bone image segmentation apparatus according to a fourth embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1 is a flowchart of a method for segmenting a bone image according to an embodiment of the present invention, which is applicable to a case where an image segmentation model is used to segment a bone object in a bone image and to determine the bone object to which the segmented image belongs, and the method can be executed by a device for segmenting a bone image, which can be implemented in the form of hardware and/or software, and the device for segmenting a bone image can be configured in a terminal device. As shown in fig. 1, the method includes:
and S110, acquiring object boundary box data corresponding to at least one bone object in the bone image.
Specifically, a bone image acquired by a medical imaging device is obtained. By way of example, the medical imaging device includes, but is not limited to, digital radiography (DR), computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) systems. The type of medical imaging device is not limited here.
Illustratively, when the bone image is an upper limb image, each bone object in the bone image includes at least one of a humerus, a radius and an ulna. When the bone image is a lower limb image, each bone object in the bone image includes at least one of a femur, a tibia, and a fibula. Due to the difference of the shooting position and the shooting parameters, one or more bone objects can be contained in the bone image.
In this embodiment, the object bounding box data represent spatial information of the minimum circumscribed cuboid corresponding to a bone object in the bone image. In an alternative embodiment, the faces of the minimum circumscribed cuboid are each perpendicular to the X, Y or Z axis of the spatial coordinate system. For example, the object bounding box data may include the 8 vertex position coordinates of the minimum circumscribed cuboid, its center position coordinates, and its length, width and height data.
In an optional embodiment, an object detection algorithm is used to locate at least one bone object in the bone image and obtain the object bounding box data corresponding to each bone object. The object detection algorithm may be, for example, a YOLO algorithm. The object detection algorithm is not limited here; any object detection algorithm that can achieve the above functions falls within the protection scope of the present application.
S120, determining a distance proportion data set corresponding to each bone object based on the bone image and each piece of object bounding box data.
In this embodiment, the distance proportion data set includes at least one distance proportion data, and the distance proportion data represents a distance proportion of an image pixel point in the bone image with respect to the minimum circumscribed cuboid.
In an alternative embodiment, determining the distance proportion data set corresponding to each bone object based on the bone image and each piece of object bounding box data comprises: for each piece of object bounding box data, determining the size data and at least one flag bit coordinate of the minimum circumscribed cuboid based on the object bounding box data; and determining the distance proportion data set of the bone object corresponding to the object bounding box data based on each flag bit coordinate, the size data, and the pixel point coordinates corresponding to each image pixel point in the bone image.
Specifically, the size data include the length, width and height data of the minimum circumscribed cuboid. In an alternative embodiment, when the object bounding box data include the 8 vertex position coordinates, the length data equal the difference between the maximum and minimum x-axis coordinates among the 8 vertices, the width data equal the difference between the maximum and minimum y-axis coordinates, and the height data equal the difference between the maximum and minimum z-axis coordinates.
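As an illustrative sketch (names and array layout are assumptions, not from the patent), the size data and center coordinate can be derived from the 8 vertex position coordinates with a few NumPy operations:

```python
import numpy as np

def bbox_size_and_center(vertices: np.ndarray):
    """Derive the size data and center coordinate of an axis-aligned
    minimum circumscribed cuboid from its 8 vertex position coordinates.

    vertices: (8, 3) array of (x, y, z) vertex coordinates.
    """
    mins = vertices.min(axis=0)            # (minx, miny, minz)
    maxs = vertices.max(axis=0)            # (maxx, maxy, maxz)
    length, width, height = maxs - mins    # per-axis max/min differences
    center = (mins + maxs) / 2.0
    return (length, width, height), center
```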
Specifically, based on the object bounding box data, the flag bit coordinates corresponding to at least one preset flag bit in the minimum circumscribed cuboid are determined. In this example, a preset flag bit is any vertex position or the center point position.
In an optional embodiment, determining the distance proportion data set of the bone object corresponding to the object bounding box data based on each flag bit coordinate, the size data and the pixel point coordinates corresponding to each image pixel point in the bone image comprises: determining at least one target pixel point coordinate based on the object bounding box data and the pixel point coordinates corresponding to each image pixel point in the bone image; for each flag bit coordinate, obtaining the difference coordinates between the flag bit coordinate and each target pixel point coordinate; and, for each difference coordinate, taking the ratio of the difference coordinate to the size data as distance proportion data and adding the distance proportion data to the distance proportion data set of the bone object corresponding to the object bounding box data.
In an optional embodiment, determining at least one target pixel point coordinate based on the object bounding box data and the pixel point coordinates corresponding to each image pixel point in the bone image comprises: determining a pixel point screening range based on a preset outward expansion ratio and the object bounding box data; taking each image pixel point of the bone image whose pixel point coordinate lies within the pixel point screening range as a target image pixel point; and acquiring the target pixel point coordinates respectively corresponding to the target image pixel points.
Specifically, when the preset outward expansion ratio is 100%, the pixel point screening range includes the entire three-dimensional spatial region represented by the bone image; when the preset outward expansion ratio is 0%, the pixel point screening range includes the interior spatial region of the minimum circumscribed cuboid represented by the object bounding box data; and when the preset outward expansion ratio is between 0% and 100%, the pixel point screening range includes the interior spatial region of the minimum circumscribed cuboid together with part of the exterior spatial region around it. The preset outward expansion ratio is not limited here; a user can customize it according to actual requirements.
Specifically, the difference coordinate includes an x difference coordinate, a y difference coordinate, and a z difference coordinate respectively corresponding to the x-axis direction, the y-axis direction, and the z-axis direction.
Illustratively, the distance proportion data corresponding to a target pixel point coordinate A and a flag bit coordinate B satisfy the formulas:

p_x = (x_A - x_B) / (maxx - minx)
p_y = (y_A - y_B) / (maxy - miny)
p_z = (z_A - z_B) / (maxz - minz)

where p_x, p_y and p_z respectively represent the distance proportions between the target pixel point coordinate A and the flag bit coordinate B in the x-axis, y-axis and z-axis directions; x_A, y_A and z_A respectively represent the axis coordinates of A in the x-axis, y-axis and z-axis directions; x_B, y_B and z_B respectively represent the axis coordinates of B in the x-axis, y-axis and z-axis directions; and maxx - minx, maxy - miny and maxz - minz respectively represent the length, width and height data of the minimum circumscribed cuboid.

Illustratively, when the flag bit coordinate B is the front-upper-left vertex position coordinate, x_B = minx and y_B = miny.
Specifically, the distance proportion data set corresponding to the bone object includes distance proportion data corresponding to coordinates of each target pixel point and coordinates of at least one flag bit.
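The screening and ratio computation above can be sketched as follows; the array layouts, names, and the per-axis reading of the outward expansion ratio are illustrative assumptions (the 100% case, which covers the whole image, would simply skip the screening):

```python
import numpy as np

def distance_proportion_set(pixel_coords, box_min, box_max,
                            flag_coords, expand_ratio=0.2):
    """pixel_coords: (N, 3) pixel point coordinates of the bone image.
    box_min, box_max: (3,) min/max corners of the minimum circumscribed cuboid.
    flag_coords: (M, 3) flag bit coordinates (vertices and/or the center).
    expand_ratio: preset outward expansion ratio in [0, 1].
    Returns an (M, K, 3) array of distance proportion data for the K
    target pixel points falling inside the screening range."""
    pixel_coords = np.asarray(pixel_coords, dtype=float)
    box_min = np.asarray(box_min, dtype=float)
    box_max = np.asarray(box_max, dtype=float)
    size = box_max - box_min                     # length, width, height data
    # Pixel point screening range: the cuboid expanded outward per axis.
    lo, hi = box_min - expand_ratio * size, box_max + expand_ratio * size
    keep = np.all((pixel_coords >= lo) & (pixel_coords <= hi), axis=1)
    targets = pixel_coords[keep]                 # target pixel point coordinates
    # p = (A - B) / size for every (flag bit B, target pixel A) pair.
    diffs = targets[None, :, :] - np.asarray(flag_coords, dtype=float)[:, None, :]
    return diffs / size
```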
S130, determining a bone segmentation result corresponding to the bone image based on each distance proportion data set and the pre-trained target image segmentation model.
In an optional embodiment, determining a bone segmentation result corresponding to the bone image based on each distance proportion data set and a pre-trained target image segmentation model includes: acquiring image parameter data corresponding to the bone image; wherein the image parameter data includes image pixel value data of the bone image and/or image gradient data respectively corresponding to at least one scale; and inputting the image parameter data and the distance proportion data sets into a pre-trained target image segmentation model to obtain a bone segmentation result corresponding to the output bone image.
Specifically, the image pixel value data can be used to represent the bone image, and the image gradient data represent the rate of change of the pixel value of a given pixel point in the bone image in the X-axis, Y-axis and Z-axis directions respectively. In an optional embodiment, filtering templates of a Gaussian filter are constructed for at least one scale, and a Gaussian filtering operation is performed on the bone image with each filtering template to obtain at least one set of image gradient data. For example, the scales may include 0, 1.2, 1.5 and 1.8.
In an alternative embodiment, the image parameter data comprises image gradient data corresponding to at least three scales, respectively. The advantage of this arrangement is that the segmentation quality of the target image segmentation model can be guaranteed.
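A minimal sketch of extracting image gradient data at several scales with SciPy; taking scale 0 to mean "no smoothing" is an assumption, and the scale values follow the example above:

```python
import numpy as np
from scipy import ndimage

def multiscale_gradients(volume: np.ndarray, scales=(0, 1.2, 1.5, 1.8)):
    """Return image gradient data of a 3D bone image at several scales:
    Gaussian-smooth the volume at each nonzero scale, then take the
    finite-difference gradient along the three axes."""
    grads = []
    for s in scales:
        smoothed = volume if s == 0 else ndimage.gaussian_filter(volume, sigma=s)
        gz, gy, gx = np.gradient(smoothed.astype(np.float32))  # per-axis rates of change
        grads.append(np.stack([gx, gy, gz], axis=0))           # (3, D, H, W)
    return grads
```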
Exemplary types of models for the target image segmentation model include, but are not limited to, a two-dimensional random forest, a full convolution network model, a U-net, a SegNet, a PSPNet, a hole convolution network model, and the like. The model type of the target image segmentation model is not limited herein.
In this embodiment, the bone segmentation result includes segmented images corresponding to at least one bone object respectively. Specifically, the target image segmentation model sequentially segments each bone object in the bone image, and determines the bone objects to which each segmented image belongs according to the output sequence of different segmented images. For example, assuming that the bone image is a lower limb image and the preset segmentation order is femur and tibia, the first segmented image output by the target image segmentation model is a segmented image of femur and the second segmented image is a segmented image of tibia.
On the basis of the above embodiment, the method further includes: acquiring a training bone image set comprising a plurality of training bone images; for each training bone image, acquiring the object bounding box data corresponding to at least one bone object in the training bone image, and determining the distance proportion data set corresponding to each bone object based on the training bone image and each piece of object bounding box data; acquiring the image parameter data corresponding to each training bone image, and inputting each set of image parameter data and each distance proportion data set into an initial image segmentation model to obtain the predicted segmentation result corresponding to each training bone image; and adjusting the model parameters of the initial image segmentation model based on each predicted segmentation result and the standard segmentation result corresponding to each training bone image to obtain the trained target image segmentation model.
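As a rough illustration of this training procedure, assuming the random-forest model type mentioned above among the candidate models, with per-pixel feature vectors concatenating image parameter data and distance proportion data (the feature and label layout is an assumption, not the patent's specification):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_segmentation_model(feature_sets, label_sets):
    """feature_sets: list of (K, F) arrays, one per training bone image,
    each row concatenating image parameter data and distance proportion
    data for one target pixel point. label_sets: list of (K,) arrays of
    standard segmentation labels (0 = background, 1..C = bone objects)."""
    X = np.concatenate(feature_sets, axis=0)
    y = np.concatenate(label_sets, axis=0)
    model = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    model.fit(X, y)   # adjust model parameters against the standard results
    return model
```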
According to the technical solution of this embodiment, object bounding box data corresponding to at least one bone object in a bone image are acquired, a distance proportion data set corresponding to each bone object is determined based on the bone image and each piece of object bounding box data, and a bone segmentation result corresponding to the bone image is determined based on each distance proportion data set and a pre-trained target image segmentation model. The object bounding box data represent spatial information of the minimum circumscribed cuboid corresponding to the bone object in the bone image, the distance proportion data in each distance proportion data set represent the distance proportion of image pixel points in the bone image relative to the minimum circumscribed cuboid, and the bone segmentation result comprises segmented images respectively corresponding to the at least one bone object.
Example two
Fig. 2 is a flowchart of a bone image segmentation method according to a second embodiment of the present invention, which further details technical features of "obtaining object bounding box data corresponding to at least one bone object in a bone image" in the foregoing embodiment. As shown in fig. 2, the method includes:
s210, obtaining target skeleton average models respectively corresponding to at least one skeleton object.
Specifically, the target bone mean model can be used to characterize the average atlas model of the corresponding bone object.
In an alternative embodiment, obtaining the target bone mean model corresponding to at least one bone object respectively comprises: acquiring a standard mask image and at least one reference mask image corresponding to each bone object, and respectively registering each reference mask image with the standard mask image to obtain a registered mask image set; wherein the registration mask image set comprises at least one registration mask image; based on the standard mask image and the set of registered mask images, a target bone mean model corresponding to the bone object is determined.
Specifically, both the standard mask image and the reference mask images preserve the contour information of the bone object. Illustratively, a mask image with higher image quality is manually selected from a plurality of mask images as the standard mask image, e.g., the one in which the contour of the bone object is the most standard and the size is the most appropriate.
Specifically, in the process of registering the reference mask image with the standard mask image, the reference mask image is used as a floating image, the standard mask image is used as a reference image, and the registered mask image is a mask image obtained by registering the reference mask image with the standard mask image.
In an alternative embodiment, determining the target bone mean model corresponding to the bone object based on the standard mask image and the set of registered mask images comprises: acquiring the current registration mask image in the registration mask image set, and registering the standard mask image to the previous bone mean model to obtain an intermediate mask image; determining the current bone summation model based on the current registration mask image and the previous bone summation model, and determining the current bone mean model based on the intermediate mask image and the current bone summation model; taking the current bone summation model as the previous bone summation model and the current bone mean model as the previous bone mean model, and repeating the step of acquiring the current registration mask image in the registration mask image set until the iteration count reaches the number of images in the registration mask image set, whereupon the current bone mean model is taken as the target bone mean model corresponding to the bone object. When the iteration count is 1, the previous bone mean model is the current registration mask image and the previous bone summation model is 0.
Specifically, in the process of registering the standard mask image with the previous bone mean model, the standard mask image is used as the floating image and the previous bone mean model as the reference image, and the intermediate mask image is the mask image obtained by this registration.
Wherein, for example, the current bone summation model satisfies the formula:
sumModel[i]=list[i]+sumModel[i-1]
specifically, list [ i +1] represents the i +1 th registration mask image in the registration mask image set, and i is an integer greater than or equal to 1. When i =1, sumModel [0] =0.
Illustratively, the previous bone mean model used in the i-th iteration satisfies the formula:

meanModel[i-1] = (reg_ref[i-1] + sumModel[i-1]) / k = (reg_ref[i-1] + list[1] + ... + list[i-1]) / k

where reg_ref[i-1] represents the intermediate mask image obtained in the (i-1)-th iteration, list[1] represents the 1st registration mask image acquired from the registration mask image set in the 1st iteration, and k represents the superposition count of the current bone summation model, which increases as the number of iterations increases.
Fig. 3 is a schematic diagram illustrating a method for determining a target bone mean model according to a second embodiment of the present invention. Specifically, in the i-th iteration, the i-th registration mask image (list[i]) in the registration mask image set is acquired, and the standard mask image (ref) is registered with the previous bone mean model (meanModel[i-1]) to obtain the i-th intermediate mask image (reg_ref[i]). The current bone summation model (sumModel[i]) is determined based on the current registration mask image (list[i]) and the previous bone summation model (sumModel[i-1]), and the current bone mean model (meanModel[i]) is determined based on the intermediate mask image (reg_ref[i]) and the current bone summation model (sumModel[i]). It is then judged whether the current iteration count equals the number (N) of images in the registration mask image set: if so, the current bone mean model (meanModel[N]) is output as the target bone mean model corresponding to the bone object; if not, i is incremented by 1, k is incremented by 1, and the step of acquiring the i-th registration mask image (list[i]) is repeated.
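The Fig. 3 iteration can be sketched as follows, assuming a register(floating, reference) helper that returns the floating image registered and resampled onto the reference (its implementation, e.g. with a registration toolkit, is outside this sketch):

```python
import numpy as np

def target_bone_mean_model(ref, reg_list, register):
    """ref: standard mask image. reg_list: registration mask image set
    of N registered mask images. register(floating, reference) returns
    the floating image registered to the reference."""
    sum_model = np.zeros_like(reg_list[0], dtype=np.float64)   # sumModel[0] = 0
    mean_model = reg_list[0].astype(np.float64)                # initial previous mean model
    for i, reg_mask in enumerate(reg_list, start=1):
        reg_ref = register(ref, mean_model)        # intermediate mask image reg_ref[i]
        sum_model = sum_model + reg_mask           # sumModel[i] = list[i] + sumModel[i-1]
        k = i + 1                                  # superposition count
        mean_model = (reg_ref + sum_model) / k     # meanModel[i]
    return mean_model                              # meanModel[N]
```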
Fig. 4 is a schematic diagram of a model view of a femur mean model according to a second embodiment of the present invention. Specifically, the 3 views in fig. 4 sequentially show, from left to right, a model view of the coronal position, a model view of the sagittal position, and a model view of the transverse position of the femur mean model.
S220, for each bone object, registering the target bone mean model corresponding to the bone object with the bone image to obtain the object bounding box data corresponding to the bone object.
Exemplary registration algorithms employed include, but are not limited to, affine registration or rigid registration. The registration algorithm employed is not limited herein.
Specifically, the target bone mean model includes the model bounding box data of the bone object. The target bone mean model is registered with the bone image to obtain the registration transformation matrix between the two, and the model bounding box data are transformed based on the registration transformation matrix to obtain the object bounding box data corresponding to the bone object in the bone image. In the process of registering the target bone mean model with the bone image, the floating image is the target bone mean model and the reference image is the bone image.
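A minimal sketch of carrying the model bounding box data into bone image space, assuming the registration yields a 4x4 homogeneous transformation matrix from model coordinates to image coordinates:

```python
import numpy as np

def transform_bbox(vertices: np.ndarray, T: np.ndarray) -> np.ndarray:
    """vertices: (8, 3) model bounding box vertex coordinates.
    T: (4, 4) registration transformation matrix (model -> bone image).
    Returns the (8, 3) transformed vertices; the axis-aligned object
    bounding box is then their per-axis min/max envelope."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # to homogeneous coords
    return (T @ homo.T).T[:, :3]

# Usage sketch: the object bounding box data in the bone image
# mapped = transform_bbox(model_vertices, T)
# box_min, box_max = mapped.min(axis=0), mapped.max(axis=0)
```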
S230, determining a distance proportion data set corresponding to each bone object based on the bone image and each piece of object bounding box data.
S240, determining a bone segmentation result corresponding to the bone image based on the distance proportion data sets and the pre-trained target image segmentation model.
On the basis of the foregoing embodiment, optionally, the target bone mean model includes model feature point coordinates corresponding to at least one bone feature point. Accordingly, the method further includes: for each bone object, acquiring the segmented image corresponding to the bone object and the target bone mean model; and registering the target bone mean model with the segmented image to obtain the actual feature point coordinates corresponding to at least one bone feature point in the segmented image.
In particular, the bone feature points are used to characterize anatomical feature points on the bone object. Illustratively, the bone feature points include, but are not limited to, the femoral head rotation center, the distal point of the medial femoral condyle, the distal point of the lateral femoral condyle, the medial malleolus prominence, the lateral malleolus prominence, the tibia center, and the like. The choice of bone feature points is not limited here; a technician can customize them according to actual needs.
Specifically, in the process of registering the target bone mean model to the segmented image, the target bone mean model is used as the floating image and the segmented image as the reference image, and the actual feature point coordinates corresponding to at least one bone feature point in the segmented image are obtained through the registration.
The advantage of this arrangement is as follows. Existing approaches require the bone feature points in each segmented image to be manually labeled one by one, which brings a huge workload and is time-consuming when many segmented images must be processed. Moreover, because the gradient of the image surrounding certain feature points changes very little and exhibits no obvious local features, the labeling results of existing automatic labeling algorithms are not accurate enough. In this method, the target bone mean model is manually labeled only once, which improves the labeling efficiency of the bone feature points, guarantees their labeling accuracy, and provides reliable data support for subsequent tasks such as registration of a robotic arm coordinate system. Experiments show that the average coordinate error between the actual feature point coordinates obtained by this embodiment and manual labeling results is 3-9 mm.
According to the technical solution of this embodiment, the target bone mean models respectively corresponding to at least one bone object are acquired, and for each bone object the target bone mean model corresponding to the bone object is registered with the bone image to obtain the object bounding box data corresponding to the bone object, which solves the problem of acquiring the object bounding box data during bone image segmentation and ensures their accuracy. Further, in this embodiment, a standard mask image and at least one reference mask image corresponding to each bone object are acquired, each reference mask image is registered with the standard mask image to obtain a registered mask image set, and the target bone mean model corresponding to the bone object is determined based on the standard mask image and the registered mask image set, which optimizes the determination of the target bone mean model and ensures its accuracy. The embodiment of the invention therefore further improves the segmentation effect of the image segmentation model.
EXAMPLE III
Fig. 5 is a flowchart of a bone image segmentation method according to a third embodiment of the present invention; this embodiment further optimizes the registration process between the target bone mean model and the bone image described in the foregoing embodiment. As shown in fig. 5, the method includes:
s310, obtaining target skeleton average models respectively corresponding to at least one skeleton object.
S320, obtaining the actual joint coordinates corresponding to the joint object in the bone image and the model joint coordinates corresponding to the joint object in the target bone mean model.
Specifically, the joint object is a joint region between two bone objects, and illustratively, the joint object is a knee joint when the bone image is a lower limb image, and the joint object is an elbow joint when the bone image is an upper limb image.
Specifically, since the target bone mean model is the bone mean model of a single bone object, when, for example, the bone image is a lower limb image and the joint object is the knee joint: if the target bone mean model is a femur mean model, the model joint coordinates corresponding to the joint object are the lower-end position coordinates in the femur mean model; and if the target bone mean model is a tibia mean model, the model joint coordinates corresponding to the joint object are the upper-end position coordinates in the tibia mean model. The model joint coordinates corresponding to the joint object in the target bone mean model can be acquired by performing manual labeling once in the target bone mean model.
In an alternative embodiment, the actual joint coordinates include first axis coordinates, second axis coordinates, and third axis coordinates corresponding to the first projection direction, the second projection direction, and the third projection direction, respectively, and accordingly, the actual joint coordinates corresponding to the joint object in the bone image are acquired, including: respectively projecting the bone image along a first projection direction and a second projection direction to obtain a first bone projection image and a second bone projection image; determining second axis coordinates and third axis coordinates of the joint object in the bone image, which correspond to the second projection direction and the third projection direction respectively, based on the first bone projection image; based on the second bone projection image, a first axis coordinate of the joint object in the bone image corresponding to the first projection direction is determined.
Specifically, the first projection direction and the second projection direction may be the X-axis and Y-axis directions, the X-axis and Z-axis directions, or the Y-axis and Z-axis directions. The combination of the two projection directions is not limited here.
In an alternative embodiment, when the bone image is a lower limb image and the lower limb image includes a left leg image and a right leg image, the first projection direction and the second projection direction include a Y-axis direction and a Z-axis direction. Fig. 6 is a schematic diagram of a bone projection image provided by a third embodiment of the invention. Specifically, the left diagram in fig. 6 shows a first bone projection image obtained by projecting the lower limb image in the Y-axis direction, and the right diagram shows a second bone projection image obtained by projecting the lower limb image in the Z-axis direction.
This arrangement is advantageous in that, when the lower limb image includes the left leg image and the right leg image, if the left leg and the right leg are projected in the X-axis direction, it may occur that the left leg and the right leg overlap, so that the actual joint coordinates of the left knee joint and the actual joint coordinates of the right knee joint in the bone image cannot be distinguished.
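A minimal sketch of the two projections, assuming the bone image is a NumPy volume indexed as (z, y, x) and taking intensity sums as the projection operator (an assumption; other reduction operators would work analogously):

```python
import numpy as np

def project(volume: np.ndarray):
    """volume: 3D bone image indexed as (z, y, x).
    Returns the first bone projection image (projected along the Y axis,
    a z-x map) and the second bone projection image (projected along the
    Z axis, a y-x map)."""
    first_proj = volume.sum(axis=1)    # collapse Y -> used for Z/X coordinates
    second_proj = volume.sum(axis=0)   # collapse Z -> used for the Y coordinate
    return first_proj, second_proj
```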
On the basis of the foregoing embodiment, optionally, before projecting the bone image along the second projection direction to obtain a second bone projection image, the method further includes: and when the second projection direction is the Z-axis direction, cutting the lower limb image along the second projection direction based on the preset cutting proportion range to obtain the cut lower limb image. The preset cutting ratio is, for example, in a range of 25% to 75%, and the preset cutting ratio is not limited herein.
This has the advantage that the second bone projection image can be made to contain partial bone projection images of the joint object, and the accuracy of the subsequently determined first axis coordinates can be improved while reducing the subsequent calculation effort.
In an optional embodiment, when the image type of the bone image is a CT image, before projecting the bone image along the first projection direction and the second projection direction respectively to obtain the first bone projection image and the second bone projection image, the method further comprises: and acquiring CT values corresponding to all image pixel points in the skeleton image, and setting the pixel value of the image pixel point corresponding to the CT value smaller than the first CT threshold value as 0. For example, the first CT threshold may be 0 or 100. The first CT threshold is not limited herein.
The method has the advantages that the target area image corresponding to the bone object in the bone image is kept, meanwhile, the non-target area image in the bone image is weakened, and the purpose of reducing noise information in the bone image is achieved.
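A minimal sketch of this suppression step; the threshold value is one of the example values above:

```python
import numpy as np

def suppress_low_ct(ct_volume: np.ndarray, first_ct_threshold: float = 100.0):
    """Set to 0 the pixel value of every image pixel point whose CT value
    is below the first CT threshold, keeping the bone target regions and
    weakening the non-target regions."""
    out = ct_volume.copy()
    out[out < first_ct_threshold] = 0
    return out
```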
In an alternative embodiment, determining second and third axis coordinates of the joint object in the bone image corresponding to the second and third projection directions, respectively, based on the first bone projection image comprises: performing Gaussian blur processing on the first skeleton projection image to obtain a first skeleton blur image; and determining second axis coordinates and third axis coordinates of the joint object in the skeleton image, which correspond to the second projection direction and the third projection direction respectively, based on the pixel point coordinates of the pixel point corresponding to the pixel maximum value in the first skeleton blurred image.
Specifically, pixel point coordinates of a pixel point corresponding to a pixel maximum value in the first skeleton blurred image are obtained, axis coordinates corresponding to the second projection direction in the pixel point coordinates are used as second axis coordinates of the joint object, and axis coordinates corresponding to the third projection direction are used as third axis coordinates of the joint object.
In an alternative embodiment, determining the first axis coordinates of the joint object in the bone image corresponding to the first projection direction based on the second bone projection image comprises: performing Gaussian blur processing on the second skeleton projection image to obtain a second skeleton blur image; and determining a first axis coordinate corresponding to the joint object in the skeleton image and the first projection direction based on the pixel point coordinate of the pixel point corresponding to the pixel maximum value in the second skeleton blurred image.
Specifically, pixel point coordinates of a pixel point corresponding to a pixel maximum value in the second skeleton blurred image are obtained, and an axis coordinate corresponding to the first projection direction in the pixel point coordinates is used as a first axis coordinate of the joint object.
Fig. 7 is a schematic diagram of a skeleton blurred image according to a third embodiment of the present invention. Specifically, fig. 7 takes the first bone projection image and the second bone projection image shown in fig. 6 as an example, a left diagram in fig. 7 shows a first bone blurred image corresponding to the first bone projection image shown in fig. 6, and a right diagram shows a second bone blurred image corresponding to the second bone projection image shown in fig. 6. From the left and right images of fig. 7, two local maximum values can be obtained, respectively, corresponding to the actual joint coordinates of the left knee joint and the actual joint coordinates of the right knee joint.
In an alternative embodiment, when at least two joint objects are included in the bone image and the first projection direction is the X-axis direction or the Y-axis direction, the pixel maximum value in the first bone projection image is a pixel maximum value within a preset proportion range. Specifically, based on a preset proportion range, the first skeleton blurred image is cut in the X-axis direction or the Y-axis direction to obtain a cut first skeleton blurred image, and a pixel maximum value corresponding to the cut first skeleton blurred image is determined. The preset proportion range may be, for example, 40% to 60%, and the preset proportion range is not limited herein.
For example, when the bone image includes a hip joint and/or an ankle joint in addition to the knee joint, the pixel maxima at the hip joint and/or the ankle joint may interfere with determining the pixel maxima at the knee joint. The method has the advantages that the first skeleton blurred image after cutting only comprises the part of the skeleton blurred image including the knee joint or the elbow joint, and the accuracy of the pixel point coordinates corresponding to the pixel maximum value determined subsequently can be improved while the subsequent calculation amount is reduced.
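Both blurred-projection steps reduce to the same sketch: Gaussian-blur the projection image, optionally restrict the search to the preset proportion range along one axis, and read off the pixel coordinates of the pixel maximum (the blur sigma and the helper name are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def joint_pixel_coordinates(projection: np.ndarray, sigma: float = 15.0,
                            crop_axis=None, crop_range=(0.4, 0.6)):
    """Return the pixel point coordinates of the maximum of the blurred
    projection. If crop_axis is given, only the preset proportion range
    along that axis is searched, so hip or ankle maxima do not interfere."""
    blurred = gaussian_filter(projection.astype(np.float32), sigma=sigma)
    mask = np.ones_like(blurred, dtype=bool)
    if crop_axis is not None:
        n = blurred.shape[crop_axis]
        sl = [slice(None)] * blurred.ndim
        sl[crop_axis] = slice(int(crop_range[0] * n), int(crop_range[1] * n))
        mask[:] = False
        mask[tuple(sl)] = True
    flat_idx = np.argmax(np.where(mask, blurred, -np.inf))
    return np.unravel_index(flat_idx, blurred.shape)
```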
On the basis of the foregoing embodiment, optionally, when the bone image is a lower limb image and the lower limb image includes a left leg image and a right leg image, the joint object includes a left knee joint and a right knee joint, the first bone projection image includes a first left leg projection image and a first right leg projection image, the second bone projection image includes a second left leg projection image and a second right leg projection image, and accordingly, the bone image is projected in the first projection direction and the second projection direction respectively to obtain the first bone projection image and the second bone projection image, including: performing segmentation operation on the skeleton image to obtain a left leg image and a right leg image; respectively projecting the left leg image along a first projection direction and a second projection direction to obtain a first left leg projection image and a second left leg projection image; and respectively projecting the right leg image along the first projection direction and the second projection direction to obtain a first right leg projection image and a second right leg projection image.
The segmentation algorithm corresponding to the above-mentioned segmentation operation includes, but is not limited to, a threshold-based segmentation algorithm, a region growing algorithm, an image edge segmentation algorithm, an image threshold segmentation algorithm, a region-based segmentation algorithm or a watershed algorithm, for example. The segmentation algorithm employed is not limited herein.
In an alternative embodiment, the first projection direction is a Y-axis direction and the second projection direction is an X-axis direction.
In this embodiment, the subsequent processing operations performed on the first left leg projection image and the first right leg projection image are similar to the processing operations corresponding to the first bone projection image, and the subsequent processing operations performed on the second left leg projection image and the second right leg projection image are similar to the processing operations corresponding to the second bone projection image, and are not described again here.
The advantage of this arrangement is that, when the bending angle between the left leg and the right leg of the measured subject is large and the first projection direction is the Z-axis direction, the Y-axis coordinates read from a bone projection image projected along the Z-axis direction may carry a large error. Meanwhile, to avoid the overlap of the left leg and the right leg that projection in the X-axis direction could cause, this embodiment first separates the left leg and the right leg and projects each of them individually, determining the actual joint coordinates of the left knee joint from the left leg projection images and the actual joint coordinates of the right knee joint from the right leg projection images corresponding to the X-axis direction.
S330, determining the model displacement difference based on the actual joint coordinates and the model joint coordinates, and adjusting the spatial position of the target bone mean model based on the model displacement difference so as to align it with the spatial position of the bone image.
Specifically, the target bone mean model has a default spatial start coordinate after being built, for example (0, 0, 0). Illustratively, the model displacement difference can be added to the spatial start coordinate of the target bone mean model to obtain a target bone mean model aligned with the spatial position of the bone image.
It should be noted that this step only achieves an approximate alignment of the spatial positions of the bone image and the target bone mean model; the subsequent registration operation further improves their spatial alignment.
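A minimal sketch of this coarse alignment, assuming the model joint coordinates and the actual joint coordinates are expressed in the same physical coordinate system:

```python
import numpy as np

def align_model_to_image(model_origin, model_joint, actual_joint):
    """Shift the spatial start coordinate of the target bone mean model
    so that its joint coordinate lands on the actual joint coordinate
    detected in the bone image."""
    displacement = np.asarray(actual_joint) - np.asarray(model_joint)
    return np.asarray(model_origin) + displacement  # aligned start coordinate
```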
S340, for each bone object, registering the target bone mean model corresponding to the bone object with the bone image to obtain the object bounding box data corresponding to the bone object.
S350, determining a distance proportion data set corresponding to each bone object based on the bone image and each piece of object bounding box data.
S360, determining a bone segmentation result corresponding to the bone image based on each distance proportion data set and the pre-trained target image segmentation model.
The target bone mean model has a default spatial position after being built, and the bone image acquired by the medical imaging device also has its own spatial position; when the spatial position difference between the two is large, the object bounding box data obtained by direct registration are inaccurate. According to the technical solution of this embodiment, the target bone mean models respectively corresponding to at least one bone object are acquired; the actual joint coordinates corresponding to the joint object in the bone image and the model joint coordinates corresponding to the joint object in the target bone mean model are obtained; the model displacement difference is determined based on the actual joint coordinates and the model joint coordinates; and the spatial position of the target bone mean model is adjusted based on the model displacement difference so that it aligns with the spatial position of the bone image. For each bone object, the target bone mean model corresponding to the bone object is then registered with the bone image to obtain the object bounding box data corresponding to the bone object. This solves the problem of large errors in the object bounding box data obtained by registering the target bone mean model with the bone image, and further improves the segmentation effect of the subsequent image segmentation model.
Example four
Fig. 8 is a schematic structural diagram of a skeleton image segmentation apparatus according to a fourth embodiment of the present invention. As shown in fig. 8, the apparatus includes: an object bounding box data acquisition module 410, a distance scale dataset determination module 420, and a bone segmentation result determination module 430.
The object bounding box data acquiring module 410 is configured to acquire object bounding box data corresponding to at least one bone object in the bone image;
a distance proportion data set determining module 420, configured to determine a distance proportion data set corresponding to each bone object based on the bone image and each piece of object bounding box data;
a bone segmentation result determining module 430, configured to determine a bone segmentation result corresponding to the bone image based on each distance proportion data set and a pre-trained target image segmentation model;
the object boundary frame data represent space information of a minimum external cuboid corresponding to a bone object in a bone image, the distance proportion data set comprises at least one distance proportion data, the distance proportion data represent the distance proportion of image pixel points in the bone image relative to the minimum external cuboid, and a bone segmentation result comprises segmentation images corresponding to the at least one bone object respectively.
According to the technical scheme, object boundary frame data corresponding to at least one bone object in a bone image are obtained, a distance proportion data set corresponding to each bone object is determined based on the bone image and the object boundary frame data, a bone segmentation result corresponding to the bone image is determined based on each distance proportion data set and a pre-trained target image segmentation model, wherein the object boundary frame data represent space information of a cuboid which is connected to the minimum external side of the corresponding bone object in the bone image, the distance proportion data in the distance proportion data set represent the distance proportion of image pixel points in the bone image relative to the minimum external side, and the bone segmentation result comprises segmentation images corresponding to the at least one bone object respectively.
On the basis of the foregoing embodiment, optionally, the distance proportion data set determining module 420 includes:
a landmark coordinate determining unit, configured to determine, for each piece of object bounding box data, size data of the minimum bounding cuboid and at least one landmark coordinate based on the object bounding box data;
and a distance proportion data set determining unit, configured to determine a distance proportion data set of the bone object corresponding to the object bounding box data based on each landmark coordinate, the size data, and the pixel point coordinates corresponding to the image pixel points in the bone image.
On the basis of the foregoing embodiment, optionally, the distance proportion data set determining unit is specifically configured to:
determining at least one target pixel point coordinate based on the object bounding box data and the pixel point coordinates corresponding to the image pixel points in the bone image;
acquiring, for each landmark coordinate, the difference coordinates between the landmark coordinate and each target pixel point coordinate;
and, for each difference coordinate, taking the ratio of the difference coordinate to the size data as distance proportion data and adding the distance proportion data to the distance proportion data set of the bone object corresponding to the object bounding box data.
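As a non-limiting illustration, under the assumption that the landmark coordinates are the two opposite corners of the minimum bounding cuboid, the unit described above may compute the distance proportion data roughly as follows (all names are illustrative, not taken from the embodiment):

import numpy as np

def distance_ratio_set(bbox_min, bbox_max, pixel_coords):
    """pixel_coords: (N, 3) coordinates of the target image pixel points."""
    bbox_min = np.asarray(bbox_min, dtype=float)
    bbox_max = np.asarray(bbox_max, dtype=float)
    size = bbox_max - bbox_min                      # size data of the cuboid
    ratios = []
    for landmark in (bbox_min, bbox_max):           # assumed landmark coordinates
        diff = landmark - np.asarray(pixel_coords)  # difference coordinates
        ratios.append(diff / size)                  # ratio of difference to size
    return np.concatenate(ratios, axis=1)           # (N, 6) per-pixel features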
On the basis of the foregoing embodiment, optionally, the bone segmentation result determining module 430 is specifically configured to:
acquiring image parameter data corresponding to the bone image, where the image parameter data includes image pixel value data of the bone image and/or image gradient data corresponding to at least one scale;
and inputting the image parameter data and each distance proportion data set into the pre-trained target image segmentation model to obtain the output bone segmentation result corresponding to the bone image.
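For illustration, one plausible way to assemble such image parameter data, assuming Gaussian-derivative gradient magnitudes and illustrative scale values, is:

import numpy as np
from scipy import ndimage

def image_parameter_data(volume, scales=(1.0, 2.0, 4.0)):
    """volume: 3-D bone image; returns (n_features, *volume.shape)."""
    feats = [volume.astype(float)]                  # image pixel value data
    for s in scales:                                # one gradient map per scale
        feats.append(ndimage.gaussian_gradient_magnitude(volume, sigma=s))
    return np.stack(feats)

The stacked features, concatenated per pixel with each distance proportion data set, then form the input of the target image segmentation model.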
On the basis of the foregoing embodiment, optionally, the object bounding box data obtaining module 410 includes:
a target bone average model obtaining unit, configured to obtain target bone average models respectively corresponding to at least one bone object;
and an object bounding box data determining unit, configured to register, for each bone object, the target bone average model corresponding to the bone object with the bone image to obtain object bounding box data corresponding to the bone object.
On the basis of the foregoing embodiment, optionally, the object bounding box data obtaining module 410 further includes:
the actual joint coordinate acquisition unit is used for acquiring actual joint coordinates corresponding to joint objects in the bone images and acquiring model joint coordinates corresponding to the joint objects in the target skeleton average model before registering the target skeleton average model corresponding to the skeleton objects with the skeleton images to obtain object boundary box data corresponding to the skeleton objects;
and the target bone average model alignment unit is used for determining a model displacement difference based on the actual joint coordinates and the model joint coordinates, and adjusting the spatial position of the target bone average model based on the model displacement difference so as to align the spatial position of the target bone average model with the spatial position of the bone image.
On the basis of the foregoing embodiment, optionally, the actual joint coordinates include a first axis coordinate, a second axis coordinate, and a third axis coordinate respectively corresponding to the first projection direction, the second projection direction, and the third projection direction, and the actual joint coordinate acquiring unit includes:
the bone image projection subunit is used for projecting the bone image along a first projection direction and a second projection direction respectively to obtain a first bone projection image and a second bone projection image;
a second axis coordinate determination subunit, configured to determine, based on the first bone projection image, a second axis coordinate and a third axis coordinate of the joint object in the bone image, which correspond to the second projection direction and the third projection direction, respectively;
and the first axis coordinate determination subunit is used for determining the first axis coordinate, corresponding to the first projection direction, of the joint object in the bone image based on the second bone projection image.
On the basis of the foregoing embodiment, optionally, the second axis coordinate determination subunit is specifically configured to:
performing Gaussian blur processing on the first bone projection image to obtain a first bone blurred image;
and determining the second axis coordinate and the third axis coordinate of the joint object in the bone image, corresponding to the second projection direction and the third projection direction respectively, based on the pixel point coordinates of the pixel point with the maximum pixel value in the first bone blurred image.
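A minimal sketch of this projection-and-blur localization follows, assuming a volume indexed as (X, Y, Z), projection by summation, and that the joint appears as the brightest region of the blurred projection; all of these are assumptions for illustration, not statements from the embodiment:

import numpy as np
from scipy import ndimage

def locate_joint(volume, sigma=5.0):
    """volume: 3-D bone image indexed (X, Y, Z); returns (y, x, z) indices."""
    # First bone projection image: project along the first (Y) direction,
    # blur, and take the maximum to obtain the second and third coordinates.
    first_proj = volume.sum(axis=1)                           # shape (X, Z)
    x_idx, z_idx = np.unravel_index(
        np.argmax(ndimage.gaussian_filter(first_proj, sigma)),
        first_proj.shape)
    # Second bone projection image: project along the second (X) direction
    # to obtain the first axis coordinate.
    second_proj = volume.sum(axis=0)                          # shape (Y, Z)
    y_idx, _ = np.unravel_index(
        np.argmax(ndimage.gaussian_filter(second_proj, sigma)),
        second_proj.shape)
    return y_idx, x_idx, z_idx   # first/second/third axis coordinates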
On the basis of the above embodiment, optionally, when the bone image is a lower limb image that includes a left leg image and a right leg image, the joint object includes a left knee joint and a right knee joint, the first bone projection image includes a first left leg projection image and a first right leg projection image, and the second bone projection image includes a second left leg projection image and a second right leg projection image; accordingly, the bone image projection subunit is specifically configured to:
performing a segmentation operation on the bone image to obtain the left leg image and the right leg image (one possible split is sketched below);
projecting the left leg image along the first projection direction and the second projection direction respectively to obtain a first left leg projection image and a second left leg projection image;
and projecting the right leg image along the first projection direction and the second projection direction respectively to obtain a first right leg projection image and a second right leg projection image.
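A simplified sketch of such a left/right split, using connected-component labelling, with axis 0 taken as the X axis; the embodiment only states that a segmentation operation is used, so this particular split is an assumption:

import numpy as np
from scipy import ndimage

def split_legs(lower_limb_mask):
    """lower_limb_mask: 3-D binary array; returns (left_leg, right_leg) masks."""
    labels, n = ndimage.label(lower_limb_mask > 0)   # connected components
    if n < 2:
        raise ValueError("expected at least two leg components")
    counts = np.bincount(labels.ravel())[1:]         # voxel count per label
    a, b = np.argsort(counts)[-2:] + 1               # the two largest labels
    # Order the two components along the X axis so left/right stay consistent.
    cx_a = ndimage.center_of_mass(labels == a)[0]
    cx_b = ndimage.center_of_mass(labels == b)[0]
    left, right = (a, b) if cx_a < cx_b else (b, a)
    return labels == left, labels == right

Each leg image can then be projected along the first (Y) and second (X) projection directions exactly as in the single-volume sketch above.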
On the basis of the above embodiment, optionally, the first projection direction is a Y-axis direction, and the second projection direction is an X-axis direction.
On the basis of the foregoing embodiment, optionally, the target bone average model obtaining unit includes:
the registration mask image set determining subunit is used for acquiring a standard mask image and at least one reference mask image corresponding to each bone object, and registering each reference mask image with the standard mask image respectively to obtain a registration mask image set; wherein the registration mask image set comprises at least one registration mask image;
and the target bone average model determining subunit is used for determining a target bone average model corresponding to the bone object based on the standard mask image and the registration mask image set.
On the basis of the foregoing embodiment, optionally, the target bone average model determining subunit is specifically configured to:
acquiring a current registration mask image in the registration mask image set, and registering the standard mask image to the previous bone average model to obtain an intermediate mask image;
determining a current bone summation model based on the current registration mask image and the previous bone summation model, and determining a current bone average model based on the intermediate mask image and the current bone summation model;
taking the current bone summation model as the previous bone summation model and the current bone average model as the previous bone average model, and repeating the step of acquiring a current registration mask image in the registration mask image set;
until the number of iterations reaches the number of images in the registration mask image set, taking the current bone average model as the target bone average model corresponding to the bone object;
where, in the first iteration, the previous bone average model is the current registration mask image and the previous bone summation model is 0.
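Purely as a reading aid, the iterative construction above can be sketched as follows. The precise way the intermediate mask image and the current bone summation model are combined into the average is not fully specified, so the (sum + intermediate) / (i + 1) combination below is an assumption, as are all names:

def build_mean_model(standard_mask, registration_masks, register):
    """registration_masks: arrays from the registration mask image set.
    register(moving, fixed): resample `moving` into the space of `fixed`.
    """
    prev_sum = 0.0
    prev_mean = registration_masks[0]        # first-iteration initialisation
    for i, current_reg in enumerate(registration_masks, start=1):
        intermediate = register(standard_mask, prev_mean)  # intermediate mask image
        current_sum = prev_sum + current_reg               # current bone summation model
        current_mean = (current_sum + intermediate) / (i + 1)  # assumed combination
        prev_sum, prev_mean = current_sum, current_mean
    return prev_mean   # target bone average model after all images are used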
On the basis of the foregoing embodiment, optionally, the target bone average model includes model feature point coordinates corresponding to at least one bone feature point, and accordingly, the apparatus further includes:
the actual characteristic point coordinate determination module is used for acquiring a segmented image and a target skeleton average model corresponding to each skeleton object;
and registering the target bone average model with the segmentation image to obtain actual feature point coordinates corresponding to at least one bone feature point in the segmentation image.
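As an illustrative sketch only: a full implementation would reuse the registration machinery above, but a translation estimated from centres of mass keeps the example self-contained; transfer_feature_points and both mask arguments are assumed names:

import numpy as np
from scipy import ndimage

def transfer_feature_points(model_mask, segmented_mask, model_points):
    """Map model feature point coordinates into the segmented image's space.

    model_points: (K, 3) model feature point coordinates.
    """
    # Estimate a translation between the two masks; a real system would
    # substitute its full registration transform here.
    offset = (np.asarray(ndimage.center_of_mass(segmented_mask)) -
              np.asarray(ndimage.center_of_mass(model_mask)))
    return np.asarray(model_points, dtype=float) + offset  # actual coordinates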
The bone image segmentation device provided by the embodiment of the invention can execute the bone image segmentation method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example Five
Fig. 9 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention. The electronic device 10 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown in the embodiments of the present invention, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 9, the electronic device 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, and the like, wherein the memory stores a computer program executable by the at least one processor 11, and the processor 11 can perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from a storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the electronic apparatus 10 can also be stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The processor 11 performs the various methods and processes described above, such as a segmentation method of a bone image.
In some embodiments, the method of segmentation of a bone image may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the above described method of segmentation of a bone image may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the segmentation method of the bone image by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
The computer program for implementing the method for segmentation of a bone image of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
Example Six
The sixth embodiment of the present invention further provides a computer-readable storage medium, which stores computer instructions for causing a processor to execute a method for segmenting a bone image, where the method includes:
acquiring object bounding box data corresponding to at least one bone object in a bone image;
determining a distance proportion data set corresponding to each bone object based on the bone image and each piece of object bounding box data;
determining a bone segmentation result corresponding to the bone image based on each distance proportion data set and a pre-trained target image segmentation model;
where the object bounding box data represents spatial information of the minimum bounding cuboid corresponding to a bone object in the bone image, the distance proportion data set includes at least one piece of distance proportion data, the distance proportion data represents the distance proportion of image pixel points in the bone image relative to the minimum bounding cuboid, and the bone segmentation result includes segmentation images respectively corresponding to the at least one bone object.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak scalability in traditional physical hosting and VPS services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired result of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (16)

1. A method for segmenting a bone image, comprising:
acquiring object bounding box data corresponding to at least one bone object in the bone image;
determining a distance proportion data set respectively corresponding to each bone object based on the bone image and each piece of object bounding box data;
determining a bone segmentation result corresponding to the bone image based on each distance proportion data set and a pre-trained target image segmentation model;
wherein the object bounding box data represents spatial information of a minimum bounding cuboid corresponding to a bone object in the bone image, the distance proportion data set comprises at least one piece of distance proportion data, the distance proportion data represents the distance proportion of an image pixel point in the bone image relative to the minimum bounding cuboid, and the bone segmentation result comprises segmentation images respectively corresponding to the at least one bone object.
2. The method of claim 1, wherein the determining a distance proportion data set corresponding to each bone object based on the bone image and each piece of object bounding box data comprises:
for each piece of object bounding box data, determining size data of the minimum bounding cuboid and at least one landmark coordinate based on the object bounding box data;
and determining a distance proportion data set of the bone object corresponding to the object bounding box data based on each landmark coordinate, the size data, and pixel point coordinates corresponding to image pixel points in the bone image.
3. The method of claim 2, wherein the determining a distance proportion data set of the bone object corresponding to the object bounding box data based on each landmark coordinate, the size data, and the pixel point coordinates corresponding to image pixel points in the bone image comprises:
determining at least one target pixel point coordinate based on the object bounding box data and the pixel point coordinates corresponding to the image pixel points in the bone image;
acquiring, for each landmark coordinate, difference coordinates between the landmark coordinate and each target pixel point coordinate;
and, for each difference coordinate, taking the ratio of the difference coordinate to the size data as distance proportion data and adding the distance proportion data to the distance proportion data set of the bone object corresponding to the object bounding box data.
4. The method according to claim 1, wherein the determining a bone segmentation result corresponding to the bone image based on each distance proportion data set and a pre-trained target image segmentation model comprises:
acquiring image parameter data corresponding to the bone image; wherein the image parameter data comprises image pixel value data of the bone image and/or image gradient data corresponding to at least one scale, respectively;
and inputting the image parameter data and each distance proportion data set into a pre-trained target image segmentation model to obtain an output bone segmentation result corresponding to the bone image.
5. The method according to any one of claims 1-4, wherein said obtaining object bounding box data corresponding to at least one bone object in the bone image comprises:
obtaining target bone average models respectively corresponding to at least one bone object;
for each bone object, registering the target bone average model corresponding to the bone object with the bone image to obtain object bounding box data corresponding to the bone object.
6. The method of claim 5, wherein before the registering the target bone average model corresponding to the bone object with the bone image to obtain object bounding box data corresponding to the bone object, the method further comprises:
acquiring actual joint coordinates corresponding to a joint object in the bone image, and acquiring model joint coordinates corresponding to the joint object in the target bone average model;
and determining a model displacement difference based on the actual joint coordinates and the model joint coordinates, and adjusting the spatial position of the target bone average model based on the model displacement difference so as to align the spatial position of the target bone average model with the spatial position of the bone image.
7. The method of claim 6, wherein the actual joint coordinates comprise a first axis coordinate, a second axis coordinate and a third axis coordinate corresponding to a first projection direction, a second projection direction and a third projection direction, respectively, and the acquiring actual joint coordinates corresponding to the joint object in the bone image comprises:
respectively projecting the bone image along the first projection direction and the second projection direction to obtain a first bone projection image and a second bone projection image;
determining second axis coordinates and third axis coordinates of the joint object in the bone image, which correspond to the second projection direction and the third projection direction respectively, based on the first bone projection image;
and determining a first axis coordinate of the joint object in the bone image corresponding to the first projection direction based on the second bone projection image.
8. The method of claim 7, wherein said determining second and third axis coordinates of the joint object in the bone image corresponding to the second and third projection directions, respectively, based on the first bone projection image comprises:
performing Gaussian blur processing on the first bone projection image to obtain a first bone blurred image;
and determining the second axis coordinate and the third axis coordinate of the joint object in the bone image, corresponding to the second projection direction and the third projection direction respectively, based on the pixel point coordinates of the pixel point with the maximum pixel value in the first bone blurred image.
9. The method of claim 7, wherein, when the bone image is a lower limb image that includes a left leg image and a right leg image, the joint object includes a left knee joint and a right knee joint, the first bone projection image includes a first left leg projection image and a first right leg projection image, and the second bone projection image includes a second left leg projection image and a second right leg projection image; accordingly, the respectively projecting the bone image along the first projection direction and the second projection direction to obtain the first bone projection image and the second bone projection image comprises:
performing a segmentation operation on the bone image to obtain the left leg image and the right leg image;
projecting the left leg image along a first projection direction and a second projection direction respectively to obtain a first left leg projection image and a second left leg projection image;
and projecting the right leg image along a first projection direction and a second projection direction respectively to obtain a first right leg projection image and a second right leg projection image.
10. The method of claim 9, wherein the first projection direction is a Y-axis direction and the second projection direction is an X-axis direction.
11. The method of claim 5, wherein the obtaining target bone average models respectively corresponding to at least one bone object comprises:
for each bone object, acquiring a standard mask image and at least one reference mask image corresponding to the bone object, and respectively registering each reference mask image with the standard mask image to obtain a registration mask image set; wherein the registration mask image set comprises at least one registration mask image;
and determining a target bone average model corresponding to the bone object based on the standard mask image and the registration mask image set.
12. The method of claim 11, wherein the determining a target bone average model corresponding to the bone object based on the standard mask image and the registration mask image set comprises:
acquiring a current registration mask image in the registration mask image set, and registering the standard mask image to the previous bone average model to obtain an intermediate mask image;
determining a current bone summation model based on the current registration mask image and the previous bone summation model, and determining a current bone average model based on the intermediate mask image and the current bone summation model;
taking the current bone summation model as the previous bone summation model and the current bone average model as the previous bone average model, and repeating the step of acquiring a current registration mask image in the registration mask image set;
until the number of iterations reaches the number of images in the registration mask image set, taking the current bone average model as the target bone average model corresponding to the bone object;
wherein, in the first iteration, the previous bone average model is the current registration mask image and the previous bone summation model is 0.
13. The method of claim 5, wherein the target bone average model comprises model feature point coordinates respectively corresponding to at least one bone feature point, and the method further comprises:
for each bone object, acquiring the segmentation image and the target bone average model corresponding to the bone object;
and registering the target bone average model with the segmentation image to obtain actual feature point coordinates corresponding to at least one bone feature point in the segmentation image.
14. An apparatus for segmenting a bone image, comprising:
an object bounding box data acquiring module, configured to acquire object bounding box data corresponding to at least one bone object in a bone image;
a distance proportion data set determining module, configured to determine a distance proportion data set respectively corresponding to each bone object based on the bone image and each piece of object bounding box data;
a bone segmentation result determining module, configured to determine a bone segmentation result corresponding to the bone image based on each distance proportion data set and a pre-trained target image segmentation model;
wherein the object bounding box data represents spatial information of a minimum bounding cuboid corresponding to a bone object in the bone image, the distance proportion data set comprises at least one piece of distance proportion data, the distance proportion data represents the distance proportion of image pixel points in the bone image relative to the minimum bounding cuboid, and the bone segmentation result comprises segmentation images respectively corresponding to the at least one bone object.
15. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of bone image segmentation of any one of claims 1-13.
16. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer instructions for causing a processor to carry out a method of segmentation of a bone image according to any one of claims 1-13 when executed.
CN202211141687.1A 2022-09-20 2022-09-20 Skeleton image segmentation method, device, equipment and storage medium Pending CN115457056A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211141687.1A CN115457056A (en) 2022-09-20 2022-09-20 Skeleton image segmentation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115457056A true CN115457056A (en) 2022-12-09

Family

ID=84304731

Country Status (1)

Country Link
CN (1) CN115457056A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117442395A (en) * 2023-09-06 2024-01-26 北京长木谷医疗科技股份有限公司 Method, device and equipment for acquiring femoral head rotation center based on clustering algorithm
CN117346285A (en) * 2023-12-04 2024-01-05 南京邮电大学 Indoor heating and ventilation control method, system and medium
CN117346285B (en) * 2023-12-04 2024-03-26 南京邮电大学 Indoor heating and ventilation control method, system and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination