CN115035136B - Method, system, device and storage medium for bone subregion segmentation in knee joint image - Google Patents


Info

Publication number
CN115035136B
CN115035136B (application CN202210948517.8A)
Authority
CN
China
Prior art keywords
bone
knee joint
position information
image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210948517.8A
Other languages
Chinese (zh)
Other versions
CN115035136A (en)
Inventor
张晓东 (Zhang Xiaodong)
张志勇 (Zhang Zhiyong)
陈少龙 (Chen Shaolong)
钟丽洁 (Zhong Lijie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Third Affiliated Hospital of Southern Medical University (Academy of Orthopaedics Guangdong Province)
Original Assignee
Third Affiliated Hospital of Southern Medical University (Academy of Orthopaedics Guangdong Province)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Third Affiliated Hospital of Southern Medical University (Academy of Orthopaedics Guangdong Province)
Priority to CN202210948517.8A
Publication of CN115035136A
Application granted
Publication of CN115035136B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/11 Region-based segmentation (G06T7/00 Image analysis > G06T7/10 Segmentation; Edge detection)
    • G06T5/40 Image enhancement or restoration using histogram techniques (G06T5/00 Image enhancement or restoration)
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods (G06T7/70)
    • G06T2207/10088 Magnetic resonance imaging [MRI] (G06T2207/10 Image acquisition modality > G06T2207/10072 Tomographic images)
    • G06T2207/30008 Bone (G06T2207/30 Subject of image > G06T2207/30004 Biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a method, a system, a device and a storage medium for segmenting bone subregions in a knee joint image, belonging to the technical field of image processing. The method acquires a knee joint magnetic resonance image, identifies bone position information in it, and divides the bone in the image into a plurality of bone regions according to that information. By automatically identifying the bone position information, automatic segmentation of the bone in the knee joint magnetic resonance image is achieved, which improves the degree of automation, the efficiency and the precision of bone segmentation.

Description

Method, system, device and storage medium for segmenting bone subregions in knee joint image
Technical Field
The invention relates to the technical field of image processing, in particular to a method, a system, a device and a storage medium for segmenting bone subregions in a knee joint image.
Background
Knee osteoarthritis is one of the common diseases of middle-aged and elderly people; its main features are osteophyte formation and cartilage wear. Clinically, an accurate, noninvasive and comprehensive evaluation of knee osteoarthritis lesions is required. The prior art generally uses knee joint magnetic resonance images to evaluate and analyze the pathological condition.
Knee joint magnetic resonance images are a common type of medical image. They are conventionally acquired in the transverse (axial), sagittal and coronal planes, the latter two being the most commonly used. The knee joint includes the femur, the tibia and the patella, each with its associated cartilage.
When studying the pathological condition of the knee joint, it is necessary to segment the bones, and often to further divide them into multiple subregions. In the prior art, doctors usually perform this segmentation by hand, which is time-consuming, labor-intensive and of low accuracy.
Disclosure of Invention
In view of this, the present invention provides a method, a system, a device and a storage medium for segmenting bone subregions in a knee joint image, which address the low degree of automation and low accuracy of knee joint magnetic resonance image segmentation in the prior art. To achieve one, some or all of the above or other objects, the first aspect of the invention provides:
a bone subregion segmentation method in a knee joint image comprises the following steps:
acquiring a knee joint magnetic resonance image;
identifying bone position information in the knee joint magnetic resonance image;
and dividing the bone in the knee joint magnetic resonance image into a plurality of bone areas according to the bone position information.
Preferably, the step of dividing the bone in the knee joint magnetic resonance image into a plurality of bone regions according to the bone position information comprises:
identifying target position information of a target bone from the bone position information;
determining boundary information according to the target position information;
the target bone is divided based on the boundary information to obtain the bone region.
Preferably, the step of identifying target position information of a target bone from the bone position information includes:
acquiring slice direction information of the target bone; the slice direction information is used for describing the slice direction of the knee joint magnetic resonance image;
determining a slice image corresponding to the slice direction information;
determining a location of the target bone in the slice image based on the bone location information;
and obtaining the target position information according to the voxel parameters corresponding to the positions in the slice images.
Preferably, the step of obtaining the target position information according to the voxel parameter corresponding to the position in the slice image includes:
traversing the voxel parameters at the corresponding positions in the slice image row by row according to a preset traversal direction, and judging whether any voxel parameter matching a preset parameter threshold exists at those positions;
if so, recording the row numbers, the column numbers and the count of the voxel parameters matching the parameter threshold;
and calculating the target position information of the target bone from the row numbers, the column numbers and the count.
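As an illustrative sketch only, the traversal described in this step might look as follows. The function name, the use of NumPy, the top-down/left-to-right traversal order and the centroid-style aggregation of rows and columns are assumptions for illustration; the claims do not fix how the target position is computed from the recorded values.

```python
import numpy as np

def locate_target(slice_img, threshold):
    """Traverse a 2D slice image row by row, record the row numbers, column
    numbers and count of voxel parameters matching the threshold, and derive
    a target position (here: a simple centroid, an assumed aggregation)."""
    rows, cols, count = [], [], 0
    for r in range(slice_img.shape[0]):        # preset traversal direction: top-down
        for c in range(slice_img.shape[1]):    # then left to right within a row
            if slice_img[r, c] >= threshold:   # voxel parameter matches the threshold
                rows.append(r)
                cols.append(c)
                count += 1
    if count == 0:
        return None                            # no matching voxel parameter found
    return (sum(rows) / count, sum(cols) / count)
```

For a 4 × 4 slice with a bright 2 × 2 patch covering rows and columns 1–2, this returns the patch centre (1.5, 1.5).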
Preferably, the step of determining boundary information according to the target location information includes:
acquiring a boundary strategy corresponding to the target bone; the boundary strategy is used for describing the relation between different boundary positions and the target position information;
and determining the boundary information according to the boundary strategy and the target position information.
Preferably, the step of identifying bone position information in the knee joint magnetic resonance image comprises:
and carrying out first segmentation on the knee joint magnetic resonance image by utilizing a polarization self-attention network model to obtain the bone position information of each bone.
Preferably, after the first segmentation of the knee joint magnetic resonance image using the polarized self-attention network model, the method further comprises:
calculating contour position information of each bone according to the bone position information of each bone from a preset slicing direction;
segmenting the knee joint magnetic resonance image into a plurality of region of interest images based on the contour position information; the region of interest image of each of the bones comprises a plurality of slice images;
performing a second segmentation on each of the slice images in the region of interest image using the polarized self-attention network model to update the bone location information;
and mapping each region-of-interest image back to the knee joint magnetic resonance image according to the contour position information to obtain the bone position information and obtain the updated knee joint magnetic resonance image.
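The two-pass scheme above (coarse segmentation, ROI cropping, refined segmentation, mapping back) can be sketched as follows. This is a hedged illustration: `segment_fn` stands in for the patent's polarized self-attention model, and the margin parameter and bounding-box contour computation are assumptions.

```python
import numpy as np

def roi_refine(volume, coarse_mask, segment_fn, margin=2):
    """Crop a region of interest around a coarse bone mask, run a second
    segmentation on it, and map the result back into the full volume."""
    idx = np.argwhere(coarse_mask)                        # voxels of the coarse bone mask
    lo = np.maximum(idx.min(axis=0) - margin, 0)          # contour position information
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    sl = tuple(slice(a, b) for a, b in zip(lo, hi))
    refined = segment_fn(volume[sl])                      # second segmentation on the ROI
    full = np.zeros_like(coarse_mask)
    full[sl] = refined                                    # map the ROI back to the image
    return full
```

Because the second pass sees only the cropped region, the refined model works at a higher effective resolution around each bone.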
Preferably, prior to said identifying bone location information in said knee joint magnetic resonance image, said method further comprises:
performing histogram equalization processing on the knee joint magnetic resonance image; and/or,
and carrying out normalization processing on the knee joint magnetic resonance image.
Preferably, the step of normalizing the knee joint magnetic resonance image comprises:
acquiring original voxel parameters in the knee joint magnetic resonance image;
comparing the voxel original parameters to obtain a voxel maximum value and a voxel minimum value;
and calculating to obtain a voxel conversion parameter corresponding to the voxel original parameter by using the voxel original parameter, the voxel maximum value and the voxel minimum value.
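The claims state only that the conversion parameter is computed from the original voxel parameter and the voxel extremes; a common reading is min-max rescaling to [0, 1], sketched below under that assumption.

```python
import numpy as np

def normalize_voxels(volume):
    """Min-max normalization: compare the original voxel parameters to find
    the voxel maximum and minimum, then rescale each voxel to [0, 1]."""
    vmin, vmax = volume.min(), volume.max()    # voxel minimum and maximum
    if vmax == vmin:                           # guard: a flat image has no range
        return np.zeros_like(volume, dtype=np.float64)
    return (volume - vmin) / (vmax - vmin)     # voxel conversion parameters
```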
Preferably, before the dividing the bone in the knee joint magnetic resonance image into a plurality of bone regions according to the bone position information, the method further comprises:
acquiring knee joint medial direction information and/or knee joint lateral direction information of the knee joint magnetic resonance image;
the step of dividing the bone in the knee joint magnetic resonance image into a plurality of bone regions according to the bone position information comprises:
identifying target position information of a target bone using the knee joint medial direction information and/or the knee joint lateral direction information, and the bone position information;
determining boundary information according to the target position information;
the target bone is divided based on the boundary information to obtain the bone region.
Preferably, before the dividing the bone in the knee joint magnetic resonance image into a plurality of bone regions according to the bone position information, the method further comprises:
detecting a characteristic bone in the knee joint magnetic resonance image to generate characteristic bone position information;
determining knee joint medial direction information and/or knee joint lateral direction information according to the characteristic bone position information;
the step of dividing the bone in the knee joint magnetic resonance image into a plurality of bone regions according to the bone position information comprises:
identifying target position information of a target bone using the knee joint medial direction information and/or the knee joint lateral direction information, and the bone position information;
determining boundary information according to the target position information;
the target bone is divided based on the boundary information to obtain the bone region.
Preferably, the step of detecting a characteristic bone in the magnetic resonance image of the knee joint and generating characteristic bone position information includes:
detecting whether the characteristic bones exist in each slice image from the corresponding slice direction according to preset slice detection direction information;
and generating characteristic bone position information according to the image number corresponding to the slice image with the characteristic bone.
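A minimal sketch of the characteristic-bone detection, assuming a NumPy volume and a pluggable per-slice detector (the detector itself, e.g. a fibula classifier, is not detailed in the claims):

```python
import numpy as np

def detect_characteristic_bone(volume, detect_fn, axis=0):
    """Walk the slice images along a preset detection direction and record
    the image numbers of the slices in which the characteristic bone appears."""
    hits = []
    for i in range(volume.shape[axis]):
        slice_img = np.take(volume, i, axis=axis)  # slice in the chosen direction
        if detect_fn(slice_img):                   # does this slice show the bone?
            hits.append(i)                         # record the image number
    return hits                                    # characteristic bone position info
```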
In a second aspect:
a bone subregion segmentation system in knee joint image, including obtaining the module, is used for obtaining the magnetic resonance image of knee joint;
an identification module for identifying bone position information in the knee joint magnetic resonance image;
and the segmentation module is used for dividing the bone in the knee joint magnetic resonance image into a plurality of bone areas according to the bone position information.
Preferably, the segmentation module comprises a target location unit for identifying target location information of a target bone from the bone location information;
a boundary information unit for determining boundary information according to the target position information;
a segmentation unit, configured to segment the target bone based on the boundary information to obtain the bone region.
Preferably, the target position unit includes a slice subunit for acquiring slice direction information of the target bone; the slice direction information is used for describing the slice direction of the knee joint magnetic resonance image;
a slice image subunit configured to determine a slice image corresponding to the slice direction information;
a target position subunit for determining a position of the target bone in the slice image based on the bone position information;
and the voxel parameter subunit is used for obtaining the target position information according to the voxel parameter corresponding to the position in the slice image.
Preferably, the voxel parameter subunit includes a traversal subunit, configured to traverse the voxel parameters at the corresponding positions in the slice image row by row according to a preset traversal direction, and to judge whether any voxel parameter matching a preset parameter threshold exists at those positions;
if so, to record the row numbers, the column numbers and the count of the voxel parameters matching the parameter threshold;
and a calculating subunit, configured to calculate the target position information of the target bone from the row numbers, the column numbers and the count.
Preferably, the boundary information unit includes a strategy subunit for acquiring a boundary strategy corresponding to the target bone; the boundary strategy is used for describing the relation between different boundary positions and the target position information;
and the boundary subunit is used for determining the boundary information according to the boundary strategy and the target position information.
Preferably, the identification module includes a first segmentation module, configured to perform a first segmentation on the knee joint magnetic resonance image by using a polarized self-attention network model, so as to obtain the bone position information of each bone.
Preferably, the system further comprises a contour module, configured to calculate contour position information of each bone according to the bone position information of each bone from a preset slice direction after the first segmentation of the knee joint magnetic resonance image by using the polarized self-attention network model;
a cropping module for segmenting the knee joint magnetic resonance image into a plurality of region of interest images based on the contour position information; the region of interest image of each of the bones comprises a plurality of slice images;
a second segmentation module for performing a second segmentation on each of the slice images in the region of interest image using the polarized self-attention network model to update the bone location information;
and the mapping module is used for mapping each interested region image back to the knee joint magnetic resonance image according to the contour position information to obtain the bone position information and obtain the updated knee joint magnetic resonance image.
Preferably, the system further comprises a preprocessing module for performing histogram equalization processing on the knee joint magnetic resonance image before the identifying of bone position information in the knee joint magnetic resonance image; and/or,
and carrying out normalization processing on the knee joint magnetic resonance image.
Preferably, the preprocessing module comprises a voxel original parameter unit for acquiring voxel original parameters in the knee joint magnetic resonance image;
the comparison unit is used for comparing the voxel original parameters to obtain a voxel maximum value and a voxel minimum value;
and the conversion unit is used for calculating the voxel conversion parameter corresponding to the voxel original parameter by using the voxel original parameter, the voxel maximum value and the voxel minimum value.
Preferably, the system further comprises a first orientation module for acquiring knee joint medial orientation information and/or knee joint lateral orientation information of the knee joint magnetic resonance image before the bone in the knee joint magnetic resonance image is divided into a plurality of bone regions according to the bone position information;
the segmentation module comprises a first dividing unit, a second dividing unit and a third dividing unit, wherein the first dividing unit is configured to identify target position information of a target bone by using the knee joint medial direction information and/or the knee joint lateral direction information, together with the bone position information;
the second dividing unit is used for determining boundary information according to the target position information;
a third dividing unit, configured to divide the target bone based on the boundary information to obtain the bone region.
Preferably, the system further comprises a second orientation module, configured to detect a characteristic bone in the knee joint magnetic resonance image before the bone in the knee joint magnetic resonance image is divided into a plurality of bone regions according to the bone position information, and generate characteristic bone position information;
the characteristic bone module is used for determining the inner direction information of the knee joint and/or the outer direction information of the knee joint according to the characteristic bone position information;
the segmentation module comprises a fourth dividing unit, a fifth dividing unit and a sixth dividing unit, wherein the fourth dividing unit is configured to identify target position information of a target bone by using the knee joint medial direction information and/or the knee joint lateral direction information, together with the bone position information;
a fifth dividing unit, configured to determine boundary information according to the target location information;
a sixth dividing unit, configured to divide the target bone based on the boundary information to obtain the bone region.
Preferably, the second direction module includes a judging unit, configured to detect whether the characteristic bone exists in each slice image from a corresponding slice direction according to preset slice detection direction information;
and the generating unit is used for generating the position information of the characteristic bones according to the image numbers corresponding to the slice images with the characteristic bones.
In a third aspect:
a bone subregion segmentation device in knee joint image comprises a memory and a processor, wherein the memory stores a bone subregion segmentation method in knee joint image, and the processor is used for adopting the method when executing the bone subregion segmentation method in knee joint image.
In a fourth aspect:
a storage medium storing a computer program which can be loaded by a processor and which performs the method as described above.
The embodiment of the invention has the following beneficial effects:
By identifying the bone position information from the acquired knee joint magnetic resonance image, the positions of the respective bones in the image, that is, the ranges they occupy, can be obtained. The bone is then divided into a plurality of bone regions. Throughout this process, bone region segmentation proceeds automatically without manual participation, which improves bone segmentation efficiency, reduces the error rate caused by human factors during segmentation, and improves bone segmentation precision.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Wherein:
FIG. 1 is a flowchart illustrating an overall method for segmenting bone subregions in a knee joint image according to an embodiment.
FIG. 2 is a flow chart of a method for segmenting bone regions in a knee joint image according to an embodiment.
FIG. 3 is a flowchart illustrating the identification of target location information for a method for bone segmentation in a knee joint image according to an embodiment.
Fig. 4 is a structural diagram of a polarization self-attention network model of a bone subregion segmentation method in a knee joint image in an embodiment.
Fig. 5 is a frame diagram illustrating the differences between the polarized self-attention network model and the UNet++ network in a bone subregion segmentation method in a knee joint image according to an embodiment.
FIG. 6 is a block diagram of a bone subregion segmentation system in a knee joint image in one embodiment.
FIG. 7 is a schematic structural diagram of a device for segmenting bone subregions in a knee joint image according to an embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
In the prior art, when bone and cartilage subregions in a knee joint magnetic resonance image are segmented, the segmentation is performed manually by a doctor or semi-automatically with an auxiliary tool. Such methods not only tend to lose segmentation precision due to human factors, but are also inefficient. In view of this, the present application provides a method for segmenting bone subregions in a knee joint image, as shown in fig. 1, comprising:
101. knee joint magnetic resonance images are acquired.
In one embodiment, the knee joint magnetic resonance image is a three-dimensional image of the knee acquired with magnetic resonance imaging (MRI) techniques, comprising three scan orientations: sagittal, coronal and transverse (axial). A typical knee joint magnetic resonance image includes the tibia, tibial cartilage, femur, femoral cartilage, patella, patellar cartilage and fibula.
The knee joint magnetic resonance image may be acquired actively or passively. In one embodiment, the current execution subject periodically retrieves a knee joint magnetic resonance image from the database; specifically, the retrieval can be triggered by a timer or a clock signal. In another embodiment, each time the current execution subject finishes processing one knee joint magnetic resonance image, it retrieves the next image to be processed from the database. In addition to retrieving images from the database, an acquisition request may also be sent to a designated terminal, device or interface to obtain the knee joint magnetic resonance image.
In other embodiments, the current execution subject checks for a knee joint magnetic resonance image in real time or periodically and, once one is detected, receives it and performs the subsequent processing on it. The knee joint magnetic resonance image may be received through a designated interface, actively transmitted to the current execution subject by other equipment, or imported through a human-machine interaction device.
102. Bone position information in the knee joint magnetic resonance image is identified.
The bone position information describes where the bones and cartilages included in the knee joint magnetic resonance image, specifically the femur, femoral cartilage, tibia, tibial cartilage, patella, patellar cartilage and so on, are located in the image. It should be noted that, because the knee joint magnetic resonance image is a three-dimensional image comprising 3 scan orientations, in order to obtain accurate bone position information for each bone and cartilage and to facilitate subsequent processing, in one embodiment the same bone has position information for each scan orientation, and the position information of all scan orientations together constitutes the bone position information of that bone. For ease of understanding, in one embodiment, suppose the knee joint magnetic resonance image has 160 × 384 × 384 voxels. For the sagittal orientation, 160 two-dimensional images of size 384 × 384 can be obtained, i.e. each consists of 384 rows and 384 columns of pixel points; for the coronal orientation, 384 two-dimensional images of size 384 × 160 can be obtained, i.e. 384 rows and 160 columns of pixel points; for the transverse orientation, 384 two-dimensional images of size 384 × 160 can likewise be obtained. Each such two-dimensional image is a slice image.
It should be noted that voxels and pixels are essentially the same: both refer to an indivisible unit or element of a picture. The difference is that voxels belong to stereoscopic, i.e. three-dimensional, images, while pixels belong to two-dimensional images; in this embodiment voxels and pixels play the same role.
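Assuming a NumPy array with axes ordered (sagittal, coronal, transverse) — an axis convention chosen for illustration, not stated in the text — the 160 × 384 × 384 example can be sliced into the three stacks of slice images like this (the row/column orientation of each slice depends on that convention):

```python
import numpy as np

# 160 x 384 x 384 voxel volume from the example (uint8 keeps the demo small)
volume = np.zeros((160, 384, 384), dtype=np.uint8)

sagittal = [volume[i, :, :] for i in range(volume.shape[0])]  # 160 images, 384 x 384
coronal  = [volume[:, j, :] for j in range(volume.shape[1])]  # 384 images, 160 x 384
axial    = [volume[:, :, k] for k in range(volume.shape[2])]  # 384 images, 160 x 384
```

Each list element is a view into the volume, so no voxel data is copied.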
In one embodiment, the bone position information is the set of voxel point locations occupied by the corresponding bone in the knee joint magnetic resonance image. For example, (52, 74, 105) means that, in a three-dimensional coordinate system established on the knee joint magnetic resonance image, the voxel at this point belongs to the tibia. In another embodiment, the bone position information is the pixel point locations occupied by the corresponding bone in each scan orientation of the knee joint magnetic resonance image.
The bone position information can be identified by a neural network model, or the bone position information of each bone can be identified from the distinctive properties of the bones in the knee joint magnetic resonance image, such as the gray levels of the corresponding voxels.
103. The bone in the knee joint magnetic resonance image is divided into a plurality of bone regions according to the bone position information.
In an embodiment, the knee joint magnetic resonance image includes 6 tissues: the tibia, tibial cartilage, femur, femoral cartilage, patella and patellar cartilage. After the bone position information of each tissue is obtained, the knee joint magnetic resonance image can be divided into 6 regions, each region corresponding to one tissue. In another embodiment, the knee joint magnetic resonance image can instead be divided into 3 regions: the tibial region, including the tibia and tibial cartilage; the femoral region, including the femur and femoral cartilage; and the patellar region, including the patella and patellar cartilage. In other embodiments, the knee joint magnetic resonance image further includes the fibula; the medial and lateral sides of the knee joint can be determined from the bone position information of the fibula, so that the knee joint magnetic resonance image can be further subdivided and the medial and lateral sides of each tissue distinguished. It should be noted that medial and lateral are defined with respect to the human knee joint. Since the bone position information is known, the number of regions into which the knee joint magnetic resonance image is divided, which tissues belong to the same region, and how many small regions a tissue is divided into can all be set according to actual requirements; this embodiment is not limited in this respect.
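The 3-region grouping in this paragraph can be expressed as a label-merging table. The numeric tissue ids below are hypothetical; only the grouping of each bone with its cartilage follows the text.

```python
import numpy as np

# Hypothetical label ids for the six tissues
TIBIA, TIBIAL_CART, FEMUR, FEMORAL_CART, PATELLA, PATELLAR_CART = 1, 2, 3, 4, 5, 6
REGION_OF = {TIBIA: 1, TIBIAL_CART: 1,      # tibial region
             FEMUR: 2, FEMORAL_CART: 2,     # femoral region
             PATELLA: 3, PATELLAR_CART: 3}  # patellar region

def merge_labels(label_volume):
    """Map a 6-tissue label volume onto the 3 bone regions."""
    out = np.zeros_like(label_volume)
    for tissue, region in REGION_OF.items():
        out[label_volume == tissue] = region
    return out
```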
By using the bone position information of each tissue in the knee joint magnetic resonance image, the tissues are divided into a plurality of bone regions, i.e. subregions, automatically and without manual participation. The degree of automation is high, the subregion segmentation is efficient and resistant to errors, and the accuracy of the subregion segmentation is improved.
In another embodiment of the present invention, for further definition and explanation, as shown in fig. 2, the step of dividing the bone in the magnetic resonance image of the knee joint into a plurality of bone regions according to the bone position information comprises:
201. target position information of the target bone is identified from the bone position information.
The bone position information includes the bone position information of all bones in the knee joint magnetic resonance image, and when bone regions are divided, the target bone is determined in advance in order to improve the division precision. In an embodiment, the target bone is identified by a bone identifier. Specifically, the current execution subject calls the bone identifier of the target bone from a preset target bone queue; the bone identifiers are, for example, 1, 2 and 3. When the bone identifier is 1, the target bone is the femur; when the bone identifier is 2, the target bone is the tibia; when the bone identifier is 3, the target bone is the patella. When the target position information is recognized, if the bone identifier called from the target bone queue is 1, the target position information of the femur needs to be recognized. At this time, the positions labeled 1 are retrieved from the bone position information, and all positions labeled 1 are combined to form the target position information.
In another embodiment, the target bone is identified by characteristic voxel information. Specifically, the current execution subject calls the characteristic voxel information from a preset characteristic voxel table; the characteristic voxel information is, for example, (256,256,156) for the femur, (85,85,85) for the tibia, and (120,120,10) for the patella. The femur is identified because its bone position information contains the voxel point location (256,256,156), and the target bone is thereby identified. After the corresponding target bone is identified in the bone position information, its bone position information is the target position information of the target bone.
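A sketch of the identifier-based lookup described above, assuming bone position information is stored as a mapping from voxel point locations to bone identifiers (this data layout is an assumption, not specified by the patent):

```python
# Bone identifiers as in the example above (hypothetical scheme).
FEMUR, TIBIA, PATELLA = 1, 2, 3

def target_positions(bone_position_info, bone_id):
    """Collect every voxel point location carrying the requested identifier;
    the combined positions form the target position information."""
    return sorted(p for p, ident in bone_position_info.items() if ident == bone_id)
```

The characteristic-voxel variant would work the same way, except that `bone_id` is first resolved by checking which bone's position information contains the characteristic voxel point.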
It should be noted that, in some embodiments, to improve the bone segmentation accuracy, the bones included in the knee joint magnetic resonance image are more finely segmented; for example, the tibia and tibial cartilage are segmented into 13 regions, the femur and femoral cartilage are segmented into 12 regions, and the patella and patellar cartilage are segmented into 4 regions. For example, the target bone is the femoral cartilage on the left side in a coronal slice image. The target position information may be information of the left limit position of a bone in a slice image in the coronal, sagittal, or transverse axis position, or information of the upper limit position of a bone in such a slice image. A limit position is the position of the bone in the slice image closest to a predetermined direction; for example, the left limit position is the leftmost position, and the upper limit position is the uppermost position. Here, left and upper refer to the upper, lower, left, and right sides of the slice image when the slice image is placed upright.
202. And determining boundary information according to the target position information.
After the target position information is known, the boundary information is determined. The boundary information describes the position of the boundary used in bone segmentation. For example, in one embodiment, the boundary information is the position information of a boundary on the coronal plane, on the medial and lateral sides of the femur, specifically the information of a plurality of voxel point locations. The boundary information is determined according to the number of positions included in the target position information. For example, when the target position information describes the position of only one point, voxel point locations in a preset direction are obtained to form the boundary information, which includes the point described in the target position information. When the target position information describes the positions of two points, the point locations passed through by the line connecting the two points constitute the boundary information.
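The one-point and two-point cases can be sketched as follows; the vertical-column default for the single-point case and the (row, col) convention are illustrative assumptions:

```python
def boundary_from_points(points, height):
    """Derive boundary pixel locations (row, col) from target position info:
    one point  -> the whole column through that point (preset direction),
    two points -> the discrete segment connecting them."""
    if len(points) == 1:
        _, c = points[0]
        return [(r, c) for r in range(height)]
    (r0, c0), (r1, c1) = points
    n = max(abs(r1 - r0), abs(c1 - c0))  # number of steps along the longer axis
    return [(round(r0 + (r1 - r0) * i / n), round(c0 + (c1 - c0) * i / n))
            for i in range(n + 1)]
```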
203. The target bone is divided based on the boundary information to obtain a bone region.
After the boundary information is obtained, the target bone is divided according to it into at least one bone region. In one embodiment, to improve the accuracy of bone region segmentation, the outer boundary of the target bone is determined as part of the boundary information when the boundary information is determined; that is, the outer boundary of the target bone is located anew. In another embodiment, the position information of the outer boundary may be extracted directly from the target position information of the target bone. The outer boundary refers to the outer contour of the target bone, which appears as a straight line or a curve in the slice image.
By setting boundary information, the target bone is automatically divided into a plurality of bone regions; the process is simple, the division is accurate, no manual participation is needed, and the degree of automation is high.
In another embodiment of the present invention, for further definition and explanation, as shown in fig. 3, the step of identifying target location information of the target bone from the bone location information comprises:
301. slice direction information of the target bone is acquired.
The slice direction information describes the slice direction of the knee joint magnetic resonance image; the slice direction is the scanning plane of the image and includes the coronal, sagittal, and transverse axis positions. In an embodiment, the slice direction information is preset, with corresponding slice direction information preset for different target bones. In another embodiment, to improve the accuracy of the target position information, the target bone is associated with different pieces of target position information, each associated with preset slice direction information. This makes it convenient, when identifying target position information, to proceed from the most suitable and fastest slice direction.
302. Slice images corresponding to the slice direction information are determined.
Note that different slice directions yield different slice images from the knee joint magnetic resonance image, as well as different total numbers of slice images. The slice image can therefore be determined from the slice direction described in the slice direction information. The information describing the slice direction may be characters, letters, numbers, symbols, or other marks capable of representing the slice direction.
303. The position of the target bone in the slice image is determined based on the bone position information.
The position refers to the range of the slice image covered by the target bone; in one embodiment, the position of the target bone is described by pixel point location information in the slice image.
304. And obtaining target position information according to the voxel parameters of the corresponding positions in the slice images.
The voxel parameter refers to the pixel point location information of the slice image; for example, (30,40) refers to the pixel in the 30th row from top to bottom and the 40th column from left to right. The target position information may be one pixel point location or a plurality of continuous or discontinuous pixel point locations.
By determining the target position information according to the pixel parameters in the specific slicing direction, the acquisition convenience, the acquisition efficiency and the acquisition accuracy of the target position information are improved.
In another embodiment of the present invention, for further definition and explanation, the step of obtaining target location information according to voxel parameters of corresponding locations in the slice image comprises:
401. and traversing the voxel parameters of the corresponding positions in the slice image line by line according to a preset traversing direction, and judging whether the voxel parameters matched with a preset parameter threshold exist in the corresponding positions in the slice image.
In one embodiment, the voxel parameters in the slice image lie between 0 and 1 and the parameter threshold is a value between 0 and 1, such as 0.5; voxel parameters greater than 0.5 are judged to match the parameter threshold, and voxel parameters less than or equal to 0.5 are judged not to match it.
If so, the row numbers, the column numbers, and the count of the voxel parameters matching the parameter threshold are recorded.
If the corresponding position comprises a plurality of voxel parameters, that is, a plurality of pixel point locations, and several of them match the parameter threshold, the number of matching parameters is recorded, together with the row and column numbers of each matching voxel parameter.
402. The target position information of the target bone is calculated from the recorded row numbers, column numbers, and parameter count.
The relevant voxel parameters are screened by judging whether each voxel parameter matches the parameter threshold, yielding the row and column numbers of the voxel parameters relevant to the target position information. The target position information is thus obtained by screening at the smallest unit of the slice image (the pixel point location), which improves its accuracy.
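Steps 401 and 402 might be sketched as follows, assuming the slice is a 2-D list of normalized voxel parameters and a 0.5 threshold as in the example above:

```python
def scan_slice(slice_img, threshold=0.5):
    """Step 401: traverse voxel parameters row by row, recording the row
    number, column number and count of every parameter above the threshold."""
    rows, cols = [], []
    for r, row in enumerate(slice_img):
        for c, v in enumerate(row):
            if v > threshold:
                rows.append(r)
                cols.append(c)
    return rows, cols, len(rows)

def right_limit(slice_img, threshold=0.5):
    """Step 402: derive a limit position (here, the rightmost match)
    from the recorded row numbers, column numbers and count."""
    rows, cols, count = scan_slice(slice_img, threshold)
    if count == 0:
        return None
    i = max(range(count), key=lambda k: cols[k])
    return rows[i], cols[i]
```

Other limit positions (left, upper) would differ only in which recorded coordinate is minimized or maximized in step 402.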
In another embodiment of the present invention, for further definition and explanation, the step of determining boundary information according to the target location information includes:
501. a boundary policy corresponding to the target bone is obtained.
The boundary policy is used for describing the relationship between different boundary positions and target position information. For ease of understanding, in one embodiment, the boundary policy is stored in association with the target bone, and the boundary policy may be obtained together when the target bone is retrieved. After the target position information is obtained, how to obtain the boundary position according to the target position information can be known according to the limitation or condition in the boundary strategy. For example, in an application scenario, the boundary policy describes that the boundary position is a column corresponding to the target position information (i.e., the number of pixel columns where the boundary position is located is the same as the number of pixel columns described by the target position information). In another application scenario, the boundary policy describes that the boundary position is a connecting line of two pixel points in the target position information.
502. And determining boundary information according to the boundary strategy and the target position information.
After the boundary policy is obtained, the boundary position can be determined based on the target position information according to the limitations or conditions in the boundary policy, and it should be noted that the boundary information includes information of the boundary position.
The boundary information used when segmenting the target bone is determined through a preset boundary strategy; once obtained, the boundary information can be used to segment the target bone. Since different knee joints contain substantially the same bones and bone structure (that is, the bone growth mechanism of the knee joint is the same), basic characteristics such as the relative positional relationship between two bones or the basic structure of a bone do not change easily, even as the bones change with time or disease. Therefore, the same boundary strategy is adopted when determining the same target bone in different knee joint magnetic resonance images, which helps improve the efficiency of boundary information determination and thereby the efficiency of bone segmentation.
In another embodiment of the present invention, for further definition and illustration, the step of identifying bone position information in the magnetic resonance image of the knee joint comprises:
and performing first segmentation on the knee joint magnetic resonance image by using the polarization self-attention network model to obtain bone position information of each bone.
The polarized self-attention network model (PSA-UNet++ network) combines a polarized self-attention mechanism with the UNet++ network: on the basis of the UNet++ network, attention is focused on key features through the polarized self-attention mechanism, redundant features are suppressed, and the bone segmentation effect of the network is improved. Specifically, the structure of the polarized self-attention network model is shown in fig. 4. The polarized self-attention network model differs from the UNet++ network in that a polarized self-attention mechanism, i.e., a polarized self-attention convolution block (PSA), is added to the L0-layer convolution blocks, as shown in fig. 5.
In another embodiment of the present invention, for further definition and explanation, after the step of performing the first segmentation on the knee joint magnetic resonance image by using the polarized self-attention network model, the method further comprises:
701. and calculating contour position information of each bone according to the bone position information of each bone from a preset slicing direction.
The bone position information is the position information of each bone in the knee joint magnetic resonance image; when slice images are obtained from a fixed slice direction, contour position information can be calculated from the bone position information. Specifically, the contour positions in the contour position information need only enclose the corresponding bone and need not coincide exactly with the bone's edge.
702. The knee joint magnetic resonance image is segmented into a plurality of region of interest images based on the contour position information.
Wherein the region-of-interest image of each bone comprises a plurality of slice images. As mentioned above, the slice images obtained from a given slice direction are related to the pixels of the knee joint magnetic resonance image, so a corresponding number of slice images is obtained when segmenting a region-of-interest image. For ease of understanding: when the slice direction is coronal and the knee joint magnetic resonance image includes 384 slice images of 384 × 160, the segmented femoral region-of-interest image likewise includes 384 slice images, which are superimposed to obtain the femoral region-of-interest image. The principle is the same for the other bones and is not repeated.
703. Each slice image in the region of interest image is second segmented using a polarized self-attention network model to update bone position information.
704. And mapping each region-of-interest image back to the knee joint magnetic resonance image according to the contour position information to obtain bone position information and obtain an updated knee joint magnetic resonance image.
When the first segmentation is performed using the polarized self-attention network model, relatively coarse bone position information is obtained. The first-segmented knee joint magnetic resonance image is then divided into a plurality of independent region-of-interest images, and a second segmentation is performed on each. Because the same polarized self-attention network model is used, network resources are saved and the computation and preparation workload are reduced. Performing the first and second segmentations improves the accuracy of the bone position information.
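Cropping a region of interest with rectangular contour positions and mapping it back (steps 702 and 704) can be sketched per slice as follows; the (r0, c0, r1, c1) box convention with exclusive upper bounds is an assumption:

```python
def crop_roi(slice_img, box):
    """Cut the rectangle described by contour position info out of a slice."""
    r0, c0, r1, c1 = box
    return [row[c0:c1] for row in slice_img[r0:r1]]

def paste_roi(slice_img, roi, box):
    """Map the (re-segmented) region of interest back to its original place."""
    r0, c0, _, _ = box
    for ri, row in enumerate(roi):
        for ci, v in enumerate(row):
            slice_img[r0 + ri][c0 + ci] = v
    return slice_img
```

Applying `crop_roi` before the second segmentation and `paste_roi` after it reproduces the divide-then-merge flow described above, one slice at a time.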
In another embodiment of the present invention, for further definition and explanation, prior to the step of identifying bone position information in the magnetic resonance image of the knee joint, the method further comprises:
and performing histogram equalization processing on the knee joint magnetic resonance image and/or performing normalization processing on the knee joint magnetic resonance image.
The histogram equalization processing is a method for adjusting contrast by using an image histogram in the field of image processing, and is used for enhancing local contrast without influencing overall contrast.
In one embodiment, histogram equalization is performed on the knee joint magnetic resonance image, and then normalization is performed on the processed knee joint magnetic resonance image.
In another embodiment, one of the histogram equalization processing and the normalization processing may be performed.
Before identifying the bone position information, the knee joint magnetic resonance image is preprocessed, such as histogram equalization processing, normalization processing and the like, so that the subsequent processing efficiency is improved, and the bone segmentation efficiency is improved.
In another embodiment of the present invention, for further definition and illustration, the step of normalizing the knee joint magnetic resonance image comprises:
901. the voxel original parameters in the knee joint magnetic resonance image are obtained.
The voxel original parameters are parameter values of each voxel in the magnetic resonance image of the knee joint obtained by the MRI technology.
902. And comparing the voxel original parameters to obtain a voxel maximum value and a voxel minimum value.
The voxel maximum refers to the largest value among all original voxel parameters in the knee joint magnetic resonance image; the voxel minimum refers to the smallest. The comparison may be performed pairwise, or the current maximum or minimum may be retained and compared against the next value, keeping the larger or smaller one, until all original voxel parameters have been traversed.
903. And calculating to obtain the voxel conversion parameter corresponding to the voxel original parameter by using the voxel original parameter, the voxel maximum value and the voxel minimum value.
In one embodiment, the formula is as follows:
y = (x - min) / (max - min);
wherein y is a voxel conversion parameter; x is a voxel original parameter; min is the voxel minimum; max is the voxel maximum.
All original voxel parameters in the knee joint magnetic resonance image are converted so that the voxel parameters lie between 0 and 1. It should be noted that in other embodiments the normalization processing is not performed on the knee joint magnetic resonance image, in which case the voxel parameter refers to the original voxel parameter; when normalization processing is performed on the knee joint magnetic resonance image, the voxel parameter refers to the converted voxel parameter.
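The min-max conversion follows directly from the formula above; a minimal sketch over a flat list of voxel values:

```python
def normalize(voxels):
    """Convert original voxel parameters to [0, 1]
    via y = (x - min) / (max - min)."""
    lo, hi = min(voxels), max(voxels)
    return [(x - lo) / (hi - lo) for x in voxels]
```

Note that a real implementation would need to guard against a constant image (max equal to min), which this sketch omits.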
In another embodiment of the present invention, for further definition and explanation, before the step of dividing the bone in the magnetic resonance image of the knee joint into a plurality of bone regions according to the bone position information, the method further comprises:
1001. and acquiring knee joint medial direction information and/or knee joint lateral direction information of the knee joint magnetic resonance image.
Wherein, the knee joint inner side direction information refers to the information of the inner side which is judged by taking the knee joint as the object; the knee joint lateral direction information is information on the lateral side determined for the knee joint. When the information of the inner side direction of the knee joint is obtained, the information of the outer side direction of the knee joint can be deduced; similarly, when the lateral direction information of the knee joint is obtained, the medial direction information of the knee joint can be deduced.
In one embodiment, the knee joint magnetic resonance image has 384 slice images in the coronal position; the slice images that include the fibula correspond to the lateral side of the knee joint, and the labels of those slice images constitute the knee joint lateral direction information. For example, the 384 slice images are numbered from 1 to 384; if the fibula appears in slice images 100 to 384, the knee joint lateral direction information is 100 to 384 and the knee joint medial direction information is 1 to 99.
The knee joint medial direction information and/or the knee joint lateral direction information can be directly extracted and obtained from a database or a preset path.
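Deriving both pieces of direction information from the fibula-containing slice numbers might look like the following sketch (representing each as a list of image numbers is an assumption):

```python
def direction_info(total_slices, fibula_slices):
    """Slices containing the fibula give the knee joint lateral direction
    information; the remaining slice numbers give the medial direction
    information. Image numbers run from 1 to total_slices."""
    lateral = sorted(set(fibula_slices))
    medial = [n for n in range(1, total_slices + 1) if n not in set(lateral)]
    return medial, lateral
```

This also illustrates the deduction mentioned above: knowing either side's slice numbers determines the other side's by complement.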
The step of dividing the bone in the knee joint magnetic resonance image into a plurality of bone regions according to the bone position information comprises:
identifying target position information of the target bone by using the knee joint medial direction information and/or the knee joint lateral direction information and the bone position information;
determining boundary information according to the target position information;
the target bone is divided based on the boundary information to obtain a bone region.
After the information of the inner side direction of the knee joint and/or the information of the outer side direction of the knee joint is obtained, more accurate target position information can be identified when the target position information is identified. Such as the lateral tibia (referring to the tibia in a direction proximal to the lateral side of the knee joint), the medial femur, and the like. The bone segmentation accuracy is improved.
In another embodiment of the present invention, for further definition and illustration, before the step of dividing the bone in the magnetic resonance image of the knee joint into a plurality of bone regions according to the bone position information, the method further comprises:
1101. and detecting the characteristic bones in the magnetic resonance image of the knee joint to generate characteristic bone position information.
The characteristic bone refers to a bone from which the medial side and/or lateral side of the knee joint can be discriminated in the knee joint magnetic resonance image. In one embodiment, the characteristic bone is the fibula, and a single-stage target detection algorithm (YOLOv5) is adopted to detect it. The generated characteristic bone position information refers to the position information of the fibula in the knee joint magnetic resonance image.
1102. And determining the knee joint medial direction information and/or the knee joint lateral direction information according to the characteristic bone position information.
The knee joint medial direction information and/or the knee joint lateral direction information are/is automatically determined according to the characteristic bones, so that the manual workload is reduced, the manual participation is reduced, and the automation degree, the accuracy and the efficiency of bone segmentation are improved.
The step of dividing the bone in the knee joint magnetic resonance image into a plurality of bone regions according to the bone position information comprises the following steps:
identifying target position information of the target bone by using the knee joint medial direction information and/or the knee joint lateral direction information and the bone position information;
determining boundary information according to the target position information;
the target bone is divided based on the boundary information to obtain a bone region.
After the information of the inner side direction of the knee joint and/or the information of the outer side direction of the knee joint are obtained, more accurate information of the target position can be identified when the information of the target position is identified. Such as the lateral tibia (referring to the tibia in a direction proximal to the lateral side of the knee joint), the medial femur, and the like. The bone segmentation accuracy is improved.
In another embodiment of the present invention, for further definition and illustration, the step of detecting a characteristic bone in the magnetic resonance image of the knee joint, the generating the characteristic bone position information comprises:
1201. and detecting whether the characteristic bones exist in each slice image from the corresponding slice direction according to preset slice detection direction information.
1202. And generating the position information of the characteristic bones according to the image numbers corresponding to the slice images with the characteristic bones.
Each slice image corresponds to a unique image number, and the slice images are checked for the characteristic bone one by one, in their arrangement order, from front to back or back to front. Specifically, a YOLOv5 detection network model is used to detect the slice images, after which the characteristic bone position information is generated from the image numbers. In one embodiment, the characteristic bone position information consists of the corresponding image numbers; for example, when the characteristic bone is detected in slice images numbered 100 to 150, the generated characteristic bone position information includes the image numbers 100 to 150.
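A sketch of steps 1201 and 1202, with the YOLOv5 model abstracted as any callable that returns True when the characteristic bone is present in a slice (the callable interface is an assumption):

```python
def feature_bone_positions(slice_images, detector):
    """Detect the characteristic bone slice by slice (step 1201) and return
    the image numbers of the slices in which it was found (step 1202).
    Image numbers start at 1, matching the numbering in the text."""
    return [i + 1 for i, img in enumerate(slice_images) if detector(img)]
```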
By judging whether the characteristic bone exists in each slice image, the characteristic bone is detected and the characteristic bone position information is generated. The process is simple and fast, which improves the detection efficiency of the characteristic bone position information, so that the knee joint lateral direction information and/or the knee joint medial direction information are obtained with few resources and the bone segmentation precision is improved.
For the convenience of understanding, the embodiment of the present application discloses a bone subregion segmentation implementation process in a knee joint image, which segments a knee joint magnetic resonance image into 29 bone regions, and the implementation process is based on the above bone subregion segmentation method in the knee joint image.
S1, acquiring a knee joint magnetic resonance image.
The knee joint magnetic resonance image is a knee joint MRI image of a case.
And S2, carrying out histogram equalization processing on the knee joint magnetic resonance image.
And S3, performing normalization processing on the knee joint magnetic resonance image after the histogram equalization processing so as to convert the voxel parameters in the knee joint magnetic resonance image to be between 0 and 1.
And S4, processing the normalized knee joint magnetic resonance image by using the polarization self-attention network model so as to roughly divide the knee joint magnetic resonance image.
And after coarse segmentation, obtaining bone position information of the femur, the femoral cartilage, the tibia, the tibial cartilage, the patella and the patellar cartilage in the knee joint magnetic resonance image. The bone position information comprises a plurality of pieces of information used for representing voxel point positions corresponding to the bone positions. The bone location information for each bone includes a plurality of voxel point locations.
In the course of rough segmentation, 160 slice images corresponding to the sagittal position are input into the polarization self-attention network model one by one, and the 160 slice images obtained by output are superimposed to form a roughly segmented knee joint magnetic resonance image.
And S5, cutting the roughly segmented knee joint magnetic resonance image, from the slice direction of the sagittal position, into 6 relatively independent region-of-interest images.
The 6 region-of-interest images are those of the femur, femoral cartilage, tibia, tibial cartilage, patella and patellar cartilage, respectively. During cutting, 4 pieces of contour position information are set for each region-of-interest image, each corresponding to one voxel point location. The 4 pieces of contour position information form a rectangular area within which the corresponding bone lies.
And S6, from the slice direction of the sagittal position, the slice images of each region-of-interest image are input one by one into the polarized self-attention network model for fine segmentation.
And S7, splicing the images of the interested areas back to the magnetic resonance image of the knee joint according to the contour position information.
The bone position information of the bone is optimized through the fine segmentation, the accuracy of the bone position information is improved, and the similarity between the area covered by the voxel point position representing the bone in the slice image and the actual bone size is higher.
And S8, carrying out fibula detection on the knee joint magnetic resonance image by using the detection network model, and generating knee joint inner side direction information and knee joint outer side direction information.
The detection network is a YOLOv5 network, and can automatically detect the position of the fibula in the magnetic resonance image of the knee joint, so that the information of the inner side direction of the knee joint and the information of the outer side direction of the knee joint are obtained. When the fibula is detected, slice images corresponding to the sagittal position are input into the detection network model one by one. After the detection is finished, the image number corresponding to the slice image containing the fibula can be obtained, so that the knee joint inner side direction information containing the image number representing the knee joint inner side and the knee joint outer side direction information containing the image number representing the knee joint outer side can be obtained.
And S9, identifying target position information from the bone position information of the tibial cartilage.
Specifically, the slice direction specified in the slice direction information is the coronal position. The right limit position of the left tibial cartilage and the left limit position of the right tibial cartilage are identified from the slice images corresponding to the coronal position. Here left and right are relative to the slice image: when the slice image is placed upright and viewed from the front, the left-hand side is the left side and the right-hand side is the right side. Both the left and right tibial cartilage are target bones. The voxel point location corresponding to the right limit position and the voxel point location corresponding to the left limit position are both target position information. When identifying limit positions, the limit positions of all slice images corresponding to the coronal, sagittal, or transverse axis position are found, and the most extreme is selected as the final limit position. For example, when the left limit position is identified, the left limit positions in all slice images corresponding to the coronal position are identified one by one and then compared, and the leftmost position is determined as the left limit position. The other limit positions are identified in the same way and are not described again.
When the right limit position is identified, the voxel parameters of the voxel points corresponding to the left tibial cartilage are traversed from right to left to find the rightmost voxel point, which is recorded by its row number and column number. In this embodiment, a voxel point whose voxel parameter is greater than 0.5 belongs to the tibial cartilage. The left limit position is identified in the same way and is not described again.
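A minimal sketch of this search, assuming per-slice probability maps produced by the segmentation network (the function name and the array layout are illustrative assumptions):

```python
import numpy as np

def rightmost_voxel(prob_slices, thresh=0.5):
    """Find the rightmost voxel of one bone across a stack of slices.

    prob_slices: list of 2-D arrays (rows x cols) of per-voxel probabilities
    for one bone (e.g. the left tibial cartilage), one array per coronal
    slice. Returns (slice_index, row, col) of the voxel with the largest
    column number whose probability exceeds `thresh`, or None if no voxel
    passes the threshold.
    """
    best = None
    for s, sl in enumerate(prob_slices):
        rows, cols = np.nonzero(sl > thresh)
        if cols.size == 0:
            continue
        k = np.argmax(cols)  # rightmost above-threshold voxel in this slice
        if best is None or cols[k] > best[2]:
            best = (s, int(rows[k]), int(cols[k]))
    return best
```

The leftmost voxel is found symmetrically with `np.argmin(cols)` and a `<` comparison.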
S10, acquiring a first boundary strategy and a second boundary strategy.
The first boundary strategy is to use the column number corresponding to the right limit position of the left tibial cartilage as the first boundary of the tibia; the first boundary information includes this column number. The second boundary strategy is to use the column number corresponding to the left limit position of the right tibial cartilage as the second boundary of the tibia; the second boundary information includes this column number.
And S11, judging, according to the knee joint medial direction information and the knee joint lateral direction information, whether the side of the first boundary information away from the second boundary information is the medial side of the tibia.
If so, the tibia is divided into three regions: the part to the left of the boundary position corresponding to the first boundary information is the medial tibia, the part to the right of the boundary position corresponding to the second boundary information is the lateral tibia, and the part between the first boundary information and the second boundary information is the cartilage-free coverage area of the tibia. Meanwhile, the tibial cartilage is divided into two regions: the part to the left of the boundary position corresponding to the first boundary information is the medial tibial cartilage, and the part to the right of the boundary position corresponding to the second boundary information is the lateral tibial cartilage.
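This column-based partition could be sketched as below; the function name, the label strings, and the assumption that the medial side lies to the left of the first boundary (as in the example above) are illustrative:

```python
def label_tibia_regions(n_cols, first_boundary_col, second_boundary_col):
    """Assign each image column to one of the three tibial sub-regions
    using the two boundary column numbers from the first and second
    boundary strategies. Assumes the medial side is to the left."""
    labels = []
    for c in range(n_cols):
        if c < first_boundary_col:
            labels.append("medial")          # left of first boundary
        elif c > second_boundary_col:
            labels.append("lateral")         # right of second boundary
        else:
            labels.append("cartilage-free")  # between the two boundaries
    return labels
```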
When the height of the tibia is determined, the slice direction is the coronal position and the target bone is the tibial cartilage; the target positions are the upper limit position and the lower limit position of the tibial cartilage, and the voxel points at the corresponding positions are the target position information. The target positions are found in the same way as the right limit position of the tibial cartilage and are not described again. The third boundary strategy of the tibia is to take the midpoint of the upper limit position and the lower limit position of the tibial cartilage and extend it downward by 2 cm; the corresponding row number is the third boundary information.
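The third boundary row might be computed as follows, assuming row numbers increase downward in the image and the voxel row spacing is known in millimetres (the function name and parameter names are illustrative):

```python
def third_boundary_row(upper_row, lower_row, row_spacing_mm):
    """Midpoint of the tibial-cartilage upper/lower limit rows, extended
    downward by 2 cm (20 mm converted to rows via the row spacing)."""
    midpoint = (upper_row + lower_row) / 2.0
    return int(round(midpoint + 20.0 / row_spacing_mm))
```

In practice, the row spacing would be read from the image header (e.g. the voxel spacing of the MRI volume).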
And S12, identifying target position information from the bone position information of the tibia.
Specifically, the slice direction specified in the slice direction information is the sagittal position. The left and right limit positions of the medial tibia and the left and right limit positions of the lateral tibia are identified from the slice images corresponding to the sagittal position. The tibia is the target bone. The left and right limit positions are the target positions, and the corresponding voxel points are the target position information. The target positions are found in the same way as the right limit position of the tibial cartilage and are not described again.
And S13, acquiring a fourth boundary strategy and a fifth boundary strategy.
The fourth boundary strategy is to trisect the region between the left and right limit positions of the medial tibia, with the column numbers corresponding to the two trisection boundaries determined as the fourth boundary information; the fifth boundary strategy is to trisect the region between the left and right limit positions of the lateral tibia, with the column numbers corresponding to the two trisection boundaries determined as the fifth boundary information.
On the basis of the division into medial and lateral sides, the medial and lateral tibia are each divided into anterior, middle, and posterior regions, as are the medial and lateral tibial cartilage; that is, 12 bone regions in total, which together with the cartilage-free coverage area of the tibia gives 13 bone regions.
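The trisection in the fourth and fifth boundary strategies can be sketched as below (the function name and the rounding choice are illustrative assumptions):

```python
def trisect_columns(left_col, right_col):
    """Return the two boundary column numbers that split the span
    [left_col, right_col] into three roughly equal parts
    (anterior / middle / posterior)."""
    span = right_col - left_col
    b1 = left_col + round(span / 3)
    b2 = left_col + round(2 * span / 3)
    return b1, b2
```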
And S14, traversing the coronal slice images and detecting the slice image in which the femoral concavity first appears.
The femoral concavity divides the femur into left and right pieces in the slice. "First appears" means, for example: traversing from the 1st slice image, if the femoral concavity first appears in the 60th slice and slices 60 to 65 all contain it, then the 60th slice image is the slice image in which the femoral concavity first appears.
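One plausible way to detect the first such slice is to look for the first slice whose femur mask splits into two pieces along some row. This is a simplified sketch under that assumption; the function names and the row-run criterion are illustrative, not the patent's stated implementation:

```python
def row_splits(row):
    """Count maximal runs of truthy values in a 1-D sequence."""
    runs, prev = 0, False
    for v in row:
        if v and not prev:
            runs += 1
        prev = bool(v)
    return runs

def first_concavity_slice(femur_masks):
    """femur_masks: list of 2-D 0/1 femur masks, one per coronal slice,
    in traversal order. Returns the index of the first slice in which
    some row of the mask splits into two runs (left and right pieces),
    or None if no slice does."""
    for i, m in enumerate(femur_masks):
        if any(row_splits(r) == 2 for r in m):
            return i
    return None
```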
And S15, identifying target position information of the femoral cartilage in the slice image in which the femoral concavity appears for the first time.
Specifically, the slice direction specified in the slice direction information is a coronal position. The target positions are the right extreme position of the left femoral cartilage and the left extreme position of the right femoral cartilage. The voxel point position corresponding to the right limit position and the voxel point position corresponding to the left limit position are target position information.
And S16, acquiring a corresponding first femoral boundary strategy.
The first femoral boundary strategy is to use target position information corresponding to a target position close to the outer side of the knee joint as first femoral boundary information. That is, when the right limit position is closer to the outer side of the knee joint than the left limit position, the voxel point position corresponding to the right limit position is used as the first femoral boundary information. The femur and the femoral cartilage are classified into medial femur, lateral femur, medial femoral cartilage, and lateral femoral cartilage according to the first femoral boundary information.
And S17, acquiring an image number of a sagittal section image corresponding to the first femoral boundary information.
And S18, identifying target position information from the bone position information of the femoral cartilage at the sagittal position.
Specifically, the femoral cartilage is divided into an anterior block, a middle block, and a posterior block according to the intersection line of the femoral cartilage with the left limit position of the tibia and the intersection line of the femoral cartilage with the right limit position of the tibia. The target positions are the upper limit position of the anterior femoral cartilage block and the upper limit position of the posterior (or middle) femoral cartilage block. The voxel position corresponding to the upper limit position of the anterior femoral cartilage block is the first femoral target position information; the voxel position corresponding to the upper limit position of the posterior (or middle) femoral cartilage block is the second femoral target position information.
And S19, acquiring a second femoral boundary strategy.
The second femoral boundary strategy takes the boundary position as the midpoint of the first femoral target position information and the second femoral target position information. That is, the second femoral boundary information is the voxel position of the midpoint between the voxel position in the first femoral target position information and the voxel position in the second femoral target position information.
And S20, identifying target position information from the bone position information of the tibia in the sagittal position.
Specifically, the target positions are a tibia left limit position and a tibia right limit position. The target position information includes the number of columns corresponding to the tibia left side limit position and the number of columns corresponding to the tibia right side limit position.
And S21, acquiring a third femur boundary strategy.
The third femoral boundary strategy defines the boundary positions as the intersection point of the femoral cartilage with the left limit position of the tibia and the intersection point of the femoral cartilage with the right limit position of the tibia, and further defines the lines connecting each intersection point with the second femoral boundary information as the third boundary information. Connecting the two intersection points with the midpoint yields two boundary lines, which divide the femur into three bone regions and the femoral cartilage into three bone regions: the anterior, middle, and posterior femoral regions and the anterior, middle, and posterior femoral cartilage regions. Combined with the medial and lateral sides of the femur and femoral cartilage, the femur and femoral cartilage are divided into 12 bone regions in total.
And S22, identifying target position information from the bone position information of the patella at the transverse axis position.
Specifically, the target position is the lowest point of the patellar cartilage in the slice image with the largest patellar area. The target position information is the voxel point position corresponding to the lowest point.
And S23, acquiring a patella boundary strategy.
The patellar boundary strategy takes the sagittal slice image corresponding to the lowest point as the starting boundary position and shifts it by 8 slice positions toward the lateral side of the knee joint.
First, the slice image with the largest patellar area at the transverse axis position is obtained. The lowest point of the patella or patellar cartilage in that slice image is then identified. The image number of the sagittal slice image corresponding to the lowest point is acquired and shifted by 8 slices toward the lateral side of the knee joint, yielding the corresponding column number in the transverse-axis slice images; this column number is the patellar boundary information. According to the patellar boundary information, the patella and the patellar cartilage are divided into 4 bone regions: the medial patella, lateral patella, medial patellar cartilage, and lateral patellar cartilage.
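The 8-slice lateral shift could be expressed as follows, assuming the sagittal image number maps directly to a column index in the transverse-axis slices and that the lateral direction is known from the knee joint direction information (the function name and parameters are illustrative):

```python
def patellar_boundary_col(lowest_point_sagittal_no, lateral_is_higher, shift=8):
    """Shift the sagittal image number of the patellar lowest point by
    `shift` slices toward the lateral side of the knee; the result is
    the boundary column number used to split the patella and patellar
    cartilage into medial and lateral parts."""
    if lateral_is_higher:
        return lowest_point_sagittal_no + shift
    return lowest_point_sagittal_no - shift
```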
Through the above steps, the knee joint magnetic resonance image is divided into 29 bone regions in total. The boundary information of each bone is determined by automatically identifying the bone position information and combining it with a preset boundary strategy, and the target bone is then segmented based on the boundary information. The method requires no manual participation, improves the efficiency of bone segmentation, and reduces the segmentation error rate, thereby improving the accuracy of bone segmentation.
The embodiment of the application provides a bone subregion segmentation system in a knee joint image, as shown in fig. 6, which includes an obtaining module 1 for obtaining a knee joint magnetic resonance image;
the identification module 2 is used for identifying bone position information in the knee joint magnetic resonance image;
and the segmentation module 3 is used for dividing the bone in the knee joint magnetic resonance image into a plurality of bone areas according to the bone position information.
Preferably, the segmentation module 3 comprises a target location unit for identifying target location information of a target bone from the bone location information;
a boundary information unit for determining boundary information according to the target position information;
a segmentation unit, configured to segment the target bone based on the boundary information to obtain the bone region.
Preferably, the target position unit includes a slice subunit for acquiring slice direction information of the target bone; the slice direction information is used for describing the slice direction of the knee joint magnetic resonance image;
a slice image subunit configured to determine a slice image corresponding to the slice direction information;
a target position subunit configured to determine a position of the target bone in the slice image based on the bone position information;
and the voxel parameter subunit is used for obtaining the target position information according to the voxel parameter corresponding to the position in the slice image.
Preferably, the voxel parameter subunit includes a traversal subunit, configured to traverse, line by line, the voxel parameters at the corresponding positions in the slice image according to a preset traversal direction, and determine whether the voxel parameter matching a preset parameter threshold exists in the corresponding position in the slice image;
if so, recording the number of rows, the number of columns and the number of parameters of the voxel parameters matched with the parameter threshold;
and the calculating subunit is used for calculating the target position information of the target bone according to the number of the rows, the number of the columns and the number of the parameters.
Preferably, the boundary information unit includes a strategy subunit for acquiring a boundary strategy corresponding to the target bone; the boundary strategy is used for describing the relation between different boundary positions and the target position information;
and the boundary subunit is used for determining the boundary information according to the boundary strategy and the target position information.
Preferably, the identification module 2 includes a first segmentation module, configured to perform a first segmentation on the knee joint magnetic resonance image by using a polarized self-attention network model, so as to obtain the bone position information of each bone.
Preferably, the system further comprises a contour module, configured to calculate contour position information of each bone according to the bone position information of each bone from a preset slice direction after the first segmentation of the knee joint magnetic resonance image by using the polarized self-attention network model;
a cropping module for segmenting the knee joint magnetic resonance image into a plurality of region of interest images based on the contour position information; the region of interest image of each of the bones comprises a plurality of slice images;
a second segmentation module for performing a second segmentation on each of the slice images in the region of interest image using the polarized self-attention network model to update the bone location information;
and the mapping module is used for mapping each interested region image back to the knee joint magnetic resonance image according to the contour position information to obtain the bone position information and obtain the updated knee joint magnetic resonance image.
Preferably, the system further comprises a preprocessing module for performing histogram equalization processing on the knee joint magnetic resonance image before the identifying of bone position information in the knee joint magnetic resonance image; and/or
and carrying out normalization processing on the knee joint magnetic resonance image.
Preferably, the preprocessing module comprises a voxel original parameter unit for acquiring voxel original parameters in the knee joint magnetic resonance image;
the comparison unit is used for comparing the voxel original parameters to obtain a voxel maximum value and a voxel minimum value;
and the conversion unit is used for calculating the voxel conversion parameter corresponding to the voxel original parameter by using the voxel original parameter, the voxel maximum value and the voxel minimum value.
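The conversion described by the comparison and conversion units reads as min-max normalization. A minimal sketch under that assumption (the function name is illustrative; the exact conversion formula is one plausible reading of the text):

```python
import numpy as np

def normalize_voxels(volume):
    """Min-max normalization of the raw voxel parameters: each converted
    voxel value is (v - vmin) / (vmax - vmin), mapping the volume into
    the range [0, 1]."""
    vmin, vmax = volume.min(), volume.max()
    return (volume - vmin) / (vmax - vmin)
```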
Preferably, the system further comprises a first orientation module for acquiring knee joint medial orientation information and/or knee joint lateral orientation information of the knee joint magnetic resonance image before the bone in the knee joint magnetic resonance image is divided into a plurality of bone regions according to the bone position information;
the segmentation module 3 comprises a first segmentation unit for identifying target position information of a target bone by using the knee joint medial direction information and/or the knee joint lateral direction information, and the bone position information;
the second dividing unit is used for determining boundary information according to the target position information;
a third dividing unit, configured to divide the target bone based on the boundary information to obtain the bone region.
Preferably, the system further comprises a second orientation module, configured to detect a characteristic bone in the knee joint magnetic resonance image before the bone in the knee joint magnetic resonance image is divided into a plurality of bone regions according to the bone position information, and generate characteristic bone position information;
the characteristic bone module is used for determining the inner direction information of the knee joint and/or the outer direction information of the knee joint according to the characteristic bone position information;
the segmentation module 3 comprises a fourth dividing unit for identifying target position information of a target bone by using the knee joint medial direction information and/or the knee joint lateral direction information, and the bone position information;
a fifth dividing unit, configured to determine boundary information according to the target location information;
a sixth dividing unit, configured to divide the target bone based on the boundary information to obtain the bone region.
Preferably, the second direction module includes a determining unit, configured to detect whether the characteristic bone exists in each slice image from a corresponding slice direction according to preset slice detection direction information;
and the generating unit is used for generating the position information of the characteristic bones according to the image numbers corresponding to the slice images with the characteristic bones.
Here, it should be noted that the above description of the embodiments of the bone subregion segmentation system in a knee joint image is similar to the description of the method embodiments above and provides the same beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the bone subregion segmentation system in a knee joint image of the present invention, reference may be made to the description of the method embodiments of the present invention.
It should be noted that, in the embodiment of the present invention, if the method is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention or portions thereof contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the invention are not limited to any specific combination of hardware and software.
Correspondingly, the embodiment of the application also discloses a storage medium which stores a computer program capable of being loaded by a processor and executing the method.
The embodiment of the present application further discloses a device for segmenting bone subregions in a knee joint image, as shown in fig. 7, which includes a processor 100, at least one communication bus 200, a user interface 300, at least one external communication interface 400 and a memory 500. Wherein the communication bus 200 is configured to enable connective communication between these components. Wherein the user interface 300 may comprise a display screen and the external communication interface 400 may comprise a standard wired interface and a wireless interface. The memory 500 stores therein a bone subregion segmentation method in the knee joint image. Wherein the processor 100 is configured to employ the above method in performing the bone subregion segmentation method in the knee joint image stored in the memory 500.
The above description applied to the embodiments of the bone subregion segmenting device and the storage medium in the knee joint image is similar to the description of the above method embodiments, and has similar beneficial effects to the method embodiments. For technical details not disclosed in the embodiments of the device for segmenting bone subregions in knee joint images and the storage medium of the invention, reference should be made to the description of the embodiments of the method of the invention for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present invention, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention. The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a device to perform all or part of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above disclosure is only for the purpose of illustrating preferred embodiments of the present invention and is not intended to limit the scope of the claims of the present invention.

Claims (12)

1. A method for segmenting bone subregions in a knee joint image is characterized by comprising the following steps:
acquiring a knee joint magnetic resonance image;
identifying bone position information in the knee joint magnetic resonance image;
dividing bones in the knee joint magnetic resonance image into a plurality of bone areas according to the bone position information;
the step of dividing the bone in the knee joint magnetic resonance image into a plurality of bone regions according to the bone position information comprises:
identifying target position information of a target bone from the bone position information;
determining boundary information according to the target position information;
dividing the target bone based on the boundary information to obtain the bone region;
the step of identifying target position information of a target bone from the bone position information includes:
acquiring slice direction information of the target bone; the slice direction information is used for describing the slice direction of the knee joint magnetic resonance image;
determining a slice image corresponding to the slice direction information;
determining a location of the target bone in the slice image based on the bone location information;
obtaining the target position information according to the voxel parameter corresponding to the position in the slice image;
the step of determining boundary information according to the target location information includes:
acquiring a boundary strategy corresponding to the target bone; the boundary strategy is used for describing the relation between different boundary positions and the target position information;
and determining the boundary information according to the boundary strategy and the target position information.
2. The method of bone subregion segmentation in knee joint image according to claim 1, wherein the step of obtaining the target position information based on the voxel parameter corresponding to the position in the slice image comprises:
according to a preset traversal direction, traversing voxel parameters of corresponding positions in the slice image line by line, and judging whether the voxel parameters matched with a preset parameter threshold exist in the corresponding positions in the slice image;
if yes, recording the number of lines, the number of columns and the number of parameters of the voxel parameters matched with the parameter threshold;
and calculating the target position information of the target bone according to the row number, the column number and the parameter number.
3. The method of bone subregion segmentation in knee joint image according to claim 1, characterized in that the step of identifying bone position information in the knee joint magnetic resonance image comprises:
and carrying out first segmentation on the knee joint magnetic resonance image by utilizing a polarization self-attention network model to obtain the bone position information of each bone.
4. The method for bone subregion segmentation in knee joint image according to claim 3, characterized in that after said first segmentation of said knee joint magnetic resonance image by using polarized self-attention network model, said method further comprises:
calculating contour position information of each bone according to the bone position information of each bone from a preset slicing direction;
segmenting the knee joint magnetic resonance image into a plurality of region of interest images based on the contour position information; the region of interest image of each of the bones comprises a plurality of slice images;
performing a second segmentation on each of the slice images in the region of interest image using the polarized self-attention network model to update the bone location information;
and mapping each region-of-interest image back to the knee joint magnetic resonance image according to the contour position information to obtain the bone position information and obtain the updated knee joint magnetic resonance image.
5. The method of bone subregion segmentation in knee joint image of any one of claims 1 to 4, characterized in that before said identifying bone position information in said knee joint magnetic resonance image, said method further comprises:
performing histogram equalization processing on the knee joint magnetic resonance image; and/or
and carrying out normalization processing on the knee joint magnetic resonance image.
6. The method for bone subregion segmentation in knee joint image according to claim 5, wherein the step of normalizing the knee joint magnetic resonance image comprises:
acquiring original voxel parameters in the knee joint magnetic resonance image;
comparing the voxel original parameters to obtain a voxel maximum value and a voxel minimum value;
and calculating to obtain a voxel conversion parameter corresponding to the voxel original parameter by using the voxel original parameter, the voxel maximum value and the voxel minimum value.
7. The method for bone subregion segmentation in a knee joint image according to claim 1, wherein before the dividing of the bone in the knee joint magnetic resonance image into a plurality of bone regions according to the bone position information, the method further comprises:
acquiring knee joint medial direction information and/or knee joint lateral direction information of the knee joint magnetic resonance image;
and the step of dividing the bone in the knee joint magnetic resonance image into a plurality of bone regions according to the bone position information comprises:
identifying target position information of a target bone using the knee joint medial direction information and/or the knee joint lateral direction information, together with the bone position information;
determining boundary information according to the target position information;
and dividing the target bone based on the boundary information to obtain the bone regions.
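One possible reading of the divide-by-boundary step, sketched for a single bone mask. The boundary strategy used here (a vertical plane through the mean x-coordinate of the bone's voxels) is an illustrative assumption; the patent leaves the concrete strategy to the boundary information described in the claims.

```python
import numpy as np

def divide_bone(mask, medial_is_left=True):
    """Split one bone mask into two regions (e.g. medial/lateral) at a
    boundary derived from the bone's target position."""
    xs = np.argwhere(mask)[:, -1]          # x-coordinates of bone voxels
    boundary_x = int(xs.mean())            # target position -> boundary
    medial = mask.copy()
    medial[..., boundary_x:] = 0           # keep voxels before the boundary
    lateral = mask.copy()
    lateral[..., :boundary_x] = 0          # keep voxels at/after the boundary
    return (medial, lateral) if medial_is_left else (lateral, medial)
```

The medial/lateral direction information from the claim would decide which side of the boundary is labeled medial, which is what the `medial_is_left` flag stands in for.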
8. The method for bone subregion segmentation in a knee joint image according to claim 1, wherein before the dividing of the bone in the knee joint magnetic resonance image into a plurality of bone regions according to the bone position information, the method further comprises:
detecting a characteristic bone in the knee joint magnetic resonance image to generate characteristic bone position information;
determining knee joint medial direction information and/or knee joint lateral direction information according to the characteristic bone position information;
and the step of dividing the bone in the knee joint magnetic resonance image into a plurality of bone regions according to the bone position information comprises:
identifying target position information of a target bone using the knee joint medial direction information and/or the knee joint lateral direction information, together with the bone position information;
determining boundary information according to the target position information;
and dividing the target bone based on the boundary information to obtain the bone regions.
9. The method for bone subregion segmentation in a knee joint image according to claim 8, wherein the step of detecting a characteristic bone in the knee joint magnetic resonance image to generate the characteristic bone position information comprises:
detecting, according to preset slice detection direction information, whether the characteristic bone exists in each slice image along the corresponding slice direction;
and generating the characteristic bone position information according to the image numbers of the slice images in which the characteristic bone is present.
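The slice-scan described in claim 9 can be sketched as below. `has_feature` is a hypothetical per-slice detector standing in for the actual detection model, and the returned dictionary is one illustrative encoding of "position information from image numbers"; the claim does not fix either.

```python
def locate_characteristic_bone(slice_masks, has_feature):
    """Walk the slices in the preset detection direction and collect the
    image numbers where the characteristic bone is present."""
    hits = [i for i, s in enumerate(slice_masks) if has_feature(s)]
    if not hits:
        return None                        # characteristic bone absent
    return {"first": hits[0], "last": hits[-1], "count": len(hits)}
```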
10. A system for bone subregion segmentation in a knee joint image, comprising: an acquisition module for acquiring a knee joint magnetic resonance image;
an identification module for identifying bone position information in the knee joint magnetic resonance image;
a segmentation module, configured to divide a bone in the knee joint magnetic resonance image into a plurality of bone regions according to the bone position information;
the segmentation module comprises a target position unit for identifying target position information of a target bone from the bone position information;
a boundary information unit for determining boundary information according to the target position information;
a segmentation unit, configured to segment the target bone based on the boundary information to obtain the bone region;
the target position unit comprises a slice subunit for acquiring slice direction information of the target bone, the slice direction information describing the slice direction of the knee joint magnetic resonance image;
a slice image subunit configured to determine a slice image corresponding to the slice direction information;
a target position subunit for determining a position of the target bone in the slice image based on the bone position information;
and a voxel parameter subunit for obtaining the target position information according to the voxel parameters corresponding to the position in the slice image;
the boundary information unit comprises a strategy subunit for acquiring a boundary strategy corresponding to the target bone, the boundary strategy describing the relation between different boundary positions and the target position information;
and a boundary subunit for determining the boundary information according to the boundary strategy and the target position information.
11. A device for bone subregion segmentation in knee joint images, comprising a memory and a processor, wherein the memory stores a program for bone subregion segmentation in knee joint images, and the processor is configured to perform the method according to any one of claims 1 to 9 when executing the program.
12. A storage medium storing a computer program that can be loaded by a processor to execute the method according to any one of claims 1 to 9.
CN202210948517.8A 2022-08-09 2022-08-09 Method, system, device and storage medium for bone subregion segmentation in knee joint image Active CN115035136B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210948517.8A CN115035136B (en) 2022-08-09 2022-08-09 Method, system, device and storage medium for bone subregion segmentation in knee joint image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210948517.8A CN115035136B (en) 2022-08-09 2022-08-09 Method, system, device and storage medium for bone subregion segmentation in knee joint image

Publications (2)

Publication Number Publication Date
CN115035136A CN115035136A (en) 2022-09-09
CN115035136B true CN115035136B (en) 2023-01-24

Family

ID=83130248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210948517.8A Active CN115035136B (en) 2022-08-09 2022-08-09 Method, system, device and storage medium for bone subregion segmentation in knee joint image

Country Status (1)

Country Link
CN (1) CN115035136B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009052562A1 (en) * 2007-10-23 2009-04-30 Commonwealth Scientific And Industrial Research Organisation Automatic segmentation of articular cartilage in mr images
JP5330041B2 (en) * 2009-03-17 2013-10-30 株式会社日立メディコ Magnetic resonance imaging system
CN104809740B (en) * 2015-05-26 2017-12-08 重庆大学 Knee cartilage image automatic segmentation method based on SVM and Hookean region growth
US11471096B2 (en) * 2018-10-25 2022-10-18 The Chinese University Of Hong Kong Automatic computerized joint segmentation and inflammation quantification in MRI
CN111080573B (en) * 2019-11-19 2024-02-27 上海联影智能医疗科技有限公司 Rib image detection method, computer device and storage medium
CN114723762A (en) * 2022-04-22 2022-07-08 瓴域影诺(北京)科技有限公司 Automatic knee joint CT image segmentation method and device and electronic equipment

Also Published As

Publication number Publication date
CN115035136A (en) 2022-09-09

Similar Documents

Publication Publication Date Title
CN108520519B (en) Image processing method and device and computer readable storage medium
KR101883258B1 (en) Detection of anatomical landmarks
JP5186269B2 (en) Image recognition result determination apparatus, method, and program
US9922268B2 (en) Image interpretation report creating apparatus and image interpretation report creating system
WO2021017297A1 (en) Artificial intelligence-based spine image processing method and related device
Subburaj et al. Automated identification of anatomical landmarks on 3D bone models reconstructed from CT scan images
US7599539B2 (en) Anatomic orientation in medical images
JP4545971B2 (en) Medical image identification system, medical image identification processing method, medical image identification program, and recording medium thereof
JP5314614B2 (en) MEDICAL IMAGE DISPLAY DEVICE, MEDICAL IMAGE DISPLAY METHOD, AND PROGRAM
US8953856B2 (en) Method and system for registering a medical image
US20140093153A1 (en) Method and System for Bone Segmentation and Landmark Detection for Joint Replacement Surgery
US8384735B2 (en) Image display apparatus, image display control method, and computer readable medium having an image display control program recorded therein
CN112037200A (en) Method for automatically identifying anatomical features and reconstructing model in medical image
US20110188706A1 (en) Redundant Spatial Ensemble For Computer-Aided Detection and Image Understanding
EP3424017B1 (en) Automatic detection of an artifact in patient image data
EP3373194B1 (en) Image retrieval apparatus and image retrieval method
Emrich et al. CT slice localization via instance-based regression
US20060078184A1 (en) Intelligent splitting of volume data
Zou et al. Semi-automatic segmentation of femur based on harmonic barrier
CN115035136B (en) Method, system, device and storage medium for bone subregion segmentation in knee joint image
CN104146766B (en) Scanning means, scan method and medical image equipment
CN112734740B (en) Method for training target detection model, target detection method and device
Feng et al. Automatic fetal weight estimation using 3d ultrasonography
CN115700740A (en) Medical image processing method, apparatus, computer device and storage medium
JP7228332B2 (en) MEDICAL IMAGE PROCESSING APPARATUS AND MEDICAL IMAGE PROCESSING METHOD

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant