CN110458850B - Segmentation method and segmentation system for large joint tissues - Google Patents


Info

Publication number: CN110458850B
Authority: CN (China)
Prior art keywords: tissue, single pixel, contour, image, segmentation
Legal status: Active (assumed; not a legal conclusion)
Application number: CN201910708060.1A
Other languages: Chinese (zh)
Other versions: CN110458850A
Inventors: 林海晓, 武正强
Current assignee (listed assignee may be inaccurate): Beijing Linkmed Technology Co., Ltd.
Original assignee: Beijing Linkmed Technology Co., Ltd.
Application filed by: Beijing Linkmed Technology Co., Ltd.
Priority application: CN201910708060.1A
Publication of application: CN110458850A
Publication of granted patent: CN110458850B
Legal status: Active

Classifications

    • G06F18/2415: Classification techniques based on parametric or probabilistic models (e.g. likelihood ratio)
    • G06T7/11: Region-based segmentation
    • G06T7/136: Segmentation involving thresholding
    • G06T2207/10012: Stereo images
    • G06T2207/30008: Bone (biomedical image processing)

Abstract

The invention provides a segmentation method and a segmentation system for large joint tissues, addressing the low efficiency of existing joint tissue segmentation. The method comprises the following steps: forming an image segmentation reference; forming the tissue class and position features of each single pixel in an MRI image according to the image segmentation reference, so as to determine tissue plane contours; and forming a tissue stereo contour from the tissue plane contours in the MRI images combined with the image segmentation reference. The method simplifies the tissue segmentation process by exploiting bone features and improves the accuracy of automatic contour processing and identification by using correlated image and information features. By fusing the image quantization rules, the per-pixel information content, and the modeling framework established by the segmentation reference, automatic subject segmentation, accurate contour positioning, and automatic three-dimensional modeling of large-joint structural tissues can be achieved in MRI images, effectively relieving the burden on professional human identification resources.

Description

Segmentation method and segmentation system for large joint tissues
Technical Field
The invention relates to the technical field of medical image processing, in particular to a method and a system for segmenting large joint tissues.
Background
Joint structures are complex; the wrist joint, for example, contains bones, tendons, vessels and nerves. Because of the limited gray-scale resolution of MRI (Magnetic Resonance Imaging), the contours between the non-bone tissues are often unclear and the edge gray-scale regions of adjacent tissues blend together, making image segmentation between the tissues difficult.
In the prior art, selective editing, defect compensation, and the tedious separation of artifacts from data must first be performed manually on the various tissue patterns; a region-growing method then generates the segmentation result from which a complete digital model is built, for example for the contour segmentation of larger bone tissues. This consumes a great deal of operator time, and when larger volumes of data must be processed, the available professional resources cannot meet the time requirements. The prior art also includes image processing pipelines that classify tissue-associated pattern elements with computer algorithms such as the random forest algorithm; the main process is as follows:
A series of decision trees is built by learning a labeled data sample set in a random fashion. Training each tree is a process of building a series of nodes, which divide into intermediate nodes and leaf nodes. Each intermediate node is a weak classifier: it poses a question and splits the samples into left and right child nodes according to the answer, so as to maximize some measure of the split. The Information Gain Ratio (IGR) is used as the measure for splitting the tree's nodes. It is defined as follows:
IGR(R) = G(R) / SplitInfo_R(D)

G(R) = Info(D) - Info_R(D)

Info(D) = -Σ_{i=1..m} p_i · log2(p_i)

Info_R(D) = Σ_{j=1..k} (|D_j| / |D|) · Info(D_j)

SplitInfo_R(D) = -Σ_{j=1..k} (|D_j| / |D|) · log2(|D_j| / |D|)
where D denotes the sample set, R an arbitrary split, and p_i the probability of class i; G(R) denotes the information gain and SplitInfo_R(D) the split information amount. The higher the information gain ratio obtained when the current sample set is split, the better the splitting effect and the purer the resulting subsets. The split left and right child nodes have the same data structure as the parent node, but their sample sets should be purer than the parent's, meaning that samples of one class occupy a higher proportion than the others, which makes the class attribution problem easier to decide. Building a tree is a process of repeatedly splitting nodes downward; the leaf nodes, usually the last layer of the decision tree, contain the classification results and need not be split further. Training a tree thus begins by splitting the root node and ends at the leaf nodes. A plurality of decision trees trained in this random fashion forms a random forest. Each intermediate node in each tree holds a classifier consisting of one feature and a threshold for that feature, and each leaf node holds a classification result (the probability of being judged to be each class).
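As a minimal sketch of the IGR computation defined above (the function names are ours, not the patent's; labels stand in for per-pixel tissue classes):

```python
import math
from collections import Counter

def info(labels):
    """Entropy Info(D) of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain_ratio(labels, groups):
    """Information Gain Ratio of splitting `labels` into the partition `groups`."""
    n = len(labels)
    info_r = sum(len(g) / n * info(g) for g in groups)   # Info_R(D)
    gain = info(labels) - info_r                         # G(R)
    split_info = -sum(len(g) / n * math.log2(len(g) / n) for g in groups)
    return gain / split_info if split_info else 0.0      # IGR(R)

# A perfectly pure split of a mixed set yields the maximal gain ratio of 1.0:
mixed = ["bone", "bone", "tendon", "tendon"]
print(info_gain_ratio(mixed, [["bone", "bone"], ["tendon", "tendon"]]))  # 1.0
```

A split that leaves both children as mixed as the parent yields a gain ratio of 0, which is why the splitting question at each intermediate node is chosen to maximize this measure.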
For MRI images, however, the discriminative power of boundary gray-scale features alone is limited.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a segmentation method and a segmentation system for large joint tissues, which solve the technical problem of low efficiency in existing joint tissue segmentation.
The segmentation method of the large joint organization of the embodiment of the invention comprises the following steps:
forming an image segmentation reference;
forming tissue class and position characteristics of a single pixel in the MRI image according to the image segmentation reference to determine a tissue plane contour;
and forming a tissue stereo contour according to the tissue plane contour in each MRI image and the image segmentation reference.
In an embodiment of the present invention, the forming the image segmentation reference includes:
establishing a coordinate space of the MRI images according to the pixel size of each MRI image;
determining a bone contour in the MRI image according to the bone image characteristics;
marking a range reference point of the tissue to be segmented in the reference MRI image;
forming a dummy pixel surrounding a single pixel in the reference MRI image;
and forming a random forest classification model of the tissue to be segmented through a pixel training set.
In an embodiment of the present invention, the dummy pixels may be formed by three single pixels arranged in a straight line, or may be formed by three single pixels arranged in a polygonal line.
In an embodiment of the present invention, the forming tissue type and position characteristics of a single pixel in an MRI image according to the image segmentation reference to determine a tissue plane contour includes:
determining the category of each pseudo pixel through a random forest classification model;
determining the category of a single pixel through the random forest classification model;
comparing the category weights of the single pixel and the associated pseudo pixels to confirm the tissue category of the single pixel;
and comparing the gray weight values of the single pixel and the associated pseudo pixels to confirm the position characteristics of the single pixel and form a tissue plane outline.
In an embodiment of the present invention, the comparing the class weights of the single pixel and the associated dummy pixels to determine the tissue class of the single pixel includes:
clustering by using a range reference point to obtain an aggregation range set of the single pixel;
in each aggregation range, comparing the class of each single pixel with the classes of its surrounding dummy pixels: when at least two adjacent dummy pixels share the class of the single pixel, the single pixel's class is confirmed as accurate and the tissue class corresponding to that class is confirmed; when fewer than two adjacent dummy pixels share the class of the single pixel, the single pixel is excluded from the aggregation range.
In an embodiment of the present invention, the comparing the gray-scale weights of the single pixel and the associated dummy pixels to confirm the position feature of the single pixel includes:
determining each aggregation range one by one, and determining a peripheral single pixel of the edge in one aggregation range;
comparing the gray value of each peripheral single pixel with those of its surrounding dummy pixels: when the gray jump between the single pixel and at least two of its adjacent dummy pixels reaches the gray jump threshold, the contour at the corresponding single pixel is confirmed as accurate; when no two adjacent dummy pixels reach the gray jump threshold relative to the single pixel, the single pixel is excluded from the aggregation range.
In an embodiment of the present invention, the forming a tissue volume contour from the tissue plane contour in each MRI image in combination with the image segmentation reference includes:
establishing relative position characteristics between the tissue plane contour and the bone contour in each MRI image;
forming a fitting coefficient between the adjacent MRI images according to the variation trend of the relative position characteristics between tissues of the adjacent MRI images;
forming the tissue volume contour in the coordinate space in combination with the tissue plane contour and the fitting coefficient in an MRI image.
The segmentation system of the large joint organization of the embodiment of the invention comprises:
a memory for storing program codes corresponding to the processing procedures of the segmentation method of the large joint tissues;
a processor for executing the program code.
The segmentation system of the large joint tissue in the embodiment of the invention comprises:
reference forming means for forming an image division reference;
two-dimensional contour forming means for forming tissue type and position characteristics of a single pixel in the MRI image based on the image segmentation reference to determine a tissue plane contour;
and the three-dimensional contour forming device is used for forming a tissue three-dimensional contour according to the tissue plane contour in each MRI image and the image segmentation reference.
According to the segmentation method and the segmentation system for the large-joint tissue, provided by the embodiment of the invention, the tissue segmentation process is simplified by utilizing the bone characteristics, the automatic processing and identification accuracy of the object contour formed by utilizing the related image and information characteristics is improved, and the image quantization rule, the pixel content information and the modeling framework are effectively fused by utilizing the segmentation benchmark, so that the large-joint tissue can realize automatic main body segmentation, accurate contour positioning and automatic three-dimensional modeling in an MRI image, and the identification efficiency of professional human identification resources is effectively improved.
Drawings
Fig. 1 is a schematic flow chart illustrating a segmentation method of a large joint organization according to an embodiment of the present invention.
Fig. 2 is a schematic diagram illustrating a forming process of a segmentation reference in the segmentation method of a large joint tissue according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a pseudo pixel in a segmentation method of a large joint tissue according to an embodiment of the invention.
Fig. 4 is a schematic diagram illustrating a forming process of tissue type and location attribute in a segmentation method of a large joint tissue according to an embodiment of the present invention.
Fig. 5 is a schematic diagram illustrating a process of forming a solid contour in the segmentation method of a large joint tissue according to an embodiment of the present invention.
Fig. 6 is a schematic diagram illustrating an architecture of a segmentation system for large-joint organization according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the invention is further described below with reference to the accompanying drawings and specific embodiments. It should be understood that the described embodiments are merely exemplary of the invention and not restrictive of its full scope. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
The segmentation method of the large joint organization according to an embodiment of the present invention is shown in fig. 1. In fig. 1, the present embodiment includes:
step 100: an image segmentation basis is formed.
Those skilled in the art will appreciate that a set of parallel-section image references includes, but is not limited to, an in-image planar coordinate reference, an inter-image coordinate conversion reference, and the color space of the images. The different sectional views of the same object appearing in the individual images can thereby be associated as sections of one object.
One skilled in the art will appreciate that a single pixel and associated region or graphical object may be located according to both coordinate space and color space.
Those skilled in the art will understand that, owing to the MRI imaging mechanism, bone tissue appears with good definition and clear section edges in MRI images, so the bone section contour can be located with the edge detection and extraction methods of existing image processing technology.
Those skilled in the art can understand that a plurality of dominant and recessive characteristics of a determined tissue can be obtained by performing diversified label processing on the graphical characteristics of the tissue in the MRI image of the major joint, and an effective random forest classification model for the tissue consisting of the major joint can be formed by forming a training set and adopting a supervised learning mode and utilizing a random forest algorithm.
Step 200: tissue classification and location features of a single pixel in the MRI image are formed from the image segmentation basis to determine tissue plane contours.
As can be understood by those skilled in the art, the tissue class can be obtained from the information content of the image and the position feature from the gray-scale transitions of the image; mutual verification of these two correlated implicit features improves the precision of the tissue plane contour in the image.
Step 300: and forming a tissue three-dimensional contour according to the tissue plane contour in each MRI image and the image segmentation reference.
As can be understood by those skilled in the art, in the existing three-dimensional modeling process, the three-dimensional contour of the same object can be formed according to the number of parallel sections and the contour of the object in the sections, and the smoothness and the precision of the three-dimensional contour can be improved by quantifying the change trend of the associated object in different sections.
According to the segmentation method for large-joint tissue, the accuracy of automatic contour processing and identification is improved by using correlated image and information features, and the image quantization rules, per-pixel information content and modeling framework are effectively fused through the segmentation reference, so that automatic subject segmentation, accurate contour positioning and automatic three-dimensional modeling of large-joint tissue are achieved in MRI images, effectively relieving the burden on professional human identification resources.
The division reference formed in the division method of the large joint tissue according to an embodiment of the present invention is shown in fig. 2. In fig. 2, the present embodiment includes:
step 110: and establishing a coordinate space of the MRI images according to the pixel size of each MRI image.
An x-y coordinate space is established from the number of pixels in each MRI image and the image resolution, and extended to an x-y-z coordinate space using the actual distance between the MRI tomographic slices together with the image resolution. A proportionality coefficient of the coordinate space is set to fit the posture and position of each MRI image into the x-y-z coordinate space.
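A minimal sketch of this coordinate-space mapping (spacing values and the function name are illustrative assumptions, not values from the patent):

```python
def voxel_to_xyz(i, j, k, pixel_spacing_mm=0.5, slice_gap_mm=3.0, scale=1.0):
    """Map pixel (i, j) of MRI slice k into the common x-y-z coordinate space.

    In-plane positions come from the pixel count and image resolution
    (pixel_spacing_mm); the z coordinate comes from the actual distance
    between tomographic slices (slice_gap_mm). `scale` plays the role of the
    proportionality coefficient used to fit every slice into the shared space.
    """
    return (i * pixel_spacing_mm * scale,
            j * pixel_spacing_mm * scale,
            k * slice_gap_mm * scale)

print(voxel_to_xyz(10, 20, 2))  # (5.0, 10.0, 6.0)
```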
Step 120: and determining the bone contour in the MRI image according to the bone image characteristics.
The bone contour in each MRI image is determined by image contour tracking; excluding the bone contour range from the MRI image quickly reduces the amount of data to be processed for the other tissues to be segmented, while the bone contour is used to establish the relative position of each tissue to be segmented during image processing.
Step 130: the range reference point of the tissue to be segmented is marked in the fiducial MRI image.
Manually confirming a range reference point within the approximate outline of the tissue to be segmented provides a supervised initial condition for the automatic tissue segmentation and clustering processes. Marking the range reference point is only qualitative: the specific position within the approximate outline may be chosen at random, and only its coordinates need to be definite, which greatly reduces the time cost of manual confirmation.
Step 140: a dummy pixel surrounding a single pixel is formed in the reference MRI image.
A dummy pixel is formed around a single pixel from that pixel's surrounding pixels; it is a superposition of the surrounding pixels' common graphic features and likewise a mixture of their common information features.
The process of forming the pseudo pixels in the segmentation method of the large joint tissues in one embodiment of the invention comprises the following steps:
around a single pixel, the graphic features and/or information features of consecutive odd pixels are superimposed to form a dummy pixel.
As shown in fig. 3, the dummy pixel formed in the present embodiment may be a dummy pixel formed by three single pixels arranged in a straight line, or may be a dummy pixel formed by three single pixels arranged in a polygonal line.
A dummy pixel has an orientation attribute: it superposes the image features along one orientation, making explicit the trend of the image features in that orientation around the single pixel, so that gradual-change or jump characteristics of the oriented image features can be captured, which facilitates the analysis of the gray-scale changes and information content around the single pixel.
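The oriented three-pixel dummy pixels described above can be sketched as follows (the offset table, the choice of the mean as the superposition, and the function name are our assumptions for illustration):

```python
# Offsets of the three-pixel dummy pixels around a centre pixel (y, x):
# four straight-line orientations and, as one example, a polyline shape.
LINE_OFFSETS = {
    "horizontal": [(0, -1), (0, 0), (0, 1)],
    "vertical":   [(-1, 0), (0, 0), (1, 0)],
    "diag_down":  [(-1, -1), (0, 0), (1, 1)],
    "diag_up":    [(1, -1), (0, 0), (-1, 1)],
    "elbow":      [(0, -1), (0, 0), (1, 0)],   # polyline (bent) arrangement
}

def dummy_pixel_gray(img, y, x, orientation):
    """Superpose the gray values of three consecutive pixels into one
    dummy-pixel value (here: their mean, one simple choice of superposition)."""
    vals = [img[y + dy][x + dx] for dy, dx in LINE_OFFSETS[orientation]]
    return sum(vals) / len(vals)

img = [[ 0, 12,  0],
       [30, 40, 50],
       [ 0, 71,  0]]
print(dummy_pixel_gray(img, 1, 1, "horizontal"))  # 40.0
```

Comparing the values from different orientations around the same centre pixel is what exposes the directional gradual-change or jump characteristics.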
Step 150: and forming a random forest classification model of the tissue to be segmented through the pixel training set.
The pixel training set is derived from existing high-quality MRI images and carries a large number of feature labels; appropriate interference data are retained after preprocessing. A large number of decision tree models are formed through supervised training, yielding a random forest classification model for the gray data and tissue feature information of MRI images.
The intermediate nodes of each decision tree in the random forest classification model act as classifiers, holding feature judgment conditions and thresholds such as gray-scale features, information-carrying features, and negative or positive inter-pixel correlation features. Each leaf node holds a probability distribution p(c_j | v, leaf(tree_t)) over the classes, from which the class attribution of a single pixel v can be judged. The results of all T decision trees are integrated into the final judgment for the single pixel v.
The probability distribution of the leaf node where the single pixel v is located in each decision tree is generally integrated by using an average voting method. Its mathematical expression can be written as follows:
p(c_j | v) = (1/T) · Σ_{t=1..T} p(c_j | v, leaf(tree_t))
the probability distributions of the T leaf nodes are obtained and synthesized by the single pixel through the T decision trees, and then the class with the highest probability is the final result of the single pixel v. And after all pixels in the image to be detected are classified by the random forest, the image segmentation task is completed.
The forming process of the tissue type and the position attribute in the segmentation method of the large-joint tissue according to an embodiment of the present invention is shown in fig. 4. In fig. 4, the present embodiment includes:
step 210: and determining the category of each pseudo pixel through a random forest classification model.
The pixel-attribute superposition of a dummy pixel yields the change trend of the single pixel in the corresponding orientation, and the random forest classification model yields the dominant class within the mixed information formed by the superposition. Through the dummy pixels, the expected class and the expected gray level in each determined orientation around the single pixel are obtained, forming a quantitative trend of the single pixel's edge.
Step 220: and determining the category of the single pixel through a random forest classification model.
A determined class is formed by quantitative analysis of the dominant attributes and implicit features of the single pixel, as obtained from the random forest classification model. The class may be expressed as a percentage probability.
Step 230: comparing the class weights of the single pixel and the associated pseudo pixels identifies the tissue class of the single pixel.
The comparison process comprises the following steps:
clustering by using the range reference point to obtain an aggregation range set of a single pixel; clustering may employ, for example, a k-means clustering algorithm.
In each aggregation range, comparing the class of each single pixel with the classes of its surrounding dummy pixels: when at least two adjacent dummy pixels share the class of the single pixel, the single pixel's class is confirmed as accurate and the tissue class corresponding to that class is confirmed; when fewer than two adjacent dummy pixels share the class of the single pixel, the single pixel is excluded from the aggregation range.
Since the two-dimensional shape of a tissue to be segmented is usually closed, using the range reference point as the clustering seed yields accurate range clustering for tissues of the same class. Continuous dummy pixels of the same class determine the connectivity between single pixels, keeping the edge of the aggregation range of same-class tissue essentially stable.
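The neighbour-class check of step 230 can be sketched as follows (a minimal illustration; the function name and the four-dummy-pixel neighbourhood are our assumptions):

```python
def confirm_tissue_class(pixel_class, neighbour_dummy_classes):
    """Keep a single pixel in the aggregation range only when at least two of
    its adjacent dummy pixels carry the same class, per the rule above."""
    support = sum(1 for c in neighbour_dummy_classes if c == pixel_class)
    return support >= 2

# Two same-class neighbours confirm the pixel; fewer than two exclude it:
print(confirm_tissue_class("tendon", ["tendon", "tendon", "vessel", "nerve"]))  # True
print(confirm_tissue_class("tendon", ["vessel", "tendon", "nerve", "nerve"]))   # False
```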
Step 240: comparing the gray weight of the single pixel and the associated pseudo pixel to confirm the position characteristic of the single pixel and form the tissue plane outline.
The comparison process comprises the following steps:
determining each aggregation range one by one, and determining a peripheral single pixel of the edge in one aggregation range;
comparing the gray value of each peripheral single pixel with those of its surrounding dummy pixels: when the gray jump between the single pixel and at least two of its adjacent dummy pixels reaches the gray jump threshold, the contour at the corresponding single pixel is confirmed as accurate; when no two adjacent dummy pixels reach the gray jump threshold relative to the single pixel, the single pixel is excluded from the aggregation range.
Since the gray level along the tissue plane contour of a tissue to be segmented usually interleaves with, or lies close to, the gray level of the outer pixels, a weighted comparison of the gray values of adjacent dummy pixels against a preset gray jump threshold (a percentage value) ensures the accuracy of the aggregation-range contour of same-class tissue.
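The gray-jump check of step 240 can be sketched as follows (a minimal illustration; the function name, the relative-jump formula, and the 30% threshold are our assumptions, not values from the patent):

```python
def on_contour(pixel_gray, neighbour_dummy_grays, jump_threshold=0.3):
    """A peripheral single pixel is confirmed as a contour pixel when the
    relative gray jump to at least two adjacent dummy pixels reaches the
    threshold (expressed here as a fraction, e.g. 0.3 = a 30% jump)."""
    jumps = sum(
        1 for g in neighbour_dummy_grays
        if abs(g - pixel_gray) / max(pixel_gray, g, 1e-9) >= jump_threshold
    )
    return jumps >= 2

print(on_contour(100, [150, 160, 105, 102]))  # True: two neighbours jump >= 30%
print(on_contour(100, [110, 105, 102, 108]))  # False: no neighbour reaches 30%
```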
The process of forming the solid contour in the segmentation method of the large joint tissue according to an embodiment of the present invention is shown in fig. 5. In fig. 5, the present embodiment includes:
step 310: and establishing the relative position characteristics between the tissue plane contour and the bone contour in each MRI image.
The tissues and bone contours in an MRI image have a determinate relative positional relationship, comprising the determinate shapes of the tissue and bone contours, the closest position and spacing between each tissue contour and the bone contour, and the closest positions and spacings between the tissue contours; quantized relative position features are formed from these relationships.
Step 320: and forming a fitting coefficient between the adjacent MRI images according to the change trend of the relative position characteristics between tissues of the adjacent MRI images.
The plane contours of the same tissues and of the bone are similar between adjacent MRI images, and the change of the same tissue between adjacent images is limited; detailed quantitative parameters of the relative change of the relative position features between adjacent MRI images can therefore be obtained, forming a fitting coefficient for each tissue and the bone between adjacent MRI images.
Step 330: the tissue volume contour is formed in the coordinate space in combination with the tissue plane contour and the fitting coefficients in the MRI image.
And forming a three-dimensional contour of the skeleton and each segmented tissue by using the relative position characteristics in each MRI image and the fitting coefficient between each MRI image, and completing the segmentation of the large joint tissue.
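One way the fitting coefficient can smooth the stacked plane contours is linear blending between matched contours of adjacent slices; this is our illustrative assumption of the fitting step, not the patent's exact procedure:

```python
def interpolate_contours(contour_a, contour_b, fit):
    """Blend two matched plane contours of the same tissue from adjacent MRI
    slices. `fit` in [0, 1] is the fitting coefficient derived from the trend
    of relative position features; contours are point lists matched by index."""
    return [
        ((1 - fit) * xa + fit * xb, (1 - fit) * ya + fit * yb)
        for (xa, ya), (xb, yb) in zip(contour_a, contour_b)
    ]

# Intermediate contour halfway between two slices:
a = [(0.0, 0.0), (10.0, 0.0)]
b = [(2.0, 2.0), (12.0, 2.0)]
print(interpolate_contours(a, b, 0.5))  # [(1.0, 1.0), (11.0, 1.0)]
```

Stacking the original contours at their z positions and inserting blended contours between them yields a smoother three-dimensional surface than stacking alone.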
The segmentation method of the embodiment separates the tissues and bone of a large joint into two-dimensional contours and three-dimensional objects, automating segmentation and objectifying the tissues, which greatly improves the utilization efficiency and observation dimensions of MRI images.
The segmentation system of the large joint organization of the embodiment of the invention comprises:
the memory is used for storing the program codes corresponding to the processing procedures of the segmentation method of the large-joint organization in the embodiment;
and the processor is used for executing the program codes corresponding to the processing procedures of the segmentation method of the large joint organization in the embodiment.
The processor may be a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), an MCU (Microcontroller Unit) system board, an SoC (System on a Chip) system board, or a PLC (Programmable Logic Controller) minimum system including I/O.
A segmentation system for large joint organization according to an embodiment of the present invention is shown in fig. 6. In fig. 6, the present embodiment includes:
a reference forming device 1100 for forming an image division reference;
a two-dimensional contour forming device 1200, for forming tissue class and position feature of a single pixel in the MRI image according to the image segmentation reference to determine the tissue plane contour;
the three-dimensional contour forming apparatus 1300 is configured to form a tissue three-dimensional contour from the tissue plane contour in each MRI image in combination with the image segmentation criteria.
As shown in fig. 6, in an embodiment of the present invention, the reference forming device 1100 includes:
a space forming module 1110 for establishing a coordinate space of the MRI images according to pixel sizes of the MRI images;
a bone formation module 1120 for determining a bone contour in the MRI image based on the bone image features;
a reference point forming module 1130 for marking a range reference point of the tissue to be segmented in the reference MRI image;
a dummy pixel formation module 1140 for forming a dummy pixel surrounding the single pixel in the reference MRI image;
In an embodiment of the invention, the dummy pixel forming module 1140 is configured to form a dummy pixel by superimposing the pattern features of an odd number of consecutive pixels surrounding the single pixel.
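The patent leaves the construction of a dummy pixel abstract. A minimal sketch, assuming (hypothetically) that each dummy pixel is the mean gray value of three consecutive pixels on one side of the single pixel, one dummy pixel per direction:

```python
import numpy as np

def form_dummy_pixels(img, r, c):
    """Form dummy pixels around the single pixel at (r, c) by averaging
    the gray values of three consecutive pixels along each of the four
    directions. Reading the patent's 'superimposing the pattern features
    of an odd number of consecutive pixels' as a 3-pixel mean is an
    assumption for illustration only."""
    offsets = {
        "up":    [(-1, 0), (-2, 0), (-3, 0)],
        "down":  [(1, 0), (2, 0), (3, 0)],
        "left":  [(0, -1), (0, -2), (0, -3)],
        "right": [(0, 1), (0, 2), (0, 3)],
    }
    h, w = img.shape
    dummies = {}
    for name, offs in offsets.items():
        # Clip at the image border: only in-bounds pixels contribute.
        vals = [img[r + dr, c + dc] for dr, dc in offs
                if 0 <= r + dr < h and 0 <= c + dc < w]
        if vals:
            dummies[name] = float(np.mean(vals))
    return dummies
```

On a 7x7 gradient image, the single pixel at the center thus receives four dummy pixels, one per direction, each summarizing a short run of neighbors.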
and a model forming module 1150 for forming a random forest classification model of the tissue to be segmented from the pixel training set.
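The random forest model itself is not detailed in the patent. The sketch below shows how such a per-pixel classifier could be trained with scikit-learn; the feature vector (gray value plus row/column coordinates) and the three-class labeling are illustrative assumptions, not the patent's actual training set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training set: one row per pixel, features are
# (gray value, row, column); labels are tissue classes derived from
# the marked range reference points (0 = background, 1 = cartilage,
# 2 = meniscus -- illustrative labels only).
rng = np.random.default_rng(0)
X_train = rng.random((300, 3))
y_train = rng.integers(0, 3, size=300)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Classify a single pixel together with its surrounding dummy pixels
# in one call; the per-pixel classes feed the comparison modules below.
pixels = rng.random((5, 3))
classes = clf.predict(pixels)
```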
As shown in fig. 6, in an embodiment of the present invention, the two-dimensional contour forming device 1200 includes:
a first class forming module 1210, configured to determine a class of each dummy pixel through a random forest classification model;
a second class forming module 1220, configured to determine a class of a single pixel through a random forest classification model;
a first class comparing module 1230 for comparing the class weights of the single pixel and its associated dummy pixels to confirm the tissue class of the single pixel;
and a second class comparing module 1240 for comparing the gray weights of the single pixel and its associated dummy pixels to confirm the position features of the single pixel and form the tissue plane contour.
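The two comparison modules can be read as simple neighborhood votes, following the rules recited in claims 5 and 6. A sketch of both checks, assuming four dummy pixels per single pixel:

```python
def confirm_tissue_class(single_class, dummy_classes):
    """Class-weight comparison (module 1230): the single pixel's class
    is confirmed when at least two associated dummy pixels carry the
    same class; otherwise the pixel is excluded from the aggregation
    range (returned as None)."""
    agreeing = sum(1 for c in dummy_classes if c == single_class)
    return single_class if agreeing >= 2 else None

def is_contour_pixel(single_gray, dummy_grays, jump_threshold):
    """Gray-weight comparison (module 1240): the single pixel lies on
    the tissue contour when its gray value differs by at least the
    gray jump threshold from two or more adjacent dummy pixels."""
    jumps = sum(1 for g in dummy_grays
                if abs(g - single_gray) >= jump_threshold)
    return jumps >= 2
```

The specific vote counts and the interpretation of "weight" as a simple tally are assumptions; the patent only requires at least two agreeing (or jumping) adjacent dummy pixels.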
As shown in fig. 6, in an embodiment of the present invention, the three-dimensional contour forming device 1300 includes:
a feature formation module 1310 for establishing relative positional features between the tissue plane contour and the bone contour in each MRI image;
a coefficient forming module 1320, configured to form a fitting coefficient between adjacent MRI images according to a variation trend of a relative position feature between tissues of the adjacent MRI images;
a volume generation module 1330 for forming a tissue volume contour in the coordinate space in combination with the tissue plane contour and the fitting coefficients in the MRI image.
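How the fitting coefficients stitch plane contours into a volume is not specified beyond "variation trend of relative position features". One plausible reading, sketched here under the assumption that the coefficient acts as a linear blend weight between corresponding contour points of adjacent slices:

```python
import numpy as np

def interpolate_contours(contour_a, contour_b, coeff):
    """Form an intermediate contour between two adjacent MRI slices by
    blending corresponding contour points. Treating the fitting
    coefficient as a scalar linear blend weight is an illustrative
    assumption; the patent derives it from the change in relative
    position features between the adjacent slices."""
    a = np.asarray(contour_a, dtype=float)
    b = np.asarray(contour_b, dtype=float)
    return (1.0 - coeff) * a + coeff * b
```

Stacking the original plane contours together with such interpolated intermediates in the coordinate space yields the tissue's three-dimensional contour.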
The above description covers only preferred embodiments of the present invention; the scope of the invention is not limited thereto. Any changes or substitutions that can be readily conceived by those skilled in the art within the technical scope disclosed herein fall within the scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A segmentation method for large joint tissue, characterized by comprising the following steps:
forming an image segmentation reference;
forming tissue class and position characteristics of a single pixel in the MRI image according to the image segmentation reference to determine a tissue plane contour;
forming a tissue stereo contour according to the tissue plane contour in each MRI image and the image segmentation reference, comprising:
establishing relative position features between the tissue plane contour and the bone contour in each MRI image, comprising: establishing a determined relative positional relationship between each tissue and each bone contour in an MRI image, wherein the relative positional relationship comprises the determined shapes of each tissue contour and each bone contour, the closest position and distance between each tissue contour and each bone contour, and the closest position and distance between tissue contours, and forming the relative position features by quantifying the relative positional relationship;
forming a fitting coefficient between adjacent MRI images according to the variation trend of the relative position features between tissues of the adjacent MRI images, comprising: obtaining detailed quantitative parameters of the relative position features that change between adjacent MRI images, to form fitting coefficients for the tissues and bones between the adjacent MRI images;
forming the tissue stereo contour in a coordinate space by combining the tissue plane contour and the fitting coefficients in the MRI images, comprising: forming a three-dimensional contour of the bone and of each tissue to be segmented by using the relative position features within each MRI image and the fitting coefficients between the MRI images.
2. The segmentation method for large joint tissue according to claim 1, wherein the forming an image segmentation reference comprises:
establishing a coordinate space of the MRI images according to the pixel size of each MRI image;
determining a bone contour in the MRI image according to the bone image characteristics;
marking a range reference point of the tissue to be segmented in the reference MRI image;
forming a dummy pixel surrounding a single pixel in the reference MRI image;
and forming a random forest classification model of the tissue to be segmented through a pixel training set.
3. The segmentation method for large joint tissue according to claim 2, wherein a dummy pixel is formed by three single pixels arranged in a straight line or three single pixels arranged in a broken line.
4. The method for segmenting large joint tissues according to claim 1, wherein the forming tissue classes and position characteristics of a single pixel in an MRI image according to the image segmentation reference to determine the tissue plane contour comprises:
determining the category of each pseudo pixel through a random forest classification model;
determining the category of a single pixel through the random forest classification model;
comparing the category weights of the single pixel and the associated pseudo pixels to confirm the tissue category of the single pixel;
and comparing the gray weight values of the single pixel and the associated pseudo pixels to confirm the position characteristics of the single pixel and form the tissue plane contour.
5. The method of segmenting large-joint tissues according to claim 4, wherein the comparing the class weights of the single pixel and the associated pseudo pixels to confirm the tissue class of the single pixel comprises:
clustering by using a range reference point to obtain an aggregation range set of the single pixel;
in each aggregation range, comparing the class of each single pixel with the classes of the surrounding pseudo pixels; when the classes of at least two adjacent pseudo pixels are the same as the class of the single pixel, confirming that the class of the single pixel is accurate, and confirming the tissue class corresponding to that class; when fewer than two adjacent pseudo pixels have the same class as the single pixel, excluding the single pixel from the aggregation range.
6. The segmentation method for large joint tissue according to claim 5, wherein the comparing the gray weight values of the single pixel and the associated pseudo pixels to confirm the position characteristics of the single pixel comprises:
determining each aggregation range one by one, and determining the peripheral single pixels at the edge of an aggregation range;
comparing the gray value of each peripheral single pixel with the gray values of its surrounding pseudo pixels; when the differences between the gray values of at least two adjacent pseudo pixels and the gray value of the peripheral single pixel reach the gray jump threshold, confirming that the contour at the corresponding single pixel is accurate; when fewer than two adjacent pseudo pixels differ from the single pixel by the gray jump threshold, excluding the single pixel from the aggregation range.
7. A segmentation system for large joint tissues, comprising:
a memory for storing the program code corresponding to the processing procedure of the segmentation method for large joint tissue according to any one of claims 1 to 6;
a processor for executing the program code.
CN201910708060.1A 2019-08-01 2019-08-01 Segmentation method and segmentation system for large joint tissues Active CN110458850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910708060.1A CN110458850B (en) 2019-08-01 2019-08-01 Segmentation method and segmentation system for large joint tissues


Publications (2)

Publication Number Publication Date
CN110458850A CN110458850A (en) 2019-11-15
CN110458850B true CN110458850B (en) 2020-12-11

Family

ID=68484628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910708060.1A Active CN110458850B (en) 2019-08-01 2019-08-01 Segmentation method and segmentation system for large joint tissues

Country Status (1)

Country Link
CN (1) CN110458850B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091365A (en) * 2014-07-12 2014-10-08 大连理工大学 Acetabulum tissue model reconstruction method for serialization hip joint CT image
CN108447063A (en) * 2017-12-15 2018-08-24 浙江中医药大学 The multi-modal nuclear magnetic resonance image dividing method of Gliblastoma
CN109448008A (en) * 2018-10-22 2019-03-08 盐城吉大智能终端产业研究院有限公司 A kind of brain MRI image dividing method of combination biological characteristic
CN109636730A (en) * 2017-09-29 2019-04-16 交互数字Ce专利控股公司 Method for the dummy pixel in filter depth figure
CN110060253A (en) * 2019-05-06 2019-07-26 西安交通大学 Based on Gabor multi-feature extraction and preferred compound sleeper porosity defects recognition methods

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7003161B2 (en) * 2001-11-16 2006-02-21 Mitutoyo Corporation Systems and methods for boundary detection in images
JP4760288B2 (en) * 2005-10-13 2011-08-31 ソニー株式会社 Image display system, display device, image resynthesis device, image resynthesis method, and program
CN102663729A (en) * 2012-03-11 2012-09-12 东华大学 Method for colorizing vehicle-mounted infrared video based on contour tracing
CN103440665B (en) * 2013-09-13 2016-09-14 重庆大学 Automatic segmentation method of knee joint cartilage image
CN103729875B (en) * 2013-12-09 2016-09-07 深圳先进技术研究院 The left ventricle three-D profile method for reconstructing of cardiac magnetic resonance images and system
CN103871057A (en) * 2014-03-11 2014-06-18 深圳市旭东数字医学影像技术有限公司 Magnetic resonance image-based bone segmentation method and system thereof
CN108510507A (en) * 2018-03-27 2018-09-07 哈尔滨理工大学 A kind of 3D vertebra CT image active profile dividing methods of diffusion-weighted random forest



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant