CN110443790B - Cartilage identification method and system in medical image - Google Patents

Cartilage identification method and system in medical image

Info

Publication number
CN110443790B
Authority
CN
China
Prior art keywords
edge
pixel
image
tissue
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910708912.7A
Other languages
Chinese (zh)
Other versions
CN110443790A (en)
Inventor
林海晓
武正强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Linkmed Technology Co ltd
Original Assignee
Beijing Linkmed Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Linkmed Technology Co ltd filed Critical Beijing Linkmed Technology Co ltd
Priority to CN201910708912.7A priority Critical patent/CN110443790B/en
Publication of CN110443790A publication Critical patent/CN110443790A/en
Application granted granted Critical
Publication of CN110443790B publication Critical patent/CN110443790B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a cartilage identification method and system in a medical image, and solves the technical problem that existing identification methods have low adaptability for cartilage identification. The method comprises the following steps: preprocessing a grayscale image to obtain a corresponding gradient image; generating a first edge identification and a second edge identification in the gradient image, respectively, and forming an exact tissue contour from them; and forming a three-dimensional contour of the cartilage tissue according to the variation trend among the tissue contours. Automatic processing and classification are built on the abrupt-change edges of a single kind of quantized information, so the accuracy of tissue edge identification is improved. The method uses different contour generation methods to cross-check and avoid overfitting, so that cartilage tissue is automatically segmented, its contour accurately located, and its three-dimensional model automatically built in the grayscale image, effectively improving the efficiency of professional human identification resources.

Description

Cartilage identification method and system in medical image
Technical Field
The invention relates to the technical field of medical image recognition, in particular to a cartilage recognition method and system in a medical image.
Background
In the prior art, in an MRI (Magnetic Resonance Imaging) image, the boundary between articular cartilage and its surrounding tissues is blurred, the contrast is low, and the cartilage is thin; in certain regions the cartilage is also grayscale-connected with connective tissue and other structures in the image. All of this makes automatic segmentation of articular cartilage very difficult.
In recent research, work on algorithms for automatic cartilage segmentation has been ongoing, and several methods for automatic or semi-automatic cartilage segmentation have been proposed; most of these segmentation algorithms are based on statistical shape models or pattern recognition. At present, there is no related patent in China on automatic or semi-automatic segmentation of articular cartilage.
In the prior art, the various tissue patterns are manually subjected to selective editing, defect compensation, and separation of artifacts and spurious data, and a segmentation result is then generated with a region-growing method to establish a complete digital model. This consumes a great deal of operator time, and when more data needs to be processed the available professional resources cannot meet the time requirements. The prior art also uses a random forest algorithm to process clusters and find boundaries: a series of decision trees is built by learning a labeled sample set in a random fashion. Training each tree is a process of building a series of nodes. The nodes of each decision tree are divided into intermediate nodes and leaf nodes. Each intermediate node is a weak classifier, i.e. an intermediate node contains a question, and the samples are split into left and right child nodes according to the answer to that question so as to maximize some measure obtained after splitting. The information gain ratio (IGR) is used as the measure for splitting the nodes of the tree. It is defined as follows:
IGR(R) = G(R) / I(D)
G(R) = Info(D) − Info_R(D)
Info(D) = −Σ_i p_i · log2(p_i)
Info_R(D) = Σ_j (|D_j| / |D|) · Info(D_j)
I(D) = −Σ_j (|D_j| / |D|) · log2(|D_j| / |D|)
where D represents a sample set, R represents an arbitrary split, D_j represents the j-th subset produced by the split, and p_i represents the probability of class i. G(R) represents the information gain and I(D) represents the split information amount. The higher the information gain ratio obtained after a given split of the current sample set, the better the splitting effect and the purer the resulting subsets. The split left and right child nodes have the same data structure as the parent node, and the sample set in a child node should be purer than that of its parent, meaning that samples of one class occupy a higher proportion than the others, which makes deciding class membership more convenient. Building a tree is a process of continuously splitting nodes downward; the leaf nodes are usually the last layer of the decision tree, contain the classification result, and need no further splitting. Training of a tree starts with splitting the root node and ends at the leaf nodes. Several decision trees trained in a random fashion form a random forest. The intermediate nodes of each tree contain a classifier consisting of a feature and a threshold corresponding to that feature, and each leaf node contains a classification result (the probability of being judged as each class). However, the gray-scale features of cartilage have identification defects, and the resulting contour identification deviation is large.
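As an illustration only (not part of the patent text), the gain-ratio measure defined above can be sketched in Python; the sample labels and function names here are hypothetical.

```python
# Minimal sketch of the information gain ratio for one candidate split of a
# labeled sample set D into subsets D_j (illustrative, not the patent's code).
import math
from collections import Counter

def entropy(labels):
    """Info(D): Shannon entropy of the class labels in a sample set."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total) for n in Counter(labels).values())

def information_gain_ratio(labels, split_groups):
    """IGR(R) = G(R) / I(D) for a split R of the samples into subsets."""
    total = len(labels)
    info_r = sum(len(g) / total * entropy(g) for g in split_groups)   # Info_R(D)
    gain = entropy(labels) - info_r                                   # G(R)
    split_info = -sum((len(g) / total) * math.log2(len(g) / total)
                      for g in split_groups)                          # I(D)
    return gain / split_info if split_info > 0 else 0.0

# Example: a weak classifier splits 8 samples into left/right child nodes.
labels = ['edge', 'edge', 'bg', 'bg', 'bg', 'edge', 'bg', 'bg']
print(information_gain_ratio(labels, [labels[:4], labels[4:]]))
```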
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a method and a system for identifying cartilage in a medical image, which solve the technical problem of low suitability of the existing identification method for cartilage identification.
The cartilage identification method in the medical image comprises the following steps:
preprocessing the gray level image to obtain a corresponding gradient image;
generating a first edge identification and a second edge identification in the gradient image, respectively, and forming an exact tissue contour from the first edge identification and the second edge identification;
and forming a three-dimensional contour of the cartilage tissue according to the change trend among the tissue contours.
In an embodiment of the present invention, the preprocessing the grayscale image to obtain a corresponding gradient image includes:
carrying out smoothing treatment on the gray level image to form a smooth image;
acquiring the gradient of each pixel in the smooth image to form a gradient image;
and manually marking cartilage reference points in the grayscale image.
In an embodiment of the present invention, the forming of the exact tissue contour according to the first edge recognition and the second edge recognition in the gradient image includes:
determining potential edge pixels in the gradient image according to magnitude comparison of pixels and pixel neighborhoods in a gradient direction;
determining a base edge pixel by non-maxima suppression of the potential edge pixels;
performing noise identification on the basic edge pixels to form a first edge pixel image, and finishing the first edge identification;
determining each pixel category in the gradient image through a random forest classification model to form a second edge pixel image and finish second edge identification;
determining edge pixels by overlapping the second edge pixel image and the first edge pixel image, and overlapping the edge pixels to the gray level image to determine a tissue contour.
In an embodiment of the present invention, the forming of the three-dimensional contour of the cartilage tissue according to the variation trend between the tissue contours includes:
establishing relative position characteristics among tissue outlines in each gray level image;
forming a fitting coefficient of the cartilage tissue contour between the adjacent gray level images according to the variation trend of the relative position features between the tissue contours of the adjacent gray level images;
and combining the tissue contour in the gray-scale image and the fitting coefficient to form a three-dimensional contour of the cartilage tissue.
The embodiment of the invention provides a cartilage identification system in medical images, which comprises:
the memory is used for storing program codes corresponding to the processing procedures of the cartilage identification method in the medical image;
a processor for executing the program code.
The embodiment of the invention provides a cartilage identification system in medical images, which comprises:
the preprocessing device is used for preprocessing the grayscale image to obtain a corresponding gradient image;
the identification device is used for forming an exact tissue contour within the gradient image from a first edge identification and a second edge identification;
and the modeling device is used for forming a three-dimensional contour of the cartilage tissue according to the variation trend among the tissue contours.
The cartilage identification method and system in the medical image of the embodiments of the invention use automatic processing and classification built on the abrupt-change edges of a single kind of quantized information, so the accuracy of tissue edge identification is improved. Different contour generation methods are used to cross-check and avoid overfitting, so that cartilage tissue is automatically segmented, its contour accurately located, and its three-dimensional model automatically built in the grayscale image, effectively improving the efficiency of professional human identification resources.
Drawings
Fig. 1 is a flowchart illustrating a cartilage recognition method in a medical image according to an embodiment of the invention.
Fig. 2 is a schematic flow chart illustrating preprocessing in the method for identifying cartilage in medical images according to an embodiment of the present invention.
Fig. 3 is a schematic flow chart illustrating two steps of pixel edge identification in the cartilage identification method in medical images according to an embodiment of the present invention.
Fig. 4 is a schematic flow chart illustrating modeling in the cartilage recognition method in medical images according to an embodiment of the present invention.
Fig. 5 is a schematic diagram illustrating an architecture of a cartilage recognition system in a medical image according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer and more obvious, the present invention is further described below with reference to the accompanying drawings and the detailed description. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 shows a method for identifying cartilage in a medical image according to an embodiment of the present invention. In fig. 1, an embodiment of the present invention includes:
step 100: and preprocessing the gray level image to obtain a corresponding gradient image.
The preprocessing quantizes a single kind of information in the grayscale image; for example, this embodiment quantizes the pixel brightness information. The gradient image consists of the brightness gradient, including direction and magnitude, of each pixel in the grayscale image. Using the quantized variation of this single brightness information effectively eliminates interference from color tone.
Step 200: the exact tissue contour is generated within the gradient image from the first and second edge identifications formed separately.
The first edge identification and the second edge identification form a mutual-verification relationship, and this verification during the identification process ensures the identification precision of the tissue contour.
Step 300: and forming a three-dimensional contour of the cartilage tissue according to the variation trend among the tissue contours.
As those skilled in the art will understand, in existing three-dimensional modeling a three-dimensional contour of an object can be formed from a number of parallel sections and the object's contour within those sections, and the smoothness and precision of the three-dimensional contour can be improved by quantifying the variation trend of the associated object across the different sections.
The cartilage identification method in the medical image provided by the embodiment of the invention uses automatic processing and classification built on the abrupt-change edges of a single kind of quantized information, so the accuracy of tissue edge identification is improved. Different contour generation methods are used to cross-check and avoid overfitting, so that cartilage tissue is automatically segmented, its contour accurately located, and its three-dimensional model automatically built in the grayscale image, effectively improving the efficiency of professional human identification resources.
Fig. 2 shows preprocessing in the method for identifying cartilage in medical image according to an embodiment of the present invention. In fig. 2, an embodiment of the present invention includes:
step 110: and smoothing the gray level image to form a smooth image.
The smoothing suppresses abrupt changes in the single kind of pixel information in the grayscale image and prevents interference-induced peaks from being misidentified as extrema or inflection points. In one embodiment of the present invention, a Gaussian filter is used to smooth the image; the main process is as follows:
Gaussian filtering is applied to smooth the image. The output of the smoothing is obtained by convolving the grayscale image pixels with a two-dimensional Gaussian function:
I(x,y)=f(x,y)*g(x,y)
where f (x, y) is the input pixel, g (x, y) is a two-dimensional gaussian function, and I (x, y) is the smoothed pixel. The expression of g (x, y) is
g(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
where σ is the distribution parameter of the Gaussian function and controls the degree of smoothing. As σ increases, the accuracy of edge localization decreases and the signal-to-noise ratio increases; that is, the larger σ is, the smoother the image.
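A minimal sketch of this smoothing step, assuming SciPy's Gaussian filter is an acceptable stand-in for the convolution I(x, y) = f(x, y) * g(x, y); the random array is only a placeholder for an MRI slice.

```python
# Illustrative Gaussian smoothing of a grayscale slice (sketch, not patent code).
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth(gray, sigma=1.5):
    # Convolve the grayscale image with a 2-D Gaussian; larger sigma gives a
    # smoother image but less precise edge localization.
    return gaussian_filter(gray.astype(np.float64), sigma=sigma)

gray = np.random.rand(64, 64)        # placeholder for one grayscale MRI slice
smoothed = smooth(gray, sigma=2.0)
```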
Step 120: and acquiring the gradient of each pixel in the smoothed image to form a gradient image.
The quantization of the single type of information includes the magnitude and direction of the resulting vectors. Taking brightness as an example, obtaining the gradient of each pixel in the smoothed image includes:
the derivatives in the x and y directions of the smoothed image are:
I_x(x, y) = ∂I(x, y)/∂x,  I_y(x, y) = ∂I(x, y)/∂y
On the basis of the partial derivatives of the smoothed image I(x, y), the magnitude and direction of the pixel gradient are obtained:
M(x, y) = √(I_x(x, y)² + I_y(x, y)²)
θ(x, y) = arctan(I_y(x, y) / I_x(x, y))
where M (x, y) is the gradient magnitude and θ (x, y) is the angle between the M (x, y) vector and the x coordinate axis.
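Step 120 can be sketched the same way, with NumPy finite differences standing in for the partial derivatives:

```python
# Illustrative per-pixel gradient magnitude M(x, y) and direction theta(x, y).
import numpy as np

def gradient_image(smoothed):
    gy, gx = np.gradient(smoothed)      # axis 0 = rows (y), axis 1 = columns (x)
    magnitude = np.hypot(gx, gy)        # M(x, y) = sqrt(Ix^2 + Iy^2)
    direction = np.arctan2(gy, gx)      # theta(x, y): angle to the x axis
    return magnitude, direction
```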
Step 130: cartilage reference points are manually marked in the grayscale image.
The cartilage reference point is a manually marked position chosen anywhere within the cartilage region of the grayscale image. The cartilage reference point corresponds to the same approximate location in the grayscale image, the gradient image and the smoothed image.
In the cartilage identification method in the medical image of the embodiment of the invention, the smoothing and the extraction of a single type of information effectively suppress the interference and noise introduced by redundant kinds of information in the original image, and reduce the difficulty and load of data processing.
Fig. 3 shows two identification steps in the cartilage identification method in the medical image according to an embodiment of the present invention. In fig. 3, an embodiment of the present invention includes:
step 210: potential edge pixels are determined in the gradient image from a comparison of the magnitude of the pixel and the neighborhood of pixels in the gradient direction.
In this embodiment, the brightness information is used for comparison, and edge pixels with abrupt brightness change are extracted according to the brightness gradient to serve as potential edge pixels.
Step 220: non-maxima suppression of the potential edge pixels determines the base edge pixels.
A potential edge pixel is a pixel whose vector changes abruptly in the gradient image. Base edge pixels are obtained from the gradient image as edge information by non-maximum suppression, based on comparing the magnitude of a pixel with its neighborhood in the gradient direction; the main steps are:
testing a 3 × 3 neighborhood of each potential edge pixel in the gradient direction, and comparing the gradient magnitude of the central potential edge pixel with that of the adjacent pixels;
if the potential pixel in the center has the largest gradient magnitude, it is considered as the base edge pixel, otherwise it is excluded as the background pixel.
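For illustration, the two steps above can be sketched as follows; quantizing the gradient direction into four bins is an implementation choice not spelled out in the text.

```python
# Illustrative non-maximum suppression over 3 x 3 neighborhoods (sketch only).
import numpy as np

def non_max_suppression(magnitude, direction):
    h, w = magnitude.shape
    out = np.zeros_like(magnitude)
    angle = (np.rad2deg(direction) + 180) % 180        # fold angles into [0, 180)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = angle[y, x]
            if a < 22.5 or a >= 157.5:                 # ~horizontal gradient
                n1, n2 = magnitude[y, x - 1], magnitude[y, x + 1]
            elif a < 67.5:                             # ~45 degrees
                n1, n2 = magnitude[y - 1, x + 1], magnitude[y + 1, x - 1]
            elif a < 112.5:                            # ~vertical gradient
                n1, n2 = magnitude[y - 1, x], magnitude[y + 1, x]
            else:                                      # ~135 degrees
                n1, n2 = magnitude[y - 1, x - 1], magnitude[y + 1, x + 1]
            if magnitude[y, x] >= n1 and magnitude[y, x] >= n2:
                out[y, x] = magnitude[y, x]            # keep as a base edge pixel
    return out
```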
Step 230: and carrying out noise identification on the basic edge pixels to form a first edge pixel image, and finishing the first edge identification.
The basic edge pixels are detected by a double-threshold method to exclude noise pixels, and the connectivity of neighboring pixels is judged according to whether they extend from the determined pixels. The noise identification includes:
detecting a determined pixel, an interference pixel and an undetermined pixel by using a dual-threshold method;
hysteresis binarization is used to connect the determined pixels and the undetermined pixels.
In particular, a pixel whose gradient magnitude is greater than or equal to the high threshold T_h is a determined edge pixel, a pixel whose gradient magnitude is less than the low threshold T_l is excluded as an interference pixel, and an undetermined pixel in between is kept only when it is adjacent to an edge pixel; the final edge result is thereby determined. The calculation formula is as follows:
L'(x, y) = 1, if L(x, y) ≥ T_h, or T_l ≤ L(x, y) < T_h and s = 1;
L'(x, y) = 0, otherwise;
where L(x, y) is the pixel value in the non-maximum suppression map, L'(x, y) is the pixel value in the final edge detection result map, T_h is the high threshold, T_l is the low threshold, and s takes the value 0 or 1 to represent whether the pixel point is adjacent to an edge pixel: s = 0 when the pixel is not adjacent to an edge pixel, and s = 1 when it is adjacent.
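A minimal sketch of this double-threshold step; using connected-component labeling to realize the adjacency condition s is an implementation choice, not something taken from the patent.

```python
# Illustrative hysteresis thresholding of the non-maximum-suppression map.
import numpy as np
from scipy import ndimage

def hysteresis(nms, t_low, t_high):
    strong = nms >= t_high                       # determined pixels (>= T_h)
    weak = (nms >= t_low) & (nms < t_high)       # undetermined pixels
    # Keep every connected group of candidate pixels that contains a determined pixel.
    labels, _ = ndimage.label(strong | weak)
    keep = np.isin(labels, np.unique(labels[strong]))
    return keep & (strong | weak)                # first edge pixel image (boolean)
```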
Step 240: And determining each pixel category in the gradient image through a random forest classification model to form a second edge pixel image and complete the second edge identification.
The random forest classification model is a model trained on a brightness training set. In a general random forest classification model, the intermediate nodes of each decision tree act as classifiers under feature judgment conditions and judgment thresholds such as brightness, gray-scale features, information-carrying features, or negative and positive inter-pixel correlation features, and each leaf node holds a probability distribution p(c_j | v, leaf(tree_t)) over the categories, from which the category attribution of a single pixel v can be judged. The results of all T decision trees are integrated to give the final judgment for the single pixel v.
The probability distribution of the leaf node where the single pixel v is located in each decision tree is generally integrated by using an average voting method. Its mathematical expression can be written as follows:
p(c_j | v) = (1/T) · Σ_{t=1}^{T} p(c_j | v, leaf(tree_t))
the probability distributions of the T leaf nodes are obtained and synthesized by the single pixel through the T decision trees, and then the class with the highest probability is the final result of the single pixel v. And after all pixels in the image to be detected are classified by the random forest, the image segmentation task is completed.
Step 250: and determining edge pixels by overlapping the second edge pixel image and the first edge pixel image, and overlapping the edge pixels to the gray level image to determine the tissue outline.
The second edge pixel image and the first edge pixel image are overlaid to form a differential verification of the edge pixels. The edge pixels determined after the differential verification are overlaid onto the grayscale image to form a determined contour, and the cartilage tissue contour is obtained by combining it with the cartilage reference point. Alternatively, an XOR operation can be performed on the corresponding overlaid pixels to obtain weighted pixels that confirm and maintain edge continuity, and an OR operation can be performed to obtain weighted pixels for verification.
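One possible reading of the overlay step, sketched for illustration (the exact weighting the patent intends is not spelled out): AND keeps pixels both identifications agree on, XOR flags pixels only one identification found, and OR gives the widest candidate set for continuity checks.

```python
# Illustrative combination of the first and second edge pixel images.
import numpy as np

def overlay_edges(first_edge, second_edge):
    agreed = first_edge & second_edge        # differential verification: both agree
    disputed = first_edge ^ second_edge      # found by only one identification
    candidates = first_edge | second_edge    # union, for edge-continuity checks
    return agreed, disputed, candidates
```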
The cartilage identification method in the medical image narrows the extraction range of the edge pixels through differential edge identification and reduces the influence of interference factors on the edge pixels.
Fig. 4 shows the modeling in the cartilage recognition method in the medical image according to an embodiment of the present invention. In fig. 4, an embodiment of the present invention includes:
step 310: and establishing relative position characteristics between the tissue outlines in each gray-scale image.
The tissue contours in a grayscale image have a determined relative positional relationship, which includes the determined shape of each tissue and the closest position and closest distance between the tissue contours; this relationship is described with vectors to form the quantized relative position features.
Step 320: and forming a fitting coefficient of the cartilage tissue contour between the adjacent gray level images according to the change trend of the relative position features between the tissue contours of the adjacent gray level images.
The planar contours of the same tissue in adjacent grayscale images are similar, and the change of the same tissue between adjacent grayscale images is limited, so detailed vector parameters of the relative change of the relative position features between adjacent grayscale images can be obtained, thereby forming the fitting coefficient of the cartilage tissue contour between adjacent grayscale images.
Step 330: and combining the tissue contour in the gray level image and the fitting coefficient to form a three-dimensional contour of each cartilage tissue.
And forming a three-dimensional contour of the cartilage tissue by using the relative position characteristics in each gray level image and the fitting coefficients among the gray level images to finish the segmentation of the cartilage tissue.
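A minimal sketch of this modeling stage, under the assumptions that each slice contour is an (N, 2) point array and that the fitting coefficient is reduced to a centroid shift (a simplification of the vector parameters described above):

```python
# Illustrative stacking of per-slice tissue contours into a 3-D contour.
import numpy as np

def fitting_coefficient(contour_a, contour_b):
    # Crude relative-change parameter between adjacent slices: centroid shift.
    return contour_b.mean(axis=0) - contour_a.mean(axis=0)

def stack_contours(contours, slice_spacing=1.0):
    # Attach a z coordinate to every contour point to assemble a 3-D point set.
    points = []
    for k, contour in enumerate(contours):
        z = np.full((len(contour), 1), k * slice_spacing)
        points.append(np.hstack([contour, z]))
    return np.vstack(points)
```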
The cartilage identification method in the medical image separates the cartilage tissue into a two-dimensional contour and a three-dimensional object, realizes automated segmentation and tissue objectification, and greatly improves the utilization efficiency of professional human resources and the available observation dimensions.

The cartilage recognition system in a medical image according to an embodiment of the present invention includes:
the memory is used for storing program codes corresponding to the processing procedures of the cartilage identification method in the medical image;
and the processor is used for executing the program codes corresponding to the processing procedures of the cartilage identification method in the medical image.
The architecture of the cartilage recognition system in a medical image according to an embodiment of the present invention is shown in fig. 5. In fig. 5, the embodiment of the present invention includes:
a preprocessing device 1100, configured to preprocess the grayscale image to obtain a corresponding gradient image;
a recognition means 1200 for generating an exact tissue contour within the gradient image from the respectively formed first and second edge recognition;
the modeling device 1300 is used for forming the three-dimensional contour of the tissue according to the variation trend among the tissue contours.
As shown in fig. 5, in an embodiment of the present invention, the preprocessing unit 1100 includes:
a smoothing forming module 1110, configured to perform smoothing processing on the grayscale image to form a smoothed image;
a gradient forming module 1120, configured to obtain the gradient of each pixel in the smoothed image to form a gradient image;
a manual marking module 1130 for manually marking the cartilage reference points in the gray-scale image.
As shown in fig. 5, in an embodiment of the present invention, the recognition apparatus 1200 includes:
a potential pixel determination module 1210 for determining a potential edge pixel in the gradient image based on a comparison of magnitudes of the pixel and a neighborhood of pixels in the gradient direction;
a base pixel determination module 1220 for non-maxima suppression of potential edge pixels to determine base edge pixels;
a first edge determining module 1230, configured to perform noise identification on the basic edge pixels to form a first edge pixel image, so as to complete first edge identification;
the second edge determining module 1240 is configured to determine each pixel category in the gradient image through the random forest classification model to form a second edge pixel image, and complete second edge identification;
an edge overlay determining module 1250 configured to determine an edge pixel by overlaying the second edge pixel image and the first edge pixel image, and overlay the edge pixel to the gray scale image to determine the tissue contour.
As shown in fig. 5, in an embodiment of the present invention, the modeling apparatus 1300 includes:
a position feature forming module 1310 for establishing relative position features between tissue contours in each gray scale image;
a fitting coefficient forming module 1320, configured to form a fitting coefficient of the cartilage tissue contour between adjacent grayscale images according to a variation trend of the relative position features between tissue contours of adjacent grayscale images;
a constructing module 1330 for combining the tissue contour in the gray-scale image and the fitting coefficients to form a stereo contour of each cartilage tissue.
The processor may be a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), an MCU (Microcontroller Unit) system board, an SoC (System on a Chip) system board, or a PLC (Programmable Logic Controller) minimum system including I/O.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (4)

1. A method for cartilage recognition in medical images, comprising:
preprocessing the gray level image to obtain a corresponding gradient image;
generating an exact tissue contour within the gradient image from the respectively formed first and second edge recognition, comprising:
determining potential edge pixels in the gradient image from a comparison of magnitudes of pixels and pixel neighborhoods in a gradient direction, comprising: extracting edge pixels with abrupt brightness change according to the brightness gradient as potential edge pixels;
non-maximum suppression of the potential edge pixels to determine base edge pixels, including testing a 3 x 3 neighborhood of each potential edge pixel in the gradient direction, comparing the gradient magnitude of the central potential edge pixel with that of neighboring pixels, and if the central potential edge pixel has the largest gradient magnitude, then it is taken as the base edge pixel, otherwise it is excluded as the background pixel;
performing noise identification on the base edge pixels to form a first edge pixel image and complete the first edge identification, including detecting determined pixels, interference pixels and undetermined pixels by using a dual-threshold method and connecting the determined pixels and the undetermined pixels by hysteresis binarization, wherein a pixel whose gradient magnitude is greater than or equal to a high threshold T_h is a determined edge pixel, a pixel whose gradient magnitude is less than a low threshold T_l is excluded as an interference pixel, and the final edge result is determined by the following calculation formula:
L'(x, y) = 1, if L(x, y) ≥ T_h, or T_l ≤ L(x, y) < T_h and s = 1; L'(x, y) = 0, otherwise;
wherein L(x, y) is the pixel value in the non-maximum suppression map, L'(x, y) is the pixel value in the final edge detection result map, T_h is the high threshold, T_l is the low threshold, and s takes the value 0 or 1 to represent whether the pixel point is adjacent to an edge pixel, being 0 when not adjacent and 1 when adjacent;
determining each pixel category in the gradient image through a random forest classification model to form a second edge pixel image and finish second edge identification;
determining edge pixels by overlapping the second edge pixel image and the first edge pixel image, and overlapping the edge pixels to the gray level image to determine a tissue outline;
forming a three-dimensional contour of the cartilage tissue according to the variation trend among the tissue contours, comprising the following steps:
establishing relative position characteristics between tissue outlines in each gray-scale image, comprising: establishing a relative position relationship among all tissue outlines in the gray level image, wherein the relative position relationship comprises the determined shape of each tissue, the closest position and the closest distance among all the tissue outlines, and describing the relative position relationship by using a vector to form a quantized relative position characteristic;
forming a fitting coefficient of the cartilage tissue contour between the adjacent gray scale images according to the variation trend of the relative position features between the tissue contours of the adjacent gray scale images, wherein the fitting coefficient comprises the following steps: obtaining detailed vector parameters of relative position features between adjacent gray level images which are relatively changed, and further forming a fitting coefficient of cartilaginous tissue outlines between the adjacent gray level images;
combining the tissue contour in the gray-scale image and the fitting coefficient to form a three-dimensional contour of each cartilage tissue, comprising: and forming the three-dimensional contour of the cartilage tissue by using the relative position characteristics in each gray level image and the fitting coefficients between the gray level images.
2. The method for cartilage recognition in medical images according to claim 1, wherein the preprocessing the gray-scale image to obtain the corresponding gradient image comprises:
carrying out smoothing treatment on the gray level image to form a smooth image;
acquiring the gradient of each pixel in the smooth image to form a gradient image;
cartilage reference points are manually marked in the grayscale image.
3. A system for cartilage recognition in medical images, comprising:
a memory for storing program codes corresponding to the processing procedures of the cartilage recognition method in the medical image according to any one of claims 1 to 2;
a processor for executing the program code.
4. A system for cartilage recognition in medical images, comprising:
the preprocessing device is used for preprocessing the gray level image to obtain a corresponding gradient image;
identification means for forming an exact tissue contour within the gradient image from the first and second edge identifications; the identification device comprises:
a potential pixel determination module for determining potential edge pixels in the gradient image from a comparison of magnitudes of pixels and pixel neighborhoods in a gradient direction, comprising: extracting edge pixels with abrupt brightness change according to the brightness gradient as potential edge pixels;
a base pixel determination module for non-maximum rejection of the potential edge pixels to determine base edge pixels, including testing a 3 x 3 neighborhood of each potential edge pixel in the gradient direction, comparing the gradient amplitudes of the central potential edge pixel with neighboring pixels, and if the central potential edge pixel has the largest gradient amplitude, then treating it as a base edge pixel, otherwise, excluding it as a background pixel;
a first edge determining module for performing noise identification on the base edge pixels to form a first edge pixel image and complete the first edge identification, including detecting determined pixels, interference pixels and undetermined pixels by using a dual-threshold method and connecting the determined pixels and the undetermined pixels by hysteresis binarization, wherein a pixel whose gradient magnitude is greater than or equal to a high threshold T_h is a determined edge pixel, a pixel whose gradient magnitude is less than a low threshold T_l is excluded as an interference pixel, and the final edge result is determined by the following calculation formula:
L'(x, y) = 1, if L(x, y) ≥ T_h, or T_l ≤ L(x, y) < T_h and s = 1; L'(x, y) = 0, otherwise;
wherein L(x, y) is the pixel value in the non-maximum suppression map, L'(x, y) is the pixel value in the final edge detection result map, T_h is the high threshold, T_l is the low threshold, and s takes the value 0 or 1 to represent whether the pixel point is adjacent to an edge pixel, being 0 when not adjacent and 1 when adjacent;
the second edge determining module is used for determining each pixel category in the gradient image through the random forest classification model to form a second edge pixel image and finish second edge identification;
the edge superposition determining module is used for determining edge pixels through superposition of the second edge pixel image and the first edge pixel image and superposing the edge pixels to the gray level image to determine a tissue outline;
the modeling device is used for forming a three-dimensional contour of the cartilage tissue according to the variation trend among the tissue contours; the modeling apparatus includes:
a position feature forming module for establishing relative position features between tissue contours in each of the grayscale images, comprising: establishing a relative position relationship among all tissue outlines in the gray level image, wherein the relative position relationship comprises the determined shape of each tissue, the closest position and the closest distance among all the tissue outlines, and describing the relative position relationship by using a vector to form a quantized relative position characteristic;
the fitting coefficient forming module is used for forming the fitting coefficient of the cartilage tissue contour between the adjacent gray scale images according to the change trend of the relative position features between the tissue contours of the adjacent gray scale images, and comprises: obtaining detailed vector parameters of relative position features between adjacent gray level images which are relatively changed, and further forming a fitting coefficient of cartilaginous tissue outlines between the adjacent gray level images;
a construction module for combining the tissue contour in the gray-scale image and the fitting coefficient to form a stereo contour of each cartilage tissue, comprising: and forming the three-dimensional contour of the cartilage tissue by using the relative position characteristics in each gray level image and the fitting coefficients between the gray level images.
CN201910708912.7A 2019-08-01 2019-08-01 Cartilage identification method and system in medical image Active CN110443790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910708912.7A CN110443790B (en) 2019-08-01 2019-08-01 Cartilage identification method and system in medical image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910708912.7A CN110443790B (en) 2019-08-01 2019-08-01 Cartilage identification method and system in medical image

Publications (2)

Publication Number Publication Date
CN110443790A CN110443790A (en) 2019-11-12
CN110443790B true CN110443790B (en) 2021-05-11

Family

ID=68432815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910708912.7A Active CN110443790B (en) 2019-08-01 2019-08-01 Cartilage identification method and system in medical image

Country Status (1)

Country Link
CN (1) CN110443790B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110827285B (en) * 2019-11-15 2022-07-26 上海联影智能医疗科技有限公司 Cartilage thickness detection method and device, computer equipment and readable storage medium
CN111354000A (en) * 2020-04-22 2020-06-30 南京汇百图科技有限公司 Automatic segmentation method for articular cartilage tissue in three-dimensional medical image

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809740A (en) * 2015-05-26 2015-07-29 重庆大学 Automatic knee cartilage image partitioning method based on SVM (support vector machine) and elastic region growth

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103186901A (en) * 2013-03-29 2013-07-03 中国人民解放军第三军医大学 Full-automatic image segmentation method
WO2014165972A1 (en) * 2013-04-09 2014-10-16 Laboratoires Bodycad Inc. Concurrent active contour segmentation

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809740A (en) * 2015-05-26 2015-07-29 重庆大学 Automatic knee cartilage image partitioning method based on SVM (support vector machine) and elastic region growth

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on GPU Parallel Acceleration of Weld Seam Detection Using the Canny Operator; Bai Dongyang et al.; Journal of Changchun University of Science and Technology (Natural Science Edition); 2018-10-31; Vol. 41, No. 5; pp. 93-96 *
Research on a New Edge Detection Method Based on Data Fusion; Sun Lihui et al.; Aeronautical Computing Technique; 2008-05-31; Vol. 38, No. 3; pp. 22-24 *

Also Published As

Publication number Publication date
CN110443790A (en) 2019-11-12

Similar Documents

Publication Publication Date Title
CN110348319B (en) Face anti-counterfeiting method based on face depth information and edge image fusion
CN105956582B (en) A kind of face identification system based on three-dimensional data
CN106599854B (en) Automatic facial expression recognition method based on multi-feature fusion
CN109255344B (en) Machine vision-based digital display type instrument positioning and reading identification method
CN104077579B (en) Facial expression recognition method based on expert system
CN103886589B (en) Object-oriented automated high-precision edge extracting method
CN110853009B (en) Retina pathology image analysis system based on machine learning
CN108022233A (en) A kind of edge of work extracting method based on modified Canny operators
US20080285856A1 (en) Method for Automatic Detection and Classification of Objects and Patterns in Low Resolution Environments
CN111507426B (en) Non-reference image quality grading evaluation method and device based on visual fusion characteristics
CN105335725A (en) Gait identification identity authentication method based on feature fusion
CN116137036B (en) Gene detection data intelligent processing system based on machine learning
CN110443790B (en) Cartilage identification method and system in medical image
CN112419278B (en) Solid wood floor classification method based on deep learning
CN116740728B (en) Dynamic acquisition method and system for wafer code reader
CN115829942A (en) Electronic circuit defect detection method based on non-negative constraint sparse self-encoder
CN117542067B (en) Region labeling form recognition method based on visual recognition
Wang et al. An edge detection method by combining fuzzy logic and neural network
CN108256578B (en) Gray level image identification method, device, equipment and readable storage medium
CN107220612B (en) Fuzzy face discrimination method taking high-frequency analysis of local neighborhood of key points as core
CN110458853B (en) Ankle ligament separation method and system in medical image
CN110929681B (en) Wrinkle detection method
CN113177499A (en) Tongue crack shape identification method and system based on computer vision
CN113239790A (en) Tongue crack feature identification and length measurement method and system
CN107480672A (en) Image-recognizing method and system and autofocus control method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant